WO1993014605A1 - Rate-based adaptive congestion control system and method for integrated packet networks - Google Patents

Rate-based adaptive congestion control system and method for integrated packet networks

Info

Publication number
WO1993014605A1
WO1993014605A1 (also published as WO 93/14605 A1), from international application PCT/US1992/010638 (US9210638W)
Authority
WO
WIPO (PCT)
Prior art keywords
packet
congestion
fast
queue
frame
Prior art date
Application number
PCT/US1992/010638
Other languages
French (fr)
Inventor
Michael G. Hluchyj
Nanying Yin
Daniel B. Grossman
Original Assignee
Codex Corporation
Priority date
Filing date
Publication date
Application filed by Codex Corporation filed Critical Codex Corporation
Priority to AU32741/93A (AU650195B2)
Priority to JP5512442A (JPH06507290A)
Publication of WO1993014605A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/54: Store-and-forward switching systems
    • H04L12/56: Packet switching systems
    • H04L12/5601: Transfer mode dependent, e.g. ATM
    • H04L12/5602: Bandwidth control in ATM Networks, e.g. leaky bucket
    • H04L2012/5629: Admission control
    • H04L2012/563: Signalling, e.g. protocols, reference model
    • H04L2012/5631: Resource management and allocation
    • H04L2012/5632: Bandwidth allocation
    • H04L2012/5635: Backpressure, e.g. for ABR
    • H04L2012/5636: Monitoring or policing, e.g. compliance with allocated rate, corrective actions
    • H04L2012/5637: Leaky Buckets
    • H04L2012/5638: Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646: Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5647: Cell loss
    • H04L2012/5648: Packet discarding, e.g. EPD, PTD

Definitions

  • This invention relates generally to data processing and data communications, and more particularly relates to controlling congestion in an integrated packet network.
  • a fast packet typically contains a header that includes connection identification and other overhead information, and is fixed or restricted in length (e.g., to contain a "payload" of 48 octets of user information).
  • a fast packet adaption protocol is utilized, operating at a higher level than a fast packet relaying protocol, where the fast packet adaption protocol includes a segmentation and reassembly function. It is useful to note that transmission of integrated packets requires congestion control because of its bursty nature, while transmission of constant bit rate (CBR) traffic does not require such control.
  • CBR: constant bit rate
  • Packet switched networks are subject to congestion when traffic offered to the network exceeds a capacity of the network. Such congestion is random in nature. End-user equipment tends to offer traffic in "bursts," interspersed with periods of inactivity. Networks are typically designed to accommodate some expected aggregate offered load. However, cost to an operator of the transmission network facilities and related equipment increases with the capacity of the network. Thus, networks are often designed to accommodate less than an absolute maximum possible offered load, and to rely on statistical effects to avoid blocking of transmission. Such a design may lead to congestion.
  • values of quality-of-service parameters are negotiated among originating end-user equipment, the network(s), and terminating end-user equipment. Categories of negotiated parameters include throughput and transit delay. Throughput parameters typically describe the users' expectations of traffic to be offered during a given time period, an estimation of a greatest amount of traffic the users expect to offer during such a period, and a metric for describing "burstiness" of traffic. This throughput information is used by the network(s) for purposes such as resource allocation and rate enforcement.
  • There is a need for a device and method for providing rate-based congestion control for integrated packet networks that performs rate enforcement such that end-user equipment may exceed an expected throughput agreed during negotiation, utilizing the network(s) on a space-available basis when capacity is available in the network(s).
  • a system and method are included for providing rate- based congestion control in an integrated fast packet network.
  • Each packet is capable of conveying a plurality of levels of congestion indication.
  • the system includes units for and the method includes steps for providing the functions of, a source edge node unit, operably coupled to receive the fast packet traffic for transmission over the network, said unit having a predetermined throughput rate, for rate-based monitoring and rate enforcement of the traffic utilizing a monitor/enforcer unit, a transit node unit, operably coupled to the source edge node unit, having a plurality of intermediate nodes for providing at said nodes fast packet transmission paths, and a destination edge node unit, operably coupled to the transit node unit, for providing at least a correlated congestion level and for outputting traffic at a realized throughput rate, such that the realized throughput rate of the transmitted fast packets may exceed the negotiated throughput rate where the fast packets utilize unallocated or unused network capacity.
  • FIG. 1 numeral 100, illustrates relevant fields of an information packet (102), typically a fast packet, and information gathering of congestion information along a fast packet source-to-destination connection path of a subnetwork (SUBPATH) utilizing a fast packet with a multiple bit congestion level field in accordance with the present invention.
  • FIG. 2, numeral 200 illustrates typical queue group congestion level transitions in accordance with the present invention.
  • FIG. 3 illustrates a protocol profile of a network, modelled according to the Open System Interconnection (OSI) reference model, utilizing the method of the present invention.
  • OSI: Open System Interconnection
  • FIG. 4, numeral 400 illustrates a leaky bucket monitor/enforcer in accordance with the present invention where said monitor/enforcer is visualized as a fictitious queue model system.
  • FIG. 5, numeral 500 sets forth a flow diagram illustrating a leaky bucket operation and discard priority marking in accordance with the present invention.
  • FIG. 6, numeral 600 illustrates a first embodiment of a system that utilizes the present invention for providing rate- based congestion control in an integrated fast packet network, each fast packet capable of conveying a plurality of levels of congestion.
  • FIG. 7, numeral 700 sets forth a flow chart illustrating steps for sampling a transit queue group for congestion levels in accordance with the method of the present invention.
  • FIG. 8 numeral 800, sets forth a flow chart showing steps for fast packet discarding and fast packet congestion level marking at a data transmit queue in accordance with the present invention.
  • FIG. 9, numeral 900 sets forth a flow chart showing steps for fast packet discarding and fast packet congestion level marking at a voice transmit queue in accordance with the present invention.
  • FIG. 10 sets forth a flow chart illustrating steps for updating congestion and tagging status, forward explicit congestion notification (FECN) marking and creation of backward control fast packet at a destination edge node in accordance with the method of the present invention.
  • FIG. 11, numeral 1100, is a flow chart illustrating steps for updating a tag status when receiving a first fast packet in a frame at a destination edge node in accordance with the method of the present invention.
  • FECN: forward explicit congestion notification
  • FIG. 12, numeral 1200, is a flow chart illustrating steps for creating and storing a backward congestion code at a destination edge node in accordance with the method of the present invention.
  • FIG. 13, numeral 1300, is a flow chart illustrating steps for receiving a control fast packet in a backward direction at a source edge node in accordance with the method of the present invention.
  • FIG. 14, numeral 1400 is a flow chart illustrating steps for receiving a data fast packet in a backward direction and setting a backward explicit congestion notification (BECN) at a source edge node in accordance with the method of the present invention.
  • BECN: backward explicit congestion notification
  • FIG. 15, numeral 1500 shows the steps of the method of the present invention for providing rate-based congestion control in an integrated fast packet network, each fast packet capable of conveying a plurality of levels of congestion and having a predetermined throughput rate.
  • the present invention provides several advantages over known approaches. First, at least four levels of network congestion are conveyed in a fast packet header in a forward direction (i.e., toward a destination user's equipment), providing corresponding levels of action by the network. Since other schemes have two levels or none, the present invention, by utilizing the four levels in a unique fashion, provides for a greater scope of action by the network.
  • levels of congestion are tracked and filtered independently by intermediate switching nodes. An indication of a highest congestion level encountered between an entry and egress point is conveyed in each fast packet.
  • This approach provides more efficient network operation for paths that cross multiple switching nodes. In addition, this avoids problems (such as are found utilizing density marking schemes) of distinguishing actual congestion from short-term, inconsequential changes in the states of individual switching nodes, when said changes are summed over all the nodes traversed by a path.
  • FIG. 1 numeral 100, illustrates relevant fields of an information packet (102), typically a fast packet, and information gathering of congestion information along a packet source-to-destination connection path of a subnetwork utilizing a fast packet with a multiple bit congestion level field in accordance with the present invention.
  • the subnetwork typically includes INTERMEDIATE SWITCHING NODES (110), coupled, respectively, to a SOURCE EDGE NODE (108) and to a DESTINATION EDGE NODE (112), the subnetwork further being coupled to at least two end systems, at least one being the SOURCE END SYSTEM (109), and at least one being a DESTINATION END SYSTEM (111), to provide operable transmission of fast packets along the path between the SOURCE EDGE NODE (108) and the DESTINATION EDGE NODE (112), described more fully below (FIG. 6).
  • the data packet includes at least a field CL (104) that represents a multiple bit field, typically two bits, used to store and indicate a congestion level of a most congested internodal link along the path and a data field (106).
  • the congestion level of an internodal link is determined locally in each intermediate node feeding the link.
  • the two bit field of CL takes on values corresponding to normal, mild, moderate, and severe levels of congestion.
  • fast packets are queued into at least voice and data transit queues, and the congestion level is determined by comparing an average depth of transit queues within a queue group to a set of predetermined thresholds. Voice and data are in separate queue groups.
  • the threshold values depend on both the queue group and a specific queue within the group (typically high, medium, and low priority queues for the data group).
  • the congestion level is tracked separately for voice and data queue groups, with the congestion level of the data queue group set to that of the most congested priority queue.
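As a rough illustration of the grading rule just described, the sketch below (Python, not taken from the patent) grades one queue by its filtered average depth and takes the data group's level from its most congested priority queue. All threshold values, and the mapping of high/medium/low queue names to numbers, are placeholder assumptions.

```python
# Illustrative sketch of grading a queue group's congestion level from
# filtered queue depths. Threshold values are assumptions.
from enum import IntEnum

class Level(IntEnum):
    NORMAL = 0
    MILD = 1
    MODERATE = 2
    SEVERE = 3

# Per-queue (mild, moderate, severe) thresholds on average depth, in packets.
THRESHOLDS = {
    "high":   (10, 20, 40),
    "medium": (20, 40, 80),
    "low":    (40, 80, 160),
}

def queue_level(avg_depth: float, mild: int, moderate: int, severe: int) -> Level:
    """Compare one queue's average (low-pass filtered) depth to its thresholds."""
    if avg_depth >= severe:
        return Level.SEVERE
    if avg_depth >= moderate:
        return Level.MODERATE
    if avg_depth >= mild:
        return Level.MILD
    return Level.NORMAL

def data_group_level(avg_depths: dict) -> Level:
    """The data queue group takes the level of its most congested priority queue."""
    return max(queue_level(avg_depths[name], *THRESHOLDS[name])
               for name in THRESHOLDS)
```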
  • the DESTINATION EDGE NODE typically (e.g., periodically or after changes in the state of the path) copies the two bit field CL (114) into a second packet (120) containing no user data and only one of three field codes (normal, moderate, and severe) and utilizes a closed-loop feedback scheme to provide backward congestion rate adjustment information.
  • BCCL: Backward Correlated Congestion Level
  • FIG. 2 illustrates typical queue group congestion level transitions in accordance with the present invention.
  • hysteresis is introduced by only allowing queue group congestion level transitions to a normal state when congestion subsides. This hysteresis helps to return the queue group to a normal state more rapidly and provides a more stable operation of the control process.
  • the congestion state of the queue group is 'normal' (202).
  • the congestion state of the queue group becomes 'mild' (204).
  • When, in the mild state, the average length of any queue in the queue group exceeds a predetermined level for the queue, the congestion state of the queue group becomes 'moderate' (206).
  • the congestion state of the queue group becomes 'severe' (208).
  • the congestion state of the queue group becomes 'normal'. Note that the congestion state of the queue group cannot transition from 'severe' to 'moderate' or 'mild', and similarly cannot transition from 'moderate' to 'mild'.
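The transition rule of FIG. 2 fits in a few lines. The following is a hedged rendering of that state machine, with invented class and method names: the level only ratchets upward while any queue exceeds its threshold, and the only downward transition is directly back to normal.

```python
# Hedged sketch of the FIG. 2 hysteresis rule: upward transitions are
# normal -> mild -> moderate -> severe; the only downward transition is
# straight to normal, once every queue falls below its threshold.
class QueueGroupCongestion:
    ORDER = ("normal", "mild", "moderate", "severe")

    def __init__(self):
        self.state = "normal"

    def update(self, any_queue_over: bool, all_queues_under: bool) -> str:
        if all_queues_under:
            self.state = "normal"    # no severe->moderate or moderate->mild moves
        elif any_queue_over and self.state != "severe":
            self.state = self.ORDER[self.ORDER.index(self.state) + 1]
        return self.state
```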
  • FIG. 3 illustrates a protocol profile of a network, modelled according to the principles of the Open System Interconnection (OSI) Reference Model, utilizing the method of the present invention.
  • ISO 7498: International Organization for Standardization standard
  • EDGE NODES OF THE END SYSTEMS typically comprise at least a first layer being a physical layer (PH) (15, 40) for input (15, 40) and for output (21, 34), a second layer being a data link layer (DL) (16, 39) operably coupled to an end system (302, 312) and being further subdivided into corresponding data link control (DLC) (18, 37), fast packet adaption (FPA) (19, 36), and fast packet relay (FPR) (20, 35) sub-layers operably coupled to at least a first intermediate node (306, 308) via the physical layer (PH) (21, 34), and a third layer (NETWORK) (17, 38).
  • DLC: data link control
  • FPA: fast packet adaption
  • FPR: fast packet relay
  • INTERMEDIATE NODES (306, ..., 308) typically comprise at least a first layer being a physical layer (PH) for input (22, 33) and for output (27, 28), a second layer having data link control (DLC) (25, 31), fast packet adaption (FPA) (24, 30), and fast packet relay (FPR) (23, 29) layers operably coupled to at least a first intermediate node (306, 308) via the physical layer (PH) (27, 28), and a third layer (NETWORK) (26, 32).
  • PH: physical layer
  • DLC: data link control
  • FPA: fast packet adaption
  • FPR: fast packet relay
  • congestion control is rate-based, making use of an unbuffered leaky bucket monitor/enforcer.
  • Leaky buckets are known in the art. The role of the leaky bucket is to monitor a rate of traffic on each fast packet connection at a source edge of the network. By their nature these connections are bursty. Thus, the leaky bucket measures an average rate. A time interval over which the leaky bucket averages rate is selectable. Said time interval determines a degree of burstiness allowed to the monitored connection. Typically a short averaging interval is used for connections with CBR-like demand characteristics, and a long averaging interval is used for more bursty connections (such as LAN interconnections).
  • a leaky bucket monitor/enforcer may be visualized (but typically not implemented) as a fictitious queue model system, illustrated in FIG. 4, numeral 400.
  • Frames, comprised of at least one fast packet, are transmitted to two switches (402, 408), and are selectively directed (in parallel) from the first switch (402) to a summer (406) and from the second switch (408) to a fictitious queue (410) having a queue length Q where the arriving frame finds that Q is less than or equal to a preselected maximum allocation (bucket size) B.
  • the fictitious queue (410) is served at a rate R. Where the frame finds that Q is less than or equal to B, the frame is allowed into the fast packet subnetwork (leaky bucket output). Where the arriving frame finds that Q is greater than B, the first switch directs the frame to a violation handling unit (404), and the second switch opens to block the frame from entering the fictitious queue (410).
  • When a frame is received in excess of the negotiated rate R and bucket size B, the violation handling unit (404) provides for using a field code F for marking a first fast packet in the frame for use by the FPA layer (to indicate to the destination edge node that the frame is in violation of the negotiated rate R and bucket size B), and the frame is allowed into the fast packet network. Also, the discard priority for the first fast packet of the frame is set to Last Discarded, as are all fast packets of non-violating frames. In addition, the discard priority of the second and subsequent fast packets of the frame is lowered.
  • the violation handling unit (404) treats the frame as a violating frame, discards it, and creates an empty fast packet with Last Discard priority in which field code F is marked (for indicating the violation to the destination edge node for that connection).
  • a frame arriving at the leaky bucket with a frame relay discard eligibility (DE) bit set to 1 (due to a prior leaky bucket determination or set by the end system) is treated as a violating frame, as set forth above.
  • the DE bit of any frame marked by the leaky bucket at the source edge node and successfully delivered to the destination edge node is set at the destination edge node before the frame exits the subnetwork.
  • the monitor/enforcer at the source edge node determines the state of congestion along the forward direction of the connection by a feedback mechanism (described below).
  • all but the first fast packet of the violating frame are discarded at the source edge node to remove the burden of discarding fast packets from the intermediate nodes, since an intermediate node overburdened with discarding fast packets from several violating frames could result in poor performance for fast packets from well-behaved connections.
  • the discard priority for all but the first fast packet (which carries the marked field code F) of the violating frame is lowered.
  • the fast packet discard priority is used to determine which fast packets are discarded first. Hence, fast packets from violating frames are discarded before any of those from fast packet connections that are within the predetermined negotiated rate R.
  • An excess rate parameter, R2, is used to switch the leaky bucket to the discard mode in an open loop fashion.
  • R2 represents an amount of excess traffic that can be discarded from an intermediate link node queue for a fast packet connection without degrading service provided to other fast packet connections sharing the link.
  • Leaky bucket operation and discard priority marking in accordance with the present invention is set forth in a flow diagram in FIG. 5, numeral 500.
  • The terminology "where affirmative", used below, is defined to mean that a determination recited immediately before "where affirmative" has been executed, and the determination has been found to be positive.
  • step set (A): determining whether the fast packet is a first packet in a frame (504) [see also step set (B)]; where affirmative, updating the leaky bucket queue length Q and setting a first clock (506); determining whether the discard eligibility bit (DE) is set (508) [see step set (G)]; where DE is unset, determining whether Q > B (510) [see step set (F)]; where affirmative, updating leaky bucket 2 with queue length Q2 and setting a second clock (512); determining whether Q2 is greater than a second preselected maximum allocation (bucket size) B2 (514) [see step set (H)]; where Q2 ≤ B2, unsetting an excess indication (516); determining a number of bits (K) in the fast packet and updating Q2 such that Q2 ← Q2 + K (517); setting a frame mark indication (518); determining whether a severe congestion indication (greater than or equal to a predetermined congestion level) is set (520) [see step set (I)]; where the severe congestion indication is unset, tagging (marking) an FPA frame state (522); setting a discard priority to Last Discarded (524); and transmitting the fast packet (526);
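Read together with the R2 discussion above, step set (A) amounts to a dual leaky bucket check on the first fast packet of each frame. The sketch below is one possible rendering under stated assumptions, not the patent's implementation: the fictitious queues are drained lazily at rates R and R2 on each update, and the violating branches are condensed into a single classification result.

```python
import time

class DualLeakyBucket:
    """Sketch only: R/B police the negotiated rate, R2/B2 the excess rate."""

    def __init__(self, R: float, B: float, R2: float, B2: float):
        self.R, self.B, self.R2, self.B2 = R, B, R2, B2   # bits/s and bits
        self.Q = 0.0    # fictitious queue depth for bucket 1 (bits)
        self.Q2 = 0.0   # fictitious queue depth for bucket 2 (bits)
        self.t = self.t2 = time.monotonic()

    def _drain(self) -> None:
        """Serve both fictitious queues at their rates since the last update."""
        now = time.monotonic()
        self.Q = max(0.0, self.Q - self.R * (now - self.t))
        self.Q2 = max(0.0, self.Q2 - self.R2 * (now - self.t2))
        self.t = self.t2 = now

    def first_packet(self, bits: int, de_set: bool,
                     severe_congestion: bool) -> tuple:
        """Classify the first fast packet of a frame.
        Returns (action, frame_marked): action is 'send' or 'discard';
        frame_marked means field code F is carried to the destination edge."""
        self._drain()
        if not de_set and self.Q <= self.B:
            self.Q += bits            # conforming: enters the fictitious queue
            return "send", False      # discard priority stays Last Discarded
        # Violating frame (DE set, or Q > B): run it through the excess bucket.
        if self.Q2 > self.B2:
            return "discard", True    # beyond R + R2: open-loop discard mode
        self.Q2 += bits               # Q2 <- Q2 + K
        if severe_congestion:
            return "discard", True    # only an empty, tagged packet is sent
        return "send", True           # admitted with field code F marked
```

For example, a bucket built as DualLeakyBucket(R=64_000, B=16_000, R2=16_000, B2=8_000) (illustrative numbers only) would admit short bursts above 64 kbit/s while marking them, and begin discarding once the sustained excess passed R2.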
  • data fast packet connections are assigned to a transit internodal (head-of-line priority) queue based on an expected burst size of the source.
  • interactive terminal traffic is typically assigned a high priority, and file transfers a low priority.
  • FIG. 6, numeral 600 illustrates a first embodiment of a system that utilizes the present invention for providing rate- based congestion control in an integrated fast packet network, each packet having at least a two bit congestion level field.
  • the system comprises at least a source edge node unit (602), operably coupled to receive the fast packet traffic for transmission over the network, said unit having a negotiated throughput rate, for rate-based monitoring and rate enforcement of the traffic utilizing a monitor/enforcer unit that provides a fast packet discard priority, a transit node unit (604), operably coupled to the source edge node unit, having a plurality of intermediate nodes for providing at said nodes fast packet transmission paths, and a destination edge node unit (606), operably coupled to the transit node unit (604), for providing at least a correlated congestion level and for outputting reassembled frames of transmitted fast packets at a realized throughput rate, such that the realized throughput rate of the transmitted fast packets may exceed the predetermined throughput rate where the fast packets utilize unused or unallocated network capacity.
  • an end system typically both transmits and receives frames related to the same end-to-end communication
  • an end system is both a source end system in one direction of communication and a destination end system in the other
  • an edge node is both a source edge node in one direction of communication and a destination edge node in the other.
  • the source edge node unit (602) includes at least a monitor/enforcer unit (608), a BCCL state detection unit (612) and a frame relay BECN marking unit (610).
  • the monitor/enforcer unit at least performs leaky bucket operation and discard priority marking.
  • the discard priority field of a fast packet has two possible values, the one being 'Last Discarded' and the other being 'First Discarded'.
  • the discard priority of the first fast packet comprising the frame is set to Last Discarded and that of subsequent fast packets is set to First Discarded.
  • the BCCL state detection unit (612) determines whether congestion along the path is 'severe'. If the BCCL state detection unit (612) indicates that congestion along the path is 'severe', the frame is discarded and a control fast packet is sent, or, if the previously negotiated excess rate R2 and excess bucket size B2 are exceeded, the frame is discarded and no control fast packet is sent.
  • the BCCL state detection unit (612) stores the BCCL from said control fast packet.
  • the BECN bit of said frame is set if the BCCL is equal to or greater than a predetermined congestion level.
  • the transit node unit includes at least a congestion-reducing unit (614) for determining a transit node queue group (TNQG) congestion level for fast packets and for discarding fast packets based on said TNQG congestion level and on said discard priority.
  • the transit node unit generally further includes a low pass filter, operably coupled to the congestion-reducing unit (614), for providing a measurement that allows congestion determination of a particular queue. Typically, said measurement is obtained by averaging the sampled queue depth that is input from a queue depth sampler (618).
  • the destination edge node unit (606) generally comprises at least: a connection congestion level (CL) state determiner (620), operably coupled to the transit node means, for determining a connection congestion level (CL) state; a fast packet adaption (FPA) frame tag state determiner (622), operably coupled to the source edge node means, for determining a FPA frame tag state; a correlated CL determiner (626), operably coupled to the connection CL state determiner (620) and to the FPA frame tag state determiner (622), for utilizing the CL state and the FPA frame tag state to determine a correlated congestion level that provides a backward correlated congestion level (BCCL) state to a BCCL signal unit (624) that is operably coupled to provide a BCCL state indication to the source edge node unit (602), as described more fully herein, and that further provides a forward correlated congestion level (FCCL) to a FCCL state determiner (628) that is operably coupled to provide a frame relay forward explicit congestion notification (FECN) to a frame relay FECN marking unit.
  • the frame relay connection is allowed to exceed its negotiated (predetermined) throughput rate R without suffering any negative consequences if there is unused capacity along its assigned route.
  • the negotiated throughput rate, as monitored by the leaky bucket, determines which connections are in violation, and sets fast packet discard priorities that are used by transit nodes to distinguish violating and non-violating frame traffic in the subnetwork.
  • the closed-loop feedback system of the first embodiment provides congestion notification across a frame relay interface.
  • the congestion state of the forward connection is maintained by examining a congestion level (CL) field in an arriving fast packet for that connection.
  • the congestion state is correlated with the frame tag state that is determined by checking the first fast packet of each frame for a field code F.
  • the frame tag state is held for a predetermined number of consecutive frames (e.g., 10) after a frame is received with the field code F.
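A minimal sketch of this tag-hold behaviour follows, assuming a simple per-frame counter; the constant of 10 frames comes from the example in the text, and everything else is invented for illustration.

```python
class FrameTagState:
    """Sketch of the destination-edge tag hold: once a frame arrives with
    field code F, the tag state is held for HOLD_FRAMES consecutive frames
    (the text gives 10 as an example)."""
    HOLD_FRAMES = 10

    def __init__(self):
        self.remaining = 0

    def on_first_packet(self, has_field_code_f: bool) -> bool:
        """Call once per received frame; returns the current tag state."""
        if has_field_code_f:
            self.remaining = self.HOLD_FRAMES
        elif self.remaining > 0:
            self.remaining -= 1
        return self.remaining > 0
```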
  • correlated congestion levels for forward and backward directions may be set as described below.
  • the frame relay forward explicit congestion notification (FECN) bit is set on frames crossing the frame relay interface at the destination edge whenever the forward correlated congestion level (FCCL) is in a mild, moderate or severe state.
  • FCCL: forward correlated congestion level
  • the correlated congestion is returned to the source via a fast packet containing no user data and only one of the three field codes (normal, moderate, severe) for backward congestion, such packet being sent only upon change of congestion level or after a predetermined number of packets have been received since the last such packet was sent.
  • the normal and mild FCCL states are combined to form a backward correlated congestion level (BCCL) normal state.
  • the frame relay backward explicit congestion notification (BECN) bit is set for all frames crossing the frame relay interface to the source whenever the BCCL state is moderate or severe.
  • the leaky bucket begins to strictly enforce the negotiated throughput rate, R, by discarding the violating frames.
  • the frame tag state is still conveyed to the destination edge node for each of these discarded frames by creating an empty fast packet containing the field code F tag.
  • Because the BCCL control signalling is via the unreliable fast packet relay service, it must be reinforced by repeating the indication to compensate for control fast packets lost in the subnetwork.
  • a control fast packet is sent in a backward direction whenever the BCCL state changes or, alternatively, after a predetermined number of frames have been received in a forward direction.
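Gathering the marking and signalling rules from the last few paragraphs, a hedged sketch of the destination-edge feedback logic might look as follows. Only the FECN/BCCL/BECN rules themselves come from the text; the refresh interval and all names are assumptions.

```python
# FECN is set while FCCL is mild/moderate/severe; normal and mild FCCL
# collapse into the BCCL normal state; BECN is set while BCCL is moderate
# or severe; a backward control fast packet (no user data, one of the
# codes normal/moderate/severe) goes out on a BCCL change or periodically.
CONTROL_REFRESH_FRAMES = 10   # assumed refresh interval, not from the patent

def fecn_bit(fccl: str) -> int:
    """FECN is set on frames leaving the destination edge for any
    non-normal forward correlated congestion level."""
    return 1 if fccl in ("mild", "moderate", "severe") else 0

def bccl_from_fccl(fccl: str) -> str:
    """Normal and mild FCCL states combine into the BCCL normal state."""
    return "normal" if fccl in ("normal", "mild") else fccl

def becn_bit(bccl: str) -> int:
    """BECN is set on frames crossing back to the source while the
    backward correlated congestion level is moderate or severe."""
    return 1 if bccl in ("moderate", "severe") else 0

class BackwardSignaller:
    """Decides when to emit a backward control fast packet; repetition
    compensates for control packets lost by the unreliable FPR service."""
    def __init__(self):
        self.last_bccl = "normal"
        self.frames_since_send = 0

    def on_forward_frame(self, bccl: str) -> bool:
        self.frames_since_send += 1
        if (bccl != self.last_bccl
                or self.frames_since_send >= CONTROL_REFRESH_FRAMES):
            self.last_bccl = bccl
            self.frames_since_send = 0
            return True   # send a control fast packet carrying bccl
        return False
```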
  • the destination edge congestion correlation, FECN marking, backward congestion indication, and BECN marking functions are described more fully below.
  • FIG. 7, numeral 700 sets forth a flow chart illustrating steps for sampling a transit queue group for congestion levels in accordance with the method of the present invention. For each queue group that is sampled, one of the following sets of steps is executed (in accordance with determinations as set forth below):
  • a flow chart for illustrating steps for packet discarding and packet congestion level marking at a data transmit queue in accordance with the present invention is set forth in FIG. 8, numeral 800.
  • For each data packet to be transmitted, one of the following sets of steps is followed (in accordance with determinations as set forth below): (A) selecting a packet from a queue of data packets to be transmitted (802); determining whether an instantaneous queue length is less than a predetermined high discard level (804) [see step set (B)]; where affirmative, determining whether the queue group congestion level (CL) is severe, or alternatively, whether the instantaneous queue length is greater than a predetermined low discard level (806) [see step set (C)]; where affirmative, determining whether the discard priority is Last Discarded (808) [see step set (D)]; where affirmative, determining whether the packet CL is less than the queue group CL (810) [see step set (E)]; where affirmative, setting the packet CL equal to the queue group CL (812); and transmitting the packet (814);
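One way to read step set (A) and its branch references is the sketch below; the watermark values are placeholders, and the behaviour of the discard branches (B) and (D) is inferred from the affirmative path rather than stated in the excerpt.

```python
SEVERE = 3
HIGH_DISCARD = 100   # packets; assumed watermark values, not from the patent
LOW_DISCARD = 50

def data_queue_decision(iql: int, group_cl: int, discard_priority: str,
                        packet_cl: int) -> tuple:
    """Return ('discarded', cl) or ('sent', cl) for one data fast packet.
    discard_priority is 'last' (Last Discarded) or 'first' (First Discarded)."""
    if iql >= HIGH_DISCARD:
        return "discarded", packet_cl        # hard ceiling: drop regardless
    under_pressure = group_cl == SEVERE or iql > LOW_DISCARD
    if under_pressure and discard_priority == "first":
        return "discarded", packet_cl        # shed violating frames' packets first
    if packet_cl < group_cl:
        packet_cl = group_cl                 # stamp worst congestion seen on path
    return "sent", packet_cl
```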
  • A flow chart illustrating steps for packet discarding and packet congestion level marking at a voice transmit queue in accordance with the present invention is set forth in FIG. 9, numeral 900.
  • Voice fast packets are typically not processed at the source edge node by the monitor/enforcer.
  • the packet discard priority is typically determined by a speech coding unit, based on the significance of the packet for purposes of reconstruction of the speech signal. For each voice packet to be transmitted, one of the following sets of steps is followed (in accordance with determinations as set forth below):
  • step set (A): selecting a packet from a queue of voice packets to be transmitted (902); setting variable PDP equal to the packet discard priority and variable IQL to the instantaneous queue length (904); determining whether IQL is greater than a predetermined voice watermark3 (906) [see step set (B)]; where IQL is less than or equal to the predetermined voice watermark3, determining whether IQL is greater than a predetermined voice watermark2 and PDP is unequal to Last Discarded (908) [see step set (C)]; where IQL is less than or equal to the predetermined voice watermark2 or PDP is equal to Last Discarded, determining whether IQL is greater than a predetermined voice watermark1 and PDP equals First Discarded (910) [see step set (D)]; where IQL is less than or equal to the predetermined voice watermark1 or PDP is unequal to First Discarded, determining whether the packet CL is less than the queue group CL (912) [see step set (E)]; where affirmative, setting the packet CL equal to the queue group CL and transmitting the packet;
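A hedged rendering of the three voice watermarks follows; the watermark values are placeholders, and the discard behaviour of branch sets (B) through (D) is inferred from the structure of step set (A).

```python
WM1, WM2, WM3 = 20, 40, 80   # illustrative voice watermarks (packets)

def voice_queue_decision(iql: int, group_cl: int, pdp: str,
                         packet_cl: int) -> tuple:
    """Return ('discarded', cl) or ('sent', cl) for one voice fast packet.
    pdp is the packet discard priority assigned by the speech coding unit."""
    if iql > WM3:
        return "discarded", packet_cl    # overload: drop every packet
    if iql > WM2 and pdp != "last":
        return "discarded", packet_cl    # keep only Last Discarded packets
    if iql > WM1 and pdp == "first":
        return "discarded", packet_cl    # drop the least significant packets
    if packet_cl < group_cl:
        packet_cl = group_cl             # stamp worst congestion seen on path
    return "sent", packet_cl
```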
  • A flow chart illustrating steps for updating congestion and tagging status, forward explicit congestion notification (FECN) marking, and creation of a backward control packet at a destination edge node in accordance with the method of the present invention is set forth in FIG. 10, numeral 1000.
  • FECN: forward explicit congestion notification
  • A flow chart illustrating steps for updating a tag status when receiving a first packet in a frame at a destination edge node in accordance with the method of the present invention is set forth in FIG. 11, numeral 1100.
  • For each first packet in a frame received one of the following sets of steps is followed (in accordance with determinations as set forth below):
  • A flow chart illustrating steps for creating and storing a backward congestion code at a destination edge node in accordance with the method of the present invention is set forth in FIG. 12, numeral 1200. For each packet having a tag or congestion status changed, one of the following sets of steps is followed (in accordance with determinations as set forth below):
  • A flow chart illustrating steps for receiving a control packet in a backward direction at a source edge node in accordance with the method of the present invention is set forth in FIG. 13, numeral 1300. For each control packet received, one of the following sets of steps is followed (in accordance with determinations as set forth below):
  • A flow chart illustrating steps for receiving a data fast packet in a backward direction and setting a backward explicit congestion notification (BECN) bit at a source edge node in accordance with the method of the present invention is set forth in FIG. 14, numeral 1400. For each data fast packet received in a backward direction, one of the following sets of steps is followed (in accordance with determinations as set forth below):
  • step (A) determining whether the packet is a first packet in its frame (1402)[see step set (B)]; where the packet is a first packet in its frame, determining whether the backward congestion indication is equal to normal (1404) [see step set (C)]; where the backward congestion indication is equal to normal, ceasing taking further steps to set BECN; (B) in step (A) where the packet is other than a first packet in its frame, ceasing taking further steps to set the BECN bit; and
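A minimal sketch of this check follows; the affirmative branch (actually setting the BECN bit when the stored indication is other than normal) is inferred from the surrounding text rather than stated in the excerpt.

```python
def should_set_becn(is_first_in_frame: bool, backward_congestion: str) -> bool:
    """FIG. 14 sketch: only the first packet of a frame is examined, and
    nothing is set while the stored backward congestion indication is
    'normal'; otherwise the frame's BECN bit is set (inferred branch)."""
    if not is_first_in_frame:
        return False
    return backward_congestion != "normal"
```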
  • FIG. 15 shows the steps of the method of the present invention for providing rate-based congestion control in an integrated fast packet traffic network, each packet having at least a two bit congestion level field and a predetermined throughput rate. The method comprises at least the steps of: rate-based monitoring and rate enforcing the traffic utilizing a monitor/enforcer that provides a fast packet discard priority (1502); providing, at a plurality of intermediate nodes, fast packet transmission paths (1504); and providing at least a correlated congestion level and outputting reassembled frames of transmitted fast packets at a realized throughput rate (1506), such that the realized throughput rate of the transmitted fast packets may exceed the predetermined throughput rate where the fast packets utilize available network capacity.
  • a system carrying transparent framed traffic utilizes the method of the present invention.
  • the same function is provided as described above, except that the access interface does not support FECN or BECN. That is, the congestion control will still be closed loop, but will have no way of informing the source or destination to slow its rate.
  • a system carries point-to-point simplex traffic, or alternatively, point-to-multipoint simplex traffic.
  • the leaky bucket continues to lower the fast packet discard priority and to discard entire frames, but now does so in an open-loop fashion.
  • the excess rate parameter, R2 is used to switch the leaky bucket to a discard mode.
  • the leaky bucket operates identically to that of the frame relay service described above.

Abstract

An adaptive congestion control device (600) and method provides for minimizing congestion on a basis of independent congestion level indicators (626). The invention further provides efficient recovery in an integrated packet network (17, 26, 32, 38) that becomes congested. In addition, the invention ensures that a user may utilize the network on a space-available basis when capacity is available in the network.

Description

RATE-BASED ADAPTIVE CONGESTION CONTROL SYSTEM AND METHOD FOR INTEGRATED PACKET NETWORKS
Field of the Invention
This invention relates generally to data processing and data communications, and more particularly relates to controlling congestion in an integrated packet network.
Background of the Invention
Businesses, institutions, government agencies, common carriers, and value-added service providers utilize packet- switched networks that integrate frame-delimited data traffic (including frame relay), packetized speech, and constant bit rate (non-delimited) traffic onto common transmission facilities. For such networks, a basic unit of transmission has been defined to be a "fast packet." A fast packet typically contains a header that includes connection identification and other overhead information, and is fixed or restricted in length (e.g., to contain a "payload" of 48 octets of user information). Where it is necessary to convey information of length greater than that of a maximum payload size, a fast packet adaption protocol is utilized, operating at a higher level than a fast packet relaying protocol, where the fast packet adaption protocol includes a segmentation and reassembly function. It is useful to note that transmission of integrated packets requires congestion control because of its bursty nature, while transmission of constant bit rate (CBR) traffic does not require such control.
Packet switched networks are subject to congestion when traffic offered to the network exceeds a capacity of the network. Such congestion is random in nature. End-user equipment tends to offer traffic in "bursts," interspersed with periods of inactivity. Networks are typically designed to accommodate some expected aggregate offered load. However, cost to an operator of the transmission network facilities and related equipment increases with the capacity of the network. Thus, networks are often designed to accommodate less than an absolute maximum possible offered load, and to rely on statistical effects to avoid blocking of transmission. Such a design may lead to congestion.
During occurrence of congestion, queues internal to nodes that constitute the network grow, and may exceed the memory allocated to them, forcing packets to be discarded. In addition, the end-to-end transit delay experienced by packets traversing the network increases significantly as offered traffic increases. Discarding of packets and increased end-to-end transit delay degrade an end-user's perceived quality of service. In addition, discard or excessive delay causes commonly used end-to-end protocols to retransmit not only those packets which were discarded or delayed, but all packets which were unacknowledged at the time that the discard or time-out was determined. This typically results in an avalanche effect, during which the network ceases to perform useful work and cannot recover without manual intervention.
In a "connection-oriented" packet network, values of quality-of-service parameters are negotiated among originating end-user equipment, the network(s), and terminating end-user equipment. Categories of negotiated parameters include throughput and transit delay. Throughput parameters typically describe the users' expectations of traffic to be offered during a given time period, an estimation of a greatest amount of traffic the users expect to offer during such a period, and a metric for describing "burstiness" of traffic. This throughput information is used by the network(s) for purposes such as resource allocation and rate enforcement.
There is a need for a device and method for providing rate-based congestion control for integrated packet networks that performs rate enforcement such that end-user equipment may exceed an expected throughput agreed during negotiation, utilizing the network(s) on a space-available basis when capacity is available in the network(s).
Summary of the Invention
A system and method are included for providing rate- based congestion control in an integrated fast packet network. Each packet is capable of conveying a plurality of levels of congestion indication. The system includes units for and the method includes steps for providing the functions of, a source edge node unit, operably coupled to receive the fast packet traffic for transmission over the network, said unit having a predetermined throughput rate, for rate-based monitoring and rate enforcement of the traffic utilizing a monitor/enforcer unit, a transit node unit, operably coupled to the source edge node unit, having a plurality of intermediate nodes for providing at said nodes fast packet transmission paths, and a destination edge node unit, operably coupled to the transit node unit, for providing at least a correlated congestion level and for outputting traffic at a realized throughput rate, such that the realized throughput rate of the transmitted fast packets may exceed the negotiated throughput rate where the fast packets utilize unallocated or unused network capacity.
The terms 'fast packet' and 'packet' are used interchangeably.
Brief Description of the Drawings
FIG. 1, numeral 100, illustrates relevant fields of an information packet (102), typically a fast packet, and information gathering of congestion information along a fast packet source-to-destination connection path of a subnetwork (SUBPATH) utilizing a fast packet with a multiple bit congestion level field in accordance with the present invention. FIG. 2, numeral 200, illustrates typical queue group congestion level transitions in accordance with the present invention.
FIG. 3, numeral 300, illustrates a protocol profile of a network, modelled according to the Open System Interconnection (OSI) reference model, utilizing the method of the present invention.
FIG. 4, numeral 400, illustrates a leaky bucket monitor/enforcer in accordance with the present invention where said monitor/enforcer is visualized as a fictitious queue model system.
FIG. 5, numeral 500, sets forth a flow diagram illustrating a leaky bucket operation and discard priority marking in accordance with the present invention.
FIG. 6, numeral 600, illustrates a first embodiment of a system that utilizes the present invention for providing rate- based congestion control in an integrated fast packet network, each fast packet capable of conveying a plurality of levels of congestion.
FIG. 7, numeral 700, sets forth a flow chart illustrating steps for sampling a transit queue group for congestion levels in accordance with the method of the present invention.
FIG. 8, numeral 800, sets forth a flow chart showing steps for fast packet discarding and fast packet congestion level marking at a data transmit queue in accordance with the present invention.
FIG. 9, numeral 900, sets forth a flow chart showing steps for fast packet discarding and fast packet congestion level marking at a voice transmit queue in accordance with the present invention.
FIG. 10, numeral 1000, sets forth a flow chart illustrating steps for updating congestion and tagging status, forward explicit congestion notification (FECN) marking and creation of backward control fast packet at a destination edge node in accordance with the method of the present invention. FIG. 11, numeral 1100, is a flow chart illustrating steps for updating a tag status when receiving a first fast packet in a frame at a destination edge node in accordance with the method of the present invention.
FIG. 12, numeral 1200, is a flow chart illustrating steps for creating and storing a backward congestion code at a destination edge node in accordance with the method of the present invention. FIG. 13, numeral 1300, is a flow chart illustrating steps for receiving a control fast packet in a backward direction at a source edge node in accordance with the method of the present invention.
FIG. 14, numeral 1400, is a flow chart illustrating steps for receiving a data fast packet in a backward direction and setting a backward explicit congestion notification (BECN) at a source edge node in accordance with the method of the present invention.
FIG. 15, numeral 1500, shows the steps of the method of the present invention for providing rate-based congestion control in an integrated fast packet network, each fast packet capable of conveying a plurality of levels of congestion and having a predetermined throughput rate.
Detailed Description of a Preferred Embodiment
The present invention provides several advantages over known approaches. First, at least four levels of network congestion are conveyed in a fast packet header in a forward direction (i.e., toward a destination user's equipment), providing corresponding levels of action by the network. Since other schemes have two levels or none, the present invention, by utilizing the four levels in a unique fashion, provides for a greater scope of action by the network.
Second, levels of congestion are tracked and filtered independently by intermediate switching nodes. An indication of a highest congestion level encountered between an entry and egress point is conveyed in each fast packet. This approach provides more efficient network operation for paths that cross multiple switching nodes. In addition, this avoids problems (such as are found utilizing density marking schemes) of distinguishing actual congestion from short-term, inconsequential changes in the states of individual switching nodes, when said changes are summed over all the nodes traversed by a path.
Third, when congestion occurs in one or more switching nodes, only those connections which exceed their negotiated throughput parameters receive an explicit congestion indication unless one or more switching nodes experience severe congestion. Thus, during periods of mild or moderate congestion, connections operating within the throughput parameters previously negotiated between the end-user equipment and the network continue to receive the throughput so negotiated. Fourth, stability of the network is independent of specific behavior of end-user equipment. Fifth, feedback of severe congestion information toward an entry to the network causes discarding of information, thus not relying, as does existing art, on discarding at a congested switching node. This frees processing capacity in congested nodes.
FIG. 1, numeral 100, illustrates relevant fields of an information packet (102), typically a fast packet, and information gathering of congestion information along a packet source-to-destination connection path of a subnetwork utilizing a fast packet with a multiple bit congestion level field in accordance with the present invention. The subnetwork typically includes INTERMEDIATE SWITCHING NODES (110), coupled, respectively, to a SOURCE EDGE NODE (108) and to a DESTINATION EDGE NODE (112), the subnetwork further being coupled to at least two end systems, at least one being the SOURCE END SYSTEM (109), and at least one being a DESTINATION END SYSTEM (111), to provide operable transmission of fast packets along the path between the SOURCE EDGE NODE (108) and the DESTINATION EDGE NODE (112), described more fully below (FIG. 6). The data packet includes at least a field CL (104) that represents a multiple bit field, typically two bits, used to store and indicate a congestion level of a most congested internodal link along the path and a data field (106). The congestion level of an internodal link is determined locally in each intermediate node feeding the link. In the preferred embodiment the two bit field of CL takes on values corresponding to normal, mild, moderate, and severe levels of congestion. At each intermediate node fast packets are queued into at least voice and data transit queues, and the congestion level is determined by comparing an average depth of transit queues within a queue group to a set of predetermined thresholds. Voice and data are in separate queue groups. The threshold values depend on both the queue group and a specific queue within the group (typically high, medium, and low priority queues for the data group). The congestion level is tracked separately for voice and data queue groups, with the congestion level of the data queue group set to that of the most congested priority queue. The DESTINATION EDGE NODE typically (e.g., periodically or after changes in the state of the path) copies the two bit field CL (114) into a second packet (120) containing no user data and only one of three field codes (normal, moderate, and severe) and utilizes a closed-loop feedback scheme to provide backward congestion rate adjustment information. Generally, normal and mild congestion level states are combined to form a Backward Correlated Congestion Level (BCCL) normal state, as described more fully below.
FIG. 2, numeral 200, illustrates typical queue group congestion level transitions in accordance with the present invention. As is clear from FIG. 2, hysteresis is introduced by only allowing queue group congestion level transitions to a normal state when congestion subsides. This hysteresis helps to return the queue group to a normal state more rapidly and provides a more stable operation of the control process. During normal operation, the congestion state of the queue group is 'normal' (202). When the average length of any queue in the queue group exceeds a predetermined level for said queue, the congestion state of the queue group becomes 'mild' (204). When, in the mild state, the average length of any queue in the queue group exceeds a predetermined level for the queue, the congestion state of the queue group becomes 'moderate' (206). When, in the moderate state, the average length of any queue in the queue group exceeds a predetermined level for the queue, the congestion state of the queue group becomes 'severe' (208). When, in the mild, moderate or severe states the average length of each queue in the queue group becomes less than a predetermined threshold established for said queue, the congestion state of the queue group becomes 'normal'. Note that the congestion state of the queue group cannot transition from 'severe' to 'moderate' or 'mild', and similarly cannot transition from 'moderate' to 'mild'.
FIG. 3, numeral 300, illustrates a protocol profile of a network, modelled according to the principles of the Open System Interconnection (OSI) Reference Model, utilizing the method of the present invention. End systems (302, 312) are illustrated, each having the seven layers (PH = PHYSICAL (1,8), DATA LINK (2,9), NETWORK (3,10), TRANSPORT (4,11), SESSION (5,12), PRESENTATION (6,13), and APPLICATION (7,14) for the functions of data communication as defined more fully, as is known in the art, in the International Organization for Standardization standard (ISO 7498) and in the International Telegraph and Telephone Consultative Committee's Recommendation X.200. EDGE NODES OF THE END SYSTEMS (304, 310) typically comprise at least a first layer being a physical layer (PH) (15, 40) for input (15, 40) and for output (21, 34), a second layer being a data link layer (DL) (16, 39) operably coupled to an end system (302, 312) and being further subdivided into corresponding data link control (DLC) (18, 37), fast packet adaption (FPA) (19, 36), and fast packet relay (FPR) (20, 35) sub-layers operably coupled to at least a first intermediate node (306, 308) via the physical layer (PH) (21, 34), and a third layer (NETWORK) (17, 38). INTERMEDIATE NODES (306, ..., 308) typically comprise at least a first layer being a physical layer (PH) for input (22, 33) and for output (27, 28), a second layer having data link control (DLC) (25, 31), fast packet adaption (FPA) (24, 30), and fast packet relay (FPR) (23, 29) layers operably coupled to at least a first intermediate node (306, 308) via the physical layer (PH) (27, 28), and a third layer (NETWORK) (26, 32). Typically, only the PH layer and FPR and FPA sub-layers are active in the subnetwork during information transfer.
In a first embodiment of the present invention wherein the invention is used for congestion control interworking with frame relay, congestion control is rate-based, making use of an unbuffered leaky bucket monitor/enforcer. Leaky buckets are known in the art. The role of the leaky bucket is to monitor a rate of traffic on each fast packet connection at a source edge of the network. By their nature these connections are bursty. Thus, the leaky bucket measures an average rate. A time interval over which the leaky bucket averages rate is selectable. Said time interval determines a degree of burstiness allowed to the monitored connection. Typically a short averaging interval is used for connections with CBR-like demand characteristics, and a long averaging interval is used for more bursty connections (such as LAN interconnections).
A leaky bucket monitor/enforcer may be visualized (but typically not implemented) as a fictitious queue model system, illustrated in FIG. 4, numeral 400. Frames, comprised of at least one fast packet, are transmitted to two switches (402, 408), and are selectively directed (in parallel) from the first switch (402) to a summer (406) and from the second switch (408) to a fictitious queue (410) having a queue length Q where the arriving frame finds that Q is less than or equal to a preselected maximum allocation (bucket size) B. The fictitious queue (410) is served at a rate R. Where the frame finds that Q is less than or equal to B, the frame is allowed into the fast packet subnetwork (leaky bucket output). Where the arriving frame finds that Q is greater than B, the first switch directs the frame to a violation handling unit (404), and the second switch opens to block the frame from entering the fictitious queue (410).
When a frame is received in excess of the negotiated rate R and bucket size B, the violation handling unit (404) provides for using a field code F for marking a first fast packet in the frame for use by the FPA layer (to indicate to the destination edge node that the frame is in violation of the negotiated rate R and bucket size B), and the frame is allowed into the fast packet network. Also, the discard priority for the first fast packet of the frame is set to Last Discarded, as are all fast packets of non-violating frames. In addition, the discard priority of the second and subsequent fast packets of the frame is lowered.
Where a path that the frame is to follow is congested to a level that is greater than or equal to the predetermined level of congestion (as indicated by the BCCL), the violation handling unit (404) treats the frame as a violating frame, discards it, and creates an empty fast packet with Last Discarded priority in which the field code F is marked (for indicating the violation to the destination edge node for that connection). A frame arriving at the leaky bucket with the frame relay discard eligibility (DE) bit set to 1 (due to a prior leaky bucket determination or set by the end system) is treated as a violating frame, as set forth above. Also, the DE bit of any frame marked by the leaky bucket at the source edge node and successfully delivered to the destination edge node is set at the destination edge node before the frame exits the subnetwork.
The monitor/enforcer at the source edge node determines the state of congestion along the forward direction of the connection by a feedback mechanism (described below). Where a path is congested to a level that is greater than or equal to the predetermined level of congestion, all but the first fast packet of the violating frame are discarded at the source edge node. This removes the burden of discarding fast packets from the intermediate nodes, since an intermediate node overburdened with discarding fast packets from several violating frames could deliver poor performance for fast packets from well-behaved connections. Where the path is congested to a level that is less than the predetermined level of congestion, the discard priority for all but the first fast packet (which carries the marked field code F) of the violating frame is lowered. Where a link becomes congested to a predetermined level at which packets are dropped, the fast packet discard priority determines which fast packets are discarded first. Hence, fast packets from violating frames are discarded before any of those from fast packet connections that are within the negotiated rate R.
An excess rate parameter, R2, is used to switch the leaky bucket to the discard mode in an open loop fashion. Conceptually, there are two leaky buckets: a first leaky bucket is used to lower the fast packet discard priority if R is exceeded, and a second leaky bucket is used to discard frames at the source edge where R + R2 is exceeded. In effect, R2 represents the amount of excess traffic that can be discarded from an intermediate link node queue for a fast packet connection without degrading service provided to other fast packet connections sharing the link.
Leaky bucket operation and discard priority marking in accordance with the present invention is set forth in a flow diagram in FIG. 5, numeral 500. The terminology "where affirmative", used below, is defined to mean that the determination recited immediately before "where affirmative" has been executed and has been found to be positive. Upon receiving a fast packet (502), one of the following sets of steps is executed (a code sketch of this procedure follows the step sets):
(A) determining whether the fast packet is a first packet in a frame (504) [see also step set (B)]; where affirmative, updating the leaky bucket queue length Q and setting a first clock (506); determining whether the discard eligibility bit (DE) is set (508) [see step set (G)]; where DE is unset, determining whether Q > B (510) [see step set (F)]; where affirmative, updating leaky bucket 2 with queue length Q2 and setting a second clock (512); determining whether Q2 is greater than a second preselected maximum allocation (bucket size) B2 (514) [see step set (H)]; where Q2 ≤ B2, unsetting an excess indication (516); determining a number of bits (K) in the fast packet and updating Q2 such that Q2 = Q2 + K (517); setting a frame mark indication (518); determining whether a severe congestion indication (greater than or equal to a predetermined congestion level) is set (520) [see step set (I)]; where the severe congestion indication is unset, tagging (marking) an FPA frame state (522); setting a discard priority to Last Discarded (524); and transmitting the fast packet (526);
(B) where the fast packet is other than the first packet in the frame, determining whether the excess indication is set (544) [see also step set (C)]; where the excess indication is set, discarding the fast packet (556);
(C) where the excess indication is unset, determining whether a frame mark indication is set (546) [see step set (D)]; where the frame mark indication is set, determining whether the severe congestion level indication is set (548) [see step set (E)]; where the severe congestion indication is set, discarding the fast packet (556);
(D) where the frame mark indication is unset, determining the number of bits (K) in the fast packet (538); updating Q such that Q = Q + K (540); setting a discard priority to Last Discarded (542); and transmitting the fast packet (526);
(E) where the severe congestion level indication is unset, determining the number of bits (K) in the fast packet (550); updating Q2 such that Q2 = Q2 + K (552); setting a discard priority to First Discarded (544); and transmitting the fast packet (526);
(F) where Q ≤ B, unsetting the frame mark indication (536); determining the number of bits (K) in the fast packet (538); updating Q such that Q = Q + K (540); setting a discard priority to Last Discarded (542); and transmitting the fast packet (526);
(G) where the DE bit is set in the frame relay frame, bypassing the step of determining whether Q > B in (A) above, and otherwise proceeding as set forth in step (A);
(H) where Q2 > B2, setting an excess indication (528) and discarding the fast packet (530); and (I) where a severe congestion indication is set, discarding the fast packet (532); creating an empty packet (534); tagging (marking) an FPA frame state of the empty packet (522); setting its discard priority to Last Discarded (524); and transmitting the empty packet (526).
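The step sets above can be condensed into a single per-packet routine. The Python sketch below is one reading of the FIG. 5 flow; the drain rate of the second bucket, the priority encodings, and all identifier names are assumptions, and the routine returns a disposition rather than performing actual transmission.

```python
import time
from dataclasses import dataclass, field

LAST_DISCARDED, FIRST_DISCARDED = 0, 1        # illustrative encodings

@dataclass
class BucketState:
    r: float                                  # negotiated rate R (bits/s)
    b: float                                  # bucket size B (bits)
    r2: float                                 # excess rate R2 (assumed drain rate of bucket 2)
    b2: float                                 # excess bucket size B2
    q: float = 0.0
    q2: float = 0.0
    excess: bool = False
    frame_mark: bool = False
    t1: float = field(default_factory=time.monotonic)
    t2: float = field(default_factory=time.monotonic)

    def drain1(self) -> None:                 # step (506): update Q, set clock
        now = time.monotonic()
        self.q = max(0.0, self.q - self.r * (now - self.t1))
        self.t1 = now

    def drain2(self) -> None:                 # step (512): update Q2, set clock
        now = time.monotonic()
        self.q2 = max(0.0, self.q2 - self.r2 * (now - self.t2))
        self.t2 = now

def handle_fast_packet(st: BucketState, k_bits: int, first_in_frame: bool,
                       de_bit: bool, severe_congestion: bool):
    """Return (disposition, discard_priority) for one fast packet."""
    if first_in_frame:                        # step set (A)
        st.drain1()
        if not de_bit and st.q <= st.b:       # (F): conforming frame
            st.frame_mark = False
            st.q += k_bits
            return ('transmit', LAST_DISCARDED)
        st.drain2()                           # DE set (G), or Q > B
        if st.q2 > st.b2:                     # (H): beyond the excess allowance
            st.excess = True
            return ('discard', None)
        st.excess = False                     # (516)-(518)
        st.q2 += k_bits
        st.frame_mark = True
        if severe_congestion:                 # (I): discard, send empty tagged packet
            return ('discard_send_empty_tagged', LAST_DISCARDED)
        return ('transmit_tagged', LAST_DISCARDED)
    if st.excess:                             # (B)
        return ('discard', None)
    if st.frame_mark:                         # (C)
        if severe_congestion:
            return ('discard', None)
        st.q2 += k_bits                       # (E)
        return ('transmit', FIRST_DISCARDED)
    st.q += k_bits                            # (D)
    return ('transmit', LAST_DISCARDED)
```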
In a preferred embodiment, data fast packet connections are assigned to a transit internodal (head-of-line priority) queue based on an expected burst size of the source. The larger the expected burst entering the subnetwork, the lower the assigned priority queue and the higher the expected delay. Hence, interactive terminal traffic is typically assigned a high priority, and file transfers a low priority. FIG. 6, numeral 600, illustrates a first embodiment of a system that utilizes the present invention for providing rate-based congestion control in an integrated fast packet network, each packet having at least a two bit congestion level field. The system comprises at least a source edge node unit (602), operably coupled to receive the fast packet traffic for transmission over the network, said unit having a negotiated throughput rate, for rate-based monitoring and rate enforcement of the traffic utilizing a monitor/enforcer unit that provides a fast packet discard priority; a transit node unit (604), operably coupled to the source edge node unit, having a plurality of intermediate nodes for providing at said nodes fast packet transmission paths; and a destination edge node unit (606), operably coupled to the transit node unit (604), for providing at least a correlated congestion level and for outputting reassembled frames of transmitted fast packets at a realized throughput rate, such that the realized throughput rate of the transmitted fast packets may exceed the predetermined throughput rate where the fast packets utilize unused or unallocated network capacity. Since an end system typically both transmits and receives frames related to the same end-to-end communication, an end system is both a source end system in one direction of communication and a destination end system in the other; similarly, an edge node is both a source edge node in one direction of communication and a destination edge node in the other.
The source edge node unit (602) includes at least a monitor/enforcer unit (608), a BCCL state detection unit (612) and a frame relay BECN marking unit (610). The monitor/enforcer unit at least performs leaky bucket operation and discard priority marking. Typically, the discard priority field of a fast packet has two possible values, the one being 'Last Discarded' and the other being 'First Discarded'. When a frame is received in excess of the previously negotiated rate R and bucket size B, the discard priority of the first fast packet comprising the frame is set to Last Discarded and that of subsequent fast packets is set to First Discarded. However, if the BCCL state detection unit (612) indicates that congestion along the path is 'severe', the frame is discarded and a control fast packet is sent, or if the previously negotiated excess rate R2 and excess bucket size B2 are exceeded, the frame is discarded and no control fast packet is sent. When a control fast packet is received in the backward direction (i.e., from the destination edge node), the BCCL state detection unit (612) stores the BCCL from said control fast packet. When the first fast packet of a frame is received in the backward direction, the BECN bit of said frame is set if the BCCL is equal to or greater than a predetermined congestion level.
The transit node unit includes at least a congestion-reducing unit (614) for determining a transit node queue group (TNQG) congestion level for fast packets and for discarding fast packets based on said TNQG congestion level and on said discard priority. The transit node unit generally further includes a low pass filter, operably coupled to the congestion-reducing unit (614), for providing a measurement that allows congestion determination of a particular queue. Typically, said measurement is obtained by averaging the sampled queue depth that is input from a queue depth sampler (618).
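The specification leaves the filter unspecified; an exponentially weighted moving average is one common realization. The sketch below assumes that choice, and the weight value is likewise an assumption.

```python
class QueueDepthFilter:
    """Low pass filter over sampled queue depths (EWMA; the filter form
    and the weight of 1/8 are assumptions, not mandated by the text)."""

    def __init__(self, weight: float = 0.125):
        self.weight = weight
        self.average = 0.0

    def sample(self, queue_depth: int) -> float:
        # average <- (1 - w) * average + w * sample
        self.average += self.weight * (queue_depth - self.average)
        return self.average
```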
The destination edge node unit (606) generally comprises at least: a connection congestion level (CL) state determiner (620), operably coupled to the transit node unit (604), for determining a connection congestion level (CL) state; a fast packet adaption (FPA) frame tag state determiner (622), operably coupled to the source edge node unit (602), for determining a FPA frame tag state; a correlated CL determiner (626), operably coupled to the connection CL state determiner (620) and to the FPA frame tag state determiner (622), for utilizing the CL state and the FPA frame tag state to determine a correlated congestion level that provides a backward correlated congestion level (BCCL) state to a BCCL signal unit (624) that is operably coupled to provide a BCCL state indication to the source edge node unit (602), as described more fully herein, and that further provides a forward correlated congestion level (FCCL) to a FCCL state determiner (628) that is operably coupled to provide a frame relay forward explicit congestion notification (FECN) to a frame relay FECN marking unit (630) that outputs the reassembled frames of transmitted fast packets.
Thus, there is a one-to-one mapping between a frame relay connection and its supporting fast packet connection such that the frame relay connection is allowed to exceed its negotiated (predetermined) throughput rate R without suffering any negative consequences if there is unused capacity along its assigned route. Where the network is congested, the negotiated throughput rate, as monitored by the leaky bucket, determines which connections are in violation, and sets fast packet discard priorities that are used by transit nodes to distinguish violating and non-violating frame traffic in the subnetwork.
The closed-loop feedback system of the first embodiment provides congestion notification across a frame relay interface. At the destination edge node, the congestion state of the forward connection is maintained by examining a congestion level (CL) field in an arriving fast packet for that connection. The congestion state is correlated with the frame tag state that is determined by checking the first fast packet of each frame for a field code F. The frame tag state is held for a predetermined number of consecutive frames (e.g., 10) after a frame is received with the field code F. For example, correlated congestion level for forward and backward directions may be set as shown in the table below.
[Table: correlated congestion level (FCCL/BCCL) as a function of connection congestion level state and frame tag state]
In this embodiment the frame relay forward explicit congestion notification (FECN) bit is set on frames crossing the frame relay interface at the destination edge whenever the forward correlated congestion level (FCCL) is in a mild, moderate or severe state. The correlated congestion is returned to the source via a fast packet containing no user data and only one of the three field codes (normal, moderate, severe) for backward congestion, such packet being sent only upon change of congestion level or after a predetermined number of packets have been received since the last such packet was sent. Here, the normal and mild FCCL states are combined to form a backward correlated congestion level (BCCL) normal state. The frame relay backward explicit congestion notification (BECN) bit is set for all frames crossing the frame relay interface to the source whenever the BCCL state is moderate or severe. Also, if the BCCL state is severe, the leaky bucket begins to strictly enforce the negotiated throughput rate, R, by discarding the violating frames. The frame tag state is still conveyed to the destination edge node for each of these discarded frames by creating an empty fast packet containing the field code F tag. Since the BCCL control signalling is via the unreliable fast packet relay service, it must be reinforced by repeating the indication to compensate for control fast packets lost in the subnetwork. A control fast packet is sent in a backward direction whenever the BCCL state changes or, alternatively, after a predetermined number of frames have been received in a forward direction. The destination edge congestion correlation, FECN marking, backward congestion indication, and BECN marking functions are described more fully below.
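Under the mapping just described, the marking decisions reduce to a few comparisons. The Python sketch below assumes an ordered encoding of the four congestion levels; the function names are illustrative.

```python
NORMAL, MILD, MODERATE, SEVERE = range(4)     # assumed ordered encoding

def bccl_from_fccl(fccl: int) -> int:
    """Normal and mild FCCL states collapse into the BCCL normal state."""
    return NORMAL if fccl in (NORMAL, MILD) else fccl

def should_set_fecn(fccl: int) -> bool:
    # FECN is set whenever the FCCL is mild, moderate, or severe.
    return fccl != NORMAL

def should_set_becn(bccl: int) -> bool:
    # BECN is set whenever the BCCL state is moderate or severe.
    return bccl in (MODERATE, SEVERE)

def should_enforce_strictly(bccl: int) -> bool:
    # A severe BCCL switches the leaky bucket to strict enforcement of R.
    return bccl == SEVERE
```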
FIG. 7, numeral 700, sets forth a flow chart illustrating steps for sampling a transit queue group for congestion levels in accordance with the method of the present invention. For each queue group that is sampled, one of the following sets of steps is executed (in accordance with determinations as set forth below; a code sketch follows the step sets):
(A) reading (sampling) a queue length from a transit queue (702); updating an average queue length for the transit queue (704); determining whether the average (i.e., filtered) queue length is greater than a first predetermined threshold1 (706) [see step set (B)]; where affirmative, determining whether the average queue length is greater than a second predetermined threshold2 (708) [see step set (C)]; where affirmative, determining whether the average queue length is greater than a third predetermined threshold3 (710) [see step set (D)]; where affirmative, setting the congestion level for the queue to severe (712); determining whether all queues in a selected group have been sampled (714) [see step set (E)]; and, where affirmative, determining whether the queue group congestion level (CL) is greater than a maximum congestion level in the queue group [see step set (F)];
(B) in step (A) where the average queue length is less than or equal to the threshold1, setting the CL to normal and proceeding to the step of determining whether all queues in the selected group have been sampled (714) in step (A);
(C) in step (A) where the average queue length is less than or equal to the threshold2, setting the CL to mild and proceeding to the step of determining whether all queues in the selected group have been sampled (714) in step (A);
(D) in step (A) where the average queue length is less than or equal to the threshold3, setting the CL to moderate and proceeding to the step of determining whether all queues in the selected group have been sampled (714) in step (A);
(E) in step (A) where at least one queue in the selected group is unsampled, proceeding to the step of reading a queue length from a transit queue (702) in step (A); and (F) in step (A) where the previous value of the queue group congestion level (CL) is less than the greatest congestion level of any queue in the queue group, or if the congestion levels of all queues in the queue group are 'normal', setting the queue group congestion level to the greatest congestion level of any queue in the queue group (724). Otherwise, the queue group CL is unchanged.
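A compact rendering of the sampling procedure follows; the threshold names come from the figure, while the ordered level encoding and function names are assumptions.

```python
NORMAL, MILD, MODERATE, SEVERE = range(4)     # assumed ordered encoding

def queue_congestion_level(avg_len: float, threshold1: float,
                           threshold2: float, threshold3: float) -> int:
    """Steps (706)-(712): map a filtered queue length to a congestion level."""
    if avg_len <= threshold1:
        return NORMAL
    if avg_len <= threshold2:
        return MILD
    if avg_len <= threshold3:
        return MODERATE
    return SEVERE

def update_group_cl(previous_group_cl: int, queue_cls: list[int]) -> int:
    """Step (F): raise the group CL to the worst queue CL, and reset it
    only when every queue in the group has returned to normal."""
    worst = max(queue_cls)
    if previous_group_cl < worst or worst == NORMAL:
        return worst
    return previous_group_cl                  # otherwise unchanged
```

Note that the group CL decays only when every queue is back to normal, which gives the state transitions a degree of hysteresis.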
A flow chart for illustrating steps for packet discarding and packet congestion level marking at a data transmit queue in accordance with the present invention is set forth in FIG. 8, numeral 800. For each data packet to be transmitted, one of the following sets of steps is followed (in accordance with determinations as set forth below; a code sketch follows the step sets): (A) selecting a packet from a queue of data packets to be transmitted (802); determining whether an instantaneous queue length is less than a predetermined high discard level (804) [see step set (B)]; where affirmative, determining whether the queue group congestion level (CL) is severe, or alternatively, whether the instantaneous queue length is greater than a predetermined low discard level (806) [see step set (C)]; where affirmative, determining whether the discard priority is Last Discarded (808) [see step set (D)]; where affirmative, determining whether the packet CL is less than the queue group CL (810) [see step set (E)]; where affirmative, setting the packet CL equal to the queue group CL (812); and transmitting the packet (814);
(B) in step (A) where the instantaneous queue length is greater than or equal to the predetermined high discard level, discarding the packet (816); and proceeding to selecting another packet from a queue of data packets to be transmitted (802) in step (A);
(C) in step (A) where the queue group congestion level (CL) is other than severe, and the instantaneous queue length is less than or equal to a predetermined low discard level, proceeding to determining whether the packet CL is less than the queue group CL (810) in step (A);
(D) in step (A) where the discard priority is other than Last Discarded, discarding the packet (818); and proceeding to select another packet from a queue of data packets to be transmitted (802) in step (A); and
(E) in step (A) where the packet CL is greater than or equal to the queue group CL, transmitting the packet (814).
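The discard logic at a data transmit queue can be sketched as below; the queue is modelled as a simple list of packets carrying cl and discard_priority attributes, all of which are illustrative assumptions.

```python
from dataclasses import dataclass

NORMAL, MILD, MODERATE, SEVERE = range(4)
LAST_DISCARDED, FIRST_DISCARDED = 0, 1

@dataclass
class DataPacket:
    cl: int
    discard_priority: int

def service_data_queue(queue: list, group_cl: int,
                       high_discard: int, low_discard: int):
    """One FIG. 8 decision; returns the packet to transmit, or None if
    the head-of-line packet was discarded."""
    pkt = queue[0]
    iql = len(queue)                          # instantaneous queue length
    if iql >= high_discard:                   # (B): hard overload
        queue.pop(0)
        return None
    if group_cl == SEVERE or iql > low_discard:     # selective discard zone
        if pkt.discard_priority != LAST_DISCARDED:  # (D)
            queue.pop(0)
            return None
    if pkt.cl < group_cl:                     # (E): stamp worst congestion seen
        pkt.cl = group_cl
    return queue.pop(0)                       # transmit
```

As a brief usage example, `service_data_queue([DataPacket(NORMAL, FIRST_DISCARDED)], SEVERE, 100, 10)` returns None, since a First Discarded packet is dropped whenever the group is severely congested.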
A flow chart for illustrating steps for packet discarding and packet congestion level marking at a voice transmit queue in accordance with the present invention is set forth in FIG. 9, numeral 900. Voice fast packets are typically not processed at the source edge node by the monitor/enforcer. However, the packet discard priority is typically determined by a speech coding unit, based on the significance of the packet for purposes of reconstruction of the speech signal. For each voice packet to be transmitted, one of the following sets of steps is followed (in accordance with determinations as set forth below; a code sketch follows the step sets):
(A) selecting a packet from a queue of voice packets to be transmitted (902); setting a variable PDP equal to the packet discard priority and a variable IQL to the instantaneous queue length (904); determining whether IQL is greater than a predetermined voice watermark3 (906) [see step set (B)]; where IQL is less than or equal to the predetermined voice watermark3, determining whether IQL is greater than a predetermined voice watermark2 and PDP is unequal to Last Discarded (908) [see step set (C)]; where IQL is less than or equal to the predetermined voice watermark2 or PDP is equal to Last Discarded, determining whether IQL is greater than a predetermined voice watermark1 and PDP equals First Discarded (910) [see step set (D)]; where IQL is less than or equal to the predetermined voice watermark1 or PDP is unequal to First Discarded, determining whether the packet CL is less than the queue group CL (912) [see step set (E)]; where affirmative, setting the packet CL equal to the queue group CL (914); and transmitting the packet (916);
(B) in step (A) where the IQL is greater than the predetermined voice watermark3, discarding the packet (918); and proceeding to selecting a packet from a queue of voice packets to be transmitted (902) in step (A);
(C) in step (A) where the IQL is greater than the predetermined voice watermark2 and PDP is unequal to Last Discarded, discarding the packet (920); and proceeding to selecting a packet from a queue of voice packets to be transmitted (902) in step (A);
(D) in step (A) where the IQL is greater than the predetermined voice watermark1 and PDP is equal to First Discarded, discarding the packet (922); and proceeding to selecting a packet from a queue of voice packets to be transmitted (902) in step (A);
(E) in step (A) where the packet CL is greater than or equal to the queue group CL, transmitting the packet (916).
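The voice queue logic differs from the data queue only in its watermark tests. The sketch below reuses the conventions and constants of the data queue sketch and is likewise an assumption-laden illustration, with watermark1 < watermark2 < watermark3.

```python
LAST_DISCARDED, FIRST_DISCARDED = 0, 1        # as in the data queue sketch

def service_voice_queue(queue: list, group_cl: int,
                        watermark1: int, watermark2: int, watermark3: int):
    """One FIG. 9 decision; packets carry cl and discard_priority fields
    set by the speech coding unit."""
    pkt = queue[0]
    iql = len(queue)
    pdp = pkt.discard_priority
    if iql > watermark3:                      # (B): discard regardless of priority
        queue.pop(0)
        return None
    if iql > watermark2 and pdp != LAST_DISCARDED:   # (C)
        queue.pop(0)
        return None
    if iql > watermark1 and pdp == FIRST_DISCARDED:  # (D)
        queue.pop(0)
        return None
    if pkt.cl < group_cl:                     # (E)
        pkt.cl = group_cl
    return queue.pop(0)
```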
A flow chart for illustrating steps for updating congestion and tagging status, forward explicit congestion notification (FECN) marking and creation of a backward control packet at a destination edge node in accordance with the method of the present invention is set forth in FIG. 10, numeral 1000. For each received outbound packet, one of the following sets of steps is followed (in accordance with determinations as set forth below; a code sketch follows the step sets):
(A) updating the congestion status of the received outbound fast packet (1002); determining whether the fast packet is a first fast packet in the frame (1004) [see step set (B)]; where affirmative, updating a fast packet adaption tagging status (state) (1006); determining whether the outbound packet is congested and tagged, or alternatively, severely congested (1008) [see step set (C)]; where the forward path is uncongested or the packet is untagged, and where the forward path is other than severely congested, determining whether the tagging or congestion status has changed (1010) [see step set (D)]; where the tagging or congestion status has changed, creating and storing a backward congestion code (1012); storing a current tag and congestion status (1014); setting a counter1 to a predetermined number N1 (1016); creating a control packet for a backward direction (1018); and setting a control field for a backward congestion code and setting the control packet discard priority to Last Discarded (1020);
(B) in step (A) where the outbound packet is other than a first packet in the frame, proceeding to the step of determining whether the tagging or congestion status has changed (1010) in step (A);
(C) in step (A) where the forward path is congested and the packet is tagged, or alternatively, the forward path is severely congested, setting a FECN bit in the frame relay frame;
(D) in step (A), where tagging and congestion status are unchanged, determining whether the packet is the first packet in the frame (1024) [see step set (E)]; where the packet is the first packet in the frame, determining whether a counter1 is set to zero (1026) [see step set (F)]; where the counter1 is set to zero, proceeding to the step of setting counter1 to the predetermined number N1 (1016) of step (A);
(E) in step (D) where the packet is other than the first packet in the frame, ending the status determining steps; and
(F) in step (D) where the counter1 is set to other than zero, setting the counter1 to counter1 - 1.
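One way to read the FIG. 10 flow as code is sketched below. The helpers update_tag_status and backward_code correspond to the FIG. 11 and FIG. 12 procedures that follow, send_control is an assumed transmission hook, and the level constituting "congested" is an assumption.

```python
MODERATE, SEVERE = 2, 3                       # encodings as in the earlier sketches

class DestinationEdgeState:
    def __init__(self, n1: int):
        self.n1 = n1                          # control packet refresh period N1
        self.counter1 = n1
        self.counter2 = 0                     # tag hold counter (FIG. 11)
        self.congestion = 0                   # latest connection CL seen
        self.tagged = False
        self.last_status = (False, 0)
        self.bcode = 0                        # stored backward congestion code

def on_outbound_packet(st, pkt, update_tag_status, backward_code, send_control):
    """One reading of the FIG. 10 flow for a received outbound packet."""
    st.congestion = pkt.cl                    # (1002)
    if pkt.first_in_frame:
        update_tag_status(st, pkt.fpa_tagged, n2=10)   # (1006); FIG. 11, e.g. N2 = 10
        congested = st.congestion >= MODERATE          # assumed threshold
        if (congested and st.tagged) or st.congestion == SEVERE:
            pkt.frame_fecn = 1                # step (C): mark FECN on the frame
    if (st.tagged, st.congestion) != st.last_status:   # (1010): status changed
        st.bcode = backward_code(st.congestion, st.tagged)  # (1012); FIG. 12
        st.last_status = (st.tagged, st.congestion)    # (1014)
        st.counter1 = st.n1                   # (1016)
        send_control(st.bcode)                # (1018)-(1020)
    elif pkt.first_in_frame:
        if st.counter1 == 0:                  # (1026): periodic refresh
            st.counter1 = st.n1
            send_control(st.bcode)
        else:
            st.counter1 -= 1                  # step (F)
```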
A flow chart for illustrating steps for updating a tag status when receiving a first packet in a frame at a destination edge node in accordance with the method of the present invention is set forth in FIG. 11, numeral 1100. For each first packet in a frame received, one of the following sets of steps is followed (in accordance with determinations as set forth below; a code sketch follows the step sets):
(A) determining whether the fast packet adaption (FPA) state is marked (tagged) (1102) [see step set (B)]; where the FPA state is untagged, determining whether a counter2 is set to zero (1104) [see step set (C)]; where affirmative, setting a current tag status equal to untagged (1106);
(B) where the FPA state is tagged, setting the counter2 equal to a predetermined second number N2 (1108); and
(C) where the counter2 is greater than 0, decrementing counter2 and setting the current tag status equal to tagged (1110).
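A direct transcription of this counter logic, with names shared with the FIG. 10 sketch above (N2 and all identifiers are assumptions):

```python
def update_tag_status(st, fpa_tagged: bool, n2: int) -> None:
    """Hold the tagged state for N2 consecutive first-packets after a
    tagged frame is seen (FIG. 11)."""
    if fpa_tagged:                            # (B): restart the hold counter
        st.counter2 = n2
    if st.counter2 > 0:                       # (C)
        st.counter2 -= 1
        st.tagged = True
    else:                                     # (A): counter exhausted
        st.tagged = False
```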
A flow chart for illustrating steps for creating and storing a backward congestion code at a destination edge node in accordance with the method of the present invention is set forth in FIG. 12, numeral 1200. For each packet having a tag or congestion status changed, one of the following sets of steps is followed (in accordance with determinations as set forth below; a code sketch follows the step sets):
(A) determining whether a congestion level (CL) is one of: normal and mild (1202)[see step set (B)]; where CL is other than one of normal and mild, determining whether CL is moderate (1204)[see step set (C)]; where CL is other than one of normal, mild, and moderate, determining whether the tag status (state) is tagged (1206) [see step set (D)]; where the tag status is tagged, setting a backward congestion code equal to severe (1208); storing the backward congestion code (1210); (B) in step (A) where the congestion level is equal to one of normal and mild, setting the backward congestion code equal to normal (1212) and storing the backward congestion code (1210);
(C) in step (A) where the congestion level is equal to moderate, determining whether the tag status is tagged (1214) [see step set (E)]; where the tag status is tagged, setting the backward congestion code equal to moderate (1216); and storing the backward congestion code (1210); (D) in step (A) where the tag status is untagged, setting the backward congestion code equal to moderate (1216); and storing the backward congestion code (1210);
(E) in step (C) where the tag status is untagged, setting the backward congestion code equal to normal (1212); and storing the backward congestion code (1210).
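The backward congestion code is thus a pure function of the connection CL and the tag state; a compact sketch, with the assumed encodings used throughout these examples:

```python
NORMAL, MILD, MODERATE, SEVERE = range(4)

def backward_code(cl: int, tagged: bool) -> int:
    """FIG. 12: correlate connection congestion level with tag state."""
    if cl in (NORMAL, MILD):                  # (B)
        return NORMAL
    if cl == MODERATE:                        # (C)/(E)
        return MODERATE if tagged else NORMAL
    return SEVERE if tagged else MODERATE     # severe CL: (A)/(D)
```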
A flow chart for illustrating steps for receiving a control packet in a backward direction at a source edge node in accordance with the method of the present invention is set forth in FIG. 13, numeral 1300. For each control packet received, one of the following sets of steps is followed (in accordance with determinations as set forth below; a code sketch combining the FIG. 13 and FIG. 14 procedures follows the FIG. 14 step sets):
(A) reading a control field of the received control packet (1302); determining whether the control field has changed from a previous backward congestion indication (value) (1304)[see step set (B)]; where affirmative, determining whether the control field is a normal value (1306)[see step set (C)]; where the control field is other than normal, determining whether the control field is a moderate value (1307)[see step set (D)]; where the control field is moderate, setting the backward congestion indication to a moderate indication (1308);
(B) in step (A) where the control field is unchanged from a previous backward congestion indication (value), ceasing taking further steps to set the control field;
(C) in step (A) where the control field is a normal value, setting the backward congestion indication to a normal indication (1310); and
(D) in step (A) where the control field is other than a normal value and other than a moderate value, setting the backward congestion indication to a severe indication (1312). A flow chart for illustrating steps for receiving a data fast packet in a backward direction and setting a backward explicit congestion notification (BECN) bit at a source edge node in accordance with the method of the present invention is set forth in FIG. 14, numeral 1400. For each data fast packet received in a backward direction, one of the following sets of steps is followed (in accordance with determinations as set forth below):
(A) determining whether the packet is a first packet in its frame (1402)[see step set (B)]; where the packet is a first packet in its frame, determining whether the backward congestion indication is equal to normal (1404) [see step set (C)]; where the backward congestion indication is equal to normal, ceasing taking further steps to set BECN; (B) in step (A) where the packet is other than a first packet in its frame, ceasing taking further steps to set the BECN bit; and
(C) in step (A) where the backward congestion indication is indicated as other than normal, setting the frame relay BECN bit to a set state (1406).
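At the source edge the two receive-side procedures are similarly small. The sketch below combines the FIG. 13 and FIG. 14 flows; the state object, the level encoding, and the function names are assumptions.

```python
NORMAL, MILD, MODERATE, SEVERE = range(4)

class SourceEdgeState:
    def __init__(self):
        self.backward_indication = NORMAL

def on_backward_control_packet(st: SourceEdgeState, control_field: int) -> None:
    """FIG. 13: update the stored backward congestion indication only
    when the received control field differs from it."""
    if control_field == st.backward_indication:        # (B): unchanged
        return
    if control_field == NORMAL:                        # (C)
        st.backward_indication = NORMAL
    elif control_field == MODERATE:                    # (A)
        st.backward_indication = MODERATE
    else:                                              # (D)
        st.backward_indication = SEVERE

def mark_becn(st: SourceEdgeState, first_in_frame: bool) -> bool:
    """FIG. 14: return True (set the frame relay BECN bit) only on the
    first packet of a backward frame while congestion persists."""
    return first_in_frame and st.backward_indication != NORMAL
```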
Thus, FIG. 15, numeral 1500, shows the steps of the method of the present invention for providing rate-based congestion control in an integrated fast packet traffic network, each packet having at least a two bit congestion level field and a predetermined throughput rate, the method comprising at least the steps of: rate-based monitoring and rate enforcing the traffic utilizing a monitor/enforcer that provides a fast packet discard priority (1502); providing, at a plurality of intermediate nodes, fast packet transmission paths (1504); and providing at least a correlated congestion level and outputting reassembled frames of transmitted fast packets at a realized throughput rate (1506), such that the realized throughput rate of the transmitted fast packets may exceed the predetermined throughput rate where the fast packets utilize available network capacity.
In a second embodiment, a system carrying transparent framed traffic utilizes the method of the present invention. In this embodiment, the same function is provided as described above, except that the access interface does not support FECN or BECN. That is, the congestion control will still be closed loop, but will have no way of informing the source or destination to slow its rate.
In a third embodiment, a system carries point-to-point simplex traffic, or alternatively, point-to-multipoint simplex traffic. In this embodiment there is no reverse path to close the feedback loop. Here, the leaky bucket continues to lower the fast packet discard priority and to discard entire frames, but now does so in an open-loop fashion. The excess rate parameter, R2, is used to switch the leaky bucket to a discard mode. As indicated above, conceptually, there are two leaky buckets, the first being used to lower the fast packet discard priority where R is exceeded, and the second being used to discard frames at the source edge where R + R2 is exceeded. In the point-to-point embodiment, the leaky bucket operates identically to that of the frame relay service described above.
Obviously, numerous modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that, within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
Claims:
1. A system for providing rate-based congestion control in an integrated fast packet network, each packet being capable of conveying a plurality of levels of congestion indication, comprising at least: source edge node means, operably coupled to receive the fast packet traffic for transmission over the network, said means having a predetermined throughput rate, for rate-based monitoring and rate enforcement of the traffic utilizing a monitor/enforcer means, transit node means, operably coupled to the source edge node means, having a plurality of intermediate nodes for providing at said nodes fast packet transmission paths, and destination edge node means, operably coupled to the transit node means, for providing at least a correlated congestion level and for outputting traffic at a realized throughput rate, such that the realized throughput rate of the transmitted fast packets may exceed the negotiated throughput rate where the fast packets utilize unallocated or unused network capacity.
2. The system of claim 1 wherein the monitor/enforcer means further provides a fast packet discard priority, and, where selected, at least one of 2A-2B:
2A) wherein the transit node means further includes at least congestion-reducing means for determining a transit node queue group (TNQG) congestion level for fast packets and for discarding fast packets based on said TNQG congestion level and on said discard priority, and, where selected, wherein the means for determining a transit node queue group (TNQG) congestion level and for discarding fast packets based on said TNQG congestion level and on said discard priority incorporates hysteresis for state transitions, and 2B) wherein the monitor/enforcer means that provides a fast packet discard priority comprises at least a leaky bucket monitor/enforcer or equivalent means for at least determining and marking a fast packet discard priority such that, when a frame is received in excess of a previously negotiated rate R and bucket size B, the discard priority of the first fast packet comprising said frame is set to 'last discarded' and the discard priority of subsequent fast packets is set to 'first discarded', and, where selected, further including means for, when a frame is received in excess of a previously negotiated rate R2 and bucket size B2, discarding said frame.
3. The system of claim 1, wherein at least one of 3A-3D: 3A) at least one of the transit node means further includes a low pass filter for providing a measurement that allows congestion determination of a particular queue, and, where selected, wherein the measurement that allows congestion determination of the particular queue is obtained by averaging the sampled queue depth that is input from a queue depth sampler, 3B) the congestion level field is two bits or an equivalent of two bits,
3C) further including a backwards correlated congestion level state determiner (BCCL STATE DET), operably coupled to the destination edge node means, for providing a backwards correlated congestion level (BCCL) state to the leaky bucket monitor/enforcer means via a control fast packet, and, where selected, further including at least one of 3C1-3C2:
3C1 ) a frame relay backwards explicit congestion notification (BECN) marking unit, operably coupled to the BCCL STATE DET, for setting a BECN bit when the backward correlated congestion level is equal to or greater than a predetermined congestion level, and
3C2) a means for discarding fast packets when the BCCL state is greater than a predetermined congestion value, and
3D) the destination edge node means comprises at least 3D1-3D3: 3D1) a connection CL state determiner, operably coupled to the transit node means, for determining a connection congestion level (CL) state;
3D2) a fast packet adaption (FPA) frame tag state determiner, operably coupled to the transit node means, for determining a FPA frame tag state;
3D3) a correlated CL determiner, operably coupled to the connection CL state determiner and to the FPA frame tag state determiner, for utilizing the CL state and the FPA frame tag state to determine a correlated congestion level that provides a backward correlated congestion level (BCCL) state to a BCCL signal unit that is operably coupled to provide a BCCL state indication to the source edge node means and that further provides a forward correlated congestion level (FCCL) to a FCCL state determiner, and, where selected, the FCCL state determiner is further operably coupled to provide a frame relay forward explicit congestion notification (FECN) to a frame relay FECN marking unit that outputs the reassembled frames of transmitted fast packets.
4. A system for providing rate-based congestion control in an integrated fast packet network, each packet being capable of conveying a plurality of levels of congestion indication, comprising at least: source edge node means, operably coupled to receive the fast packet traffic for transmission over the network, said means having a negotiated throughput rate R and bucket size B, for rate-based monitoring and rate enforcement of the traffic utilizing a leaky bucket monitor/enforcer means for at least determining and marking a fast packet discard priority such that, when a frame is received in excess of said rate R and bucket size B, the discard priority of the first fast packet comprising said frame is set to 'last discarded' and the discard priority of subsequent fast packets is set to 'first discarded', except that if the BCCL state detection unit indicates that congestion along the path exceeds a predetermined level, the frame is discarded and a control fast packet is sent, or if the negotiated excess rate R2 and excess bucket size B2 are exceeded, the frame is discarded, transit node means, operably coupled to the source edge node means, having a plurality of intermediate nodes for providing at said nodes fast packet transmission paths utilizing congestion-reducing means for determining a transit node queue group (TNQG) congestion level for fast packets and for discarding fast packets based on said TNQG congestion level and on said discard priority, and destination edge node means, operably coupled to the transit node means, for providing at least a correlated congestion level and for outputting reassembled frames of transmitted fast packets at a realized throughput rate, such that the realized throughput rate of the transmitted fast packets may exceed the predetermined throughput rate where the fast packets utilize unused or unallocated network capacity.
5. The system of claim 4, wherein at least one of 5A-5C:
5A) further including a backwards correlated congestion level state determiner (BCCL STATE DET), operably coupled to the destination edge node means, for providing a backwards correlated congestion level (BCCL) state to the leaky bucket monitor/enforcer means via a control fast packet, and, where selected, further including a frame relay backwards explicit congestion notification (BECN) marking unit, operably coupled to the BCCL STATE DET, for setting a BECN bit when the backward correlated congestion level is equal to or greater than a predetermined congestion level,
5B) further including a low pass filter, operably coupled to the congestion-reducing means, for averaging a sampled queue depth that is input from a queue depth sampler, and
5C) wherein the destination edge node means comprises at least 5C1-5C3:
5C1) a connection CL state determiner, operably coupled to the transit node means, for determining a connection congestion level (CL) state;
5C2) a fast packet adaption (FPA) frame tag state determiner, operably coupled to the source edge node means, for determining a FPA frame tag state; 5C3) a correlated CL determiner, operably coupled to the connection CL state determiner and to the FPA frame tag state determiner, for utilizing the CL state and the FPA frame tag state to determine a correlated congestion level that provides a backward correlated congestion level (BCCL) state to a BCCL signal unit that is operably coupled to provide a BCCL state indication to the source edge node means and that further provides a forward correlated congestion level (FCCL) to a FCCL state determiner, and, where selected, wherein the FCCL state determiner is further operably coupled to provide a frame relay forward explicit congestion notification (FECN) to a frame relay FECN marking unit that outputs the reassembled frames of transmitted fast packets.
6. A method for providing rate-based congestion control in an integrated fast packet network, each packet being capable of conveying a plurality of levels of congestion indication, comprising at least the steps of: 6A) rate-based monitoring and rate enforcing the traffic utilizing a monitor/enforcer,
6B) providing, at a plurality of intermediate nodes, fast packet transmission paths, and
6C) providing at least a correlated congestion level and outputting reassembled frames of transmitted fast packets at a realized throughput rate, such that the realized throughput rate of the transmitted fast packets may exceed the predetermined throughput rate where the fast packets utilize unused or unallocated network capacity.
7. The method of claim 6 wherein at least one of 7A-7B: 7A) the congestion level field is two bits or an equivalent of two bits, and 7B) further including the step of utilizing the monitor/enforcer for providing a fast packet discard priority.
8. The method of claim 6, wherein utilizing the monitor/enforcer that provides a fast packet discard priority comprises at least the steps of utilizing a leaky bucket monitor/enforcer for at least determining and marking a fast packet discard priority such that, when a frame is received in excess of the negotiated rate R, the discard priority of the first fast packet comprising said frame is set to Last Discarded and the discard priority of subsequent fast packets is set to First Discarded, and, where selected, at least one of 8A-8B: 8A) wherein the leaky bucket and discard priority marking, upon receiving a fast packet, utilize one of the following sets of steps 8A1-8A9:
8A1) determining whether the fast packet is a first packet in a frame [see also step set (8A2)]; where affirmative, updating the leaky bucket queue length Q and setting a first clock; determining whether the discard eligibility (DE) bit is set in the frame relay frame [see step set (8A7)]; where DE is unset, determining whether Q > B [see step set (8A6)]; where affirmative, updating leaky bucket 2 with queue length Q2 and setting a second clock; determining whether Q2 is greater than a second preselected maximum allocation (bucket size) B2 [see step set (8A8)]; where Q2 ≤ B2, unsetting an excess indication; determining a number of bits (K) in the fast packet and updating Q2 such that Q2 = Q2 + K; setting a frame mark indication; determining whether a severe congestion level (greater than or equal to a predetermined congestion level) is set [see step set (8A9)]; where the severe congestion indication is unset, tagging (marking) an FPA frame state; setting a discard priority to Last Discarded; and transmitting the fast packet; 8A2) where the fast packet is other than the first packet in the frame, determining whether the excess indication is set [see also step set (8A3)]; where the excess indication is set, discarding the fast packet;
8A3) where the excess indication is unset, determining whether a frame mark indication is set [see step set (8A4)]; where the frame mark indication is set, determining whether the severe congestion indication is set [see step set (8A5)]; where the severe congestion indication is set, discarding the fast packet; 8A4) where the frame mark indication is unset, determining the number of bits (K) in the packet; updating Q such that Q = Q + K; setting a discard priority to Last Discarded; and transmitting the fast packet;
8A5) where the severe congestion level indication is unset, determining the number of bits (K) in the packet; updating Q2 such that Q2 = Q2 + K; setting a discard priority to First Discarded; and transmitting the fast packet;
8A6) where Q ≤ B, unsetting the frame mark indication; determining the number of bits (K) in the packet; updating Q such that Q = Q + K; setting a discard priority to Last Discarded; and transmitting the fast packet;
8A7) where the DE bit is set (typically to 1), bypassing the step of determining whether Q > B in (8A1) above, and otherwise proceeding as set forth in step (8A1); 8A8) where Q2 > B2, setting an excess indication; and discarding the fast packet; and
8A9) where a severe congestion level indication is set, discarding the fast packet; creating an empty packet; tagging (marking) an FPA frame state; setting a discard priority to Last Discarded; and transmitting the fast packet,
8B) further including the step of providing a backwards correlated congestion level (BCCL) state to the leaky bucket monitor/enforcer, and, where selected, further including the step of setting a backward explicit congestion notification (BECN) bit when the backward correlated congestion level is equal to or greater than a predetermined congestion level.
9. The method of claim 6, wherein at least one of 9A-9D:
9A) wherein the step of providing, at a plurality of intermediate nodes, fast packet transmission paths includes at least utilizing a congestion-reducing unit for determining a transit node queue group (TNQG) congestion level for fast packets and for discarding fast packets based on said TNQG congestion level and on said discard priority, and, where selected, wherein the steps for determining a transit node queue group congestion level include sampling a transit queue group for congestion levels such that one of the following sets of steps is executed (in accordance with determinations as set forth below): 9A1) reading (sampling) a queue length from a transit queue; updating an averaged (i.e., filtered) queue length for the transit queue; determining whether the average queue length is greater than a first predetermined threshold1 [see step set (9A2)]; where affirmative, determining whether the average queue length is greater than a second predetermined threshold2 [see step set (9A3)]; where affirmative, determining whether the average queue length is greater than a third predetermined threshold3 [see step set (9A4)]; where affirmative, setting the congestion level for the queue to severe; determining whether all queues in a selected group have been sampled [see step set (9A5)]; and, where affirmative, determining whether the queue group congestion level (CL) is greater than a predetermined maximum congestion level in the queue group [see step set (9A6)];
9A2) in step (9A1) where the average queue length is less than or equal to the threshold1, setting the CL to normal and proceeding to the step of determining whether all queues in the selected group have been sampled in step (9A1); 9A3) in step (9A1) where the average queue length is less than or equal to the threshold2, setting the CL to mild and proceeding to the step of determining whether all queues in the selected group have been sampled in step (9A1);
9A4) in step (9A1) where the average queue length is less than or equal to the threshold3, setting the CL to moderate and proceeding to the step of determining whether all queues in the selected group have been sampled in step (9A1);
9A5) in step (9A1) where at least one queue in the selected group is unsampled, proceeding to the step of reading a queue length from a transit queue in step (9A1); and
9A6) in step (9A1), where the previous value of the queue group congestion level (CL) is less than the greatest congestion level of any queue in the queue group, or if the congestion levels of all queues in the queue group are 'normal', setting the queue group congestion level to the greatest congestion level of any queue in the queue group; otherwise, the queue group CL is unchanged, 9B) wherein the step of providing at least a correlated congestion level and outputting reassembled frames of transmitted fast packets at a realized throughput rate comprises at least the steps of: determining a connection congestion level (CL) state; determining a fast packet adaption (FPA) frame tag state; utilizing the CL state and the FPA frame tag state to determine a correlated congestion level, providing a backward correlated congestion level (BCCL) state to a BCCL signal unit, and providing a forward correlated congestion level (FCCL) to a FCCL state determiner, and where selected (for 9B), further including the step of providing a frame relay forward explicit congestion notification (FECN) to a frame relay FECN marking unit that outputs the reassembled frames of transmitted fast packets,
9C) further including packet discarding and congestion level marking for data packets such that, for each data packet to be transmitted, one of the following sets of steps is followed at an intermediate node (in accordance with determinations as set forth below):
9C1) selecting a packet from a queue of data packets to be transmitted; determining whether an instantaneous queue length is less than a predetermined high discard level [see step set (9C2)]; where affirmative, determining whether the queue group congestion level (CL) is severe, or alternatively, whether the instantaneous queue length is greater than a predetermined low discard level [see step set (9C3)]; where affirmative, determining whether the discard priority is Last Discarded [see step set (9C4)]; where affirmative, determining whether the packet CL is less than the queue group CL [see step set (9C5)]; where affirmative, setting the packet CL equal to the queue group CL; and transmitting the packet;
9C2) in step (9C1 ) where the instantaneous queue length is greater than or equal to the predetermined high discard level, discarding the packet; and proceeding to select another packet from a queue of data packets to be transmitted in step (9C1);
9C3) in step (9C1) where the queue group congestion level (CL) is other than severe and the instantaneous queue length is less than or equal to a predetermined low discard level, proceeding to determining whether the packet CL is less than the queue group CL in step (9C1);
9C4) in step (9C1) where the discard priority is other than Last Discarded, discarding the packet; and proceeding to select another packet from a queue of data packets to be transmitted in step (9C1); and
9C5) in step (9C1) where the packet CL is greater than or equal to the queue group CL, transmitting the packet, and
9D) further including packet discarding and congestion level marking at a voice transmit queue such that, for each voice packet to be transmitted, one of the following sets of steps is followed (in accordance with determinations as set forth below):
9D1) selecting a packet from a queue of voice packets to be transmitted; setting a variable PDP equal to the packet discard priority and a variable IQL to an instantaneous queue length; determining whether IQL is greater than a predetermined voice watermark3 [see step set (9D2)]; where the IQL is less than or equal to the predetermined voice watermark3, determining whether IQL is greater than a predetermined voice watermark2 and PDP is unequal to Last Discarded [see step set (9D3)]; where IQL is less than or equal to the predetermined voice watermark2 or PDP is equal to Last Discarded, determining whether IQL is greater than a predetermined voice watermark1 and PDP equals First Discarded [see step set (9D4)]; where IQL is less than or equal to the predetermined voice watermark1 or PDP is unequal to First Discarded, determining whether the packet CL is less than the queue group CL [see step set (9D5)]; where affirmative, setting the packet CL equal to the queue group CL; and transmitting the packet;
9D2) in step (9D1) where the IQL is greater than the predetermined voice watermark3, discarding the packet; and proceeding to selecting a packet from a queue of voice packets to be transmitted in step (9D1); 9D3) in step (9D1) where the IQL is greater than the predetermined voice watermark2 and PDP is unequal to Last Discarded, discarding the packet; and proceeding to selecting a packet from a queue of voice packets to be transmitted in step (9D1); 9D4) in step (9D1) where the IQL is greater than the predetermined voice watermark1 and PDP equals First Discarded, discarding the packet; and proceeding to selecting a packet from a queue of voice packets to be transmitted in step (9D1); 9D5) in step (9D1) where the packet CL is greater than or equal to the queue group CL, transmitting the packet.
10. The method of claim 6, wherein at least one of 10A-10E: 10A) wherein the step of providing at least a correlated congestion level and outputting reassembled frames of transmitted fast packets further includes steps for updating congestion and tagging status, forward explicit congestion notification (FECN) marking and creation of backward control packet at a destination edge node such that, for each received outbound packet, one of the following sets of steps is followed (in accordance with determinations as set forth below):
10A1) updating the congestion status of the received outbound packet; determining whether the packet is a first packet in the frame [see step set (10A2)]; where affirmative, updating a fast packet adaption tagging status (state); determining whether the forward path is congested and the packet tagged, or alternatively, the forward path is severely congested [see step set (10A3)]; where the forward path is other than congested or the packet is untagged, and the forward path is other than severely congested, determining whether the tagging or congestion status has changed [see step set (10A4)]; where the tagging or congestion status has changed, creating and storing a backward congestion code; storing a current tag and congestion status; setting a counter1 to a predetermined number N1; creating a control packet for a backward direction; and setting a control field for a backward congestion code and setting the control packet discard priority to Last Discarded;
10A2) in step (10A1) where the outbound packet is other than a first packet in the frame, proceeding to the step of determining whether the tagging or congestion status has changed in step (10A1);
10A3) in step (10A1) where the forward path is congested and the packet is tagged, or alternatively, the forward path is severely congested, setting a FECN bit in the frame relay frame;
10A4) in step (10A1) where tagging and congestion status are unchanged, determining whether the packet is the first packet in the frame [see step set (10A5)]; where the packet is the first packet in the frame, determining whether a counter1 is set to zero [see step set (10A6)]; where the counter1 is set to zero, proceeding to the step of setting counter1 to the predetermined number N1 of step (10A1);
10A5) in step (10A4) where the packet is other than the first packet in the frame, ending the status determining steps; and
10A6) in step (10A4) where the counter1 is set to other than zero, setting the counter1 to counter1 - 1, 10B) wherein the step of providing at least a correlated congestion level and outputting reassembled frames of transmitted fast packets further includes steps for updating a tag status when receiving a first packet in a frame at a destination edge node such that, for each first packet in a frame received, one of the following sets of steps is followed (in accordance with determinations as set forth below):
10B1) determining whether the fast packet adaption (FPA) state is marked (tagged) [see step set (10B2)]; where the FPA state is untagged, determining whether a counter2 is set to zero [see step set (10B3)]; where affirmative, setting a current tag status equal to untagged;
10B2) where the FPA is tagged, setting the counter2 equal to a predetermined second number N2; and
10B3) where the counter2 is greater than zero, decrementing counter2, setting the current tag status equal to tagged,
10C) where the step of providing at least a correlated congestion level and outputting reassembled frames of transmitted fast packets further includes steps for creating and storing a backward congestion code at a destination edge node such that, for each packet having a tag or congestion status changed, one of the following sets of steps is followed (in accordance with determinations as set forth below):
10C1) determining whether a congestion level (CL) is one of: normal and mild [see step set (10C2)]; where CL is other than one of normal and mild, determining whether CL is moderate [see step set (10C3)]; where CL is other than one of normal, mild, and moderate, determining whether the tag status (state) is tagged [see step set (10C4)]; where the tag status is tagged, setting a backward congestion code equal to severe; storing the backward congestion code; 10C2) in step (10C1) where the congestion level is equal to one of normal and mild, setting the backward congestion code equal to normal and storing the backward congestion code; 10C3) in step (10C1) where the congestion level is equal to moderate, determining whether the tag status is tagged [see step set (10C5)]; where the tag status is tagged, setting the backward congestion code equal to moderate; and storing the backward congestion code; 10C4) in step (10C1) where the tag status is untagged, setting the backward congestion code equal to moderate; and storing the backward congestion code;
10C5) in step (10C1) where the tag status is untagged, setting the backward congestion code equal to normal; and storing the backward congestion code,
10D) wherein the rate monitoring/enforcing of traffic utilizing a monitor/enforcer further includes steps for receiving a control packet in a backward direction at a source edge node such that for each control packet received, one of the following sets of steps is followed (in accordance with determinations as set forth below):
10D1 ) reading (sampling) a control field of the received control packet; determining whether the control field has changed from a previous backward congestion indication (value)[see step set (10D2)]; where affirmative, determining whether the control field is a normal value [see step set (10D3)]; where the control field is other than normal, determining whether the control field is a moderate value [see step set (10D4)]; where the control field is moderate, setting the control field to a moderate indication;
10D2) in step (10D1) where the backward congestion indication (value) is unchanged, ceasing taking further steps to set the backward congestion indication;
10D3) in step (10D1) where the control field is a normal value, setting the backward congestion indication to a normal indication; and
10D4) in step (10D1) where the control field is other than a normal value and other than a moderate value, setting the backward congestion indication to a severe indication, and
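Steps 10D1-10D4 are edge-triggered: the source updates its backward congestion indication only when the sampled control field differs from the value already held. A sketch under that reading, with illustrative names, assuming the control field takes the same three values as the indication:

```python
class BackwardCongestionMonitor:
    """Backward congestion indication held at the source edge node,
    updated from control packets received in the backward direction."""

    def __init__(self) -> None:
        self.indication = "normal"

    def on_control_packet(self, control_field: str) -> None:
        if control_field == self.indication:
            return                           # 10D2: unchanged, stop
        if control_field == "normal":
            self.indication = "normal"       # 10D3
        elif control_field == "moderate":
            self.indication = "moderate"     # 10D1, moderate branch
        else:
            self.indication = "severe"       # 10D4: everything else
```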
10E) wherein the rate monitoring/enforcing of traffic utilizing a monitor/enforcer further includes steps for receiving a data fast packet in a backward direction and setting a backward explicit congestion notification (BECN) at a source edge node such that, for each data fast packet received in a backward direction, one of the following sets of steps is followed (in accordance with determinations as set forth below):
10E1) determining whether the packet is a first fast packet in its frame [see step set (10E2)]; where the fast packet is a first fast packet in its frame, determining whether a backward congestion indication is equal to normal [see step set (10E3)]; where the backward congestion indication is equal to normal, ceasing taking steps to set the BECN;
10E2) in step (10E1) where the fast packet is other than a first fast packet in its frame, ceasing taking further steps to set the BECN; and
10E3) in step (10E1) where the backward congestion indication is indicated as other than normal, setting the frame relay BECN to a set state.
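Steps 10E1-10E3 gate the frame relay BECN bit: only the first fast packet of a frame is eligible, and the bit is set only when the backward congestion indication is not normal. A minimal sketch, with a dict-style frame header as an illustrative stand-in:

```python
def maybe_set_becn(is_first_packet: bool, indication: str,
                   frame_header: dict) -> None:
    """Set the frame relay BECN on the first fast packet of a frame
    when backward congestion is indicated (steps 10E1-10E3)."""
    if not is_first_packet:
        return                    # 10E2: later packets are left alone
    if indication == "normal":
        return                    # 10E1: no congestion to report
    frame_header["BECN"] = 1      # 10E3: mark the frame's BECN bit
```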
PCT/US1992/010638 1992-01-21 1992-12-10 Rate-based adaptive congestion control system and method for integrated packet networks WO1993014605A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU32741/93A AU650195B2 (en) 1992-01-21 1992-12-10 Rate-based adaptive congestion control system and method for integrated packet networks
JP5512442A JPH06507290A (en) 1992-01-21 1992-12-10 Rate-based adaptive congestion control system and method for integrated packet networks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US823,724 1992-01-21
US07/823,724 US5426640A (en) 1992-01-21 1992-01-21 Rate-based adaptive congestion control system and method for integrated packet networks

Publications (1)

Publication Number Publication Date
WO1993014605A1 true WO1993014605A1 (en) 1993-07-22

Family

ID=25239549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1992/010638 WO1993014605A1 (en) 1992-01-21 1992-12-10 Rate-based adaptive congestion control system and method for integrated packet networks

Country Status (6)

Country Link
US (1) US5426640A (en)
EP (1) EP0576647A4 (en)
JP (1) JPH06507290A (en)
AU (1) AU650195B2 (en)
CA (1) CA2104002C (en)
WO (1) WO1993014605A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995015637A1 (en) * 1993-11-30 1995-06-08 Nokia Telecommunications Oy Control of overload situations in frame relay network
WO1995015636A1 (en) * 1993-11-30 1995-06-08 Nokia Telecommunications Oy Control of overload situations in frame relay network
WO1996029806A2 (en) * 1995-03-21 1996-09-26 Cisco Systems, Inc. Frame based traffic policing for a digital switch
EP0743803A2 (en) * 1995-03-24 1996-11-20 Kabushiki Kaisha Toshiba Method and system for controlling cell transmission rate in ATM network using resource management cell
US5886982A (en) * 1993-08-25 1999-03-23 Hitachi, Ltd. ATM switching system and cell control method
US5923657A (en) * 1994-08-23 1999-07-13 Hitachi, Ltd. ATM switching system and cell control method
KR100341391B1 (en) * 1999-10-22 2002-06-21 오길록 Adaptive added transmission method and packet loss recovery method for interactive audio service, and audio input-output control device in multimedia computer
WO2015015141A1 (en) * 2013-07-31 2015-02-05 British Telecommunications Public Limited Company Fast friendly start for a data flow
US9985899B2 (en) 2013-03-28 2018-05-29 British Telecommunications Public Limited Company Re-marking of packets for queue control
US10469393B1 (en) 2015-08-06 2019-11-05 British Telecommunications Public Limited Company Data packet network
US10645016B2 (en) 2015-08-06 2020-05-05 British Telecommunications Public Limited Company Data packet network

Families Citing this family (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6847611B1 (en) 1990-12-10 2005-01-25 At&T Corp. Traffic management for frame relay switched data service
US6771617B1 (en) 1993-06-17 2004-08-03 Gilat Satellite Networks, Ltd. Frame relay protocol-based multiplex switching scheme for satellite mesh network
US5434850A (en) 1993-06-17 1995-07-18 Skydata Corporation Frame relay protocol-based multiplex switching scheme for satellite
ZA946674B (en) * 1993-09-08 1995-05-02 Qualcomm Inc Method and apparatus for determining the transmission data rate in a multi-user communication system
JP3632229B2 (en) * 1994-12-07 2005-03-23 株式会社日立製作所 ATM switching equipment
US5524006A (en) * 1995-02-15 1996-06-04 Motorola, Inc. Second-order leaky bucket device and method for traffic management in cell relay networks
US5659541A (en) * 1995-07-12 1997-08-19 Lucent Technologies Inc. Reducing delay in packetized voice
US6075768A (en) 1995-11-09 2000-06-13 At&T Corporation Fair bandwidth sharing for video traffic sources using distributed feedback control
US5751969A (en) * 1995-12-04 1998-05-12 Motorola, Inc. Apparatus and methods for predicting and managing congestion in a network
US5777987A (en) * 1995-12-29 1998-07-07 Symbios, Inc. Method and apparatus for using multiple FIFOs to improve flow control and routing in a communications receiver
US5864539A (en) * 1996-05-06 1999-01-26 Bay Networks, Inc. Method and apparatus for a rate-based congestion control in a shared memory switch
US5918182A (en) * 1996-08-30 1999-06-29 Motorola, Inc. Method and apparatus for mitigating data congestion in an integrated voice/data radio communications system
US5805595A (en) 1996-10-23 1998-09-08 Cisco Systems, Inc. System and method for communicating packetized data over a channel bank
US6081524A (en) 1997-07-03 2000-06-27 At&T Corp. Frame relay switched data service
US6424624B1 (en) * 1997-10-16 2002-07-23 Cisco Technology, Inc. Method and system for implementing congestion detection and flow control in high speed digital network
US6421355B1 (en) * 1998-01-20 2002-07-16 Texas Instruments Incorporated Methods and linecard device for allocating multiple timeslots over digital backplane
US6477143B1 (en) 1998-01-25 2002-11-05 Dror Ginossar Method and apparatus for packet network congestion avoidance and control
DE69840321D1 (en) * 1998-02-05 2009-01-22 Alcatel Lucent Cell elimination process
US7145907B1 (en) 1998-03-09 2006-12-05 Siemens Aktiengesellschaft Method for removing ATM cells from an ATM communications device
US6333917B1 (en) * 1998-08-19 2001-12-25 Nortel Networks Limited Method and apparatus for red (random early detection) and enhancements.
US7707600B1 (en) * 1998-08-21 2010-04-27 Intel Corporation Confirming video transmissions
US6195332B1 (en) * 1998-08-28 2001-02-27 3Com Corporation Rate-based flow control protocol on an ethernet-over-ring communications network
US6308214B1 (en) * 1998-09-23 2001-10-23 Inktomi Corporation Self-tuning dataflow I/O core
US6868061B1 (en) * 1998-12-10 2005-03-15 Nokia Corporation System and method for pre-filtering low priority packets at network nodes in a network service class utilizing a priority-based quality of service
DE69939366D1 (en) * 1999-04-12 2008-10-02 Ibm Method and apparatus for improving overall network response time in file exchange between Telnet 3270 server and Telnet 3270 clients
CA2279728A1 (en) 1999-08-06 2001-02-06 Spacebridge Networks Corporation Soft, prioritized early packet discard (spepd) system
US6781956B1 (en) 1999-09-17 2004-08-24 Cisco Technology, Inc. System and method for prioritizing packetized data from a distributed control environment for transmission through a high bandwidth link
US6650652B1 (en) * 1999-10-12 2003-11-18 Cisco Technology, Inc. Optimizing queuing of voice packet flows in a network
US20050018611A1 (en) * 1999-12-01 2005-01-27 International Business Machines Corporation System and method for monitoring performance, analyzing capacity and utilization, and planning capacity for networks and intelligent, network connected processes
US6678244B1 (en) * 2000-01-06 2004-01-13 Cisco Technology, Inc. Congestion management system and method
US6646988B1 (en) * 2000-01-31 2003-11-11 Nortel Networks Limited System, device, and method for allocating excess bandwidth in a differentiated services communication network
US6977930B1 (en) * 2000-02-14 2005-12-20 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
GB2360168B (en) * 2000-03-11 2003-07-16 3Com Corp Network switch including hysteresis in signalling fullness of transmit queues
US6674717B1 (en) * 2000-03-30 2004-01-06 Network Physics, Inc. Method for reducing packet loss and increasing internet flow by feedback control
US7107535B2 (en) * 2000-05-24 2006-09-12 Clickfox, Llc System and method for providing customized web pages
US20020071388A1 (en) * 2000-11-16 2002-06-13 Einar Bergsson Selectable network protocol
WO2002043331A1 (en) * 2000-11-22 2002-05-30 Siemens Aktiengesellschaft Device and method for controlling data traffic in a tcp/ip data transmission network
US7325049B2 (en) * 2000-12-29 2008-01-29 Intel Corporation Alert management messaging
US7177279B2 (en) * 2001-04-24 2007-02-13 Agere Systems Inc. Buffer management for merging packets of virtual circuits
US7551560B1 (en) 2001-04-30 2009-06-23 Opnet Technologies, Inc. Method of reducing packet loss by resonance identification in communication networks
US7072297B2 (en) * 2001-04-30 2006-07-04 Networks Physics, Inc. Method for dynamical identification of network congestion characteristics
US7855966B2 (en) * 2001-07-16 2010-12-21 International Business Machines Corporation Network congestion detection and automatic fallback: methods, systems and program products
US7068601B2 (en) * 2001-07-16 2006-06-27 International Business Machines Corporation Codec with network congestion detection and automatic fallback: methods, systems & program products
US7215639B2 (en) * 2001-08-31 2007-05-08 4198638 Canada Inc. Congestion management for packet routers
US8125902B2 (en) * 2001-09-27 2012-02-28 Hyperchip Inc. Method and system for congestion avoidance in packet switching devices
US7102997B2 (en) * 2002-03-04 2006-09-05 Fujitsu Limited Aggregate rate transparent LAN service for closed user groups over optical rings
US7391785B2 (en) * 2002-09-02 2008-06-24 Motorola, Inc. Method for active queue management with asymmetric congestion control
US7260064B2 (en) * 2002-10-11 2007-08-21 Lucent Technologies Inc. Method and apparatus for performing network routing based on queue lengths
US7280482B2 (en) * 2002-11-01 2007-10-09 Nokia Corporation Dynamic load distribution using local state information
CN100539542C (en) * 2002-12-20 2009-09-09 国际商业机器公司 Maximum life span route in the wireless self-organization network
US7324535B1 (en) * 2003-04-10 2008-01-29 Cisco Technology, Inc. Methods and apparatus for maintaining a queue
US7453798B2 (en) * 2004-02-19 2008-11-18 International Business Machines Corporation Active flow management with hysteresis
JP2006013050A (en) * 2004-06-24 2006-01-12 Sharp Corp Laser beam projection mask, laser processing method using the same and laser processing system
DE102004030631A1 (en) * 2004-06-24 2006-01-19 Infineon Technologies Ag Suppression of bursts caused by burst-like changes in data rate in synchronous radio transmission
US7859996B2 (en) * 2004-10-29 2010-12-28 Broadcom Corporation Intelligent congestion feedback apparatus and method
US7480304B2 (en) * 2004-12-29 2009-01-20 Alcatel Lucent Predictive congestion management in a data communications switch using traffic and system statistics
DE602006011125D1 (en) * 2005-01-31 2010-01-28 British Telecomm CONTROLLING A DATA FLOW IN A NETWORK
US7606159B2 (en) * 2005-08-30 2009-10-20 Cisco Technology, Inc. Method and apparatus for updating best path based on real-time congestion feedback
US20070153683A1 (en) * 2005-12-30 2007-07-05 Mcalpine Gary L Traffic rate control in a network
US20070230369A1 (en) * 2006-03-31 2007-10-04 Mcalpine Gary L Route selection in a network
JP4295292B2 (en) * 2006-04-03 2009-07-15 リコーソフトウエア株式会社 Image transfer method and program storage recording medium
US8140731B2 (en) * 2007-08-27 2012-03-20 International Business Machines Corporation System for data processing using a multi-tiered full-graph interconnect architecture
US7958183B2 (en) * 2007-08-27 2011-06-07 International Business Machines Corporation Performing collective operations using software setup and partial software execution at leaf nodes in a multi-tiered full-graph interconnect architecture
US7958182B2 (en) * 2007-08-27 2011-06-07 International Business Machines Corporation Providing full hardware support of collective operations in a multi-tiered full-graph interconnect architecture
US8185896B2 (en) * 2007-08-27 2012-05-22 International Business Machines Corporation Method for data processing using a multi-tiered full-graph interconnect architecture
US8014387B2 (en) * 2007-08-27 2011-09-06 International Business Machines Corporation Providing a fully non-blocking switch in a supernode of a multi-tiered full-graph interconnect architecture
US7904590B2 (en) * 2007-08-27 2011-03-08 International Business Machines Corporation Routing information through a data processing system implementing a multi-tiered full-graph interconnect architecture
US8108545B2 (en) * 2007-08-27 2012-01-31 International Business Machines Corporation Packet coalescing in virtual channels of a data processing system in a multi-tiered full-graph interconnect architecture
US7921316B2 (en) * 2007-09-11 2011-04-05 International Business Machines Corporation Cluster-wide system clock in a multi-tiered full-graph interconnect architecture
EP2079205A1 (en) * 2008-01-14 2009-07-15 British Telecmmunications public limited campany Network characterisation
US8077602B2 (en) * 2008-02-01 2011-12-13 International Business Machines Corporation Performing dynamic request routing based on broadcast queue depths
IT1390712B1 (en) * 2008-07-09 2011-09-15 Saverio Mascolo IMPLEMENTATION MECHANISM FOR SENDING PACKAGES IN RATE-BASED MODE ON PACKAGE SWITCHED NETWORKS
FR2946820B1 (en) * 2009-06-16 2012-05-11 Canon Kk DATA TRANSMISSION METHOD AND ASSOCIATED DEVICE.
JP5498889B2 (en) * 2010-08-06 2014-05-21 アラクサラネットワークス株式会社 Packet relay apparatus and congestion control method
JP6281327B2 (en) * 2014-03-06 2018-02-21 富士通株式会社 Information processing system, information processing apparatus, switch apparatus, and information processing system control method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3614062A1 (en) * 1986-04-23 1987-10-29 Siemens Ag METHOD FOR FLOW CONTROLLING DATA WITHIN A MESHED NETWORK
US4769810A (en) * 1986-12-31 1988-09-06 American Telephone And Telegraph Company, At&T Bell Laboratories Packet switching system arranged for congestion control through bandwidth management
JPH02220531A (en) * 1989-02-22 1990-09-03 Toshiba Corp Call connection control system and flow monitor system
NL8901171A (en) * 1989-05-10 1990-12-03 At & T & Philips Telecomm METHOD FOR MERGING TWO DATA CELL FLOWS TO ONE DATA CELL FLOW, AND ATD MULTIPLEXER FOR APPLICATION OF THIS METHOD
US5058111A (en) * 1989-06-19 1991-10-15 Oki Electric Industry Co., Ltd. Subscriber line interface circuit in a switching system
FR2648645B1 (en) * 1989-06-20 1991-08-23 Cit Alcatel METHOD AND DEVICE FOR EVALUATING THE THROUGHPUT OF VIRTUAL CIRCUITS EMPLOYING A TIME-MULTIPLEXED TRANSMISSION CHANNEL
US5179557A (en) * 1989-07-04 1993-01-12 Kabushiki Kaisha Toshiba Data packet communication system in which data packet transmittal is prioritized with queues having respective assigned priorities and frequency weighted counting of queue wait time
US5179556A (en) * 1991-08-02 1993-01-12 Washington University Bandwidth management and congestion control scheme for multicast ATM networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4849968A (en) * 1985-09-06 1989-07-18 Washington University Buffer management system
US4769180A (en) * 1986-04-04 1988-09-06 Doryokuro Kakunenryo Kaihatsu Jigyodan Process for separately recovering uranium and hydrofluoric acid from waste liquor containing uranium and fluorine
US4769811A (en) * 1986-12-31 1988-09-06 American Telephone And Telegraph Company, At&T Bell Laboratories Packet switching system arranged for congestion control
US4942569A (en) * 1988-02-29 1990-07-17 Kabushiki Kaisha Toshiba Congestion control method for packet switching apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0576647A4 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5886982A (en) * 1993-08-25 1999-03-23 Hitachi, Ltd. ATM switching system and cell control method
US7095726B2 (en) 1993-08-25 2006-08-22 Hitachi, Ltd. ATM switching system and cell control method
US6389026B1 (en) 1993-08-25 2002-05-14 Hitachi, Ltd. ATM switching system and cell control method
US6252877B1 (en) 1993-08-25 2001-06-26 Hitachi, Ltd. ATM switching system and cell control method
US6021130A (en) * 1993-08-25 2000-02-01 Hitachi, Ltd. ATM switching system and cell control method
AU689518B2 (en) * 1993-11-30 1998-04-02 Nokia Telecommunications Oy Control of overload situations in frame relay network
WO1995015636A1 (en) * 1993-11-30 1995-06-08 Nokia Telecommunications Oy Control of overload situations in frame relay network
AU689517B2 (en) * 1993-11-30 1998-04-02 Nokia Telecommunications Oy Control of overload situations in frame relay network
US5889762A (en) * 1993-11-30 1999-03-30 Nokia Telecommunications Oy Control of overload situations in frame relay network which discards the contents of a virtual-channel-specific buffer when said buffer is full
US5970048A (en) * 1993-11-30 1999-10-19 Nokia Telecommunications Oy Control of overload situations in frame relay network
CN1073317C (en) * 1993-11-30 2001-10-17 诺基亚电信公司 Control of overload situations in frame relay network
WO1995015637A1 (en) * 1993-11-30 1995-06-08 Nokia Telecommunications Oy Control of overload situations in frame relay network
US5923657A (en) * 1994-08-23 1999-07-13 Hitachi, Ltd. ATM switching system and cell control method
WO1996029806A3 (en) * 1995-03-21 1997-02-20 Stratacom Inc Frame based traffic policing for a digital switch
WO1996029806A2 (en) * 1995-03-21 1996-09-26 Cisco Systems, Inc. Frame based traffic policing for a digital switch
EP0743803A3 (en) * 1995-03-24 1998-11-18 Kabushiki Kaisha Toshiba Method and system for controlling cell transmission rate in ATM network using resource management cell
EP0743803A2 (en) * 1995-03-24 1996-11-20 Kabushiki Kaisha Toshiba Method and system for controlling cell transmission rate in ATM network using resource management cell
KR100341391B1 (en) * 1999-10-22 2002-06-21 오길록 Adaptive added transmission method and packet loss recovery method for interactive audio service, and audio input-output control device in multimedia computer
US9985899B2 (en) 2013-03-28 2018-05-29 British Telecommunications Public Limited Company Re-marking of packets for queue control
WO2015015141A1 (en) * 2013-07-31 2015-02-05 British Telecommunications Public Limited Company Fast friendly start for a data flow
CN105432046A (en) * 2013-07-31 2016-03-23 英国电讯有限公司 Fast friendly start for a data flow
EP3107252A1 (en) * 2013-07-31 2016-12-21 BRITISH TELECOMMUNICATIONS public limited company Fast friendly start for a data flow
US9860184B2 (en) 2013-07-31 2018-01-02 British Telecommunications Public Limited Company Fast friendly start for a data flow
CN105432046B (en) * 2013-07-31 2018-09-18 英国电讯有限公司 The quick friendly method, apparatus started and medium for data flow
US10469393B1 (en) 2015-08-06 2019-11-05 British Telecommunications Public Limited Company Data packet network
US10645016B2 (en) 2015-08-06 2020-05-05 British Telecommunications Public Limited Company Data packet network

Also Published As

Publication number Publication date
CA2104002C (en) 1998-09-29
CA2104002A1 (en) 1993-07-22
AU650195B2 (en) 1994-06-09
EP0576647A1 (en) 1994-01-05
AU3274193A (en) 1993-08-03
US5426640A (en) 1995-06-20
EP0576647A4 (en) 1995-04-19
JPH06507290A (en) 1994-08-11

Similar Documents

Publication Publication Date Title
WO1993014605A1 (en) Rate-based adaptive congestion control system and method for integrated packet networks
US6219713B1 (en) Method and apparatus for adjustment of TCP sliding window with information about network conditions
US6219339B1 (en) Method and apparatus for selectively discarding packets
AU703228B2 (en) Data link interface for packet-switched network
US6490251B2 (en) Method and apparatus for communicating congestion information among different protocol layers between networks
EP0719012B1 (en) Traffic management and congestion control for packet-based network
EP1798915B1 (en) Packet forwarding device avoiding packet loss of out of profile packets in the shaper by remarking and redirecting the packet to a lower priority queue
EP0997020A2 (en) Flow control in a telecommunications network
US5956322A (en) Phantom flow control method and apparatus
US5978357A (en) Phantom flow control method and apparatus with improved stability
KR100411447B1 (en) Method of Controlling TCP Congestion
KR100460958B1 (en) Communication system capable of improving data transmission efficiency of TCP in the asymmetric network environment and a method thereof
JP4580111B2 (en) Network system
JP4135007B2 (en) ATM cell transfer device
AU717162B2 (en) Improved phantom flow control method and apparatus
Elloumi et al. Improving RED algorithm performance in ATM networks
WO1998043395A9 (en) Improved phantom flow control method and apparatus
Aweya et al. TCP rate control with dynamic buffer sharing
KR19990048607A (en) Device and control method of cell buffer in ATM switch

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase

Ref document number: 2104002

Country of ref document: CA

Ref document number: 1993901367

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1993901367

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1993901367

Country of ref document: EP