WO1997043869A1 - Method and apparatus for per traffic flow buffer management - Google Patents


Info

Publication number
WO1997043869A1
WO1997043869A1 (PCT/US1997/007839)
Authority
WO
WIPO (PCT)
Prior art keywords
buffer
threshold
cell
connection
common
Prior art date
Application number
PCT/US1997/007839
Other languages
French (fr)
Inventor
David A. Hughes
Daniel E. Klausmeier
Original Assignee
Cisco Technology, Inc.
Priority date
Filing date
Publication date
Application filed by Cisco Technology, Inc. filed Critical Cisco Technology, Inc.
Priority to JP09540952A priority Critical patent/JP2000510308A/en
Priority to EP97924631A priority patent/EP0898855A1/en
Priority to AU30010/97A priority patent/AU730804B2/en
Priority to CA002254104A priority patent/CA2254104A1/en
Publication of WO1997043869A1 publication Critical patent/WO1997043869A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/20Traffic policing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/30Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/32Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/10Packet switching elements characterised by the switching fabric construction
    • H04L49/104Asynchronous transfer mode [ATM] switching fabrics
    • H04L49/105ATM switching elements
    • H04L49/108ATM switching elements using shared central buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5638Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5651Priority, marking, classes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681Buffer or queue management
    • H04L2012/5682Threshold; Watermark

Definitions

  • the present invention relates generally to the field of cell switching network communications and, more specifically, to the efficient management of shared buffer resources within such a network.
  • ATM traffic is switched and multiplexed in fixed length cells and an ATM network typically provides a number of interconnected nodes which are capable of receiving data from other network nodes and forwarding that data through to other network nodes to its ultimate destination.
  • Nodes are interconnected by transmission paths, each of which supports one or more virtual paths.
  • Each virtual path contains one or more virtual channels. Switching can be performed at the transmission path, virtual path or virtual channel level.
  • Network nodes generally employ buffering schemes to prevent contention for switch resources (e.g., ports). In the past, this has included relatively unsophisticated solutions, such as a first-in-first-out (FIFO) queue at each port. This solution quickly leads to cells being dropped indiscriminately when the volume of network traffic is large.
  • Other schemes involve "per connection” buffering where each logical connection (i.e., virtual path, virtual channel) is allocated its own cell memory. When the number of supported connections is large, however, the sum of the maximum buffer requirements for individual connections may drastically exceed the physical available memory.
  • This and other objects of the invention are achieved by an effective method for managing oversubscription by dynamically changing the maximum buffer space allowed for a particular traffic flow or connection in response to the global utilization of a single buffer resource.
  • a buffer utilization threshold for each of a number of various traffic flows is established. As new cells arrive, the global usage of the buffer resource is monitored. As the buffer fills, the individual thresholds for the various traffic flows are dynamically scaled based upon the global usage of the buffer.
  • This method allows guaranteed buffer space for sensitive traffic flows despite the oversubscription. Aggressive buffer allocations are scaled back when necessary, thereby leaving space for traffic flows which are using only a small portion of their allocated buffer space.
  • the present invention in effect maintains isolation between well behaved traffic flows, insuring that only flows which are using a disproportionate amount of memory are blocked from storing further cells or packets in the memory when the global resource usage approaches capacity.
  • the thresholds are coded in mantissa and exponent form so that the scaling is accomplished by adjusting the exponent. This approach allows a minimum of memory to be used to store the flow thresholds and simplifies the mechanism for scaling the thresholds.
    • Figure 1a is a flow diagram illustrating the initialization of various parameters according to one embodiment;
    • Figure 1b is a flow diagram illustrating dynamic per traffic flow buffer management according to one embodiment;
    • Figure 1c is a flow diagram illustrating a cell service routine according to one embodiment;
  • Figure 2 is a plot which graphically illustrates the number of cells stored in a common buffer by each of a number of traffic flows;
  • Figure 3 is a plot similar to the plot shown in Figure 2 that further shows a number of buffer thresholds corresponding to the various traffic flows sharing the common buffer;
  • Figure 4 is a plot similar to the plot shown in Figure 3 and shows the common buffer utilization at a later time
  • Figure 5 is a plot similar to the plot shown in Figure 4 and illustrates the effect of dynamic buffer threshold scaling for one traffic flow according to one embodiment
  • Figure 6 is a plot similar to the plot shown in Figure 4 and illustrates the effects of dynamic buffer threshold scaling for a different traffic flow according to one embodiment
  • An improved method and apparatus to efficiently manage a common communications buffer resource shared by a large number of traffic flows, such as a cell memory shared by ATM virtual channels or paths, is described.
  • oversubscription of a shared buffer resource is managed by dynamically changing the maximum buffer space allowed for each traffic flow in response to the global utilization of a single shared buffer resource.
  • each node is interconnected to other network nodes by a variety of transmission paths.
  • the apparent capacity of these transmission paths is increased using virtual connections.
  • each node connects a source-destination pair only when information, in the form of a cell, is present.
  • Cells are packets of fixed length and comprise both flow control (i.e., cell header) and payload information.
  • any or each of the nodes in a telecommunications network may comprise a cell memory or buffer which is available to a number of traffic flows.
  • These buffers may exist at various levels, for example, at the port level, the card level (where a single card supports multiple ports), the switch level, the class of service level, etc.
  • the term "per flow" is meant to include any or all situations where a single buffer resource is shared by a number of traffic flows, regardless of the level on which the sharing may occur.
  • an initialization procedure begins at step 10.
  • a shared buffer is initialized and a buffer count and flow cell counts are reset to zero. The use of these counts is described below.
  • Figure 1b illustrates the operation of dynamic threshold scaling for a preferred embodiment.
  • the corresponding traffic flow is determined from the cell header information at step 16.
  • the default threshold and buffer count for that flow are then retrieved from memory at step 18.
  • the default threshold can be thought of as representing the maximum amount of buffer resources that a flow may use if no other flows are currently using the buffer.
  • the default threshold represents the maximum number of cells a given flow may store in the buffer under the "ideal" condition where no other traffic flows or resources are using the buffer.
  • These thresholds may be determined based on factors such as total available buffer size, customer requirements, traffic type, etc.
  • the flow cell count represents the number of cells corresponding to the particular traffic flow of interest which are already stored in the buffer.
  • the total buffer utilization is determined. That is, the total number of cells from all traffic flows which are stored in the buffer is determined.
  • a scaling factor for the flow threshold is retrieved from a lookup table stored in memory at step 22.
  • Dynamic Threshi = the dynamic threshold for the ith traffic flow;
  • Threshi = the default threshold for the ith traffic flow; and
  • SFi = the scaling factor for the ith traffic flow according to the global buffer utilization.
  • a comparison is made to determine if the number of cells corresponding to the traffic flow of interest already stored in the buffer exceeds the dynamic threshold for that flow. If so, the process moves to step 28 and the new cell is dropped. Otherwise, the process moves to step 30 where the new cell is admitted and the buffer count for the flow of interest and the global buffer count are incremented.
  • FIG. 2 a graph depicting the common usage of a single buffer resource by a number of traffic flows is shown.
  • the horizontal axis of the graph of Figure 2 shows the traffic flows which are sharing the buffer.
  • Figure 2 shows only five flows sharing the single buffer, those skilled in the art will appreciate that this is for purposes of clarity and simplicity only and that the buffer management methods of the present invention are equally applicable to situations where any number of traffic flows share a single common buffer.
  • the vertical axis of the graph shown in Figure 2 is a count of the number of cells stored in the buffer by each flow. For the example shown in Figure 2, traffic flow 1 has 500 cells stored, traffic flow 2 has 1250 cells stored, traffic flow 3 has 750 cells stored, traffic flow 4 has 650 cells stored and traffic flow 5 has 1000 cells stored. Thus, for the example shown in Figure 2, a total of 4150 cells are stored in the shared buffer.
  • FIG. 3 further illustrates the example begun in Figure 2.
  • Each flow has the same number of cells stored in the shared buffer as in Figure 2.
  • a number of default thresholds are shown.
  • Each default threshold corresponds to a respective one of the traffic flows 1 through 5.
  • Thresh1 is set at 3000 cells;
  • Thresh2 is set at 9000 cells;
  • Thresh3 is set at 4000 cells;
  • Thresh4 is set at 5000 cells; and
  • Thresh5 is set at 7000 cells.
  • the default thresholds represent the maximum amount of buffer resources that each particular flow may use if no other connections are currently using the buffer.
  • the thresholds have been determined based on factors such as total available buffer size, customer requirements, traffic type, etc.
  • In FIG. 4, the graph of buffer utilization for the shared buffer is shown at a later point in time than was depicted in Figures 2 and 3.
  • flows 2 and 4 have added a number of cells to the common buffer.
  • Traffic flow 2 now has 5,500 cells stored in the buffer and traffic flow 4 has 1250 cells stored in the buffer.
  • Flows 1, 3 and 5 have neither added nor removed cells from the buffer.
  • a total of 9000 cells are stored in the buffer for the instant of time shown in Figure 4.
  • the common buffer is capable of storing a maximum of 10,000 cells total, for the example depicted in Figure 4 the buffer is at 90% capacity.
  • the new cell's corresponding traffic flow information is determined.
  • the default threshold and buffer count for flow 2 are retrieved from memory.
  • Thresh2 is 9000 cells and the global buffer count is 9000 cells (i.e., 90% of capacity).
  • the appropriate scaling factor for the flow 2 threshold is retrieved from a lookup table stored in memory. For this example, suppose the flow 2 threshold is to be scaled back to one-half of its default value when global buffer utilization reaches 90% of capacity (the very situation depicted in Figure 4).
  • the default threshold (3000 cells) and current buffer count (500 cells) for flow 1 are retrieved from memory.
  • the appropriate scaling factor for the flow 1 threshold is retrieved from the lookup table stored in memory. For this example, suppose that like flow 2, the flow 1 threshold is to be scaled back to one-half of its default value when global buffer utilization reaches 90% of capacity.
  • flow 1 is only storing 500 cells in the common buffer. This is less than the number of cells permitted by the dynamically scaled threshold for this flow. As a result, the new cell is admitted to the buffer and the global buffer count and flow 1 buffer count are incremented.
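The two admission decisions just described reduce to simple arithmetic. The short sketch below reproduces them, assuming (as in the example above) a scaling factor of one-half once global utilization reaches 90%:

```python
# Worked example from the text: the global buffer holds 9000 of 10000 cells
# (90% utilization), and the example scaling factor for both flows is 1/2.

def dynamic_threshold(default_threshold, scaling_factor):
    """Dynamic Threshi = Threshi * SFi (the formula given in the text)."""
    return default_threshold * scaling_factor

def admit(flow_cell_count, default_threshold, scaling_factor):
    """A new cell is admitted only if the flow's stored-cell count does not
    exceed its dynamically scaled threshold."""
    return flow_cell_count <= dynamic_threshold(default_threshold, scaling_factor)

# Flow 2: Thresh2 = 9000, already storing 5500 cells.
# Scaled threshold = 9000 * 1/2 = 4500, so the new cell is dropped.
print(admit(5500, 9000, 0.5))  # False -> drop

# Flow 1: Thresh1 = 3000, already storing 500 cells.
# Scaled threshold = 3000 * 1/2 = 1500, so the new cell is admitted.
print(admit(500, 3000, 0.5))   # True -> admit
```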
  • scaling Class 1 might be used for UBR traffic
  • Class 2 might be for ABR traffic
  • Classes 3 and 4 used for more sensitive traffic such as VBR and CBR.
  • Table 1 shows some exemplary settings, although it will be appreciated that other scaling factors could be used.
  • the per flow thresholds for ABR and UBR traffic are likely to be set aggressively high as these classes can tolerate scaling back early.
  • Other traffic types (such as CBR and VBR) would generally have smaller per flow thresholds but would be more sensitive to scaling back.
  • the 1 % increments, the starting value of 90%, and the scaling fractions are all examples only.
  • the contents of the table are, in general, configurable. For example, to provide a safety margin for CBR and VBR queues, it may be desirable to move the scale table lower, that is, replacing the 9X% with 8X% or 7X%. Also, the scaling factors can be made user selectable based on network conditions and/or requirements.
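The configurable scale table described above might be modeled as a lookup keyed by scaling class and global utilization percentage. The classes, 1% steps, starting points, and fractions below are purely illustrative stand-ins (Table 1 itself is not reproduced in this text):

```python
# Hypothetical scale table: maps (scaling class, global utilization %) to a
# scaling factor. The class assignments, the 1% increments starting at 90%,
# and the fractions are examples only -- the text notes all are configurable.
SCALE_TABLE = {
    # Class 1 (e.g., UBR): set aggressively high, scaled back earliest.
    1: {90: 1/2, 91: 1/4, 92: 1/8, 93: 1/16, 94: 0.0},
    # Class 2 (e.g., ABR): also tolerant of early scale-back.
    2: {92: 1/2, 93: 1/4, 94: 1/8, 95: 1/16, 96: 0.0},
    # Classes 3 and 4 (e.g., VBR and CBR): scaled back only near capacity.
    3: {96: 1/2, 97: 1/4, 98: 1/8},
    4: {98: 1/2, 99: 1/4},
}

def scaling_factor(scale_class, utilization_pct):
    """Return the scaling factor for a flow's class at the given global
    buffer utilization; 1.0 (no scaling) below the first table entry."""
    table = SCALE_TABLE[scale_class]
    factor = 1.0
    for pct in sorted(table):
        if utilization_pct >= pct:
            factor = table[pct]
    return factor

print(scaling_factor(1, 89))  # 1.0 -> no scaling below 90%
print(scaling_factor(1, 90))  # 0.5 -> UBR-style class halved at 90%
```

Moving the table lower (e.g., starting at 80%) would implement the safety margin for CBR and VBR queues that the text mentions.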
  • the thresholds are preferably stored in a format having a 4-bit mantissa (M) and a common 4-bit exponent (E).
  • the dynamic threshold scaling method has several performance advantages over existing techniques.
  • the method is scalable to a large number of traffic flows and for a large number of per flow queues.
  • Sensitive traffic flows can be isolated from “memory hogs”.
  • the method further ensures "fair" allocation of resources between flows in the same scale and class. Note that "fair” does not necessarily mean equal (at 90% buffer utilization, flow 2 was permitted to store 4500 cells while flow 1 was only allowed 1500), rather, resource allocation may be determined by individual customer needs.
  • Dynamic scaling further allows preferential treatments of groups of traffic flows via the selection of scaling classes. Global resource overflows are avoided and, hence, the performance degradation that accompanies these events is avoided.
  • CLP = cell loss priority.
  • each node in a network maintains information regarding each traffic flow (e.g., VP and/or VC) it supports. To implement per flow dynamic scaling management options, additional information would be maintained by these nodes. Then, for each cell, the cell header is used to generate a flow indicator that indexes a lookup table that contains information regarding the traffic flow of interest.
  • the lookup table may be stored in a memory associated with the network node of interest and would store a number of thresholds which could be dynamically scaled.
  • the CLP thresholds can be dynamically scaled according to the procedure described above.
  • logic associated with the node containing the common buffer would keep track of end-of-frame (EOF) indicators in arriving cells. In this way, frames could be distinguished.
  • Various state information determined from the EOF indicators and a dynamically scaled early packet discard threshold could then be used to trigger frame discarding.
  • Per flow buffer management can also be used to set the EFCI bit in cell headers to allow for the use of other congestion management processes.
  • the EFCI threshold is checked as cells are serviced. If the buffer count for the traffic flow of interest is greater than the EFCI threshold for that flow, the EFCI bit in the cell header is set. Again, the EFCI threshold can be dynamically scaled according to the above-described process.
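The mantissa-and-exponent threshold coding mentioned a few items above can be sketched as follows. The 4-bit field widths come from the text; the decode rule M × 2^E, and halving a threshold by decrementing the exponent, are assumed interpretations of that coding:

```python
def decode_threshold(mantissa, exponent):
    """Decode a threshold stored as a 4-bit mantissa M and 4-bit exponent E.
    The value M * 2**E is an assumed interpretation of the M/E coding."""
    assert 0 <= mantissa < 16 and 0 <= exponent < 16
    return mantissa * (2 ** exponent)

def scale_by_exponent(mantissa, exponent, shift):
    """Scale a threshold down by a power of two simply by reducing the
    exponent -- the simplification the text attributes to this coding."""
    return mantissa, max(exponent - shift, 0)

# A threshold of 9 * 2**10 = 9216 cells, halved by decrementing the exponent.
m, e = 9, 10
print(decode_threshold(m, e))       # 9216
m2, e2 = scale_by_exponent(m, e, 1)
print(decode_threshold(m2, e2))     # 4608
```

Because many flows can share one common exponent, scaling a whole class of thresholds requires touching only a single stored value.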

Abstract

A method of managing oversubscription of a common buffer resource shared by a number of traffic flows in a cell switching network in response to the utilization of the common buffer resource. A buffer utilization threshold is established for each of the traffic flows. As new cells arrive, the global usage of the buffer resource is monitored. As the buffer utilization increases, the thresholds for each of the traffic flows are dynamically adjusted based upon the global usage of the buffer. Aggressive buffer allocations are scaled back when necessary, thereby leaving space for traffic flows which are relatively empty. In one embodiment, the thresholds are coded in mantissa and exponent form so that the scaling is accomplished by adjusting the exponent value.

Description

METHOD AND APPARATUS FOR PER TRAFFIC FLOW BUFFER MANAGEMENT
FIELD OF THE INVENTION
The present invention relates generally to the field of cell switching network communications and, more specifically, to the efficient management of shared buffer resources within such a network.
BACKGROUND
The desire to integrate data, voice, image and video over high speed digital trunks has led to the development of a packet switching technique called cell relay or asynchronous transfer mode (ATM). ATM traffic is switched and multiplexed in fixed length cells and an ATM network typically provides a number of interconnected nodes which are capable of receiving data from other network nodes and forwarding that data through to other network nodes to its ultimate destination. Nodes are interconnected by transmission paths, each of which supports one or more virtual paths. Each virtual path contains one or more virtual channels. Switching can be performed at the transmission path, virtual path or virtual channel level.
Network nodes generally employ buffering schemes to prevent contention for switch resources (e.g., ports). In the past, this has included relatively unsophisticated solutions, such as a first-in-first-out (FIFO) queue at each port. This solution quickly leads to cells being dropped indiscriminately when the volume of network traffic is large. Other schemes involve "per connection" buffering where each logical connection (i.e., virtual path, virtual channel) is allocated its own cell memory. When the number of supported connections is large, however, the sum of the maximum buffer requirements for individual connections may drastically exceed the physical available memory.
If one large buffer resource is to be shared among a number of connections, then some form of buffer management must be employed. In the past, one solution has been to divide the buffer into a number of queues of fixed length and "hard allocate" capacity for each connection. The problem with this solution is that the fixed length queues offer no flexibility depending upon network traffic conditions. In addition, because of size and cost constraints, each queue would have to remain relatively small as a single switch may support thousands of logical connections. Those network connections with significant amounts of traffic would likely soon fill up their allotted queue and cell dropping would soon result. Another solution has been to oversubscribe the single memory resource and allow each connection to buffer up to a fixed maximum, but where the sum of all the connection maxima exceeds the memory capacity. This alternative relies on the fact that all connections are unlikely to require their maximum buffer space at the same time. Although this condition is true most of the time, it is inevitable that contention for buffer space will result at some point. Once contention does result, cells are dropped indiscriminately, i.e., without regard for whether a connection is already using a significant amount of buffer space or not. A third solution has been to reserve a minimum buffer allocation for each connection with the unallocated space available on a first-come-first-served basis. This allows each connection a guaranteed minimum buffer space. The problem with this solution is that where the number of logical connections runs into the thousands, a very large (i.e., expensive) common buffer is required for any reasonable minimum.
None of the buffer management schemes of the prior art have satisfactorily addressed the problem of per connection buffer management for large numbers of connections. Hence, it would be desirable to have a mechanism for effectively managing the oversubscription of a shared buffer resource.
SUMMARY AND OBJECTS OF THE INVENTION
It is therefore an object of the present invention to provide an improved method for managing the oversubscription of a common communications resource shared by a large number of traffic flows, such as ATM connections.
It is a further object of the present invention to provide an efficient method of buffer management at the connection level of a cell switching data communication network so as to minimize the occurrence of resource overflow conditions.
This and other objects of the invention are achieved by an effective method for managing oversubscription by dynamically changing the maximum buffer space allowed for a particular traffic flow or connection in response to the global utilization of a single buffer resource. A buffer utilization threshold for each of a number of various traffic flows is established. As new cells arrive, the global usage of the buffer resource is monitored. As the buffer fills, the individual thresholds for the various traffic flows are dynamically scaled based upon the global usage of the buffer. This method allows guaranteed buffer space for sensitive traffic flows despite the oversubscription. Aggressive buffer allocations are scaled back when necessary, thereby leaving space for traffic flows which are using only a small portion of their allocated buffer space. The present invention in effect maintains isolation between well behaved traffic flows, insuring that only flows which are using a disproportionate amount of memory are blocked from storing further cells or packets in the memory when the global resource usage approaches capacity.
In one embodiment, the thresholds are coded in mantissa and exponent form so that the scaling is accomplished by adjusting the exponent. This approach allows a minimum of memory to be used to store the flow thresholds and simplifies the mechanism for scaling the thresholds.
Other objects, features and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
Figure 1a is a flow diagram illustrating the initialization of various parameters according to one embodiment;
Figure 1b is a flow diagram illustrating dynamic per traffic flow buffer management according to one embodiment;
Figure 1c is a flow diagram illustrating a cell service routine according to one embodiment;
Figure 2 is a plot which graphically illustrates the number of cells stored in a common buffer by each of a number of traffic flows;
Figure 3 is a plot similar to the plot shown in Figure 2 that further shows a number of buffer thresholds corresponding to the various traffic flows sharing the common buffer;
Figure 4 is a plot similar to the plot shown in Figure 3 and shows the common buffer utilization at a later time; Figure 5 is a plot similar to the plot shown in Figure 4 and illustrates the effect of dynamic buffer threshold scaling for one traffic flow according to one embodiment; and
Figure 6 is a plot similar to the plot shown in Figure 4 and illustrates the effects of dynamic buffer threshold scaling for a different traffic flow according to one embodiment
DETAILED DESCRIPTION
An improved method and apparatus to efficiently manage a common communications buffer resource shared by a large number of traffic flows, such as a cell memory shared by ATM virtual channels or paths, is described. According to one embodiment, oversubscription of a shared buffer resource is managed by dynamically changing the maximum buffer space allowed for each traffic flow in response to the global utilization of a single shared buffer resource.
Upon review of this specification, those skilled in the art will appreciate that the methods and apparatus to be described are applicable at a number of levels. For example, the methods can be employed at the "per logical connection" level or at the "per quality of service" level, among others. To account for the numerous levels at which the present invention is applicable, the term "traffic flow" is used throughout this specification. Those skilled in the art will appreciate that this term describes the general nature of the levels at which the present invention is applicable. A presently preferred embodiment utilizes the invention at the per logical connection level in managing common buffer resources in ATM network nodes. In this particular case, a traffic flow is associated with the transport of cells on a single logical connection. The particular nature of this embodiment should not, however, be seen as limiting the more general nature and scope of the present invention as set forth in the appended claims.
In a fully integrated voice and data telecommunications network, a variety of switching nodes will be present. Each node is interconnected to other network nodes by a variety of transmission paths. The apparent capacity of these transmission paths is increased using virtual connections. In other words, rather than committing specific resources to a given source-destination pair, each node connects a source-destination pair only when information, in the form of a cell, is present. When cells are not being created for a given source-destination pair, the same network resources are used to transmit cells for other source-destination pairs. Cells are packets of fixed length and comprise both flow control (i.e., cell header) and payload information.
Any or each of the nodes in a telecommunications network may comprise a cell memory or buffer which is available to a number of traffic flows. These buffers may exist at various levels, for example, at the port level, the card level (where a single card supports multiple ports), the switch level, the class of service level, etc. As used hereafter, the term "per flow" is meant to include any or all situations where a single buffer resource is shared by a number of traffic flows, regardless of the level on which the sharing may occur.
The basic operation for per flow buffer control according to the present invention is described with reference to Figures 1a-1c. As shown in Figure 1a, an initialization procedure begins at step 10. At step 12, a shared buffer is initialized and a buffer count and flow cell counts are reset to zero. The use of these counts is described below.
Figure 1b illustrates the operation of dynamic threshold scaling for a preferred embodiment. As a new cell arrives at step 14, the corresponding traffic flow is determined from the cell header information at step 16. The default threshold and buffer count for that flow are then retrieved from memory at step 18. The default threshold can be thought of as representing the maximum amount of buffer resources that a flow may use if no other flows are currently using the buffer. In other words, the default threshold represents the maximum number of cells a given flow may store in the buffer under the "ideal" condition where no other traffic flows or resources are using the buffer. These thresholds may be determined based on factors such as total available buffer size, customer requirements, traffic type, etc. The flow cell count represents the number of cells corresponding to the particular traffic flow of interest which are already stored in the buffer. At step 20, the total buffer utilization is determined. That is, the total number of cells from all traffic flows which are stored in the buffer is determined. Using the global buffer utilization from step 20 and the flow identification from step 16 as indices, a scaling factor for the flow threshold is retrieved from a lookup table stored in memory at step 22. The scaling factor is used to calculate the flow dynamic threshold in step 24 according to the following formula: Dynamic Threshi = Threshi * SFi, where:
Dynamic Thresh_i = the dynamic threshold for the i-th traffic flow; Thresh_i = the default threshold for the i-th traffic flow; and SF_i = the scaling factor for the i-th traffic flow according to the global buffer utilization.
At step 26 a comparison is made to determine if the number of cells corresponding to the traffic flow of interest already stored in the buffer exceeds the dynamic threshold for that flow. If so, the process moves to step 28 and the new cell is dropped. Otherwise, the process moves to step 30 where the new cell is admitted and the buffer count for the flow of interest and the global buffer count are incremented.
As shown in Figure 1c, when a cell departs the buffer, the corresponding flow cell count and the global buffer count are decremented. In this way, current buffer counts are maintained at both the flow and global levels. The process 100 of Figure 1 is further described in detail with reference to Figures 2-6, below.
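The arrival and departure procedures of Figures 1b and 1c can be sketched as follows. This is an illustrative model rather than the patented switch implementation: the class name, the single scale table shared by all flows, and the table format (a list of (utilization bound, factor) pairs) are assumptions made for the sketch.

```python
class SharedBuffer:
    """Illustrative per-flow dynamic threshold scaling over one common buffer."""

    def __init__(self, capacity, default_thresholds, scale_table):
        self.capacity = capacity                  # maximum cells in the common buffer
        self.default = dict(default_thresholds)   # per-flow default thresholds (step 18)
        self.scale_table = scale_table            # (utilization bound, scaling factor) pairs
        self.flow_count = {f: 0 for f in self.default}  # cells stored per flow
        self.global_count = 0                     # cells stored by all flows

    def scale_factor(self):
        """Step 22: look up the scaling factor for the current global utilization."""
        utilization = self.global_count / self.capacity
        factor = 1.0
        for bound, sf in self.scale_table:        # last bound reached wins
            if utilization >= bound:
                factor = sf
        return factor

    def cell_arrival(self, flow):
        """Steps 14-30: admit the new cell only if the flow is under its
        dynamically scaled threshold; otherwise drop it."""
        dynamic = self.default[flow] * self.scale_factor()   # step 24
        if self.flow_count[flow] >= dynamic:      # step 26
            return False                          # step 28: drop
        self.flow_count[flow] += 1                # step 30: admit and count
        self.global_count += 1
        return True

    def cell_departure(self, flow):
        """Figure 1c: decrement both counts as a cell leaves the buffer."""
        self.flow_count[flow] -= 1
        self.global_count -= 1
```

With the default thresholds of Figure 3, the occupancy of Figure 4 (9000 of 10,000 cells) and a factor of 1/2 at 90% utilization, this sketch reproduces the worked examples below: a new flow-2 cell is dropped (5500 >= 4500) while a new flow-1 cell is admitted (500 < 1500).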
Referring to Figure 2, a graph depicting the common usage of a single buffer resource by a number of traffic flows is shown. The horizontal axis of the graph of Figure 2 shows the traffic flows which are sharing the buffer. Although Figure 2 shows only five flows sharing the single buffer, those skilled in the art will appreciate that this is for purposes of clarity and simplicity only and that the buffer management methods of the present invention are equally applicable to situations where any number of traffic flows share a single common buffer. The vertical axis of the graph shown in Figure 2 is a count of the number of cells stored in the buffer by each flow. For the example shown in Figure 2, traffic flow 1 has 500 cells stored, traffic flow 2 has 1250 cells stored, traffic flow 3 has 750 cells stored, traffic flow 4 has 650 cells stored and traffic flow 5 has 1000 cells stored. Thus, for the example shown in Figure 2, a total of 4150 cells are stored in the shared buffer.
Figure 3 further illustrates the example begun in Figure 2. Each flow has the same number of cells stored in the shared buffer as in Figure 2. In Figure 3, however, a number of default thresholds are shown. Each default threshold (Thresh1 through Thresh5) corresponds to a respective one of the traffic flows 1 through 5. For the example shown in Figure 3, Thresh1 is set at 3000 cells, Thresh2 is set at 9000 cells, Thresh3 is set at 4000 cells, Thresh4 is set at 5000 cells and Thresh5 is set at 7000 cells. As indicated above, the default thresholds represent the maximum amount of buffer resources that each particular flow may use if no other connections are currently using the buffer. The thresholds have been determined based on factors such as total available buffer size, customer requirements, traffic type, etc.
Referring now to Figure 4, the graph of buffer utilization for the shared buffer is shown at a later point in time than was depicted in Figures 2 and 3. In the situation depicted in Figure 4, flows 2 and 4 have added a number of cells to the common buffer. Traffic flow 2 now has 5,500 cells stored in the buffer and traffic flow 4 has 1250 cells stored in the buffer. Flows 1, 3 and 5 have neither added nor removed cells from the buffer. Thus, a total of 9000 cells are stored in the buffer for the instant of time shown in Figure 4. If the common buffer is capable of storing a maximum of 10,000 cells total, for the example depicted in Figure 4 the buffer is at 90% capacity.
Suppose now a new cell arrives. In accordance with the methods of the present invention, the new cell's corresponding traffic flow information is determined. For this example, suppose the new cell is associated with flow 2. The default threshold and buffer count for flow 2 are retrieved from memory. As shown in Figure 4, Thresh2 is 9000 cells and the global buffer count is 9000 cells (i.e., 90% of capacity). Using the global buffer utilization, the appropriate scaling factor for the flow 2 threshold is retrieved from a lookup table stored in memory. For this example, suppose the flow 2 threshold is to be scaled back to one-half of its default value when global buffer utilization reaches 90% of capacity (the very situation depicted in Figure 4). Thus, the Dynamic Thresh2 based on the current buffer utilization is 1/2 * 9000 = 4500 cells. This is graphically illustrated in Figure 5.
The decision on whether to admit the new cell for flow 2 is now based on the dynamically scaled threshold for flow 2. As shown in Figure 5, flow 2 is already storing 5500 cells in the common buffer. This exceeds the dynamically scaled threshold for this flow (which is 4500 cells for the depicted buffer utilization condition). As a result, the new cell is dropped.
Suppose now a new cell corresponding to traffic flow 1 arrives. The buffer utilization has not changed from the situation depicted in Figures 4 and 5. Flow 1 still stores 500 cells, flow 2 is storing 5500 cells, flow 3 is storing 750 cells, flow 4 is storing 1250 cells and flow 5 is storing 1000 cells. Thus, buffer utilization remains at 90% of capacity. Also, recall that the default threshold for flow 1 is 3000 cells, as shown in Figure 3.
As the new cell for flow 1 arrives, the default threshold (3000 cells) and current buffer count (500 cells) for flow 1 are retrieved from memory. Using the global buffer utilization (90%), the appropriate scaling factor for the flow 1 threshold is retrieved from the lookup table stored in memory. For this example, suppose that like flow 2, the flow 1 threshold is to be scaled back to one-half of its default value when global buffer utilization reaches 90% of capacity. Thus, the Dynamic Thresh1 based on the current buffer utilization is 1/2 * 3000 = 1500 cells. This is graphically illustrated in Figure 6.
The decision on whether to admit the new cell for flow 1 is now based on the dynamically scaled threshold for flow 1. As shown in Figure 6, flow 1 is only storing 500 cells in the common buffer. This is less than the number of cells permitted by the dynamically scaled threshold for this flow. As a result, the new cell is admitted to the buffer and the global buffer count and flow 1 buffer count are incremented.
The above examples illustrate how dynamic scaling penalizes only those flows which are using significant portions of their allocated buffer space. Flow 2 was using a significant portion of its allocated capacity (5500 out of 9000). As the buffer reached the 90% full level, dynamic scaling was employed and flow 2 was not permitted to store any more cells. Under these conditions, flow 2 would not be allowed to store any more cells until global buffer utilization had declined. On the other hand, given the same buffer utilization (90% of capacity), flow 1, which was storing only 500 cells (1/6 of its configured maximum), was permitted to store another cell. Note also that although flow 2 was using a significant amount of buffer space, dynamic scaling only affected newly arriving cells. That is, although flow 2 was already storing more cells (5500) than would otherwise be permitted according to the dynamically scaled threshold (4500), no previously stored cells were discarded.
It will be appreciated that different scaling tables can be provided for different traffic flows. For instance, scaling Class 1 might be used for UBR traffic, Class 2 might be for ABR traffic and Classes 3 and 4 used for more sensitive traffic such as VBR and CBR. Table 1 shows some exemplary settings, although it will be appreciated that other scaling factors could be used. The per flow thresholds for ABR and UBR traffic are likely to be set aggressively high as these classes can tolerate scaling back early. Other traffic types (such as CBR and VBR) would generally have smaller per flow thresholds but would be more sensitive to scaling back.
Table 1

[Table 1 appears in the original only as an image (imgf000011_0001). It tabulates, for each scaling class, the scaling factor applied at each level of global buffer utilization; as discussed below, the factors are binary fractions (1, 1/2, 1/4, 1/8, ...) applied in 1% increments of utilization starting at 90%.]
The 1% increments, the starting value of 90%, and the scaling fractions are all examples only. The contents of the table are, in general, configurable. For example, to provide a safety margin for CBR and VBR queues, it may be desirable to move the scale table lower, that is, replacing the 9X% with 8X% or 7X%. Also, the scaling factors can be made user selectable based on network conditions and/or requirements.
Limiting the scaling factors to binary fractions can drastically simplify the implementation. In such an embodiment, the thresholds are preferably stored in a format having a 4-bit mantissa (M) and a common 4-bit exponent (E). The linear threshold (T) is calculated as T = M * 2^E. Thus, the scaling can be easily achieved by adjusting the exponent such that T = M * 2^(E-A), where A is obtained from Table 1 (in other words, Table 1 would actually store the adjustment A = {0, 1, 2, 3, ...} rather than the fraction {1, 1/2, 1/4, 1/8}).
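The exponent trick above can be sketched in a few lines; the function name is illustrative, and the field widths follow the 4-bit mantissa and exponent mentioned in the text.

```python
def linear_threshold(mantissa, exponent, adjust=0):
    """T = M * 2**(E - A), where the scale table stores the exponent
    adjustment A (0, 1, 2, 3, ...) instead of the binary fraction
    (1, 1/2, 1/4, 1/8, ...). Scaling never touches the mantissa."""
    assert 0 <= mantissa < 16 and 0 <= exponent < 16  # 4-bit fields
    return mantissa * 2 ** (exponent - adjust)
```

For example, a threshold stored as M = 9, E = 10 has linear value 9 * 1024 = 9216 cells; an adjustment A = 1 halves it to 4608 by a single subtraction on the exponent.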
It will be appreciated that the dynamic threshold scaling method has several performance advantages over existing techniques. For example, the method is scalable to a large number of traffic flows and for a large number of per flow queues. Sensitive traffic flows can be isolated from "memory hogs". The method further ensures "fair" allocation of resources between flows in the same scaling class. Note that "fair" does not necessarily mean equal (at 90% buffer utilization, flow 2 was permitted to store 4500 cells while flow 1 was only allowed 1500), rather, resource allocation may be determined by individual customer needs. Dynamic scaling further allows preferential treatment of groups of traffic flows via the selection of scaling classes. Global resource overflows are avoided and, hence, the performance degradation that accompanies these events is avoided.
Storing the thresholds associated with a flow in the form of separate mantissas with a shared exponent drastically reduces the memory which would otherwise be required to store these thresholds. A conventional approach would require 20 bits per threshold per flow. The preferred method, however, requires just 20 bits to store all of a flow's thresholds (assuming a 4-bit representation, i.e., four 4-bit mantissas plus the shared 4-bit exponent). This makes a significant difference when the number of flows is large. Furthermore, it reduces the processing bandwidth because the buffer count comparisons share a common exponent test and a simple 4-bit mantissa comparison.
In addition to maximum threshold discard control as described above, a number of other options can be supported using the methods of the present invention. For example, decisions on whether to discard cells which have their cell loss priority (CLP) bits set or decisions on whether to use EFCI congestion control can be made according to dynamically scaled thresholds. Such implementations are discussed below.
Typically, each node in a network maintains information regarding each traffic flow (e.g., VP and/or VC) it supports. To implement per flow dynamic scaling management options, additional information would be maintained by these nodes. Then, for each cell, the cell header is used to generate a flow indicator that indexes a lookup table that contains information regarding the traffic flow of interest. The lookup table may be stored in a memory associated with the network node of interest and would store a number of thresholds which could be dynamically scaled.
As an example, consider the case where a CLP threshold is to be scaled. In such an embodiment, cells which have their CLP bits set (CLP = 1) will be discarded when the buffer count for the traffic flow of interest exceeds the CLP threshold. The CLP thresholds can be dynamically scaled according to the procedure described above.
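A minimal sketch of this combined check follows; the function name, argument order, and the pairing with a larger maximum discard threshold for CLP=0 cells are illustrative assumptions consistent with the description above.

```python
def admit_cell(clp_bit, flow_count, clp_threshold, max_threshold):
    """CLP=1 cells are held to the (dynamically scaled) CLP threshold,
    while CLP=0 cells are held only to the larger maximum discard
    threshold. Both thresholds would be scaled as described above."""
    limit = clp_threshold if clp_bit else max_threshold
    return flow_count < limit
```

Under congestion the low-priority (CLP=1) traffic is thus shed first, while CLP=0 cells of the same flow continue to be admitted until the maximum threshold is reached.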
As another example, consider the case of frame discard. In many applications, data is sent in frames. In such cases, once one cell is lost, the rest of the cells in the frame are not useful. "Goodput" can therefore be improved by discarding the remaining cells in the frame. A single bit per flow (frame discard enable bit) could be used to enable this feature.
In this case, logic associated with the node containing the common buffer would keep track of end-of-frame (EOF) indicators in arriving cells. In this way, frames could be distinguished. Various state information determined from the EOF indicators and a dynamically scaled early packet discard threshold could then be used to trigger frame discarding.
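One simple way to realize this per-flow state, sketched under the assumptions that each cell carries an EOF flag and that a drop against the (scaled) EPD threshold discards the remainder of the current frame; the class name and exact policy details are illustrative.

```python
class FrameDiscard:
    """Illustrative per-flow frame discard state driven by EOF indicators."""

    def __init__(self, epd_threshold):
        self.epd_threshold = epd_threshold  # would be dynamically scaled
        self.discarding = False             # inside a frame being dropped?

    def admit(self, flow_count, eof):
        """Return True if this cell is admitted to the buffer."""
        if self.discarding:
            admitted = False                # drop the rest of the frame
        else:
            admitted = flow_count < self.epd_threshold
            if not admitted:
                self.discarding = True      # frame is now useless: shed it
        if eof:
            self.discarding = False         # next cell starts a new frame
        return admitted
```

Dropping the now-useless tail of a partially lost frame frees buffer space for frames that can still be delivered intact, which is the "goodput" improvement described above.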
Per flow buffer management can also be used to set the EFCI bit in cell headers to allow for the use of other congestion management processes. The EFCI threshold is checked as cells are serviced. If the buffer count for the traffic flow of interest is greater than the EFCI threshold for that flow, the EFCI bit in the cell header is set. Again, the EFCI threshold can be dynamically scaled according to the above described process.
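The EFCI marking rule reduces to a one-line check at service time; this sketch is illustrative, and the function name and the pass-through of an already-set bit are assumptions.

```python
def efci_on_service(flow_count, efci_threshold, header_efci=0):
    """Set the EFCI bit if the flow's buffer count exceeds its
    (dynamically scaled) EFCI threshold; otherwise leave the bit as
    received so upstream marking is preserved."""
    return 1 if flow_count > efci_threshold else header_efci
```

Downstream nodes or end systems can then react to the marked cells with their own congestion control procedures.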
Thus, an efficient method for managing a common communications buffer resource shared by a large number of traffic flows (e.g., processes or connections) has been described. In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be clear that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. For example, those skilled in the art will appreciate that the common buffers described in the specification may exist at a variety of levels within a network switch node. This includes the port level, card level, switch level, etc. Also, exemplary thresholds of interest, such as maximum cell discard thresholds, CLP thresholds, EPD thresholds and EFCI thresholds have been discussed. Although discussed separately, those skilled in the art will recognize that the buffer count checks for each of these thresholds may be performed simultaneously or in various groupings according to user and network requirements. Further, these are only examples of the types of thresholds which might be dynamically scaled. Those skilled in the art will recognize that a number of other thresholds may be dynamically configured to achieve desired traffic management in a data communications network. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims

What is claimed is:
1. A method of managing a common buffer resource shared by a plurality of processes including a first process, the method comprising the steps of: establishing a first buffer utilization threshold for said first process; monitoring the usage of said common buffer by said plurality of processes; and dynamically adjusting said first buffer utilization threshold according to said usage.
2. The method of claim 1 wherein said plurality of processes is a plurality of traffic flows and said first process is a first traffic flow.
3. The method of claim 1 wherein said first buffer utilization threshold is represented in a format having a mantissa and an exponent.
4. The method of claim 1 wherein said first buffer utilization threshold is a cell maximum threshold.
5. The method of claim 1 wherein said first buffer utilization threshold is a cell loss priority (CLP) threshold.
6. The method of claim 1 wherein said first buffer utilization threshold is an early packet discard (EPD) threshold.
7. The method of claim 1 wherein said common buffer has a maximum capacity and wherein the step of dynamically adjusting comprises:
determining said usage of said common buffer by said plurality of processes;
determining a first scaling factor according to said usage of said common buffer; and
scaling said first buffer utilization threshold by said first scaling factor.
8. The method of claim 3 wherein the step of dynamically adjusting comprises performing a subtraction operation on said exponent.
9. The method of claim 3 wherein a plurality of thresholds, each of said plurality of thresholds corresponding to a respective one of said plurality of processes, share a common exponent and wherein said plurality of thresholds are scaled simultaneously using a subtraction operation on said common exponent.
10. A buffer management process for a cell switching communications network having a first node, said first node having a common buffer being shared by a plurality of network connections including a first connection, the process comprising the steps of: receiving a first cell at said first node, said first cell being associated with said first connection; determining a buffer count for said common buffer, said buffer count representing a current utilization of said common buffer by said plurality of connections; establishing a first connection threshold for said first connection according to said buffer count; and determining whether said first cell will be accommodated in said common buffer using said first connection threshold.
11. The buffer management process of claim 10 wherein the step of establishing the first connection threshold comprises: establishing an initial connection threshold; and dynamically adjusting said initial connection threshold according to said buffer count, said dynamic adjusting producing a first scaled threshold.
12. The buffer management process of claim 11 wherein the step of determining whether said first cell will be accommodated comprises: establishing a connection cell count, the connection cell count indicating the number of cells associated with the first connection stored in said common buffer; and comparing said connection cell count to said first scaled threshold, wherein if said connection cell count exceeds said first scaled threshold, said first cell is not admitted to said common buffer.
13. The buffer management process of claim 12 wherein said first connection threshold is coded in a format having a mantissa and an exponent and said step of dynamically adjusting comprises a subtraction operation.
14. A buffer management system for congestion prevention in a cell switching communications network comprising a plurality of logical connections, the buffer management system comprising: a first node receiving network traffic transmitted over said plurality of logical connections, said first node having a common buffer and further having a buffer control device, said buffer control device monitoring the usage of said common buffer by said network traffic and dynamically scaling a buffer utilization threshold according to said usage, said buffer utilization threshold corresponding to a first of said plurality of logical connections.
15. A buffer management system as in claim 14 wherein said buffer control device further comprises: a lookup table stored in a memory, said lookup table comprising buffer threshold scaling factors.
16. A buffer management system as in claim 14 wherein said buffer utilization threshold is represented in a format having a mantissa and an exponent.
17. A buffer management system as in claim 14 wherein said buffer utilization threshold is a cell loss priority (CLP) threshold.
18. A buffer management system as in claim 14 wherein said buffer utilization threshold is an early packet discard (EPD) threshold.
19. A buffer management system as in claim 14 wherein said buffer utilization threshold is a cell maximum threshold.
20. A buffer management system as in claim 15 wherein said buffer control device further comprises circuitry for comparing, said circuitry for comparing receiving a first signal indicating a buffer count, the buffer count representing the utilization of said common buffer by said first connection, said circuitry for comparing further receiving a second signal representing said scaled buffer utilization threshold after dynamic scaling, said circuitry for comparing generating a third signal, said third signal indicating whether said first connection has exceeded an associated allowable usage of said common buffer.
21. A cell exchange node for a cell switching communications network, the node comprising: a lookup table stored in a memory, said lookup table comprising buffer threshold scaling factors.
22. A cell exchange node as in claim 21 wherein said communications network comprising a plurality of logical connections and wherein said memory further comprising a plurality of connection thresholds, said plurality of connection thresholds each representing a maximum number of cells to be stored in a buffer associated with said cell exchange node for each respective logical connection.
23. A cell exchange node as in claim 22 wherein said plurality of connection thresholds are coded in a format having a mantissa and an exponent.
24. A cell exchange node as in claim 23 wherein said plurality of connection thresholds share a common exponent.
Publication: WO1997043869A1, published 20 November 1997. Priority: US 08/648,556, filed 15 May 1996; PCT/US1997/007839, filed 9 May 1997.

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2321820A (en) * 1997-01-17 1998-08-05 Tadhg Creedon A method for dynamically allocating buffers to virtual channels in an asynchronous network
EP0920235A2 (en) * 1997-11-28 1999-06-02 Newbridge Networks Corporation Congestion management in a multi-port shared memory switch
FR2775546A1 (en) * 1998-01-19 1999-09-03 Nec Corp Asynchronous transfer mode switch for traffic control
US5987507A (en) * 1998-05-28 1999-11-16 3Com Technologies Multi-port communication network device including common buffer memory with threshold control of port packet counters
US6151323A (en) * 1997-01-17 2000-11-21 3Com Technologies Method of supporting unknown addresses in an interface for data transmission in an asynchronous transfer mode
EP1056245A2 (en) * 1999-05-27 2000-11-29 Newbridge Networks Corporation Buffering system employing per traffic flow accounting congestion control
WO2000074432A1 (en) * 1999-05-28 2000-12-07 Network Equipment Technologies, Inc. Fair discard system
US6163541A (en) * 1997-01-17 2000-12-19 3Com Technologies Method for selecting virtual channels based on address priority in an asynchronous transfer mode device
US6208662B1 (en) 1997-01-17 2001-03-27 3Com Technologies Method for distributing and recovering buffer memories in an asynchronous transfer mode edge device
US6549541B1 (en) 1997-11-04 2003-04-15 Nokia Corporation Buffer management
US7139271B1 (en) 2001-02-07 2006-11-21 Cortina Systems, Inc. Using an embedded indication of egress application type to determine which type of egress processing to perform
US7286566B1 (en) 2001-05-08 2007-10-23 Cortina Systems, Inc. Multi-service segmentation and reassembly device that maintains reduced number of segmentation contexts
US9008109B2 (en) 2011-10-26 2015-04-14 Fujitsu Limited Buffer management of relay device

Families Citing this family (89)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6034945A (en) 1996-05-15 2000-03-07 Cisco Technology, Inc. Method and apparatus for per traffic flow buffer management
US5991265A (en) * 1996-12-02 1999-11-23 Conexant Systems, Inc. Asynchronous transfer mode system and method
US6246687B1 (en) * 1997-03-27 2001-06-12 Massachusetts Institute Of Technology Network switching system supporting guaranteed data rates
US6430191B1 (en) 1997-06-30 2002-08-06 Cisco Technology, Inc. Multi-stage queuing discipline
US6487202B1 (en) 1997-06-30 2002-11-26 Cisco Technology, Inc. Method and apparatus for maximizing memory throughput
US6912224B1 (en) 1997-11-02 2005-06-28 International Business Machines Corporation Adaptive playout buffer and method for improved data communication
US6560198B1 (en) * 1997-11-07 2003-05-06 Telcordia Technologies, Inc. Method and system for stabilized random early detection using packet sampling
IL122271A (en) * 1997-11-21 2001-01-11 Eci Telecom Ltd Apparatus and method for managing network congestion
US6526060B1 (en) 1997-12-05 2003-02-25 Cisco Technology, Inc. Dynamic rate-based, weighted fair scheduler with explicit rate feedback option
US6434612B1 (en) 1997-12-10 2002-08-13 Cisco Technology, Inc. Connection control interface for asynchronous transfer mode switches
US6320845B1 (en) 1998-04-27 2001-11-20 Cisco Technology, Inc. Traffic management and flow prioritization on a routed computer network
US6377546B1 (en) * 1998-05-12 2002-04-23 International Business Machines Corporation Rate guarantees through buffer management
DE59914435D1 (en) * 1998-05-29 2007-09-13 Siemens Ag Method for removing ATM cells from an ATM communication device
US6438102B1 (en) 1998-06-03 2002-08-20 Cisco Technology, Inc. Method and apparatus for providing asynchronous memory functions for bi-directional traffic in a switch platform
US6483850B1 (en) * 1998-06-03 2002-11-19 Cisco Technology, Inc. Method and apparatus for routing cells having different formats among service modules of a switch platform
US6463485B1 (en) 1998-06-03 2002-10-08 Cisco Technology, Inc. System for providing cell bus management in a switch platform including a write port cell count in each of a plurality of unidirectional FIFO for indicating which FIFO be able to accept more cell
JP3141850B2 (en) * 1998-07-10 2001-03-07 日本電気株式会社 Time division switching device, time division switching method, and recording medium
US6430153B1 (en) 1998-09-04 2002-08-06 Cisco Technology, Inc. Trunk delay simulator
US6999421B1 (en) * 1998-10-26 2006-02-14 Fujitsu Limited Adjustable connection admission control method and device for packet-based switch
JP3070683B2 (en) * 1998-11-13 2000-07-31 日本電気株式会社 Image transmission method and image transmission device using the method
US6978312B2 (en) * 1998-12-18 2005-12-20 Microsoft Corporation Adaptive flow control protocol
US6658469B1 (en) * 1998-12-18 2003-12-02 Microsoft Corporation Method and system for switching between network transport providers
US6724756B2 (en) 1999-01-12 2004-04-20 Cisco Technology, Inc. Method for introducing switched virtual connection call redundancy in asynchronous transfer mode networks
US7215641B1 (en) * 1999-01-27 2007-05-08 Cisco Technology, Inc. Per-flow dynamic buffer management
US6762994B1 (en) * 1999-04-13 2004-07-13 Alcatel Canada Inc. High speed traffic management control using lookup tables
EP1069801B1 (en) * 1999-07-13 2004-10-06 International Business Machines Corporation Connections bandwidth right sizing based on network resources occupancy monitoring
US6618378B1 (en) * 1999-07-21 2003-09-09 Alcatel Canada Inc. Method and apparatus for supporting multiple class of service connections in a communications network
US6724776B1 (en) * 1999-11-23 2004-04-20 International Business Machines Corporation Method and system for providing optimal discard fraction
US6788697B1 (en) * 1999-12-06 2004-09-07 Nortel Networks Limited Buffer management scheme employing dynamic thresholds
US6891794B1 (en) 1999-12-23 2005-05-10 Cisco Technology, Inc. System and method for bandwidth protection in a packet network
US6775292B1 (en) * 2000-01-24 2004-08-10 Cisco Technology, Inc. Method for servicing of multiple queues carrying voice over virtual circuits based on history
US7142558B1 (en) 2000-04-17 2006-11-28 Cisco Technology, Inc. Dynamic queuing control for variable throughput communication channels
US6904014B1 (en) 2000-04-27 2005-06-07 Cisco Technology, Inc. Method and apparatus for performing high-speed traffic shaping
US6738386B1 (en) * 2000-05-11 2004-05-18 Agere Systems Inc. Controlled latency with dynamically limited queue depth based on history and latency estimation
US20020018474A1 (en) * 2000-06-01 2002-02-14 Seabridge Ltd. Efficient packet transmission over ATM
US7126969B1 (en) * 2000-07-06 2006-10-24 Cisco Technology, Inc. Scalable system and method for reliably sequencing changes in signaling bits in multichannel telecommunication lines transmitted over a network
US7124440B2 (en) * 2000-09-07 2006-10-17 Mazu Networks, Inc. Monitoring network traffic denial of service attacks
US7043759B2 (en) * 2000-09-07 2006-05-09 Mazu Networks, Inc. Architecture to thwart denial of service attacks
US7702806B2 (en) * 2000-09-07 2010-04-20 Riverbed Technology, Inc. Statistics collection for network traffic
US7743134B2 (en) * 2000-09-07 2010-06-22 Riverbed Technology, Inc. Thwarting source address spoofing-based denial of service attacks
US7278159B2 (en) * 2000-09-07 2007-10-02 Mazu Networks, Inc. Coordinated thwarting of denial of service attacks
US7398317B2 (en) * 2000-09-07 2008-07-08 Mazu Networks, Inc. Thwarting connection-based denial of service attacks
US20020107974A1 (en) * 2000-10-06 2002-08-08 Janoska Mark William Data traffic manager
US6967921B1 (en) * 2000-11-27 2005-11-22 At&T Corp. Method and device for efficient bandwidth management
US7130267B1 (en) 2000-12-29 2006-10-31 Cisco Technology, Inc. System and method for allocating bandwidth in a network node
US6947996B2 (en) * 2001-01-29 2005-09-20 Seabridge, Ltd. Method and system for traffic control
US6990115B2 (en) * 2001-02-26 2006-01-24 Seabridge Ltd. Queue control method and system
US6831891B2 (en) * 2001-03-06 2004-12-14 Pluris, Inc. System for fabric packet control
US6950396B2 (en) * 2001-03-20 2005-09-27 Seabridge Ltd. Traffic control method and system
JP3598985B2 (en) * 2001-03-21 2004-12-08 日本電気株式会社 Queue assignment system and queue assignment method for packet switch
US7450510B1 (en) 2001-04-19 2008-11-11 Cisco Technology, Inc. System and method for distributing guaranteed bandwidth among service groups in a network node
US7161905B1 (en) * 2001-05-03 2007-01-09 Cisco Technology, Inc. Method and system for managing time-sensitive packetized data streams at a receiver
GB2372172B (en) 2001-05-31 2002-12-24 Ericsson Telefon Ab L M Congestion handling in a packet data network
US7065581B2 (en) * 2001-06-27 2006-06-20 International Business Machines Corporation Method and apparatus for an improved bulk read socket call
US7225271B1 (en) 2001-06-29 2007-05-29 Cisco Technology, Inc. System and method for recognizing application-specific flows and assigning them to queues
US7218608B1 (en) 2001-08-02 2007-05-15 Cisco Technology, Inc. Random early detection algorithm using an indicator bit to detect congestion in a computer network
US7039013B2 (en) * 2001-12-31 2006-05-02 Nokia Corporation Packet flow control method and device
US7213264B2 (en) 2002-01-31 2007-05-01 Mazu Networks, Inc. Architecture to thwart denial of service attacks
US7743415B2 (en) * 2002-01-31 2010-06-22 Riverbed Technology, Inc. Denial of service attacks characterization
US7286547B2 (en) * 2002-05-09 2007-10-23 Broadcom Corporation Dynamic adjust multicast drop threshold to provide fair handling between multicast and unicast frames
US8504879B2 (en) * 2002-11-04 2013-08-06 Riverbed Technology, Inc. Connection based anomaly detection
US8479057B2 (en) * 2002-11-04 2013-07-02 Riverbed Technology, Inc. Aggregator for connection based anomaly detection
US7363656B2 (en) * 2002-11-04 2008-04-22 Mazu Networks, Inc. Event detection/anomaly correlation heuristics
US7421502B2 (en) * 2002-12-06 2008-09-02 International Business Machines Corporation Method and system for storage-aware flow resource management
US7929534B2 (en) * 2004-06-28 2011-04-19 Riverbed Technology, Inc. Flow logging for connection-based anomaly detection
US7760653B2 (en) * 2004-10-26 2010-07-20 Riverbed Technology, Inc. Stackable aggregation for connection based anomaly detection
US8909807B2 (en) * 2005-04-07 2014-12-09 Opanga Networks, Inc. System and method for progressive download using surplus network capacity
US20070058650A1 (en) * 2005-08-09 2007-03-15 International Business Machines Corporation Resource buffer sizing under replenishment for services
JP2007206799A (en) * 2006-01-31 2007-08-16 Toshiba Corp Data transfer device, information recording reproduction device and data transfer method
US20070198982A1 (en) * 2006-02-21 2007-08-23 International Business Machines Corporation Dynamic resource allocation for disparate application performance requirements
US7948976B2 (en) * 2006-04-26 2011-05-24 Marvell Israel (M.I.S.L) Ltd. Efficient management of queueing resources for switches
JP4129694B2 (en) * 2006-07-19 2008-08-06 ソニー株式会社 Information processing apparatus and method, program, and recording medium
US20080049760A1 (en) * 2006-08-24 2008-02-28 Gilles Bergeron Oversubscription in broadband network
TW200833026A (en) * 2007-01-29 2008-08-01 Via Tech Inc Packet processing method and a network device using the method
US7978607B1 (en) * 2008-08-29 2011-07-12 Brocade Communications Systems, Inc. Source-based congestion detection and control
EP2187580B1 (en) * 2008-11-18 2013-01-16 Alcatel Lucent Method for scheduling packets of a plurality of flows and system for carrying out the method
US9112818B1 (en) 2010-02-05 2015-08-18 Marvell Israel (M.I.S.L) Ltd. Enhanced tail dropping in a switch
US8499106B2 (en) * 2010-06-24 2013-07-30 Arm Limited Buffering of a data stream
US8665725B2 (en) * 2011-12-20 2014-03-04 Broadcom Corporation System and method for hierarchical adaptive dynamic egress port and queue buffer management
US9485326B1 (en) 2013-04-01 2016-11-01 Marvell Israel (M.I.S.L) Ltd. Scalable multi-client scheduling
US9306876B1 (en) 2013-04-01 2016-04-05 Marvell Israel (M.I.S.L) Ltd. Multibank egress queuing system in a network device
US9473418B2 (en) * 2013-12-12 2016-10-18 International Business Machines Corporation Resource over-subscription
US9548937B2 (en) * 2013-12-23 2017-01-17 Intel Corporation Backpressure techniques for multi-stream CAS
US10057194B1 (en) 2014-01-07 2018-08-21 Marvell Israel (M.I.S.L) Ltd. Methods and apparatus for memory resource management in a network device
US9866489B2 (en) * 2014-07-11 2018-01-09 F5 Networks, Inc. Delayed proxy action
CN109257304A (en) * 2017-07-12 2019-01-22 中兴通讯股份有限公司 A kind of bandwidth adjusting method, device, storage medium and the network equipment
US10608943B2 (en) * 2017-10-27 2020-03-31 Advanced Micro Devices, Inc. Dynamic buffer management in multi-client token flow control routers
CN109408233B (en) * 2018-10-17 2022-06-03 郑州云海信息技术有限公司 Cache resource allocation method and device
US11206222B2 (en) 2020-02-07 2021-12-21 Wipro Limited System and method of memory management in communication networks

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5434848A (en) * 1994-07-28 1995-07-18 International Business Machines Corporation Traffic management in packet communications networks
EP0706298A2 (en) * 1994-10-04 1996-04-10 AT&T Corp. Dynamic queue length thresholds in a shared memory ATM switch

Family Cites Families (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4430712A (en) * 1981-11-27 1984-02-07 Storage Technology Corporation Adaptive domain partitioning of cache memory space
US4734907A (en) * 1985-09-06 1988-03-29 Washington University Broadcast packet switching network
CA1329432C (en) * 1988-11-02 1994-05-10 William Davy Method of memory and cpu time allocation for a multi-user computer system
US5014265A (en) * 1989-11-30 1991-05-07 At&T Bell Laboratories Method and apparatus for congestion control in a data network
US5157654A (en) 1990-12-18 1992-10-20 Bell Communications Research, Inc. Technique for resolving output port contention in a high speed packet switch
US5303078A (en) 1990-12-18 1994-04-12 Bell Communications Research, Inc. Apparatus and method for large scale ATM switching
US5274768A (en) 1991-05-28 1993-12-28 The Trustees Of The University Of Pennsylvania High-performance host interface for ATM networks
CA2089726C (en) 1991-06-18 1999-10-26 Takao Ogura Detour path determination method
US5379297A (en) 1992-04-09 1995-01-03 Network Equipment Technologies, Inc. Concurrent multi-channel segmentation and reassembly processors for asynchronous transfer mode
DE69129851T2 (en) 1991-09-13 1999-03-25 Ibm Configurable gigabit / s switch adapter
US5542068A (en) * 1991-12-10 1996-07-30 Microsoft Corporation Method and system for storing floating point numbers to reduce storage space
US5680582A (en) * 1991-12-20 1997-10-21 Microsoft Corporation Method for heap coalescing where blocks do not cross page of segment boundaries
SE515178C2 (en) * 1992-03-20 2001-06-25 Ericsson Telefon Ab L M Procedures and devices for prioritizing buffer management in packet networks
US5313454A (en) 1992-04-01 1994-05-17 Stratacom, Inc. Congestion control for cell networks
US5539899A (en) * 1992-04-03 1996-07-23 International Business Machines Corporation System and method for handling a segmented program in a memory for a multitasking data processing system utilizing paged virtual storage
FR2694671A1 (en) * 1992-08-06 1994-02-11 Trt Telecom Radio Electr Device for rearranging virtual circuit rates in asynchronous time division multiplex transmission.
FR2699703B1 (en) * 1992-12-22 1995-01-13 Bull Sa Method for managing a buffer memory, recording medium and computer system incorporating it.
US5412655A (en) 1993-01-29 1995-05-02 Nec Corporation Multiprocessing system for assembly/disassembly of asynchronous transfer mode cells
US5359592A (en) 1993-06-25 1994-10-25 Stratacom, Inc. Bandwidth and congestion control for queue channels in a cell switching communication controller
DE4323405A1 (en) * 1993-07-13 1995-01-19 Sel Alcatel Ag Access control method for a buffer memory and device for buffering data packets and switching center with such a device
JP3044983B2 (en) 1993-08-25 2000-05-22 株式会社日立製作所 Cell switching method for ATM switching system
CA2123447C (en) 1993-09-20 1999-02-16 Richard L. Arndt Scalable system interrupt structure for a multiprocessing system
KR960003783B1 (en) * 1993-11-06 1996-03-22 한국전기통신공사 (Korea Telecom) Subscriber ATM mux for interface to ISDN
US5600820A (en) * 1993-12-01 1997-02-04 Bell Communications Research, Inc. Method for partitioning memory in a high speed network based on the type of service
JP2888376B2 (en) 1993-12-31 1999-05-10 インターナシヨナル・ビジネス・マシーンズ・コーポレーシヨン Switching equipment for multiple traffic classes
JPH07221761A (en) 1994-02-04 1995-08-18 Fujitsu Ltd Cell delay absorption circuit
JP3405800B2 (en) 1994-03-16 2003-05-12 富士通株式会社 ATM-based variable-length cell transfer system, ATM-based variable-length cell switch, and ATM-based variable-length cell switch
US5583861A (en) 1994-04-28 1996-12-10 Integrated Telecom Technology ATM switching element and method having independently accessible cell memories
JP2655481B2 (en) 1994-04-28 1997-09-17 日本電気株式会社 Priority control method in output buffer type ATM switch
US5949781A (en) 1994-08-31 1999-09-07 Brooktree Corporation Controller for ATM segmentation and reassembly
US5548587A (en) 1994-09-12 1996-08-20 Efficient Networks, Inc. Asynchronous transfer mode adapter for desktop applications
EP0705006B1 (en) 1994-09-28 1999-09-01 Siemens Aktiengesellschaft ATM communication system for statistical multiplexing of cells
US5541919A (en) * 1994-12-19 1996-07-30 Motorola, Inc. Multimedia multiplexing device and method using dynamic packet segmentation
ZA959722B (en) 1994-12-19 1996-05-31 Alcatel Nv Traffic management and congestion control for packet-based networks
EP0719065A1 (en) 1994-12-20 1996-06-26 International Business Machines Corporation Multipurpose packet switching node for a data communication network
JPH08288965A (en) 1995-04-18 1996-11-01 Hitachi Ltd Switching system
US5625625A (en) 1995-07-07 1997-04-29 Sun Microsystems, Inc. Method and apparatus for partitioning data load and unload functions within an interface system for use with an asynchronous transfer mode system
US5917805A (en) 1995-07-19 1999-06-29 Fujitsu Network Communications, Inc. Network switch utilizing centralized and partitioned memory for connection topology information storage
US5796735A (en) 1995-08-28 1998-08-18 Integrated Device Technology, Inc. System and method for transmission rate control in a segmentation and reassembly (SAR) circuit under ATM protocol
US5875352A (en) 1995-11-03 1999-02-23 Sun Microsystems, Inc. Method and apparatus for multiple channel direct memory access control
US5974466A (en) 1995-12-28 1999-10-26 Hitachi, Ltd. ATM controller and ATM communication control device
US5765032A (en) 1996-01-11 1998-06-09 Cisco Technology, Inc. Per channel frame queuing and servicing in the egress direction of a communications network
US6028844A (en) 1996-01-25 2000-02-22 Cypress Semiconductor Corp. ATM receiver
US5978856A (en) 1996-01-26 1999-11-02 Dell Usa, L.P. System and method for reducing latency in layered device driver architectures
US5793747A (en) 1996-03-14 1998-08-11 Motorola, Inc. Event-driven cell scheduler and method for supporting multiple service categories in a communication network
US5844901A (en) 1996-03-15 1998-12-01 Integrated Telecom Technology Asynchronous bit-table calendar for ATM switch
US5812527A (en) 1996-04-01 1998-09-22 Motorola Inc. Simplified calculation of cell transmission rates in a cell based network
US6034945A (en) 1996-05-15 2000-03-07 Cisco Technology, Inc. Method and apparatus for per traffic flow buffer management
US6058114A (en) 1996-05-20 2000-05-02 Cisco Systems, Inc. Unified network cell scheduler and flow controller
US5898688A (en) 1996-05-24 1999-04-27 Cisco Technology, Inc. ATM switch with integrated system bus
US5742765A (en) 1996-06-19 1998-04-21 Pmc-Sierra, Inc. Combination local ATM segmentation and reassembly and physical layer device
GB9613473D0 (en) 1996-06-27 1996-08-28 Mitel Corp ATM cell transmit priority allocator
US5854911A (en) 1996-07-01 1998-12-29 Sun Microsystems, Inc. Data buffer prefetch apparatus and method
US5901147A (en) 1996-08-30 1999-05-04 Mmc Networks, Inc. Apparatus and methods to change thresholds to control congestion in ATM switches
US5999518A (en) 1996-12-04 1999-12-07 Alcatel Usa Sourcing, L.P. Distributed telecommunications switching system and method
US5864540A (en) 1997-04-04 1999-01-26 At&T Corp/Csi Zeinet(A Cabletron Co.) Method for integrated traffic shaping in a packet-switched network
US5970064A (en) 1997-06-12 1999-10-19 Northern Telecom Limited Real time control architecture for admission control in communications network
US5982783A (en) 1997-06-16 1999-11-09 Lucent Technologies Inc. Switch distribution via an intermediary switching network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KENJI KAWAHARA ET AL: "PERFORMANCE EVALUATION OF SELECTIVE CELL DISCARD SCHEMES IN ATM NETWORKS", PROCEEDINGS OF IEEE INFOCOM 1996. CONFERENCE ON COMPUTER COMMUNICATIONS, FIFTEENTH ANNUAL JOINT CONFERENCE OF THE IEEE COMPUTER AND COMMUNICATIONS SOCIETIES. NETWORKING THE NEXT GENERATION SAN FRANCISCO, MAR. 24 - 28, 1996, vol. 3, 24 March 1996 (1996-03-24), INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, pages 1054 - 1061, XP000622238 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151323A (en) * 1997-01-17 2000-11-21 3Com Technologies Method of supporting unknown addresses in an interface for data transmission in an asynchronous transfer mode
GB2321820B (en) * 1997-01-17 1999-04-14 Tadhg Creedon Method and apparatus for buffer management in virtual circuit systems
GB2321820A (en) * 1997-01-17 1998-08-05 Tadhg Creedon A method for dynamically allocating buffers to virtual channels in an asynchronous network
US6208662B1 (en) 1997-01-17 2001-03-27 3Com Technologies Method for distributing and recovering buffer memories in an asynchronous transfer mode edge device
US6163541A (en) * 1997-01-17 2000-12-19 3Com Technologies Method for selecting virtual channels based on address priority in an asynchronous transfer mode device
US6549541B1 (en) 1997-11-04 2003-04-15 Nokia Corporation Buffer management
EP0920235A2 (en) * 1997-11-28 1999-06-02 Newbridge Networks Corporation Congestion management in a multi-port shared memory switch
US7145868B2 (en) 1997-11-28 2006-12-05 Alcatel Canada Inc. Congestion management in a multi-port shared memory switch
EP0920235A3 (en) * 1997-11-28 2000-02-09 Newbridge Networks Corporation Congestion management in a multi-port shared memory switch
US7065088B2 (en) 1998-01-19 2006-06-20 Juniper Networks, Inc. Asynchronous transfer mode switch with function for assigning queue having forwarding rate close to declared rate
US8009565B2 (en) 1998-01-19 2011-08-30 Juniper Networks, Inc. Switch with function for assigning queue based on a declared transfer rate
FR2775546A1 (en) * 1998-01-19 1999-09-03 Nec Corp Asynchronous transfer mode switch for traffic control
AU745528B2 (en) * 1998-01-19 2002-03-21 Juniper Networks, Inc. Asynchronous transfer mode switch with function for assigning queue having forwarding rate close to declared rate
US7787468B2 (en) 1998-01-19 2010-08-31 Juniper Networks, Inc. Switch with function for assigning queue based on a declared rate transfer
US7391726B2 (en) 1998-01-19 2008-06-24 Juniper Networks, Inc. Switch with function for assigning queue based on forwarding rate
US6731603B1 (en) 1998-01-19 2004-05-04 Nec Corporation Asynchronous transfer mode switch with function for assigning queue having forwarding rate close to declared rate
US5987507A (en) * 1998-05-28 1999-11-16 3Com Technologies Multi-port communication network device including common buffer memory with threshold control of port packet counters
GB2337905A (en) * 1998-05-28 1999-12-01 3Com Technologies Ltd Buffer management in network devices
GB2337905B (en) * 1998-05-28 2003-02-12 3Com Technologies Ltd Buffer management in network devices
EP1056245A3 (en) * 1999-05-27 2004-05-12 Alcatel Canada Inc. Buffering system employing per traffic flow accounting congestion control
EP1056245A2 (en) * 1999-05-27 2000-11-29 Newbridge Networks Corporation Buffering system employing per traffic flow accounting congestion control
US6717912B1 (en) 1999-05-28 2004-04-06 Network Equipment Technologies, Inc. Fair discard system
WO2000074432A1 (en) * 1999-05-28 2000-12-07 Network Equipment Technologies, Inc. Fair discard system
US7142564B1 (en) 2001-02-07 2006-11-28 Cortina Systems, Inc. Multi-service segmentation and reassembly device with a single data path that handles both cell and packet traffic
US7342942B1 (en) 2001-02-07 2008-03-11 Cortina Systems, Inc. Multi-service segmentation and reassembly device that maintains only one reassembly context per active output port
US7369574B1 (en) 2001-02-07 2008-05-06 Cortina Systems, Inc. Multi-service segmentation and reassembly device that is operable in an ingress mode or in an egress mode
US7298738B1 (en) 2001-02-07 2007-11-20 Cortina Systems, Inc. Backpressuring using a serial bus interface and a status switch cell
US7139271B1 (en) 2001-02-07 2006-11-21 Cortina Systems, Inc. Using an embedded indication of egress application type to determine which type of egress processing to perform
US7286566B1 (en) 2001-05-08 2007-10-23 Cortina Systems, Inc. Multi-service segmentation and reassembly device that maintains reduced number of segmentation contexts
US9008109B2 (en) 2011-10-26 2015-04-14 Fujitsu Limited Buffer management of relay device

Also Published As

Publication number Publication date
AU730804B2 (en) 2001-03-15
JP2000510308A (en) 2000-08-08
US6535484B1 (en) 2003-03-18
CA2254104A1 (en) 1997-11-20
EP0898855A1 (en) 1999-03-03
AU3001097A (en) 1997-12-05
US6034945A (en) 2000-03-07

Similar Documents

Publication Publication Date Title
US6034945A (en) Method and apparatus for per traffic flow buffer management
EP0763915B1 (en) Packet transfer device and method adaptive to a large number of input ports
US5629928A (en) Dynamic fair queuing to support best effort traffic in an ATM network
EP1056245B1 (en) Buffering system employing per traffic flow accounting congestion control
JP3128654B2 (en) Supervisory control method, supervisory control device and switching system
US5541912A (en) Dynamic queue length thresholds in a shared memory ATM switch
JP3354689B2 (en) ATM exchange, exchange and switching path setting method thereof
EP1122916B1 (en) Dynamic buffering system having integrated radom early detection
US6219728B1 (en) Method and apparatus for allocating shared memory resources among a plurality of queues each having a threshold value therefor
US5765032A (en) Per channel frame queuing and servicing in the egress direction of a communications network
US5583861A (en) ATM switching element and method having independently accessible cell memories
US5864539A (en) Method and apparatus for a rate-based congestion control in a shared memory switch
US7023856B1 (en) Method and system for providing differentiated service on a per virtual circuit basis within a packet-based switch/router
US6587437B1 (en) ER information acceleration in ABR traffic
US20040151197A1 (en) Priority queue architecture for supporting per flow queuing and multiple ports
US6717912B1 (en) Fair discard system
US20050094643A1 (en) Method of and apparatus for variable length data packet transmission with configurable adaptive output scheduling enabling transmission on the same transmission link(s) of differentiated services for various traffic types
US7787468B2 (en) Switch with function for assigning queue based on a declared rate transfer
AU1746892A (en) Low delay or low loss cell switch for atm
US6865156B2 (en) Bandwidth control method, cell receiving apparatus, and traffic control system
EP1271856B1 (en) Flow and congestion control in a switching network
EP0481447B1 (en) Method of controlling communication network incorporating virtual channels exchange nodes and virtual paths exchange nodes, and the said communication network
JP2874713B2 (en) ATM switching system and its traffic control method
CA2273291A1 (en) Buffering system employing per traffic flow accounting congestion control
CA2297512A1 (en) Dynamic buffering system having integrated random early detection

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN YU AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref document number: 2254104

Country of ref document: CA

Ref country code: CA

Ref document number: 2254104

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1997924631

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1997924631

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 1997924631

Country of ref document: EP