WO1997004556A1 - Link buffer sharing method and apparatus - Google Patents

Link buffer sharing method and apparatus

Info

Publication number
WO1997004556A1
Authority
WO
WIPO (PCT)
Prior art keywords
said
receiver
link
count
data
Prior art date
Application number
PCT/US1996/011934
Other languages
French (fr)
Inventor
Thomas A. Manning
Stephen A. Caldara
Stephen A. Hauser
Douglas H. Hunt
Raymond L. Strouble
Original Assignee
Fujitsu Network Communications, Inc.
Fujitsu Limited
Priority date
Filing date
Publication date
Application filed by Fujitsu Network Communications, Inc. and Fujitsu Limited
Priority to JP9506875A (published as JPH11511303A)
Priority to PCT/US1996/011934 (published as WO1997004556A1)
Priority to AU65019/96A (published as AU6501996A)
Publication of WO1997004556A1

Classifications

    • H04L47/18 End to end (flow control; congestion control in data switching networks)
    • G06F15/17375 One dimensional, e.g. linear array, ring (indirect interconnection networks, non-hierarchical topologies)
    • H04L12/4608 LAN interconnection over ATM networks
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L12/5602 Bandwidth control in ATM Networks, e.g. leaky bucket
    • H04L47/10 Flow control; Congestion control
    • H04L47/11 Identifying congestion
    • H04L47/26 Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L47/266 Stopping or restarting the source, e.g. X-on or X-off
    • H04L47/29 Flow control; Congestion control using a combination of thresholds
    • H04L47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L47/621 Individual queue per connection or flow, e.g. per VC
    • H04L49/106 ATM switching elements using space switching, e.g. crossbar or matrix
    • H04L49/107 ATM switching elements using shared medium
    • H04L49/153 ATM switching fabrics having parallel switch planes
    • H04L49/1553 Interconnection of ATM switching modules, e.g. ATM switching fabrics
    • H04L49/1576 Crossbar or matrix (interconnection of ATM switching modules)
    • H04L49/203 ATM switching fabrics with multicast or broadcast capabilities
    • H04L49/255 Control mechanisms for ATM switching fabrics
    • H04L49/256 Routing or path finding in ATM switching fabrics
    • H04L49/3081 ATM peripheral units, e.g. policing, insertion or extraction
    • H04L49/309 Header conversion, routing tables or routing tags
    • H04L49/455 Provisions for supporting expansion in ATM switches
    • H04Q11/0478 Provisions for broadband connections
    • H04J3/0682 Clock or time synchronisation in a network by delay compensation, e.g. by compensation of propagation delay or variations thereof, by ranging
    • H04J3/0685 Clock or time synchronisation in a node; Intranode synchronisation
    • H04L2012/5614 User Network Interface
    • H04L2012/5616 Terminal equipment, e.g. codecs, synch.
    • H04L2012/5627 Fault tolerance and recovery
    • H04L2012/5628 Testing
    • H04L2012/5629 Admission control
    • H04L2012/5631 Resource management and allocation
    • H04L2012/5632 Bandwidth allocation
    • H04L2012/5634 In-call negotiation
    • H04L2012/5635 Backpressure, e.g. for ABR
    • H04L2012/5642 Multicast/broadcast/point-multipoint, e.g. VOD
    • H04L2012/5643 Concast/multipoint-to-point
    • H04L2012/5647 Cell loss
    • H04L2012/5648 Packet discarding, e.g. EPD, PTD
    • H04L2012/5649 Cell delay or jitter
    • H04L2012/5651 Priority, marking, classes
    • H04L2012/5652 Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly
    • H04L2012/5672 Multiplexing, e.g. coding, scrambling
    • H04L2012/5679 Arbitration or scheduling
    • H04L2012/5681 Buffer or queue management
    • H04L2012/5682 Threshold; Watermark
    • H04L2012/5683 Buffer or queue management for avoiding head of line blocking
    • H04L2012/5685 Addressing issues
    • H04L69/324 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC
    • H04L7/046 Speed or phase control by synchronisation signals using special codes as synchronising signal, using a dotting sequence

Definitions

  • This application relates to communications methods and apparatus in a distributed switching architecture, and in particular to buffer sharing methods and apparatus in a distributed switching architecture.
  • FCVC Flow Controlled Virtual Connection
  • This protocol involves a credit-based flow control system, where a number of connections exist within the same link with the necessary buffers established and flow control monitored on a per-connection basis. Buffer usage over a known time interval, the link round-trip time, is determined in order to calculate the per-connection bandwidth. A trade-off is established between maximum bandwidth and buffer allocation per connection. Such per-connection feedback and subsequent flow control at the transmitter avoids data loss from an inability of the downstream element to store data cells sent from the upstream element.
  • the flow control protocol isolates each connection, ensuring lossless cell transmission for that connection.
  • Connection-level flow control results in a trade-off between update frequency and the realized bandwidth for the connection.
  • High update frequency has the effect of minimizing situations in which a large number of receiver cell buffers are available, though the transmitter incorrectly believes the buffers to be unavailable. Thus it reduces the number of buffers that must be set aside for a connection.
  • a high update frequency to control a traffic flow will require a high utilization of bandwidth (in the reverse direction) to supply the necessary flow control buffer update information where a large number of connections exist in the same link. Realizing that transmission systems are typically symmetrical with traffic flowing in both directions, and flow control buffer update information likewise flowing in both directions, it is readily apparent that a high update frequency is wasteful of the bandwidth of the link.
  • the presently claimed invention provides buffer state flow control at the link level, otherwise known as link flow control, in addition to the flow control on a per-connection basis.
  • link flow control may have a high update frequency, whereas connection flow control information may have a low update frequency.
  • The end result is a low effective update frequency, since link-level flow control exists only once per link whereas the link typically has many connections within it, each needing its own flow control. This minimizes the wasting of link bandwidth to transmit flow control update information.
  • buffers may be allocated from a pool of buffers and thus connections may share in access to available buffers.
  • Sharing buffers means that fewer buffers are needed since the projected buffers required for a link in the defined known time interval may be shown to be less than the projected buffers that would be required if independently calculated and summed for all of the connections within the link for the same time interval. Furthermore, the high update frequency that may be used on the link level flow control without undue wasting of link bandwidth, allows further minimization of the buffers that must be assigned to a link. Minimizing the number of cell buffers at the receiver significantly decreases net receiver cost.
  • the link can be defined either as a physical link or as a logical grouping comprised of logical connections.
  • the resultant system has eliminated both defects of the presently known art. It eliminates the excessive wasting of link bandwidth that results from reliance on a per-connection flow control mechanism alone, while taking advantage of both a high update frequency at the link level and buffer sharing to minimize the buffer requirements of the receiver. Yet this flow control mechanism still ensures the same lossless transmission of cells as would the prior art.
  • Fig. 1 is a block diagram of a connection-level flow control apparatus as known in the prior art
  • Fig. 2 is a block diagram of a link-level flow control apparatus according to the present invention.
  • Figs. 3A and 3B are flow diagram representations of counter initialization and preparation for cell transmission within a flow control method according to the present invention
  • Fig. 4 is a flow diagram representation of cell transmission within the flow control method according to the present invention
  • Figs. 5A and 5B are flow diagram representations of update cell preparation and transmission within the flow control method according to the present invention.
  • Figs. 6A and 6B are flow diagram representations of an alternative embodiment of the update cell preparation and transmission of Figs. 5A and 5B;
  • Figs. 7A and 7B are flow diagram representations of update cell reception within the flow control method according to the present invention
  • Figs. 8A, 8B and 8C are flow diagram representations of check cell preparation, transmission and reception within the flow control method according to the present invention
  • Figs. 9A, 9B and 9C are flow diagram representations of an alternative embodiment of the check cell preparation, transmission and reception of Figs. 8A, 8B and 8C;
  • Fig. 10 illustrates a cell buffer pool according to the present invention as viewed from an upstream element
  • Fig. 11 is a block diagram of a link-level flow control apparatus in an upstream element providing prioritized access to a shared buffer resource in a downstream element according to the present invention
  • Figs. 12A and 12B are flow diagram representations of counter initialization and preparation for cell transmission within a prioritized access method according to the present invention
  • Figs. 13A and 13B illustrate alternative embodiments of cell buffer pools according to the present invention as viewed from an upstream element;
  • Fig. 14 is a block diagram of a flow control apparatus in an upstream element providing guaranteed minimum bandwidth and prioritized access to a shared buffer resource in a downstream element according to the present invention
  • Figs. 15A and 15B are flow diagram representations of counter initialization and preparation for cell transmission within a guaranteed minimum bandwidth mechanism employing prioritized access according to the present invention
  • Fig. 16 is a block diagram representation of a transmitter, a data link, and a receiver in which the presently disclosed joint flow control mechanism is implemented;
  • Fig. 17 illustrates data structures associated with queues in the receiver of Fig. 16.
  • Referring to Fig. 1, the resources required for connection-level flow control are presented.
  • the illustrated configuration of Fig. 1 is presently known in the art.
  • a brief discussion of a connection-level flow control arrangement will facilitate an explanation of the presently disclosed link-level flow control method and apparatus.
  • One link 10 is shown providing an interface between an upstream transmitter element 12, also known as an UP subsystem, and a downstream receiver element 14, also known as a DP subsystem.
  • Each element 12, 14 can act as a switch between other network elements.
  • The upstream element 12 in Fig. 1 can receive data from a PC (not shown). This data is communicated through the link 10 to the downstream element 14, which in turn can forward the data to a device such as a printer (not shown).
  • the illustrated network elements 12, 14 can themselves be network end-nodes.
  • the essential function of the presently described arrangement is the transfer of data cells from the upstream element 12 via a connection 20 in the link 10 to the downstream element 14, where the data cells are temporarily held in cell buffers 28.
  • Cell format is known, and is further described in "Quantum Flow Control", Version 1.5.1, dated June 27, 1995 and subsequently published in a later version by the Flow Control Consortium.
  • the block labelled Cell Buffers 28 represents a set of cell buffers dedicated to the respective connection 20. Data cells are released from the buffers 28, either through forwarding to another link beyond the downstream element 14, or through cell utilization within the downstream element 14. The latter event can include the construction of data frames from the individual data cells if the downstream element 14 is an end-node such as a work station.
  • Each of the upstream and downstream elements 12, 14 is controlled by a respective processor, labelled UP (Upstream Processor) 16 and DP (Downstream Processor) 18.
  • Associated with each of the processors 16, 18 are sets of buffer counters for implementing the connection-level flow control. These buffer counters are each implemented as an increasing counter/limit register set to facilitate resource usage changes.
  • the counters of Fig. 1, described in further detail below, are implemented in a first embodiment in UP internal RAM.
  • the counter names discussed and illustrated for the prior art utilize some of the same counter names as used with respect to the presently disclosed flow control method and apparatus. This is merely to indicate the presence of a similar function or element in the prior art with respect to counters, registers, or like elements now disclosed.
  • Within the link 10, which in a first embodiment is a copper conductor, multiple virtual connections 20 are provided.
  • the link 10 is a logical grouping of plural virtual connections 20.
  • The number of connections 20 implemented within the link 10 depends upon the needs of the respective network elements 12, 14, as well as the required bandwidth per connection. In Fig. 1, only one connection 20 and associated counters are illustrated for simplicity.
  • For each connection 20, two buffer state counters, BS_Counter 22 and BS_Limit 24, are provided in the upstream element 12.
  • Each is implemented as a fourteen-bit counter/register, allowing a connection to have up to 16,383 buffers. This number would support, for example, 139 Mbps service over a 10,000 kilometer round trip.
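  • As a rough sanity check on that figure, the arithmetic below reproduces it (a sketch only; the 53-byte ATM cell size and the ~200,000 km/s fibre propagation speed are assumptions not stated in this text):
```python
# Cells that can be in flight on a 139 Mbps connection over a 10,000 km
# round trip, assuming 53-byte (424-bit) cells and ~200,000 km/s propagation.
link_rate_bps = 139e6
round_trip_km = 10_000
propagation_km_per_s = 200_000
cell_bits = 53 * 8

rtt_s = round_trip_km / propagation_km_per_s         # 0.05 s
cells_in_flight = link_rate_bps * rtt_s / cell_bits  # roughly 16,400 cells
print(round(cells_in_flight))  # close to 16,383, the ceiling of a 14-bit counter
```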
  • the buffer state counters 22, 24 are employed only if the connection 20 in question is flow-control enabled. That is, a bit in a respective connection descriptor, or queue descriptor, of the UP 16 is set indicating the connection 20 is flow-control enabled.
  • BS_Counter 22 is incremented by the UP 16 each time a data cell is transferred out of the upstream element 12 and through the associated connection 20. Periodically, as described below, this counter 22 is adjusted during an update event based upon information received from the downstream element 14. BS_Counter 22 thus presents an indication of the number of data cells either currently being transmitted in the connection 20 between the upstream and downstream elements 12, 14, or yet unreleased from buffers 28 in the downstream element 14.
  • BS_Limit 24 is set at connection configuration time to reflect the number of buffers 28 available within the receiver 14 for this connection 20. For instance, if BS_Counter 22 for this connection 20 indicates that twenty data cells have been transmitted and BS_Limit 24 indicates that this connection 20 is limited to twenty receiver buffers 28, the UP 16 will inhibit further transmission from the upstream element 12 until an indication is received from the downstream element 14 that further buffer space 28 is available for that connection 20.
  • Tx_Counter 26 is used to count the total number of data cells transmitted by the UP 16 through this connection 20. In the first embodiment, this is a twenty-eight bit counter which rolls over at 0xFFFFFFF. As described later, Tx_Counter 26 is used during a check event to account for errored cells for this connection 20.
  • the DP 18 also manages a set of counters for each connection 20.
  • Buffer_Limit 30 performs a policing function in the downstream element 14 to protect against misbehaving transmitters.
  • The Buffer_Limit register 30 indicates the maximum number of cell buffers 28 in the receiver 14 which this connection 20 can use.
  • BS_Limit 24 is equal to Buffer_Limit 30.
  • This function is coordinated by network management software.
  • To avoid the "dropping" of data cells in transmission, an increase in buffers per connection is reflected first in Buffer_Limit 30 prior to BS_Limit 24. Conversely, a reduction in the number of receiver buffers per connection is reflected first in BS_Limit 24 and thereafter in Buffer_Limit 30.
  • Buffer_Counter 32 provides an indication of the number of buffers 28 in the downstream element 14 which are currently being used for the storage of data cells. As described subsequently, this value is used in providing the upstream element 12 with a more accurate picture of buffer availability in the downstream element 14. Both Buffer_Limit 30 and Buffer_Counter 32 are fourteen bits wide in the first embodiment.
  • N2_Limit 34 determines the frequency of connection flow-rate communication to the upstream transmitter 12. A cell containing such flow-rate information is sent upstream every time the receiver element 14 forwards a number of cells equal to N2_Limit 34 out of the receiver element 14. This updating activity is further described subsequently.
  • N2_Limit 34 is six bits wide.
  • the DP 18 uses N2_Counter 36 to keep track of the number of cells which have been forwarded out of the receiver element 14 since the last time the N2_Limit 34 was reached.
  • N2_Counter 36 is six bits wide.
  • The DP 18 maintains Fwd_Counter 38 to maintain a running count of the total number of cells forwarded through the receiver element 14.
  • This includes buffers released when data cells are utilized for data frame construction in an end-node. When the maximum count for this counter 38 is reached, the counter rolls over to zero and continues.
  • The total number of cells received by the receiver element 14 can be derived by adding Buffer_Counter 32 to Fwd_Counter 38. The latter is employed in correcting the transmitter element 12 for errored cells during the check event, as described below.
  • Fwd_Counter 38 is twenty-eight bits wide in the first embodiment.
  • the DP 18 maintains Rx_Counter 40, a counter which is incremented each time the downstream element 14 receives a data cell through the respective connection 20.
  • The value of this counter 40 is then usable directly in response to check cells and in the generation of an update cell, both of which will be described further below.
  • In steady state, data cells are transmitted from the transmitter element 12 to the receiver element 14.
  • In update mode, buffer occupancy information is returned upstream by the receiver element 14 to correct counter values in the transmitter element 12.
  • Check mode is used to check for cells lost or injected due to transmission errors between the upstream transmitter and downstream receiver elements 12, 14.
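  • The per-connection state described in the preceding paragraphs can be summarized in a short data-structure sketch (Python is used purely for illustration; field names follow the counters above, while counter widths and wrap-around handling are omitted):
```python
from dataclasses import dataclass

@dataclass
class UpstreamConnectionState:    # maintained by the UP 16 for each connection
    bs_counter: int = 0    # cells in flight or not yet released downstream (14 bits)
    bs_limit: int = 0      # receiver buffers granted to this connection (14 bits)
    tx_counter: int = 0    # total cells transmitted, wraps at 0xFFFFFFF (28 bits)

@dataclass
class DownstreamConnectionState:  # maintained by the DP 18 for each connection
    buffer_limit: int = 0    # policing limit on buffers this connection may occupy
    buffer_counter: int = 0  # buffers currently occupied by this connection
    n2_limit: int = 0        # forwarded cells between update cells
    n2_counter: int = 0      # forwarded cells since the last update
    fwd_counter: int = 0     # total cells forwarded (embodiment of Fig. 6A)
    rx_counter: int = 0      # total cells received (embodiment of Fig. 5A)
```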
  • Connection-level counters are augmented with "[i]" to indicate association with one connection [i] of plural possible connections.
  • Prior to any activity, counters in the upstream and downstream elements 12, 14 are initialized, as illustrated in Fig. 3A.
  • Initialization includes zeroing counters, and providing initial values to limit registers such as Link_BS_Limit and Link_Buffer_Limit.
  • Buffer_Limit[i] is shown being initialized to (RTT*BW) + N2, which represents the round-trip time times the virtual connection bandwidth, plus accommodation for delays in processing the update cell.
  • Link_N2_Limit is initialized to a value "X", which represents the buffer state update frequency for the link, and N2_Limit[i] is initialized to a value "Y", which represents the buffer state update frequency for each connection.
  • The UP 16 of the transmitter element 12 determines which virtual connection 20 (VC) has a non-zero cell count (i.e. has a cell ready to transmit), a BS_Counter value less than the BS_Limit, and an indication that the VC is next to send (also in Figs. 3A and 3B).
  • The UP 16 increments BS_Counter 22 and Tx_Counter 26 whenever the UP 16 transmits a data cell over the respective connection 20, assuming flow control is enabled (Fig. 4).
  • When a data cell is forwarded out of the receiver element 14, Buffer_Counter 32 is decremented. Buffer_Counter 32 should never exceed Buffer_Limit 30 when the connection-level flow control protocol is enabled, with the exception of when BS_Limit 24 has been decreased and the receiver element 14 has yet to forward sufficient cells to bring Buffer_Counter 32 below Buffer_Limit 30.
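  • A minimal sketch of this per-connection bookkeeping, using per-connection state like the sketch above (illustrative only; the flow-control-enabled test, VC scheduling, and cell queues are omitted):
```python
def upstream_try_send(conn):
    """UP side (Figs. 3B, 4): send one ready cell only while the window is open."""
    if conn.bs_counter < conn.bs_limit:
        conn.bs_counter += 1   # now in flight or buffered downstream
        conn.tx_counter += 1   # lifetime count, consumed by check events
        return True            # transmit the cell
    return False               # inhibit transmission for now

def downstream_receive(conn):
    """DP side: police against misbehaving transmitters, then buffer the cell."""
    if conn.buffer_counter >= conn.buffer_limit:
        return False           # limit exceeded; cell is discarded
    conn.buffer_counter += 1
    conn.rx_counter += 1
    return True

def downstream_forward(conn):
    """DP side: a buffered cell is forwarded (or consumed) out of the receiver."""
    conn.buffer_counter -= 1
    conn.fwd_counter += 1
    conn.n2_counter += 1       # an update is due once this reaches n2_limit
```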
  • A buffer state update occurs when the receiver element 14 has forwarded a number of data cells equal to N2_Limit 34 out of the receiver element 14.
  • In one embodiment, the update involves the transfer of the value of Fwd_Counter 38 from the receiver element 14 back to the transmitter element 12 in an update cell, as in Fig. 6A.
  • In an alternative embodiment, the value of Rx_Counter 40 minus Buffer_Counter 32 is conveyed in the update cell, as in Fig. 5A.
  • the update cell is used to update the value in BS_Counter 22, as shown for the two embodiments in Fig. 7A. Since BS_Counter 22 is independent of buffer allocation information, buffer allocation can be changed without impacting the performance of this aspect of connection-level flow control.
  • Update cells require an allocated bandwidth to ensure a bounded delay. This delay needs to be accounted for, as a component of round-trip time, to determine the buffer allocation for the respective connection.
  • The amount of bandwidth allocated to the update cells is a function of a counter, Max_Update_Counter (not illustrated), at an associated downstream transmitter element (not illustrated).
  • This counter forces the scheduling of update and check cells, the latter to be discussed subsequently.
  • An update event occurs as follows, with regard to Figs. 1, 5A and 6A.
  • When N2_Counter 36 is equal to N2_Limit 34, the DP 18 prepares an update cell for transmission back to the upstream element 12, and N2_Counter 36 is set to zero.
  • The upstream element 12 receives a connection indicator, taken from the cell forwarded by the downstream element 14, to identify which connection 20 is to be updated.
  • the DP 18 causes the Fwd_Counter 38 value to be inserted into an update record payload (Fig. 6A) .
  • In the alternative embodiment, the DP 18 causes the Rx_Counter 40 value minus the Buffer_Counter 32 value to be inserted into the update record payload (Fig. 5A).
  • The update cell is transmitted to the upstream element 12.
  • the UP 16 receives the connection indicator from the update record to identify the transmitter connection, and extracts the Fwd_Counter 38 value or the Rx_Counter 40 minus Buffer_Counter 32 value from the update record.
  • The update event provides the transmitting element 12 with an indication of how many cells originally transmitted by it have now been released from buffers within the receiving element 14, and thus provides the transmitting element 12 with a more accurate indication of receiver element 14 buffer 28 availability for that connection 20.
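  • The update exchange can be sketched as follows (Python illustration; the per-connection adjustment of BS_Counter 22 is assumed here to mirror the link-level rule given later, i.e. transmitted cells minus released cells, since the text above only states that BS_Counter is updated):
```python
def downstream_after_forward(conn):
    """DP side (Figs. 5A / 6A): after each forwarded cell, emit an update record
    once N2_Limit cells have been forwarded."""
    if conn.n2_counter >= conn.n2_limit:
        conn.n2_counter = 0
        value = conn.fwd_counter                         # embodiment of Fig. 6A
        # value = conn.rx_counter - conn.buffer_counter  # embodiment of Fig. 5A
        return value   # carried upstream together with the connection indicator
    return None

def upstream_on_update(conn, update_value):
    """UP side (Fig. 7A): refresh the count of cells still held downstream."""
    conn.bs_counter = conn.tx_counter - update_value     # assumed adjustment rule
```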
  • The buffer state check event serves two purposes: 1) it provides a mechanism to calculate and compensate for cell loss or cell insertion due to transmission errors; and 2) it provides a mechanism to start (or restart) a flow if update cells were lost or if enough data cells were lost that N2_Limit 34 is never reached.
  • One timer (not shown) in the UP subsystem 16 serves all connections.
  • The connections are enabled or disabled on a per-connection basis as to whether to send check cells from the upstream transmitter element 12 to the downstream receiver element 14.
  • The check process in the transmitter element 12 involves searching all of the connection descriptors to find one which is check enabled (see Figs. 8A, 9A).
  • Once a minimum pacing interval has elapsed (the check interval) the check cell is forwarded to the receiver element 14 and the next check enabled connection is identified.
  • The spacing between check cells for the same connection is a function of the number of active flow-controlled connections times the mandated spacing between check cells for all connections.
  • Check cells have priority over update cells.
  • The check event occurs as follows, with regard to Figs. 8A through 8C and 9A through 9C.
  • Each transmit element 12 connection 20 is checked after a timed check interval is reached. If the connection is flow-control enabled and the connection is valid, then a check event is scheduled for transmission to the receiver element 14.
  • A buffer state check cell is generated using the Tx_Counter 26 value for that connection 20 in the check cell payload, and is transmitted using the connection indicator from the respective connection descriptor (Figs. 8A and 9A).
  • A calculation of errored cells is made at the receiver element 14 by summing Fwd_Counter 38 with Buffer_Counter 32, and subtracting this value from the contents of the transmitted check cell record, the value of Tx_Counter 26 (Fig. 9B).
  • the value of Fwd_Counter 38 is increased by the errored cell count.
  • An update record with the new value for Fwd_Counter 38 is then generated. This updated Fwd_Counter 38 value subsequently updates the BS_Counter 22 value in the transmitter element 12.
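  • A sketch of the per-connection check exchange described above (illustrative Python; counter wrap-around at 0xFFFFFFF is ignored for brevity):
```python
def upstream_send_check(conn):
    """UP side (Figs. 8A / 9A): at the check interval, send Tx_Counter downstream."""
    return conn.tx_counter

def downstream_on_check(conn, tx_counter_from_check):
    """DP side (Figs. 9B / 9C): fold cells lost in transit into Fwd_Counter so the
    next update record re-opens the transmitter's window."""
    errored = tx_counter_from_check - (conn.fwd_counter + conn.buffer_counter)
    conn.fwd_counter += errored
    return conn.fwd_counter   # returned upstream in a fresh update record
```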
  • Link-level flow control, also known as link-level buffer state accounting, is added to connection-level flow control. It is possible for such link-level flow control to be implemented without connection-level flow control. However, a combination of the two is preferable since without connection-level flow control there would be no restriction on the number of buffers a single connection might consume.
  • Link-level flow control enables cell buffer sharing at a receiver element while maintaining the "no cell loss" guarantee afforded by connection-level flow control. Buffer sharing results in the most efficient use of a limited number of buffers. Rather than provide a number of buffers equal to bandwidth times RTT for each connection, a smaller number of buffers is employable in the receiver element 14 since not all connections require a full complement of buffers at any one time.
  • A further benefit of link-level buffer state accounting is that each connection is provided with an accurate representation of downstream buffer availability without necessitating increased reverse bandwidth for each connection. A high-frequency link-level update does not significantly affect overall per-connection bandwidth.
  • In Fig. 2, the upstream transmitter element 12' (FSPP subsystem) includes a processor labelled From Switch Port Processor (FSPP) 16'.
  • The FSPP processor 16' is provided with two buffer state counters, BS_Counter 22' and BS_Limit 24', and a Tx_Counter 26', each having the same function on a per-connection basis as those described with respect to Fig. 1.
  • The embodiment of Fig. 2 further includes a set of resources added to the upstream and downstream elements 12', 14' which enable link-level buffer accounting.
  • These resources provide functions similar to those utilized on a per-connection basis, yet they operate on the link level.
  • Link_BS_Counter 50 tracks all cells in flight between the FSPP 16' and elements downstream of the receiver element 14', including cells in transit between the transmitter 12' and the receiver 14' and cells stored within receiver 14' buffers 28'.
  • Link_BS_Counter 50 is modified during a link update event by subtracting either the Link_Fwd_Counter 68 value or the difference between Link_Rx_Counter 70 and Link_Buffer_Counter 62 from the Link_Tx_Counter 54 value.
  • The link-level counters are implemented in external RAM associated with the FSPP processor 16'.
  • Link_BS_Limit 52 limits the number of shared downstream cell buffers 28' in the receiver element 14' to be shared among all of the flow-control enabled connections 20'.
  • Link_BS_Counter 50 and Link_BS_Limit 52 are both twenty bits wide.
  • Link_Tx_Counter 54 tracks all cells transmitted onto the link 10'. It is used during the link-level update event to calculate a new value for Link_BS_Counter 50.
  • Link_Tx_Counter 54 is twenty-eight bits wide in the first embodiment.
  • The To Switch Port Processor (TSPP) 18' also manages a set of counters for each link 10', in the same fashion as the commonly illustrated counters in Figs. 1 and 2.
  • The TSPP 18' further includes a Link_Buffer_Limit 60 which performs a function in the downstream element 14' similar to Link_BS_Limit 52 in the upstream element 12' by indicating the maximum number of cell buffers 28' in the receiver 14' available for use by all connections 20'. In most cases, Link_BS_Limit 52 is equal to Link_Buffer_Limit 60.
  • Link_Buffer_Limit 60 is twenty bits wide in the first embodiment.
  • Link_Buffer_Counter 62 provides an indication of the number of buffers in the downstream element 14' which are currently being used by all connections for the storage of data cells. This value is used in a check event to correct the Link_Fwd_Counter 68 (described subsequently).
  • Link_Buffer_Counter 62 is twenty bits wide in the first embodiment.
  • Link_N2_Limit 64 and Link_N2_Counter 66 are used to generate link update records, which are intermixed with connection-level update records.
  • Link_N2_Limit 64 establishes a threshold number for triggering the generation of a link-level update record (Figs. 5B and 6B).
  • Link_N2_Counter 66 and Link_Fwd_Counter 68 are incremented each time a cell is released out of a buffer cell in the receiver element 14'.
  • In one embodiment, N2_Limit 34' and Link_N2_Limit 64 are both static once initially configured.
  • In an alternative embodiment, each is dynamically adjustable based upon measured bandwidth. For instance, if forward link bandwidth is relatively high, Link_N2_Limit 64 could be adjusted down to cause more frequent link-level update record transmission. Any forward bandwidth impact would be considered minimal. Lower forward bandwidth would enable the raising of Link_N2_Limit 64, since the unknown availability of buffers 28' in the downstream element 14' is less critical.
  • Link_Fwd_Counter 68 tracks all cells released from buffer cells 28' in the receiver element 14' that came from the link 10' in question. It is twenty-eight bits wide in a first embodiment, and is used in the update event to recalculate Link_BS_Counter 50.
  • Link_Rx_Counter 70 is employed in an alternative embodiment in which Link_Fwd_Counter 68 is not employed. It is also twenty-eight bits wide in an illustrative embodiment and tracks the number of cells received across all connections 20' in the link 10'.
  • A receiver element buffer sharing method is now described. Normal data transfer by the FSPP 16' in the upstream element 12' to the TSPP 18' in the downstream element 14' is enabled across all connections 20' in the link 10' as long as Link_BS_Counter 50 is less than or equal to Link_BS_Limit 52, as in Fig. 3B. This test prevents the FSPP 16' from transmitting more data cells than it believes are available in the downstream element 14'. The accuracy of this belief is maintained through the update and check events, described next.
  • A data cell is received at the downstream element 14' only if neither the connection-level nor the link-level buffer limit is exceeded (Fig. 3B). If a limit is exceeded, the cell is discarded.
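  • Combining the two levels, the transmit gate and the receive-side acceptance test can be sketched as below (illustrative Python; the link object simply collects the Link_* counters and limits described above):
```python
def upstream_may_send(conn, link):
    """FSPP gate (Fig. 3B): both the per-connection window and the shared
    link-level window must be open before a cell is scheduled."""
    return (conn.bs_counter < conn.bs_limit and
            link.link_bs_counter <= link.link_bs_limit)

def downstream_accept(conn, link):
    """TSPP side: buffer a cell only if neither the connection-level nor the
    link-level buffer limit would be exceeded; otherwise discard it."""
    if (conn.buffer_counter >= conn.buffer_limit or
            link.link_buffer_counter >= link.link_buffer_limit):
        return False
    conn.buffer_counter += 1
    link.link_buffer_counter += 1
    return True
```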
  • The update event at the link level involves the generation of a link update record when the value in Link_N2_Counter 66 reaches (equals or exceeds) the value in Link_N2_Limit 64, as shown in Figs. 5B and 6B.
  • Link_N2_Limit 64 is set to forty.
  • The link update record value, taken from Link_Fwd_Counter 68 in the embodiment of Fig. 6B, is mixed with the per-connection update records (the value of Fwd_Counter 38') in update cells transferred to the FSPP 16'.
  • In the alternative embodiment of Fig. 5B, the value of Link_Rx_Counter 70 minus Link_Buffer_Counter 62 is mixed with the per-connection update records.
  • the upstream element 12' sets the Link_BS_Counter 50 equal to the value of Link_Tx_Counter 54 minus the value in the update record (Fig. 7B) .
  • Link_BS_Counter 50 in the upstream element 12' is reset to reflect the number of data cells transmitted by the upstream element 12', but not yet released in the downstream element 14'.
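  • The link-level update exchange then reduces to the sketch below (Python illustration; the mixing of link and per-connection records into one update cell is omitted):
```python
def downstream_link_update(link):
    """TSPP side (Figs. 5B / 6B): emit a link update record after Link_N2_Limit
    buffers have been released."""
    if link.link_n2_counter >= link.link_n2_limit:
        link.link_n2_counter = 0
        value = link.link_fwd_counter                              # Fig. 6B
        # value = link.link_rx_counter - link.link_buffer_counter  # Fig. 5B
        return value
    return None

def upstream_on_link_update(link, value):
    """FSPP side (Fig. 7B): cells transmitted minus cells released downstream
    equals cells still in flight or buffered in the shared pool."""
    link.link_bs_counter = link.link_tx_counter - value
```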
  • The actual implementation of the transfer of an update record recognizes that for each TSPP subsystem 14', there is an associated FSPP processor (not illustrated), and for each FSPP subsystem 12', there is also an associated TSPP processor (not illustrated). Thus, when an update record is ready to be transmitted by the TSPP subsystem 14' back to the upstream FSPP subsystem 12', the TSPP 18' conveys the update record to the associated FSPP (not illustrated), which constructs an update cell.
  • The cell is conveyed from the associated FSPP to the TSPP (not illustrated) associated with the upstream FSPP subsystem 12'.
  • The associated TSPP strips out the update record from the received update cell, and conveys the record to the upstream FSPP subsystem 12'.
  • The check event at the link level involves the transmission of a check cell having the Link_Tx_Counter 54 value by the FSPP 16' every "W" check cells (Figs. 8A and 9A).
  • W is equal to four.
  • The TSPP 18' performs the previously described check functions at the connection level, as well as increasing the Link_Fwd_Counter 68 value by an amount equal to the check record contents, Link_Tx_Counter 54, minus the sum of Link_Buffer_Counter 62 plus Link_Fwd_Counter 68, in the embodiment of Fig. 9C.
  • In the embodiment of Fig. 8C, Link_Rx_Counter 70 is instead modified to equal the contents of the check record (Link_Tx_Counter 54). This is an accounting for errored cells on a link-wide basis.
  • An update record is then generated having a value taken from the updated Link_Fwd_Counter 68 or Link_Rx_Counter 70 value (Figs. 9C and 8C, respectively).
  • Resynchronizing the Link_Rx_Counter 70 value (Fig. 8C) allows recovery quickly in the case of large transient link failures.
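  • A sketch of the link-level check handling at the receiver (illustrative Python; one embodiment is shown live, the other as comments):
```python
def downstream_on_link_check(link, link_tx_from_check):
    """TSPP side: reconcile link-wide counts against the Link_Tx_Counter value
    carried in the check cell, then return the value for a link update record."""
    # Embodiment of Fig. 9C: fold lost cells into Link_Fwd_Counter.
    errored = link_tx_from_check - (link.link_buffer_counter + link.link_fwd_counter)
    link.link_fwd_counter += errored
    return link.link_fwd_counter

    # Embodiment of Fig. 8C (alternative): resynchronize the receive count instead,
    # which recovers quickly after large transient link failures.
    # link.link_rx_counter = link_tx_from_check
    # return link.link_rx_counter - link.link_buffer_counter
```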
  • the BS_Limit value equals the Buffer_Limit value for both the connections and the link.
  • For example, if BS_Limit 24' and Buffer_Limit 30' are both equal to twenty, and there are 100 connections in this link, there may be only 1000 buffers 28' in the downstream element (rather than the 2000 that dedicated per-connection allocation would require), as reflected by Link_BS_Limit 52 and Link_Buffer_Limit 60. This is because of the buffer pool sharing enabled by link-level feedback.
  • Link-level flow control can be disabled, should the need arise, by not incrementing Link_BS_Counter, Link_N2_Counter, and Link_Buffer_Counter, and by disabling link-level check cell transfer. No updates will occur under these conditions.
  • The presently described invention can be further augmented with a dynamic buffer allocation scheme, such as previously described with respect to N2_Limit 34 and Link_N2_Limit 64.
  • This scheme includes the ability to dynamically adjust limiting parameters such as BS_Limit 24, Link_BS_Limit 52, Buffer_Limit 30, and Link_Buffer_Limit 60, in addition to N2_Limit 34 and Link_N2_Limit 64.
  • Dynamic buffer allocation thus provides the ability to prioritize one or more connections or links given a limited buffer resource.
  • Link_N2_Limit is set according to the desired accuracy of buffer accounting. On a link-wide basis, as the number of connections within the link increases, it may be desirable to decrease Link_N2_Limit, since accurate buffer accounting allows greater buffer sharing among many connections. Conversely, if the number of connections within the link decreases, Link_N2_Limit may be increased, since the criticality of sharing limited resources among a relatively small number of connections is decreased.
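  • One illustrative policy for such an adjustment is sketched below (this particular rule is an assumption for illustration; the text above prescribes only the direction of the adjustment, and the base value of forty is taken from the first embodiment mentioned earlier):
```python
def adjust_link_n2_limit(link, active_connections, base_limit=40):
    """Tighten the link update interval as more connections share the pool,
    and relax it when few connections contend for the shared buffers."""
    if active_connections > 100:
        link.link_n2_limit = max(base_limit // 2, 1)   # more frequent updates
    elif active_connections < 10:
        link.link_n2_limit = base_limit * 2            # less frequent updates
    else:
        link.link_n2_limit = base_limit
```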
  • The presently disclosed dynamic allocation schemes are implemented during link operation, based upon previously prescribed performance goals.
  • Though incrementing logic for all counters is disposed within the FSPP processor 16', the counters can be implemented in a further embodiment as starting at the limit and counting down to zero.
  • The transmitter and receiver processors interpret the limits as starting points for the respective counters, and decrement upon detection of the appropriate event. For instance, if Buffer_Counter (or Link_Buffer_Counter) is implemented as a decrementing counter, each time a data cell is allocated to a buffer within the receiver, the counter would decrement.
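  • For instance, the down-counting variant of the receive-side test sketched earlier would look like this (illustrative only):
```python
def downstream_receive_decrementing(conn):
    """Down-counting variant: buffer_counter starts at Buffer_Limit and is
    decremented per buffered cell, so zero means no buffers remain."""
    if conn.buffer_counter == 0:
        return False   # equivalent to Buffer_Counter == Buffer_Limit above
    conn.buffer_counter -= 1
    return True
```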
  • A further enhancement of the foregoing zero cell loss, link-level flow control technique includes providing a plurality of shared cell buffers 28" in a downstream element 14", wherein the cell buffers 28" are divided into N prioritized cell buffer subsets, Priority 0 108a, Priority 1 108b, Priority 2 108c, and Priority 3 108d, by N - 1 threshold level(s), Threshold(1) 102, Threshold(2) 104, and Threshold(3) 106.
  • Such a cell buffer pool 28" is illustrated in Fig. 10.
  • This prioritized buffer pool enables the transmission of high priority connections while lower priority connections are "starved" or prevented from transmitting cells downstream during periods of link congestion.
  • Cell priorities are identified on a per-connection basis.
  • The policy by which the thresholds are established is defined according to a predicted model of cell traffic in a first embodiment, or, in an alternative embodiment, is dynamically adjusted. Such dynamic adjustment may be in response to observed cell traffic at an upstream transmitting element, or according to empirical cell traffic data as observed at the prioritized buffer pool in the downstream element.
  • The cell buffer pool 28" depicted in Fig. 10 is taken from the vantage point of a modified version 12" of the foregoing link-level flow control upstream element 12', the pool 28" being resident within a corresponding downstream element 14".
  • This modified upstream element 12 viewed in Fig. 11, has at least one Link_BS_Threshold(n) 100, 102, 104 established in association with a Link_BS_Counter 50" and Link_BS_Limit 52", as described above, for characterizing a cell buffer pool 28" in a downstream element 14".
  • Link_BS_Thresholds 102, 104, 106 define a number of cell buffers in the pool 28" which are allocatable to cells of a given priority, wherein the priority is identified by a register 108 as ⁇ ociated with the BS_Counter 22" counter and BS_Limit 24" regi ⁇ ter for each connection 20".
  • the Prioritie ⁇ 108a, 108b, 108c, 108d illu ⁇ trated in Fig. 11 are identified a ⁇ Priority 0 through Priority 3, Priority 0 being the highe ⁇ t.
  • connection-level flow control can still prevent a high-priority connection from transmitting, if the path that connection is intended for is ⁇ everely conge ⁇ ted.
  • Link BS Counter 50 is periodically updated ba ⁇ ed upon a value contained within a link-level update record transmitted from the downstream element 14" to the upstream element 12". This periodic updating is required in order to ensure accurate function of the prioritized buffer acces ⁇ of the pre ⁇ ent invention.
  • the Threshold levels 102, 104, 106 are modified dynamically, either as a result of tracking the priority associated with cells received at the upstream transmitter element or based upon observed buffer usage in the down ⁇ tream receiver element, it is necessary for the FSPP 16" to have an accurate record of the state of the cell buffer ⁇ 28", as afforded by the update function.
  • the multiple priority levels enable different categories of service, in terms of delay bounds, to be offered within a single quality of service.
  • Highest priority access to shared buffers is typically given to connection/network management traffic, as identified by the cell header.
  • Initialization of the upstream element 12" as depicted in Fig. 11 is illustrated in Fig. 12A.
  • The same counters and registers are set as viewed in Fig. 3A for an upstream element 12' not enabling prioritized access to a shared buffer resource, with the exception that the Link_BS_Threshold 102, 104, 106 values are initialized to a respective buffer value T.
  • These threshold buffer values can be pre-established and static, or can be adjusted dynamically based upon empirical buffer usage data.
  • Fig. 12B represents many of the same tests employed prior to forwarding a cell from the upstream element 12" to the downstream element 14" as shown in Fig. 3B, with the exception that an additional test is added for the provision of prioritized access to a shared buffer resource.
  • The FSPP 16" uses the priority value 108 associated with a cell to be transferred to determine a threshold value 102, 104, 106 above which the cell cannot be transferred to the downstream element 14". A test is then made to determine whether the Link_BS_Counter 50" value is greater than or equal to the appropriate threshold value 102, 104, 106. If so, the data cell is not transmitted. Otherwise, the cell is transmitted and connection-level congestion tests are executed, as previously described. (A minimal sketch of this threshold test appears after this list.)
  • More or less than four priorities can be implemented with the appropriate number of thresholds, wherein the fewest number of priorities is two and the corresponding fewest number of thresholds is one. For every N priorities, there are N - 1 thresholds.
  • Flow control may alternatively be provided solely at the link level, and not at the connection level, though it is still necessary for each connection to provide some form of priority indication akin to the priority field 108 illustrated in Fig. 11.
  • The link level flow controlled protocol as previously described can be further augmented in yet another embodiment to enable a guaranteed minimum cell rate on a per-connection basis with zero cell loss.
  • This minimum cell rate is also referred to as guaranteed bandwidth.
  • The connection can be flow-controlled below this minimum, allocated rate, but only by the receiver elements associated with this connection. Therefore, the minimum rate of one connection is not affected by congestion within other connections. It is a requirement of the presently disclosed mechanism that cells present at the upstream element associated with the FSPP 116 be identified by whether they are to be transmitted from the upstream element using allocated bandwidth, or whether they are to be transmitted using dynamic bandwidth.
  • the cells may be provided in queues associated with a list labelled "preferred,” indicative of cells requiring allocated bandwidth.
  • the cells may be provided in queues associated with a list labelled "dynamic,” indicative of cells requiring dynamic bandwidth.
  • The present mechanism is used to monitor and limit both dynamic and allocated bandwidth. In a setting involving purely internet traffic, only the dynamic portions of the mechanism may be of significance. In a setting involving purely CBR flow, only the allocated portions of the mechanism would be employed. Thus, the presently disclosed method and apparatus enables the maximized use of mixed scheduling connections, from those requiring all allocated bandwidth to those requiring all dynamic bandwidth, and connections therebetween.
  • In the present mechanism, a downstream cell buffer pool 128 is apportioned into an allocated portion 300 and a dynamic portion 301.
  • Fig. 13A shows the two portions 300, 301 as distinct entities; the allocated portion is not a physically distinct block of memory, but represents a number of individual cell buffers, located anywhere in the pool 128.
  • A downstream buffer pool 228 is logically divided among an allocated portion 302 and a dynamic portion 208, the latter logically subdivided by threshold levels 202, 204, 206 into prioritized cell buffer subsets 208a-d.
  • the division of the buffer pool 228 is a logical, not physical, division.
  • Elements required to implement this guaranteed minimum bandwidth mechanism are illustrated in Fig. 14, where like elements from Figs. 2 and 11 are provided with like reference numbers, increased by 100 or 200. Note that no new elements have been added to the downstream element; the presently described guaranteed minimum bandwidth mechanism is transparent to the downstream element.
  • D_BS_Counter 122 tracks resource consumption by counting the number of cells, scheduled using dynamic bandwidth, transmitted downstream to the receiver 114. This counter has essentially the same function as BS_Counter 22' found in Fig. 2, where there was no differentiation between allocated and dynamically scheduled cell traffic.
  • D_BS_Limit 124, used to provide a ceiling on the number of downstream buffers available to store cells from the transmitter 112, finds a corresponding function in BS_Limit 24' of Fig. 2.
  • the dynamic bandwidth can be statistically shared; the actual number of buffers available for dynamic cell traffic can be over-allocated.
  • the amount of "D" buffers provided to a connection is equal to the RTT times the dynamic bandwidth plus N2. RTT includes delays incurred in processing the update cell.
  • A_BS_Counter 222 and A_BS_Limit 224 also track and limit, respectively, the number of cells a connection can transmit, by comparing a transmitted number with a limit on buffers available. However, these values apply strictly to allocated cells; allocated cells are those identified as requiring allocated bandwidth (the guaranteed minimum bandwidth) for transmission. Limit information is set up at connection initialization time and can be raised and lowered as the guaranteed minimum bandwidth is changed. If a connection does not have an allocated component, the A_BS_Limit 224 will be zero.
  • The A_BS_Counter 222 and A_BS_Limit 224 are in addition to the D_BS_Counter 122 and D_BS_Limit 124 described above.
  • The amount of "A" buffers dedicated to a connection is equal to the RTT times the allocated bandwidth plus N2.
  • The actual number of buffers dedicated to allocated traffic cannot be over-allocated. This ensures that congestion on other connections does not impact the guaranteed minimum bandwidth.
  • A connection loses, or runs out of, its allocated bandwidth through the associated upstream switch once it has enqueued a cell but has no more "A" buffers, as reflected by A_BS_Counter 222 and A_BS_Limit 224. If a connection is flow controlled below its allocated rate, it loses a portion of its allocated bandwidth in the switch until the congestion condition is alleviated. Such may be the case in multipoint-to-point (M2P) switching, where plural sources on the same connection, all having a minimum guaranteed rate, converge on a single egress point whose rate is less than the sum of the source rates.
  • The condition of not having further "A" buffer states inhibits the intra-switch transmission of further allocated cell traffic for that connection.
  • The per-connection buffer return policy is to return buffers to the allocated pool first, until the A_BS_Counter 222 equals zero. Then buffers are returned to the dynamic pool, decreasing D_BS_Counter 122.
  • Tx_Counter 126 and Priority 208 are provided as described above with respect to connection-level flow control and prioritized access.
  • Link_A_BS_Counter 250 is added to the FSPP 116. It tracks all cells identified as requiring allocated bandwidth that are "in-flight" between the FSPP 116 and the downstream switch fabric, including cells in the TSPP 118 cell buffers 128, 228. The counter 250 is decreased by the same amount as the A_BS_Counter 222 for each connection when a connection-level update function occurs (discussed subsequently).
  • Link_BS_Limit 152 reflects the total number of buffers available to dynamic cells only, and does not include allocated buffers.
  • Link_BS_Counter 150 reflects a total number of allocated and dynamic cells transmitted.
  • Connections are not able to use their dynamic bandwidth when Link_BS_Counter 150 (all cells in-flight, buffered, or in the downstream switch fabric) minus Link_A_BS_Counter 250 (all allocated cells transmitted) is greater than Link_BS_Limit 152 (the maximum number of dynamic buffers available). This is necessary to ensure that congestion does not impact the allocated bandwidth. (A sketch of this combined allocated/dynamic accounting appears after this list.)
  • The sum of all individual A_BS_Limit 224 values, or the total per-connection allocated cell buffer space 300, 302, is in one embodiment less than the actually dedicated allocated cell buffer space, in order to account for the potential effect of stale (i.e., low frequency) connection-level updates.
  • Update and check events are also implemented in the presently disclosed allocated/dynamic flow control mechanism.
  • The downstream element 114 transmits connection level update cells when either a preferred list and a VBR-priority 0 list are empty and an update queue is fully packed, or when a "max_update_interval" (not illustrated) has been reached.
  • The update cell is analyzed to identify the appropriate queue, and the FSPP 116 adjusts the A_BS_Counter 222 and D_BS_Counter 122 for that queue, returning cell buffers to "A" first and then to "D", as described above, since the FSPP 116 cannot distinguish between allocated and dynamic buffers.
  • the number of "A" buffers returned to individual connections is subtracted from Link_A_BS_Counter 250.
  • Other link level elements used in association with the presently disclosed minimum guaranteed bandwidth mechanism, such as Link_Tx_Counter 154, function as described in the foregoing discussion of link level flow control.
  • As previously noted, a further embodiment of the presently described mechanism functions with a link level flow control scenario incorporating prioritized access to the downstream buffer resource 228 through the use of thresholds 202, 204, 206.
  • The function of these elements is as described in the foregoing.
  • As an example: the downstream element has 3000 buffers;
  • the link is short haul, so RTT * bandwidth equals one cell, and 100 allocated connections requiring 7 "A" buffers each consume 700 buffers total;
  • the remaining 3000 - 700 = 2300 "D" buffers are to be shared among 512 connections having zero allocated bandwidth;
  • accordingly, Link_BS_Limit = 2300.
  • Initialization exceptions include: Link_A_BS_Counter 250 initialized to zero; connection-level dynamic and allocated BS_Counters 122, 222 set to zero; and connection-level dynamic and allocated BS_Limits 124, 224 set to their respective values, N_D and N_A.
  • BW_A = allocated cell bandwidth.
  • BW_D = dynamic cell bandwidth.
  • each cell to be transmitted is identified as either requiring allocated or dynamic bandwidth as the cell is received from the switch fabric.
  • Fig. 15B represents many of the same tests employed prior to forwarding a cell from the upstream element 112 to the downstream element 114 as shown in Figs. 3B and 12B, with the following exceptions.
  • Over-allocation of buffer states per connection is checked for dynamic traffic only and is calculated by subtracting Link_A_BS_Counter from Link_BS_Counter and comparing the result to Link_BS_Limit. Over-allocation on a link-wide basis is calculated from a summation of Link_BS_Counter (which tracks both allocated and dynamic cell traffic) and Link_A_BS_Counter against the Link_BS_Limit. Similarly, over-allocation at the downstream element is tested for both allocated and dynamic traffic at the connection level.
  • The presently disclosed mechanism for providing guaranteed minimum bandwidth can be utilized with or without the prioritized access mechanism, though aspects of the latter are illustrated in Figs. 15A and 15B for completeness.
  • connection-level flow control as known in the art relies upon discrete control of each individual connection.
  • The control is from transmitter queue to receiver queue. Thus, even in the situation illustrated in Fig. 16, in which a single queue Q_A in a transmitter element is the source of data cells for four queues Q_W, Q_X, Q_Y, and Q_Z associated with a single receiver processor, the prior art does not define any mechanism to handle this situation.
  • The transmitter element 10 is an FSPP element having an FSPP 11 associated therewith, and the receiver element 12 is a TSPP element having a TSPP 13 associated therewith.
  • The FSPP 11 and TSPP 13 as employed in Fig. 16 selectively provide the same programmable capabilities as described above, such as link-level flow control, prioritized access to a shared, downstream buffer resource, and guaranteed minimum cell rate on a connection level, in addition to a connection-level flow control mechanism. Whether one or more of these enhanced capabilities are employed in conjunction with the connection-level flow control is at the option of the system configurator.
  • Yet another capability provided by the FSPP and TSPP according to the present disclosure is the ability to treat a group of receiver queues jointly for purposes of connection-level flow control.
  • The presently disclosed mechanism utilizes one connection 16 in a link 14, terminating in four separate queues Q_W, Q_X, Q_Y, and Q_Z, though the four queues are treated essentially as a single, joint entity for purposes of connection-level flow control.
  • N2 is set to a low value, 10 or less (see above for a discussion of the update event in connection-level flow control).
  • Setting N2 to a large value, such as 30, for a large number of connections requires large amounts of downstream buffering because of buffer orphaning, where buffers are not in use but are accounted for upstream as in use because of the lower frequency of update events.
  • This mechanism is also useful to terminate Virtual Channel Connections (VCC) within a Virtual Path Connection (VPC), where flow control is applied to the VPC.
  • In Fig. 17, queue descriptors for the queues in the receiver are illustrated. Specifically, the descriptors for queues Q_W, Q_X, and Q_Y are provided on the left, and in general have the same characteristics.
  • One of the first fields pertinent to the present disclosure is a bit labelled "J." When set, this bit indicates that the associated queue is being treated as part of a joint connection in a receiver.
  • The group descriptor maintains the joint counters: a Joint_Buffer_Counter (labelled "Jt_Buff_Cntr" in Fig. 17), a Joint_N2_Counter, and a Joint_Forward_Counter (labelled "Jt_Fwd_Cntr").
  • The joint counters perform the same function as the individual counters, such as those illustrated in Fig. 2 at the connection level, but are advanced or decremented as appropriate by actions taken in association with any of the individual queues in the group (a minimal sketch of this joint accounting appears after this list).
  • Joint_Buffer_Counter is updated whenever a cell buffer receives a data cell or releases a data cell in association with any of the group queues.
  • Joint_N2_Counter and Joint_Forward_Counter are likewise updated whenever a data cell is received or released in association with any of the group queues.
  • In an alternative embodiment, each Forward_Counter is replaced with a Receive_Counter, and Joint_Forward_Counter is replaced with Joint_Receive_Counter, depending upon which is maintained in each of the group queues. Only the embodiment including Forward_Counter and Joint_Forward_Counter is illustrated.
  • Buffer_Limit (labelled "Buff_Limit" in Fig. 17) is set and referred to on a per-queue basis.
  • Joint_Buffer_Counter is compared against the Buffer_Limit of a respective queue.
  • The Buffer_Limit could instead be a Joint_Buffer_Limit, rather than maintaining individual, common limits.
  • The policy is to set the same Buffer_Limit in all the TSPP queues associated with a single Joint_Buffer_Counter.
  • An update event is triggered, as previously described, when the Joint_N2_Counter reaches the queue-level N2_Limit.
  • The policy is to set all of the N2_Limits equal to the same value for all the queues associated with a single joint flow control connection.
  • When a check cell is received for a connection, an effort to modify the Receive_Counter associated with the receiving queue results in a modification of the Joint_Receive_Counter.
  • The level of indirection provided by the Joint_Number is applicable to both data cells and check cells.
  • The presently disclosed mechanism requires only one set of transmitter elements (Tx_Counter, BS_Counter, and BS_Limit, all having the functionality previously described).
  • Each new queue added to the group must have the same N2_Limit and Buffer_Limit values.
  • The queues for the additional connections will reference the common Joint_N2_Counter and either Joint_Forward_Counter or Joint_Receive_Counter.
  • The Joint_Number field is used as an offset to the group descriptor.
  • The Joint_Number for the group descriptor is set to itself, as shown in Fig. 17 with regard to the descriptor for queue Q_Z.
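The prioritized-access test of Figs. 11 and 12B, referenced in the list above, can be summarized in a short sketch. This is a minimal C illustration only: the structure, the function name, and the mapping of the three Threshold values onto per-priority ceilings are assumptions rather than the actual FSPP 16" interface.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PRIORITIES 4                 /* Priority 0 (highest) through Priority 3 */

struct prio_link_state {
    uint32_t link_bs_counter;            /* cells in flight or buffered downstream */
    uint32_t ceiling[NUM_PRIORITIES];    /* ceiling[0] = Link_BS_Limit; the lower
                                            three ceilings are drawn from the
                                            Threshold values, so lower priority
                                            traffic is cut off at lower occupancy */
};

/* A cell of the given priority may be sent only while the shared pool occupancy
   is below that priority's ceiling; otherwise it is held back until link-level
   updates bring Link_BS_Counter back down. */
bool may_send_at_priority(const struct prio_link_state *l, unsigned priority)
{
    return priority < NUM_PRIORITIES && l->link_bs_counter < l->ceiling[priority];
}

With four priorities and three thresholds this reduces to the test described above: the highest priority is limited only by Link_BS_Limit, and each lower priority is starved once Link_BS_Counter reaches its threshold.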
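The allocated/dynamic accounting of Figs. 14 and 15B, also referenced above, is sketched next under stated assumptions: the structures, function names, and the boolean flag marking a cell as allocated are illustrative only, and the dynamic-bandwidth gate follows the condition given in the list (Link_BS_Counter minus Link_A_BS_Counter compared against Link_BS_Limit). Prioritized access and the update-record plumbing are omitted.

#include <stdbool.h>
#include <stdint.h>

struct gmb_conn {                        /* per-connection state at the FSPP 116 */
    uint32_t a_bs_counter, a_bs_limit;   /* allocated (guaranteed bandwidth) cells */
    uint32_t d_bs_counter, d_bs_limit;   /* dynamically scheduled cells            */
};

struct gmb_link {                        /* per-link state at the FSPP 116 */
    uint32_t link_bs_counter;            /* all cells in flight, allocated and dynamic */
    uint32_t link_a_bs_counter;          /* allocated cells in flight                  */
    uint32_t link_bs_limit;              /* buffers available to dynamic cells only    */
};

bool may_send(const struct gmb_conn *c, const struct gmb_link *l, bool allocated)
{
    if (allocated)                       /* "A" buffer states are never over-allocated */
        return c->a_bs_counter < c->a_bs_limit;
    return c->d_bs_counter < c->d_bs_limit &&
           (l->link_bs_counter - l->link_a_bs_counter) <= l->link_bs_limit;
}

void on_send(struct gmb_conn *c, struct gmb_link *l, bool allocated)
{
    l->link_bs_counter++;                /* counts both kinds of traffic */
    if (allocated) { c->a_bs_counter++; l->link_a_bs_counter++; }
    else           { c->d_bs_counter++; }
}

/* Buffer return on a connection-level update: credit "A" first, then "D", since
   the FSPP cannot tell which kind of buffer the receiver released; the "A" credit
   is also subtracted from the link-wide allocated count.  Link_BS_Counter itself
   is corrected separately, by link-level update records. */
void on_update(struct gmb_conn *c, struct gmb_link *l, uint32_t released)
{
    uint32_t to_a = released < c->a_bs_counter ? released : c->a_bs_counter;
    uint32_t to_d = released - to_a;
    if (to_d > c->d_bs_counter)
        to_d = c->d_bs_counter;          /* clamp, as a sketch-level safeguard */
    c->a_bs_counter      -= to_a;
    l->link_a_bs_counter -= to_a;
    c->d_bs_counter      -= to_d;
}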
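Finally, the joint accounting of Figs. 16 and 17, referenced above, can be sketched as follows. The descriptor layout, the array size, and the function names are assumptions for illustration; the point is the indirection through the Joint_Number field, so that an event on any queue in the group advances a single set of joint counters.

#include <stdint.h>

#define NUM_QUEUES 64

struct queue_desc {
    uint8_t  j;                          /* 1 = queue is part of a joint connection   */
    uint8_t  joint_number;               /* index of the group descriptor queue       */
    uint32_t buffer_limit;               /* policy: same value in every group queue   */
    uint32_t n2_limit;                   /* policy: same value in every group queue   */
    uint32_t joint_buffer_counter;       /* joint counters, valid in the group        */
    uint32_t joint_n2_counter;           /*   descriptor (whose Joint_Number points   */
    uint32_t joint_fwd_counter;          /*   at itself)                              */
};

static struct queue_desc q[NUM_QUEUES];

/* Resolve the descriptor holding the joint counters for a given queue. */
static struct queue_desc *group_of(uint8_t queue)
{
    return q[queue].j ? &q[q[queue].joint_number] : &q[queue];
}

/* Returns 1 if the cell may be buffered, 0 if it must be discarded because the
   group is already at this queue's Buffer_Limit. */
int on_cell_received(uint8_t queue)
{
    struct queue_desc *g = group_of(queue);
    if (g->joint_buffer_counter >= q[queue].buffer_limit)
        return 0;
    g->joint_buffer_counter++;
    return 1;
}

/* Returns 1 if a buffer state update should now be sent for the joint connection. */
int on_cell_forwarded(uint8_t queue)
{
    struct queue_desc *g = group_of(queue);
    g->joint_buffer_counter--;
    g->joint_fwd_counter++;
    if (++g->joint_n2_counter >= q[queue].n2_limit) {
        g->joint_n2_counter = 0;
        return 1;
    }
    return 0;
}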

Abstract

A method and apparatus (12', 14') for providing buffer state accounting at a link level, otherwise known as link flow control, in addition to flow control at the virtual connection level. Link flow control enables receiver cell buffer (28') sharing while maintaining per-connection bandwidth (20') with lossless cell transmission. A high link level update frequency is enabled without a significant sacrifice in overall link forward bandwidth (20). A higher and thus more efficient utilization of receiver cell buffers (28') is achieved.

Description

LINK BUFFER SHARING METHOD AND APPARATUS
RELATED APPLICATION
This application claims benefit of U.S. Provisional Application Serial No. 60/001,498, filed July 19, 1995.
FIELD OF THE INVENTION
This application relates to communications methods and apparatus in a distributed switching architecture, and in particular to buffer sharing methods and apparatus in a distributed switching architecture.
BACKGROUND OF THE INVENTION
A Flow Controlled Virtual Connection (FCVC) protocol for use in a distributed switching architecture is presently known in the art, and is briefly discussed below with reference to Fig. 1. This protocol involves communication of status (buffer allocation and current state) on a per virtual connection basis, such as a virtual channel connection or virtual path connection, between upstream and downstream network elements to provide a "no cell loss" guarantee. A cell is the unit of data to be transmitted. Each cell requires a buffer to store it.
One example of this protocol involves a credit-based flow control system, where a number of connections exist within the same link with the necessary buffers established and flow control monitored on a per-connection basis. Buffer usage over a known time interval, the link round-trip time, is determined in order to calculate the per-connection bandwidth. A trade-off is established between maximum bandwidth and buffer allocation per connection. Such per-connection feedback and subsequent flow control at the transmitter avoids data loss from an inability of the downstream element to store data cells sent from the upstream element. The flow control protocol isolates each connection, ensuring lossless cell transmission for that connection. However, since buffers reserved for a first connection cannot be made available for (that is, shared with) a second connection without risking cell loss in the first connection, the cost of the potentially enormous number of cell buffers required for long-haul, high-bandwidth links, each supporting a large number of connections, quickly becomes of great significance.
Connection-level flow control results in a trade-off between update frequency and the realized bandwidth for the connection. High update frequency has the effect of minimizing situations in which a large number of receiver cell buffers are available, though the transmitter incorrectly believes the buffers to be unavailable. Thus it reduces the number of buffers that must be set aside for a connection. However, a high update frequency to control a traffic flow will require a high utilization of bandwidth (in the reverse direction) to supply the necessary flow control buffer update information where a large number of connections exist in the same link. Realizing that transmission systems are typically symmetrical with traffic flowing in both directions, and flow control buffer update information likewise flowing in both directions, it is readily apparent that a high update frequency is wasteful of the bandwidth of the link. On the other hand, using a lower update frequency to lower the high cost of this bandwidth loss in the link, in turn requires that more buffers be set aside for each connection. This trade-off can thus be restated as being between more efficient receiver cell buffer usage and a higher cell transmission rate. In practice, given a large number of connections in a given link, it turns out that any compromise results in both too high a cost for buffers and too much bandwidth wasted in the link.
Therefore, presently known cell transfer flow control protocols fail to provide for a minimized receiver cell buffer pool and a high link data transfer efficiency, while simultaneously maintaining the "no cell loss" guarantee on a per-connection basis when a plurality of connections exist in the same link.
SUMMARY OF THE INVENTION
The presently claimed invention provides buffer state flow control at the link level, otherwise known as link flow control, in addition to the flow control on a per-connection basis.
In such a system, link flow control may have a high update frequency, whereas connection flow control information may have a low update frequency. The end result is a low effective update frequency, since link level flow control exists only once per link, whereas the link typically has many connections within it, each needing its own flow control. This minimizes the wasting of link bandwidth to transmit flow control update information. However, since the whole link now has a flow control mechanism ensuring lossless transmission for it, and thus for all of the connections within it, buffers may be allocated from a pool of buffers and thus connections may share in access to available buffers. Sharing buffers means that fewer buffers are needed, since the projected buffers required for a link in the defined known time interval may be shown to be less than the projected buffers that would be required if independently calculated and summed for all of the connections within the link for the same time interval. Furthermore, the high update frequency that may be used on the link level flow control, without undue wasting of link bandwidth, allows further minimization of the buffers that must be assigned to a link. Minimizing the number of cell buffers at the receiver significantly decreases net receiver cost.
The link can be defined either as a physical link or as a logical grouping comprised of logical connections.
The resultant system has eliminated both defects of the presently known art. It eliminates the excessive wasting of link bandwidth that results from reliance on a per-connection flow control mechanism alone, while taking advantage of both a high update frequency at the link level and buffer sharing to minimize the buffer requirements of the receiver. Yet this flow control mechanism still ensures the same lossless transmission of cells as would the prior art.
As an additional advantage of this invention, a judicious use of the counters associated with the link level and connection level flow control mechanisms, allows easy incorporation of a dynamic buffer allocation mechanism to control the number of buffers allocated to each connection, further reducing the buffer requirements.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and further advantages may be more fully understood by referring to the following description and accompanying drawings of which:
Fig. 1 is a block diagram of a connection-level flow control apparatus as known in the prior art;
Fig. 2 is a block diagram of a link-level flow control apparatus according to the present invention;
Figs. 3A and 3B are flow diagram representations of counter initialization and preparation for cell transmission within a flow control method according to the present invention; Fig. 4 is a flow diagram representation of cell transmission within the flow control method according to the present invention;
Figs. 5A and 5B are flow diagram representations of update cell preparation and transmission within the flow control method according to the present invention;
Figs. 6A and 6B are flow diagram representations of an alternative embodiment of the update cell preparation and transmission of Figs. 5A and 5B;
Figs. 7A and 7B are flow diagram representations of update cell reception within the flow control method according to the present invention; Figs. 8A, 8B and 8C are flow diagram representations of check cell preparation, transmission and reception within the flow control method according to the present invention;
Figs. 9A, 9B and 9C are flow diagram representations of an alternative embodiment of the check cell preparation, transmission and reception of Figs. 8A, 8B and 8C;
Fig. 10 illustrates a cell buffer pool according to the present invention as viewed from an upstream element;
Fig. 11 is a block diagram of a link-level flow control apparatus in an upstream element providing prioritized access to a shared buffer resource in a downstream element according to the present invention;
Figs. 12A and 12B are flow diagram representations of counter initialization and preparation for cell transmission within a prioritized access method according to the present invention;
Figs. 13A and 13B illustrate alternative embodiments of cell buffer pools according to the present invention as viewed from an upstream element; Fig. 14 is a block diagram of a flow control apparatus in an upstream element providing guaranteed minimum bandwidth and prioritized access to a shared buffer resource in a downstream element according to the present invention;
Figs. 15A and 15B are flow diagram representations of counter initialization and preparation for cell transmission within a guaranteed minimum bandwidth mechanism employing prioritized access according to the present invention;
Fig. 16 is a block diagram representation of a transmitter, a data link, and a receiver in which the presently disclosed joint flow control mechanism is implemented; and
Fig. 17 illustrates data structures associated with queues in the receiver of Fig. 16.
DETAILED DESCRIPTION
In Fig. 1, the resources required for connection-level flow control are presented. As previously stated, the illustrated configuration of Fig. 1 is presently known in the art. However, a brief discussion of a connection-level flow control arrangement will facilitate an explanation of the presently disclosed link-level flow control method and apparatus.
One link 10 is shown providing an interface between an upstream transmitter element 12, also known as an UP subsystem, and a downstream receiver element 14, also known as a DP subsystem. Each element 12, 14 can act as a switch between other network elements. For instance, the upstream element 12 in Fig. 1 can receive data from a PC (not shown) . This data is communicated through the link 10 to the downstream element 14 , which in turn can forward the data to a device such as a printer (not shown) . Alternatively, the illustrated network elements 12, 14 can themselves be network end-nodes.
The essential function of the presently described arrangement is the transfer of data cells from the upstream element 12 via a connection 20 in the link 10 to the downstream element 14, where the data cells are temporarily held in cell buffers 28. Cell format is known, and is further described in "Quantum Flow Control", Version 1.5.1, dated June 27, 1995 and subsequently published in a later version by the Flow Control Consortium. In Fig. 1, the block labelled Cell Buffers 28 represents a set of cell buffers dedicated to the respective connection 20. Data cells are released from the buffers 28, either through forwarding to another link beyond the downstream element 14, or through cell utilization within the downstream element 14. The latter event can include the construction of data frames from the individual data cells if the downstream element 14 is an end-node such as a work station.
Each of the upstream and downstream elements 12, 14 is controlled by a respective processor, labelled UP (Upstream Processor) 16 and DP (Downstream Processor) 18. Associated with each of the processors 16, 18 are sets of buffer counters for implementing the connection-level flow control. These buffer counters are each implemented as an increasing counter/limit register set to facilitate resource usage changes. The counters of Fig. 1, described in further detail below, are implemented in a first embodiment in UP internal RAM. The counter names discussed and illustrated for the prior art utilize some of the same counter names as used with respect to the presently disclosed flow control method and apparatus. This is merely to indicate the presence of a similar function or element in the prior art with respect to counters, registers, or like elements now disclosed.
Within the link 10, which in a first embodiment is a copper conductor, multiple virtual connections 20 are provided. In an alternative embodiment, the link 10 is a logical grouping of plural virtual connections 20. The number of connections 20 implemented within the link 10 depends upon the needs of the respective network elements 12, 14, as well as the required bandwidth per connection. In Fig. 1, only one connection 20 and associated counters are illustrated for simplicity.
First, with respect to the upstream element 12 of Fig. 1, two buffer state controls are provided, BS_Counter 22 and BS_Limit 24. In a first embodiment, each is implemented as a fourteen bit counter/register, allowing a connection to have 16,383 buffers. This number would support, for example, 139 Mbps, 10,000 kilometer round-trip service. The buffer state counters 22, 24 are employed only if the connection 20 in question is flow-control enabled. That is, a bit in a respective connection descriptor, or queue descriptor, of the UP 16 is set indicating the connection 20 is flow-control enabled.
BS_Counter 22 is incremented by the UP 16 each time a data cell is transferred out of the upstream element 12 and through the associated connection 20. Periodically, as described below, this counter 22 is adjusted during an update event based upon information received from the downstream element 14. BS_Counter 22 thus presents an indication of the number of data cells either currently being transmitted in the connection 20 between the upstream and downstream elements 12, 14, or yet unreleased from buffers 28 in the downstream element 14.
BS_Limit 24 is set at connection configuration time to reflect the number of buffers 28 available within the receiver 14 for this connection 20. For instance, if BS_Counter 22 for this connection 20 indicates that twenty data cells have been transmitted and BS_Limit 24 indicates that this connection 20 is limited to twenty receiver buffers 28, the UP 16 will inhibit further transmission from the upstream element 12 until an indication is received from the downstream element 14 that further buffer space 28 is available for that connection 20.
Tx_Counter 26 is used to count the total number of data cells transmitted by the UP 16 through this connection 20. In the first embodiment, this is a twenty-eight bit counter which rolls over at 0xFFFFFFF. As described later, Tx_Counter 26 is used during a check event to account for errored cells for this connection 20.
In the downstream element 14, the DP 18 also manages a set of counters for each connection 20. Buffer_Limit 30 performs a policing function in the downstream element 14 to protect against misbehaving transmitters. Specifically, the Buffer_Limit register 30 indicates the maximum number of cell buffers 28 in the receiver 14 which this connection 20 can use. In most cases, BS_Limit 24 is equal to Buffer_Limit 30. At some point, though, it may be necessary to adjust the maximum number of cell buffers 28 for this connection 20 up or down. This function is coordinated by network management software. To avoid the "dropping" of data cells in transmission, an increase in buffers per connection is reflected first in Buffer_Limit 30 prior to BS_Limit 24. Conversely, a reduction in the number of receiver buffers per connection is reflected first in BS_Limit 24 and thereafter in Buffer_Limit 30.
Buffer_Counter 32 provides an indication of the number of buffers 28 in the downstream element 14 which are currently being used for the storage of data cells. As described subsequently, this value is used in providing the upstream element 12 with a more accurate picture of buffer availability in the downstream element 14. Both the Buffer_Limit 30 and Buffer_Counter 32 are fourteen bits wide in the first embodiment.
N2_Limit 34 determines the frequency of connection flow-rate communication to the upstream transmitter 12. A cell containing such flow-rate information is sent upstream every time the receiver element 14 forwards a number of cells equal to N2_Limit 34 out of the receiver element 14. This updating activity is further described subsequently. In the first embodiment, N2_Limit 34 is six bits wide.
The DP 18 uses N2_Counter 36 to keep track of the number of cells which have been forwarded out of the receiver element 14 since the last time the N2_Limit 34 was reached. In the first embodiment, N2_Counter 36 is six bits wide.
In a first embodiment, the DP 18 maintains Fwd_Counter 38 to maintain a running count of the total number of cells forwarded through the receiver element 14. This includes buffers released when data cells are utilized for data frame construction in an end-node. When the maximum count for this counter 38 is reached, the counter rolls over to zero and continues. The total number of cells received by the receiver element 14 can be derived by adding Buffer_Counter 32 to Fwd_Counter 38. The latter is employed in correcting the transmitter element 12 for errored cells during the check event, as described below. Fwd_Counter 38 is twenty-eight bits wide in the first embodiment.
In a second embodiment, the DP 18 maintains Rx_Counter 40, a counter which is incremented each time the downstream element 14 receives a data cell through the respective connection 20. The value of this counter 40 is then usable directly in response to check cells and in the generation of an update cell, both of which will be described further below. Similar to the Fwd_Counter 38, Rx_Counter 40 is twenty-eight bits wide in this second embodiment.
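The per-connection state just described can be gathered into a brief sketch. This is an illustrative C rendering only; the structure and field names are assumptions, with the widths noted in comments taken from the first and second embodiments above.

#include <stdint.h>

struct up_conn_state {                   /* kept by the UP 16 for each connection */
    uint32_t bs_counter;                 /* 14 bits: cells in flight or still buffered downstream */
    uint32_t bs_limit;                   /* 14 bits: receiver buffers granted to this connection  */
    uint32_t tx_counter;                 /* 28 bits: total cells sent, rolls over at 0xFFFFFFF    */
};

struct dp_conn_state {                   /* kept by the DP 18 for each connection */
    uint32_t buffer_limit;               /* 14 bits: policing ceiling for this connection     */
    uint32_t buffer_counter;             /* 14 bits: buffers currently holding cells          */
    uint32_t n2_limit;                   /*  6 bits: update frequency threshold               */
    uint32_t n2_counter;                 /*  6 bits: cells forwarded since the last update    */
    uint32_t fwd_counter;                /* 28 bits: first embodiment, total cells forwarded  */
    uint32_t rx_counter;                 /* 28 bits: second embodiment, total cells received  */
};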
There are two events in addition to a steady state condition in the connection-level flow controlled protocol: update and check. In steady state, data cells are transmitted from the transmitter element 12 to the receiver element 14. In update, buffer occupancy information is returned upstream by the receiver element 14 to correct counter values in the transmitter element 12. Check mode is used to check for cells lost or injected due to transmission errors between the upstream transmitter and downstream receiver elements 12, 14.
In the accompanying figures, connection level counters are augmented with "[i]" to indicate association with one connection [i] of plural possible connections.
Prior to any activity, counters in the upstream and downstream elements 12, 14 are initialized, as illustrated in Fig. 3A. Initialization includes zeroing counters, and providing initial values to limit registers such as Link_BS_Limit and Link_Buffer_Limit. In Fig. 3A, Buffer_Limit[i] is shown being initialized to (RTT*BW) + N2, which represents the round-trip time times the virtual connection bandwidth, plus accommodation for delays in processing the update cell. As for Link_N2_Limit, "X" represents the buffer state update frequency for the link, and for N2_Limit[i], "Y" represents the buffer state update frequency for each connection.
In steady state operation, the UP 16 of the transmitter element 12 determines which virtual connection 20 (VC) has a non-zero cell count (i.e. has a cell ready to transmit), a BS_Counter value less than the BS_Limit, and an indication that the VC is next to send (also in Figs. 3A and 3B).
The UP 16 increments BS_Counter 22 and Tx_Counter 26 whenever the UP 16 transmits a data cell over the respective connection 20, assuming flow control is enabled (Fig. 4). Upon receipt of the data cell, the DP 18 checks whether Buffer_Counter 32 equals or exceeds Buffer_Limit 30, which would be an indication that there are no buffers available for receipt of the data cell. If Buffer_Counter >= Buffer_Limit, the data cell is discarded (Fig. 3B). Otherwise, the DP 18 increments Buffer_Counter 32 and Rx_Counter 40 and the data cell is deposited in a buffer cell 28, as in Fig. 4. The Tx_Counter 26 and the Rx_Counter 40 roll over when they reach their maximum.
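A minimal sketch of this steady-state accounting, reusing the structures sketched above, is given below; the function names and the 28-bit rollover mask are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>

#define ROLLOVER_MASK 0x0FFFFFFFu        /* 28-bit counters roll over at 0xFFFFFFF */

/* Transmitter side: a connection may send only while it believes buffers remain. */
bool up_can_send(const struct up_conn_state *c)
{
    return c->bs_counter < c->bs_limit;
}

void up_on_cell_sent(struct up_conn_state *c)
{
    c->bs_counter++;
    c->tx_counter = (c->tx_counter + 1) & ROLLOVER_MASK;
}

/* Receiver side: discard if the per-connection policing limit is reached,
   otherwise account for the stored cell (Rx_Counter embodiment shown). */
bool dp_on_cell_arrival(struct dp_conn_state *c)
{
    if (c->buffer_counter >= c->buffer_limit)
        return false;                    /* cell discarded */
    c->buffer_counter++;
    c->rx_counter = (c->rx_counter + 1) & ROLLOVER_MASK;
    return true;                         /* cell deposited in a buffer */
}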
If flow control is not enabled, none of the presently described functionality is implemented. Connections that do not utilize flow control on the link can coexist with connections using link flow control. The flow control accounting is not employed when cells from non-flow controlled connections are transmitted and received. This includes both connection level accounting and link level accounting. Thereby, flow control and non-flow control connections can be active simultaneously.
When a data cell is forwarded out of the receiver element 14, Buffer_Counter 32 is decremented. Buffer_Counter 32 should never exceed Buffer_Limit 30 when the connection-level flow control protocol is enabled, with the exception of when BS_Limit 24 has been decreased and the receiver element 14 has yet to forward sufficient cells to bring Buffer_Counter 32 below Buffer_Limit 30.
A buffer state update occurs when the receiver element 14 has forwarded a number of data cells equal to N2_Limit 34 out of the receiver element 14. In the first embodiment, in which the DP 18 maintains Fwd_Counter 38, update involves the transfer of the value of Fwd_Counter 38 from the receiver element 14 back to the transmitter element 12 in an update cell, as in Fig. 6A. In the embodiment employing Rx_Counter 40 in the downstream element 14, the value of Rx_Counter 40 minus Buffer_Counter 32 is conveyed in the update cell, as in Fig. 5A. At the transmitter 12, the update cell is used to update the value in BS_Counter 22, as shown for the two embodiments in Fig. 7A. Since BS_Counter 22 is independent of buffer allocation information, buffer allocation can be changed without impacting the performance of this aspect of connection-level flow control.
Update cells require an allocated bandwidth to ensure a bounded delay. This delay needs to be accounted for, as a component of round-trip time, to determine the buffer allocation for the respective connection.
The amount of bandwidth allocated to the update cells is a function of a counter, Max_Update_Counter (not illustrated), at an associated downstream transmitter element (not illustrated). This counter forces the scheduling of update and check cells, the latter to be discussed subsequently. There is a corresponding Min_Update_Interval counter (not shown) in the downstream transmitter element, which controls the space between update cells. Normal cell packing is seven records per cell, and Min_Update_Interval is similarly set to seven. Since the UP 16 can only process one update record per cell time, back-to-back, fully packed update cells received at the UP 16 would cause some records to be dropped.
An update event occurs as follows, with regard to Figs. 1, 5A and 6A. When the downstream element 14 forwards (releases) a cell, Buffer_Counter 32 is decremented and N2_Counter 36 and Fwd_Counter 38 are incremented. When the N2_Counter 36 is equal to N2_Limit 34, the DP 18 prepares an update cell for transmission back to the upstream element 12, and N2_Counter 36 is set to zero. The upstream element 12 receives a connection indicator from the downstream element 14 forwarded cell to identify which connection 20 is to be updated. In the first embodiment, the DP 18 causes the Fwd_Counter 38 value to be inserted into an update record payload (Fig. 6A). In the second embodiment, the DP 18 causes the Rx_Counter 40 value minus the Buffer_Counter 32 value to be inserted into the update record payload (Fig. 5A). When an update cell is fully packed with records, or as the minimum bandwidth pacing interval is reached, the update cell is transmitted to the upstream element 12. Once received upstream, the UP 16 receives the connection indicator from the update record to identify the transmitter connection, and extracts the Fwd_Counter 38 value or the Rx_Counter 40 minus Buffer_Counter 32 value from the update record. BS_Counter 22 is reset to the value of Tx_Counter 26 minus the update record value (Fig. 7A). If this connection was disabled from transmitting due to BS_Counter 22 being equal to or greater than BS_Limit 24, this condition should now be reversed, and if so the connection should again be enabled for transmitting. In summary, the update event provides the transmitting element 12 with an indication of how many cells originally transmitted by it have now been released from buffers within the receiving element 14, and thus provides the transmitting element 12 with a more accurate indication of receiver element 14 buffer 28 availability for that connection 20.
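The update exchange can be rendered in the same style, again reusing the structures above and showing the Rx_Counter embodiment of Figs. 5A and 7A; update record packing and pacing are omitted, and the names are assumptions.

#include <stdint.h>

#define ROLLOVER_MASK 0x0FFFFFFFu

/* Receiver: on each forwarded (released) cell, adjust the counters; once
   N2_Limit cells have been forwarded, produce the update value to be carried
   upstream (Rx_Counter minus Buffer_Counter). Returns 1 when an update record
   is ready. */
int dp_on_cell_forwarded(struct dp_conn_state *c, uint32_t *update_value)
{
    c->buffer_counter--;
    c->n2_counter++;
    c->fwd_counter = (c->fwd_counter + 1) & ROLLOVER_MASK;
    if (c->n2_counter >= c->n2_limit) {
        c->n2_counter = 0;
        *update_value = (c->rx_counter - c->buffer_counter) & ROLLOVER_MASK;
        return 1;
    }
    return 0;
}

/* Transmitter: the update value tells it how many of its cells have been
   released downstream, so BS_Counter is re-based against Tx_Counter. */
void up_on_update(struct up_conn_state *c, uint32_t update_value)
{
    c->bs_counter = (c->tx_counter - update_value) & ROLLOVER_MASK;
}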
The buffer state check event serves two purposes: 1) it provides a mechanism to calculate and compensate for cell loss or cell insertion due to transmission errors; and 2) it provides a mechanism to start (or restart) a flow if update cells were lost or if enough data cells were lost that N2_Limit 34 is never reached.
One timer (not shown) in the UP subsystem 16 serves all connections. The connections are enabled or disabled on a per-connection basis as to whether to send check cells from the upstream transmitter element 12 to the downstream receiver element 14. The check process in the transmitter element 12 involves searching all of the connection descriptors to find one which is check enabled (see Figs. 8A, 9A). Once a minimum pacing interval has elapsed (the check interval), the check cell is forwarded to the receiver element 14 and the next check enabled connection is identified. The spacing between check cells for the same connection is a function of the number of active flow-controlled connections times the mandated spacing between check cells for all connections. Check cells have priority over update cells.
The check event occurs as follows, with regard to Figs. 8A through 8C and 9A through 9C. Each transmit element 12 connection 20 is checked after a timed check interval is reached. If the connection is flow-control enabled and the connection is valid, then a check event is scheduled for transmission to the receiver element 14. A buffer state check cell is generated using the Tx_Counter 26 value for that connection 20 in the check cell payload, and is transmitted using the connection indicator from the respective connection descriptor (Figs. 8A and 9A).
In the first embodiment, a calculation of errored cells is made at the receiver element 14 by summing Fwd_Counter 38 with Buffer_Counter 32, and subtracting this value from the contents of the transmitted check cell record, the value of Tx_Counter 26 (Fig. 9B). The value of Fwd_Counter 38 is increased by the errored cell count. An update record with the new value for Fwd_Counter 38 is then generated. This updated Fwd_Counter 38 value subsequently updates the BS_Counter 22 value in the transmitter element 12. In the second embodiment, illustrated in Fig. 8B, the same is accomplished by resetting the Rx_Counter 40 value equal to the check cell payload value (Tx_Counter 26). A subsequent update record is established using the difference between the values of Rx_Counter 40 and Buffer_Counter 32. Thus, the check event enables accounting for cells transmitted by the transmitter element 12, through the connection 20, but either dropped or not received by the receiver element 14.
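A corresponding sketch of the check event, for the Rx_Counter embodiment of Figs. 8A through 8C and again reusing the structures above, is given below; the returned value stands in for the update record that would subsequently be generated.

#include <stdint.h>

#define ROLLOVER_MASK 0x0FFFFFFFu

/* Receiver: a check cell carries the transmitter's Tx_Counter value. Re-basing
   Rx_Counter to it accounts for cells lost or inserted on the connection, and a
   fresh update value is derived from the corrected count. */
uint32_t dp_on_check_cell(struct dp_conn_state *c, uint32_t tx_counter_from_check)
{
    c->rx_counter = tx_counter_from_check & ROLLOVER_MASK;
    return (c->rx_counter - c->buffer_counter) & ROLLOVER_MASK;
}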
A "no cell loss" guarantee is enabled using buffer state accounting at the connection level, since the transmitter element 12 has an up-to-date account of the number of buffers 28 in the receiver element 14 available for receipt of data cells, and has an indication of when data cell transmission should be ceased due to the absence of available buffers 28 downstream. In order to augment the foregoing protocol with a receiver element buffer sharing mechanism, link-level flow control, also known as link-level buffer state accounting, is added to connection-level flow control. It is possible for such link-level flow control to be implemented without connection-level flow control. However, a combination of the two is preferable, since without connection-level flow control there would be no restriction on the number of buffers a single connection might consume.
It is desirable to perform buffer state accounting at the link level, in addition to the connection level, for the following reasons. Link-level flow control enables cell buffer sharing at a receiver element while maintaining the "no cell loss" guarantee afforded by connection-level flow control. Buffer sharing results in the most efficient use of a limited number of buffers. Rather than provide a number of buffers equal to bandwidth times RTT for each connection, a smaller number of buffers is employable in the receiver element 14, since not all connections require a full complement of buffers at any one time. A further benefit of link-level buffer state accounting is that each connection is provided with an accurate representation of downstream buffer availability without necessitating increased reverse bandwidth for each connection. A high-frequency link-level update does not significantly affect overall per-connection bandwidth.
Link-level flow control is described now with regard to Fig. 2. Like elements found in Fig. 1 are given the same reference numbers in Fig. 2, with the addition of a prime. Once again, only one virtual connection 20' is illustrated in the link 10', though the link 10' would normally host multiple virtual connections 20'. Once again, the link 10' is a physical link in a first embodiment, and a logical grouping of plural virtual connections in a second embodiment.
The upstream transmitter element 12' (FSPP subsystem) partially includes a processor labelled From Switch Port Processor (FSPP) 16'. The FSPP processor 16' is provided with two buffer state counters, BS_Counter 22' and BS_Limit 24', and a Tx_Counter 26', each having the same function on a per-connection basis as those described with respect to Fig. 1.
The embodiment of Fig. 2 further includes a set of resources added to the upstream and downstream elements 12', 14' which enable link-level buffer accounting. These resources provide similar functions as those utilized on a per-connection basis, yet they operate on the link level.
For instance, Link_BS_Counter 50 tracks all cells in flight between the FSPP 16' and elements downstream of the receiver element 14', including cells in transit between the transmitter 12' and the receiver 14' and cells stored within receiver 14' buffers 28'. As with the update event described above with respect to connection-level buffer accounting, Link_BS_Counter 50 is modified during a link update event by subtracting either the Link_Fwd_Counter 68 value or the difference between Link_Rx_Counter 70 and Link_Buffer_Counter 62 from the Link_Tx_Counter 54 value. In a first embodiment, the link-level counters are implemented in external RAM associated with the FSPP processor 16'.
Link_BS_Limit 52 limits the number of shared downstream cell buffers 28' in the receiver element 14' to be shared among all of the flow-control enabled connections 20'. In a first embodiment, Link_BS_Counter 50 and Link_BS_Limit 52 are both twenty bits wide.
Link_Tx_Counter 54 tracks all cells transmitted onto the link 10'. It is used during the link-level update event to calculate a new value for Link_BS_Counter 50. Link_Tx_Counter 54 is twenty-eight bits wide in the first embodiment.
In the downstream element 14', the To Switch Port Processor (TSPP) 18' also manages a set of counters for each link 10', in the same fashion with respect to the commonly illustrated counters in Figs. 1 and 2. The TSPP 18' further includes a Link_Buffer_Limit 60, which performs a function in the downstream element 14' similar to Link_BS_Limit 52 in the upstream element 12' by indicating the maximum number of cell buffers 28' in the receiver 14' available for use by all connections 20'. In most cases, Link_BS_Limit 52 is equal to Link_Buffer_Limit 60. The effect of adjusting the number of buffers 28' available up or down on a link-wide basis is the same as that described above with respect to adjusting the number of buffers 28 available for a particular connection 20. Link_Buffer_Limit 60 is twenty bits wide in the first embodiment.
Link_Buffer_Counter 62 provides an indication of the number of buffers in the downstream element 14' which are currently being used by all connections for the storage of data cells. This value is used in a check event to correct the Link_Fwd_Counter 68 (described subsequently). The Link_Buffer_Counter 62 is twenty bits wide in the first embodiment.
Link_N2_Limit 64 and Link_N2_Counter 66, each eight bits wide in the first embodiment, are used to generate link update records, which are intermixed with connection-level update records. Link_N2_Limit 64 establishes a threshold number for triggering the generation of a link-level update record (Figs. 5B and 6B), and Link_N2_Counter 66 and Link_Fwd_Counter 68 are incremented each time a cell is released out of a buffer cell in the receiver element 14'. In a first embodiment, N2_Limit 34' and Link_N2_Limit 64 are both static once initially configured.
However, in a further embodiment of the present invention, each is dynamically adjustable based upon measured bandwidth. For instance, if forward link bandwidth is relatively high, Link_N2_Limit 64 could be adjusted down to cause more frequent link-level update record transmission. Any forward bandwidth impact would be considered minimal. Lower forward bandwidth would enable the raising of Link_N2_Limit 64, since the unknown availability of buffers 28' in the downstream element 14' is less critical.
Link_Fwd_Counter 68 tracks all cells released from buffer cells 28' in the receiver element 14' that came from the link 10' in question. It is twenty-eight bits wide in a first embodiment, and is used in the update event to recalculate Link_BS_Counter 50.
Link_Rx_Counter 70 is employed in an alternative embodiment in which Link_Fwd_Counter 68 is not employed. It is also twenty-eight bits wide in an illustrative embodiment and tracks the number of cells received across all connections 20' in the link 10'.
With regard to Figs. 2 et seq., a receiver element buffer sharing method is described. Normal data transfer by the FSPP 16' in the upstream element 12' to the TSPP 18' in the downstream element 14' is enabled across all connections 20' in the link 10' as long as the Link_BS_Counter 50 is less than or equal to Link_BS_Limit 52, as in Fig. 3B. This test prevents the FSPP 16' from transmitting more data cells than it believes are available in the downstream element 14'. The accuracy of this belief is maintained through the update and check events, described next.
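This link-level gate, combined with the connection-level test described earlier, can be sketched as follows; the link_state structure and the function name are illustrative assumptions, and the per-connection structure is the one sketched earlier.

#include <stdbool.h>
#include <stdint.h>

struct link_state {
    uint32_t link_bs_counter;            /* cells in flight or buffered, all connections */
    uint32_t link_bs_limit;              /* shared downstream buffers for the whole link */
    uint32_t link_tx_counter;            /* total cells transmitted onto the link        */
};

/* A cell on connection c may be sent only if both the per-connection and the
   link-wide accounts show room (Fig. 3B). */
bool can_send(const struct up_conn_state *c, const struct link_state *l)
{
    return c->bs_counter < c->bs_limit &&
           l->link_bs_counter <= l->link_bs_limit;
}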
A data cell is received at the downstream element 14' if neither the connection-level nor the link-level buffer limit is exceeded (Fig. 3B). If a limit is exceeded, the cell is discarded.
The update event at the link level involves the generation of a link update record when the value in Link_N2_Counter 66 reaches (equals or exceeds) the value in Link_N2_Limit 64, as shown in Figs. 5B and 6B. In a first embodiment, Link_N2_Limit 64 is set to forty.
The link update record, the value taken from Link_Fwd_Counter 68 in the embodiment of Fig. 6B, is mixed with the per-connection update records (the value of Fwd_Counter 38') in update cells transferred to the FSPP 16'. In the embodiment of Fig. 5B, the value of Link_Rx_Counter 70 minus Link_Buffer_Counter 62 is mixed with the per-connection update records. When the upstream element 12' receives the update cell having the link update record, it sets the Link_BS_Counter 50 equal to the value of Link_Tx_Counter 54 minus the value in the update record (Fig. 7B). Thus, Link_BS_Counter 50 in the upstream element 12' is reset to reflect the number of data cells transmitted by the upstream element 12', but not yet released in the downstream element 14'.
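The corresponding link-level update handling at the upstream element, using the link_state structure above and the Link_Rx_Counter embodiment of Fig. 5B, can be sketched as a single re-basing step; as before, the names are assumptions.

#include <stdint.h>

#define ROLLOVER_MASK 0x0FFFFFFFu

/* link_update_value is Link_Rx_Counter minus Link_Buffer_Counter as reported by
   the receiver; the result re-bases Link_BS_Counter to the number of cells sent
   onto the link but not yet released downstream. */
void up_on_link_update(struct link_state *l, uint32_t link_update_value)
{
    l->link_bs_counter = (l->link_tx_counter - link_update_value) & ROLLOVER_MASK;
}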
The actual implementation of the transfer of an update record, in a first embodiment, recognizes that for each TSPP subsystem 14', there is an associated FSPP processor (not illuεtrated), and for each FSPP εubsyεtem 12', there is also an asεociated TSPP proceεsor (not illustrated) . Thuε, when an update record iε ready to be transmitted by the TSPP subεyεtem 14' back to the upstream FSPP subsystem 12', the TSPP 18' conveys the update record to the associated FSPP (not illustrated) , which constructs an update cell. The cell is conveyed from the aεεociated FSPP to the TSPP (not illuεtrated) associated with the upstream FSPP εubεyεtem 12' . The aεsociated TSPP stripε out the update record from the received update cell, and conveyε the record to the upεtream FSPP εubsystem 12'.
The check event at the link level involves the tranε iεsion of a check cell having the Link_Tx_Counter 54 value by the FSPP 16' every "W" check cells (Figs. 8A and 9A) . In a first embodiment, W is equal to four. At the receiver element 14', the TSPP 18' performs the previously deεcribed check functions at the connection-level, aε well aε increaεing the Link_Fwd_Counter 68 value by an amount equal to the check record contentε, Link_Tx_Counter 54, minuε the εu of Link_Buffer_Counter 62 plus Link_Fwd_Counter 68 in the embodiment of Fig. 9C. In the embodiment of Fig. 8C,
Link_Rx_Counter 70 is modified to equal the contents of the check record (Link_Tx_Counter 54) . This is an accounting for errored cells on a link-wide basis. An update record is then generated having a value taken from the updated
Link_Fwd_Counter 68 or Link_Rx_Counter 70 values (Figs. 8C and 9C) .
It is necessary to perform the check event at the link level in addition to the connection level in order to readjust the Link_Fwd_Counter 68 value (Fig. 9C) or Link_Rx_Counter 70 value (Fig. 8C) quickly in the case of large transient link failures.
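A minimal sketch of the two link-level check behaviours just described, assuming the check record carries the transmitter's Link_Tx_Counter value (names and values are illustrative only):

    def check_fig_9c(link_fwd_counter, link_buffer_counter, check_record):
        """Fig. 9C embodiment: credit Link_Fwd_Counter for errored cells."""
        errored = check_record - (link_buffer_counter + link_fwd_counter)
        return link_fwd_counter + errored

    def check_fig_8c(link_rx_counter, check_record):
        """Fig. 8C embodiment: resynchronize Link_Rx_Counter to the record
        (the previous value is discarded)."""
        return check_record

    # Example: 1000 cells were sent, 990 are accounted for downstream,
    # so 10 errored (lost) cells are credited back to the forward count.
    print(check_fig_9c(700, 290, 1000))   # -> 710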
Again with regard to Fig. 2, the following are exemplary initial values for the illustrated counters in an embodiment having 100 connections in one link.
BS_Limit (24') = 20
Buffer_Limit (30') = 20
N2_Limit (34') = 3
Link_BS_Limit (52) = 1000
Link_Buffer_Limit (60) = 1000
Link_N2_Limit (64) = 40
The BS_Limit value equals the Buffer_Limit value for both the connections and the link. Though BS_Limit 24' and Buffer_Limit 30' are both equal to twenty, and there are 100 connections in this link, there are only 1000 buffers 28' in the downstream element, as reflected by Link_BS_Limit 52 and Link_Buffer_Limit 60. This is because of the buffer pool sharing enabled by link-level feedback.
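A small illustration of these example values (the variable names are hypothetical): the per-connection limits nominally add up to twice the shared pool, and the link-level feedback is what makes this over-allocation safe.

    connections = 100
    per_connection_limit = 20      # BS_Limit / Buffer_Limit
    link_limit = 1000              # Link_BS_Limit / Link_Buffer_Limit
    nominal_demand = connections * per_connection_limit
    print(nominal_demand, link_limit, nominal_demand / link_limit)
    # -> 2000 1000 2.0  (nominal demand is twice the shared pool)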
Link-level flow control can be disabled, should the need arise, by not incrementing Link_BS_Counter, Link_N2_Counter, and Link_Buffer_Counter, and by disabling link-level check cell transfer. No updates will occur under these conditions. The presently described invention can be further augmented with a dynamic buffer allocation scheme, such as previously described with respect to N2_Limit 34 and Link_N2_Limit 64. This scheme includes the ability to dynamically adjust limiting parameters such as BS_Limit 24, Link_BS_Limit 52, Buffer_Limit 30, and Link_Buffer_Limit 60, in addition to N2_Limit 34 and Link_N2_Limit 64. Such adjustment is in response to measured characteristics of the individual connections or the entire link in one embodiment, and is established according to a determined priority scheme in another embodiment. Dynamic buffer allocation thus provides the ability to prioritize one or more connections or links given a limited buffer resource.
The Link_N2_Limit is set according to the desired accuracy of buffer accounting. On a link-wide basis, as the number of connections within the link increases, it may be desirable to decrease Link_N2_Limit, since more accurate buffer accounting allows greater buffer sharing among many connections. Conversely, if the number of connections within the link decreases, Link_N2_Limit may be increased, since the criticality of sharing limited resources among a relatively small number of connections is decreased.
In addition to adjusting the limits on a per-link basis, it may also be desirable to adjust limits on a per-connection basis in order to change the maximum sustained bandwidth for the connection.
The presently disclosed dynamic allocation schemes are implemented during link operation, based upon previously prescribed performance goals.
In a first embodiment of the present invention, incrementing logic for all counters is disposed within the FSPP processor 16'. Related thereto, the counters previously described as being reset to zero and counting up to a limit can be implemented in a further embodiment as starting at the limit and counting down to zero. The transmitter and receiver processors interpret the limits as starting points for the respective counters, and decrement upon detection of the appropriate event. For instance, if Buffer_Counter (or Link_Buffer_Counter) is implemented as a decrementing counter, each time a data cell is allocated to a buffer within the receiver, the counter would decrement. When a data cell is released from the respective buffer, the counter would increment. In this manner, the counter reaching zero would serve as an indication that all available buffers have been allocated. Such an implementation is less easily employed in a dynamic bandwidth allocation scheme since dynamic adjustment of the limits must be accounted for in the non-zero counts.
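A minimal sketch of the count-down variant described in this paragraph, assuming a simple counter object (class and method names are hypothetical): the counter starts at the limit, and reaching zero means every available buffer has been allocated.

    class DownCounter:
        def __init__(self, limit):
            self.remaining = limit          # start at the limit
        def allocate(self):
            if self.remaining == 0:
                return False                # all buffers already allocated
            self.remaining -= 1             # a data cell occupies a buffer
            return True
        def release(self):
            self.remaining += 1             # a released cell frees its buffer

    c = DownCounter(limit=20)
    for _ in range(20):
        c.allocate()
    print(c.allocate())   # -> False: pool exhausted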
A further enhancement of the foregoing zero cell loss, link-level flow control technique includes providing a plurality of shared cell buffers 28" in a downstream element 14" wherein the cell buffers 28" are divided into N prioritized cell buffer subsets, Priority 0 108a, Priority 1 108b, Priority 2 108c, and Priority 3 108d, by N - 1 threshold level(s), Threshold(1) 102, Threshold(2) 104, and Threshold(3) 106. Such a cell buffer pool 28" is illustrated in Fig. 10, in which four priorities labelled Priority 0 through Priority 3 are illustrated as being defined by three thresholds labelled Threshold(1) through Threshold(3). This prioritized buffer pool enables the transmission of high priority connections while lower priority connections are "starved" or prevented from transmitting cells downstream during periods of link congestion. Cell priorities are identified on a per-connection basis. The policy by which the thresholds are established is defined according to a predicted model of cell traffic in a first embodiment, or, in an alternative embodiment, is dynamically adjusted. Such dynamic adjustment may be in response to observed cell traffic at an upstream transmitting element, or according to empirical cell traffic data as observed at the prioritized buffer pool in the downstream element. For example, in an embodiment employing dynamic threshold adjustment, it may be advantageous to lower the number of buffers available to data cells having a priority less than Priority 0, or conversely to increase the number of buffers above Threshold(3), if a significantly larger quantity of Priority 0 traffic is detected.
The cell buffer pool 28" depicted in Fig. 10 is taken from the vantage point of a modified version 12" of the foregoing link-level flow control upstream element 12', the pool 28" being resident within a correεponding downstream element 14". This modified upstream element 12", viewed in Fig. 11, has at least one Link_BS_Threshold(n) 100, 102, 104 established in association with a Link_BS_Counter 50" and Link_BS_Limit 52", as described above, for characterizing a cell buffer pool 28" in a downstream element 14". These Link_BS_Thresholds 102, 104, 106 define a number of cell buffers in the pool 28" which are allocatable to cells of a given priority, wherein the priority is identified by a register 108 asεociated with the BS_Counter 22" counter and BS_Limit 24" regiεter for each connection 20". The Prioritieε 108a, 108b, 108c, 108d illuεtrated in Fig. 11 are identified aε Priority 0 through Priority 3, Priority 0 being the higheεt. When there iε no congestion, as reflected by Link_BS_Counter 50" being lesε than Link_BS_Threεhold(1) 102 in Figs. 10 and 11, flow-controlled connections of any priority can transmit. Aε congeεtion occurε, aε indicated by an increaεing value in the Link_BS_Counter 50", lower priority connections are denied acceεε to downstream buffers, in effect disabling their transmiεεion of cells. In the case of εevere congestion, only cells of the highest priority are allowed to transmit. For inεtance, with respect again to Fig. 10, only cells of Priority 0 108a are enabled for transmisεion from the upstream element 12" to the downstream element 14" if the link-level Link_BS_Threεhold(3) 106 haε been reached downstream. Thus, higher priority connectionε are leεε effected by the εtate of the network becauεe they have first accesε to the εhared downstream buffer pool. Note, however, that connection-level flow control can still prevent a high-priority connection from transmitting, if the path that connection is intended for is εeverely congeεted.
As above, Link_BS_Counter 50" is periodically updated based upon a value contained within a link-level update record transmitted from the downstream element 14" to the upstream element 12". This periodic updating is required in order to ensure accurate function of the prioritized buffer access of the present invention. In an embodiment of the present invention in which the Threshold levels 102, 104, 106 are modified dynamically, either as a result of tracking the priority associated with cells received at the upstream transmitter element or based upon observed buffer usage in the downstream receiver element, it is necessary for the FSPP 16" to have an accurate record of the state of the cell buffers 28", as afforded by the update function.
The multiple priority levels enable different categories of service, in terms of delay bounds, to be offered within a single quality of service. Within each quality of service, highest priority to shared buffers is typically given to connection/network management traffic, as identified by the cell header. Second highest priority is given to low bandwidth, small burst connections, and third highest to bursty traffic. With prioritization allocated as described, congestion within any one of the service categories will not prevent connection/management traffic from having the lowest cell delay.
Initialization of the upstream element 12" as depicted in Fig. 11 is illustrated in Fig. 12A. Essentially, the same counters and registers are set aε viewed in Fig. 3A for an upεtream element 12' not enabling prioritized access to a shared buffer resource, with the exception that Link_BS_Threεhold 102, 104, 106 values are initialized to a respective buffer value T. As discussed, these threshold buffer valueε can be pre-eεtabliεhed and εtatic, or can be adjuεted dynamically baεed upon empirical buffer usage data.
Fig. 12B represents many of the same tests employed prior to forwarding a cell from the upstream element 12" to the downstream element 14" as shown in Fig. 3B, with the exception that an additional test is added for the provision of prioritized access to a shared buffer resource. Specifically, the FSPP 16" uses the priority value 108 associated with a cell to be transferred to determine a threshold value 102, 104, 106 above which the cell cannot be transferred to the downstream element 14". Then, a test is made to determine whether the Link_BS_Counter 50" value is greater than or equal to the appropriate threshold value 102, 104, 106. If so, the data cell is not transmitted. Otherwise, the cell is transmitted and connection-level congestion tests are executed, as previously described.
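A minimal sketch of the priority test added in Fig. 12B, assuming each cell's priority (0 = highest) selects the threshold above which it may not be sent and that Priority 0 is limited only by Link_BS_Limit; the mapping of priorities to thresholds and the example values are assumptions for illustration.

    def may_transmit(priority, link_bs_counter, thresholds, link_bs_limit):
        """thresholds = [Threshold(1), Threshold(2), Threshold(3)], ascending."""
        if priority == 0:
            ceiling = link_bs_limit
        else:
            ceiling = thresholds[3 - priority]   # priority 3 -> Threshold(1), etc.
        return link_bs_counter < ceiling

    # Example with Threshold(1..3) = 400, 700, 900 and Link_BS_Limit = 1000:
    # at 750 cells outstanding, only priorities 0 and 1 may still transmit.
    for p in range(4):
        print(p, may_transmit(p, 750, [400, 700, 900], 1000))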
In alternative embodiments, more or fewer than four priorities can be implemented with the appropriate number of thresholds, wherein the fewest number of priorities is two, and the corresponding fewest number of thresholds is one. For every N priorities, there are N - 1 thresholds.
In yet a further embodiment, flow control is provided solely at the link level, and not at the connection level, though it is still necessary for each connection to provide some form of priority indication akin to the priority field 108 illustrated in Fig. 11.
The link level flow controlled protocol as previously described can be further augmented in yet another embodiment to enable a guaranteed minimum cell rate on a per-connection basis with zero cell loss. This minimum cell rate is also referred to as guaranteed bandwidth. The connection can be flow-controlled below this minimum, allocated rate, but only by the receiver elements associated with this connection. Therefore, the minimum rate of one connection is not affected by congestion within other connections. It is a requirement of the presently disclosed mechanism that cells present at the upstream element associated with the FSPP 116 be identified by whether they are to be transmitted from the upstream element using allocated bandwidth, or whether they are to be transmitted using dynamic bandwidth. For instance, the cells may be provided in queues associated with a list labelled "preferred," indicative of cells requiring allocated bandwidth. Similarly, the cells may be provided in queues associated with a list labelled "dynamic," indicative of cells requiring dynamic bandwidth. In a frame relay setting, the present mechanism is used to monitor and limit both dynamic and allocated bandwidth. In a setting involving purely internet traffic, only the dynamic portions of the mechanism may be of significance. In a setting involving purely CBR flow, only the allocated portions of the mechanism would be employed. Thus, the presently disclosed method and apparatus enables the maximized use of mixed scheduling connections - those requiring all allocated bandwidth to those requiring all dynamic bandwidth, and connections therebetween. In the present mechanism, a downstream cell buffer pool
128, akin to the pool 28' of Fig. 2, is logically divided between an allocated portion 300 and a dynamic portion 301, whereby cells identified as to receive allocated bandwidth are buffered within this allocated portion 300, and cells identified as to receive dynamic bandwidth are buffered in the dynamic portion 301. Fig. 13A shows the two portions 300, 301 as distinct entities; the allocated portion is not a physically distinct block of memory, but represents a number of individual cell buffers, located anywhere in the pool 128.
In a further embodiment, the presently disclosed mechanism for guaranteeing minimum bandwidth is applicable to a mechanism providing prioritized access to downstream buffers, as previously described in conjunction with Figs. 10 and 11. With regard to Fig. 13B, a downstream buffer pool 228 is logically divided among an allocated portion 302 and a dynamic portion 208, the latter logically subdivided by threshold levels 202, 204, 206 into prioritized cell buffer subsets 208a-d. As with Fig. 13A, the division of the buffer pool 228 is a logical, not physical, division.
Elements required to implement this guaranteed minimum bandwidth mechanism are illustrated in Fig. 14, where like elements from Figs. 2 and 11 are given like reference numbers, incremented by 100 or 200. Note that no new elements have been added to the downstream element; the presently described guaranteed minimum bandwidth mechanism is transparent to the downstream element.
New aspects of flow control are found at both the connection and link levels. With respect first to the connection level additions and modifications, D_BS_Counter 122 highlights resource consumption by tracking the number of cells scheduled using dynamic bandwidth transmitted downstream to the receiver 114. This counter has essentially the same function as BS_Counter 22' found in Fig. 2, where there was no differentiation between allocated and dynamically scheduled cell traffic. Similarly, D_BS_Limit 124, used to provide a ceiling on the number of downstream buffers available to store cells from the transmitter 112, finds a corresponding function in BS_Limit 24' of Fig. 2. As discussed previously with respect to link level flow control, the dynamic bandwidth can be statistically shared; the actual number of buffers available for dynamic cell traffic can be over-allocated. The amount of "D" buffers provided to a connection is equal to the RTT times the dynamic bandwidth plus N2. RTT includes delays incurred in processing the update cell.
A_BS_Counter 222 and A_BS_Limit 224 also track and limit, respectively, the number of cells a connection can transmit by comparing a transmitted number with a limit on buffers available. However, these values apply strictly to allocated cells; allocated cells are those identified as requiring allocated bandwidth (the guaranteed minimum bandwidth) for transmission. Limit information is set up at connection initialization time and can be raised and lowered as the guaranteed minimum bandwidth is changed. If a connection does not have an allocated component, the A_BS_Limit 224 will be zero. The A_BS_Counter 222 and A_BS_Limit 224 are in addition to the D_BS_Counter 122 and D_BS_Limit 124 described above. The amount of "A" buffers dedicated to a connection is equal to the RTT times the allocated bandwidth plus N2. The actual number of buffers dedicated to allocated traffic cannot be over-allocated. This ensures that congestion on other connections does not impact the guaranteed minimum bandwidth.
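A minimal sketch of the per-connection allocated/dynamic accounting described in the preceding two paragraphs; the function names are hypothetical, and the example assumes a short-haul link (RTT times bandwidth of about one cell) with N2 set to six, which would yield the seven "A" buffers per connection used in the initialization example later in this description.

    def a_buffer_count(rtt_cells, allocated_bw_cells, n2):
        """A_BS_Limit sizing: RTT (in cell times) * allocated bandwidth + N2."""
        return rtt_cells * allocated_bw_cells + n2

    def can_send_allocated(a_bs_counter, a_bs_limit):
        return a_bs_counter < a_bs_limit   # guaranteed-bandwidth cell may go

    def can_send_dynamic(d_bs_counter, d_bs_limit):
        return d_bs_counter < d_bs_limit   # dynamic cell may go (subject to the link-level check)

    print(a_buffer_count(1, 1, 6))         # -> 7 "A" buffers for this connection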
A connection loses, or runs out of, its allocated bandwidth through the associated upstream switch once it has enqueued a cell but has no more "A" buffers as reflected by A_BS_Counter 222 and A_BS_Limit 224. If a connection is flow controlled below its allocated rate, it loses a portion of its allocated bandwidth in the switch until the congestion condition is alleviated. Such may be the case in multipoint-to-point (M2P) switching, where plural sources on the same connection, all having a minimum guaranteed rate, converge on a single egress point whose capacity is less than the sum of the source rates. In an embodiment of the presently disclosed mechanism in which the transmitter element is a portion of a switch having complementary switch flow control, the condition of having no further "A" buffer states inhibits the intra-switch transmission of further allocated cell traffic for that connection.
The per-connection buffer return policy is to return buffers to the allocated pool first, until the A_BS_Counter 222 equals zero. Then buffers are returned to the dynamic pool, decreasing D_BS_Counter 122.
Tx_Counter 126 and Priority 208 are provided as described above with respect to connection-level flow control and prioritized access.
On the link level, the following elements are added to enable guaranteed minimum cell rate on a per-connection basis. Link_A_BS_Counter 250 is added to the FSPP 116. It tracks all cells identified as requiring allocated bandwidth that are "in-flight" between the FSPP 116 and the downstream switch fabric, including cells in the TSPP 118 cell buffers 128, 228. The counter 250 is decreased by the same amount as the A_BS_Counter 222 for each connection when a connection level update function occurs (discussed subsequently).
Link_BS_Limit 152 reflects the total number of buffers available to dynamic cells only, and does not include allocated buffers. Link_BS_Counter 150, however, reflects a total number of allocated and dynamic cells transmitted. Thus, connections are not able to use their dynamic bandwidth when Link_BS_Counter 150 (all cells in-flight, buffered, or in downstream switch fabric) minus Link_A_BS_Counter 250 (all allocated cells transmitted) is greater than Link_BS_Limit 152 (the maximum number of dynamic buffers available). This is necessary to ensure that congestion does not impact the allocated bandwidth. The sum of all individual A_BS_Limit 224 values, or the total per-connection allocated cell buffer space 300, 302, is in one embodiment less than the actually dedicated allocated cell buffer space in order to account for the potential effect of stale (i.e., low frequency) connection-level updates. Update and check events are also implemented in the presently disclosed allocated/dynamic flow control mechanism. The downstream element 114 transmits connection level update cells when either a preferred list and a VBR-priority 0 list are empty and an update queue is fully packed, or when a "max_update_interval" (not illustrated) has been reached.
At the upstream end 112, the update cell is analyzed to identify the appropriate queue, and the FSPP 116 adjusts the A_BS_Counter 222 and D_BS_Counter 122 for that queue, returning cell buffers to "A" first and then to "D", as described above, since the FSPP 116 cannot distinguish between allocated and dynamic buffers. The number of "A" buffers returned to individual connections is subtracted from Link_A_BS_Counter 250.
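A minimal sketch, under assumed names, of the link-level dynamic-bandwidth gate and the update-driven buffer return just described (the numeric values are illustrative only):

    def dynamic_bandwidth_available(link_bs_counter, link_a_bs_counter, link_bs_limit):
        """Dynamic traffic may be scheduled only while the dynamic share of
        outstanding cells stays within Link_BS_Limit (dynamic buffers only)."""
        return (link_bs_counter - link_a_bs_counter) <= link_bs_limit

    def apply_connection_update(a_bs_counter, d_bs_counter, link_a_bs_counter, released):
        """Credit returned buffers to the "A" pool first, then to "D"; the "A"
        portion is also subtracted from Link_A_BS_Counter."""
        to_a = min(a_bs_counter, released)
        link_a_bs_counter -= to_a
        return a_bs_counter - to_a, d_bs_counter - (released - to_a), link_a_bs_counter

    # Example: 2500 cells outstanding on the link, 400 of them allocated,
    # against a dynamic limit of 2300 -> dynamic scheduling is still allowed.
    print(dynamic_bandwidth_available(2500, 400, 2300))   # -> True
    print(apply_connection_update(3, 10, 400, 5))         # -> (0, 8, 397)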
Other link level elements used in association with the presently disclosed minimum guaranteed bandwidth mechanism, such as Link_Tx_Counter 154, function as described in the foregoing discussion of link level flow control. Also, as previously noted, a further embodiment of the presently described mechanism functions with a link level flow control scenario incorporating prioritized access to the downstream buffer resource 228 through the use of thresholds 202, 204, 206. The functions of these elements are as described in the foregoing.
The following is an example of a typical initialization in a flow controlled link according to the present disclosure:
Downstream element has 3000 buffers;
Link is short haul, so RTT*bandwidth equals one cell; 100 allocated connections requiring 7 "A" buffers each, consuming 700 buffers total;
3000-700 = 2300 "D" buffers to be shared among 512 connections having zero allocated bandwidth; Link_BS_Limit = 2300.
If D_BS_Counter >= D_BS_Limit, then the queue is prevented from indicating that it has a cell ready to transmit. In the embodiment referred to above in which the upstream element is a switch having composite bandwidth, this occurs by the queue being removed from the dynamic list, preventing the queue from being scheduled for transmission using dynamic bandwidth.
For allocated cells, a check is made when each cell is enqueued to determine whether the cell, plus other enqueued cells, plus A_BS_Counter, is a number greater than A_BS_Limit. If not, the cell is enqueued and the queue is placed on the preferred list. Else, the connection is prevented from transmitting further cells through the upstream element 112 switch fabric.
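A minimal sketch of the two scheduling checks just described, with hypothetical function names and example values:

    def dynamic_queue_eligible(d_bs_counter, d_bs_limit):
        """Queue may be scheduled to transmit using dynamic bandwidth."""
        return d_bs_counter < d_bs_limit

    def enqueue_allocated(cells_enqueued, a_bs_counter, a_bs_limit):
        """Enqueue one allocated cell and place the queue on the preferred list
        only if its outstanding allocated demand stays within A_BS_Limit."""
        if 1 + cells_enqueued + a_bs_counter > a_bs_limit:
            return False      # connection blocked in the upstream switch fabric
        return True           # cell enqueued; queue goes on the preferred list

    print(dynamic_queue_eligible(4, 5), enqueue_allocated(2, 4, 7))   # -> True True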
Initialization of the upstream element 112 as depicted in Fig. 14 is illustrated in Fig. 15A. Essentially, the same counters and registers are set as in Fig. 3A for an upstream element 12' (when prioritized access to a shared buffer resource is not enabled), and in Fig. 12A for an upstream element 12" (when prioritized access is enabled). Exceptions include: Link_A_BS_Counter 250 initialized to zero; connection-level allocated and dynamic BS_Counters 122, 222 set to zero; and connection-level allocated and dynamic BS_Limits 124, 224 set to respective values of NA and ND. Similarly, on the downstream end at the connection level, the allocated and dynamic Buffer_Limits and Buffer_Counters are set, with the Buffer_Limits employing a bandwidth value for the respective traffic type (i.e., BWA = allocated cell bandwidth and BWD = dynamic cell bandwidth). Further, each cell to be transmitted is identified as either requiring allocated or dynamic bandwidth as the cell is received from the switch fabric. Fig. 15B represents many of the same tests employed prior to forwarding a cell from the upstream element 112 to the downstream element 114 as shown in Figs. 3B and 12B, with the following exceptions. Over-allocation of buffer states per connection is checked for dynamic traffic only and is calculated by subtracting Link_A_BS_Counter from Link_BS_Counter and comparing the result to Link_BS_Limit. Over-allocation on a link-wide basis is calculated from a summation of Link_BS_Counter (which tracks both allocated and dynamic cell traffic) and Link_A_BS_Counter against the Link_BS_Limit. Similarly, over-allocation at the downstream element is tested for both allocated and dynamic traffic at the connection level. As previously indicated, the presently disclosed mechanism for providing guaranteed minimum bandwidth can be utilized with or without the prioritized access mechanism, though aspects of the latter are illustrated in Figs. 15A and 15B for completeness.
As discussed, connection-level flow control as known in the art relies upon discrete control of each individual connection. In particular, between network elements such as a transmitting element and a receiving element, the control is from transmitter queue to receiver queue. Thus, even in the situation illustrated in Fig. 16, in which a single queue QA in a transmitter element is the source of data cells for four queues Qw, Qx, Qy, and Qz associated with a single receiver processor, the prior art does not define any mechanism to handle this situation.
In Fig. 16, the transmitter element 10 is an FSPP element having a FSPP 11 associated therewith, and the receiver element 12 is a TSPP element having a TSPP 13 associated therewith. The FSPP 11 and TSPP 13 as employed in Fig. 16 selectively provide the same programmable capabilities as described above, such as link-level flow control, prioritized access to a shared, downstream buffer resource, and guaranteed minimum cell rate on a connection level, in addition to a connection-level flow control mechanism. Whether one or more of these enhanced capabilities are employed in conjunction with the connection-level flow control is at the option of the system configurator.
Yet another capability provided by the FSPP and TSPP according to the present disclosure is the ability to treat a group of receiver queues jointly for purposes of connection-level flow control. In Fig. 16, instead of utilizing four parallel connections, the presently disclosed mechanism utilizes one connection 16 in a link 14, terminating in four separate queues Qw, Qx, Qy, and Qz, though the four queues are treated essentially as a single, joint entity for purposes of connection-level flow control. This is needed because some network elements need to use a flow controlled service but cannot handle the bandwidth of processing update cells when N2 is set to a low value, 10 or less (see above for a discussion of the update event in connection-level flow control). Setting N2 to a large value, such as 30, for a large number of connections requires large amounts of downstream buffering because of buffer orphaning, where buffers are not in use but are accounted for upstream as in use because of the lower frequency of update events. This mechanism is also useful to terminate Virtual Channel Connections (VCC) within a Virtual Path Connection (VPC), where flow control is applied to the VPC.
This ability to group receiver queues is a result of manipulations of the queue descriptor associated with each of the receiver queues Qw, Qx, Qy, and Qz. With reference to Fig. 17, queue descriptors for the queues in the receiver are illustrated. Specifically, the descriptors for queues Qw, Qx, and Qy are provided on the left, and in general have the same characteristics. One of the first fields pertinent to the present disclosure is a bit labelled "J." When set, this bit indicates that the associated queue is being treated as part of a joint connection in a receiver. Instead of maintaining all connection-level flow control information in each queue descriptor for each queue in the group, certain flow control elements are maintained only in one of the queue descriptors for the group. In the illustrated case, that one queue is queue Qz.
In each of the descriptors for queues Qw, Qx, and Qy, a "Joint Number" field provides an offset or pointer to a set of flow control elements in the descriptor for queue Qz. This pointer field may provide another function when the "J" bit is not set. While Buffer_Limit (labelled "Buff_Limit" in Fig. 17) and N2_Limit are maintained locally within each respective descriptor, Joint_Buffer_Counter (labelled
"Jt_Buff_Cntr") , Joint_N2_Counter (labelled "Jt_N2_Cntr") , and Joint_Forward_Counter (labelled "Jt_Fwd_Cntr") are maintained in the deεcriptor for queue Qz for all of the queueε in the group. The same counters in the deεcriptorε for queueε Qw, Qx, and Qγ go unuεed. The joint counterε perform the εame function aε the individual counters, such aε those illustrated in Fig. 2 at the connection level, but are advanced or decremented aε appropriate by actionε taken in aεεociation with the individual queueε. Thus, for example, Joint_Buffer_Counter iε updated whenever a buffer cell receives a data cell or releases a data cell in aεεociation with any of the group queues. The same applies to Joint_N2_Counter and Joint_Forward_Counter. In an alternate embodiment of the previouεly deεcribed flow control mechaniεm, each Forward_Counter is replaced with Receive_Counter. Similarly, in an alternative embodiment of the presently disclosed mechanism, Joint_Forward_Counter is replaced with Joint_Receive_Counter, depending upon which is maintained in each of the group queues. Only the embodiment including Forward_Counter and Joint_Forward_Counter are illustrated.
Not all of the per-queue descriptor elements are superseded by functions in a common descriptor. Buffer_Limit (labelled "Buff_Limit" in Fig. 17) is set and referred to on a per-queue basis. Thus, Joint_Buffer_Counter is compared against the Buffer_Limit of a respective queue. Optionally, the Buffer_Limit could be Joint_Buffer_Limit, instead of maintaining individual, common limits. The policy is to set the same Buffer_Limit in all the TSPP queues associated with a single Joint_Buffer_Counter. An update event is triggered, as previously described, when the Joint_N2_Counter reaches the queue-level N2_Limit. The policy is to set all of the N2_Limits equal to the same value for all the queues associated with a single joint flow control connection. When a check cell is received for a connection, an effort to modify the Receive_Counter associated with the receiving queue results in a modification of the Joint_Receive_Counter. Thus, the level of indirection provided by the Joint_Number is applicable to both data cells and check cells.
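A minimal sketch of the descriptor indirection described above, assuming a simple dictionary in place of the hardware queue descriptors (field names loosely follow Fig. 17; the structure and values are illustrative assumptions):

    # When the "J" bit is set, the Joint_Number selects the descriptor that
    # holds the shared counters; the limits remain per-queue.
    descriptors = {
        "Qw": {"J": 1, "joint": "Qz", "buffer_limit": 20, "n2_limit": 3},
        "Qx": {"J": 1, "joint": "Qz", "buffer_limit": 20, "n2_limit": 3},
        "Qy": {"J": 1, "joint": "Qz", "buffer_limit": 20, "n2_limit": 3},
        "Qz": {"J": 1, "joint": "Qz", "buffer_limit": 20, "n2_limit": 3,
               "jt_buff_cntr": 0, "jt_n2_cntr": 0, "jt_fwd_cntr": 0},
    }

    def on_cell_buffered(queue):
        group = descriptors[descriptors[queue]["joint"]]    # follow the Joint_Number
        group["jt_buff_cntr"] += 1                          # shared buffer accounting
        return group["jt_buff_cntr"] <= descriptors[queue]["buffer_limit"]

    # Cells arriving on different queues of the group all count against Qz's counters.
    print(on_cell_buffered("Qw"), on_cell_buffered("Qy"))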
At the transmitter element 10, only one set of upstream flow control elements is maintained. At connection set-up time, the joint connection is set up as a single, point-to-point connection, as far as the upstream elements are concerned. Therefore, instead of maintaining four sets of upstream elements for the embodiment of Fig. 16, the presently disclosed mechanism only requires one set of elements (Tx_Counter, BS_Counter, BS_Limit, all having the functionality as previously described).
Once a joint flow control entity has been established, other TSPP queues for additional connections may be added. To do so, each new queue must have the same N2_Limit and Buffer_Limit values. The queues for the additional connections will reference the common Joint_N2_Counter and either Joint_Forward_Counter or Joint_Receive_Counter. As previously noted, when J = 1, the Joint_Number field is used as an offset to the group descriptor. The Joint_Number for the group descriptor is set to itself, as shown in Fig. 17 with regard to the descriptor for queue Qz. This is also the case in point-to-point connections (VCC to VCC rather than the VPC to VCC, as illustrated in Fig. 16), where each Joint_Number points to its own descriptor.
Implementation for each of point-to-point and the presently described point-to-multipoint connections is thus simplified.
Having described preferred embodiments of the invention, it will be apparent to those skilled in the art that other embodiments incorporating the concepts may be used. These and other examples of the invention illustrated above are intended by way of example, and the actual scope of the invention is to be determined from the following claims.

Claims

What is claimed is:
1. A method of managing a plurality of buffers within a receiving apparatus for storing data cells received over a link from a transmitting apparatus comprising the steps of: storing, in a first storage location in said receiving apparatus and in a first location in said transmitting apparatus, an indication of the maximum number of buffers available for storing data cells received over said link corresponding to data cells transmitted over said link by said transmitting apparatus over a plurality of connections on said link; generating, in said transmitting apparatus, a first count indicative of the number of data cells transmitted through said link by said transmitting apparatus for storage in buffers within said receiving apparatus but not released from said buffers; generating, in said receiving apparatus, a second count indicative of the number of buffers in said receiving apparatus currently storing unreleased data cells received over said link from said transmitting apparatus; generating, in said receiving apparatus, a third count indicative of the number of data cells released from buffers in said receiving apparatus, said released data cells having been received over said link from said transmitting apparatus; modifying said first count upon transmission of at least one data cell over said link to reflect an additional data cell requiring a buffer for storage in said receiving apparatus; modifying said second count upon receipt of a data cell over said link to reflect usage of an additional buffer in said receiving apparatus and upon release of a data cell to reflect availability of an additional buffer in said receiving apparatus; and modifying said third count upon the release of a data cell from a buffer in said receiving apparatus to reflect the availability of an additional buffer for received data cell storage.
2. The method according to claim 1, further comprising the step of: inhibiting transmitting apparatus transmission of data cells through said plurality of connections in said link to said receiving apparatus when said first count equals or exceeds said stored indication of the maximum number of buffers available.
3. The method according to claim 1, further comprising the steps of: generating, in said transmitting apparatus, a fourth count indicative of a total number of data cells transmitted by said transmitting apparatus over said link; and updating said first count, to reflect cells transmitted by said transmitting apparatus and not released from buffers in said receiving apparatus, by resetting said first count equal to the difference between said third count and said fourth count.
4. The method according to claim 3, further comprising the steps of: storing, in said receiving apparatus, a threshold value for said third count; and updating said first count when said threshold value is exceeded.
5. The method according to claim 1, wherein said step of generating a third count further comprises: generating, in said receiving apparatus, a third count indicative of cells from a first subset of said plurality of connections in said link released from buffers in said receiving apparatus.
6. The method according to claim 1, further comprising the steps of: generating, in said receiver apparatus, a fourth count indicative of a number of errored cells by providing said receiver apparatus with said first count value, said fourth count having the value of said third count and said second count subtracted from said first count; and correcting said third count by adding said fourth count to said third count.
7. The method according to claim 1, wherein said indication of the maximum number of buffers available for storing is dynamically adjustable.
8. A method of sharing a finite number of buffers within a receiver, said receiver connected to a transmitter via a link providing a plurality of flow controlled virtual connections, said method comprising the steps of: storing, in a first storage location in said transmitter, a maximum number of buffers available to said link; generating a first count in said transmitter indicative of all data cells transmitted through said connections of said link by said transmitter to said receiver; generating a second count in said transmitter indicative of data cells currently being transmitted through said connections of said link by said transmitter to said receiver, and of data cells not known to be released from a subset of said buffers in said receiver; generating a third count in said receiver indicative of the total number of buffers in said receiver presently storing data cells; generating a fourth count in said receiver indicative of all data cells released from buffers in said receiver, said released data cells originally received from said transmitter via said link; storing, in a second storage location within said receiver, a maximum number of data cells to be released from buffers in said receiver during a first interval; and storing, in a third storage location within said receiver, a number of data cells actually released from buffers in said receiver during said first interval.
9. The method according to claim 8, wherein said step of generating said first count further comprises modifying said first count upon transmission of a data cell through one of said connections.
10. The method according to claim 8, wherein said step of generating said second count further comprises modifying said second count upon transmission of a data cell through one of said connections of said link.
11. The method according to claim 8, wherein said step of generating said third count further comprises modifying said third count upon storage of an additional data cell in a buffer in said receiver.
12. The method according to claim 8, wherein said step of generating said fourth count further comprises modifying said fourth count upon release of an additional data cell from a buffer in said receiver.
13. The method according to claim 8, further comprising the step of inhibiting said transmitter from transmitting data cells through said connections of said link when said second count equals or exceeds said maximum number of buffers.
14. The method according to claim 8, further comprising the step of updating said second count by adjusting said second count to be equal to said first count minus said fourth count.
15. The method according to claim 14, wherein said updating step occurs when said count of data cells released during said first interval equals or exceeds said maximum number of data cells to be released.
16. The method according to claim 8, further comprising the steps of: generating a count of errored cells by subtracting said fourth count and said third count from said first count; and generating a corrected second count by resetting said second count to the sum of said errored cell count and said second count.
17. The method according to claim 8, wherein said maximum number of buffers available and said maximum number of data cells to be released are each dynamically adjustable.
18. A method of managing a plurality of buffers within a receiving apparatus for storing data cells received over a link from a transmitting apparatus comprising the steps of: storing, in a first storage location in said receiving apparatus and in a first location in said transmitting apparatus, an indication of the maximum number of buffers available for storing data cells received over said link corresponding to data cells transmitted over said link by said transmitting apparatus over a plurality of connections on said link; generating, in said transmitting apparatus, a first count indicative of the number of data cells transmitted through said link by said transmitting apparatus for storage in buffers within said receiving apparatus but not released from said buffers; generating, in said receiving apparatus, a second count indicative of the number of buffers in said receiving apparatus currently storing unreleased data cells received over said link from said transmitting apparatus; generating, in said receiving apparatus, a third count indicative of the total number of data cells received over said link from said transmitting apparatus; modifying said first count upon transmission of a data cell over said link to reflect an additional data cell requiring a buffer for storage in said receiving apparatus; modifying said second count upon receipt of a data cell over said link to reflect usage of an additional buffer in said receiving apparatus and upon release of a data cell to reflect availability of an additional buffer in said receiving apparatus; and modifying said third count upon the receipt of a data cell in said receiving apparatus to reflect an additional data cell transmitted from said transmitting apparatus and received at said receiving apparatus.
19. The method according to claim 18, further comprising the step of: inhibiting transmitting apparatus transmission of data cells through said plurality of connections in said link to said receiving apparatus when said first count equals or exceeds said stored indication of the maximum number of buffers available.
20. The method according to claim 18, further comprising the steps of: generating, in said transmitting apparatus, a fourth count indicative of a total number of data cells transmitted by said transmitting apparatus over said link; and updating said first count, to reflect cells transmitted by said transmitting apparatus and not released from buffers in said receiving apparatus, by resetting said first count equal to the fourth count minus the difference between said second count and said third count.
21. The method according to claim 18, wherein said step of generating said third count further comprises: generating, in said receiving apparatus, a third count indicative of cells from a first subset of said plurality of connections in said link released from buffers in said receiving apparatus.
22. The method according to claim 18, further comprising the steps of: generating, in said transmitting apparatus, a fourth count indicative of a total number of data cells transmitted by said transmitting apparatus over said link; and correcting said third count by resetting said third count equal to said fourth count.
23. The method according to claim 18, wherein said maximum number of buffers available for storing data cells is dynamically adjustable.
24. The method according to claim 18, wherein said steps of generating said first and second counts further comprise initializing each of said first and second counts at a respective maximum, and wherein said steps of modifying said first and second counts further comprise decrementing said first count upon said transmission of said data cell, and decrementing said second count upon said receipt of said data cell.
25. A method of sharing a finite number of buffers within a receiver, said receiver connected to a transmitter via a link providing a plurality of flow controlled virtual connections, said method comprising the steps of: storing, in a first storage location in said transmitter, a maximum number of buffers available to said link; generating a first count in said transmitter indicative of all data cells transmitted through said connections of said link by said transmitter to said receiver; generating a second count in said transmitter indicative of data cells currently being transmitted through said connections of said link by said transmitter to said receiver, and of data cells known to be released from a subset of said buffers in said receiver; generating a third count in said receiver indicative of the total number of buffers in said receiver presently storing data cells; generating a fourth count in said receiver indicative of all data cells received in said receiver from said transmitter via said link; storing, in a second storage location within said receiver, a maximum number of data cells to be released from buffers in said receiver during a first interval; and storing, in a third storage location within said receiver, a number of data cells actually released from buffers in said receiver during said first interval.
26. The method according to claim 25, wherein said step of generating said first count further comprises modifying said first count upon transmission of a data cell through one of said connections.
27. The method according to claim 25, wherein said step of generating said second count further comprises modifying said second count upon transmission of a data cell through one of said connections of said link.
28. The method according to claim 27, wherein said step of modifying further comprises decrementing said second count upon transmission of said data cell.
29. The method according to claim 25, wherein said step of generating said third count further comprises modifying said third count upon storage of an additional data cell in a buffer in said receiver.
30. The method according to claim 29, wherein said step of modifying said third count further comprises decrementing said third count upon storage of said additional data cell.
31. The method according to claim 25, wherein said step of generating said fourth count further comprises modifying said fourth count upon receipt of an additional data cell in said receiver.
32. The method according to claim 25, further comprising the step of inhibiting said transmitter from transmitting data cells through said connections of said link when said second count equals or exceeds said maximum number of buffers.
33. The method according to claim 25, further comprising the step of updating said second count by resetting said second count equal to said first count minus the difference between said fourth count and said third count.
34. The method according to claim 33, wherein said updating step occurs when said count of data cells released during said first interval equals or exceeds said maximum number of data cells to be released.
35. The method according to claim 25, further comprising the steps of: accounting for errored cells by resetting said fourth count equal to said first count.
36. The method according to claim 25, wherein said maximum number of buffers available to said link and said maximum number of data cells to be released are each dynamically adjustable.
37. A method of sharing a finite number of buffers within a receiver, said receiver connected to a transmitter via a link providing a plurality of flow controlled virtual connections, said method comprising the steps of: dynamically adjusting, in a first storage location in said transmitter, a maximum number of buffers available to said link; generating a first count in said transmitter indicative of all data cells transmitted through said connections of said link by said transmitter to said receiver; generating a second count in said transmitter indicative of data cells currently being transmitted through said connections of said link by said transmitter to said receiver, and of data cells known to be released from a subset of said buffers in said receiver; generating a third count in said receiver indicative of the total number of buffers in said receiver presently storing data cells; generating a fourth count in said receiver indicative of all data cells released from buffers in said receiver, said released data cells originally received from said transmitter via said link; dynamically adjusting, in a second storage location within said receiver, a maximum number of data cells to be released from buffers in said receiver during a first interval; and storing, in a third storage location within said receiver, a number of data cells actually released from buffers in said receiver during said first interval.
38. The method according to claim 37, wherein said maximum number of buffers available and said maximum number of data cells to be released are dynamically adjusted based upon current buffer demands in said receiver.
39. The method according to claim 37, wherein said maximum number of buffers available and said maximum number of data cells to be released are dynamically adjusted based upon user-prescribed connection-level prioritization.
40. The method according to claim 37, wherein said maximum number of buffers available and said maximum number of data cells to be released are dynamically adjusted based upon user-prescribed link-level prioritization.
41. A method of sharing a finite number of buffers within a receiver, said receiver connected to a transmitter via a link providing a plurality of flow controlled virtual connections, said method comprising the steps of: dynamically adjusting, in a first storage location in said transmitter, a maximum number of buffers available to said link; generating a first count in said transmitter indicative of all data cells transmitted through said connections of said link by said transmitter to said receiver; generating a second count in said transmitter indicative of data cells currently being transmitted through said connections of said link by said transmitter to said receiver, and of data cells known to be released from a subset of said buffers in said receiver; generating a third count in said receiver indicative of the total number of buffers in said receiver presently storing data cells; generating a fourth count in said receiver indicative of all data cells received in said receiver from said transmitter via said link; dynamically adjusting, in a second storage location within said receiver, a maximum number of data cells to be released from buffers in said receiver during a first interval; and storing, in a third storage location within said receiver, a number of data cells actually released from buffers in said receiver during said first interval.
42. The method according to claim 41, wherein said maximum number of buffers available and said maximum number of data cells to be released are dynamically adjusted based upon current buffer demands in said receiver.
43. The method according to claim 41, wherein said maximum number of buffers available and said maximum number of data cells to be released are dynamically adjusted based upon user-prescribed connection-level prioritization.
44. The method according to claim 41, wherein said maximum number of buffers available and said maximum number of data cells to be released are dynamically adjusted based upon user-prescribed link-level prioritization.
45. A link-level buffer sharing apparatus, comprising: a communications link having a transmitter end and a receiver end; a transmitter at said transmitter end of said link for transmitting data cells over said link; and a receiver at said receiver end of said link, said receiver having a plurality of buffers for storing data cells received from said transmitter via said link, wherein said receiver provides link-level buffer status information to said transmitter as data cells are received from said transmitter.
46. The apparatus according to claim 45, said communications link further comprising a plurality of virtual connections through which said data cells are transmitted from said transmitter to said receiver.
47. The apparatus according to claim 45, said transmitter further comprising: a first counter for counting a number of data cells transmitted by said transmitter over said link to said receiver, said first counter incrementing upon transmission of each data cell; and operating logic operative to increment said first counter.
48. The apparatus according to claim 45, said receiver further comprising: a second counter for counting a number of data cells currently stored in said plurality of buffers, said second counter incrementing upon storage of each data cell in a respective buffer and decrementing upon release of each data cell from a respective buffer; and operating logic operative to increment and decrement said second counter.
49. The apparatus according to claim 48, said receiver further comprising: a third counter for counting a number of data cells released from said plurality of buffers, said third counter incrementing upon release of each data cell from said respective buffer; and operating logic operative to increment said third counter, wherein said status information comprises a value of said third counter.
50. The apparatus according to claim 48, said receiver further comprising: a fourth counter for counting a number of data cells received by said receiver from said transmitter over said link, said fourth counter incrementing upon receipt of each data cell from said transmitter; and operating logic operative to increment said fourth counter, wherein said status information comprises a value of said fourth counter.
51. A link buffer sharing apparatus for use with a plurality of buffers within a receiver, said receiver connected to a transmitter via at least two virtual connections in a link and receiving in said buffers a plurality of data cells from said transmitter, said apparatus comprising: a transmitter buffer counter associated with said transmitter for counting data cells currently in transmission over said link from said transmitter to said receiver and data cells stored within a subset of said buffers in said receiver; a buffer limit register associated with said transmitter for indicating a maximum number of buffers in said receiver for storage of data cells from said transmitter via said link; a transmitted cell counter associated with said transmitter for counting all data cells transmitted by said transmitter to said receiver over the link; a receiver buffer counter associated with said receiver for counting a number of buffers in said receiver storing data cells received from said transmitter via said link; a released cell counter associated with said receiver for counting all data cells released from buffers in said receiver; and operating logic operative to increment said transmitter buffer counter, said transmitted cell counter, said receiver buffer counter, and said released cell counter, operative to decrement said receiver buffer counter, and operative to adjust said transmitter buffer counter, wherein said transmitter buffer counter is incremented upon transmission of a data cell from said transmitter to said receiver via said link, wherein said transmitted cell counter is incremented upon transmission of a data cell from said transmitter to said receiver via said link, wherein said receiver buffer counter is incremented upon receipt of a data cell at said receiver from said transmitter via said link and is decremented upon release of a data cell from a respective buffer, and wherein said released cell counter is incremented upon release from said receiver of a data cell originally received from said transmitter via said link.
52. The apparatus according to claim 51, wherein said maximum number of buffers in said receiver for storage of data cells is dynamically adjustable.
53. The apparatus according to claim 51, wherein said link further comprises a physical link.
54. The apparatus according to claim 51, wherein said link further comprises a virtual link.
55. The apparatus according to claim 51, said operating logic comprising: a transmitter processor associated with said transmitter for updating said transmitter buffer counter, said buffer limit register, and said transmitted cell counter; and a receiver processor associated with said receiver for updating said receiver buffer counter and said released cell counter.
56. The apparatus according to claim 55, further comprising: an update release counter associated with said receiver and updated by said receiver processor, said update release counter for counting data cells released from buffers in said receiver during a specified interval; and a release limit register associated with said receiver and said receiver processor providing an indication of a maximum number of data cells to be released from buffers in said receiver during said specified interval.
57. The apparatus according to claim 56, wherein said maximum number of data cells to be released is dynamically adjustable.
58. The apparatus according to claim 56, wherein said receiver processor causes the transmission of the value of said transmitted cell counter minus said released cell counter to said transmitter when the value of said update release counter is equal to said release limit register, and wherein said transmitter processor receives said released cell counter value and loads said value into said transmitter buffer counter.
59. The apparatus according to claim 55, wherein said transmitter processor enables transmission of data cells in said virtual connections of said link in a first mode, and inhibits data cell transmission over all connections of said link in a second mode.
60. The apparatus according to claim 59, wherein said first mode is defined by said transmitter buffer counter being less than said buffer limit register, and said second mode is defined by said transmitter buffer counter being equal to or greater than said buffer limit register.
61. The apparatus according to claim 55, wherein said transmitter processor transmits a value of said transmitted cell counter to said receiver processor at a specified interval, and said receiver processor subtracts a value of said receiver buffer counter and a value of said released cell counter from said transmitted cell counter value to establish an errored cell count, said receiver processor incrementing said released cell counter by an amount equal to said errored cell count.
62. A link buffer sharing apparatus for use with a plurality of buffers within a receiver, said receiver connected to a transmitter via at least two virtual connections in a link and receiving in said buffers a plurality of data cells from said transmitter, said apparatus comprising: a transmitter buffer counter associated with said transmitter for counting data cells currently in transmission over said link from said transmitter to said receiver and data cells stored within a subset of said buffers in said receiver; a buffer limit register associated with said transmitter for indicating a maximum number of buffers in said receiver for storage of data cells from said transmitter via said link; a transmitted cell counter associated with said transmitter for counting all data cells transmitted by said transmitter to said receiver over the link; a receiver buffer counter associated with said receiver for counting a number of buffers in said receiver storing data cells received from said transmitter via said link; a received cell counter associated with said receiver for counting all data cells received from said transmitter via said link; and operating logic operative to increment said transmitter buffer counter, said transmitted cell counter, said receiver buffer counter, and said received cell counter, operative to decrement said receiver buffer counter, and operative to adjust said transmitter buffer counter, wherein said transmitter buffer counter is incremented upon transmission of a data cell from said transmitter to said receiver via said link, wherein said transmitted cell counter is incremented upon transmission of a data cell from said transmitter to said receiver via said link, wherein said receiver buffer counter is incremented upon receipt of a data cell at said receiver from said transmitter via said link and is decremented upon release of a data cell from a respective buffer, and wherein said received cell counter is incremented upon receipt of a data cell at said receiver from said transmitter via said link.
63. The apparatus according to claim 62, wherein said maximum number of buffers in said receiver for storage of data cells is dynamically adjustable.
64. The apparatus according to claim 62, wherein said link further comprises a physical link.
65. The apparatus according to claim 62, wherein said link further comprises a virtual link.
66. The apparatus according to claim 62, said operating logic comprising: a transmitter processor associated with said transmitter for updating said transmitter buffer counter, said buffer limit register, and said transmitted cell counter; and a receiver processor associated with said receiver for updating said receiver buffer counter and said received cell counter.
67. The apparatus according to claim 66, further comprising: an update release counter associated with said receiver and updated by said receiver processor, said update release counter for counting data cells released from buffers in said receiver during a specified interval; and a release limit register associated with said receiver and said receiver processor providing an indication of a maximum number of data cells to be released from buffers in said receiver during said specified interval.
68. The apparatus according to claim 67, wherein said maximum number of data cells to be released from buffers in said receiver is dynamically adjustable.
69. The apparatus according to claim 67, wherein said receiver processor causes the transmission of the value of the difference between said received cell counter and said receiver buffer counter subtracted from the value in said transmitted cell counter to said transmitter when the value of said update release counter is equal to said release limit register, and wherein said transmitter processor receives and loads said difference value into said transmitter buffer counter.
70. The apparatus according to claim 66, wherein said transmitter processor enables transmission of data cells in said virtual connections of said link in a first mode, and inhibits data cell transmission over all connections of said link in a second mode.
71. The apparatus according to claim 70, wherein said first mode is defined by said transmitter buffer counter being less than said buffer limit register, and said second mode is defined by said transmitter buffer counter being equal to or greater than said buffer limit register.
72. The apparatus according to claim 66, wherein said transmitter processor transmits a value of said transmitted cell counter to said receiver processor at a specified interval, and said receiver processor resets said received cell counter equal to said transmitted cell counter value.
73. A dynamically responsive link-level buffer sharing apparatus, comprising: a communications link having a transmitter end and a receiver end; a transmitter at said transmitter end of said link for transmitting data cells over said link according to dynamically adjusted transmission parameters; and a receiver at said receiver end of said link, said receiver having a plurality of buffers for storing data cells received from said transmitter via said link according to dynamically adjusted reception parameters, wherein said receiver provides link-level buffer status information to said transmitter as data cells are received from said transmitter.
74. The apparatus according to claim 73, wherein said dynamically adjusted transmission and reception parameters are adjusted in response to changes in data cell transmission rate.
75. The apparatus according to claim 73, wherein said dynamically adjusted transmission and reception parameters are adjusted in response to user-defined performance criteria.
76. A dynamically responsive connection-level buffer sharing apparatus, comprising: a communications connection having a transmitter end and a receiver end; a transmitter at said transmitter end of said connection for transmitting data cells over said connection according to dynamically adjusted transmission parameters; and a receiver at said receiver end of said connection, said receiver having a plurality of buffers for storing data cells received from said transmitter via said connection according to dynamically adjusted reception parameters, wherein said receiver provides connection-level buffer status information to said transmitter as data cells are received from said transmitter.
77. The apparatus according to claim 76, wherein said dynamically adjusted transmission and reception parameters are adjusted in response to changes in data cell transmission rate.
78. The apparatus according to claim 76, wherein said dynamically adjusted transmission and reception parameters are adjusted in response to user-defined performance criteria.
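
Claims 62 through 72 recite a counter discipline that can be read as two cooperating state machines, one at each end of the link: the transmitter gates cell transmission on its buffer counter and buffer limit register, while the receiver tracks received and buffered cells and, after a configured number of buffer releases, returns a value that resynchronises the transmitter's view of the shared buffer pool. The sketch below is illustrative only; it assumes Python, in-order cell delivery, and an in-band check cell carrying the transmitted cell count (as in claim 72), and every identifier in it (Transmitter, Receiver, on_check, and so on) is invented for the example rather than taken from the specification.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Transmitter:
    """Sending end of the link: gates transmission on the shared buffer budget."""
    buffer_limit: int            # max receiver buffers this link may occupy
    buffer_count: int = 0        # cells in flight plus cells believed still buffered
    transmitted_count: int = 0   # running count of every cell sent on the link

    def may_send(self) -> bool:
        # First mode: transmission enabled; second mode: inhibited for all
        # connections once the link's buffer allocation is exhausted.
        return self.buffer_count < self.buffer_limit

    def on_send(self) -> None:
        self.buffer_count += 1
        self.transmitted_count += 1

    def on_update(self, outstanding: int) -> None:
        # Reload the buffer counter from the receiver's report so that the
        # transmitter's and receiver's views of the pool converge.
        self.buffer_count = outstanding


@dataclass
class Receiver:
    """Receiving end of the link: shared buffer pool plus update generation."""
    release_limit: int                 # buffer releases between two updates
    receiver_buffer_count: int = 0     # buffers currently holding cells
    received_count: int = 0            # running count of every cell received
    update_release_count: int = 0      # releases since the last update
    last_transmitted_count: int = 0    # latest check value heard from the sender
    buffers: deque = field(default_factory=deque)

    def on_receive(self, cell) -> None:
        self.buffers.append(cell)
        self.receiver_buffer_count += 1
        self.received_count += 1

    def on_check(self, transmitted_count: int) -> None:
        # Periodic in-band check cell; resetting the received count to it
        # charges any cells lost on the link as if received and released,
        # so the shared pool does not leak buffers.
        self.last_transmitted_count = transmitted_count
        self.received_count = transmitted_count

    def on_release(self):
        """Free one buffer; return the cell and, when due, an update value."""
        cell = self.buffers.popleft()
        self.receiver_buffer_count -= 1
        self.update_release_count += 1
        update = None
        if self.update_release_count == self.release_limit:
            released = self.received_count - self.receiver_buffer_count
            update = self.last_transmitted_count - released
            self.update_release_count = 0
        return cell, update
```

A short, equally hypothetical driver shows the round trip: cells are sent while the first mode holds, a check cell refreshes the receiver's view of the transmitted count, and the update value produced after the configured number of releases reloads the transmitter buffer counter.

```python
tx, rx = Transmitter(buffer_limit=4), Receiver(release_limit=2)
for n in range(3):
    if tx.may_send():
        tx.on_send()
        rx.on_receive(f"cell-{n}")
rx.on_check(tx.transmitted_count)      # periodic check cell, as in claim 72
for _ in range(2):
    _, update = rx.on_release()
    if update is not None:
        tx.on_update(update)           # resynchronisation, as in claim 69
```

In this reading, the transmitter's second mode (claims 70 and 71) falls out of may_send() returning False, and the update value computed in on_release() is the quantity recited in claim 69: the transmitted cell count less the cells the receiver has already released.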
PCT/US1996/011934 1995-07-19 1996-07-18 Link buffer sharing method and apparatus WO1997004556A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP9506875A JPH11511303A (en) 1995-07-19 1996-07-18 Method and apparatus for sharing link buffer
PCT/US1996/011934 WO1997004556A1 (en) 1995-07-19 1996-07-18 Link buffer sharing method and apparatus
AU65019/96A AU6501996A (en) 1995-07-19 1996-07-18 Link buffer sharing method and apparatus

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US149895P 1995-07-19 1995-07-19
US60/001,498 1995-07-19
PCT/US1996/011934 WO1997004556A1 (en) 1995-07-19 1996-07-18 Link buffer sharing method and apparatus

Publications (1)

Publication Number Publication Date
WO1997004556A1 (en)

Family

ID=38659684

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/011934 WO1997004556A1 (en) 1995-07-19 1996-07-18 Link buffer sharing method and apparatus

Country Status (3)

Country Link
JP (1) JPH11511303A (en)
AU (1) AU6501996A (en)
WO (1) WO1997004556A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022137816A (en) 2021-03-09 2022-09-22 富士通株式会社 Information processing device and control method for information processing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4603382A (en) * 1984-02-27 1986-07-29 International Business Machines Corporation Dynamic buffer reallocation
US5093912A (en) * 1989-06-26 1992-03-03 International Business Machines Corporation Dynamic resource pool expansion and contraction in multiprocessing environments
US5483526A (en) * 1994-07-20 1996-01-09 Digital Equipment Corporation Resynchronization method and apparatus for local memory buffers management for an ATM adapter implementing credit based flow control
US5533009A (en) * 1995-02-03 1996-07-02 Bell Communications Research, Inc. Bandwidth management and access control for an ATM network

Also Published As

Publication number Publication date
AU6501996A (en) 1997-02-18
JPH11511303A (en) 1999-09-28

Similar Documents

Publication Publication Date Title
US5896511A (en) Method and apparatus for providing buffer state flow control at the link level in addition to flow control on a per-connection basis
US6717912B1 (en) Fair discard system
US6625121B1 (en) Dynamically delisting and relisting multicast destinations in a network switching node
WO1997003549A2 (en) Prioritized access to shared buffers
JP2753468B2 (en) Digital communication controller
EP0719012B1 (en) Traffic management and congestion control for packet-based network
US6456590B1 (en) Static and dynamic flow control using virtual input queueing for shared memory ethernet switches
JP2693266B2 (en) Data cell congestion control method in communication network
AU714901B2 (en) Arrangement and method relating to packet flow control
CA2214838C (en) Broadband switching system
AU719514B2 (en) Broadband switching system
US6526062B1 (en) System and method for scheduling and rescheduling the transmission of cell objects of different traffic types
US6768717B1 (en) Apparatus and method for traffic shaping in a network switch
US6249819B1 (en) Method for flow controlling ATM traffic
WO1997004546A1 (en) Method and apparatus for reducing information loss in a communications network
EP0872091A1 (en) Controlled available bit rate service in an atm switch
WO1997004557A1 (en) Minimum guaranteed cell rate method and apparatus
WO1997004556A1 (en) Link buffer sharing method and apparatus
EP0839420A1 (en) Allocated and dynamic bandwidth management
Kosak et al. Buffer management and flow control in the credit net ATM host interface
JPH11510009A (en) Assignable and dynamic switch flow control
US7450510B1 (en) System and method for distributing guaranteed bandwidth among service groups in a network node
WO1997004563A1 (en) Joint flow control mechanism in a telecommunications network
Katevenis et al. Multi-queue management and scheduling for improved QoS in communication networks
WO1997004571A1 (en) Method and apparatus for emulating a circuit connection in a cell based communications network

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref country code: JP

Ref document number: 1997 506875

Kind code of ref document: A

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase