US20070280258A1 - Method and apparatus for performing link aggregation - Google Patents
- Publication number: US20070280258A1 (application US 11/605,829)
- Authority: US (United States)
- Prior art keywords
- flow
- egress
- key
- cam
- ingress
- Prior art date
- Legal status (an assumption, not a legal conclusion): Granted
Classifications
- H04L45/245—Link aggregation, e.g. trunking
- H04L45/38—Flow based routing
- H04L45/72—Routing based on the source address
- H04L45/741—Routing in networks with a plurality of addressing schemes, e.g. with both IPv4 and IPv6
- H04L45/745—Address table lookup; Address filtering
- H04L45/7452—Multiple parallel or consecutive lookup operations
- H04L45/74591—Address table lookup; Address filtering using content-addressable memories [CAM]
- H04L47/2483—Traffic characterised by specific attributes, e.g. priority or QoS, involving identification of individual flows
- H04L49/3009—Header conversion, routing tables or routing tags
- H04L49/35—Switches specially adapted for specific applications
- H04L49/351—Switches specially adapted for local area network [LAN], e.g. Ethernet switches
- H04L2012/5624—Path aspects, e.g. path bundling
- Y02D30/50—Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate
Definitions
- Link aggregation allows for the grouping of multiple physical links or ports within a network node into a single aggregated interface.
- Aggregated interfaces can be used for increasing bandwidth of an interface and for providing port level redundancy within an interface.
- An ingress interface on a line card residing in the network node receives flows including multiple packets and forwards these flows to port members of an aggregated group associated with an egress interface.
- Line cards may utilize Content Addressable Memory (CAM) to increase the speed of link aggregation and minimize the effects of search latency.
- CAM, however, is expensive and, together with static RAM or other logic, consumes a significant amount of power and takes up board space.
- the number of entries in the CAM used for link aggregation greatly expands as the number of aggregated links increases.
- the CAM has a limited number of entries for performing other necessary and useful functions, including functions associated with a multi-service network node.
- a network node or corresponding method in accordance with an embodiment of the present invention reduces a number of CAM entries required to perform link aggregation.
- a first mapping unit maps a given ingress flow to an egress flow identifier.
- a second mapping unit maps the egress flow identifier to a member of an aggregated group associated with an egress interface based on information available in the given ingress flow.
- a flow forwarding unit forwards the given ingress flow to the member of the aggregated group associated with the egress interface.
- FIG. 1 is a network diagram of a portion of a communications network employing an embodiment of the present invention
- FIG. 2 is a block diagram of an example switch used in a communications network
- FIG. 3 is a block diagram of a switch that includes an ingress line card with example components
- FIGS. 4-6 are block diagrams of a switch illustrating multiple operations of the example components in an ingress line card according to embodiments of the present invention
- FIGS. 7-8 are block diagrams illustrating example components in a node of a communications network according to embodiments of the present invention.
- FIGS. 9-11 are example flow diagrams performed by elements of a communications system according to embodiments of the present invention.
- FIG. 1 illustrates a network with switches that use link aggregation.
- FIG. 2 illustrates an example switch with line cards and a switch matrix supporting Layer 2 switching that may use CAM with a lookup table to support the switching.
- Example embodiments of the present invention illustrated in FIGS. 4-11 further reduce the number of CAM entries needed by performing two different successive lookups.
- the tradeoff for reducing the number of CAM entries is increased latency because of the additional lookup.
- the latency may be reduced by dividing the CAM into multiple cascaded CAMs and performing multiple lookups in parallel. For example, a CAM may be divided into four CAMs with each CAM dedicated to a portion of the VLANs supported by an ingress interface.
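- The cascaded-CAM idea above can be sketched as follows. This is an illustrative model only: the shard count, the 12-bit VLAN ID space, and the dict-based tables are assumptions for the sketch, not details from the patent.

```python
# Illustrative model: split one logical CAM into four shards, each
# dedicated to a quarter of the 12-bit VLAN ID space, so lookups for
# different VLAN ranges could proceed in parallel in hardware.

NUM_SHARDS = 4
VLAN_SPACE = 4096  # 12-bit VLAN identifiers (assumption)

def shard_for_vlan(vlan_id: int) -> int:
    """Pick the CAM shard that holds entries for this VLAN."""
    return vlan_id // (VLAN_SPACE // NUM_SHARDS)

# Each shard is modeled as a dict from search key to result address.
shards = [dict() for _ in range(NUM_SHARDS)]

def cam_insert(vlan_id: int, key, result) -> None:
    shards[shard_for_vlan(vlan_id)][key] = result

def cam_lookup(vlan_id: int, key):
    return shards[shard_for_vlan(vlan_id)].get(key)
```

In hardware, the four shard lookups would run concurrently; here the shard selection simply shows how VLAN ranges partition the entries.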
- FIGS. 1-11 are presented in detail below, in turn.
- FIG. 1 is a network diagram of a portion of a communications network 100 employing an example embodiment of the present invention.
- This portion of the communications system 100 includes two switches 110 , 120 (Switch A and Switch B).
- Switch A 110 may include any number of ingress ports 114 a, 114 b, . . . , 114 n ( 114 a - n ), 118 a, 118 b, . . . , 118 n ( 118 a - n ), and so forth, connected through physical links 113 and 117 , respectively, to other network nodes (not shown).
- Switch A 110 may logically bond together groups of the physical links 113 , 117 , connected to respective groups of ingress ports 114 a - n, 118 a - n, into link aggregation groups (LAGs) 112 , 116 , respectively.
- the link aggregation groups 112 , 116 may be maintained according to a link aggregation configuration hierarchy that includes an aggregator (not shown) associated with each group of ingress ports 114 a - n and 118 a - n.
- a logical interface (not shown) can be built on the aggregator with the associated physical ports being part of the logical interface.
- each link aggregation group 112 , 116 has a uniquely assigned Media Access Control (MAC) address and an identifier.
- This MAC address can be assigned from the MAC address of one of the ports in a link aggregation group or from a pool of reserved MAC addresses not associated with any of the ports in the link aggregation group.
- the MAC address is used as a source address when transmitting and as a destination address when receiving.
- Switch B 120 may similarly have any number of egress ports 124 a, 124 b, . . . , 124 n ( 124 a - n ), 128 a, 128 b, . . . , 128 n ( 128 a - n ), and so forth, connected through physical links 123 , 127 , respectively, to other network nodes (not shown).
- Switch B 120 may logically bind together groups of the physical links 123 , 127 connected to respective groups of egress ports 124 a - n, 128 a - n, into respective link aggregation groups 122 , 126 .
- Switch A 110 may also have egress ports 119 a, 119 b, 119 c, and 119 d connected through respective physical links 125 to ingress ports 129 a, 129 b, 129 c, and 129 d of Switch B 120 . Both Switch A 110 and Switch B 120 may bind together the group of the physical links 125 connecting the two switches 110 , 120 into a link aggregation group 130 .
- a given flow including any number of packets 111 a, 111 b, . . . , 111 n ( 111 a - n ), may be transmitted from another network node to Switch A 110 via the physical link connected to ingress port 114 a.
- the given flow may include multiple packets having the same source and destination addresses. Packets that are not members of the given flow may be interspersed among packets (e.g., packets 111 a - n ) that are members of the given flow.
- Switch A 110 may transmit or forward the same or a different flow, including packets 131 a, 131 b, . . . , 131 n ( 131 a - n ), to Switch B 120 via at least one of the physical links 125 connecting Switch A's egress ports 119 a - d to Switch B's ingress ports 129 a - d.
- Switch B 120 may transmit the same or a different flow, including packets 121 a, 121 b, . . .
- the aggregator may distribute received frames from a higher-layer application to one of the links used by the aggregator.
- the aggregator may transmit received frames from one of the links on a link aggregation group to a higher layer application in the order that they are received.
- the aggregator may operate according to two modes: incremental bandwidth mode and link protection mode.
- In incremental bandwidth mode, a user can increase or decrease the bandwidth of interfaces built on an aggregator by adding or deleting members to or from the link aggregation group. For example, a user may wish to upgrade from a 100 Megabit fast Ethernet link without subscribing to a costly Gigabit Ethernet link.
- In incremental bandwidth mode, the user can take two 100 Megabit fast Ethernet links and bond them together using link aggregation to get effectively 200 Megabits of bandwidth.
- an “active” member is the only member within an aggregator that can transmit, while all members of the aggregator can receive.
- the maximum bandwidth of an interface that is built on the aggregator is the bandwidth of a single member and not the sum of all the members as in incremental bandwidth mode. Thus, the other members are reserved for future use in case the “active” member goes down.
- FIG. 2 is a block diagram of an example switch 200 (Switch A) used in a communications network.
- Switch A 210 may include multiple ingress line cards, such as ingress line cards A and B 232 , 234 , connected to multiple egress line cards, such as egress line cards A and B 233 , 235 , via a switch fabric 240 .
- a flow 209 including any number of packets 211 a, 211 b, . . . , 211 n ( 211 a - n ), may be transmitted to Switch A 210 via a link member 213 of a link aggregation group 212 associated with ingress line card A 232 .
- the ingress interface may not be aggregated.
- the ingress line card A 232 determines the appropriate egress line card and egress line card port to forward the flow 209 and forwards the flow 209 via the switch fabric 240 to one of the egress line cards 233 , 235 .
- a flow including packets 231 a, 231 b, . . . , 231 n ( 231 a - n ) may be forwarded to a link of another link aggregation group 222 .
- FIG. 3 is a block diagram of a switch 300 that includes an ingress line card 332 illustrating example components of the ingress line card 332 .
- the switch 300 also includes a switch fabric 340 and an egress line card 333 .
- the ingress line card 332 includes a packet processor 330 , logic 336 , a central processing unit (CPU) 334 , and Content Addressable Memory (CAM) 335 .
- the packet processor 330 connects to the logic 336 via a bidirectional line 345 .
- the logic 336 formats data from the result SRAM 337 in a way that the packet processor 330 understands, and the logic 336 formats data from the packet processor 330 in a way that the CAM 335 understands.
- the logic 336 connects to Content Addressable Memory (CAM) 335 , and the CAM 335 , in turn, connects to a result Static Random Access Memory (SRAM) 337 .
- the result SRAM 337 then connects back to the logic 336 .
- the logic 336 may be programmed into a Field Programmable Gate Array (FPGA).
- the packet processor 330 , via the logic 336 , may access information, such as keys 338 (shown as sets of numbers within brackets), that is organized and stored in the CAM 335 .
- the CAM 335 may have a maximum of 512,000 entries that are 72 bits wide or 256,000 entries that are 144 bits wide. Each CAM entry may have a corresponding SRAM entry.
- the result SRAM 337 may have at least 512,000 or 256,000 entries if the CAM has 512,000 or 256,000 entries, respectively.
- the result SRAM 337 may have 192-bit-wide entries to accommodate other information besides an egress aggregate flow identifier and a flag.
- the ingress line card 332 includes ingress ports 314 a, 314 b, 314 c, and 314 d ( 314 a - d ).
- the ingress line card 332 may bond together the ingress ports 314 a - d into an ingress link aggregation group 312 .
- the ingress line card 332 connects through the switch fabric 340 to the egress line card 333 having egress ports 319 a, 319 b, 319 c, and 319 d ( 319 a - d ).
- the egress line card 333 may also bond together the egress ports 319 a - d into an egress link aggregation group 322 .
- any number of the ingress ports 314 a - d and egress ports 319 a - d may not be logically bound together into link aggregation groups, such as the ingress and egress link aggregation groups 312 , 322 .
- a network operator may provision (or signal) the Ingress Line Card 332 with configuration settings using an embodiment of the present invention. For example, the network operator may enter configuration information for a customer using VLAN ID 10 on a given fast Ethernet interface via an operator interface. In this manner, the network operator builds a circuit on the fast Ethernet interface of VLAN ID 10 .
- the CPU 334 may then program the CAM 335 , the result SRAM 337 , and the packet processor 330 via the logic 336 .
- the CPU 334 may execute a lower layer of software that programs the appropriate CAM entries (i.e., CAM keys and corresponding SRAM results) via the logic 336 .
- the CPU 334 may also program the packet processor 330 with microcode instructions to analyze a given ingress flow and access information from the CAM 335 and result SRAM 337 in order to determine a link on which to forward a given ingress flow.
- the CPU 334 may further program the packet processor 330 with the encapsulation type of port 314 a of the ingress interface.
- the encapsulation type is layer 2 switched VLAN traffic.
- the packet processor 330 (1) receives a flow 309 , including multiple packets 311 a, 311 b, . . . , 311 n, from one of the ingress ports 314 a - n (in this example, port 314 a ) of the ingress line card 332 , (2) builds a key 325 based on the contents of the flow 309 , and (3) launches a CAM lookup with the key 325 .
- the packet processor 330 executes these functions because it can do so at a significantly greater packet rate than a CPU (e.g., more than 50 million packets per second).
- the key 325 may include four key parameters: an ingress interface identifier 323 a, VLAN identifier 323 b, three-bit priority 323 c, and hash value 323 d ( ⁇ L2FlowId, VLAN, Priority, Hash ⁇ ).
- the key 325 may include different key parameters for key types, such as Internet Protocol (IP), Ethernet, or other non-VLAN key types.
- When the packet processor 330 receives the flow 309 from the port 314 a of the ingress link aggregation group 312 , it populates the key's first entry 323 a with the layer 2 flow identifier identifying the ingress interface (e.g., “1000”).
- the packet processor 330 looks at the flow's Ethernet header (not shown) to make sure the packet headers are correct and to identify the flow's VLAN tag or identifier that identifies the VLAN with which the flow 309 is associated.
- the packet processor 330 looks up the VLAN identifier to determine on which interface to send out the flow 309 and, optionally, swaps in a new VLAN identifier in place of the one the flow 309 had when the switch 310 received it.
- the packet processor 330 also extracts the priority from a priority field, such as a three-bit priority field, in the VLAN header that is used to prioritize flows, and populates the priority key parameter field 323 c with the priority (e.g., priority “0”). Finally, depending on the flow type, the packet processor 330 extracts the source and destination addresses from the flow's Ethernet headers and runs an algorithm on the source and destination addresses to calculate a hash value, such as a four-bit hash value. The packet processor 330 populates the hash key parameter field 323 d with this hash value. The hash value indicates the specific egress port member of the egress link aggregation group 322 to which to forward the flow 309 . Note that in a switch having an egress interface that is not link aggregated, the CAM keys may not include a hash field because there is no need for link aggregation.
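- As a rough sketch, the four key parameters {L2FlowId, VLAN, Priority, Hash} might be packed into a single CAM search key as below. The field widths and bit positions are illustrative assumptions, not figures from the patent.

```python
# Illustrative packing of the four key parameters into one integer key.
# Assumed widths: 12-bit VLAN, 3-bit priority, 4-bit hash; the L2 flow
# identifier occupies the remaining high-order bits.

def build_key(l2_flow_id: int, vlan: int, priority: int, hash4: int) -> int:
    assert 0 <= vlan < 4096      # 12-bit VLAN identifier
    assert 0 <= priority < 8     # 3-bit priority from the VLAN header
    assert 0 <= hash4 < 16       # 4-bit hash of src/dst MAC addresses
    return (l2_flow_id << 19) | (vlan << 7) | (priority << 4) | hash4

# Key for ingress interface "1000", VLAN 10, priority 0, hash 0b11:
key = build_key(l2_flow_id=1000, vlan=10, priority=0, hash4=0b11)
```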
- a hashing technique decreases the size of a table (e.g., CAM and corresponding result SRAM) that must be indexed to determine the specific egress port member to which to forward a flow.
- with a hashing technique, a 48-bit source MAC address may be compressed to a smaller number, such as a 10-bit number.
- Hashing produces duplicates or “collisions.” For example, many distinct 48-bit values compress to the same 10-bit number, so a table may have multiple entries at a certain index that hash to the same value. Because the table to be searched is much smaller, however, hashing increases the efficiency of a lookup.
- a hashing algorithm compresses a 48-bit source MAC address and a 48-bit destination MAC address to 4 bits. In other words, 96 bits of information are compressed to a 4-bit number. Thus, many combinations of source and destination MAC addresses can hash to the same 4-bit value. If there is a larger number of flows, there is a better chance of getting an equal distribution across all egress port members.
- the hashing is typically random enough to provide some variance so that traffic is distributed evenly across the links.
- the hashing may be a CRC or an exclusive-or (XOR) type of operation. Hashing may also be performed on a per flow basis or on a per packet basis.
- With per-flow hashing, whether there are two flows or a thousand flows, if the flows originate from the same source MAC address and are destined to the same destination MAC address, they all hash to the same link because the same hashing operation is performed on each of the flows.
- Variance in the source and destination MAC addresses of the flows causes the flows to be distributed across multiple links. For example, if several flows originate from the same source MAC address but are destined to different destination MAC addresses, there is a greater probability that some flows will hash to a first link and some flows will hash to a second link.
- a hashing operation may also be performed on individual packets, which are distributed across different links based on the hashing, even if those packets are part of the same flow.
- individual packets may arrive at a receiving side out of order. As a result, the receiving side must put individual packets back in order. This involves a significant amount of overhead and some protocols cannot handle packets that arrive out of order. Therefore, a hashing algorithm may be run on a per flow basis, and the flows are distributed accordingly to ensure that packets associated with the same flow arrive in order.
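- A minimal per-flow hash along these lines can be sketched as an XOR-fold of the 48-bit source and destination MAC addresses down to four bits. The folding steps are an illustrative choice (the patent mentions CRC or XOR operations generally, not this exact algorithm).

```python
# Illustrative per-flow hash: XOR-fold two 48-bit MAC addresses to 4 bits.
# Packets of the same flow (same src/dst pair) always hash identically,
# so they stay on one link and arrive in order at the receiver.

def mac_hash4(src_mac: int, dst_mac: int) -> int:
    v = src_mac ^ dst_mac   # combine the two 48-bit addresses
    v ^= v >> 24            # fold the upper half onto the lower half
    v ^= v >> 12
    v ^= v >> 8
    v ^= v >> 4
    return v & 0xF          # keep the low 4 bits

# Same flow -> same hash, every time:
h1 = mac_hash4(0x001122334455, 0x66778899AABB)
h2 = mac_hash4(0x001122334455, 0x66778899AABB)
assert h1 == h2
```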
- the two least significant bits of the four-bit hash value identify the egress port members ( 319 a - d ) of the egress link aggregation group 322 .
- a hash value of “00” identifies egress port member 319 a
- a hash value of “01” identifies egress port member 319 b
- a hash value of “10” identifies egress port member 319 c
- a hash value of “11” identifies egress port member 319 d.
- a four-bit hash value may be used that supports up to sixteen egress port members. Other numbers of bits used for hash values support other numbers of egress port members.
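- The member selection described above can be sketched directly: the two least significant bits of the four-bit hash index one of the four egress port members, and all four bits could address up to sixteen members. The list below uses the reference numerals from FIG. 3 for illustration.

```python
# Two LSBs of the 4-bit hash select among four egress port members
# (319a-d in FIG. 3); a full 4-bit index would support 16 members.

EGRESS_MEMBERS = ["319a", "319b", "319c", "319d"]

def select_member(hash4: int) -> str:
    return EGRESS_MEMBERS[hash4 & 0b11]  # two LSBs index the member

assert select_member(0b00) == "319a"
assert select_member(0b1011) == "319d"   # "11" in the two LSBs
```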
- After the packet processor populates the key 325 with key parameters 323 a - d, it launches a lookup with the key 325 . Specifically, this lookup causes a search of keys 338 in the CAM 335 for a matching key. If there is a match, the CAM 335 returns an address 341 that indexes another lookup table 338 in the result SRAM 337 that has the CAM result, which may include the egress port member identifier 343 .
- the address 341 may be an index or a pointer to some area in the result SRAM 337 .
- the egress port member identifier 343 may include, for multiple egress line cards 333 , a destination egress line card identifier and an output connection identifier (OCID).
- the contents of the result SRAM 337 indexed by the CAM result are then provided to the packet processor 330 .
- the packet processor 330 then forwards the flow 309 to the appropriate egress port member (e.g., port member 319 d identified by hash value “11”) via switch fabric 340 based on the egress port member identifier 343 .
- flows (e.g., flow 309 ) from one VLAN identified by the number “10” ( 323 b ) may come into the packet processor 330 through an ingress port member (e.g., port member 314 a ) of the ingress link aggregation group 312 .
- the egress link aggregation group 322 of the egress line card 333 may include only two active port members (e.g., port members 319 a, 319 b ).
- two CAM entries are used to allow incoming traffic flows to hash to the two port or link members 319 a, 319 b (e.g., ⁇ 1000, 10, 0, x0 ⁇ and ⁇ 1000, 10, 0, x1 ⁇ ).
- a given link can support multiple VLANs (i.e., “logical subinterfaces”).
- another VLAN (e.g., “11”) on the same ingress interface (e.g., “1000” ( 323 a )) associated with the ingress link aggregation group 312 may be forwarded to the same two port members 319 a, 319 b.
- another two CAM entries are used (e.g., ⁇ 1000, 11, 0, x0 ⁇ and ⁇ 1000, 11, 0, x1 ⁇ ).
- if the egress link aggregation group 322 includes four active port members 319 a - d, sixteen CAM entries ( 338 ) are used to cover the four VLANs identified by the numbers 10 - 13 (four entries per VLAN).
- the number of CAM entries is equal to the number of VLANs a user desires to support multiplied by the number of aggregated egress links or port members of the egress link aggregation group 322 .
- many CAM entries are used.
- the ingress line card 332 may support 4,000 VLANs, numbered 10 to 4009
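- The entry-count arithmetic above can be checked directly, using the numbers from the example: four aggregated port members, with either the four VLANs 10-13 or the 4,000 VLANs 10-4009.

```python
# With one hash-qualified lookup, the CAM needs an entry for every
# (VLAN, aggregated egress member) combination: a product, not a sum.

def single_lookup_entries(num_vlans: int, num_members: int) -> int:
    return num_vlans * num_members

assert single_lookup_entries(4, 4) == 16        # VLANs 10-13, four members
assert single_lookup_entries(4000, 4) == 16000  # 4,000 VLANs, four members
```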
- FIG. 4 is a block diagram of a switch 400 illustrating example components in an ingress line card 412 according to an embodiment of the present invention.
- FIG. 4 illustrates a new manner in which to set up the CAM entries to provide packet or flow distribution across outgoing links, i.e., to determine the outgoing links to which to direct flows as a function of the incoming flow.
- a switch 410 includes an ingress line card 412 , switch fabric 440 , and an egress line card 433 .
- the ingress line card 412 includes a packet processor 430 , CPU 434 , CAM 435 , logic 436 , and Result SRAM 437 .
- the CAM may be a Ternary CAM (TCAM), in which each key bit has three possible match states: a binary 0, a binary 1, or “Don't Care” (i.e., matching either a binary 0 or a binary 1).
- the packet processor 430 receives a flow 409 , including multiple packets 411 a, 411 b, . . . , 411 n, through an ingress port 414 .
- the packet processor 430 then builds a first key 421 formatted to hit a CAM entry and launches a first CAM lookup.
- the first key 421 includes three key parameters.
- the first key parameter 451 a is a layer 2 flow identifier, which identifies the interface from where the flow 409 originated.
- the second key parameter 451 b is a VLAN identifier which the packet processor 430 extracts from the header of the packets 411 a - n in the flow 409 .
- the third key parameter 451 c is a priority which the packet processor 430 also extracts from the header of the packets 411 a - n in the flow 409 .
- the first key 421 does not include a hash key parameter.
- the packet processor 430 does not extract source and destination addresses, such as a MAC or IP address, from the flow 409 and calculate a hash value when it builds the first key 421 .
- After the packet processor 430 builds the first key 421 , it launches a first lookup by sending the first key 421 to the CAM 435 .
- the CAM 435 searches a first lookup table 438 for a matching key and returns an address or first index 441 used to index the Result SRAM 437 .
- the information contained in an entry of the Result SRAM located at the first index 441 may be a first result 443 that includes an “aggregated” bit or flag and the egress aggregate flow identifier.
- the “aggregated” bit indicates to the packet processor 430 that it should launch a second CAM lookup (also referred to as the “aggregated lookup”) because the egress interface associated with the VLAN ingress flow is aggregated.
- the egress aggregate flow identifier for example, may be an 18-bit number.
- the packet processor 430 then builds a second key 423 formatted to hit another CAM entry.
- the second key 423 includes four key parameters.
- the first key parameter is the flow type key parameter 453 a.
- the flow type key parameter 453 a identifies what type of flow is being sent out on an aggregated interface, such as the egress line card 433 .
- When the packet processor 430 builds the second key 423 , it already knows the flow type of the flow 409 from the first lookup.
- the flow type key parameter 453 a is used to distinguish between different forwarded flows that are traversing the same egress aggregated interface. For example, if layer 2 traffic and IP traffic are both traversing the same Resource Reservation Protocol (RSVP) Label-Switched Path (LSP), then the flow type key parameter 453 a is used to distinguish the layer 2 flow from the IP flow.
- the ingress line card 412 and the egress line card 433 may receive and send, respectively, multiple flows of different types.
- the flows may include IP flows and layer 2 switched VLAN flows.
- the second key parameter is the egress aggregate flow identifier 453 b.
- This parameter is a globally unique node- or router-wide flow identifier that is allocated and associated with every egress logical flow that is built on an aggregated interface.
- the second lookup identifies the traffic characteristics of that flow.
- different flows can be assigned different types of traffic parameters. One flow may have a higher priority than another flow. In preferred embodiments, one flow does not interfere with another flow. The different types of flows may be identified using this aggregate flow identifier, and each may be given a certain type of treatment.
- the third key parameter is a miscellaneous key parameter 453 c.
- This key parameter may provide additional information that is specific to the flow type 453 a and the egress aggregate flow identifier 453 b.
- the miscellaneous key parameter 453 c is used to make a more qualified decision as to which Output Connection Identifier (OCID) to choose.
- the second CAM lookup (i.e., the aggregate CAM lookup) may also need to take into account the Virtual Private LAN Service (VPLS) instance identifier in order to obtain the final OCID to be used for that LSP. In this embodiment, however, the miscellaneous key parameter 453 c is not used.
- the last key parameter is the hash value 453 d, which is calculated based on the source and destination MAC addresses of the flow 409 .
- After the packet processor 430 builds the second key 423 , it launches a second CAM lookup by providing the second key 423 to the CAM 435 .
- the CAM 435 searches a second lookup table 439 for a key matching the second key 423 and provides an address or first index 441 used to index the Result SRAM 437 .
- the contents of the Result SRAM 437 at the first index 441 are a first result 443 , which may include an egress port member identifier.
- the egress port member identifier may include, for multiple egress line cards ( 433 ), a destination egress line card identifier identifying the egress line card to which to forward the flow 409 , and an OCID identifying the port member of the egress line card to which to forward the flow 409 .
- the packet processor 430 then forwards the flow 409 to the appropriate egress port member (e.g., a port member identified by hash value “x1”) via the switch fabric 440 .
- a first lookup operation involves mapping an incoming flow that arrives on an incoming interface to an outgoing aggregated flow identifier.
- the first lookup operation may involve mapping an ⁇ interface, flow ⁇ tuple to the outgoing aggregated flow identifier.
- a second lookup operation involves mapping the outgoing aggregated flow identifier to an outgoing link member of the aggregated group.
- the outgoing aggregate flow identifier links the first lookup operation to the second lookup operation.
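- The two linked lookups can be modeled as a pair of tables keyed as described above. The table contents, field names, and the two-active-member hash qualification below are illustrative assumptions, not values from the patent.

```python
# Illustrative model of the double lookup. The first table maps
# {interface, VLAN, priority} to an (aggregated flag, egress aggregate
# flow id); the second maps {flow type, aggregate flow id, hash} to an
# egress member. The aggregate flow id links the two lookups.

first_table = {
    # (l2_flow_id, vlan, priority) -> (aggregated?, egress_agg_flow_id)
    (1000, 10, 0): (True, 0x2A5),
}

second_table = {
    # (flow_type, egress_agg_flow_id, hash_bit) -> (egress_card, ocid)
    ("vlan", 0x2A5, 0): ("card433", "319a"),
    ("vlan", 0x2A5, 1): ("card433", "319b"),
}

def forward(l2_flow_id, vlan, priority, hash4, flow_type="vlan"):
    aggregated, agg_flow_id = first_table[(l2_flow_id, vlan, priority)]
    if not aggregated:
        return None  # one lookup suffices for a non-aggregated egress
    # Second ("aggregated") lookup, qualified by the hash; with two
    # active members only one hash bit is needed in this sketch.
    return second_table[(flow_type, agg_flow_id, hash4 & 0b1)]

assert forward(1000, 10, 0, 0b0101) == ("card433", "319b")
```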
- example embodiments of the present invention re-organize the keys in the CAM so that the first lookup is independent of the hash value. It is the use of the hash value that requires a significant number of CAM entries because each VLAN, for example, needs CAM entries corresponding to every possible hash value. In the re-organized scheme, the possible hash values appear only in the second lookup.
- the number of CAM entries required by example embodiments is approximately equal to the number of ingress flows supported by an ingress interface plus the number of members of the aggregated group associated with the egress interface.
- a switch may have multiple egress interfaces, each of which is aggregated and has eight members.
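With illustrative numbers (1000 ingress flows, a four-bit hash with sixteen possible values, and one aggregated egress interface with eight members), the claimed saving can be checked with simple arithmetic. The counts and variable names below are assumptions chosen for the example:

```python
flows = 1000       # ingress flows supported by an ingress interface
hash_values = 16   # possible values of a four-bit hash
members = 8        # members of the aggregated egress group

# Hash-in-the-first-key scheme: each flow needs an entry for every
# possible hash value.
single_lookup_entries = flows * hash_values

# Double-lookup scheme: roughly one entry per ingress flow plus one
# entry per member of the aggregated group.
double_lookup_entries = flows + members

print(single_lookup_entries, double_lookup_entries)  # 16000 1008
```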
- a primary advantage of the double lookup embodiment is scalability. That is, fewer CAM entries are used for a greater number of flows. However, the number of CAM entries is reduced at the expense of having to do one more look up.
- a switch is typically designed to minimize latency. If there is too much latency, packets take longer to get through the switch, and packets need to be buffered for a greater length of time.
- Embodiments of the present invention increase latency by performing two successive lookups instead of increasing the number of CAM entries. Adding CAM to a switch may increase the latency by a given number of clock cycles, but performing a second lookup may increase latency, for example, by half the given number of clock cycles.
- switching or routing devices employing embodiments of the present invention may support frame relay services, ATM services, Ethernet, GigaEthernet (GigE), IP, IPv6, MPLS, VLAN. These services, whether they involve switching or routing, each require CAM resources in order to perform the forwarding function.
- Link aggregation is often implemented in pure layer 2 Ethernet switches. In this case, there is no concern about using up CAM resources. In fact, the switch may not use a CAM. For example, the switch may use a different data structure that is optimized strictly for layer 2 Ethernet. But, a CAM is the most flexible hardware today in a switch or router that supports multiple service types.
- In some configurations, CAMs support only serial lookups. For example, in a system with four CAMs, a lookup operation involves searching each of the four CAMs one at a time until there is a match.
- a CAM may be designed to support parallel lookups in order to decrease the latency introduced by embodiments of the present invention.
- the first and second lookups involve performing four parallel lookups in the four respective CAMs.
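A parallel search across four CAM partitions can be modeled with a thread pool. The partition contents, the key shape, and the assumption that at most one partition matches are all illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Four CAM partitions, e.g., each dedicated to a portion of the VLANs
# supported by an ingress interface (illustrative contents).
cam_partitions = [
    {("vlan", 10): "result-10"},
    {("vlan", 20): "result-20"},
    {("vlan", 30): "result-30"},
    {("vlan", 40): "result-40"},
]

def parallel_lookup(key):
    """Search all partitions concurrently and return the single match."""
    with ThreadPoolExecutor(max_workers=len(cam_partitions)) as pool:
        hits = list(pool.map(lambda part: part.get(key), cam_partitions))
    return next((h for h in hits if h is not None), None)

print(parallel_lookup(("vlan", 30)))  # result-30
```

Because the partitions are disjoint, the total lookup time is bounded by one partition search rather than four serial searches, which is the latency benefit the text describes.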
- the first key (or forwarding lookup key) includes a layer 2 flow identifier.
- the result of the first key lookup includes (i) an input connection identifier, (ii) an “aggregated” bit indicating that the egress interface associated with the ingress flow is aggregated, and (iii) the egress aggregate flow identifier.
- the second key (or aggregate lookup key) includes a port key type parameter that identifies the new aggregate lookup table as a hash lookup for aggregated interfaces.
- the result of the second key lookup includes the OCID and a destination egress line card identifier.
- the hash value for the second key is calculated from the source and destination MAC addresses of a given port-to-port flow.
- the first key includes a VPN identifier and a destination IP address.
- the result of the first key lookup includes the “aggregated” bit and the egress aggregate flow identifier.
- the second key includes an IP destination key type parameter, the egress aggregate flow identifier, a miscellaneous key parameter, which may be a traffic class identifier, and the hash value.
- the result of the second key lookup includes the OCID and a destination egress line card identifier.
- the hash value for the second key is calculated from the source and destination IP addresses of a given IP flow.
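The two key layouts above can be sketched as tuples, with a simple XOR fold standing in for the hash calculation. The fold itself, the type-tag strings, and the example addresses are assumptions, since the patent does not fix a particular hashing algorithm:

```python
def fold_hash(src, dst, bits=4):
    """XOR-fold source and destination addresses (as bytes) to a small hash."""
    acc = 0
    for b in src + dst:
        acc ^= b
    return acc & ((1 << bits) - 1)

# Layer 2 aggregate lookup key: hash over source/destination MAC addresses.
def build_l2_key(agg_flow_id, misc, src_mac, dst_mac):
    return ("L2_AGG", agg_flow_id, misc, fold_hash(src_mac, dst_mac))

# IP aggregate lookup key: traffic class as the miscellaneous parameter,
# hash over source/destination IP addresses.
def build_ip_key(agg_flow_id, traffic_class, src_ip, dst_ip):
    return ("IP_DEST", agg_flow_id, traffic_class, fold_hash(src_ip, dst_ip))

print(build_ip_key(100, 2, b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02"))
# ('IP_DEST', 100, 2, 3)
```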
- FIG. 5 is a block diagram of a switch 500 illustrating example components in an ingress line card 512 according to another embodiment of the present invention.
- FIG. 5 illustrates an embodiment of the invention that uses two successive lookups as applied to FIG. 3 .
- a CAM 535 of FIG. 5 may include only eight CAM entries 538 , 539 as compared to the sixteen CAM entries 338 in the CAM 335 of FIG. 3 .
- the switch 500 includes an ingress line card 512 , switch fabric 540 , and an egress line card 533 .
- the ingress line card 512 includes a packet processor 530 , CPU 534 , CAM 535 , logic 536 , and Result SRAM 537 .
- the CAM 535 includes four entries in a first CAM lookup table 538 and four entries in a second CAM lookup table 539 .
- the packet processor 530 receives a flow 509 , including multiple packets 511 a, 511 b, . . . , 511 n, through ingress port 514 a.
- the packet processor 530 then builds a first key 521 formatted to hit a CAM entry in the first CAM lookup table 538 .
- the first key 521 includes three key parameters as described above with reference to FIG. 4 .
- After the packet processor 530 builds the first key 521 , it launches a first lookup by sending the first key 521 to the CAM 535 .
- the CAM 535 searches the first lookup table 538 for a matching key (e.g., a first CAM entry for the first CAM lookup) and returns an address or first index 541 used to index the Result SRAM 537 .
- the information contained in an entry of the Result SRAM 537 located at the first index 541 is a first result 543 that includes an input connection identifier (ICID) (e.g., 200 ), an “aggregated” (e.g., 1 ) bit indicating that the packet processor 530 should launch a second CAM lookup, and the egress aggregate flow identifier (e.g., 100 ).
- the packet processor 530 then builds a second key 523 formatted to hit a CAM entry in the second lookup table 539 . To build the second key 523 , the packet processor 530 calculates a hash value (e.g., “11”) based on the source and destination MAC addresses of the flow 509 .
- the second key 523 includes four key parameters as described above with reference to FIG. 4 .
- After the packet processor 530 builds the second key 523 , it launches a second lookup by sending the second key 523 to the CAM 535 .
- the CAM 535 searches a second lookup table 539 for a matching key (e.g., a fourth CAM entry in the second CAM lookup table 539 ) and returns an address or second index 542 used to index the Result SRAM 537 .
- the information contained in an entry of the Result SRAM 537 located at the second index 542 is a second result 545 that includes, for multiple egress line cards, a destination egress line card (e.g., 1 ) and an output connection identifier (OCID) (e.g., 303 ).
- the packet processor 530 then forwards the flow 509 to the appropriate egress port member (e.g., port member 519 d ( 303 ) corresponding to hash value “11”) via the switch fabric 540 .
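The FIG. 5 walkthrough can be traced end to end with dictionaries standing in for the CAM and Result SRAM. The stored values (ICID 200, "aggregated" bit 1, egress aggregate flow identifier 100, destination line card 1, OCID 303, hash "11") follow the example values in the text; the key encodings themselves are assumptions:

```python
# First CAM lookup table: first key -> index into the Result SRAM.
cam_first = {"first-key": 0}
# Second CAM lookup table: (egress aggregate flow id, hash) -> index.
cam_second = {(100, 0b11): 1}
# Result SRAM: index 0 holds (ICID, "aggregated" bit, agg flow id);
# index 1 holds (destination egress line card, OCID).
result_sram = {0: (200, 1, 100), 1: (1, 303)}

def resolve(first_key, hash_value):
    icid, aggregated, agg_flow_id = result_sram[cam_first[first_key]]
    if aggregated:  # the "aggregated" bit triggers the second lookup
        return result_sram[cam_second[(agg_flow_id, hash_value)]]
    return None

print(resolve("first-key", 0b11))  # (1, 303)
```

The returned pair is what the packet processor needs to forward the flow through the switch fabric: which egress line card, and which output connection on it.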
- FIG. 6 is a block diagram of a switch 600 illustrating example components in an ingress line card 612 according to another embodiment of the present invention.
- the switch 600 includes an ingress line card 612 , switch fabric 640 , and egress line card 633 .
- the ingress line card 612 includes a packet processor 630 , CPU 634 , CAM 635 , logic 636 , and Result SRAM 637 .
- the CAM 635 includes one entry in a first CAM lookup table 638 and two entries in a second CAM lookup table 639 .
- the packet processor 630 receives a flow 609 , including multiple packets 611 a, 611 b, . . . , 611 n, through a single ingress port 614 .
- the packet processor 630 then builds a first key 621 formatted to hit a CAM entry in the first CAM lookup table 638 .
- the first key 621 includes three key parameters as described above in reference to FIG. 4 .
- After the packet processor 630 builds the first key 621 , it launches a first lookup by sending the first key 621 to the CAM 635 .
- the CAM 635 searches the first lookup table 638 for a matching key (e.g., a first CAM entry in the first CAM lookup table 638 ) and returns an address or first index 641 used to index the Result SRAM 637 .
- the information contained in an entry of the Result SRAM 637 located at the first index 641 is a first result 643 .
- the packet processor 630 then builds a second key 623 based on the first result 643 and formatted to hit a CAM entry for the second lookup 639 .
- the packet processor 630 calculates a hash value (e.g., “x1”) based on the source and destination MAC addresses of the flow 609 .
- the second key 623 includes four key parameters as described above in reference to FIG. 4 .
- After the packet processor 630 builds the second key 623 , it launches a second lookup by sending the second key 623 to the CAM 635 .
- the CAM 635 searches a second lookup table 639 for a matching key (e.g., a second CAM entry in the second CAM lookup table 639 ).
- the result 645 of the second lookup corresponds directly to a port ID because the index value returned by the CAM 635 self-identifies the port ID due to predetermined placement of data in the CAM 635 .
- the CAM 635 returns an egress port identifier 645 , so there is no need in this embodiment to pass the second index (i.e., port ID 645 ) through the Result SRAM 637 .
- An advantage of this embodiment is decreased latency because the Result SRAM 637 is indexed once instead of twice. Moreover, less Result SRAM 637 space is used because Result SRAM entries corresponding to the entries in the second CAM lookup table 639 are eliminated.
- the packet processor 630 then forwards the flow 609 to the appropriate egress port member (e.g., the port member identified by hash value “x1”) via switch fabric 640 .
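The index-self-identifies-the-port arrangement can be sketched by making the match index returned by the CAM double as the egress port identifier, so the Result SRAM is skipped for the second lookup. The specific placements below (port 303 for one hash value, 304 for another) are illustrative assumptions:

```python
# Second CAM lookup table arranged so that the index returned for a match
# IS the egress port identifier, eliminating the Result SRAM read.
cam_second = {(100, 0b01): 303, (100, 0b11): 304}  # (agg flow id, hash) -> port

def second_lookup(agg_flow_id, hash_value):
    # The returned "index" self-identifies the egress port, by construction.
    return cam_second[(agg_flow_id, hash_value)]

print(second_lookup(100, 0b01))  # 303
```

The design choice is predetermined placement: by writing each entry at an index equal to the port ID it resolves to, the memory read that would normally translate index to result becomes unnecessary.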
- FIG. 7 is a block diagram illustrating example components of a node 701 in a communications network 700 according to one embodiment.
- the node 701 includes an ingress interface 740 that receives a given ingress flow 709 , which may include multiple packets 711 a, 711 b, . . . , 711 n, on a first ingress link 713 a.
- the first ingress link 713 a may be a member of a link aggregation group 712 , which also includes a second ingress link 713 b.
- a first mapping unit 742 maps the given ingress flow 709 to an egress flow identifier 743 .
- a second mapping unit 744 maps the egress flow identifier 743 to an egress link member identifier 745 based on information available in the given ingress flow 709 .
- the egress link member identifier 745 identifies an egress link (e.g., a first egress link 723 a or a second egress link 723 b ) to which to forward the given ingress flow 709 .
- the egress links 723 a - b may be members of an aggregated group 722 associated with an egress interface 748 .
- a flow forwarding unit 746 then forwards the given ingress flow 709 to the egress link member corresponding to the egress link member identifier 745 (e.g., the second egress link member 723 b ).
- FIG. 8 is a block diagram illustrating example components of a node 801 in a communications network 800 according to another embodiment.
- the node 801 includes an ingress interface 840 that receives a given ingress flow 809 , which may include multiple packets 811 a, 811 b, . . . , 811 n, on a first ingress link 813 a.
- the first ingress link 813 a may be a member of a link aggregation group 812 , which also includes a second ingress link 813 b.
- the node 801 includes an identification unit 847 that identifies parameters associated with the given ingress flow 809 to include in a first key 861 and a second key 862 .
- After the identification unit 847 or a first mapping unit 842 builds the first key 861 , the first mapping unit 842 searches a first lookup table 851 for a match of the first key 861 .
- a linking unit 843 then links the search of the first lookup table 851 to a search of a second lookup table 852 .
- the linking unit 843 may receive an index value 863 from the first lookup table 851 and provide part of the second key 862 , such as an egress flow identifier 864 , to a second mapping unit 844 .
- the linking unit 843 may include Static Random Access Memory (SRAM) having an entry addressed by the index value 863 .
- the entry may include the egress flow identifier 864 . In this manner, the given ingress flow 809 is mapped to the egress flow identifier 864 .
- the node 801 may also include a hashing unit 830 that hashes or calculates a hash value 866 based on a unique identifier 865 available in the given ingress flow 809 .
- the unique identifier 865 may include source and destination Media Access Control (MAC) addresses or source and destination Internet Protocol (IP) addresses.
- the second mapping unit 844 may build the second key 862 using the hash value 866 from the hashing unit 830 , the egress flow identifier 864 from the linking unit 843 , and other key parameters 867 identified by the identification unit 847 . The second mapping unit 844 may then search the second lookup table 852 for a match of the second key 862 .
- the second mapping unit 844 may provide an egress link member identifier 869 corresponding to the match to the traffic forwarding unit 846 .
- the second mapping unit 844 may map the egress flow identifier 864 to the egress link member identifier 869 .
- the egress link member identifier 869 identifies an egress link (e.g., a first egress link 823 a or a second egress link 823 b ) to which to forward the given ingress flow 809 .
- the egress links 823 a - b may be members of an aggregated group 822 associated with an egress interface 848 .
- the traffic forwarding unit 846 then forwards the given ingress flow 809 to the egress link member corresponding to the egress link member identifier 869 (e.g., the second egress link member 823 b ).
- FIG. 9 is an example flow diagram 900 performed by elements of a communications system according to an embodiment of the present invention.
- a network node maps an ingress interface to an egress flow identifier ( 902 ).
- the network node maps the egress flow identifier to a member of an aggregated group associated with an egress interface based on information available in a given ingress flow ( 904 ).
- the network node forwards a given ingress flow to a member of the aggregated group associated with the egress interface ( 906 ) and ends the above process ( 908 ).
- FIG. 10 is another example flow diagram performed by elements of the communications system.
- parameters of a first key are identified for a given ingress flow ( 1002 ).
- a first look-up table is searched to find a match for the first key ( 1004 ).
- a key parameter is identified based on an index value from the search of the first look-up table ( 1006 ).
- the second look-up table is searched to find a second key that includes the key parameter ( 1008 ).
- the given ingress flow is forwarded to a member of an aggregated group associated with a key in the second look-up table matching the second key ( 1010 ).
- the above process 1000 then ends 1012 .
- FIG. 11 is an example flow diagram 1100 performed by elements of a communications system.
- a first key is identified from a given ingress flow ( 1102 ).
- a CAM is searched to find a match for the first key and to obtain an index corresponding to the matching key ( 1104 ).
- An aggregated group identifier is obtained based on the index ( 1106 ).
- the source and destination IP addresses of the given ingress flow are hashed to obtain a hash key parameter ( 1108 ).
- the CAM is searched to find a match for a second key including the hash key parameter and the aggregated group identifier ( 1110 ).
- the given ingress flow is forwarded to a member of an aggregated group associated with a key in the CAM matching the second key ( 1112 ).
- the above process 1100 ends ( 1114 ).
- the forwarding logic (i.e., packet processor, CAM, and so forth) may be implemented in a line card, a motherboard (containing the forwarding and switching logic on the same printed circuit board (PCB)), or any other medium known to a person having ordinary skill in the art.
Abstract
Description
- This application is a continuation-in-part of U.S. application Ser. No. 11/447,692, filed Jun. 5, 2006, entitled “A Method and Apparatus for Performing Link Aggregation.” The entire teachings of the above application are incorporated herein by reference.
- Link aggregation allows for the grouping of multiple physical links or ports within a network node into a single aggregated interface. Aggregated interfaces can be used for increasing bandwidth of an interface and for providing port level redundancy within an interface. An ingress interface on a line card residing in the network node receives flows including multiple packets and forwards these flows to port members of an aggregated group associated with an egress interface. Line cards may utilize Content Addressable Memory (CAM) to increase the speed of link aggregation and minimize the effects of search latency.
- CAM, however, is expensive and, together with static RAM or other logic, consumes a significant amount of power and takes up board space. In addition, the number of entries in the CAM used for link aggregation greatly expands as the number of aggregated links increases. As a result, the CAM has a limited number of entries for performing other necessary and useful functions, including functions associated with a multi-service network node.
- A network node or corresponding method in accordance with an embodiment of the present invention reduces a number of CAM entries required to perform link aggregation. In one embodiment, a first mapping unit maps a given ingress flow to an egress flow identifier. A second mapping unit, in turn, maps the egress flow identifier to a member of an aggregated group associated with an egress interface based on information available in the given ingress flow. A flow forwarding unit forwards the given ingress flow to the member of the aggregated group associated with the egress interface.
- The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
- FIG. 1 is a network diagram of a portion of a communications network employing an embodiment of the present invention;
- FIG. 2 is a block diagram of an example switch used in a communications network;
- FIG. 3 is a block diagram of a switch that includes an ingress line card with example components;
- FIGS. 4-6 are block diagrams of a switch illustrating multiple operations of the example components in an ingress line card according to embodiments of the present invention;
- FIGS. 7-8 are block diagrams illustrating example components in a node of a communications network according to embodiments of the present invention;
- FIGS. 9-11 are example flow diagrams performed by elements of a communications system according to embodiments of the present invention.
- A description of example embodiments of the invention follows.
- Typically, when a link aggregated interface of a multi-service switch receives a given flow, it searches a lookup table to determine a port member of an egress interface to which to forward the flow. The lookup table is often programmed into Content Addressable Memory (CAM) because of its speed and flexibility in supporting multiple services. In the case of layer 2 switched VLAN traffic, hundreds or thousands of VLANs may be associated with the link aggregated interface. As a result, the CAM may need tens of thousands of entries. FIG. 1 illustrates a network with switches that use link aggregation. FIG. 2 illustrates an example switch with line cards and a switch matrix supporting Layer 2 switching that may use a CAM with a lookup table to support the switching. An example embodiment of the present invention illustrated in FIG. 3 reduces the number of CAM entries needed by executing a hashing algorithm on source and destination addresses of a packet in the given flow. Example embodiments of the present invention illustrated in FIGS. 4-11 further reduce the number of CAM entries needed by performing two different successive lookups. The tradeoff for reducing the number of CAM entries is increased latency because of the additional lookup. The latency, however, may be reduced by dividing the CAM into multiple cascaded CAMs and performing multiple lookups in parallel. For example, a CAM may be divided into four CAMs with each CAM dedicated to a portion of the VLANs supported by an ingress interface. FIGS. 1-11 are presented in detail below, in turn.
FIG. 1 is a network diagram of a portion of a communications network 100 employing an example embodiment of the present invention. This portion of the communications network 100 includes two switches 110 , 120 (Switch A and Switch B). Switch A 110 may include any number of ingress ports connected to physical links, and groups of the physical links may be logically bound together into link aggregation groups. In an example embodiment, each link aggregation group is treated as a single aggregated interface.
Switch B 120 may similarly have any number of egress ports connected to physical links, and Switch B 120 may logically bind together groups of the physical links into link aggregation groups. Switch A 110 may also have egress ports connected by physical links 125 to ingress ports of Switch B 120 . Both Switch A 110 and Switch B 120 may bind together the group of the physical links 125 connecting the two switches into a link aggregation group 130 .

A given flow, including any number of packets, may arrive at ingress port 114 a. The given flow may include multiple packets having the same source and destination addresses. Packets that are not members of the given flow may be interspersed among packets (e.g., packets 111 a-n) that are members of the given flow.

Switch A 110 may transmit or forward the same or a different flow, including packets, to Switch B 120 via at least one of the physical links 125 connecting Switch A's egress ports 119 a-d to Switch B's ingress ports 129 a-d. Switch B 120 , in turn, may transmit the same or a different flow, including packets, via one of the physical links 127 connected to Switch B's egress ports, such as the lowermost port 128 n, as illustrated. In this manner, flows are transmitted between nodes in the communications network 100 via a Label Switched Path (LSP) or other type of path, such as an Internet Protocol (IP) path.
- The aggregator (not shown) may operate according to two modes: incremental bandwidth mode and link protection mode. In incremental bandwidth mode, a user can increase or decrease the bandwidth of interfaces built on an aggregator by adding or deleting members to or from the link aggregation group. For example, a user may wish to upgrade from a 100 Megabit fast Ethernet link without subscribing to a costly Gigabit fast Ethernet link. In incremental bandwidth mode, the user can take two 100 Megabit fast Ethernet links and bond them together using link aggregation to get effectively 200 Megabits of bandwidth.
- In link protection mode, an “active” member is the only member within an aggregator that can transmit, while all members of the aggregator can receive. In link protection mode, the maximum bandwidth of an interface that is built on the aggregator is the bandwidth of a single member and not the sum of all the members as in incremental bandwidth mode. Thus, the other members are reserved for future use in case the “active” member goes down.
-
FIG. 2 is a block diagram of an example switch 200 (Switch A) used in a communications network. Switch A 210 may include multiple ingress line cards, such as ingress line cards A and B, and multiple egress line cards interconnected by a switch fabric 240 . A flow 209 , including any number of packets, arrives at link member 213 of a link aggregation group 212 associated with ingress line card A 232 . In other embodiments, the ingress interface may not be aggregated. The ingress line card A 232 determines the appropriate egress line card and egress line card port to which to forward the flow 209 and forwards the flow 209 via the switch fabric 240 to one of the egress line cards, which transmits the packets on a member of link aggregation group 222 .
FIG. 3 is a block diagram of a switch 300 that includes an ingress line card 332 , illustrating example components of the ingress line card 332 . The switch 300 also includes a switch fabric 340 and an egress line card 333 . The ingress line card 332 includes a packet processor 330 , logic 336 , a central processing unit (CPU) 334 , and Content Addressable Memory (CAM) 335 . The packet processor 330 connects to the logic 336 via a bidirectional line 345 . The logic 336 formats data from the result SRAM 337 in a way that the packet processor 330 understands, and the logic 336 formats data from the packet processor 330 in a way that the CAM 335 understands. The logic 336 connects to the CAM 335 , and the CAM 335 , in turn, connects to a result Static Random Access Memory (SRAM) 337 . The result SRAM 337 then connects back to the logic 336 . The logic 336 may be programmed into a Field Programmable Gate Array (FPGA). The packet processor 330 , via the logic 336 , may access information, such as keys 338 (shown as sets of numbers within brackets), that is organized and stored in the CAM 335 .

In one example embodiment, the CAM 335 may have a maximum of 512,000 entries that are 72 bits wide or 256,000 entries that are 144 bits wide. Each CAM entry may have a corresponding SRAM entry. Thus, in this embodiment, the result SRAM 337 may have at least 512,000 or 256,000 entries if the CAM has 512,000 or 256,000 entries, respectively. The result SRAM 337 may have 192-bit-wide entries to accommodate other information besides an egress aggregate flow identifier and a flag. The
ingress line card 332 includes ingress ports 314 a-d. The ingress line card 332 may bond together the ingress ports 314 a-d into an ingress link aggregation group 312 . The ingress line card 332 connects through the switch fabric 340 to the egress line card 333 having egress ports 319 a-d. The egress line card 333 may also bond together the egress ports 319 a-d into an egress link aggregation group 322 . In other embodiments, any number of the ingress ports 314 a-d and egress ports 319 a-d may not be logically bound together into link aggregation groups, such as the ingress and egress link aggregation groups 312 , 322 . A network operator may provision (or signal) the
Ingress Line Card 332 with configuration settings using an embodiment of the present invention. For example, the network operator may enter configuration information for a customer using VLAN ID 10 on a given fast Ethernet interface via an operator interface. In this manner, the network operator builds a circuit on the fast Ethernet interface of VLAN ID 10 . The CPU 334 may then program the CAM 335 , the result SRAM 337 , and the packet processor 330 via the logic 336 . For example, the CPU 334 may execute a lower layer of software that programs the appropriate CAM entries (i.e., CAM keys and corresponding SRAM results) via the logic 336 . The CPU 334 may also program the packet processor 330 with microcode instructions to analyze a given ingress flow and access information from the CAM 335 and result SRAM 337 in order to determine a link on which to forward a given ingress flow. The CPU 334 may further program the packet processor 330 with the encapsulation type of port 314 a of the ingress interface. In this example embodiment, the encapsulation type is layer 2 switched VLAN traffic. After the
CPU 334 programs the CAM 335 , result SRAM 337 , and packet processor 330 , the packet processor 330 (1) receives a flow 309 , including multiple packets, through a port (e.g., port 314 a) of the ingress line card 332 , (2) builds a key 325 based on the contents of the flow 309 , and (3) launches a CAM lookup with the key 325 . The packet processor 330 executes these functions because it can do so at a significantly greater packet rate than a CPU (e.g., more than 50 million packets per second). For a
layer 2 switched Virtual Local Area Network (VLAN) key type, the key 325 may include four key parameters: an ingress interface identifier 323 a, VLAN identifier 323 b, three-bit priority 323 c, and hash value 323 d ({L2FlowId, VLAN, Priority, Hash}). The key 325 may include different key parameters for other key types, such as Internet Protocol (IP), Ethernet, or other non-VLAN key types. When the packet processor 330 receives the flow 309 from the port 314 a of the ingress link aggregation group 312 , it populates the key's first entry 323 a with the layer 2 flow identifier identifying the ingress interface (e.g., "1000"). The packet processor 330 then looks at the flow's Ethernet header (not shown) to make sure the packet headers are correct and to identify the flow's VLAN tag or identifier that identifies the VLAN with which the flow 309 is associated. The packet processor 330 looks up the VLAN identifier to determine on which interface to send out the flow 309 and, optionally, swaps in a new VLAN identifier in place of the one the flow 309 had when the switch 300 received it. The
packet processor 330 also extracts the priority from a priority field, such as a three-bit priority field, in the VLAN header that is used to prioritize flows and populates the priority key parameter field 323 c with the priority (e.g., priority "0"). Finally, depending on the flow type, the packet processor 330 extracts the source and destination addresses from the flow's Ethernet headers and runs an algorithm on the source and destination addresses to calculate a hash value, such as a four-bit hash value. The packet processor 330 populates the hash key parameter field 323 d with this hash value. The hash value indicates the specific egress port member of the egress link aggregation group 322 to which to forward the flow 309 . Note that in a switch having an egress interface that is not link aggregated, the CAM keys may not include a hash field because there is no need for link aggregation.

Use of a hashing technique, such as the one described immediately above, decreases the size of a table (e.g., CAM and corresponding result SRAM) that must be indexed to determine the specific egress port member to which to forward a flow. For example, a 48-bit source MAC address would otherwise require a table having 2^48 entries. By using a hashing technique, that 48-bit source MAC address may be compressed to a smaller number, such as a 10-bit number. Hashing produces duplicates or "collisions": subsets of the 48-bit number variations compress to the same 10-bit number. Thus, a table may have multiple entries at a certain index that hash to the same value. As a result, hashing increases the efficiency of a lookup. In one embodiment, a hashing algorithm compresses a 48-bit source MAC address and a 48-bit destination MAC address to 4 bits. In other words, 96 bits of information are compressed to a 4-bit number. Thus, many combinations of source and destination MAC addresses can hash to the same 4-bit value.
With a larger number of flows, there is a better chance of achieving an equal distribution across all egress port members.
- The hashing is typically random enough to provide some variance so that traffic is distributed evenly across the links. The hashing may be a CRC or an exclusive-or (XOR) type of operation. Hashing may also be performed on a per flow basis or on a per packet basis. With per-flow hashing, whether there are two flows or a thousand flows, if the flows originate from the same source MAC address destined to the same destination MAC address, they all hash to the same link because the same hashing operation is performed on each of the flows. Variance in the source and destination MAC addresses of the flows causes distribution flows across multiple links. For example, if several flows originate from the same source MAC address, but they are destined to different destination MAC addresses, there is a greater probability that some flows will hash to a first link and some flows will hash to a second link.
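Per-flow hashing can be illustrated with a CRC over the concatenated source and destination MAC addresses. The text names CRC and XOR as candidate operations; the use of `zlib.crc32` and the two-link setup here are assumptions:

```python
import zlib

def flow_link(src_mac, dst_mac, num_links=2):
    """Per-flow CRC hash: every packet of a flow picks the same link."""
    return zlib.crc32(src_mac + dst_mac) % num_links

src = b"\x00\x00\x00\x00\x00\x01"
dst = b"\x00\x00\x00\x00\x00\x02"

# The same source/destination pair always hashes to the same link, so
# all packets of one flow stay in order on one link.
assert flow_link(src, dst) == flow_link(src, dst)

# Varying the destination address tends to spread flows across links.
links = {flow_link(src, bytes([0, 0, 0, 0, 0, d])) for d in range(16)}
print(sorted(links))
```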
- A hashing operation may also be performed on individual packets, which are distributed across different links based on the hashing, even if those packets are part of the same flow. However, individual packets may arrive at a receiving side out of order. As a result, the receiving side must put the individual packets back in order. This involves a significant amount of overhead, and some protocols cannot handle packets that arrive out of order. Therefore, a hashing algorithm may be run on a per-flow basis, and the flows are distributed accordingly to ensure that packets associated with the same flow arrive in order.
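- The per-flow distribution described above can be sketched in a few lines. The text names CRC and XOR as hashing options; the sketch below uses a simple XOR fold of the source and destination MAC addresses down to a four-bit value. The MAC values and the member list are invented for illustration.

```python
def hash_flow(src_mac: bytes, dst_mac: bytes, num_bits: int = 4) -> int:
    """XOR-type per-flow hash: fold 96 bits of MAC addresses to num_bits."""
    value = 0
    for byte in src_mac + dst_mac:   # XOR all 12 MAC bytes together
        value ^= byte
    # Fold the 8-bit XOR result down to num_bits.
    return (value ^ (value >> num_bits)) & ((1 << num_bits) - 1)

# Hypothetical four-member egress aggregation group; the two least
# significant bits of the hash select the member, as in the FIG. 3 example.
egress_port_members = ["319a", "319b", "319c", "319d"]

def select_member(src_mac: bytes, dst_mac: bytes) -> str:
    return egress_port_members[hash_flow(src_mac, dst_mac) & 0b11]
```

Because the hash depends only on the addresses, every packet of a given flow maps to the same member, which preserves packet order within the flow.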
- In the example embodiment illustrated in
FIG. 3 , the two least significant bits of the four-bit hash value identify the egress port members (319 a-d) of the egress link aggregation group 322. A hash value of "00" identifies egress port member 319 a, a hash value of "01" identifies egress port member 319 b, a hash value of "10" identifies egress port member 319 c, and a hash value of "11" identifies egress port member 319 d. In another embodiment, all four bits of the hash value may be used to support up to sixteen egress port members. Other numbers of hash value bits support other numbers of egress port members.
- After the packet processor populates the key 325 with key parameters 323 a-d, it launches a lookup with the key 325. Specifically, this lookup causes a search of
keys 338 in the CAM 335 for a matching key. If there is a match, the CAM 335 returns an address 341 that indexes another lookup table 338 in the result SRAM 337 that has the CAM result, which may include the egress port member identifier 343. The address 341 may be an index or a pointer to some area in the result SRAM 337. The egress port member identifier 343 may include, for multiple egress line cards 333, a destination egress line card identifier and an output connection identifier (OCID). The contents of the result SRAM 337 indexed by the CAM result are then provided to the packet processor 330. The packet processor 330 then forwards the flow 309 to the appropriate egress port member (e.g., port member 319 d identified by hash value "11") via switch fabric 340 based on the egress port member identifier 343.
- In summary, in the above-described example embodiment of
FIG. 3 , flows (e.g., flow 309) from one VLAN identified by the number "10" (323 b) may come into the packet processor 330 through an ingress port member (e.g., port member 314 a) of the ingress link aggregation group 312. The egress link aggregation group 322 of the egress line card 333 may include only two active port members, in which case the flows are distributed across those two link members.
- A given link can support multiple VLANs (i.e., "logical subinterfaces"). Thus, another VLAN (e.g., "11") on the same ingress interface (e.g., "1000" (323 a)) associated with the ingress
link aggregation group 312 may be forwarded to the same two port members. If, for example, the four VLANs identified by the numbers 10-13 come into the packet processor 330 and the egress link aggregation group 322 includes four active port members 319 a-d, sixteen CAM entries (338) are used for the VLANs identified by the numbers 10-13.
- Thus, the number of CAM entries is equal to the number of VLANs a user desires to support multiplied by the number of aggregated egress links or port members of the egress
link aggregation group 322. For large numbers of VLANs, many CAM entries are used. For example, the ingress line card 332 may support 4,000 VLANs, numbered 10 to 4009, and the egress line card 333 may include two aggregated egress links. In this case, 4,000×2=8,000 CAM entries are used to service all possible combinations.
-
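The single-lookup arrangement of FIG. 3 can be sketched with dicts and lists standing in for the CAM 335 and the result SRAM 337. All key fields and result values below are invented example values (interface 1000 and VLAN 10 echo the text; the OCIDs are made up).

```python
# Single-lookup scheme: the CAM maps a full key {interface, VLAN, priority,
# hash} to an address, and the address indexes the result SRAM holding the
# egress member identifier. Note one CAM entry per hash value per VLAN,
# which is why the entry count scales as VLANs multiplied by members.
cam_335 = {
    (1000, 10, 0, 0b00): 0,
    (1000, 10, 0, 0b01): 1,
    (1000, 10, 0, 0b10): 2,
    (1000, 10, 0, 0b11): 3,
}
result_sram_337 = [
    {"egress_card": 1, "ocid": 300},
    {"egress_card": 1, "ocid": 301},
    {"egress_card": 1, "ocid": 302},
    {"egress_card": 1, "ocid": 303},
]

def single_lookup(interface, vlan, priority, hash_value):
    address = cam_335[(interface, vlan, priority, hash_value)]  # CAM match -> address
    return result_sram_337[address]                             # address indexes result SRAM
```

-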
FIG. 4 is a block diagram of a switch 400 illustrating example components in an ingress line card 412 according to an embodiment of the present invention. In particular, FIG. 4 illustrates a new manner in which to set up the CAM entries to provide packet or flow distribution across outgoing links, i.e., determine the outgoing links to which to direct flows, as a function of the incoming flow. Like the switch 310 in FIG. 3 , the switch 400 includes an ingress line card 412, switch fabric 440, and an egress line card 433. The ingress line card 412 includes a packet processor 430, CPU 434, CAM 435, logic 436, and Result SRAM 437. The CAM may be a Ternary CAM (TCAM) that has three possible lookups or choices: a binary 0, a binary 1, or "Don't Care" (i.e., either a binary 0 or a binary 1).
- The
packet processor 430 receives a flow 409, including multiple packets 411 a-n, through an ingress port 414. The packet processor 430 then builds a first key 421 formatted to hit a CAM entry and launches a first CAM lookup. The first key 421 includes three key parameters. The first key parameter 451 a is a layer 2 flow identifier, which identifies the interface from where the flow 409 originated. The second key parameter 451 b is a VLAN identifier which the packet processor 430 extracts from the header of the packets 411 a-n in the flow 409. The third key parameter 451 c is a priority which the packet processor 430 also extracts from the header of the packets 411 a-n in the flow 409. The first key 421, however, does not include a hash key parameter. Thus, the packet processor 430 does not extract source and destination addresses, such as MAC or IP addresses, from the flow 409 or calculate a hash value when it builds the first key 421.
- After the
packet processor 430 builds the first key 421, it launches a first lookup by sending the first key 421 to the CAM 435. The CAM 435 searches a first lookup table 438 for a matching key and returns an address or first index 441 used to index the Result SRAM 437. The information contained in an entry of the Result SRAM located at the first index 441 may be a first result 443 that includes an "aggregated" bit or flag and the egress aggregate flow identifier. The "aggregated" bit indicates to the packet processor 430 that it should launch a second CAM lookup (also referred to as the "aggregated lookup") because the egress interface associated with the VLAN ingress flow is aggregated. The egress aggregate flow identifier, for example, may be an 18-bit number. The packet processor 430 then builds a second key 423 formatted to hit another CAM entry.
- The
second key 423 includes four key parameters. The first key parameter is the flow type key parameter 453 a. The flow type key parameter 453 a identifies what type of flow is being sent out on an aggregated interface, such as the egress line card 433. When the packet processor 430 builds the second key 423, it knows the flow type of the flow 409 from the first lookup. In one embodiment, the flow type key parameter 453 a is used to distinguish between different forwarded flows that are traversing the same egress aggregated interface. For example, if layer 2 traffic and IP traffic are both traversing the same Resource Reservation Protocol (RSVP) Label-Switched Path (LSP), then the flow type key parameter 453 a is used to distinguish the layer 2 flow from the IP flow. The ingress line card 412 and the egress line card 433 may receive and send, respectively, multiple flows of different types. For example, the flows may include IP flows and layer 2 switched VLAN flows.
- The second key parameter is the egress
aggregate flow identifier 453 b. This parameter is a globally unique node- or router-wide flow identifier that is allocated and associated with every egress logical flow that is built on an aggregated interface. The second lookup identifies the traffic characteristics of that flow. In an example implementation, different flows can be assigned different types of traffic parameters. One flow may be a higher priority flow than another flow. In preferred embodiments, the flows do not interfere with one another. The different types of flows may be identified through this aggregate flow identifier, and each may be given a certain type of treatment.
- The third key parameter is a miscellaneous
key parameter 453 c. This key parameter may provide additional information that is specific to the flow type 453 a and the egress aggregate flow identifier 453 b. The miscellaneous key parameter 453 c is used to make a more qualified decision as to which Output Connection Identifier (OCID) to choose. For example, if an ingress LSP is built on an aggregated IP interface and a Virtual Private LAN Service (VPLS) Destination MAC (DMAC) forwarding decision is made that returns the egress aggregate flow identifier of that LSP, then the second CAM lookup (i.e., the aggregate CAM lookup) may also need to take into account the VPLS instance identifier in order to obtain the final OCID to be used for that LSP. In this embodiment, however, the miscellaneous key parameter 453 c is not used.
- The last key parameter is the
hash value 453 d, which is calculated based on the source and destination MAC addresses of the flow 409.
- After the
packet processor 430 builds the second key 423, it launches a second CAM lookup by providing the second key 423 to the CAM 435. The CAM 435 searches a second lookup table 439 for a key matching the second key 423 and provides an address or first index 441 used to index the Result SRAM 437. The contents of the Result SRAM 437 at the first index 441 is a first result 443 which may include an egress port member identifier. The egress port member identifier may include, for multiple egress line cards (433), a destination egress line card identifier identifying the egress line card to which to forward the flow 409, and an OCID identifying the port member of the egress line card to which to forward the flow 409. The packet processor 430 then forwards the flow 409 to the appropriate egress port member (e.g., a port member identified by hash value "x1") via the switch fabric 440.
- In other words, a first lookup operation involves mapping an incoming flow that arrives on an incoming interface to an outgoing aggregated flow identifier. In other embodiments, the first lookup operation may involve mapping an {interface, flow} tuple to the outgoing aggregated flow identifier. A second lookup operation involves mapping the outgoing aggregated flow identifier to an outgoing link member of the aggregated group. In this embodiment, the outgoing aggregate flow identifier links the first lookup operation to the second lookup operation.
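- The two lookup operations above can be sketched with dicts standing in for the two CAM lookup tables and the result SRAM. All table contents are invented example values (the aggregate flow identifier 100 and OCIDs 300-303 echo the FIG. 5 walk-through; the layer 2 flow identifier, VLAN, and priority are assumptions).

```python
# First lookup: {layer 2 flow id, VLAN, priority} -> egress aggregate flow id.
# The key contains no hash value, so a single entry serves the whole VLAN.
first_table = {
    (1000, 10, 0): {"aggregated": True, "egress_aggregate_flow_id": 100},
}

# Second lookup: {flow type, aggregate flow id, hash} -> egress member.
# One entry per member of the aggregated group, keyed by hash value.
second_table = {
    ("l2", 100, 0b00): {"egress_card": 1, "ocid": 300},
    ("l2", 100, 0b01): {"egress_card": 1, "ocid": 301},
    ("l2", 100, 0b10): {"egress_card": 1, "ocid": 302},
    ("l2", 100, 0b11): {"egress_card": 1, "ocid": 303},
}

def double_lookup(flow_id, vlan, priority, hash_value):
    first = first_table[(flow_id, vlan, priority)]
    if not first["aggregated"]:
        return first                      # egress not aggregated: no second lookup
    agg_id = first["egress_aggregate_flow_id"]
    return second_table[("l2", agg_id, hash_value)]
```

The aggregate flow identifier is the link between the two tables: adding another VLAN adds one first-table entry, while the per-member second-table entries are shared.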
- As described above, example embodiments of the present invention re-organize the keys in the CAM so that the first lookup is independent of the hash value. It is the use of the hash value that requires a significant number of CAM entries because each VLAN, for example, needs CAM entries corresponding to every possible hash value. The possible hash values come up only in the second lookup. The number of CAM entries required by example embodiments is approximately equal to the number of ingress flows supported by an ingress interface plus the number of members of the aggregated group associated with the egress interface. For example, if 4,000 VLANs come in on the same ingress interface and they are destined to the same egress aggregated interface which has two members (e.g., two ports of an aggregated group), then the ingress interface needs 4,000+2=4,002 CAM entries. In comparison, for the single lookup embodiment (e.g.,
FIG. 3 ), the ingress interface needs 4,000×2=8,000 CAM entries.
- A switch may have multiple egress interfaces, each of which is aggregated and has eight members. In this case, the ingress interface of the double lookup embodiment (e.g.,
FIG. 4 ) needs 4,000+8=4,008 CAM entries, whereas the ingress interface of the single lookup embodiment uses 4,000×8=32,000 CAM entries. Thus, a primary advantage of the double lookup embodiment is scalability. That is, fewer CAM entries are used for a greater number of flows. However, the number of CAM entries is reduced at the expense of having to do one more lookup.
- A switch is typically designed to minimize latency. If there is too much latency, packets take longer to get through the switch, and packets need to be buffered for a greater length of time. Embodiments of the present invention increase latency by performing two successive lookups instead of increasing the number of CAM entries. Adding CAM to a switch may increase the latency by a given number of clock cycles, but performing a second lookup may increase latency, for example, by half the given number of clock cycles.
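- The entry-count comparison above can be checked with a short sketch; the helper names are illustrative, not from the patent.

```python
def single_lookup_entries(num_flows, num_members):
    # One CAM entry per (flow, hash value) combination.
    return num_flows * num_members

def double_lookup_entries(num_flows, num_members):
    # One first-lookup entry per flow plus one second-lookup entry per member.
    return num_flows + num_members

assert single_lookup_entries(4000, 2) == 8000   # single lookup, two members
assert double_lookup_entries(4000, 2) == 4002   # double lookup, two members
assert single_lookup_entries(4000, 8) == 32000  # eight-member group
assert double_lookup_entries(4000, 8) == 4008
```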
- In a multi-service switch, increasing latency is preferable to increasing the number of CAM entries because the CAM resources saved allow packets of a larger number of different services to be supported. For example, switching or routing devices employing embodiments of the present invention may support frame relay services, ATM services, Ethernet, GigaEthernet (GigE), IP, IPv6, MPLS, and VLAN services. These services, whether they involve switching or routing, each require CAM resources in order to perform the forwarding function.
- Link aggregation is often implemented in
pure layer 2 Ethernet switches. In this case, there is no concern about using up CAM resources. In fact, the switch may not use a CAM at all. For example, the switch may use a different data structure that is optimized strictly for layer 2 Ethernet. But a CAM is the most flexible hardware available today in a switch or router that supports multiple service types.
- Many CAMs only support serial lookups. For example, in a system with four CAMs, a lookup operation involves searching each of the four CAMs one at a time until there is a match. However, a CAM system may be designed to support parallel lookups in order to decrease the latency introduced by embodiments of the present invention. In that case, the first and second lookups each involve performing four parallel lookups in the four respective CAMs.
- Other example flow types include port to port and IP. For port to port flows, the first key (or forwarding lookup key) includes a
layer 2 flow identifier. The result of the first key lookup includes (i) an input connection identifier, (ii) an “aggregated” bit indicating that the egress interface associated with the ingress flow is aggregated, and (iii) the egress aggregate flow identifier. The second key (or aggregate lookup key) includes a port key type parameter that identifies the new aggregate lookup table as a hash lookup for aggregated interfaces. The result of the second key lookup includes the OCID and a destination egress line card identifier. The hash value for the second key is calculated from the source and destination MAC addresses of a given port to port flow. - For IP flows, the first key includes a VPN identifier and a destination IP address. The result of the first key lookup includes the “aggregated” bit and the egress aggregate flow identifier. The second key includes an IP destination key type parameter, the egress aggregate flow identifier, a miscellaneous key parameter, which may be a traffic class identifier, and the hash value. The result of the second key lookup includes the OCID and a destination egress line card identifier. The hash value for the second key is calculated from the source and destination IP addresses of a given IP flow.
-
FIG. 5 is a block diagram of a switch 500 illustrating example components in an ingress line card 512 according to another embodiment of the present invention. In particular, FIG. 5 illustrates an embodiment of the invention that uses two successive lookups as applied to the example of FIG. 3 . The CAM 535 of FIG. 5 may include only eight CAM entries, as compared with the sixteen CAM entries 338 in the CAM 335 of FIG. 3 . Like the switch 310 in FIG. 3 , the switch 500 includes an ingress line card 512, switch fabric 540, and an egress line card 533. The ingress line card 512 includes a packet processor 530, CPU 534, CAM 535, logic 536, and Result SRAM 537. The CAM 535 includes four entries in a first CAM lookup table 538 and four entries in a second CAM lookup table 539.
- The
packet processor 530 receives a flow 509, including multiple packets, through an ingress port 514 a. The packet processor 530 then builds a first key 521 formatted to hit a CAM entry in the first CAM lookup table 538. The first key 521 includes three key parameters as described above with reference to FIG. 4 . After the packet processor 530 builds the first key 521, it launches a first lookup by sending the first key 521 to the CAM 535. The CAM 535 searches the first lookup table 538 for a matching key (e.g., a first CAM entry for the first CAM lookup) and returns an address or first index 541 used to index the Result SRAM 537.
- The information contained in an entry of the
Result SRAM 537 located at the first index 541 is a first result 543 that includes an input connection identifier (ICID) (e.g., 200), an "aggregated" bit (e.g., 1) indicating that the packet processor 530 should launch a second CAM lookup, and the egress aggregate flow identifier (e.g., 100). The packet processor 530 then builds a second key 523 formatted to hit a CAM entry in the second lookup table 539. To build the second key 523, the packet processor 530 calculates a hash value (e.g., "11") based on the source and destination MAC addresses of the flow 509.
- The
second key 523 includes four key parameters as described above with reference to FIG. 4 . After the packet processor 530 builds the second key 523, it launches a second lookup by sending the second key 523 to the CAM 535. The CAM 535 searches the second lookup table 539 for a matching key (e.g., a fourth CAM entry in the second CAM lookup table 539) and returns an address or second index 542 used to index the Result SRAM 537. The information contained in an entry of the Result SRAM 537 located at the second index 542 is a second result 545 that includes, for multiple egress line cards, a destination egress line card identifier (e.g., 1) and an output connection identifier (OCID) (e.g., 303). The packet processor 530 then forwards the flow 509 to the appropriate egress port member (e.g., port member 519 d (303) corresponding to hash value "11") via the switch fabric 540.
-
FIG. 6 is a block diagram of a switch 600 illustrating example components in an ingress line card 612 according to another embodiment of the present invention. Like the switch 400 in FIG. 4 , the switch 600 includes an ingress line card 612, switch fabric 640, and an egress line card 633. The ingress line card 612 includes a packet processor 630, CPU 634, CAM 635, logic 636, and Result SRAM 637. The CAM 635 includes one entry in a first CAM lookup table 638 and two entries in a second CAM lookup table 639.
- The
packet processor 630 receives a flow 609, including multiple packets, through a single ingress port 614. The packet processor 630 then builds a first key 621 formatted to hit a CAM entry in the first CAM lookup table 638. The first key 621 includes three key parameters as described above in reference to FIG. 4 . After the packet processor 630 builds the first key 621, it launches a first lookup by sending the first key 621 to the CAM 635. The CAM 635 searches the first lookup table 638 for a matching key (e.g., a first CAM entry in the first CAM lookup table 638) and returns an address or first index 641 used to index the Result SRAM 637. The information contained in an entry of the Result SRAM 637 located at the first index 641 is a first result 643. The packet processor 630 then builds a second key 623 based on the first result 643 and formatted to hit a CAM entry for the second lookup table 639. To build the second key 623, the packet processor 630 calculates a hash value (e.g., "x1") based on the source and destination MAC addresses of the flow 609.
- The
second key 623 includes four key parameters as described above in reference to FIG. 4 . After the packet processor 630 builds the second key 623, it launches a second lookup by sending the second key 623 to the CAM 635. The CAM 635 then searches the second lookup table 639 for a matching key (e.g., a second CAM entry in the second CAM lookup table 639). In this embodiment, the result 645 of the second lookup corresponds directly to a port ID because the index value returned by the CAM 635 self-identifies the port ID due to predetermined placement of data in the CAM 635. Thus, when the matching key is found, the CAM 635 returns an egress port identifier 645, so there is no need in this embodiment to pass the second index (i.e., port ID 645) through the Result SRAM 637. An advantage of this embodiment is decreased latency because the Result SRAM 637 is indexed once instead of twice. Moreover, less Result SRAM 637 space is used because Result SRAM entries corresponding to the entries in the second CAM lookup table 639 are eliminated. The packet processor 630 then forwards the flow 609 to the appropriate egress port member (e.g., the port member identified by hash value "x1") via switch fabric 640.
-
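This optimization can be sketched as follows: entries of the second CAM lookup table are placed so that the index of the matching entry itself identifies the egress port, removing the Result SRAM read for the second lookup. The key contents (flow type "l2", aggregate flow identifier 100) are invented example values.

```python
# Entry placement is chosen in advance so that index == egress port ID.
second_cam_table = [
    ("l2", 100, 0b00),   # index 0 doubles as egress port identifier 0
    ("l2", 100, 0b01),   # index 1 doubles as egress port identifier 1
]

def egress_port_for(key):
    # The matching entry's index self-identifies the port ID; no SRAM read.
    return second_cam_table.index(key)
```

-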
FIG. 7 is a block diagram illustrating example components of a node 701 in a communications network 700 according to one embodiment. The node 701 includes an ingress interface 740 that receives a given ingress flow 709, which may include multiple packets, via a first ingress link 713 a. The first ingress link 713 a may be a member of a link aggregation group 712, which also includes a second ingress link 713 b. A first mapping unit 742 maps the given ingress flow 709 to an egress flow identifier 743. A second mapping unit 744, in turn, maps the egress flow identifier 743 to an egress link member identifier 745 based on information available in the given ingress flow 709. The egress link member identifier 745 identifies an egress link (e.g., a first egress link 723 a or a second egress link 723 b) to which to forward the given ingress flow 709. The egress links 723 a-b may be members of an aggregated group 722 associated with an egress interface 748. A flow forwarding unit 746 then forwards the given ingress flow 709 to the egress link member corresponding to the egress link member identifier 745 (e.g., the second egress link member 723 b).
-
FIG. 8 is a block diagram illustrating example components of a node 801 in a communications network 800 according to another embodiment. The node 801 includes an ingress interface 840 that receives a given ingress flow 809, which may include multiple packets, via a first ingress link 813 a. The first ingress link 813 a may be a member of a link aggregation group 812, which also includes a second ingress link 813 b. The node 801 includes an identification unit 847 that identifies parameters associated with the given ingress flow 809 to include in a first key 861 and a second key 862.
- After the
identification unit 847 or a first mapping unit 842 builds the first key 861, the first mapping unit 842 searches a first lookup table 851 for a match of the first key 861. A linking unit 843 then links the search of the first lookup table 851 to a search of a second lookup table 852. For example, the linking unit 843 may receive an index value 863 from the first lookup table 851 and provide part of the second key 862, such as an egress flow identifier 864, to a second mapping unit 844. The linking unit 843 may include Static Random Access Memory (SRAM) having an entry addressed by the index value 863. The entry may include the egress flow identifier 864. In this manner, the given ingress flow 809 is mapped to the egress flow identifier 864.
- The
node 801 may also include a hashing unit 830 that hashes or calculates a hash value 866 based on a unique identifier 865 available in the given ingress flow 809. The unique identifier 865 may include source and destination Media Access Control (MAC) addresses or source and destination Internet Protocol (IP) addresses. The second mapping unit 844 may build the second key 862 using the result 866 of the hashing unit 830, the result 864 of the linking unit 843, and other key parameters 867 identified by the identification unit 847. The second mapping unit 844 may then search the second lookup table 852 for a match of the second key 862.
- When the
second mapping unit 844 finds a match, it may provide an egress link member identifier 869 corresponding to the match to the traffic forwarding unit 846. In this manner, the second mapping unit 844 may map the egress flow identifier 864 to the egress link member identifier 869. The egress link member identifier 869 identifies an egress link (e.g., a first egress link 823 a or a second egress link 823 b) to which to forward the given ingress flow 809. The egress links 823 a-b may be members of an aggregated group 822 associated with an egress interface 848. The traffic forwarding unit 846 then forwards the given ingress flow 809 to the egress link member corresponding to the egress link member identifier 869 (e.g., the second egress link member 823 b).
-
FIG. 9 is an example flow diagram 900 performed by elements of a communications system according to an embodiment of the present invention. After starting (901), a network node maps an ingress interface to an egress flow identifier (902). The network node then maps the egress flow identifier to a member of an aggregated group associated with an egress interface based on information available in a given ingress flow (904). Finally, the network node forwards a given ingress flow to a member of the aggregated group associated with the egress interface (906) and ends the above process (908). -
FIG. 10 is another example flow diagram 1000 performed by elements of the communications system. After starting (1001), parameters of a first key are identified for a given ingress flow (1002). A first look-up table is searched to find a match for the first key (1004). A key parameter is identified based on an index value from the search of the first look-up table (1006). Next, the second look-up table is searched to find a second key that includes the key parameter (1008). The given ingress flow is forwarded to a member of an aggregated group associated with a key in the second look-up table matching the second key (1010). The above process 1000 then ends (1012).
-
FIG. 11 is an example flow diagram 1100 performed by elements of a communications system. After starting (1101), a first key is identified from a given ingress flow (1102). A CAM is searched to find a match for the first key and to obtain an index corresponding to the matching key (1104). An aggregated group identifier is obtained based on the index (1106). The source and destination IP addresses of the given ingress flow are hashed to obtain a hash key parameter (1108). Next, the CAM is searched to find a match for a second key including the hash key parameter and the aggregated group identifier (1110). Finally, the given ingress flow is forwarded to a member of an aggregated group associated with a key in the CAM matching the second key (1112). The above process 1100 then ends (1114).
- While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
- The term “about” allows for any differences that are within the spirit and scope of the inventions described in the present specification.
- It should be understood that the forwarding logic (i.e., packet processor, CAM, and so forth) may be implemented in a line card, a motherboard (containing the forwarding and switching logic on the same printed circuit board (PCB)), or any other medium known to a person having ordinary skill in the art.
Claims (21)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/605,829 US8792497B2 (en) | 2006-06-05 | 2006-11-29 | Method and apparatus for performing link aggregation |
US14/339,863 US20150023351A1 (en) | 2006-06-05 | 2014-07-24 | Method and apparatus for performing link aggregation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US44769206A | 2006-06-05 | 2006-06-05 | |
US11/605,829 US8792497B2 (en) | 2006-06-05 | 2006-11-29 | Method and apparatus for performing link aggregation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US44769206A Continuation-In-Part | 2006-06-05 | 2006-06-05 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/339,863 Continuation US20150023351A1 (en) | 2006-06-05 | 2014-07-24 | Method and apparatus for performing link aggregation |
Publications (2)
Publication Number | Publication Date |
---|---|
US20070280258A1 true US20070280258A1 (en) | 2007-12-06 |
US8792497B2 US8792497B2 (en) | 2014-07-29 |
Family
ID=38790078
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/605,829 Active 2027-04-28 US8792497B2 (en) | 2006-06-05 | 2006-11-29 | Method and apparatus for performing link aggregation |
US14/339,863 Abandoned US20150023351A1 (en) | 2006-06-05 | 2014-07-24 | Method and apparatus for performing link aggregation |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/339,863 Abandoned US20150023351A1 (en) | 2006-06-05 | 2014-07-24 | Method and apparatus for performing link aggregation |
Country Status (1)
Country | Link |
---|---|
US (2) | US8792497B2 (en) |
Cited By (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070140277A1 (en) * | 2005-12-20 | 2007-06-21 | Via Technologies Inc. | Packet transmission apparatus and processing method for the same |
US20070189154A1 (en) * | 2006-02-10 | 2007-08-16 | Stratex Networks, Inc. | System and method for resilient wireless packet communications |
US20080089326A1 (en) * | 2006-10-17 | 2008-04-17 | Verizon Service Organization Inc. | Link aggregation |
US20080181196A1 (en) * | 2007-01-31 | 2008-07-31 | Alcatel Lucent | Link aggregation across multiple chassis |
US20080181103A1 (en) * | 2007-01-29 | 2008-07-31 | Fulcrum Microsystems Inc. | Traffic distribution techniques |
US20080291826A1 (en) * | 2007-05-24 | 2008-11-27 | Harris Stratex Networks Operating Corporation | Dynamic Load Balancing for Layer-2 Link Aggregation |
US20090041013A1 (en) * | 2007-08-07 | 2009-02-12 | Mitchell Nathan A | Dynamically Assigning A Policy For A Communication Session |
US20090041014A1 (en) * | 2007-08-08 | 2009-02-12 | Dixon Walter G | Obtaining Information From Tunnel Layers Of A Packet At A Midpoint |
US20090067324A1 (en) * | 2007-09-06 | 2009-03-12 | Harris Stratex Networks Operating Corporation | Resilient Data Communications with Physical Layer Link Aggregation, Extended Failure Detection and Load Balancing |
US7552275B1 (en) * | 2006-04-03 | 2009-06-23 | Extreme Networks, Inc. | Method of performing table lookup operation with table index that exceeds CAM key size |
US20090232152A1 (en) * | 2006-12-22 | 2009-09-17 | Huawei Technologies Co., Ltd. | Method and apparatus for aggregating ports |
US20090290537A1 (en) * | 2008-05-23 | 2009-11-26 | Nokia Siemens Networks | Providing station context and mobility in a wireless local area network having a split MAC architecture |
US20100246593A1 (en) * | 2009-03-25 | 2010-09-30 | International Business Machines Corporation | Steering Data Communications Packets For Transparent Bump-In-The-Wire Processing Among Multiple Data Processing Applications |
US7830873B1 (en) * | 2007-01-09 | 2010-11-09 | Marvell Israel (M.I.S.L.) Ltd. | Implementation of distributed traffic rate limiters |
US20100316055A1 (en) * | 2009-06-10 | 2010-12-16 | International Business Machines Corporation | Two-Layer Switch Apparatus Avoiding First Layer Inter-Switch Traffic In Steering Packets Through The Apparatus |
US7869432B1 (en) * | 2007-06-29 | 2011-01-11 | Force 10 Networks, Inc | Peer-to-peer link aggregation across a service provider network |
US20110205909A1 (en) * | 2008-10-23 | 2011-08-25 | Huawei Technologies Co., Ltd. | Method, node and system for obtaining link aggregation group information |
CN102271082A (en) * | 2010-06-03 | 2011-12-07 | 富士通株式会社 | Switching apparatus and method for setting up virtual lan |
US20120063311A1 (en) * | 2010-09-10 | 2012-03-15 | Muhammad Sakhi Sarwar | Method and system for providing contextualized flow tags |
US20120136999A1 (en) * | 2010-11-30 | 2012-05-31 | Amir Roitshtein | Load balancing hash computation for network switches |
US20120155395A1 (en) * | 2010-12-21 | 2012-06-21 | Cisco Technology, Inc. | Client modeling in a forwarding plane |
US20120236859A1 (en) * | 2011-03-15 | 2012-09-20 | Force10 Networks, Inc. | Method & apparatus for configuring a link aggregation group on a stacked switch |
CN102843285A (en) * | 2011-06-24 | 2012-12-26 | 中兴通讯股份有限公司 | Distributed link aggregation method and node for realizing same |
US20130044687A1 (en) * | 2011-08-15 | 2013-02-21 | Yong Liu | Long range wlan data unit format |
US20130064246A1 (en) * | 2011-09-12 | 2013-03-14 | Cisco Technology, Inc. | Packet Forwarding Using an Approximate Ingress Table and an Exact Egress Table |
US20130156037A1 (en) * | 2011-12-19 | 2013-06-20 | Alaxala Networks Corporation | Network relay apparatus |
US8509236B2 (en) * | 2007-09-26 | 2013-08-13 | Foundry Networks, Llc | Techniques for selecting paths and/or trunk ports for forwarding traffic flows |
US20130322427A1 (en) * | 2012-05-31 | 2013-12-05 | Bryan Stiekes | Core network architecture |
US20130336166A1 (en) * | 2012-06-15 | 2013-12-19 | Tushar K. Swain | Systems and methods for deriving unique mac address for a cluster |
US20140105215A1 (en) * | 2012-10-15 | 2014-04-17 | Hewlett-Packard Development Company, L.P. | Converting addresses for nodes of a data center network into compact identifiers for determining flow keys for received data packets |
US20140133351A1 (en) * | 2012-11-14 | 2014-05-15 | Hitachi Metals, Ltd. | Communication system and network relay apparatus |
US20140254453A1 (en) * | 2008-05-23 | 2014-09-11 | Nokia Siemens Networks Oy | Providing station context and mobility in a wireless local area network having a split mac architecture |
US20140294010A1 (en) * | 2013-03-29 | 2014-10-02 | International Business Machines Corporation | Asymmetrical link aggregation |
US20140314094A1 (en) * | 2013-04-23 | 2014-10-23 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of implementing conversation-sensitive collection for a link aggregation group |
US20140321449A1 (en) * | 2012-01-12 | 2014-10-30 | Huawei Device Co., Ltd. | Data Communications Method, Apparatus, and System |
US20140337910A1 (en) * | 2008-11-18 | 2014-11-13 | Avigilon Corporation | Method, system and apparatus for image capture, analysis and transmission |
US9237100B1 (en) | 2008-08-06 | 2016-01-12 | Marvell Israel (M.I.S.L.) Ltd. | Hash computation for network switches |
US9294477B1 (en) * | 2006-05-04 | 2016-03-22 | Sprint Communications Company L.P. | Media access control address security |
US9385957B1 (en) * | 2012-11-30 | 2016-07-05 | Netronome Systems, Inc. | Flow key lookup involving multiple simultaneous cam operations to identify hash values in a hash bucket |
US9438505B1 (en) * | 2012-03-29 | 2016-09-06 | Google Inc. | System and method for increasing capacity in router forwarding tables |
US9461880B2 (en) | 2013-04-23 | 2016-10-04 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for network and intra-portal link (IPL) sharing in distributed relay control protocol (DRCP) |
US9537771B2 (en) | 2013-04-04 | 2017-01-03 | Marvell Israel (M.I.S.L) Ltd. | Exact match hash lookup databases in network switch devices |
US9553798B2 (en) | 2013-04-23 | 2017-01-24 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of updating conversation allocation in link aggregation |
US20170026289A1 (en) * | 2015-07-23 | 2017-01-26 | Vss Monitoring, Inc. | Aia enhancements to support lag networks |
US9590897B1 (en) * | 2015-02-26 | 2017-03-07 | Qlogic Corporation | Methods and systems for network devices and associated network transmissions |
US9654418B2 (en) | 2013-11-05 | 2017-05-16 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of supporting operator commands in link aggregation group |
US9813290B2 (en) | 2014-08-29 | 2017-11-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for supporting distributed relay control protocol (DRCP) operations upon misconfiguration |
US9876719B2 (en) | 2015-03-06 | 2018-01-23 | Marvell World Trade Ltd. | Method and apparatus for load balancing in network switches |
US9875126B2 (en) | 2014-08-18 | 2018-01-23 | Red Hat Israel, Ltd. | Hash-based load balancing for bonded network interfaces |
US9906592B1 (en) | 2014-03-13 | 2018-02-27 | Marvell Israel (M.I.S.L.) Ltd. | Resilient hash computation for load balancing in network switches |
US9985928B2 (en) | 2013-09-27 | 2018-05-29 | Hewlett Packard Enterprise Development Lp | Dynamic link aggregation |
US10171368B1 (en) * | 2013-07-01 | 2019-01-01 | Juniper Networks, Inc. | Methods and apparatus for implementing multiple loopback links |
US10243857B1 (en) | 2016-09-09 | 2019-03-26 | Marvell Israel (M.I.S.L) Ltd. | Method and apparatus for multipath group updates |
WO2019153210A1 (en) * | 2018-02-08 | 2019-08-15 | Oppo广东移动通信有限公司 | Method and apparatus for uplink and downlink data transmission |
CN110581799A (en) * | 2019-08-29 | 2019-12-17 | 迈普通信技术股份有限公司 | Service flow forwarding method and device |
US10587516B1 (en) | 2014-07-15 | 2020-03-10 | Marvell Israel (M.I.S.L) Ltd. | Hash lookup table entry management in a network device |
US10791091B1 (en) * | 2018-02-13 | 2020-09-29 | Architecture Technology Corporation | High assurance unified network switch |
CN111935021A (en) * | 2020-09-27 | 2020-11-13 | 翱捷智能科技(上海)有限公司 | Method and system for quickly matching network data packets |
US10904150B1 (en) | 2016-02-02 | 2021-01-26 | Marvell Israel (M.I.S.L) Ltd. | Distributed dynamic load balancing in network systems |
US11864085B2 (en) * | 2019-05-02 | 2024-01-02 | Samsung Electronics Co., Ltd. | Method and apparatus for transmitting data to a network node in a wireless communication system |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
US8031606B2 (en) * | 2008-06-24 | 2011-10-04 | Intel Corporation | Packet switching |
US20100145963A1 (en) * | 2008-12-04 | 2010-06-10 | Morris Robert P | Methods, Systems, And Computer Program Products For Resolving A Network Identifier Based On A Geospatial Domain Space Harmonized With A Non-Geospatial Domain Space |
JP5513342B2 (en) * | 2010-02-26 | 2014-06-04 | アラクサラネットワークス株式会社 | Packet relay device |
US9407537B1 (en) * | 2010-07-23 | 2016-08-02 | Juniper Networks, Inc. | Data packet switching within a communications network including aggregated links |
US9171030B1 (en) | 2012-01-09 | 2015-10-27 | Marvell Israel (M.I.S.L.) Ltd. | Exact match lookup in network switch devices |
US9819637B2 (en) | 2013-02-27 | 2017-11-14 | Marvell World Trade Ltd. | Efficient longest prefix matching techniques for network devices |
US9152494B2 (en) * | 2013-03-15 | 2015-10-06 | Cavium, Inc. | Method and apparatus for data packet integrity checking in a processor |
JP6751272B2 (en) * | 2015-03-30 | 2020-09-02 | 日本電気株式会社 | Network system, node device, control device, communication control method and control method |
US11277357B2 (en) * | 2019-01-25 | 2022-03-15 | Dell Products L.P. | Multi-port queue group system |
Citations (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
US5222085A (en) * | 1987-10-15 | 1993-06-22 | Peter Newman | Self-routing switching element and fast packet switch |
US5600641A (en) * | 1994-07-07 | 1997-02-04 | International Business Machines Corporation | Voice circuit emulation system in a packet switching network |
US5617413A (en) * | 1993-08-18 | 1997-04-01 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Scalable wrap-around shuffle exchange network with deflection routing |
US5754791A (en) * | 1996-03-25 | 1998-05-19 | I-Cube, Inc. | Hierarchical address translation system for a network switch |
US5917819A (en) * | 1996-04-26 | 1999-06-29 | Cascade Communications Corp. | Remapping of ATM cells for multicast transmission |
US5940596A (en) * | 1996-03-25 | 1999-08-17 | I-Cube, Inc. | Clustered address caching system for a network switch |
US20010037396A1 (en) * | 2000-05-24 | 2001-11-01 | Mathieu Tallegas | Stackable lookup engines |
US20020012585A1 (en) * | 2000-06-09 | 2002-01-31 | Broadcom Corporation | Gigabit switch with fast filtering processor |
US6363077B1 (en) * | 1998-02-13 | 2002-03-26 | Broadcom Corporation | Load balancing in link aggregation and trunking |
US6385201B1 (en) * | 1997-04-30 | 2002-05-07 | Nec Corporation | Topology aggregation using parameter obtained by internodal negotiation |
US20020085578A1 (en) * | 2000-12-15 | 2002-07-04 | Dell Martin S. | Three-stage switch fabric with buffered crossbar devices |
US6535489B1 (en) * | 1999-05-21 | 2003-03-18 | Advanced Micro Devices, Inc. | Method and apparatus in a network switch for handling link failure and link recovery in a trunked data path |
US20030053474A1 (en) * | 2001-08-22 | 2003-03-20 | Tuck Russell R. | Virtual egress packet classification at ingress |
US20030147385A1 (en) * | 2002-01-28 | 2003-08-07 | Armando Montalvo | Enterprise switching device and method |
US6633567B1 (en) * | 2000-08-31 | 2003-10-14 | Mosaid Technologies, Inc. | Method and apparatus for searching a filtering database with one search operation |
US20030223421A1 (en) * | 2002-06-04 | 2003-12-04 | Scott Rich | Atomic lookup rule set transition |
US20040004964A1 (en) * | 2002-07-03 | 2004-01-08 | Intel Corporation | Method and apparatus to assemble data segments into full packets for efficient packet-based classification |
US6721800B1 (en) * | 2000-04-10 | 2004-04-13 | International Business Machines Corporation | System using weighted next hop option in routing table to include probability of routing a packet for providing equal cost multipath forwarding packets |
US6728261B1 (en) * | 2000-02-07 | 2004-04-27 | Axerra Networks, Ltd. | ATM over IP |
US6765866B1 (en) * | 2000-02-29 | 2004-07-20 | Mosaid Technologies, Inc. | Link aggregation |
US20040190512A1 (en) * | 2003-03-26 | 2004-09-30 | Schultz Robert J | Processing packet information using an array of processing elements |
US20040213275A1 (en) * | 2003-04-28 | 2004-10-28 | International Business Machines Corp. | Packet classification using modified range labels |
US20050083935A1 (en) * | 2003-10-20 | 2005-04-21 | Kounavis Michael E. | Method and apparatus for two-stage packet classification using most specific filter matching and transport level sharing |
US6922410B1 (en) * | 1998-05-21 | 2005-07-26 | 3Com Technologies | Organization of databases in network switches for packet-based data communications networks |
US6952401B1 (en) * | 1999-03-17 | 2005-10-04 | Broadcom Corporation | Method for load balancing in a network switch |
US20060039384A1 (en) * | 2004-08-17 | 2006-02-23 | Sitaram Dontu | System and method for preventing erroneous link aggregation due to component relocation |
US7016352B1 (en) * | 2001-03-23 | 2006-03-21 | Advanced Micro Devices, Inc. | Address modification within a switching device in a packet-switched network |
US20060221967A1 (en) * | 2005-03-31 | 2006-10-05 | Narayan Harsha L | Methods for performing packet classification |
US7289503B1 (en) * | 2002-07-10 | 2007-10-30 | Juniper Networks, Inc. | Systems and methods for efficient multicast handling |
US7304996B1 (en) * | 2004-03-30 | 2007-12-04 | Extreme Networks, Inc. | System and method for assembling a data packet |
US7403484B2 (en) * | 2003-10-03 | 2008-07-22 | Maurice A Goodfellow | Switching fabrics and control protocols for them |
US7539750B1 (en) * | 2004-03-30 | 2009-05-26 | Extreme Networks, Inc. | System and method for packet processor status monitoring |
US7561571B1 (en) * | 2004-02-13 | 2009-07-14 | Habanero Holdings, Inc. | Fabric address and sub-address resolution in fabric-backplane enterprise servers |
US7602712B2 (en) * | 2004-06-08 | 2009-10-13 | Sun Microsystems, Inc. | Switch method and apparatus with cut-through routing for use in a communications network |
US20090285223A1 (en) * | 2002-11-07 | 2009-11-19 | David Andrew Thomas | Method and System for Communicating Information Between a Switch and a Plurality of Servers in a Computer Network |
US7633955B1 (en) * | 2004-02-13 | 2009-12-15 | Habanero Holdings, Inc. | SCSI transport for fabric-backplane enterprise servers |
US7764709B2 (en) * | 2004-07-07 | 2010-07-27 | Tran Hieu T | Prioritization of network traffic |
US8085779B2 (en) * | 2004-03-30 | 2011-12-27 | Extreme Networks, Inc. | Systems for supporting packet processing operations |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
US7280752B2 (en) * | 2002-02-22 | 2007-10-09 | Intel Corporation | Network address routing using multiple routing identifiers |
- 2006
  - 2006-11-29 US US11/605,829 patent/US8792497B2/en active Active
- 2014
  - 2014-07-24 US US14/339,863 patent/US20150023351A1/en not_active Abandoned
Patent Citations (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
US5222085A (en) * | 1987-10-15 | 1993-06-22 | Peter Newman | Self-routing switching element and fast packet switch |
US5617413A (en) * | 1993-08-18 | 1997-04-01 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Scalable wrap-around shuffle exchange network with deflection routing |
US5600641A (en) * | 1994-07-07 | 1997-02-04 | International Business Machines Corporation | Voice circuit emulation system in a packet switching network |
US5754791A (en) * | 1996-03-25 | 1998-05-19 | I-Cube, Inc. | Hierarchical address translation system for a network switch |
US5940596A (en) * | 1996-03-25 | 1999-08-17 | I-Cube, Inc. | Clustered address caching system for a network switch |
US5917819A (en) * | 1996-04-26 | 1999-06-29 | Cascade Communications Corp. | Remapping of ATM cells for multicast transmission |
US6385201B1 (en) * | 1997-04-30 | 2002-05-07 | Nec Corporation | Topology aggregation using parameter obtained by internodal negotiation |
US6363077B1 (en) * | 1998-02-13 | 2002-03-26 | Broadcom Corporation | Load balancing in link aggregation and trunking |
US6922410B1 (en) * | 1998-05-21 | 2005-07-26 | 3Com Technologies | Organization of databases in network switches for packet-based data communications networks |
US20050232274A1 (en) * | 1999-03-17 | 2005-10-20 | Broadcom Corporation | Method for load balancing in a network switch |
US6952401B1 (en) * | 1999-03-17 | 2005-10-04 | Broadcom Corporation | Method for load balancing in a network switch |
US6535489B1 (en) * | 1999-05-21 | 2003-03-18 | Advanced Micro Devices, Inc. | Method and apparatus in a network switch for handling link failure and link recovery in a trunked data path |
US6728261B1 (en) * | 2000-02-07 | 2004-04-27 | Axerra Networks, Ltd. | ATM over IP |
US6765866B1 (en) * | 2000-02-29 | 2004-07-20 | Mosaid Technologies, Inc. | Link aggregation |
US6721800B1 (en) * | 2000-04-10 | 2004-04-13 | International Business Machines Corporation | System using weighted next hop option in routing table to include probability of routing a packet for providing equal cost multipath forwarding packets |
US20010037396A1 (en) * | 2000-05-24 | 2001-11-01 | Mathieu Tallegas | Stackable lookup engines |
US20020012585A1 (en) * | 2000-06-09 | 2002-01-31 | Broadcom Corporation | Gigabit switch with fast filtering processor |
US7050430B2 (en) * | 2000-06-09 | 2006-05-23 | Broadcom Corporation | Gigabit switch with fast filtering processor |
US6633567B1 (en) * | 2000-08-31 | 2003-10-14 | Mosaid Technologies, Inc. | Method and apparatus for searching a filtering database with one search operation |
US20020085578A1 (en) * | 2000-12-15 | 2002-07-04 | Dell Martin S. | Three-stage switch fabric with buffered crossbar devices |
US7016352B1 (en) * | 2001-03-23 | 2006-03-21 | Advanced Micro Devices, Inc. | Address modification within a switching device in a packet-switched network |
US6763394B2 (en) * | 2001-08-22 | 2004-07-13 | Pluris, Inc. | Virtual egress packet classification at ingress |
US20030053474A1 (en) * | 2001-08-22 | 2003-03-20 | Tuck Russell R. | Virtual egress packet classification at ingress |
US20030147385A1 (en) * | 2002-01-28 | 2003-08-07 | Armando Montalvo | Enterprise switching device and method |
US7327748B2 (en) * | 2002-01-28 | 2008-02-05 | Alcatel Lucent | Enterprise switching device and method |
US20030223421A1 (en) * | 2002-06-04 | 2003-12-04 | Scott Rich | Atomic lookup rule set transition |
US20040004964A1 (en) * | 2002-07-03 | 2004-01-08 | Intel Corporation | Method and apparatus to assemble data segments into full packets for efficient packet-based classification |
US7289503B1 (en) * | 2002-07-10 | 2007-10-30 | Juniper Networks, Inc. | Systems and methods for efficient multicast handling |
US20090285223A1 (en) * | 2002-11-07 | 2009-11-19 | David Andrew Thomas | Method and System for Communicating Information Between a Switch and a Plurality of Servers in a Computer Network |
US20040190512A1 (en) * | 2003-03-26 | 2004-09-30 | Schultz Robert J | Processing packet information using an array of processing elements |
US20040213275A1 (en) * | 2003-04-28 | 2004-10-28 | International Business Machines Corp. | Packet classification using modified range labels |
US7403484B2 (en) * | 2003-10-03 | 2008-07-22 | Maurice A Goodfellow | Switching fabrics and control protocols for them |
US20050083935A1 (en) * | 2003-10-20 | 2005-04-21 | Kounavis Michael E. | Method and apparatus for two-stage packet classification using most specific filter matching and transport level sharing |
US7561571B1 (en) * | 2004-02-13 | 2009-07-14 | Habanero Holdings, Inc. | Fabric address and sub-address resolution in fabric-backplane enterprise servers |
US7633955B1 (en) * | 2004-02-13 | 2009-12-15 | Habanero Holdings, Inc. | SCSI transport for fabric-backplane enterprise servers |
US7304996B1 (en) * | 2004-03-30 | 2007-12-04 | Extreme Networks, Inc. | System and method for assembling a data packet |
US20080049774A1 (en) * | 2004-03-30 | 2008-02-28 | Swenson Erik R | System and method for assembling a data packet |
US7539750B1 (en) * | 2004-03-30 | 2009-05-26 | Extreme Networks, Inc. | System and method for packet processor status monitoring |
US8085779B2 (en) * | 2004-03-30 | 2011-12-27 | Extreme Networks, Inc. | Systems for supporting packet processing operations |
US7602712B2 (en) * | 2004-06-08 | 2009-10-13 | Sun Microsystems, Inc. | Switch method and apparatus with cut-through routing for use in a communications network |
US7764709B2 (en) * | 2004-07-07 | 2010-07-27 | Tran Hieu T | Prioritization of network traffic |
US20060039384A1 (en) * | 2004-08-17 | 2006-02-23 | Sitaram Dontu | System and method for preventing erroneous link aggregation due to component relocation |
US20060221967A1 (en) * | 2005-03-31 | 2006-10-05 | Narayan Harsha L | Methods for performing packet classification |
Cited By (143)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
US20070140277A1 (en) * | 2005-12-20 | 2007-06-21 | Via Technologies Inc. | Packet transmission apparatus and processing method for the same |
US10498584B2 (en) | 2006-02-10 | 2019-12-03 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US11570036B2 (en) | 2006-02-10 | 2023-01-31 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US20070189154A1 (en) * | 2006-02-10 | 2007-08-16 | Stratex Networks, Inc. | System and method for resilient wireless packet communications |
US10091051B2 (en) | 2006-02-10 | 2018-10-02 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US11165630B2 (en) | 2006-02-10 | 2021-11-02 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US8693308B2 (en) | 2006-02-10 | 2014-04-08 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US8988981B2 (en) | 2006-02-10 | 2015-03-24 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US9712378B2 (en) | 2006-02-10 | 2017-07-18 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US11916722B2 (en) | 2006-02-10 | 2024-02-27 | Aviat U.S., Inc. | System and method for resilient wireless packet communications |
US7908431B2 (en) | 2006-04-03 | 2011-03-15 | Extreme Networks, Inc. | Method of performing table lookup operation with table index that exceeds cam key size |
US7552275B1 (en) * | 2006-04-03 | 2009-06-23 | Extreme Networks, Inc. | Method of performing table lookup operation with table index that exceeds CAM key size |
US20090259811A1 (en) * | 2006-04-03 | 2009-10-15 | Ram Krishnan | Method of performing table lookup operation with table index that exceeds cam key size |
US9294477B1 (en) * | 2006-05-04 | 2016-03-22 | Sprint Communications Company L.P. | Media access control address security |
US20080089326A1 (en) * | 2006-10-17 | 2008-04-17 | Verizon Service Organization Inc. | Link aggregation |
US8565085B2 (en) * | 2006-10-17 | 2013-10-22 | Verizon Patent And Licensing Inc. | Link aggregation |
US20090232152A1 (en) * | 2006-12-22 | 2009-09-17 | Huawei Technologies Co., Ltd. | Method and apparatus for aggregating ports |
US7830873B1 (en) * | 2007-01-09 | 2010-11-09 | Marvell Israel (M.I.S.L.) Ltd. | Implementation of distributed traffic rate limiters |
US7821925B2 (en) * | 2007-01-29 | 2010-10-26 | Fulcrum Microsystems, Inc. | Traffic distribution techniques utilizing initial and scrambled hash values |
US20080181103A1 (en) * | 2007-01-29 | 2008-07-31 | Fulcrum Microsystems Inc. | Traffic distribution techniques |
US20080181196A1 (en) * | 2007-01-31 | 2008-07-31 | Alcatel Lucent | Link aggregation across multiple chassis |
US7756029B2 (en) * | 2007-05-24 | 2010-07-13 | Harris Stratex Networks Operating Corporation | Dynamic load balancing for layer-2 link aggregation |
US20080291826A1 (en) * | 2007-05-24 | 2008-11-27 | Harris Stratex Networks Operating Corporation | Dynamic Load Balancing for Layer-2 Link Aggregation |
US8264959B2 (en) | 2007-05-24 | 2012-09-11 | Harris Stratex Networks Operating Corporation | Dynamic load balancing for layer-2 link aggregation |
US20100246396A1 (en) * | 2007-05-24 | 2010-09-30 | Sergio Licardie | Dynamic Load Balancing for Layer-2 Link Aggregation |
US7869432B1 (en) * | 2007-06-29 | 2011-01-11 | Force 10 Networks, Inc | Peer-to-peer link aggregation across a service provider network |
US20090041013A1 (en) * | 2007-08-07 | 2009-02-12 | Mitchell Nathan A | Dynamically Assigning A Policy For A Communication Session |
US20090041014A1 (en) * | 2007-08-08 | 2009-02-12 | Dixon Walter G | Obtaining Information From Tunnel Layers Of A Packet At A Midpoint |
US11558285B2 (en) | 2007-09-06 | 2023-01-17 | Aviat U.S., Inc. | Resilient data communications with physical layer link aggregation, extended failure detection and load balancing |
US9294943B2 (en) | 2007-09-06 | 2016-03-22 | Harris Stratex Networks, Inc. | Resilient data communications with physical layer link aggregation, extended failure detection and load balancing |
US8774000B2 (en) | 2007-09-06 | 2014-07-08 | Harris Stratex Networks, Inc. | Resilient data communications with physical layer link aggregation, extended failure detection and load balancing |
US10164874B2 (en) | 2007-09-06 | 2018-12-25 | Aviat Networks, Inc. | Resilient data communications with physical layer link aggregation, extended failure detection and load balancing |
US8264953B2 (en) | 2007-09-06 | 2012-09-11 | Harris Stratex Networks, Inc. | Resilient data communications with physical layer link aggregation, extended failure detection and load balancing |
US9929900B2 (en) | 2007-09-06 | 2018-03-27 | Aviat Networks, Inc. | Resilient data communications with physical layer link aggregation, extended failure detection and load balancing |
US9521036B2 (en) | 2007-09-06 | 2016-12-13 | Harris Stratex Networks, Inc. | Resilient data communications with physical layer link aggregation, extended failure detection and load balancing |
US20090067324A1 (en) * | 2007-09-06 | 2009-03-12 | Harris Stratex Networks Operating Corporation | Resilient Data Communications with Physical Layer Link Aggregation, Extended Failure Detection and Load Balancing |
US8509236B2 (en) * | 2007-09-26 | 2013-08-13 | Foundry Networks, Llc | Techniques for selecting paths and/or trunk ports for forwarding traffic flows |
CN102037713A (en) * | 2008-05-23 | 2011-04-27 | 诺基亚西门子通信公司 | Providing station context and mobility in a wireless local area network having a split MAC architecture |
US8422513B2 (en) * | 2008-05-23 | 2013-04-16 | Nokia Siemens Networks Oy | Providing station context and mobility in a wireless local area network having a split MAC architecture |
CN105208143A (en) * | 2008-05-23 | 2015-12-30 | 诺基亚通信公司 | Providing Station Context And Mobility In A Wireless Local Area Network Having A Split Mac Architecture |
US20090290537A1 (en) * | 2008-05-23 | 2009-11-26 | Nokia Siemens Networks | Providing station context and mobility in a wireless local area network having a split MAC architecture |
US20140254453A1 (en) * | 2008-05-23 | 2014-09-11 | Nokia Siemens Networks Oy | Providing station context and mobility in a wireless local area network having a split mac architecture |
US9276768B2 (en) * | 2008-05-23 | 2016-03-01 | Nokia Solutions And Networks Oy | Providing station context and mobility in a wireless local area network having a split MAC architecture |
US10244047B1 (en) | 2008-08-06 | 2019-03-26 | Marvell Israel (M.I.S.L) Ltd. | Hash computation for network switches |
US9237100B1 (en) | 2008-08-06 | 2016-01-12 | Marvell Israel (M.I.S.L.) Ltd. | Hash computation for network switches |
US20110205909A1 (en) * | 2008-10-23 | 2011-08-25 | Huawei Technologies Co., Ltd. | Method, node and system for obtaining link aggregation group information |
US8559318B2 (en) * | 2008-10-23 | 2013-10-15 | Huawei Technologies Co., Ltd. | Method, node and system for obtaining link aggregation group information |
US9697616B2 (en) | 2008-11-18 | 2017-07-04 | Avigilon Corporation | Image data generation and analysis for network transmission |
US11521325B2 (en) | 2008-11-18 | 2022-12-06 | Motorola Solutions, Inc | Adaptive video streaming |
US10223796B2 (en) * | 2008-11-18 | 2019-03-05 | Avigilon Corporation | Adaptive video streaming |
US20160037194A1 (en) * | 2008-11-18 | 2016-02-04 | Avigilon Corporation | Adaptive video streaming |
US9697615B2 (en) | 2008-11-18 | 2017-07-04 | Avigilon Corporation | Movement indication |
US11107221B2 (en) | 2008-11-18 | 2021-08-31 | Avigilon Corporation | Adaptive video streaming |
US9412178B2 (en) * | 2008-11-18 | 2016-08-09 | Avigilon Corporation | Method, system and apparatus for image capture, analysis and transmission |
US20140337910A1 (en) * | 2008-11-18 | 2014-11-13 | Avigilon Corporation | Method, system and apparatus for image capture, analysis and transmission |
US20100246593A1 (en) * | 2009-03-25 | 2010-09-30 | International Business Machines Corporation | Steering Data Communications Packets For Transparent Bump-In-The-Wire Processing Among Multiple Data Processing Applications |
US7881324B2 (en) | 2009-03-25 | 2011-02-01 | International Business Machines Corporation | Steering data communications packets for transparent bump-in-the-wire processing among multiple data processing applications |
US8289977B2 (en) | 2009-06-10 | 2012-10-16 | International Business Machines Corporation | Two-layer switch apparatus avoiding first layer inter-switch traffic in steering packets through the apparatus |
US20100316055A1 (en) * | 2009-06-10 | 2010-12-16 | International Business Machines Corporation | Two-Layer Switch Apparatus Avoiding First Layer Inter-Switch Traffic In Steering Packets Through The Apparatus |
CN102271082A (en) * | 2010-06-03 | 2011-12-07 | 富士通株式会社 | Switching apparatus and method for setting up virtual lan |
EP2393249A1 (en) * | 2010-06-03 | 2011-12-07 | Fujitsu Limited | Switching apparatus and method for setting up virtual LAN |
US9077559B2 (en) * | 2010-06-03 | 2015-07-07 | Fujitsu Limited | Switching apparatus and method for setting up virtual LAN |
US20110299424A1 (en) * | 2010-06-03 | 2011-12-08 | Fujitsu Limited | Switching apparatus and method for setting up virtual lan |
US8774201B2 (en) * | 2010-09-10 | 2014-07-08 | Fujitsu Limited | Method and system for providing contextualized flow tags |
US20120063311A1 (en) * | 2010-09-10 | 2012-03-15 | Muhammad Sakhi Sarwar | Method and system for providing contextualized flow tags |
US9455966B2 (en) | 2010-11-30 | 2016-09-27 | Marvell Israel (M.I.S.L) Ltd. | Load balancing hash computation for network switches |
US9503435B2 (en) | 2010-11-30 | 2016-11-22 | Marvell Israel (M.I.S.L) Ltd. | Load balancing hash computation for network switches |
US9455967B2 (en) | 2010-11-30 | 2016-09-27 | Marvell Israel (M.I.S.L) Ltd. | Load balancing hash computation for network switches |
US20120136999A1 (en) * | 2010-11-30 | 2012-05-31 | Amir Roitshtein | Load balancing hash computation for network switches |
US8756424B2 (en) * | 2010-11-30 | 2014-06-17 | Marvell Israel (M.I.S.L) Ltd. | Load balancing hash computation for network switches |
EP2656559B1 (en) * | 2010-12-21 | 2019-02-20 | Cisco Technology, Inc. | Method and apparatus for applying client associated policies in a forwarding engine |
US20120155395A1 (en) * | 2010-12-21 | 2012-06-21 | Cisco Technology, Inc. | Client modeling in a forwarding plane |
US9319276B2 (en) * | 2010-12-21 | 2016-04-19 | Cisco Technology, Inc. | Client modeling in a forwarding plane |
US20120236859A1 (en) * | 2011-03-15 | 2012-09-20 | Force10 Networks, Inc. | Method & apparatus for configuring a link aggregation group on a stacked switch |
US8649379B2 (en) * | 2011-03-15 | 2014-02-11 | Force10 Networks, Inc. | Method and apparatus for configuring a link aggregation group on a stacked switch |
WO2012174957A1 (en) * | 2011-06-24 | 2012-12-27 | 中兴通讯股份有限公司 | Distributed link aggregation method and node therefor |
CN102843285A (en) * | 2011-06-24 | 2012-12-26 | 中兴通讯股份有限公司 | Distributed link aggregation method and node for realizing same |
US9131398B2 (en) * | 2011-08-15 | 2015-09-08 | Marvell World Trade Ltd. | Long range WLAN data unit format |
US9832057B2 (en) | 2011-08-15 | 2017-11-28 | Marvell World Trade Ltd. | Control frame format for WLAN |
US8982792B2 (en) | 2011-08-15 | 2015-03-17 | Marvell World Trade Ltd. | Long range WLAN data unit format |
US9083590B2 (en) | 2011-08-15 | 2015-07-14 | Marvell World Trade Ltd | Long range WLAN data unit format |
JP2014524709A (en) * | 2011-08-15 | 2014-09-22 | マーベル ワールド トレード リミテッド | Long distance wireless LAN data unit format |
US20130044687A1 (en) * | 2011-08-15 | 2013-02-21 | Yong Liu | Long range wlan data unit format |
US9131399B2 (en) | 2011-08-15 | 2015-09-08 | Marvell World Trade Ltd. | Control data unit format for WLAN |
US20130064246A1 (en) * | 2011-09-12 | 2013-03-14 | Cisco Technology, Inc. | Packet Forwarding Using an Approximate Ingress Table and an Exact Egress Table |
US9237096B2 (en) * | 2011-12-19 | 2016-01-12 | Alaxala Networks Corporation | Network relay apparatus |
US20130156037A1 (en) * | 2011-12-19 | 2013-06-20 | Alaxala Networks Corporation | Network relay apparatus |
US9906491B2 (en) * | 2012-01-12 | 2018-02-27 | Huawei Device (Dongguan) Co., Ltd. | Improving transmission efficiency of data frames by using shorter addresses in the frame header |
US20140321449A1 (en) * | 2012-01-12 | 2014-10-30 | Huawei Device Co., Ltd. | Data Communications Method, Apparatus, and System |
US9438505B1 (en) * | 2012-03-29 | 2016-09-06 | Google Inc. | System and method for increasing capacity in router forwarding tables |
US9106578B2 (en) * | 2012-05-31 | 2015-08-11 | Hewlett-Packard Development Company, L.P. | Core network architecture |
US20130322427A1 (en) * | 2012-05-31 | 2013-12-05 | Bryan Stiekes | Core network architecture |
US9450859B2 (en) * | 2012-06-15 | 2016-09-20 | Citrix Systems, Inc. | Systems and methods for deriving unique MAC address for a cluster |
US20130336166A1 (en) * | 2012-06-15 | 2013-12-19 | Tushar K. Swain | Systems and methods for deriving unique mac address for a cluster |
WO2013188775A1 (en) * | 2012-06-15 | 2013-12-19 | Citrix Systems, Inc. | Systems and methods for deriving unique mac address for a cluster |
US20140105215A1 (en) * | 2012-10-15 | 2014-04-17 | Hewlett-Packard Development Company, L.P. | Converting addresses for nodes of a data center network into compact identifiers for determining flow keys for received data packets |
US20140133351A1 (en) * | 2012-11-14 | 2014-05-15 | Hitachi Metals, Ltd. | Communication system and network relay apparatus |
US9225667B2 (en) * | 2012-11-14 | 2015-12-29 | Hitachi Metals, Ltd. | Communication system and network relay apparatus |
US9385957B1 (en) * | 2012-11-30 | 2016-07-05 | Netronome Systems, Inc. | Flow key lookup involving multiple simultaneous cam operations to identify hash values in a hash bucket |
US20140294010A1 (en) * | 2013-03-29 | 2014-10-02 | International Business Machines Corporation | Asymmetrical link aggregation |
US9654384B2 (en) * | 2013-03-29 | 2017-05-16 | International Business Machines Corporation | Asymmetrical link aggregation |
US20170012863A1 (en) * | 2013-03-29 | 2017-01-12 | International Business Machines Corporation | Asymmetrical link aggregation |
US9513750B2 (en) * | 2013-03-29 | 2016-12-06 | International Business Machines Corporation | Asymmetrical link aggregation |
US9537771B2 (en) | 2013-04-04 | 2017-01-03 | Marvell Israel (M.I.S.L) Ltd. | Exact match hash lookup databases in network switch devices |
US9871728B2 (en) | 2013-04-04 | 2018-01-16 | Marvell Israel (M.I.S.L) Ltd. | Exact match hash lookup databases in network switch devices |
US20170026299A1 (en) * | 2013-04-23 | 2017-01-26 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of implementing conversation-sensitive collection for a link aggregation group |
US11025492B2 (en) | 2013-04-23 | 2021-06-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Packet data unit (PDU) structure for supporting distributed relay control protocol (DRCP) |
US9461880B2 (en) | 2013-04-23 | 2016-10-04 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for network and intra-portal link (IPL) sharing in distributed relay control protocol (DRCP) |
US11811605B2 (en) | 2013-04-23 | 2023-11-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Packet data unit (PDU) structure for supporting distributed relay control protocol (DRCP) |
US9497074B2 (en) | 2013-04-23 | 2016-11-15 | Telefonaktiebolaget L M Ericsson (Publ) | Packet data unit (PDU) structure for supporting distributed relay control protocol (DRCP) |
US9497132B2 (en) * | 2013-04-23 | 2016-11-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system of implementing conversation-sensitive collection for a link aggregation group |
US20140314094A1 (en) * | 2013-04-23 | 2014-10-23 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of implementing conversation-sensitive collection for a link aggregation group |
US10097414B2 (en) | 2013-04-23 | 2018-10-09 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for synchronizing with neighbor in a distributed resilient network interconnect (DRNI) link aggregation group |
US10116498B2 (en) | 2013-04-23 | 2018-10-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for network and intra-portal link (IPL) sharing in distributed relay control protocol (DRCP) |
US9660861B2 (en) | 2013-04-23 | 2017-05-23 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for synchronizing with neighbor in a distributed resilient network interconnect (DRNI) link aggregation group |
US9503316B2 (en) | 2013-04-23 | 2016-11-22 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for updating distributed resilient network interconnect (DRNI) states |
US9654337B2 (en) | 2013-04-23 | 2017-05-16 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for supporting distributed relay control protocol (DRCP) operations upon communication failure |
US9509556B2 (en) | 2013-04-23 | 2016-11-29 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for synchronizing with neighbor in a distributed resilient network interconnect (DRNI) link aggregation group |
US10237134B2 (en) | 2013-04-23 | 2019-03-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for updating distributed resilient network interconnect (DRNI) states |
US9553798B2 (en) | 2013-04-23 | 2017-01-24 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of updating conversation allocation in link aggregation |
US11038804B2 (en) * | 2013-04-23 | 2021-06-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system of implementing conversation-sensitive collection for a link aggregation group |
US10257030B2 (en) | 2013-04-23 | 2019-04-09 | Telefonaktiebolaget L M Ericsson | Packet data unit (PDU) structure for supporting distributed relay control protocol (DRCP) |
US10270686B2 (en) | 2013-04-23 | 2019-04-23 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of updating conversation allocation in link aggregation |
US10171368B1 (en) * | 2013-07-01 | 2019-01-01 | Juniper Networks, Inc. | Methods and apparatus for implementing multiple loopback links |
US9985928B2 (en) | 2013-09-27 | 2018-05-29 | Hewlett Packard Enterprise Development Lp | Dynamic link aggregation |
US9654418B2 (en) | 2013-11-05 | 2017-05-16 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of supporting operator commands in link aggregation group |
US9906592B1 (en) | 2014-03-13 | 2018-02-27 | Marvell Israel (M.I.S.L.) Ltd. | Resilient hash computation for load balancing in network switches |
US10587516B1 (en) | 2014-07-15 | 2020-03-10 | Marvell Israel (M.I.S.L) Ltd. | Hash lookup table entry management in a network device |
US9875126B2 (en) | 2014-08-18 | 2018-01-23 | Red Hat Israel, Ltd. | Hash-based load balancing for bonded network interfaces |
US9813290B2 (en) | 2014-08-29 | 2017-11-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for supporting distributed relay control protocol (DRCP) operations upon misconfiguration |
US9590897B1 (en) * | 2015-02-26 | 2017-03-07 | Qlogic Corporation | Methods and systems for network devices and associated network transmissions |
US9876719B2 (en) | 2015-03-06 | 2018-01-23 | Marvell World Trade Ltd. | Method and apparatus for load balancing in network switches |
US10284471B2 (en) * | 2015-07-23 | 2019-05-07 | Netscout Systems, Inc. | AIA enhancements to support lag networks |
US20170026289A1 (en) * | 2015-07-23 | 2017-01-26 | Vss Monitoring, Inc. | Aia enhancements to support lag networks |
US10904150B1 (en) | 2016-02-02 | 2021-01-26 | Marvell Israel (M.I.S.L) Ltd. | Distributed dynamic load balancing in network systems |
US10243857B1 (en) | 2016-09-09 | 2019-03-26 | Marvell Israel (M.I.S.L) Ltd. | Method and apparatus for multipath group updates |
WO2019153210A1 (en) * | 2018-02-08 | 2019-08-15 | Oppo广东移动通信有限公司 | Method and apparatus for uplink and downlink data transmission |
US10791091B1 (en) * | 2018-02-13 | 2020-09-29 | Architecture Technology Corporation | High assurance unified network switch |
US11792160B1 (en) | 2018-02-13 | 2023-10-17 | Architecture Technology Corporation | High assurance unified network switch |
US11864085B2 (en) * | 2019-05-02 | 2024-01-02 | Samsung Electronics Co., Ltd. | Method and apparatus for transmitting data to a network node in a wireless communication system |
CN110581799A (en) * | 2019-08-29 | 2019-12-17 | 迈普通信技术股份有限公司 | Service flow forwarding method and device |
CN111935021B (en) * | 2020-09-27 | 2020-12-25 | 翱捷智能科技(上海)有限公司 | Method and system for quickly matching network data packets |
CN111935021A (en) * | 2020-09-27 | 2020-11-13 | 翱捷智能科技(上海)有限公司 | Method and system for quickly matching network data packets |
Also Published As
Publication number | Publication date
---|---
US20150023351A1 (en) | 2015-01-22
US8792497B2 (en) | 2014-07-29
Similar Documents
Publication | Title
---|---
US8792497B2 (en) | Method and apparatus for performing link aggregation
CN111512601B (en) | Segmented routing network processing of packets
US6553029B1 (en) | Link aggregation in ethernet frame switches
US7079537B1 (en) | Layer 3 switching logic architecture in an integrated network switch
US8780911B2 (en) | Link aggregation based on port and protocol combination
US7190695B2 (en) | Flexible application of mapping algorithms within a packet distributor
US7352760B2 (en) | Link aggregation
US6977932B1 (en) | System and method for network tunneling utilizing micro-flow state information
US7447204B2 (en) | Method and device for the classification and redirection of data packets in a heterogeneous network
US8005084B2 (en) | Mirroring in a network device
US6674769B1 (en) | Simultaneous searching of layer 3 policy filter and policy cache in a network switch port
US6798788B1 (en) | Arrangement determining policies for layer 3 frame fragments in a network switch
US20110292939A1 (en) | Method & apparatus for forwarding table reduction
US7336660B2 (en) | Method and apparatus for processing packets based on information extracted from the packets and context indications such as but not limited to input interface characteristics
US8265072B2 (en) | Frame switching device
US7830892B2 (en) | VLAN translation in a network device
US10212069B2 (en) | Forwarding of multicast packets in a network
US10367734B2 (en) | Forwarding of packets in a network based on multiple compact forwarding identifiers represented in a single internet protocol version 6 (IPv6) address
US6343078B1 (en) | Compression of forwarding decisions in a network device
US7403526B1 (en) | Partitioning and filtering a search space of particular use for determining a longest prefix match thereon
US20030195916A1 (en) | Network thread scheduling
US20100142536A1 (en) | Unicast trunking in a network device
US6807176B1 (en) | Arrangement for switching data packets in a network switch based on subnet identifier
US7103035B1 (en) | Arrangement for searching network addresses in a network switch using multiple tables based on subnet identifier
US7564841B2 (en) | Apparatus and method for performing forwarding table searches using consecutive symbols tables
Legal Events
Code | Title
---|---
AS | Assignment |
Owner name: TELLABS SAN JOSE, INC., ILLINOIS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAJAGOPALAN, BALAJI;NUBANI, SAMER I.;PARK, CHARLES C.;AND OTHERS;SIGNING DATES FROM 20070125 TO 20070129;REEL/FRAME:018884/0192
|
AS | Assignment |
Owner name: TELLABS OPERATIONS, INC., ILLINOIS
Free format text: MERGER;ASSIGNOR:TELLABS SAN JOSE, INC.;REEL/FRAME:027844/0508
Effective date: 20111111
|
AS | Assignment |
Owner name: CERBERUS BUSINESS FINANCE, LLC, AS COLLATERAL AGEN
Free format text: SECURITY AGREEMENT;ASSIGNORS:TELLABS OPERATIONS, INC.;TELLABS RESTON, LLC (FORMERLY KNOWN AS TELLABS RESTON, INC.);WICHORUS, LLC (FORMERLY KNOWN AS WICHORUS, INC.);REEL/FRAME:031768/0155
Effective date: 20131203
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: TELECOM HOLDING PARENT LLC, CALIFORNIA
Free format text: ASSIGNMENT FOR SECURITY - - PATENTS;ASSIGNORS:CORIANT OPERATIONS, INC.;TELLABS RESTON, LLC (FORMERLY KNOWN AS TELLABS RESTON, INC.);WICHORUS, LLC (FORMERLY KNOWN AS WICHORUS, INC.);REEL/FRAME:034484/0740
Effective date: 20141126
|
AS | Assignment |
Owner name: TELECOM HOLDING PARENT LLC, CALIFORNIA
Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REMOVE APPLICATION NUMBER 10/075,623 PREVIOUSLY RECORDED AT REEL: 034484 FRAME: 0740. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT FOR SECURITY --- PATENTS;ASSIGNORS:CORIANT OPERATIONS, INC.;TELLABS RESTON, LLC (FORMERLY KNOWN AS TELLABS RESTON, INC.);WICHORUS, LLC (FORMERLY KNOWN AS WICHORUS, INC.);REEL/FRAME:042980/0834
Effective date: 20141126
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |