US7274705B2 - Method and apparatus for reducing clock speed and power consumption

Info

Publication number: US7274705B2
Authority: US (United States)
Prior art keywords: transmit, clock speed, core, packet, receive
Legal status: Expired - Lifetime
Application number: US09/858,505
Other versions: US20020041599A1
Inventors: Michael Chang, Michael A. Sokol
Current Assignee: Avago Technologies International Sales Pte. Ltd.
Original Assignee: Broadcom Corp.

Classifications

    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/324 Power saving characterised by the action undertaken by lowering clock frequency
    • G06F5/065 Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • H04J3/0697 Synchronisation in a packet node
    • G06F2205/064 Linked list, i.e. structure using pointers, e.g. allowing non-contiguous address segments in one logical buffer or dynamic buffer space allocation
    • H04L2012/5665 Interaction of ATM with other protocols
    • H04L2012/5681 Buffer or queue management
    • H04L2012/5682 Threshold; Watermark
    • H04L49/103 Packet switching elements using a shared central buffer or a shared memory
    • H04L49/205 Support for services: Quality of Service based
    • H04L49/254 Routing or path finding in a switch fabric: centralised controller, i.e. arbitration or scheduling
    • H04L49/351 Switches specially adapted for local area networks [LAN], e.g. Ethernet switches
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A system for reducing clock speed and power consumption in a network chip. The system has a core that transmits and receives signals at a first clock speed. A receive buffer is in communication with the core and configured to transmit the signals to the core at the first clock speed. A transmit buffer is in communication with the core and configured to receive signals from the core at the first clock speed. A sync is configured to receive signals in the receive buffer at a second clock speed and to transmit the signals from the transmit buffer at the second clock speed. The sync is in communication with the transmit buffer and the receive buffer.

Description

REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application Ser. No. 60/237,764 filed on Oct. 3, 2000 and No. 60/241,332 filed on Oct. 19, 2000. The contents of these provisional applications are hereby incorporated by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, ATM, ethernet, fast ethernet, and gigabit ethernet environments, generally known as LANs. The invention is also applicable to wide area networks, and virtually any computer network. In particular, the invention relates to a new switching architecture geared to power efficient and cost sensitive markets, and which can be implemented on a semiconductor substrate such as a silicon chip.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have significantly increased; faster computer processors and higher memory capabilities need networks with high bandwidth capabilities to enable high speed transfer of significant amounts of data. The well-known ethernet technology, which is based upon numerous IEEE ethernet standards, is one example of computer networking technology which has been able to be modified and improved to remain a viable computing technology. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and numerous IEEE publications relating to IEEE 802 standards. Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, "switches", which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks. Switches, as they relate to computer networking and to ethernet, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network. Basic ethernet wirespeed is up to 10 megabits per second, and Fast Ethernet is up to 100 megabits per second. Gigabit Ethernet is capable of transmitting data over a network at a rate of up to 1,000 megabits per second. As speed has increased, design constraints and design requirements have become more and more complex with respect to following appropriate design and protocol rules and providing a low cost, commercially viable solution.
Referring to the OSI 7-layer reference model discussed previously, the higher layers typically have more information. Various types of products are available for performing switching-related functions at various levels of the OSI model. Hubs or repeaters operate at layer one, and essentially copy and "broadcast" incoming data to a plurality of spokes of the hub. Layer two switching-related devices are typically referred to as multiport bridges, and are capable of bridging two separate networks. Bridges can build a table of forwarding rules based upon which MAC (media access controller) addresses exist on which ports of the bridge, and pass packets which are destined for an address which is located on an opposite side of the bridge. Bridges typically utilize what is known as the "spanning tree" algorithm to eliminate potential data loops; a data loop is a situation wherein a packet endlessly loops in a network looking for a particular address. The spanning tree algorithm defines a protocol for preventing data loops. Layer three switches, sometimes referred to as routers, can forward packets based upon the destination network address. Layer three switches are capable of learning addresses and maintaining tables thereof which correspond to port mappings. Processing speed for layer three switches can be improved by utilizing specialized high performance hardware, and offloading the host CPU so that instruction decisions do not delay packet forwarding.
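To make the layer two mechanism concrete: a bridge's data path reduces to learning which port a source MAC address arrived on, and looking up the output port for each destination MAC address. The C sketch below is illustrative only; the table size, structure names and linear scan are assumptions, not anything specified by this patent (real switches use hashed or content-addressable tables to keep this lookup at wirespeed).

```c
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 16384    /* illustrative capacity for learned addresses */
#define PORT_FLOOD 0xFF     /* unknown destination: flood to all ports     */

struct fwd_entry {
    uint8_t mac[6];         /* learned MAC address              */
    uint8_t port;           /* bridge port the address lives on */
    uint8_t valid;
};

static struct fwd_entry fwd_table[TABLE_SIZE];

/* Learning: remember which port a source MAC address arrived on. */
void bridge_learn(const uint8_t mac[6], uint8_t port)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (!fwd_table[i].valid || memcmp(fwd_table[i].mac, mac, 6) == 0) {
            memcpy(fwd_table[i].mac, mac, 6);
            fwd_table[i].port  = port;
            fwd_table[i].valid = 1;
            return;
        }
    }
}

/* Forwarding: look up the output port for a destination MAC address. */
uint8_t bridge_lookup(const uint8_t mac[6])
{
    for (int i = 0; i < TABLE_SIZE && fwd_table[i].valid; i++)
        if (memcmp(fwd_table[i].mac, mac, 6) == 0)
            return fwd_table[i].port;
    return PORT_FLOOD;      /* never seen: behave like a hub for this frame */
}
```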
SUMMARY OF THE INVENTION
The invention is directed to a method and apparatus for reducing clock speed and power consumption in a network chip.
In one embodiment, the invention is a system having a core that transmits and receives signals at a first clock speed. A receive buffer is in communication with the core and configured to transmit the signals to the core at the first clock speed. A transmit buffer is in communication with the core and configured to receive signals from the core at the first clock speed. A sync is configured to receive signals in the receive buffer at a second clock speed and to transmit the signals from the transmit buffer at the second clock speed. The sync is in communication with the transmit buffer and the receive buffer.
In another embodiment, the invention is a method for synching two clock speeds. The method includes the steps of receiving a signal in a receive buffer at a first clock speed using a sync, then transmitting the signal from the receive buffer to a core at a second clock speed; transmitting the signal from the core to a transmit buffer at the second clock speed; and transmitting the signal from the transmit buffer at the first clock speed using a sync.
Another embodiment of the invention is a system for syncing two clock speeds. The system has a signal receiving means for receiving a signal in a receive buffer at a first clock speed using a sync. A core transmitting means transmits the signal from the receive buffer to a core at a second clock speed. A buffer transmitting means transmits the signal from the core to a transmit buffer at the second clock speed, and a processor transmitting means transmits the signal from the transmit buffer at the first clock speed using the sync.
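The embodiments above can be restated as a small software analogy: a word arrives in the receive FIFO at the bus clock speed, is drained and processed at the slower core clock speed, and leaves through the transmit FIFO at the bus clock speed again. The sketch below is only that analogy, not the hardware; the type names and the trivial "processing" step are assumptions, and a real sync would be built from latches bridging the two clock domains rather than function calls.

```c
#include <stdio.h>

#define FIFO_DEPTH 8

typedef struct {
    int data[FIFO_DEPTH];
    int head, tail, count;
} fifo_t;

static int fifo_push(fifo_t *f, int v)
{
    if (f->count == FIFO_DEPTH) return -1;          /* full  */
    f->data[f->tail] = v;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return 0;
}

static int fifo_pop(fifo_t *f, int *v)
{
    if (f->count == 0) return -1;                   /* empty */
    *v = f->data[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return 0;
}

typedef struct {
    fifo_t rx;  /* ingress: written at the bus clock, drained at the core clock */
    fifo_t tx;  /* egress: filled at the core clock, drained at the bus clock   */
} client_t;

int main(void)
{
    client_t c = {0};
    int word;

    /* Bus-clock side: the sync holds the bus while a word lands in RX.  */
    fifo_push(&c.rx, 0x1234);

    /* Core-clock side: drain RX, "process" at the slower core speed,
     * and queue the result for egress.                                  */
    while (fifo_pop(&c.rx, &word) == 0)
        fifo_push(&c.tx, word + 1);     /* stand-in for the core's work  */

    /* Bus-clock side again: the sync drives TX back onto the bus.       */
    while (fifo_pop(&c.tx, &word) == 0)
        printf("returned to processor: 0x%x\n", word);

    return 0;
}
```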
BRIEF DESCRIPTION OF THE DRAWINGS
The objects and features of the invention will be more readily understood with reference to the following description and the attached drawings, wherein:
FIG. 1 is a general block diagram of elements of the present invention;
FIG. 2 illustrates the data flow on the CPS channel of a network switch according to the present invention;
FIG. 3A illustrates a linked list structure of Packet Buffer Memory;
FIG. 3B illustrates a linked list structure of Packet Buffer Memory with two data packets;
FIG. 3C illustrates a linked list structure of Packet Buffer Memory after the memory occupied by one data packet is freed;
FIG. 3D illustrates a linked list structure of Packet Buffer Memory after the memory occupied by another data packet is freed;
FIG. 4 is a block diagram of a processor having multiple clients;
FIG. 5 is a flow diagram of method steps in one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is an example of a block diagram of a switch 100 of the present invention. In this example, switch 100 has 12 ports, 102(1)-102(12), which can be fully integrated IEEE compliant ports. Each of these 12 ports 102(1)-102(12) can be 10BASE-T/100BASE-TX/FX ports each having a physical element (PHY), which can be compliant with IEEE standards. Each of the ports 102(1)-102(12), in one example of the invention, has a port speed that can be forced to a particular configuration or set so that auto-negotiation will determine the optimal speed for each port independently. Each PHY of each of the ports can be connected to a twisted-pair interface using TXOP/N and RXIP/N as transmit and receive protocols, or a fiber interface using FXOP/N and FXIP/N as transmit and receive protocols.
Each of the ports 102(1)-102(12) has a Media Access Controller (MAC) connected to each corresponding PHY. In one example of the invention, each MAC is a fully compliant IEEE 802.3 MAC. Each MAC can operate at 10 Mbps or 100 Mbps and supports both a full-duplex mode, which allows for data transmission and reception simultaneously, and a half duplex mode, which allows data to be either transmitted or received, but not both at the same time.
Flow control can be provided by each of the MACs. When flow control is implemented, the flow of incoming data packets is managed or controlled to reduce the chances of system resources being exhausted. Although the present embodiment can be a non-blocking, wire speed switch, the memory space available may limit data transmission speeds. For example, during periods of packet flooding (i.e. packet broadcast storms), the available memory can be exhausted rather quickly. In order to enhance the operability of the switch in these types of situations, the present invention can implement two different types of flow control. In full-duplex mode, the present invention can, for example, implement the IEEE 802.3x flow control. In half-duplex mode, the present invention can implement a collision backpressure scheme.
In one example of the present invention each port has a latency block connected to the MAC. Each of the latency blocks has transmit and receive FIFOs which provide an interface to main packet memory. In this example, if a packet does not successfully transmit from one port to another port within a preset time, the packet will be dropped from the transmit queue.
In addition to ports 102(1)-102(12), a gigabit interface 104 can be provided on switch 100. Gigabit interface 104 can support a Gigabit Media Independent Interface (GMII) and a Ten Bit Interface (TBI). The GMII can be fully compliant to IEEE 802.3ab. The GMII can pass data at a rate of 8 bits every 8 ns, resulting in a throughput of 2 Gbps including both transmit and receive data. In addition to the GMII, gigabit interface 104 can be configured to be a TBI, which is compatible with many industry standard fiber drivers. Since in some embodiments of the invention the MDIO/MDC interfaces (optical interfaces) are not supported, the gigabit PHY (physical layer) is set into the proper mode by the system designer.
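The quoted throughput is simple arithmetic: 8 bits every 8 ns is 1 Gbps in each direction, and counting transmit and receive together gives the 2 Gbps figure. The snippet below just restates that calculation.

```c
#include <stdio.h>

int main(void)
{
    const double bits_per_transfer = 8.0;  /* GMII data width         */
    const double period_ns         = 8.0;  /* one transfer every 8 ns */

    double gbps_one_way   = bits_per_transfer / period_ns;  /* 1.0 Gbps */
    double gbps_aggregate = 2.0 * gbps_one_way;             /* 2.0 Gbps */

    printf("%.1f Gbps per direction, %.1f Gbps counting TX and RX\n",
           gbps_one_way, gbps_aggregate);
    return 0;
}
```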
Gigabit interface 104, like ports 102(1)-102(12), has a PHY, a Gigabit Media Access Controller (GMAC) and a latency block. The GMAC can be a fully compliant IEEE 802.3z MAC operating at 1 Gbps full-duplex only and can connect to a fully compliant GMII or TBI interface through the PHY. In this example, GMAC 108 provides full-duplex flow control mechanisms and a low cost stacking solution for either twisted pair or TBI mode using in-band signaling for management. This low cost stacking solution allows for a ring structure to connect each switch utilizing only one gigabit port.
A CPU interface 106 is provided on switch 100. In one example of the present invention, CPU interface 106 is an asynchronous 8 or 16 bit I/O device interface. Through this interface a CPU can read internal registers, receive packets, transmit packets and allow for interrupts. CPU interface 106 also allows for a Spanning Tree Protocol to be implemented. In one example of the present invention, a chip select pin is available, allowing a single CPU to control two switches. In this example, an interrupt pin, driven low to its active state and requiring a pull-up resistor, allows multiple switches to be controlled by a single CPU.
A switching fabric 108 is also located on switch 100 in one example of the present invention. Switching fabric 108 can allow for full wire speed operation of all ports. A hybrid or virtual shared memory approach can also be implemented to minimize bandwidth and memory requirements. This architecture allows for efficient and low latency transfer of packets through the switch and also supports address learning and aging features, VLAN, port trunking and port mirroring.
Memory interfaces 110, 112 and 114 can be located on switch 100 and allow for the separation of data and control information. Packet buffer memory interface (PBM) 110 handles packet data storage while the transmit queue memory interface (TXM) 112 keeps a list of packets to be transmitted and address table/control memory interface (ATM) 114 handles the address table and header information. Each of these interfaces can use memory such as SSRAM that can be configured in various total amounts and chip sizes.
PBM 110 is located on switch 100 and can have an external packet buffer memory (not shown) that is used to store the packet during switching operations. In one example of the invention, packet buffer memory is made up of multiple 256 byte buffers. Therefore, one packet may span several buffers within memory. This structure allows for efficient memory usage and minimizes bandwidth overhead. The packet buffer memory can be configurable so that up to 4 Mbytes of memory per chip can be used for a total of 8 Mbytes per 24+2 ports. In this example, efficient memory usage is maintained by allocating 256 byte blocks, which allows storage for up to 32K packets. PBM 110 can be 64 bits wide and can use either a 64 bit wide memory or two 32 bit wide memories and can run at 100 MHz.
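The detailed description (FIGS. 3A-3D) maintains these 256 byte buffers as a linked list with a free_head and a free_tail: allocation takes segments from the head of the free list, and freeing a transmitted packet splices its segment chain back onto the tail. Below is a minimal sketch of that bookkeeping under assumed names; the real switch keeps this state in the SPM and external SSRAM rather than in C structures.

```c
#include <stddef.h>
#include <stdint.h>

#define SEG_SIZE 256
#define NUM_SEGS (4 * 1024 * 1024 / SEG_SIZE)   /* 4 Mbytes per chip */

struct segment {
    uint8_t data[SEG_SIZE];
    struct segment *next;   /* next segment of a packet, or next free segment */
};

static struct segment pool[NUM_SEGS];
static struct segment *free_head, *free_tail;

/* Chain every segment into one free list on reset. */
void seg_init(void)
{
    for (size_t i = 0; i + 1 < NUM_SEGS; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_SEGS - 1].next = NULL;
    free_head = &pool[0];
    free_tail = &pool[NUM_SEGS - 1];
}

/* Take one 256-byte segment from the head of the free list. */
struct segment *seg_alloc(void)
{
    struct segment *s = free_head;
    if (s) {
        free_head = s->next;
        if (!free_head) free_tail = NULL;
        s->next = NULL;
    }
    return s;
}

/* FREE command: splice a transmitted packet's chain onto the free tail. */
void seg_free_chain(struct segment *first, struct segment *last)
{
    last->next = NULL;
    if (free_tail) free_tail->next = first;
    else           free_head = first;
    free_tail = last;
}
```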
TXM 112 is located on switch 100 and can have an external transmit queue memory (not shown). TXM 112, in this example, maintains 4 priority queues per port and allows for 64K packets per chip and up to 128K packets per system. TXM 112 can run at a speed of up to 100 MHz.
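TXM's organization, four priority queues for each port, can be pictured as a two-dimensional array of queues, serviced highest priority first when a port's transmitter goes idle. The sketch below shows only that layout; the names, the queue depth and the service policy are assumptions, since the patent does not describe the scheduler.

```c
#include <stdint.h>

#define NUM_PORTS      13  /* e.g., 12 10/100 ports plus the gigabit port */
#define NUM_PRIORITIES 4   /* 4 priority queues per port                  */
#define QUEUE_DEPTH    64

struct tx_queue {
    uint32_t entries[QUEUE_DEPTH];  /* queued transmit descriptors */
    int count;
};

/* Four priority queues for every port. */
static struct tx_queue txm[NUM_PORTS][NUM_PRIORITIES];

/* When a port's transmitter goes idle, service its highest
 * non-empty priority queue first.                            */
int txm_next(int port, uint32_t *out)
{
    for (int prio = NUM_PRIORITIES - 1; prio >= 0; prio--) {
        struct tx_queue *q = &txm[port][prio];
        if (q->count > 0) {
            *out = q->entries[--q->count];  /* stack order, to keep it short */
            return 1;
        }
    }
    return 0;   /* nothing queued for this port */
}
```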
ATM 114 can be located on switch 100 and can have an external address table/control memory (not shown) used to store the address table and header information corresponding to each 256 byte section of PBM 110. Address table/control memory allows up to 16K unique unicast addresses. The remaining available memory is used for control information. ATM 114, in this example, runs up to 133 MHz.
Switch 100, in one example of the invention, has a Flow Control Manager 116 that manages the flow of packet data. As each port sends more and more data to the switch, Flow Control Manager 116 can monitor the amount of memory being used by each port 102(1)-102(12) of switch 100 and by the switch as a whole. In this example, if one of the ports 102(1)-102(12), or the switch as a whole, uses more memory than a threshold predetermined by a register setting, predefined by the manufacturer or by a user, Flow Control Manager 116 will issue commands over the ATM Bus requesting that the port or switch slow down, and may eventually drop packets if necessary.
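The manager's decision amounts to comparing per-port and whole-switch buffer usage against register-programmed thresholds. A hypothetical sketch follows; the limit values and the two hook functions are assumptions, standing in for an IEEE 802.3x pause frame on a full-duplex port and collision backpressure on a half-duplex port, as described earlier.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PORTS 13

/* Register-programmed limits; the values are placeholders. */
static uint32_t port_limit   = 1024;    /* 256-byte buffers per port  */
static uint32_t switch_limit = 12288;   /* buffers for the whole chip */

static uint32_t port_used[NUM_PORTS];
static bool     full_duplex[NUM_PORTS];

static void send_pause_frame(int port)    { printf("802.3x pause, port %d\n", port); }
static void assert_backpressure(int port) { printf("backpressure, port %d\n", port); }

void flow_control_poll(void)
{
    uint32_t total = 0;
    for (int i = 0; i < NUM_PORTS; i++)
        total += port_used[i];

    for (int i = 0; i < NUM_PORTS; i++) {
        if (port_used[i] > port_limit || total > switch_limit) {
            if (full_duplex[i]) send_pause_frame(i);    /* full duplex */
            else                assert_backpressure(i); /* half duplex */
        }
    }
}
```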
In addition to Flow control manager 116, switch 100 also has a Start Point Manager (SPM) 118 connected to Switching Fabric 108, a Forwarding Manager (FM) 120 connected to Switching Fabric 108 and an Address Manager (AM) 122 connected to Switching Fabric 108.
Start Point Manager (SPM) 118, through Switching Fabric 108 in one example of the present invention, keeps track of which blocks of memory in PBM 110 are being used and which blocks of memory are free.
Forwarding Manager 120 can, for example, forward packet data through Switching Fabric 108 to appropriate ports for transmission.
Address Manager (AM) 122 can, through Switching Fabric 108, manage the address table including learning source addresses, assigning headers to packets and keeping track of these addresses. In one example of the invention, AM 122 uses aging to remove addresses from the address table that have not been used for a specified time period or after a sequence of events.
An expansion port 124 can also be provided on switch 100 to connect two switches together. This allows full wire-speed operation on twenty-five 100 Mbps ports (including one CPU port) and two gigabit ports. The expansion port 124, in this example, allows 4.6 Gbps of data to be transmitted between switches.
An LED controller 126 can also be provided on switch 100. LED controller 126 activates appropriate LEDs or other suitable indicators to give a user necessary status information. Each of the ports 102(1)-102(12), in one example of the invention, has 4 separate LEDs, which provide per-port status information. The LEDs are fully programmable and are made up of port LEDs and other LEDs. A default state can be defined for each of the four port LEDs. An example of the default operation of each of the port LEDs is shown below.
LED DEFAULT OPERATION
LED 0: Speed Indicator
    OFF = 10 Mbps or no link
    ON = 100 Mbps
LED 1: Full/Half/Collision Duplex
    OFF = The port is in half duplex or no link
    BLINK = The port is in half duplex and a collision has occurred
    ON = The port is in full duplex
LED 2: Link/Activity Indicator
    OFF = The port does not have link
    BLINK = Link is present and receive or transmit activity is occurring on the media
    ON = Link present without activity
LED 3: Alert Condition
    OFF = No alert conditions, port is operating normally
    ON = The port has detected an isolate condition
In addition to the default operations for the port LEDs, each of the port LEDs can be programmed through registers. These registers can be set up, in one example of the invention, by a CPU. By having programmable registers that control LEDs, full customization of the system architecture can be realized including the programmability of the blink rate.
Each of the LEDs can have a table, as shown below, associated with the LED, where register bits RAx, RBx and RCx can be set to provide a wide range of information.
Event (ON Condition; BLINK Condition; OFF Condition):
Link (L): A0 = (RA0&L) | !RA0; B0 = (RB0&L) | !RB0; C0 = (RC0&L) | !RC0
Isolate (I): A1 = (RA1&I) | !RA1; B1 = (RB1&I) | !RB1; C1 = (RC1&I) | !RC1
Speed (S): A2 = (RA2&S) | !RA2; B2 = (RB2&S) | !RB2; C2 = (RC2&S) | !RC2
Duplex (D): A3 = (RA3&D) | !RA3; B3 = (RB3&D) | !RB3; C3 = (RC3&D) | !RC3
TX/RX Activity (TRA): A4 = (RA4&TRA) | !RA4; B4 = (RB4&TRA) | !RB4; C4 = (RC4&TRA) | !RC4
TX Activity (TA): A5 = (RA5&TA) | !RA5; B5 = (RB5&TA) | !RB5; C5 = (RC5&TA) | !RC5
RX Activity (RA): A6 = (RA6&RA) | !RA6; B6 = (RB6&RA) | !RB6; C6 = (RC6&RA) | !RC6
Auto-Negotiate Active (N): A7 = (RA7&N) | !RA7; B7 = (RB7&N) | !RB7; C7 = (RC7&N) | !RC7
Port Disabled (PD): A8 = (RA8&PD) | !RA8; B8 = (RB8&PD) | !RB8; C8 = (RC8&PD) | !RC8
Collision (C): A9 = (RA9&C) | !RA9; B9 = (RB9&C) | !RB9; C9 = (RC9&C) | !RC9
Result: LEDON = (A0&A1&A2&A3&A4&A5&A6&A7&A8&A9) & !LEDBLINK & !LEDOFF; LEDBLINK = (B0&B1&B2&B3&B4&B5&B6&B7&B8&B9) & !LEDOFF; LEDOFF = (C0&C1&C2&C3&C4&C5&C6&C7&C8&C9)
For example, register bits RAx, RBx and RCx can be set to determine when LEDON, LEDBLINK and LEDOFF are activated or deactivated. In addition to the port LEDs, there are additional LEDs which indicate the status of the switch.
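A software reading of this table follows. The (Rx & E) | !Rx terms come directly from the rows above (a cleared register bit makes the event a "don't care"); the OFF-over-BLINK-over-ON priority in resolve_led() is one plausible interpretation of the Result row.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_EVENTS 10   /* L, I, S, D, TRA, TA, RA, N, PD, C */

/* Implements one table entry (Rx & E) | !Rx: with the register bit clear
 * the term is always 1; with it set, the term tracks the event. */
static bool term(bool reg_bit, bool event)
{
    return (reg_bit && event) || !reg_bit;
}

enum led_state { LED_OFF, LED_BLINK, LED_ON, LED_IDLE };

/* Resolve one LED from its RA/RB/RC register masks (one bit per event)
 * and the current event bits, ANDing all ten terms per column. */
static enum led_state resolve_led(uint16_t ra, uint16_t rb, uint16_t rc,
                                  uint16_t events)
{
    bool on = true, blink = true, off = true;
    for (int i = 0; i < NUM_EVENTS; i++) {
        bool e = (events >> i) & 1;
        on    &= term((ra >> i) & 1, e);
        blink &= term((rb >> i) & 1, e);
        off   &= term((rc >> i) & 1, e);
    }
    if (off)   return LED_OFF;
    if (blink) return LED_BLINK;
    if (on)    return LED_ON;
    return LED_IDLE;
}
```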
Registers 128 are located on switch 100 in this example of the present invention. Registers 128 are full registers that allow for configuration, status and Remote Monitoring (RMON) management. In this example, Registers 128 are arranged into groups and offsets. There are 32 address groups, each of which can contain up to 64 registers.
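One plausible way to index such a register file is sketched below. The description states only the 32-group, 64-register arrangement; the flat group*64+offset packing here is purely an assumption for illustration.

```c
#include <stdint.h>

#define REG_GROUPS     32   /* address groups              */
#define REGS_PER_GROUP 64   /* registers within each group */

/* Hypothetical flat register index formed from a group and an offset;
 * the actual encoding is not spelled out in the description. */
static uint16_t reg_index(uint8_t group, uint8_t offset)
{
    return (uint16_t)((group % REG_GROUPS) * REGS_PER_GROUP
                      + (offset % REGS_PER_GROUP));
}
```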
FIG. 2 is an illustration of one embodiment of the invention having a PBM Bus, an ATM Bus, and a TXM Bus for communications with other portions of the switch. In this example PBM 110 is connected to the PBM Bus and an external PBM Memory; TXM 112 is connected to the TXM Bus and an external TXM Memory; and ATM 114 is connected to the ATM Bus and an external ATM Memory. Each of the transmit (TX) and receive (RX) portions of ports 102(1)-102(12) are connected to the PBM Bus, ATM Bus and TXM Bus for communications.
FM 120 is connected to each of the ports 102(1)-102(12) directly and is also connected to the ATM Bus for communications with other portions of the switch. SPM 118 and AM 122 are also connected to the ATM Bus for communications with other portions of the switch.
The operation of switch 100 for transmission of a unicast packet (i.e., a packet destined for a single output port), in one example of the invention, is described with reference to FIG. 2 as follows.
In this example, Switch 100 is initialized following the release of a hardware reset pin. A series of initialization steps occurs, including initialization of the external buffer memory and the address table. All ports on the switch are then disabled, and the CPU enables packet traffic by setting an enable register. As links become available on the ports (ports 102(1)-102(12) and gigabit port 104), an SPT protocol confirms these ports and the ports become activated. After the initialization process concludes, normal operation of Switch 100 can begin.
In this example, once a port has been initialized and activated, a PORT_ACTIVE command is issued by the CPU. This indicates that the port is ready to transmit and receive data packets. If for some reason a port goes down or becomes disabled, a PORT_INACTIVE command is issued by the CPU.
During unicast transmission, a packet from an external source on port 102(1) is received at the receive (RX) PHY of port 102(1).
In this example, the RX MAC of port 102(1) will not start processing the packet until a Start of Frame Delimiter (SFD) for the packet is detected. When the SFD is detected by the RX MAC portion of port 102(1), the RX MAC will place the packet into a receive (RX) FIFO of the latency block of port 102(1). As the RX FIFO becomes filled, port 102(1) will request an empty receive buffer from the SPM. Once access to the ATM Bus is granted, the RX FIFO Latency block of port 102(1) sends packets received in the RX FIFO to the external PBM Memory through the PBM Bus and PBM 110 until the end of packet is reached.
The PBM Memory, in this example, is made up of 256 byte buffers. Therefore, one packet may span several buffers within the packet buffer memory if the packet size is greater than 256 bytes. Connections between packet buffers can be maintained through a linked list system in one example of the present invention. A linked list system allows for efficient memory usage and minimized bandwidth overhead and will be explained in further detail with relation to FIG. 3A-FIG. 3D.
At the same time packets are being sent to the external PBM Memory, the port will also send the source address to Address Manager (AM) 122 and request a filtering table from AM 122.
If the packet is “good”, as determined through normal, standard procedures known to those of ordinary skill in the art, such as valid-length and IEEE-standard packet checking or a Cyclic Redundancy Check, the port writes the header information to the ATM memory through the ATM Bus and ATM 114. AM 122 sends a RECEP_COMPL command over the ATM Bus signifying that packet reception is complete. Other information is also sent along with the RECEP_COMPL command, such as the start address and the filtering table, which indicates which ports the packet is to be sent out on. For example, a filtering table having a string such as “011111111111” would send the packet to all ports except port 1 and would have a count of 11. The count is simply the number of ports the packet is to be sent to, as indicated by the number of “1”s.
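The count is therefore a population count over the filtering table, as in the short sketch below; the 16-bit width is an assumption sized to the 12+2-port example.

```c
#include <stdint.h>

/* Count the ports a packet must be sent to: one per "1" bit in the
 * filtering table. For 011111111111 (port 1 excluded) the count is 11. */
static int filter_count(uint16_t filter_table)
{
    int count = 0;
    while (filter_table) {
        count += filter_table & 1;
        filter_table >>= 1;
    }
    return count;
}
```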
Forwarding Manager (FM) 120 constantly monitors the ATM Bus to determine if a RECEP_COMPL command has been issued. Once FM 120 has determined that a RECEP_COMPL command has been issued, FM 120 will use the filtering table to send the packet to the appropriate ports. It is noted that a packet will not be forwarded if one of the following conditions is met (a short sketch of this check follows the list):
a. The packet contains a CRC error
b. The PHY signals a receive error
c. The packet is less than a minimum threshold such as 64 bytes
d. The packet is greater than a maximum threshold such as 1518 bytes or 1522 bytes depending on register settings
e. The packet would be forwarded only back to the receiving port
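Conditions a-e reduce to a single drop check, modeled below; the packet descriptor and its field names are hypothetical names for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define MIN_FRAME 64   /* minimum threshold, condition c */

struct rx_packet {
    uint16_t length;
    bool     crc_error;      /* condition a */
    bool     phy_rx_error;   /* condition b */
    uint16_t filter_table;   /* one bit per destination port */
    uint8_t  src_port;       /* the receiving port */
};

/* Returns true if any of conditions a-e holds and the packet must not
 * be forwarded. max_frame is 1518 or 1522 depending on register settings. */
static bool must_drop(const struct rx_packet *p, uint16_t max_frame)
{
    uint16_t others = p->filter_table & (uint16_t)~(1u << p->src_port);
    return p->crc_error                 /* a */
        || p->phy_rx_error              /* b */
        || p->length < MIN_FRAME        /* c */
        || p->length > max_frame        /* d */
        || others == 0;                 /* e: only the receiving port */
}
```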
The RECEP_COMPL command includes information such as a filter table, a start pointer, priority information and other miscellaneous information. FM 120 will read the filter table to determine if the packet is to be transmitted from one of its ports. If it is determined that the packet is to be transmitted from one of its ports, FM 120 will send the RECEP_COMPL command information directly to the port. In this case, the RECEP_COMPL command information is sent to the TX FIFO of port 102(12).
If the port is busy, the RECEP_COMPL command information is transferred to TXM Memory through the TXM Bus and TXM 112. The TXM Memory contains a queue of packets to be transmitted. TXM Memory is allocated on a per-port basis, so that if there are ten ports there are ten queues within the TXM Memory, one allocated to each port. As each port's transmitter becomes idle, the port will read the next RECEP_COMPL command information stored in the TXM Memory. The TX FIFO of port 102(12) will receive, as part of the RECEP_COMPL command information, a start pointer which points to a header in ATM memory across the ATM Bus, which in turn points to the location of the packet in the PBM Memory over the PBM Bus. The port will at this point request to load the packet into the transmit (TX) FIFO of port 102(12) and send it out through the MAC and PHY of port 102(12).
If the port is in half duplex mode, it is possible that a collision could occur and force the packet transmission to start over. If this occurs, the port simply re-requests the bus, reloads the packet and starts over again. If, however, the number of consecutive collisions becomes excessive, the packet will be dropped from the transmission queue.
Once the port successfully transmits a packet, the port will signal FM 120 that it is done with the current buffer. FM 120 will then decrement a counter which indicates how many more ports must transmit the packet. For example, if a packet is destined for eleven ports for output, the counter, in this example, is set to 11. Each time the packet is successfully transmitted, FM 120 decrements the counter by one. When the counter reaches zero, all designated ports have successfully transmitted the packet. FM 120 will then issue a FREE command over the ATM Bus indicating that the memory occupied by the packet in the PBM Memory is no longer needed and can now be freed for other use.
When SPM 118 detects a FREE command over the ATM Bus, steps are taken to indicate that the space taken by the packet is now free memory.
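The per-packet counter described above behaves like a reference count. The following sketch models it in software, with issue_free() standing in for the FREE command on the ATM Bus; the names are illustrative.

```c
#include <stdint.h>

/* Per-packet transmit counter kept by the Forwarding Manager: set to the
 * filter-table count on reception, decremented per successful transmit. */
struct packet_ref {
    uint16_t start;       /* first PBM segment of the packet       */
    uint8_t  tx_pending;  /* ports that still have to transmit it  */
};

/* Stand-in for the FREE command issued over the ATM Bus. */
static void issue_free(uint16_t start_segment)
{
    (void)start_segment;
}

/* Called each time a port signals it is done with the current buffer. */
static void on_port_done(struct packet_ref *ref)
{
    if (ref->tx_pending > 0 && --ref->tx_pending == 0)
        issue_free(ref->start);
}
```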
Multicast and broadcast packets are handled exactly like unicast packets with the exception that their filter tables will indicate that all or most ports should transmit the packet. This will force the forwarding managers to transmit the packet out on all or most of their ports.
FIG. 3A is an illustration of a PBM Memory structure in one example of the invention. PBM Memory Structure 300 is a linked list of 256 byte segments 302, 304, 306, 308, 310, 312, 314 and 316. In this example segment 302 is the free_head indicating the beginning of the free memory linked list and segment 316 is the free_tail indicating the last segment of free memory.
In FIG. 3B two packets have been received and stored in the PBM Memory. Packet 1 occupies segments 302, 306 and 308 and packet 2 occupies segment 304. Segments 310, 312, 314 and 316 are free memory. Segment 310 is the free_head indicating the beginning of free memory and segment 316 is the free_tail indicating the end of free memory.
In FIG. 3C packet 1 has been fully transmitted and the Forwarding Manager (FM) has issued a FREE command. Since packet 1 is already in a linked list format the SPM can add the memory occupied by packet 1 to the free memory link list. The free_head, segment 310 remains the same. However, the free_tail is changed. This is accomplished by linking segment 316 to the beginning of packet 1, which is segment 302, and designating the last segment of packet 1, which is segment 308, as the free_tail. As a result, there is a linked list starting with segment 310 linking to segment 312, segment 312 linking to segment 314, segment 314 linking to segment 316, segment 316 linking to segment 302, segment 302 linking to segment 306 and segment 306 linking to segment 308 where segment 308 is the free_tail.
FIG. 3D in this example simply illustrates the PBM Memory after packet 2 has been transmitted successfully and the Forwarding Manager has issued a FREE command over the ATM Bus. The SPM will detect the FREE command and then add the memory space occupied by packet 2 in the PBM Memory to the free memory linked list. In this example segment 308 is linked to the memory occupied by packet 2, segment 304, and segment 304 is identified as the free_tail.
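Because a packet's segments are already linked, the splice described for FIGS. 3C and 3D reduces to two pointer writes. A software model follows, assuming a next[] array of segment links and a NIL terminator (both illustrative, not part of the disclosure).

```c
#include <stdint.h>

#define NUM_SEGMENTS 32768   /* 256-byte segments in the PBM Memory */
#define NIL          0xFFFF

/* next[i] links segment i to the following segment, both for packets in
 * flight and for the free memory list of FIGS. 3A-3D. */
static uint16_t next[NUM_SEGMENTS];
static uint16_t free_head, free_tail;

/* On a FREE command the SPM appends the packet's already-linked chain to
 * the free list: old free_tail links to the packet's first segment, and
 * the packet's last segment becomes the new free_tail. */
static void spm_free(uint16_t pkt_head, uint16_t pkt_tail)
{
    next[free_tail] = pkt_head;
    next[pkt_tail]  = NIL;
    free_tail       = pkt_tail;
}
```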
FIG. 4 is an illustration of one embodiment of the invention. In this embodiment, a processor 400 has a bus arbitrator 402 and an access controller 404. Arbitrator 402 is in communication with access controller 404. Arbitrator 402 manages command and data traffic within processor 400 and also manages commands and data traffic external of processor 400. Access controller 404 receives data and commands from bus arbitrator 402 and processes these commands and data at a processor clock speed.
A command bus, CMD Bus 406, and a data bus 408 are provided for communication and transmission of commands and data. Arbitrator 402 is in communication with CMD Bus 406 and data bus 408. In one embodiment of the invention there are three client systems that operate at a core clock speed. The first client system 410 has a core 412, a sync 414, a transmit FIFO 416 and a receive FIFO 418. Core 412 can be the basic circuitry for a network switch to transmit and receive data. Sync 414 can synchronize the processor clock speed with the core clock speed to transmit signals between each of the cores and the processor.
Core 412 is in communication with a sync 414, a transmit FIFO 416 and a receive FIFO 418. Sync 414 is in communication with transmit FIFO 416 and receive FIFO 418. Sync 414 is also in communication with CMD Bus 406 and data bus 408. Transmit FIFO 416 is in communication with CMD Bus 406 and data bus 408, and receive FIFO 418 is also in communication with CMD Bus 406 and data bus 408.
Similarly, client systems 420 and 430 are arranged identically to client system 410. Client system 420 has a core 422, a sync 424, a transmit FIFO 426 and a receive FIFO 428. The interconnections between core 422, sync 424, transmit FIFO 426, receive FIFO 428, CMD Bus 406 and data bus 408 are identical to those described with relation to client system 410.
Similarly, client system 430 has a core 432, a sync 434, a transmit FIFO 436 and a receive FIFO 438. The interconnections between core 432, sync 434, transmit FIFO 436, receive FIFO 438, CMD Bus 406 and data bus 408 are identical to those described with respect to client systems 410 and 420.
Core 412, core 422 and core 432 can be the basic circuitry for a network switch. Receive FIFO 418 can be used as a receive buffer to receive signals over CMD Bus 406 and data bus 408 from processor 400 at a first clock speed. When receive FIFO 418 receives a signal at a first clock speed, namely, the processor clock speed, sync 414 will hold CMD Bus 406 and data bus 408. The signal received in the receive FIFO 418 will then be transmitted to core 412 at a second clock speed, which can be a slower clock speed at which core 412 operates.
Core 412 will receive the signal from the receive FIFO 418 at the second, core clock speed and process the signal at the second, core clock speed. When the core is finished processing the signal and the signal is to be transmitted to the processor, core 412 can transmit the signal at the second core clock speed to transmit FIFO 416. Transmit FIFO 416 can receive the signal processed by core 412 at the second core clock speed and hold the signal as a buffer. The transmit FIFO 416 can then transmit the signal received from core 412 across CMD Bus 406 and/or data bus 408 to processor 400 at the processor clock speed. Arbitrator 402 can then receive the signal over CMD Bus 406 and/or data bus 408 for further processing.
Upon completion of the transmission of the signal from the transmit FIFO 416 to processor 400, sync 414 can release the CMD Bus 406 and/or data bus 408. Arbitrator 402 will then receive the signal from the client 410 and will transmit the signal to access controller 404 for further processing.
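The ingress path of FIG. 4 can be modeled as a FIFO written in the processor's clock domain and read in the core's. The sketch below is a software analogue only: the depth, names and free-running-index occupancy test are assumptions, and the cross-domain pointer synchronization (e.g., Gray coding) that real hardware requires is deliberately omitted.

```c
#include <stdbool.h>
#include <stdint.h>

#define FIFO_DEPTH 16   /* assumed depth; must be a power of two here */

/* Software model of one client's receive FIFO: the processor side fills
 * it at the faster processor clock, the core side drains it at the
 * slower core clock. */
struct cdc_fifo {
    uint32_t slot[FIFO_DEPTH];
    unsigned wr, rd;    /* free-running indices; difference = occupancy */
};

/* Called in the processor clock domain while the sync holds the bus. */
static bool fifo_write(struct cdc_fifo *f, uint32_t word)
{
    if (f->wr - f->rd == FIFO_DEPTH)
        return false;                      /* full */
    f->slot[f->wr++ % FIFO_DEPTH] = word;
    return true;
}

/* Called in the core clock domain as the core consumes the signal. */
static bool fifo_read(struct cdc_fifo *f, uint32_t *word)
{
    if (f->wr == f->rd)
        return false;                      /* empty */
    *word = f->slot[f->rd++ % FIFO_DEPTH];
    return true;
}
```

The transmit FIFO works the same way with the clock domains swapped: the core writes at the core clock, and the processor side drains at the processor clock once the sync has latched the bus.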
From the above description, it is evident that processor 400 will take requests and commands to access a client's data. Thus, arbitrator 402 will manage the command and data traffic between processor 400 and its clients 410, 420 and 430. Access controller 404 can then take these commands and data for further processing by processor 400.
The core for each of the clients is responsible for issuing commands and data to processor 400 and also for accepting manipulated data from processor 400. Since processor 400 and each of the cores 412, 422 and 432 operate at different clock speeds, each client is provided with a transmit FIFO and a receive FIFO to buffer the data. In the case of client 410, a transmit FIFO 416 and a receive FIFO 418 are provided. In the case of client 420, a transmit FIFO 426 and a receive FIFO 428 are provided. In the case of client 430, a transmit FIFO 436 and a receive FIFO 438 are provided.
One of the FIFOs (receive FIFO 418, receive FIFO 428, receive FIFO 438) is for the ingress and the other FIFO (transmit FIFO 416, transmit FIFO 426, transmit FIFO 436) is for the egress.
Each of the syncs 414, 424 and 434 plays an important role in the present invention. Syncs 414, 424 and 434 synchronize the request signal from the clock domain of the cores 412, 422 and 432 to the clock domain of the processor 400. At the same time the core requests commands from the processor, data must also be ready to be accessed. Once processor 400 grants the bus to the client, the processor can drive commands and data at the clock speed of processor 400. After data processing is complete, processor 400 will send the manipulated data to the client that requested it. Each of the sync blocks 414, 424 and 434 will latch the manipulated data at the clock speed of the processor and will hold the data long enough to write the data to the egress FIFO (transmit FIFO 416, transmit FIFO 426, transmit FIFO 436) at the clock speed of the core.
When the entire process is complete, processor 400 can then serve other clients. The advantage of the invention as described above is that only a few signals will need to be synchronized which will achieve reduced clock speed on the core of each client. This will also reduce engineering efforts for asynchronous FIFO and controller design.
FIG. 5 is a flow chart which illustrates another embodiment of the invention. In this embodiment, in step 510, sync 414 latches a bus (command bus 406 and/or data bus 408). In step 520 a signal is received over the bus at a first processor clock speed in receive buffer, RX FIFO 418. In step 530, the signal is transmitted from the receive FIFO 418 to core 412 at a second clock speed (core clock speed) and processed by the core at the second core clock speed.
In step 540, the signal is transmitted from the core to transmit FIFO 416 at the second core clock speed. The signals are transmitted from the transmit FIFO 416 to processor 400 over the bus at the first processor clock speed in step 550 and in step 560, sync 414 frees the bus and allows processor 400 to serve other clients.
The above-discussed configuration of the invention is, in a preferred embodiment, embodied on a semiconductor substrate, such as silicon, with appropriate semiconductor manufacturing techniques and based upon a circuit layout which would, based upon the embodiments discussed above, be apparent to those skilled in the art. A person of skill in the art with respect to semiconductor design and manufacturing would be able to implement the various modules, interfaces, and tables, buffers, etc. of the present invention onto a single semiconductor substrate, based upon the architectural description discussed above. It would also be within the scope of the invention to implement the disclosed elements of the invention in discrete electronic components, thereby taking advantage of the functional aspects of the invention without maximizing the advantages through the use of a single semiconductor substrate.
Although the invention has been described based upon these preferred embodiments, it would be apparent to those skilled in the art that certain modifications, variations and alternative constructions can be made while remaining within the spirit and scope of the invention. In order to determine the metes and bounds of the invention, therefore, reference should be made to the appended claims.

Claims (6)

1. A system comprising:
a core that transmits and receives signals at a first clock speed;
a receive buffer in communication with said core and configured to transmit said signals to said core at said first clock speed;
a transmit buffer in communication with said core and configured to receive signals from said core at said first clock speed;
a sync configured to allow signals to be received in said receive buffer at a second clock speed and to allow signals to be transmitted from said transmit buffer at said second clock speed, said sync in communication with said transmit buffer and said receive buffer;
a command bus in communication with said sync, said transmit buffer, and said receive buffer;
a data bus in communication with said sync, said transmit buffer, and said receive buffer;
a processor in communication with said command bus and said data bus, said processor having, a bus arbitrator in communication with said command bus and said data bus to receive, transmit and manage signals transferred along said command bus and said data bus; and
an access controller in communication with said bus arbitrator to process said signals.
2. The system as recited in claim 1 wherein said transmit buffer comprises a transmit FIFO.
3. The system as recited in claim 1 wherein said receive buffer comprises a receive FIFO.
4. The system as recited in claim 1 wherein said signals comprise command signals.
5. The system as recited in claim 1 wherein said signals comprise data signals.
6. The system as recited in claim 1 wherein said sync is configured to latch said signals at said second clock speed and hold said signals long enough to allow said core to transmit signals to said transmit buffer at said first clock speed.
US09/858,505 2000-10-03 2001-05-17 Method and apparatus for reducing clock speed and power consumption Expired - Lifetime US7274705B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US09/858,505 US7274705B2 (en) 2000-10-03 2001-05-17 Method and apparatus for reducing clock speed and power consumption
EP01308917A EP1207640A3 (en) 2000-10-19 2001-10-19 Method and apparatus for reducing clock speed and power consumption
US11/889,741 US7656907B2 (en) 2000-10-03 2007-08-16 Method and apparatus for reducing clock speed and power consumption

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US23776400P 2000-10-03 2000-10-03
US24133200P 2000-10-19 2000-10-19
US09/858,505 US7274705B2 (en) 2000-10-03 2001-05-17 Method and apparatus for reducing clock speed and power consumption

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/889,741 Continuation US7656907B2 (en) 2000-10-03 2007-08-16 Method and apparatus for reducing clock speed and power consumption

Publications (2)

Publication Number Publication Date
US20020041599A1 US20020041599A1 (en) 2002-04-11
US7274705B2 true US7274705B2 (en) 2007-09-25

Family

ID=26934202

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/858,505 Expired - Lifetime US7274705B2 (en) 2000-10-03 2001-05-17 Method and apparatus for reducing clock speed and power consumption
US11/889,741 Expired - Fee Related US7656907B2 (en) 2000-10-03 2007-08-16 Method and apparatus for reducing clock speed and power consumption

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/889,741 Expired - Fee Related US7656907B2 (en) 2000-10-03 2007-08-16 Method and apparatus for reducing clock speed and power consumption

Country Status (2)

Country Link
US (2) US7274705B2 (en)
EP (1) EP1207640A3 (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060248376A1 (en) * 2005-04-18 2006-11-02 Bertan Tezcan Packet processing switch and methods of operation thereof
US20070291757A1 (en) * 2004-02-27 2007-12-20 Robert William Albert Dobson Data Storage and Processing Systems
US7706387B1 (en) 2006-05-31 2010-04-27 Integrated Device Technology, Inc. System and method for round robin arbitration
US7747904B1 (en) 2006-05-12 2010-06-29 Integrated Device Technology, Inc. Error management system and method for a packet switch
US7817652B1 (en) * 2006-05-12 2010-10-19 Integrated Device Technology, Inc. System and method of constructing data packets in a packet switch
US8737410B2 (en) 2009-10-30 2014-05-27 Calxeda, Inc. System and method for high-performance, low-power data center interconnect fabric
US9054990B2 (en) 2009-10-30 2015-06-09 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9069929B2 (en) 2011-10-31 2015-06-30 Iii Holdings 2, Llc Arbitrating usage of serial port in node card of scalable and modular servers
US9077654B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US9311269B2 (en) 2009-10-30 2016-04-12 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US9465771B2 (en) 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US9585281B2 (en) 2011-10-28 2017-02-28 Iii Holdings 2, Llc System and method for flexible storage and networking provisioning in large scalable processor installations
US9648102B1 (en) 2012-12-27 2017-05-09 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9680770B2 (en) 2009-10-30 2017-06-13 Iii Holdings 2, Llc System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US9876735B2 (en) 2009-10-30 2018-01-23 Iii Holdings 2, Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US10140245B2 (en) 2009-10-30 2018-11-27 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11467883B2 (en) 2004-03-13 2022-10-11 Iii Holdings 12, Llc Co-allocating a reservation spanning different compute resources types
US11494235B2 (en) 2004-11-08 2022-11-08 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11496415B2 (en) 2005-04-07 2022-11-08 Iii Holdings 12, Llc On-demand access to compute resources
US11522952B2 (en) 2007-09-24 2022-12-06 The Research Foundation For The State University Of New York Automatic clustering for self-organizing grids
US11630704B2 (en) 2004-08-20 2023-04-18 Iii Holdings 12, Llc System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information
US11652706B2 (en) 2004-06-18 2023-05-16 Iii Holdings 12, Llc System and method for providing dynamic provisioning within a compute environment
US11650857B2 (en) 2006-03-16 2023-05-16 Iii Holdings 12, Llc System and method for managing a hybrid computer environment
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11960937B2 (en) 2022-03-17 2024-04-16 Iii Holdings 12, Llc System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050083824A (en) * 2002-12-26 2005-08-26 마쯔시다덴기산교 가부시키가이샤 Data transmission device, data transmission system, and method
US7480282B2 (en) * 2005-03-17 2009-01-20 Agere Systems Inc. Methods and apparatus for controlling ethernet packet transfers between clock domains
US8614954B2 (en) * 2006-10-26 2013-12-24 Hewlett-Packard Development Company, L.P. Network path identification
DE102006060821A1 (en) * 2006-12-21 2008-06-26 Infineon Technologies Ag Network synchronous data interface
US8391300B1 (en) * 2008-08-12 2013-03-05 Qlogic, Corporation Configurable switch element and methods thereof
US8996736B2 (en) * 2013-02-01 2015-03-31 Broadcom Corporation Clock domain crossing serial interface, direct latching, and response codes
US9407736B1 (en) * 2015-01-13 2016-08-02 Broadcom Corporation Remote monitoring and configuration of physical layer devices
CN106612539A (en) * 2016-12-30 2017-05-03 上海与德信息技术有限公司 Power consumption control method
US10694546B2 (en) * 2017-09-22 2020-06-23 Nxp Usa, Inc. Media access control for duplex transmissions in wireless local area networks
US11463187B2 (en) * 2020-04-14 2022-10-04 Google Llc Fault tolerant design for clock-synchronization systems


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5920836A (en) * 1992-11-13 1999-07-06 Dragon Systems, Inc. Word recognition system using language context at current cursor position to affect recognition probabilities
US6246680B1 (en) 1997-06-30 2001-06-12 Sun Microsystems, Inc. Highly integrated multi-layer switch element architecture

Patent Citations (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0312917A2 (en) 1987-10-19 1989-04-26 Oki Electric Industry Company, Limited Self-routing multistage switching network for fast packet switching system
US5423015A (en) 1988-10-20 1995-06-06 Chung; David S. F. Memory structure and method for shuffling a stack of data utilizing buffer memory locations
US5126598A (en) * 1989-09-29 1992-06-30 Fujitsu Limited Josephson integrated circuit having an output interface capable of providing output data with reduced clock rate
EP0465090A1 (en) 1990-07-03 1992-01-08 AT&T Corp. Congestion control for connectionless traffic in data networks via alternate routing
US5742613A (en) 1990-11-02 1998-04-21 Syntaq Limited Memory array of integrated circuits capable of replacing faulty cells with a spare
JPH04189023A (en) 1990-11-22 1992-07-07 Victor Co Of Japan Ltd Pulse synchronizing circuit
US5278789A (en) 1990-12-12 1994-01-11 Mitsubishi Denki Kabushiki Kaisha Semiconductor memory device with improved buffer for generating internal write designating signal and operating method thereof
US5652579A (en) 1991-12-27 1997-07-29 Sony Corporation Knowledge-based access system for control functions
US5524254A (en) 1992-01-10 1996-06-04 Digital Equipment Corporation Scheme for interlocking line card to an address recognition engine to support plurality of routing and bridging protocols by using network information look-up database
US5448715A (en) * 1992-07-29 1995-09-05 Hewlett-Packard Company Dual clock domain interface between CPU and memory bus
US5414704A (en) 1992-10-22 1995-05-09 Digital Equipment Corporation Address lookup in packet data communications link, using hashing and content-addressable memory
US5390173A (en) 1992-10-22 1995-02-14 Digital Equipment Corporation Packet format in hub for packet data communications system
US5696899A (en) 1992-11-18 1997-12-09 Canon Kabushiki Kaisha Method and apparatus for adaptively determining the format of data packets carried on a local area network
US5473607A (en) 1993-08-09 1995-12-05 Grand Junction Networks, Inc. Packet filtering for data networks
US5499295A (en) 1993-08-31 1996-03-12 Ericsson Inc. Method and apparatus for feature authorization and software copy protection in RF communications devices
US5887187A (en) 1993-10-20 1999-03-23 Lsi Logic Corporation Single chip network adapter apparatus
US5802287A (en) 1993-10-20 1998-09-01 Lsi Logic Corporation Single chip universal protocol multi-function ATM network interface
US5579301A (en) 1994-02-28 1996-11-26 Micom Communications Corp. System for, and method of, managing voice congestion in a network environment
US5459717A (en) 1994-03-25 1995-10-17 Sprint International Communications Corporation Method and apparatus for routing messagers in an electronic messaging system
US5555398A (en) 1994-04-15 1996-09-10 Intel Corporation Write back cache coherency module for systems with a write through cache supporting bus
FR2725573A1 (en) 1994-10-11 1996-04-12 Thomson Csf METHOD AND DEVICE FOR THE CONGESTION CONTROL OF SPORADIC EXCHANGES OF DATA PACKAGES IN A DIGITAL TRANSMISSION NETWORK
US5568477A (en) 1994-12-20 1996-10-22 International Business Machines Corporation Multipurpose packet switching node for a data communication network
US5790539A (en) 1995-01-26 1998-08-04 Chao; Hung-Hsiang Jonathan ASIC chip for implementing a scaleable multicast ATM switch
US5644784A (en) 1995-03-03 1997-07-01 Intel Corporation Linear list based DMA control structure
US5734927A (en) * 1995-06-08 1998-03-31 Texas Instruments Incorporated System having registers for receiving data, registers for transmitting data, both at a different clock rate, and control circuitry for shifting the different clock rates
EP0752796A2 (en) 1995-07-07 1997-01-08 Sun Microsystems, Inc. Buffering of data for transmission in a computer communications system interface
US5754540A (en) 1995-07-18 1998-05-19 Macronix International Co., Ltd. Expandable integrated circuit multiport repeater controller with multiple media independent interfaces and mixed media connections
US5826092A (en) * 1995-09-15 1998-10-20 Gateway 2000, Inc. Method and apparatus for performance optimization in power-managed computer systems
US5825772A (en) 1995-11-15 1998-10-20 Cabletron Systems, Inc. Distributed connection-oriented services for switched communications networks
US5781549A (en) 1996-02-23 1998-07-14 Allied Telesyn International Corp. Method and apparatus for switching data packets in a data network
US5940596A (en) 1996-03-25 1999-08-17 I-Cube, Inc. Clustered address caching system for a network switch
US5828653A (en) 1996-04-26 1998-10-27 Cascade Communications Corp. Quality of service priority subclasses
US5748631A (en) 1996-05-09 1998-05-05 Maker Communications, Inc. Asynchronous transfer mode cell processing system with multiple cell source multiplexing
US5884099A (en) * 1996-05-31 1999-03-16 Sun Microsystems, Inc. Control circuit for a buffer memory to transfer data between systems operating at different speeds
US5787084A (en) 1996-06-05 1998-07-28 Compaq Computer Corporation Multicast data communications switching system and associated method
US5802052A (en) 1996-06-26 1998-09-01 Level One Communication, Inc. Scalable high performance switch element for a shared memory packet or ATM cell switch fabric
US5898687A (en) 1996-07-24 1999-04-27 Cisco Systems, Inc. Arbitration mechanism for a multicast logic engine of a switching fabric circuit
WO1998009473A1 (en) 1996-08-30 1998-03-05 Sgs-Thomson Microelectronics Limited Improvements in or relating to an atm switch
US5845081A (en) 1996-09-03 1998-12-01 Sun Microsystems, Inc. Using objects to discover network information about a remote network having a different network protocol
US5831980A (en) 1996-09-13 1998-11-03 Lsi Logic Corporation Shared memory fabric architecture for very high speed ATM switches
US5842038A (en) 1996-10-10 1998-11-24 Unisys Corporation Optimized input/output memory access request system and method
EP0853441A2 (en) 1996-11-13 1998-07-15 Nec Corporation Switch control circuit and switch control method of ATM switchboard
US5951635A (en) * 1996-11-18 1999-09-14 Vlsi Technology, Inc. Asynchronous FIFO controller
EP0849917A2 (en) 1996-12-20 1998-06-24 International Business Machines Corporation Switching system
EP0854606A2 (en) 1996-12-30 1998-07-22 Compaq Computer Corporation Network switch with statistics read accesses
EP0862349A2 (en) 1997-02-01 1998-09-02 Philips Patentverwaltung GmbH Switching device
EP0859492A2 (en) 1997-02-07 1998-08-19 Lucent Technologies Inc. Fair queuing system with adaptive bandwidth redistribution
US6061351A (en) 1997-02-14 2000-05-09 Advanced Micro Devices, Inc. Multicopy queue structure with searchable cache area
US5892922A (en) 1997-02-28 1999-04-06 3Com Corporation Virtual local area network memory access system
US6011795A (en) 1997-03-20 2000-01-04 Washington University Method and apparatus for fast hierarchical address lookup using controlled expansion of prefixes
WO1999000036A1 (en) 1997-06-26 1999-01-07 Formway Furniture Limited A work station support and/or a mounting bracket used in said work station support
WO1999000945A1 (en) 1997-06-30 1999-01-07 Sun Microsystems, Inc. Multi-layer distributed network element
WO1999000944A1 (en) 1997-06-30 1999-01-07 Sun Microsystems, Inc. Mechanism for packet field replacement in a multi-layer distributed network element
US6119196A (en) 1997-06-30 2000-09-12 Sun Microsystems, Inc. System having multiple arbitrating levels for arbitrating access to a shared memory by network ports operating at different data rates
WO1999000948A1 (en) 1997-06-30 1999-01-07 Sun Microsystems, Inc. A system and method for a multi-layer network element
US5909686A (en) 1997-06-30 1999-06-01 Sun Microsystems, Inc. Hardware-assisted central processing unit access to a forwarding database
WO1999000938A1 (en) 1997-06-30 1999-01-07 Sun Microsystems, Inc. Routing in a multi-layer distributed network element
WO1999000949A1 (en) 1997-06-30 1999-01-07 Sun Microsystems, Inc. A system and method for a quality of service in a multi-layer network element
WO1999000939A1 (en) 1997-06-30 1999-01-07 Sun Microsystems, Inc. Shared memory management in a switched network element
WO1999000950A1 (en) 1997-06-30 1999-01-07 Sun Microsystems, Inc. Trunking support in a high performance network device
US5918074A (en) 1997-07-25 1999-06-29 Neonet Llc System architecture for and method of dual path data processing and management of packets and/or cells and the like
US6041053A (en) 1997-09-18 2000-03-21 Microsoft Corporation Technique for efficiently classifying packets using a trie-indexed hierarchy forest that accommodates wildcards
EP0907300A2 (en) 1997-10-01 1999-04-07 Nec Corporation Buffer controller incorporated in asynchronous transfer mode network for changing transmission cell rate depending on duration of congestion
US6185185B1 (en) 1997-11-21 2001-02-06 International Business Machines Corporation Methods, systems and computer program products for suppressing multiple destination traffic in a computer network
US6084856A (en) 1997-12-18 2000-07-04 Advanced Micro Devices, Inc. Method and apparatus for adjusting overflow buffers and flow control watermark levels
US6175902B1 (en) 1997-12-18 2001-01-16 Advanced Micro Devices, Inc. Method and apparatus for maintaining a time order by physical ordering in a memory
US6317442B1 (en) * 1998-01-20 2001-11-13 Network Excellence For Enterprises Corp. Data switching system with cross bar transmission
US6145100A (en) * 1998-03-04 2000-11-07 Advanced Micro Devices, Inc. Debug interface including timing synchronization logic
US7100020B1 (en) * 1998-05-08 2006-08-29 Freescale Semiconductor, Inc. Digital communications processor
US5987507A (en) 1998-05-28 1999-11-16 3Com Technologies Multi-port communication network device including common buffer memory with threshold control of port packet counters
EP0978968A2 (en) 1998-08-05 2000-02-09 Vitesse Semiconductor Corporation High speed cross point switch routing circuit with word-synchronous serial back plane
US6597693B1 (en) * 1999-05-21 2003-07-22 Advanced Micro Devices, Inc. Common scalable queuing and dequeuing architecture and method relative to network switch data rate
US20040136711A1 (en) * 1999-09-13 2004-07-15 Ciena Corporation Optical fiber ring communication system
US6546451B1 (en) * 1999-09-30 2003-04-08 Silicon Graphics, Inc. Method and apparatus for decoupling processor speed from memory subsystem speed in a node controller

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"A 622-Mb/s 8x8 ATM Switch Chip Set with Shared Multibuffer Architecture," Harufusa Kondoh et al., 8107 IEEE Journal of Solid-State Circuits 28(1993) Jul., No. 7, New York, US, pp. 808-814.
"A High-Speed CMOS Circuit for 1.2Gb/s 16x16 ATM Switching," Alain Chemarin et al. 8107 IEEE Journal of Solid-State Circuits 27(1992) Jul., No. 7, New York, US, pp. 1116-1120.
"Catalyst 8500 CSR Architecture," White Paper XP-002151999, Cisco Systems Inc. 1998, pp. 1-19.
"Computer Networks," A.S. Tanenbaum, Prentice-Hall Int., USA, XP-002147300(1998), Sec. 5.2-Sec. 5.3, pp. 309-320.
"Local Area Network Switch Frame Lookup Technique for Increased Speed and Flexibility," 700 IBM Technical Disclosure Bulletin 38(1995) Jul., No. 7, Armonk, NY, US, pp. 221-222.
"Queue Management for Shared Buffer and Shared Multi-buffer ATM Switches," Yu-Sheng Lin et al., Department of Electronics Engineering & Institute of Electronics, National Chiao Tung University, Hsinchu, Taiwan, R.O.C., Mar. 24, 1996, pp. 688-695.

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7881319B2 (en) * 2004-02-27 2011-02-01 Actix Limited Data storage and processing systems
US20070291757A1 (en) * 2004-02-27 2007-12-20 Robert William Albert Dobson Data Storage and Processing Systems
US11467883B2 (en) 2004-03-13 2022-10-11 Iii Holdings 12, Llc Co-allocating a reservation spanning different compute resources types
US11652706B2 (en) 2004-06-18 2023-05-16 Iii Holdings 12, Llc System and method for providing dynamic provisioning within a compute environment
US11630704B2 (en) 2004-08-20 2023-04-18 Iii Holdings 12, Llc System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information
US11656907B2 (en) 2004-11-08 2023-05-23 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11494235B2 (en) 2004-11-08 2022-11-08 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11886915B2 (en) 2004-11-08 2024-01-30 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11861404B2 (en) 2004-11-08 2024-01-02 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537434B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537435B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11762694B2 (en) 2004-11-08 2023-09-19 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11709709B2 (en) 2004-11-08 2023-07-25 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US11765101B2 (en) 2005-04-07 2023-09-19 Iii Holdings 12, Llc On-demand access to compute resources
US11831564B2 (en) 2005-04-07 2023-11-28 Iii Holdings 12, Llc On-demand access to compute resources
US11496415B2 (en) 2005-04-07 2022-11-08 Iii Holdings 12, Llc On-demand access to compute resources
US11522811B2 (en) 2005-04-07 2022-12-06 Iii Holdings 12, Llc On-demand access to compute resources
US11533274B2 (en) 2005-04-07 2022-12-20 Iii Holdings 12, Llc On-demand access to compute resources
US7739424B2 (en) 2005-04-18 2010-06-15 Integrated Device Technology, Inc. Packet processing switch and methods of operation thereof
US7882280B2 (en) 2005-04-18 2011-02-01 Integrated Device Technology, Inc. Packet processing switch and methods of operation thereof
US20060248376A1 (en) * 2005-04-18 2006-11-02 Bertan Tezcan Packet processing switch and methods of operation thereof
US11650857B2 (en) 2006-03-16 2023-05-16 Iii Holdings 12, Llc System and method for managing a hybrid computer environment
US7747904B1 (en) 2006-05-12 2010-06-29 Integrated Device Technology, Inc. Error management system and method for a packet switch
US7817652B1 (en) * 2006-05-12 2010-10-19 Integrated Device Technology, Inc. System and method of constructing data packets in a packet switch
US7706387B1 (en) 2006-05-31 2010-04-27 Integrated Device Technology, Inc. System and method for round robin arbitration
US11522952B2 (en) 2007-09-24 2022-12-06 The Research Foundation For The State University Of New York Automatic clustering for self-organizing grids
US9465771B2 (en) 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US9262225B2 (en) 2009-10-30 2016-02-16 Iii Holdings 2, Llc Remote memory access functionality in a cluster of data processing nodes
US11526304B2 (en) 2009-10-30 2022-12-13 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9929976B2 (en) 2009-10-30 2018-03-27 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US8737410B2 (en) 2009-10-30 2014-05-27 Calxeda, Inc. System and method for high-performance, low-power data center interconnect fabric
US9977763B2 (en) 2009-10-30 2018-05-22 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US8745302B2 (en) 2009-10-30 2014-06-03 Calxeda, Inc. System and method for high-performance, low-power data center interconnect fabric
US10050970B2 (en) 2009-10-30 2018-08-14 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US10135731B2 (en) 2009-10-30 2018-11-20 Iii Holdings 2, Llc Remote memory access functionality in a cluster of data processing nodes
US10140245B2 (en) 2009-10-30 2018-11-27 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9866477B2 (en) 2009-10-30 2018-01-09 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9008079B2 (en) 2009-10-30 2015-04-14 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9749326B2 (en) 2009-10-30 2017-08-29 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9680770B2 (en) 2009-10-30 2017-06-13 Iii Holdings 2, Llc System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US9054990B2 (en) 2009-10-30 2015-06-09 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9876735B2 (en) 2009-10-30 2018-01-23 Iii Holdings 2, Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9509552B2 (en) 2009-10-30 2016-11-29 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9479463B2 (en) 2009-10-30 2016-10-25 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US9454403B2 (en) 2009-10-30 2016-09-27 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9405584B2 (en) 2009-10-30 2016-08-02 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric with addressing and unicast routing
US9311269B2 (en) 2009-10-30 2016-04-12 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US9077654B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US9075655B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric with broadcast or multicast addressing
US9585281B2 (en) 2011-10-28 2017-02-28 Iii Holdings 2, Llc System and method for flexible storage and networking provisioning in large scalable processor installations
US10021806B2 (en) 2011-10-28 2018-07-10 Iii Holdings 2, Llc System and method for flexible storage and networking provisioning in large scalable processor installations
US9092594B2 (en) 2011-10-31 2015-07-28 Iii Holdings 2, Llc Node card management in a modular and large scalable server system
US9069929B2 (en) 2011-10-31 2015-06-30 Iii Holdings 2, Llc Arbitrating usage of serial port in node card of scalable and modular servers
US9792249B2 (en) 2011-10-31 2017-10-17 Iii Holdings 2, Llc Node card utilizing a same connector to communicate pluralities of signals
US9965442B2 (en) 2011-10-31 2018-05-08 Iii Holdings 2, Llc Node card management in a modular and large scalable server system
US9648102B1 (en) 2012-12-27 2017-05-09 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11960937B2 (en) 2022-03-17 2024-04-16 Iii Holdings 12, Llc System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter

Also Published As

Publication number Publication date
US7656907B2 (en) 2010-02-02
EP1207640A2 (en) 2002-05-22
US20020041599A1 (en) 2002-04-11
EP1207640A3 (en) 2005-10-19
US20070286223A1 (en) 2007-12-13

Similar Documents

Publication Title
US7274705B2 (en) Method and apparatus for reducing clock speed and power consumption
US6851000B2 (en) Switch having flow control management
US20050235129A1 (en) Switch memory management using a linked list structure
EP1313291B1 (en) Apparatus and method for header processing
US7480310B2 (en) Flexible DMA descriptor support
US7339938B2 (en) Linked network switch configuration
EP1181792B1 (en) Stacked network switch configuration
US6430626B1 (en) Network switch with a multiple bus structure and a bridge interface for transferring network data between different buses
US6813268B1 (en) Stacked network switch configuration
US7764674B2 (en) Address resolution snoop support for CPU
US20080247394A1 (en) Cluster switching architecture
EP1195955B1 (en) Switch transferring data using data encapsulation and decapsulation
US6084878A (en) External rules checker interface
US6907036B1 (en) Network switch enhancements directed to processing of internal operations in the network switch
US7120155B2 (en) Switch having virtual shared memory
US7420977B2 (en) Method and apparatus of inter-chip bus shared by message passing and memory access
US7031302B1 (en) High-speed stats gathering in a network switch
EP1338974A2 (en) Method and apparatus of inter-chip bus shared by message passing and memory access
EP1248415B1 (en) Switch having virtual shared memory
EP1212867B1 (en) Constructing an address table in a network switch

Legal Events

Code Title Description
AS Assignment

Owner name: ALTIMA COMMUNICATIONS INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHANG, MICHAEL;SOKOL, MICHAEL A.;REEL/FRAME:011816/0093

Effective date: 20010508

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: MERGER;ASSIGNOR:ALTIMA COMMUNICATIONS, INC.;REEL/FRAME:015571/0985

Effective date: 20040526

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

REMI Maintenance fee reminder mailed
FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047196/0097

Effective date: 20180509

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION DATE PREVIOUSLY RECORDED AT REEL: 047196 FRAME: 0097. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:048555/0510

Effective date: 20180905

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12