WO2001091390A2 - Pipelined and sliced shared-memory switch - Google Patents


Info

Publication number
WO2001091390A2
Authority
WO
WIPO (PCT)
Prior art keywords
words
output
sliced
input
port
Prior art date
Application number
PCT/US2001/016222
Other languages
French (fr)
Other versions
WO2001091390A3 (en)
Inventor
Jintae Oh
Taehee Lee
Chan Park
Hojae Lee
Dongbum Jung
Original Assignee
Engedi Networks, Inc.
Priority date
Filing date
Publication date
Application filed by Engedi Networks, Inc. filed Critical Engedi Networks, Inc.
Priority to AU2001263297A priority Critical patent/AU2001263297A1/en
Publication of WO2001091390A2 publication Critical patent/WO2001091390A2/en
Publication of WO2001091390A3 publication Critical patent/WO2001091390A3/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/10Packet switching elements characterised by the switching fabric construction
    • H04L49/104Asynchronous transfer mode [ATM] switching fabrics
    • H04L49/105ATM switching elements
    • H04L49/108ATM switching elements using shared central buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/25Routing or path finding in a switch fabric
    • H04L49/253Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L49/255Control mechanisms for ATM switching fabrics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3081ATM peripheral units, e.g. policing, insertion or extraction
    • H04L49/309Header conversion, routing tables or routing tags
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5679Arbitration or scheduling

Definitions

  • This invention relates generally to data packet switching and, more particularly, to a device and method for switching segmented packets and fixed-length cells through a sliced shared memory using a pipelined architecture.
  • the complexity of the multiplexer increases geometrically as the number of ports and the bus width increase. For example, a 4x1 multiplexer with a 64-bit data bus width per input port and output port is required for a 4x4 shared memory switch having an eight-bit data bus width per port.
  • the complexity and the delay time are considerably large when the multiplexer with a 64-bit data bus width is implemented in hardware. As the complexity of the multiplexer increases, the delay time associated with transferring data across the multiplexer also increases, especially at high clock speeds.
  • a sliced shared memory architecture is also found in the prior art, such as U.S. Patent 6,031,842.
  • Multiple input and output ports share a common memory in the switch.
  • the common memory is sliced into a division for each input port, and some bits of input data from each input port are shared by the common memory. For example, if an input port has an 8-bit data bus width then the first bit and the second bit for each of the input ports are stored in the first slice of the common memory. Since the switch uses a standard crosspoint for the slices, the operation of the switch must be a first-in, first-out (FIFO) configuration.
  • the memory address for data access must be identical at all times since the data is stored in the common memory from each of the input ports by a slice crosspoint in a time-sliced manner. Therefore, it is not possible to switch the cells to alternating output ports, which limits the applications for switches using this architecture.
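The bit-sliced storage of this prior art can be illustrated with a short behavioral sketch (assumed names and a software model, not the patented circuit): each port's byte is split into two-bit groups, and group k from every port lands in slice k of the common memory.

```python
# Behavioral sketch of the prior-art bit-sliced memory: each input port
# has an 8-bit bus, the memory is cut into four slices, and bits 2k and
# 2k+1 of every port's byte are stored in slice k.

BITS_PER_SLICE = 2
NUM_SLICES = 8 // BITS_PER_SLICE  # four slices for an 8-bit port

def slice_bytes(port_bytes):
    """Distribute one byte from each input port across the memory slices."""
    slices = [[] for _ in range(NUM_SLICES)]
    for byte in port_bytes:
        for k in range(NUM_SLICES):
            group = (byte >> (k * BITS_PER_SLICE)) & 0b11
            slices[k].append(group)  # slice k holds bits 2k..2k+1 of each port
    return slices

# One byte arriving on each of four input ports:
slices = slice_bytes([0b10110010, 0b00001111, 0b11110000, 0b01010101])
assert slices[0] == [0b10, 0b11, 0b00, 0b01]  # lowest two bits of each byte
```

Because every slice must be written and read at the same address across all ports, the switch behaves as a FIFO, which is the limitation the present invention seeks to remove.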
  • variable-length packet can be transformed into multiple fixed-length cells, and the fixed-length cells can be stored in a shared memory before switching.
  • shared memory architecture with traditional fixed-length cell switches is generally known, present systems require simultaneous storage of an entire cell in memory. Even in present systems that split cells into words, each of the words must be simultaneously stored in the shared memory.
  • U.S. Patent 5,905,725 in which a shared memory is divided into multiple memory banks. Each memory bank stores an entire fixed-length cell that is transformed from the variable-length packet during one cell length period. The succeeding fixed-length cells are stored in successive memory banks during corresponding successive cell length periods. Since the switch must store each whole fixed-length cell into the shared memory, a cell length period is required before saving the cell.
  • the switch has a substantial queuing delay for switching a variable-length data packet because a cell cannot be switched until it is first stored in the shared memory.
  • the maximum length of a packet for switching is restricted by the number of memory banks. For example, if the number of memory banks in the shared memory is eight then the packet's maximum length is limited to eight cells.
  • a second object of the present invention is reducing delays in a shared memory switch by simultaneously transferring data words from multiple input ports into a sliced shared memory.
  • a third object of the present invention is permitting data words to be switched to an output port immediately upon being stored in the sliced shared memory by sequentially storing the words.
  • a pipelined switch fabric device has a plurality of memory units having a plurality of input ports and at least one output port. Each memory unit has a multiplexer for sequentially receiving sliced words from at least one of the input ports, a buffer for temporarily storing the sliced words, and a de-multiplexer for transmitting the sliced words to at least one output port.
  • the switch fabric device has a switch controller for selecting each memory unit to receive each sliced word and for selecting at least one output port for each memory unit to send each sliced word.
  • a method of switching segmented data packets or fixed-length cells through a sliced shared memory where the shared memory has a plurality of memory units between a plurality of input ports and at least one output port.
  • the steps include sequentially receiving segmented data packets or fixed-length cells from the input ports and slicing the fixed-length cells into words.
  • An input sequence for the words is scheduled according to the order in which the words are sliced and the input port at which data packets are received, and each word is assigned a port-slice identifier.
  • the words are sequentially transferred to each of the memory units according to the scheduled input sequence and stored with known port-slice identifiers in the sliced shared memory.
  • An output sequence of the words is scheduled according to their port-slice identifier and a variable switching logic, and the output sequence includes at least one output port and a merging order.
  • the words are sequentially output from each of the memory units to at least one output port according to the scheduled output sequence, thereby forming output cells.
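The steps above can be sketched as a minimal behavioral model (in Python, with illustrative names; the patent describes hardware, not software): each word receives a (port, slice) identifier, is stored round-robin across the memory units, and is read back in slice order to rebuild the output cell.

```python
# Behavioral sketch of the switching method: slice a cell into words,
# tag each word with a (port, slice) identifier, store the words across
# memory units (one word per clock), then read them back in slice order.

def switch_cell(cell_words, in_port, num_units):
    memory = [dict() for _ in range(num_units)]   # one buffer per memory unit
    # Input sequence: word i goes to memory unit i, keyed by its
    # port-slice identifier.
    for i, word in enumerate(cell_words):
        memory[i % num_units][(in_port, i)] = word
    # Output sequence: read words back by port-slice identifier,
    # merging them into an output cell in slicing order.
    return [memory[i % num_units][(in_port, i)] for i in range(len(cell_words))]

assert switch_cell(["w0", "w1", "w2", "w3"], in_port=0, num_units=4) == ["w0", "w1", "w2", "w3"]
```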
  • Figure 1 illustrates a block diagram of a pipelined switch fabric device between a single input port and a single output port
  • Figure 2 illustrates a block diagram of a pipelined switch fabric device between eight input ports and eight output ports
  • Figure 3 illustrates a detailed block diagram of a switch input control module according to the invention illustrated in Figure 2;
  • Figure 4 illustrates a detailed block diagram of a pipelined switch element module according to the invention illustrated in Figure 2;
  • Figure 5 illustrates a detailed block diagram of a switch output control module according to the invention illustrated in Figure 2.
  • Figure 1 illustrates a first embodiment of the present invention for a pipelined switch 10.
  • the switch is located between a single input port 12 and a single output port 14.
  • a fixed-length cell 16 (or segmented data packet) is sliced into multiple words 18 before entering the pipelined and sliced shared memory architecture 20.
  • the shared memory 22 is sliced into buffers 24 so that each buffer in the sliced shared memory can store one word at one clock cycle period from an input port, and the stored word can be transmitted to the output port at one clock cycle period.
  • the pipelined switch fabric has a plurality of memory units 26 where each memory unit includes one of the buffers, a multiplexer 28, and a de-multiplexer 30.
  • a switch controller 32 alternately selects the memory units that will receive the words from the input port and send the words to the output port.
  • each word is based on one clock cycle of the switch, and the input port can have a variable width
  • the size of the word may be a byte, a 16-bit word, a 32-bit word, or even larger depending on the width of the input port.
  • a word according to the present invention is more than one bit.
  • the buffers sequentially store the words sliced from an incoming cell.
  • the cell length is N words long 34, resulting in a minimum time for slicing the cell of at least N clock periods.
  • the sliced word can be written to each buffer in the sliced shared memory at each one clock cycle period.
  • the cell's first word 36 is stored in the first buffer 38 at the first clock cycle period.
  • the second word 40 is stored in the second buffer 42.
  • the switch controller sends a control signal 44 to select the first multiplexer 46 in the first memory unit 48.
  • Each multiplexer can have an associated delay element that delays the control signal it receives by at least one clock period.
  • the delayed control signal 50 is sent to the second multiplexer 52, which then receives the second word that is stored in the second buffer.
  • the entire cell has been stored completely in the sliced shared memory.
  • the combination of the multiplexer and an associated delay element is one example of a packet slicer that could be used to slice the incoming cell.
  • although each word of the entire cell is stored in a buffer for at least one clock cycle period during the N clock cycle period, the pipelined architecture does not require that each word be stored in a buffer for the entire N clock cycle period. Therefore, each word in a buffer is available to be sent to the output port through the de-multiplexer as soon as the word is stored in the buffer. This immediate output availability greatly reduces queuing delay compared to traditional shared memory architectures that use one whole cell length period for storing data.
  • the words form an output cell 54 in the same order as the words are sliced from the incoming cell.
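The clock-by-clock behavior described above can be sketched as follows (an illustrative model, assuming one word per clock and one clock of delay per stage of the control-signal chain):

```python
# Word k is written to buffer k at clock k because the control signal
# reaching memory unit k has passed through k delay elements.

N = 4                              # words per cell
buffers = [None] * N               # one buffer per memory unit
schedule = []                      # (clock, memory unit) pairs

words = [f"word{i}" for i in range(N)]
for clock in range(N):
    unit = clock                   # delay chain enables unit k at clock k
    buffers[unit] = words[clock]
    schedule.append((clock, unit))
    # buffers[unit] is already available to the de-multiplexer on the
    # next clock; the switch need not wait for the rest of the cell.

assert schedule == [(0, 0), (1, 1), (2, 2), (3, 3)]
```

Reading the buffers back in the same order reproduces the output cell with the words in their original slicing sequence.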
  • a preferred embodiment of the pipelined switch fabric device 100 has eight input ports 102 and eight output ports 104.
  • the pipelined switch fabric has a switch input controller (XIC) 106, and each of the input controllers has a packet handler 108 communicating with the respective input port.
  • Each packet handler sequentially receives segmented data packets, or fixed-length cells 110, from the respective input port.
  • At least one packet slicer 112 communicates with the input controller for each input port, and each fixed-length cell is sliced into a pre-determined number of words 114 in a sequential order 116 by the packet slicers.
  • each packet slicer can be formed from a combination of a multiplexer with an associated delay element.
  • a routing controller 118 schedules an input sequence 120 for the words according to the slicing order of the words and the input port at which the data packet is received, and the routing controller assigns a respective port-slice identifier 122 to each word.
  • the routing controller may be a part of a larger switch controller 124 that also has a memory controller 126.
  • the pipelined switch fabric has pipelined switch elements 128 that communicate with the packet slicers and receive words therefrom. Generally, the number of words sliced for each fixed-length cell is equal to or less than the number of pipelined switch elements.
  • Each of the switch elements has a buffer 130 that is preferably formed from a sliced shared memory 132. The buffers sequentially store the words with known port-slice identifiers, and each switch element has the ability to store a sliced word in a period of one clock cycle.
  • the switch elements also communicate with switch output controllers 134, and each switch element has the ability to transmit a sliced word in a period of one clock cycle.
  • the output controllers sequentially receive the words stored in the switch elements according to the known port-slice identifiers for the words.
  • the routing controller schedules an output sequence 136 of the words transferred between the switch elements and the output controllers according to the known port-slice identifiers and a variable switching logic 138.
  • Each output controller has an output packet processor 140 for merging the words into output cells 142 and transmitting the output cells onto at least one output port. Similar to the input sequence, the output sequence includes at least one output port and a merging order 144.
  • the words are sequentially output from each of the memory units to at least one output port according to the scheduled output sequence, thereby forming output cells.
  • the 8x8 pipelined switch fabric can provide an OC-48 ATM/POS switching solution (OC: Optical Carrier, ATM: Asynchronous Transfer Mode, POS: Packet Over SONET, SONET: Synchronous Optical Network) with a 40-Gbit per second (Gbit/sec) switching rate.
  • a 2.5G input line using a 32-bit data stream can be pumped into the 8x8 pipelined switch fabric running at a speed of 100 MHz.
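As a back-of-the-envelope check of these figures (my arithmetic, not text from the patent): a 32-bit bus clocked at 100 MHz carries 3.2 Gbit/s per port, comfortably above a 2.5 Gbit/s (OC-48) line rate, and eight ports counted in both directions yield the quoted 40 Gbit/s aggregate.

```python
# Sanity-check the quoted rates. Assumed interpretation: the 40 Gbit/s
# figure counts eight 2.5G ports in both the input and output directions.

per_port_bus = 32 * 100e6          # 32 bits per clock at 100 MHz = 3.2 Gbit/s
line_rate = 2.5e9                  # OC-48 line, roughly 2.5 Gbit/s
assert per_port_bus > line_rate    # the internal bus keeps up with the line

aggregate = 8 * line_rate * 2      # 8 ports, input + output directions
assert aggregate == 40e9           # the quoted 40 Gbit/s switching rate
```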
  • Internal cells entering and leaving the I/O (Input/Output) of the chip carry ATM cells and segmented packets.
  • the pipelined switch fabric also provides asynchronous interfaces for the input ports and can be operated without system synchronization, supporting asynchronous SOC (Start of Cell) inputs and back-to-back cell inputs to increase the speed of the switch fabric.
  • the switch fabric uses a weighted round-robin algorithm for scheduling and a queuing delay reduction scheme.
  • Multi-casting is supported using a single memory location.
  • the buffer is fully shared across all queues so that
  • the cell length and routing tag location can be programmed by the microprocessor through the CPU interface.
  • Programmable cell lengths include 14, 15 and 16, with 8 bytes allocated for the routing tag.
  • the pipelined switch fabric also provides individual 32-bit statistic cell counters. It has input normal cell counters for each input port and input discard counters for each input port.
  • A microprocessor interface is provided with automatic monitoring functions.
  • Self-protection algorithms are implemented by monitoring the input cells to protect the switch from protocol-violating cells.
  • the switch input controller (XIC) 106 contains two basic sub-blocks: an input packet processor and a packet handler.
  • the input packet processor is responsible for synchronizing incoming asynchronous cells
  • the individual input blocks of the input packet processor have been designed to accommodate their own incoming clocks so that no system synchronization is necessary at this input stage.
  • the packet slicers 116 may be formed as a part of the switch input controllers or individually as a multiplexer with a delay element.
  • the input packet processor in the switch input controller can also check the interval of a SOC (Start Of Cell) when there is a cell enable signal.
  • the input packet processor consists of a FIFO and the SOC interval is checked using the address of the FIFO that receives incoming cells.
  • the address of the FIFO consists of a lower address for maintaining the proper sequence and an upper address for counting each time the lower address circulates its value.
  • the lower address circulates its pre-defined values, such as from 0 to 15 in the preferred embodiment.
  • the input packet processor considers that there is an error in the SOC or cell enable signal, and discards the cell from the FIFO.
  • For example, given a pre-defined SOC interval of 16, the lower address expects a second SOC at the next lower address value of 0. But if the second SOC is received at the lower address value of 14, the input packet processor considers that there is an error in the SOC or cell enable signal, discards the cell from the FIFO, and resets the lower address value to its initial value at the same time. This operation prevents the link of the shared memory from being broken by the acceptance of protocol-violating cells.
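The lower/upper FIFO-address check described above can be sketched like this (a simplified software model with assumed names; the actual logic also handles long cells and missing SOCs):

```python
SOC_INTERVAL = 16                  # pre-defined lower-address cycle, 0..15

class SocChecker:
    """Accepts words and flags cells whose SOC arrives at the wrong place."""

    def __init__(self):
        self.lower = 0             # position within the current cell
        self.upper = 0             # counts lower-address circulations

    def on_word(self, soc):
        """Return True if the word is accepted, False if the cell is discarded."""
        if soc and self.lower != 0:
            self.lower = 0         # early SOC: discard cell, reset lower address
            return False
        self.lower += 1
        if self.lower == SOC_INTERVAL:
            self.lower = 0         # lower address circulates 0..15
            self.upper += 1        # upper address counts each circulation
        return True
```

For a well-formed 16-word cell the checker accepts every word and the upper address advances by one; an SOC arriving at a nonzero lower address is rejected, mirroring the example in the text.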
  • the input packet processor knows the finite interval of the incoming SOC using the address of the FIFO and treats a cell as a long cell if there is no SOC where it is supposed to be. For a long cell, the input packet processor only switches one cell region that is already defined by the CPU, discarding the remainder of the cell and preventing a malfunction. For example, if an incoming cell has an SOC interval of one and one half (1½) finite intervals, the incoming cell of the first finite interval will be saved in the FIFO, but the remaining one-half cell will be discarded from the FIFO, and the input packet processor resets the lower address value to its initial value at the same time.
  • Another example is that if an incoming cell has two (2) finite intervals with the first SOC in the right place but the second SOC missing, the input packet processor will save the entire cell in such a manner that the second cell will be saved at the next upper address after the first cell has been saved. If an incoming cell has three (3) finite intervals with the first SOC in the right place but the second and the third SOCs missing, the first cell and the third cell will be saved in the FIFO, discarding the second cell only.
  • the 8-byte routing tag location is programmable by the CPU, providing flexible scalability of the pipelined switch fabric. If the routing tag contains a null set (all zeros), then the routing tag is invalid and the input packet processor cannot specify a port to switch the current cell. In such a case, the input packet processor discards the cell and increments the discarded cell counter by one. When the FIFOs of the input packet processor are full, it generates back pressure to the individual ports to prevent further incoming cells.
  • the packet handler has a temporary storage for the words before they are moved into the shared memory.
  • the packet handler accepts successive words from the input packet processor while outputting the preceding words to the shared memory, thereby providing back-to-back input functionality.
  • the buffer in each pipelined switch element (PXE) 128 sequentially saves sliced words from the switch input controllers into the shared memory location according to the known port-slice identifier for each word.
  • the switch element reads the words from the shared memory and transmits them to switch output controllers.
  • the buffer saves the sliced words from switch input controllers into the shared memory according to the write enable 131, mux port data 133, and write address 135 produced by the switch controller.
  • the buffer uses read enable 137, demux port data 139, and read address 141 from the switch controller.
  • the switch output controller (XOC) 134 has an output packet processor (OPP) 143 that temporarily saves words before outputting the output cells. It can also hold output cells when it receives a back pressure from the external device that is connected to the output port.
  • the output packet processor can suspend words from the shared memory when it is full.
  • the output packet processor also provides a clock that is fully synchronized to the system clock of the external devices, and increments the output cell counter by one as each cell is transmitted.
  • the switch controller has a memory controller (MEMCTL) and a routing controller (ROUTCTL) that work very closely together to manage the queue of cells passing through the pipelined switching fabric.
  • the memory controller has a queue start, a queue end, and a queue status for each of the output ports in order to form a single queue, and uses a single-port RAM to maintain the links between them. Another single-port RAM is used to save empty memory pointers. A new queue is created when there is no cell at the output, and the link is established and saved in the proper output sequence. The output cell is scheduled using the queue status.
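A software sketch of this link-memory arrangement (my reading of the description, with invented names): one RAM holds next-pointers that chain the cells of each output port's queue, a second RAM holds the empty-memory pointers, and each port tracks its queue start, queue end, and status.

```python
SIZE = 16
link = [0] * SIZE                 # next-pointer RAM maintaining the links
free_list = list(range(SIZE))     # empty-memory-pointer RAM

class PortQueue:
    """Per-output-port queue built from the shared link memory."""

    def __init__(self):
        self.start = None         # queue start (None = empty, the queue status)
        self.end = None           # queue end

    def enqueue(self):
        addr = free_list.pop(0)   # take an empty memory pointer
        if self.start is None:
            self.start = addr     # no cell at the output: create a new queue
        else:
            link[self.end] = addr # extend the link from the previous end
        self.end = addr
        return addr

    def dequeue(self):
        addr = self.start
        self.start = link[addr] if addr != self.end else None
        free_list.append(addr)    # return the pointer to the free pool
        return addr
```

Scheduling a word for output only requires following the queue-start pointer, which is why output can begin as soon as the first word of a queue exists.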
  • In order to reduce the queuing delays of cells moving in and out of the shared memory, the memory controller immediately schedules and outputs the word as soon as a queue has been generated.
  • the memory controller rejects input cells from the input controllers while preserving the link.
  • the memory controller reports the full/empty status of the shared memory to the input packet processors, and prevents further incoming words when the shared memory is full.
  • the memory controller also supports input packet processors in producing back pressure to the respective input ports. When saving a word for multi-casting, the memory controller only needs to use one memory location since the input routing tag is saved in the link memory.
  • In scheduling the communication of words from the input controllers to the shared memory, the routing controller preferably uses a weighted round-robin algorithm. In particular, the routing controller sends an enable signal to an input packet processor to save a word to a respective pipelined switch element. For packet slicers that are a combination of a multiplexer and a delay element, the routing controller sends an input control signal to at least one of the multiplexers. If another request to save a cell is received at the input packet processor while it is currently storing a cell, the routing controller will not send an enable signal until the cell is stored, thereby preventing the loss of the link. The routing controller also schedules the sequential transfer of the stored words from the shared memory to the output packet processors.
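As an illustration of weighted round-robin scheduling in general (a generic interleaved WRR, not necessarily the exact algorithm used by the routing controller): each input port is granted a number of slots per round proportional to its configured weight.

```python
def wrr_round(weights):
    """weights: {port: weight}. Return one round of port grants."""
    remaining = dict(weights)
    order = []
    while any(remaining.values()):
        for port in remaining:         # visit ports in turn, interleaving grants
            if remaining[port] > 0:
                order.append(port)
                remaining[port] -= 1
    return order

assert wrr_round({0: 2, 1: 1}) == [0, 1, 0]   # port 0 gets twice the slots
```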
  • the routing controller refers to the queue start address and the queue status signal to produce a single output address for each port.
  • the routing controller uses the saved routing tag in finding an idle port. The routing controller keeps deleting the output port bit from the input routing tag, for each bit corresponding to the output port that has received a multi-casting word until the routing tag actually reaches all zero bits. In this manner, each multi-casting word can be transmitted from a single memory.
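This bit-clearing walk over the routing tag can be sketched as follows (assumed names; the idle-port policy here is simply "lowest set bit" for illustration, whereas the real controller picks an idle port):

```python
def multicast(word, routing_tag, pick_idle_port):
    """Deliver one stored word to every output port whose tag bit is set."""
    deliveries = []
    while routing_tag != 0:            # until the tag reaches all zero bits
        port = pick_idle_port(routing_tag)
        deliveries.append((port, word))  # transmit from one memory location
        routing_tag &= ~(1 << port)      # delete this output-port bit
    return deliveries

# Stand-in idle-port policy: lowest set bit of the tag.
lowest = lambda tag: (tag & -tag).bit_length() - 1
assert multicast("w", 0b0101, lowest) == [(0, "w"), (2, "w")]
```

Only one copy of the word is ever stored; the tag alone tracks which destinations still await delivery.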
  • the output scheduling can start as soon as a single word is saved in the shared memory which reduces queuing delay in the switch
  • the input statistic (IST) 148 gathers the input normal cell count enable signal and input discard cell count enable signal from the input controllers, and increments the count as each corresponding enable signal is received. It uses IST memory to count both normal and discarded cells. All the corresponding counter values are cleared after the CPU accesses the counter. For each output cell count enable signal received from the output controllers, the output statistic (OST) 150 is incremented. The output statistic uses one OST memory for counting output cells. The value of the counter is cleared after the CPU accesses the output cell counter.
  • the CPU interface (CPU_IF) provides a microprocessor interface with device control, configuration and monitoring.
  • the interface also provides automatic monitoring functions.
  • the interface is capable of operating in either an interrupt driven or polled-mode configuration.
  • the routing tag location and the cell length can be pre-programmed according to user definitions via the interface.
  • the value of the input normal cell counter, discard cell counter and output cell counter can also be selected and transmitted to the CPU.

Abstract

A pipelined switch fabric device has a plurality of memory units between a plurality of input ports and at least one output port. Each memory unit has a multiplexer for sequentially receiving sliced words from at least one of the input ports, a buffer for temporarily storing the sliced words, and a de-multiplexer for transmitting the sliced words to at least one output port. The switch fabric device has a switch controller for selecting each of the memory units to receive each of the sliced words and for selecting at least one output port for each of the memory units to send each of the sliced words.

Description

TITLE OF THE INVENTION Pipelined and Sliced Shared-Memory Switch
CROSS-REFERENCE TO RELATED APPLICATIONS Not Applicable.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT Not Applicable.
BACKGROUND OF THE INVENTION 1. FIELD OF THE INVENTION
This invention relates generally to data packet switching and, more particularly, to a device and method for switching segmented packets and fixed-length cells through a sliced shared memory using a pipelined architecture.
2. DESCRIPTION OF RELATED ART
There are many types of data switching architectures, including switches using a shared memory fabric. To share a common memory among all ports in a traditional shared memory switch, the operation speed of accessing the common memory is accelerated by expanding its data bus width. Traditional shared memory switches, such as a 4x4 switch (four input ports x four output ports) with an eight-bit data bus width for each of the eight ports, are described in U.S. Patents 5,309,432 and 5,603,064. Aggregating eight bytes of incoming data from each port before accessing the common memory forms one parallel, eight-byte (64-bit) data path. The operation speed of accessing the common memory is accelerated by eight times when one parallel, eight-byte data bus width is used to access the common memory instead of a one-byte data bus width. Therefore, all eight ports can share the common memory by time-sharing using a single time-division multiplexer.
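The eight-fold speed-up follows directly from the bus-width ratio; a trivial arithmetic check (mine, not from the patent):

```python
# Aggregating eight one-byte arrivals into one 64-bit word means the
# common memory is touched once per eight byte-times instead of every
# byte-time, so one time-division multiplexer can serve all eight ports.

bus_width_bits = 64
port_width_bits = 8
speedup = bus_width_bits // port_width_bits
assert speedup == 8
```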
However, the complexity of the multiplexer increases geometrically as the number of ports and the bus width increase. For example, a 4x1 multiplexer with a 64-bit data bus width per input port and output port is required for a 4x4 shared memory switch having an eight-bit data bus width per port. The complexity and the delay time are considerably large when the multiplexer with a 64-bit data bus width is implemented in hardware. As the complexity of the multiplexer increases, the delay time associated with transferring data across the multiplexer also increases, especially at high clock speeds. A sliced shared memory architecture is also found in the prior art, such as U.S. Patent
6,031,842. Multiple input and output ports share a common memory in the switch. The common memory is sliced into a division for each input port, and some bits of input data from each input port are shared by the common memory. For example, if an input port has an 8-bit data bus width, then the first bit and the second bit for each of the input ports are stored in the first slice of the common memory. Since the switch uses a standard crosspoint for the slices, the operation of the switch must be a first-in, first-out (FIFO) configuration. The memory address for data access must be identical at all times since the data is stored in the common memory from each of the input ports by a slice crosspoint in a time-sliced manner. Therefore, it is not possible to switch the cells to alternating output ports, which limits the applications for switches using this architecture.
It is also well known that a variable-length packet can be transformed into multiple fixed-length cells, and the fixed-length cells can be stored in a shared memory before switching. Although shared memory architecture with traditional fixed-length cell switches is generally known, present systems require simultaneous storage of an entire cell in memory. Even in present systems that split cells into words, each of the words must be simultaneously stored in the shared memory. One particular design is described in U.S. Patent 5,905,725, in which a shared memory is divided into multiple memory banks. Each memory bank stores an entire fixed-length cell that is transformed from the variable-length packet during one cell length period. The succeeding fixed-length cells are stored in successive memory banks during corresponding successive cell length periods. Since the switch must store each whole fixed-length cell into the shared memory, a cell length period is required before saving the cell. Therefore, the switch has a substantial queuing delay for switching a variable-length data packet because a cell cannot be switched until it is first stored in the shared memory. Additionally, according to the prior art, the maximum length of a packet for switching is restricted by the number of memory banks. For example, if the number of memory banks in the shared memory is eight then the packet's maximum length is limited to eight cells.
BRIEF SUMMARY OF THE INVENTION It is in view of the above problems that the present invention was developed. Among the objects and features of the present invention is an improved shared memory switch that eliminates delays associated with converting a serial data stream into parallel data stream before storing the data in a sliced shared memory.
A second object of the present invention is reducing delays in a shared memory switch by simultaneously transferring data words from multiple input ports into a sliced shared memory.
A third object of the present invention is permitting data words to be switched to an output port immediately upon being stored in the sliced shared memory by sequentially storing the words. In one aspect of the present invention, a pipelined switch fabric device has a plurality of memory units having a plurality of input ports and at least one output port. Each memory unit has a multiplexer for sequentially receiving sliced words from at least one of the input ports, a buffer for temporarily storing the sliced words, and a de-multiplexer for transmitting the sliced words to at least one output port. The switch fabric device has a switch controller for selecting each memory unit to receive each sliced word and for selecting at least one output port for each memory unit to send each sliced word.
In a second aspect of the invention, a method of switching segmented data packets or fixed-length cells through a sliced shared memory is described, where the shared memory has a plurality of memory units between a plurality of input ports and at least one output port. The steps include sequentially receiving segmented data packets or fixed-length cells from the input ports and slicing the fixed-length cells into words. An input sequence for the words is scheduled according to the order in which the words are sliced and the input port at which data packets are received, and each word is assigned a port-slice identifier. The words are sequentially transferred to each of the memory units according to the scheduled input sequence and stored with known port-slice identifiers in the sliced shared memory. An output sequence of the words is scheduled according to their port-slice identifier and a variable switching logic, and the output sequence includes at least one output port and a merging order. The words are sequentially output from each of the memory units to at least one output port according to the scheduled output sequence, thereby forming output cells.
Further features and advantages of the present invention, as well as the structure and operation of various embodiments of the present invention, are described in detail below with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and form a part of the specification, illustrate the embodiments of the present invention and, together with the description, serve to explain the principles of the invention. In the drawings:

Figure 1 illustrates a block diagram of a pipelined switch fabric device between a single input port and a single output port;
Figure 2 illustrates a block diagram of a pipelined switch fabric device between eight input ports and eight output ports;
Figure 3 illustrates a detailed block diagram of a switch input control module according to the invention illustrated in Figure 2;
Figure 4 illustrates a detailed block diagram of a pipelined switch element module according to the invention illustrated in Figure 2; and
Figure 5 illustrates a detailed block diagram of a switch output control module according to the invention illustrated in Figure 2.
DETAILED DESCRIPTION OF THE INVENTION

Referring to the accompanying drawings, in which like reference numbers indicate like elements, Figure 1 illustrates a first embodiment of the present invention for a pipelined switch 10. In this embodiment, the switch is located between a single input port 12 and a single output port 14. A fixed-length cell 16 (or segmented data packet) is sliced into multiple words 18 before entering the pipelined and sliced shared memory architecture 20. The shared memory 22 is sliced into buffers 24 so that each buffer in the sliced shared memory can store one word from an input port in one clock cycle period, and the stored word can be transmitted to the output port in one clock cycle period. The pipelined switch fabric has a plurality of memory units 26, where each memory unit includes one of the buffers, a multiplexer 28, and a de-multiplexer 30. A switch controller 32 alternately selects among the memory units that will receive the words from the input port and send the words to the output port.
Since each word is based on one clock cycle of the switch and the input port can have a variable width, the size of the word may be a byte, a 16-bit word, a 32-bit word, or even larger depending on the width of the input port. In any event, a word according to the present invention is more than one bit. Each word that is stored in an input FIFO is transmitted to the sliced shared memory while the incoming word is being stored in the input FIFO. This means that as soon as the first word of a cell is stored in the sliced shared memory, it is ready to be switched to the destination output port. With this method, the queuing delay that exists in the traditional shared memory architecture, switching a cell after one entire cell has been stored in the shared memory, is significantly reduced.
Referring again to Figure 1, the buffers sequentially store the words sliced from an incoming cell. In this shared memory architecture, the cell length is N words 34, resulting in a minimum time for slicing the cell of at least N clock periods. A sliced word can be written to each buffer in the sliced shared memory in each one clock cycle period. The cell's first word 36 is stored in the first buffer 38 at the first clock cycle period. At the consecutive clock cycle period, the second word 40 is stored in the second buffer 42. To perform this sequence, the switch controller sends a control signal 44 to select the first multiplexer 46 in the first memory unit 48. Each multiplexer can have an associated delay element that delays the control signal it receives by at least one clock period. Therefore, the delayed control signal 50 is sent to the second multiplexer 52, which then receives the second word that is stored in the second buffer. When at least N clock cycle periods have transpired, the entire cell has been stored completely in the sliced shared memory. The combination of a multiplexer and an associated delay element is one example of a packet slicer that could be used to slice the incoming cell.
Even though each word of the entire cell is stored in a buffer for at least one clock cycle period during the N clock cycle periods, the pipelined architecture does not require that each word be stored in a buffer for the entire N clock cycle period. Therefore, each word in a buffer is available to be sent to the output port through the de-multiplexer as soon as the word is stored in the buffer. This immediate output availability greatly reduces queuing delay compared to traditional shared memory architectures that use one whole cell length period for storing data. Using FIFO sequencing, the words form an output cell 54 in the same order as the words are sliced from the incoming cell.
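The pipelined write sequence described above can be modeled with a short sketch. This is an illustration only, not the hardware itself: all names are invented, and it assumes an N-word cell is written one word per clock cycle to successive buffer slices, with each word becoming switchable as soon as it lands.

```python
def pipeline_store(cell_words, buffers):
    """Model of the sliced shared-memory write pipeline (illustrative only).

    Word k of the cell is written to buffer k at clock cycle k, as if the
    multiplexer select signal were passed through a one-clock delay element.
    Each word becomes eligible for output in the same cycle it is written,
    rather than after the whole N-word cell has been stored.
    """
    switchable = []                            # words already available to the output port
    for clock, word in enumerate(cell_words):
        buffers[clock % len(buffers)] = word   # one buffer write per clock cycle
        switchable.append(word)                # immediately ready for the de-multiplexer
    return switchable

# A 4-word cell (N = 4) fills four buffer slices in four clock cycles.
buffers = [None] * 4
ready = pipeline_store(["w0", "w1", "w2", "w3"], buffers)
```

In a conventional whole-cell shared memory, nothing would be switchable until all N words had been stored; in this model the first word is eligible after a single clock cycle.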
Referring to Figures 2-5, a preferred embodiment of the pipelined switch fabric device 100 has eight input ports 102 and eight output ports 104. For each input port, the pipelined switch fabric has a switch input controller (XIC) 106, and each of the input controllers has a packet handler 108 communicating with the respective input port. Each packet handler sequentially receives segmented data packets, or fixed-length cells 110, from the respective input port. At least one packet slicer 112 communicates with the input controller for each input port, and each fixed-length cell is sliced into a pre-determined number of words 114 in a sequential order 116 by the packet slicers. As in the first embodiment, each packet slicer can be formed from a combination of a multiplexer with an associated delay element. A routing controller 118 schedules an input sequence 120 for the words according to the slicing order of the words and the input port at which the data packet is received, and the routing controller assigns a respective port-slice identifier 122 to each word. The routing controller may be a part of a larger switch controller 124 that also has a memory controller 126.
The pipelined switch fabric has pipelined switch elements 128 that communicate with the packet slicers and receive words therefrom. Generally, the number of words sliced for each fixed-length cell is equal to or less than the number of pipelined switch elements. Each of the switch elements has a buffer 130 that is preferably formed from a sliced shared memory 132. The buffers sequentially store the words with known port-slice identifiers, and each switch element has the ability to store a sliced word in a period of one clock cycle. The switch elements also communicate with switch output controllers 134, and each switch element has the ability to transmit a sliced word in a period of one clock cycle.
The output controllers sequentially receive the words stored in the switch elements according to the known port-slice identifiers for the words. The routing controller schedules an output sequence 136 of the words transferred between the switch elements and the output controllers according to the known port-slice identifiers and a variable switching logic 138. Each output controller has an output packet processor 140 for merging the words into output cells 142 and transmitting the output cells onto at least one output port. Similar to the input sequence, the output sequence includes at least one output port and a merging order 144. As a result, the words are sequentially output from each of the memory units to at least one output port according to the scheduled output sequence, thereby forming output cells.

In the particular embodiment, the 8x8 pipelined switch fabric can provide an OC-48 ATM/POS switching solution (OC: Optical Carrier, ATM: Asynchronous Transfer Mode, POS: Packet Over SONET, SONET: Synchronous Optical Network) with a 40 Gbits per second (Gbits/sec) switching rate. As such, a 2.5G input line using a 32-bit data stream can be pumped into the 8x8 pipelined switch fabric running at a speed of 100 MHz. Internal cells entering and leaving the I/O (Input/Output) of the chip carry ATM cells and segmented packets. The pipelined switch fabric also provides asynchronous interfaces for the input ports and can be operated without system synchronization, supporting asynchronous SOC (Start of Cell) inputs and back-to-back cell inputs for increasing the speed of the switch fabric. The pipelined switch
fabric uses a weighted round-robin algorithm for scheduling and a queuing delay reduction mechanism to reduce queuing delays for each queue. Multi-casting is supported using single-copy storage with different departure times based on the status of the output ports, thereby forming a plurality of identical output cells. The buffer is fully shared across all queues, so no external memory is required by the pipelined switch fabric, and an individual back-pressure mechanism is provided for each of the input ports and egress ports to prevent the switch fabric cell buffer from overflowing.
The cell length and routing tag location can be programmed by a microprocessor through a CPU interface 146. Programmable cell lengths include 14, 15 and 16, with 8 bytes allocated for the routing tag location. The pipelined switch fabric also provides individual 32-bit statistic cell counters: input normal cell counters for each input port, input discard counters for protocol-violated input cells, and output normal cell counters for each output port. All of the statistic cell counters are automatically cleared after being accessed by the CPU. A generic 16-bit microprocessor interface is provided with automatic monitoring functions.
Self-protection algorithms are implemented by monitoring the input cells to protect against switch fabric malfunction, which may be caused by incoming protocol-violated cells. The switch fabric is self-protected from incoming protocol violations, especially a corrupted SOC (Start of Cell), cell enable signal, or routing tag. It discards short cells at the input stage and adjusts the cell length of long cells. It also discards cells that have a missing routing tag field.
Referring to Figure 3, the switch input controller (XIC) 106 contains two basic sub-modules, the input packet processor module (IPP) 109 and the packet handling module (PH) 108. The input packet processor is responsible for synchronizing incoming asynchronous cells to the internal system clock. The individual input blocks of the input packet processor have been designed to accommodate their own incoming clocks so that no system synchronization is necessary at this input stage. The packet slicers 116 may be formed as a part of the switch input controllers or individually as a multiplexer with a delay element.
The input packet processor in the switch input controller can also check the interval of a SOC (Start Of Cell) when there is a cell enable signal. The input packet processor consists of a FIFO, and the SOC interval is checked using the address of the FIFO that receives incoming cells. In the preferred embodiment, the address of the FIFO consists of a lower address for maintaining the proper sequence and an upper address for counting each time the lower address circulates its value. The lower address circulates through its pre-defined values, such as from 0 to 15 in the preferred embodiment. When the interval of the checked SOC is shorter than a pre-defined SOC interval, the input packet processor considers that there is an error in the SOC or cell enable signal, and discards the cell from the FIFO. For example, given a pre-defined SOC interval of 16, the lower address expects a second SOC at the next lower address value of 0. But if the second SOC is received at the lower address value of 14, the input packet processor considers that there is an error in the SOC or cell enable signal, discards the cell from the FIFO, and at the same time resets the lower address to its initial value. This operation prevents the link of the shared memory from being broken by accepting protocol-violated cells.
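The early-SOC check described above can be sketched in software. The model below is purely illustrative (the names are invented, and the real logic is hardware): a lower address circulates 0..15, a SOC arriving anywhere but lower address 0 is treated as a protocol error, and the partially received cell is discarded while the lower address resets to its initial value.

```python
SOC_INTERVAL = 16  # pre-defined SOC interval: words per cell (assumed value)

def check_soc(events):
    """Accept or discard cells based on where their SOC lands.

    events: iterable of (word, is_soc) pairs in arrival order.
    A SOC arriving while lower != 0 is early: the partial cell is
    discarded and the lower address is reset to its initial value.
    Returns the list of accepted cells (each a list of words).
    """
    cells, current, lower = [], [], 0
    for word, is_soc in events:
        if is_soc and lower != 0:
            current = []          # early SOC: protocol error, drop partial cell
            lower = 0             # reset the lower address to its initial value
        current.append(word)
        lower = (lower + 1) % SOC_INTERVAL
        if lower == 0:            # lower address wrapped: one full cell received
            cells.append(current)
            current = []
    return cells
```

Replaying the example from the text: a correct 16-word cell is accepted, a cell whose next SOC arrives at lower address 14 is discarded, and reception resumes cleanly with the following cell.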
The input packet processor knows the finite interval of the incoming SOC using the address of the FIFO and treats a cell as a long cell if there is no SOC where it is supposed to be. For a long cell, the input packet processor only switches the one cell region that is already defined by the CPU, discarding the remainder of the cell and preventing a malfunction. For example, if an incoming cell has a SOC interval of one and one half (1½) finite intervals, the incoming cell of the first finite interval will be saved in the FIFO, but the remaining half cell will be discarded from the FIFO, and the input packet processor resets the lower address to its initial value at the same time. As another example, if an incoming cell has two (2) finite intervals with the first SOC in the right place but the second SOC missing, the input packet processor will save the entire cell in such a manner that the second cell will be saved at the next upper address after the first cell has been saved. If an incoming cell has three (3) finite intervals with the first SOC in the right place but the second and third SOCs missing, the first cell and the third cell will be saved in the FIFO, discarding the second cell only. The 8-byte routing tag location is programmable by the CPU, providing flexible scalability of the pipelined switch fabric. If the routing tag contains a null set, all zeros (0), then the routing tag is invalid and the input packet processor cannot specify a port to switch the current cell. In such a case, the input packet processor discards the cell and increases the discarded cell counter by one. When the FIFOs of the input packet processor are full, it generates back pressure to the individual ports to prevent further incoming cells.
The packet handler has a temporary storage for the words before they are moved into the shared memory. The packet handler accepts successive words from the input packet processor while outputting the preceding words to the shared memory, thereby providing back-to-back input functionality.
Referring to Figure 4, the buffer in each pipelined switch element (PXE) 128 sequentially saves sliced words from the switch input controllers into the shared memory location according to the known port-slice identifier for each word. The switch element reads the words from the shared memory and transmits them to switch output controllers. In the preferred embodiment, the buffer saves the sliced words from switch input controllers into the shared memory according to the write enable 131, mux port data 133, and write address 135 produced by the switch controller. For the read process from the shared memory, the buffer uses read enable 137, demux port data 139, and read address 141 from the switch controller.
Referring to Figure 5, the switch output controller (XOC) 134 has an output packet processor (OPP) 143 that temporarily saves words before outputting the output cells. It can also hold output cells when it receives a back pressure from the external device that is connected to the output port. The output packet processor can suspend words from the shared memory when it is full. The output packet processor also provides a clock that is fully synchronized to the system clock of the external devices and counts up the output cell counter by one as the cell is transmitted.
The switch controller (SWCTL) has a memory controller (MEMCTL) and a routing controller (ROUTCTL) that work very closely together to manage the queue of cells passing through the pipelined switching fabric. The memory controller maintains a queue start, a queue end and a queue status for each of the output ports in order to form a single queue, and uses a single-port RAM to maintain the links between them. Another single-port RAM is used to save empty memory pointers. A new queue is created when there is no cell at the output, and the link is established and saved in the proper output sequence. The output cell is scheduled using the queue status. In order to reduce the queuing delays of cells moving in and out of the shared memory, the memory controller immediately schedules and outputs the word as soon as a queue has been generated.
When the shared memory reaches a certain threshold value, the memory controller rejects input cells from the input controllers while preserving the link. The memory controller reports the full/empty status of the shared memory to the input packet processors and prevents further incoming words when the shared memory is full. The memory controller also supports the input packet processors in producing back pressure to the respective input ports. When saving a word for multi-casting, the memory controller only needs to use one memory location, since the input routing tag is saved in the link memory.
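The link-RAM queue management described above can be sketched as a small software model. This is an illustrative analogy only, with invented names: `link` stands in for the single-port link RAM, `free_list` for the empty-pointer RAM, and a failed enqueue stands in for back pressure when the memory is full.

```python
class LinkedQueue:
    """Per-output-port FIFO kept as links in a shared RAM (illustrative)."""

    def __init__(self, size):
        self.mem = [None] * size          # shared word storage
        self.link = [None] * size         # next-pointer per memory location
        self.free_list = list(range(size))  # empty-pointer RAM
        self.start = self.end = None      # queue start / queue end

    def enqueue(self, word):
        if not self.free_list:
            return False                  # memory full: assert back pressure
        ptr = self.free_list.pop(0)
        self.mem[ptr] = word
        if self.start is None:
            self.start = ptr              # new queue: output can be scheduled now
        else:
            self.link[self.end] = ptr     # extend the link chain
        self.end = ptr
        return True

    def dequeue(self):
        if self.start is None:
            return None                   # queue status: empty
        ptr, word = self.start, self.mem[self.start]
        self.start = self.link[ptr]       # follow the link to the next word
        self.link[ptr] = None
        self.free_list.append(ptr)        # return the location to the free pool
        if self.start is None:
            self.end = None
        return word
```

Because `start` is set the moment the first word arrives, output scheduling can begin as soon as a single word is queued, which is the source of the reduced queuing delay claimed for the design.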
In scheduling the communication of words from the input controllers to the shared memory, the routing controller preferably uses a weighted round-robin algorithm. In particular, the routing controller sends an enable signal to an input packet processor to save a word to a respective pipelined switch element. For packet slicers that are a combination of a multiplexer and a delay element, the routing controller sends an input control signal to at least one of the multiplexers. If another request to save a cell is received at the input packet processor while it is currently storing a cell, the routing controller will not send an enable signal until the cell is stored, thereby preventing the loss of the link. The routing controller also schedules the sequential transfer of the stored words from the shared memory to the output packet processors. For the output cell, the routing controller refers to the queue start address and the queue status signal to produce a single output address for each port. For multi-casting operations, the routing controller uses the saved routing tag to find an idle port. The routing controller keeps deleting from the input routing tag the output port bit for each output port that has received a multi-casting word, until the routing tag reaches all zero bits. In this manner, each multi-casting word can be transmitted from a single memory location. The output scheduling can start as soon as a single word is saved in the shared memory, which reduces queuing delay in the switch.

In the preferred embodiment, the input statistic (IST) 148 gathers the input normal cell count enable signal and the input discard cell count enable signal from the input controllers, and increments the count as each corresponding enable signal is received. It uses an IST memory to count both normal and discarded cells. All the corresponding counter values are cleared after the CPU accesses the counter.
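The multi-cast routing-tag handling above amounts to a bit-clearing loop over a single stored copy. The sketch below is an illustration, not the patented logic: it assumes 8 output ports, treats the routing tag as an 8-bit destination mask, and uses a hypothetical `port_ready` predicate in place of the real output-port status signals.

```python
def multicast_from_single_copy(routing_tag, port_ready):
    """Deliver one stored word to every port flagged in the routing tag.

    routing_tag: bitmask of destination output ports (bit i = port i).
    port_ready:  callable; port_ready(i) is True when port i can accept a word.
    Each delivery clears that port's bit from the tag; the word's single
    memory copy is reusable until the tag reaches all zero bits.
    Returns the order in which ports received the word.
    """
    order = []
    while routing_tag:                      # loop until every bit is cleared
        for port in range(8):
            bit = 1 << port
            if routing_tag & bit and port_ready(port):
                order.append(port)          # transmit from the one stored copy
                routing_tag &= ~bit         # delete this output-port bit
    return order
```

Because each transmission clears exactly one bit, ports may receive the word at different departure times, yet only one memory location is ever occupied.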
For each output cell count enable signal received from the output controllers, the output statistic (OST) 150 is incremented. The output statistic uses one OST memory for counting output cells. The value of the counter is cleared after the CPU accesses the output cell counter.
The CPU interface (CPUIF) provides a microprocessor interface with device control, configuration and monitoring. The interface also provides automatic monitoring functions.
The interface is capable of operating in either an interrupt-driven or polled-mode configuration. The routing tag location and the cell length can be pre-programmed according to user definitions via the interface. The value of the input normal cell counter, discard cell counter and output cell counter can also be selected and transmitted to the CPU.

In view of the foregoing, it will be seen that the several advantages of the invention are achieved and attained. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. As various modifications could be made in the constructions and methods herein described and illustrated without departing from the scope of the invention, it is intended that all matter contained in the foregoing description or shown in the accompanying drawings shall be interpreted as illustrative rather than limiting.
Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims appended hereto and their equivalents.

Claims

What is Claimed Is:
1. A pipelined switch fabric device through which a plurality of sliced words are transferred between a plurality of input ports and at least one output port, comprising: a plurality of memory units in which said sliced words are temporarily stored, wherein each memory unit has a multiplexer for sequentially receiving said sliced words from at least one of said input ports, a buffer for storing said sliced words, and a de-multiplexer for transmitting said sliced words to at least one said output port; and a switch controller for selecting the memory units to receive said sliced words and for selecting at least one said output port for the memory units to send said sliced words.
2. A pipelined switch fabric device according to claim 1, further comprising a packet slicer for sequentially slicing said sliced words from a plurality of fixed-length cells at each of said input ports.
3. A pipelined switch fabric device according to claim 2, wherein the packet slicer is comprised of said multiplexer and a delay element.
4. A pipelined switch fabric device according to claim 3, wherein the switch controller sends an input control signal to at least one of the multiplexers.
5. A pipelined switch fabric device according to claim 1, wherein the plurality of memory unit buffers are formed from a sliced shared memory.
6. A pipelined switch fabric device according to claim 1, wherein the switch controller sends an output control signal to at least one of said de-multiplexers.
7. A pipelined switch fabric device according to claim 1 wherein the plurality of memory units store said sliced words in a period of one clock cycle.
8. A pipelined switch fabric device according to claim 1 wherein the plurality of memory units transmit said sliced words in a period of one clock cycle.
9. A pipelined switch fabric device having a plurality of input ports and at least one output port carrying fixed-length cells, comprising: a plurality of switch input controllers, each of the input controllers having a packet handler in communication with one of said input ports for receiving said fixed-length cells therefrom; a plurality of packet slicers in communication with the input controllers for receiving the fixed-length cells and slicing the fixed-length cells into a plurality of words whereby each of the words is assigned a port-slice identifier; a plurality of pipelined switch elements in communication with the packet slicers for receiving the words, wherein each of the switch elements has a buffer formed from a sliced shared memory for sequentially storing words having known port-slice identifiers; a plurality of switch output controllers in communication with the switch elements, each of the output controllers sequentially receiving words according to their known port-slice identifiers and having an output packet processor for merging the words into output cells and transmitting the output cells onto at least one of said output ports; and a routing controller for assigning the port-slice identifiers to the words, for scheduling an input sequence of the words transferred between the input controllers and the switch elements such that the words are stored having the known port-slice identifiers, and for scheduling an output sequence of the words transferred between the switch elements and the output controllers according to the known port-slice identifiers.
10. A pipelined switch fabric device according to claim 9, wherein each of the packet slicers is comprised of at least one multiplexer and a delay element.
11. A pipelined switch fabric device according to claim 10, wherein the routing controller sends an input control signal to at least one of the multiplexers.
12. A pipelined switch fabric device according to claim 9 wherein the fixed-length cells have a selectable fixed-length that can be programmed into said switch fabric architecture.
13. A pipelined switch fabric device according to claim 12 wherein the switch input controller further includes an input packet processor having a cell length check including a FIFO storage such that, for words consecutively arriving at the same port, the first word is being transmitted to a pipelined switch element while the next word is being stored in the FIFO storage.
14. A pipelined switch fabric device according to claim 9 wherein the number of words sliced for each fixed-length cell is equal to or less than the number of pipelined switch elements.
15. A pipelined switch fabric device according to claim 9 wherein the plurality of pipelined switch elements store said sliced words in a period of one clock cycle.
16. A pipelined switch fabric device according to claim 9 wherein the plurality of pipelined switch elements transmits said sliced words in a period of one clock cycle.
17. A method for switching fixed-length cells through a sliced shared memory having a plurality of memory units between a plurality of input ports and at least one output port comprising the steps of: sequentially receiving said fixed-length cells from each of said input ports; slicing said fixed-length cells into an ordered plurality of words; scheduling an input sequence for the words according to the slicing order of the words and the input port at which said data packet is received such that each word is assigned a port- slice identifier; sequentially transferring the words to each of said memory units according to the scheduled input sequence and storing the words having known port-slice identifiers in said sliced shared memory; scheduling an output sequence for the words according to their port-slice identifier and a variable switching logic such that the output sequence includes at least one said output port and a merging order; and sequentially outputting the words from each of said memory units to at least one said output port according to the scheduled output sequence and thereby forming output cells.
18. A switching method according to claim 17 comprising the additional step of multicasting the output words to multiple output ports and thereby forming a plurality of identical output cells.
19. A switching method according to claim 17 comprising the additional step of preprogramming a length for the fixed-length cell.
20. A switching method according to claim 19 further comprising the additional step of checking the length of the fixed-length cell according to the pre-programmed length.
PCT/US2001/016222 2000-05-19 2001-05-17 Pipelined and sliced shared-memory switch WO2001091390A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001263297A AU2001263297A1 (en) 2000-05-19 2001-05-17 Pipelined and sliced shared-memory switch

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US57491700A 2000-05-19 2000-05-19
US09/574,917 2000-05-19

Publications (2)

Publication Number Publication Date
WO2001091390A2 true WO2001091390A2 (en) 2001-11-29
WO2001091390A3 WO2001091390A3 (en) 2002-04-18

Family

ID=24298171

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/016222 WO2001091390A2 (en) 2000-05-19 2001-05-17 Pipelined and sliced shared-memory switch

Country Status (3)

Country Link
KR (1) KR20010106079A (en)
AU (1) AU2001263297A1 (en)
WO (1) WO2001091390A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004079961A2 (en) * 2003-03-03 2004-09-16 Xyratex Technology Limited Apparatus and method for switching data packets

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5799209A (en) * 1995-12-29 1998-08-25 Chatter; Mukesh Multi-port internally cached DRAM system utilizing independent serial interfaces and buffers arbitratively connected under a dynamic configuration
US5905725A (en) * 1996-12-16 1999-05-18 Juniper Networks High speed switching device
US5910928A (en) * 1993-08-19 1999-06-08 Mmc Networks, Inc. Memory interface unit, shared memory switch system and associated method
US6031842A (en) * 1996-09-11 2000-02-29 Mcdata Corporation Low latency shared memory switch architecture

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004079961A2 (en) * 2003-03-03 2004-09-16 Xyratex Technology Limited Apparatus and method for switching data packets
WO2004079961A3 (en) * 2003-03-03 2005-10-06 Xyratex Tech Ltd Apparatus and method for switching data packets

Also Published As

Publication number Publication date
KR20010106079A (en) 2001-11-29
WO2001091390A3 (en) 2002-04-18
AU2001263297A1 (en) 2001-12-03


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

32PN Ep: Public notification in the EP bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC DATED 04-03-2003

122 Ep: PCT application non-entry in European phase
NENP Non-entry into the national phase

Ref country code: JP