CA2196567C - Network communication unit using an adaptive router - Google Patents


Info

Publication number
CA2196567C
Authority
CA
Canada
Prior art keywords
node
path
port
packet
output port
Prior art date
Legal status
Expired - Fee Related
Application number
CA002196567A
Other languages
French (fr)
Other versions
CA2196567A1 (en)
Inventor
Robert C. Duzett
Stanley P. Kenoyer
Current Assignee
NCube Corp
Original Assignee
NCube Corp
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed
Application filed by NCube Corp
Publication of CA2196567A1
Application granted
Publication of CA2196567C
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17356Indirect interconnection networks
    • G06F15/17368Indirect interconnection networks non hierarchical topologies
    • G06F15/17381Two dimensional, e.g. mesh, torus

Abstract

A parallel processor network comprised of a plurality of nodes, each node including a processor containing a number of I/O ports, and a local memory. A communication path is established through a node by comparing a target node address in a first address packet with a processor ID of the node. If the node address is equal to the target node address, a receive channel is allocated to the input port and a route-ready command is sent over an output port paired with the input port. If the node address is not equal to the target node address, then a first unallocated output port is selected from a port vector and the address packet is forwarded to a next node over the selected output port.

Description

NETWORK COMMUNICATION UNIT USING AN ADAPTIVE ROUTER
Cross-reference to Related Application
US patent 5,367,636 entitled "Network Communication Unit for use in a High Performance Computer System" of Stephen R. Colley, et al., granted on November 22, 1994, is assigned to nCUBE Corporation, the assignee of the present invention.
Background of the Invention
Field of the Invention
The invention relates to data-processing systems, and more particularly, to a communication mechanism for use in a high-performance, parallel-processing system.
Description of the Prior Art
US patent 5,113,523 describes a parallel processor comprised of a plurality of processing nodes, each node including a processor and a memory. Each processor includes means for executing instructions, logic connected to the memory for interfacing the processor with the memory, and an internode communication mechanism.
The internode communication mechanism connects the nodes to form a first array of order n having a hypercube topology. A second array of order n having nodes connected together in a hypercube topology is interconnected with the first array to form an order n+1 array. The order n+1 array is made up of the first and second arrays of order n, such that a parallel processor system may be structured with any number of processors that is a power of two. A set of I/O processors is connected to the nodes of the arrays by means of I/O channels. The internode communication comprises a serial data channel driven by a clock that is common to all of the nodes.
The above-referenced US patent 5,367,636 describes a fixed-routing communication system in which each of the processors in the network described in US patent 5,113,523 is assigned a unique processor identification (ID). The processor IDs of two processors connected to each other through port number n vary only in the nth bit. A plurality of input ports and a plurality of output ports are provided at each node. Control means at one of the input ports of the node receives address packets related to a current message from an output port of another of the nodes. A data bus connects the input and output ports of the node together such that a message received on any one input port is routed to any other output port. Compare logic compares a node address in a first address packet with the processor ID of the node to determine the bit position of the first difference between the node address in the first address packet and the processor ID of the node. The compare logic includes means for activating, for transmission of the message packet placed on the data bus by the input port, the one of the plurality of output ports whose port number corresponds to the bit position of the first difference, starting at bit n+1, where n is the number of the port on which the message was received.

In the fixed-routing scheme described in the above-referenced US patent 5,367,636, a message from a given source to a given destination can take exactly one routing path, unless it is forwarded, cutting through intermediate nodes and blocking on busy channels until the path is established. The path taken is the dimension-order minimum-length path. While this scheme is deadlock-free, it will not reroute messages around blocked or faulty nodes.
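The dimension-order rule described above can be sketched in software. The helper below is an illustrative model, not the patent hardware; the function name and the convention of passing -1 for a message that originates at the node are our own. The candidate port is the first differing bit of the XOR of the node ID and the target address, scanning from bit n+1 and wrapping around.

```python
def fixed_route_port(node_id: int, target: int, in_port: int, dim: int):
    """Model of the fixed (dimension-order) router: return the output
    port for the next hop, or None if this node is the target.
    in_port is the port the message arrived on (-1 at the source)."""
    diff = node_id ^ target          # set bits mark dimensions still to traverse
    if diff == 0:
        return None                  # message has reached the target node
    for k in range(dim):
        bit = (in_port + 1 + k) % dim    # helical scan starting at bit n+1
        if (diff >> bit) & 1:
            return bit
    return None
```

For example, routing from node 0 to node 5 in a dimension-3 cube first leaves on port 0; arriving at node 1 on port 0, the message then leaves on port 2, the one remaining differing dimension.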

A publication of Gaughan, et al., "Adaptive Routing Protocols for Hypercube Interconnection Networks", Computer, May 1993, pages 12-23, describes theoretical adaptive routing protocols that, if implemented, could dynamically adapt to network conditions and bottlenecks. It suggests the use of a probe to establish a source-destination path. In the absence of a bottleneck, the probe would follow a fixed path, but in an adaptive protocol the probe could establish a route around a faulty node if one exists. Two types of theoretical adaptive routing protocols are covered, progressive and backtracking. Progressive protocols move forward and wait for a path to become available, while backtracking protocols back up and try other potential paths. The publication at page 20 recognizes that these protocols present major hardware/software implementation issues, and no specific implementation is described.

A publication of Schmidt-Voigt, M., entitled "Efficient Parallel Communication with the nCUBE 2S Processor", Parallel Computing, 20 (1994) April, No. 4, Amsterdam, NL, pages 509-530, describes the nCUBE 2S processor's hardware and software facilities for communication. The publication describes the fixed-routing communication system that is described above with respect to the above-referenced US patent 5,367,636.
It is desirable to provide a new communication system using reliable messaging mechanisms, whereby both the sender and receiver of a message can know quickly whether the message was delivered reliably, and the receiver may deliver status information back to the sender before an established path is broken.
It is also desirable that the communication system be able to route messages around blocked or faulty nodes and hot spots in a parallel processor, by implementing unique adaptive routing methods that make use of the reliable messaging mechanisms.

It is also desirable that the communication mechanism provide a more efficient utilization of the bandwidth of a hypercube communication network, by duplicating (folding) network links or otherwise unused communications ports, and by avoiding extended network blockages through the adaptive routing methods.

Summary of the Invention
The reliable messaging mechanisms, whereby both the sender and receiver of a message can know quickly whether the message was delivered reliably, and the receiver may deliver status information back to the sender before an established path is broken, are accomplished in accordance with an embodiment of the present invention by providing an end-to-end reporting network. When end-to-end is enabled and a message transmission along an established path from node A to node B is fully received at a receiver or "target" node B, hardware error status or a programmed receive code is sent, by the hardware, from node B back to node A along a parallel "back-track" path. Thus the communications architecture provides a transmission network and a corresponding back-track reporting network. These two networks are implemented as virtual networks that share the same physical communications network of inter-nodal links. This has the advantage that a back-track network is added to an existing transmission network without a requirement for additional internodal communications links or signals.

In accordance with an aspect of the invention, end-to-end status packets are delivered via the back-track network to the send channel, at the sending node, that initiated the corresponding transmission. There is provided at each send channel an end-to-end status queue at which the programmer/central processing unit (CPU) is notified of and extracts the status.
The end-to-end hardware has the advantage of providing reliable messaging without additional message transmissions and the corresponding CPU and software overhead. Also, status is returned more reliably and much more quickly by these dedicated end-to-end mechanisms than would be the case if a separate message had to be delivered from the receiver to the sender. Therefore, operating system resources dedicated to a given transmission can be released much sooner. In addition, these end-to-end mechanisms provide the back-track network upon which an adaptive routing protocol is built.

A communication system able to route messages around blocked or faulty nodes and hot spots in a parallel processor is accomplished by implementing a unique maze adaptive routing mechanism that makes use of the back-track mechanism described above.

In a maze adaptive routing scheme for a transmission from node A to node B, all minimum-length paths between the two nodes are searched by a single-packet scout that attempts to find a free path.

One minimum path at a time is scouted, starting with the lowest-order uphill path and doing a depth-first, helical traversal of the minimum-path graph until a free path to the destination is found. If no free minimum-length path is found, other, non-minimum-length paths may be searched; or the central processing unit may be interrupted so that software can restart the search or implement some other policy.
The maze router also exhibits superior bandwidth usage and latency for most message mixes. This is attributed to its exhaustive yet sequential approach to route searching. The maze router eliminates the blockage of the fixed-routing wormhole scheme, yet keeps route-search traffic to a minimum.
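A minimal software model of this search illustrates the depth-first, helical traversal of the minimum-path graph with backtracking. The naming is ours, and link_free is a hypothetical predicate standing in for the busy/allocated state of each link; the real scout is a hardware packet, not a recursive routine.

```python
def maze_scout(src: int, dst: int, dim: int, link_free):
    """Model of the maze scout: depth-first search over all minimum-length
    paths, scanning ports helically from the arrival port, backtracking
    when a link is busy. Returns the list of ports taken, or None."""
    def search(node, in_port, path):
        diff = node ^ dst
        if diff == 0:
            return path                      # reached the target node
        for k in range(dim):
            bit = (in_port + 1 + k) % dim    # helical port scan
            if (diff >> bit) & 1 and link_free(node, bit):
                found = search(node ^ (1 << bit), bit, path + [bit])
                if found is not None:
                    return found             # free path established
        return None                          # all candidates busy: backtrack
    return search(src, -1, [])
```

With all links free, the scout from node 0 to node 5 takes ports [0, 2]; if the link out of node 0 on port 0 is busy, the scout backtracks and establishes [2, 0] instead.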
In accordance with an aspect of the invention, a node address packet, used for finding and establishing a route to a destination node, is provided with a destination-node address, plus other necessary control bits and parameters. The address packet, in conjunction with the back-track routing network, provides the means by which a suitable transmission path can be found adaptively by "scouting-out" various possible routes and reporting routing status back to the sending node as appropriate.

The invention has the advantage that the mechanism automatically routes around blocked or disabled nodes.
A more efficient utilization of the bandwidth of a hypercube communication network, by duplicating network links or otherwise unused communications ports, and by avoiding extended network blockages through the adaptive routing methods, is accomplished in accordance with the present invention by a network folding mechanism. For system configurations in which the hypercube is of a smaller dimension than the maximum supported by available port-links, the unused hypercube links can be used to duplicate the required internodal connections and thus provide additional routing paths and message data bandwidth. These "folded" connections are configured in the processor with a fold-enable mask register. The route-selection logic and node addressing logic are modified to allow transmission paths to be established through these folded/duplicated links.
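As a rough sketch, the fold-enable mask can be modeled as widening the candidate port vector. The partner numbering p + dim is an assumption made here for illustration only; the actual mapping of folded links is fixed by the chip's port wiring.

```python
def fold_candidates(port_vector: int, fold_mask: int, dim: int) -> int:
    """Model of folding: for every candidate port whose fold-enable bit
    is set, also offer its folded (duplicate) partner as a candidate.
    Partner numbering p -> p + dim is assumed for illustration only."""
    vec = port_vector
    for p in range(dim):
        if (port_vector >> p) & 1 and (fold_mask >> p) & 1:
            vec |= 1 << (p + dim)    # add the duplicated link as a candidate
    return vec
```

With more candidate links per hop, a scout has more chances to find a free path, which is the bandwidth and latency benefit described below.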
The folding mechanisms provide the advantage of improved network bandwidth and message latency for systems configured as less than the maximum hypercube. This is accomplished in two ways. First, more links are available to paths within the folded cube and therefore more message data can be directed from one node to another in a given amount of time, thus increasing bandwidth. Second, more routing paths are available so that there is less chance of a blockage or a failure to route, and therefore a higher network bandwidth usage. Also, the average time to establish a route is shorter, thus reducing overall latency. These advantages apply to both fixed and adaptive routing methods, but are most efficiently exploited by the adaptive router.
According to one broad aspect, the invention provides in a network of interconnected nodes; each node in said network being assigned a unique identification (ID); a sending node;
said sending node originating an address packet having a target node address of a target node; each node in said network including comparing means for comparing said target node address with an ID of said node; said comparing means creating a first condition provided that said ID is not equal to said target node address, indicating that a node is an intermediate node and, alternatively, a second condition provided that said ID is equal to said target node address, indicating that a node is said target node; a plurality of input ports at each of said nodes; a plurality of output ports at each of said nodes; each one of said input ports being paired with a corresponding one of said output ports; control means at one input port of said input ports of a particular node for receiving said address packet transmitted from an output port of a previous node of said interconnected nodes; allocating means, operative upon occurrence of said first condition indicating that said particular node is an intermediate node, for allocating to said one input port, one of said output ports of said particular node, but excluding the output port paired with the input port over which said address packet is received; an improvement characterized by: means at said particular node for establishing a path to a next node upon occurrence of said first condition at said particular node, resulting in an established path from said sending node through said particular node; and, means for sending a path command packet from said particular node back to said sending node, said path command packet being sent out of a particular node output port paired with a particular node input port over which said address packet was received.
In another broad aspect, the invention may be characterized as in a network of interconnected nodes; each node in said network being assigned a unique identification (ID) and including a plurality of input ports and a plurality of output ports; a sending node originating an address packet having a target node address of a target node; each node in said network including comparing means for comparing said target node address with a unique identification (ID) of said node; a method comprising steps of: A. creating a first condition provided that said ID is not equal to said target node address, indicating that a node is an intermediate node and, alternatively, a second condition provided that said ID is equal to said target node address, indicating that a node is said target node; B. receiving, at one input port of a particular node, said address packet transmitted from a previous node of said interconnected nodes; C. allocating, upon occurrence of said first condition indicating that said particular node is an intermediate node, to said one input port, one of said output ports of said particular node, but excluding an output port paired with an input port over which said address packet is received, said particular node establishing a path to a next node upon occurrence of said first condition at said particular node, resulting in an established path from said sending node through said particular node; and, D. sending a path command packet from said particular node back to said sending node out of a particular node output port paired with a particular node input port over which said address packet is received.
Brief Description of the Drawings The foregoing and other objects, features, and advantages of the invention will be apparent from the following detailed description of a preferred embodiment of the invention, as illustrated in the accompanying drawings wherein:
FIGURE 1 is a detailed block diagram of a communications unit in which the present invention is embodied;
FIGURE 2 is a block diagram of a receive channel shown in FIGURE 1;
FIGURE 3 is a block diagram of a send channel shown in FIGURE 1;
FIGURE 4 is a block diagram of an input port shown in FIGURE 1;
FIGURE 5 is a block diagram of an output port shown in FIGURE 1;

FIGURE 6 is a block diagram of routing registers and logic shown in FIGURE 1;
FIGURE 7 is a maze routing example in an eight-processor, dimension-3 hypercube network;
FIGURE 8 is a graph of message latency versus percent of network transmitting data, for fixed and adaptive routing;
FIGURE 9 is a diagram of an address packet;

FIGURE 10 is a diagram of a data packet;
FIGURE 11 is a diagram of a command packet;
FIGURE 12 is a flow diagram of the routing state of a send operation;
FIGURE 13 is a flow diagram of the routing state of an input port operation;
FIGURE 14 illustrates end-to-end acknowledge;
FIGURE 15 is a maze-route timing diagram wherein there are no blocked links; and,
FIGURE 16 is a maze-route timing diagram wherein there are blocked links.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Signal Line Definitions
The following is a summary of signal line abbreviations used in FIGURE 1 and their definitions:
CPU- Central Processing Unit.
CUTB- cut through bus by which commands, addresses and data are passed among ports and channels.
IPINS- input pins.
MEMADR- Memory address bus.
MEMDAT- Memory data bus.
NODEID- Node identification -a unique code number assigned to each node processor to distinguish a node from other nodes.
OPALL- output port allocation, one for each of 18 ports and 8 receive channels.
OPBSY- Output port busy -one line for each of 18 output ports and 8 receive channels to indicate that the corresponding output port or channel is busy.
OPINS- output pins.
OPSELV- output port and receive channel select vector; routing logic indicates the output port or channel selected.
PORTCUT- a vector indicating to which port, send channel or receive channel to cut through.
PORTSRC- a vector indicating which input port or send channel requests a route.
PORTRT- a vector of candidate ports from an agent requesting a route.
PRB- Processor Bus - a data bus from the central processing unit (CPU).
RDMAADR- receive DMA address.
RDMADAT- receive DMA data.


SDMAADR- send DMA address.
SDMADAT- send DMA data.
Command Definitions ETE-ack- End-to -end acknowledge - when a transmission has completed, backtracking takes place in the reverse direction along the transmission route as ETE-ack logic retraces the path in order to deallocate all ports in that path and delivers status to the originating send channel.
BOT- Beginning of Transmission - A signal generated by a send instruction to a sending channel that indicates the beginning of a transmission.
EOM - End of message is a command delivered to the target node that indicates that this is the end of a message.
EOT - End of transmission is a command delivered to the target node that indicates that this is the last packet of this transmission.
ETE ack - The End-to-end acknowledge command indicates a transmission was delivered successfully to the target node. It includes a receive code set up by the software at the receiver end.
ETE nack - The End-to-end not-acknowledge command indicates that a message was not delivered successfully to the target node and returns status such as parity error or receive count overflow.
ETE en - The end-to-end enable signal is sent with the address packet to indicate that the end-to-end logic at the receive channel in the target node is enabled.
Flush_path - a flush command that deallocates and frees up all ports and channels in a path to a target node.

Reset node - Reset node is a command that resets a node and its ports and channels to an initial state.
Reset CPU - Reset CPU is a command that resets a CPU at a node but not its ports and channels to an initial state.
Rcv rej - Receive reject is a path rejection command that indicates that no receive channel is available at the target node.
Rcv rdy - Receive ready is a command that indicates that the receive channel is ready to accept a transmission.
Route rdy - Route ready is a command that indicates to a send channel that a requested route to the target node and receive channel has been found and allocated for a transmission from the send channel.
Route reject - Route reject is a path rejection command that indicates that all attempted paths to the target node are blocked.
Rt ack - Route acknowledge is a path acknowledge command that indicates that the path that a scout packet took to the target node is available and allocated (reserved).
Send rdy - Send ready is a status register that indicates the send channels that are ready to start a message or transmission.
Refer to FIGURE 1, which is a detailed block diagram of a communications unit in which the present invention is embodied. A direct memory access (DMA) buffers and logic block (10) is connected to a main memory (not shown) via memory address (memadr) and memory data (memdat) buses. Eight receive (rcv) channels (12) are paired with eight send channels (14). A routing registers and logic block (16) is connected to the send channels via Portrt and Opselv, and to the receive channels via Opselv. A cut-through arbiter and signals block (18) is connected to the send and receive channels. A routing arbiter and signals block (20) is connected to the send channels (14) and to the routing registers and logic (16). Eighteen output-port/input-port pairs (22) are connected to the cut-through arbiter and signals (18), the routing arbiter and signals (20), and to the send and receive channels via Portcut, Cutb, and Portsrc. A processor bus (prb) is connected from a central processing unit (not shown) to the receive channels, the send channels, the routing registers and logic, and to input and output ports (22). The input ports are connected to the routing arbiter and signals block (20) and off-chip via the input pins (Ipins). The output ports are connected to the cut-through arbiter and signals block (18) and off-chip via the output pins (Opins).
Receive Channel
Refer to FIGURE 2, which is a block diagram of one of the eight receive channels (12) shown in FIGURE 1. Each receive channel includes a receive direct memory access register, RDMA (50), a receive status register, RSTAT (52), a four-word-deep DMA buffer, DMABUF (54), and a receive source vector register, RSRCVEC (56). The input port that cuts through transmission data to this receive channel is indicated by the contents of the RSRCVEC register, which is placed on the PORTCUT bus when the receive channel returns an ete-ack or ete-nack command.
An address packet or data packet is received from an input port over the cut-through bus CUTB. Data are buffered in the receive DMA buffer DMABUF (54) before being written to memory. An address and length describing where in memory to place the data are stored in the receive DMA register (50). As data is received it is transferred over the data write bus DWB to a memory controller along with the address DADR and word count DCNT.
The receive source vector register RSRCVEC is an indication of the input port from which the data was sent. An end-to-end (ETE) command, with end-to-end (ETE) status, is sent back to the sending port from the RSTAT register (52).
Send Channel
Refer to FIGURE 3, which is a block diagram of one of the eight send channels (14) shown in FIGURE 1. Each send channel includes a send buffer DMA, SBDMA (58), a send DMA register, SDMA (60), a send buffer path, SBPTH (62), a send path register, SPTH (64), an end-to-end buffer, ETEB (66), an end-to-end register, ETE (68), a DMA buffer, DMABUF (70), a send port vector register, SPORTVEC (72), a send port select register, SPORTSEL (74), and a send port alternate register, SPORTALT (76).
The SDMA register (60) stores address and length fields, double queued with the SBDMA register (58). The SPTH register (64) stores the port vector and node address of the destination (target) node, double queued with the SBPTH register (62). At the end of a transmission the SBDMA register (58) is popped to the SDMA register (60) and the SBPTH register (62) is popped to the SPTH register (64). At the end of a message only the SBDMA is popped, to the SDMA register.
The ETE register (68) is the top of the end-to-end (ETE) queue and the ETEB register (66) is the bottom of the end-to-end (ETE) queue. An end-to-end (ETE) status is returned via the CUTBUS and stored in the ETEB register (66). The ETEB is popped to the ETE if the ETE is empty or invalid.
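The two-deep queue behavior can be modeled as follows. This is a sketch in software; the method names are ours, and the hardware performs these moves implicitly rather than through explicit calls.

```python
class EteQueue:
    """Model of the two-entry end-to-end status queue of a send channel:
    ETEB (bottom) is filled from the back-track network and popped into
    ETE (top) whenever ETE is empty; the CPU reads status from ETE."""

    def __init__(self):
        self.ete = None    # top entry, visible to the CPU
        self.eteb = None   # bottom entry, written from the CUTBUS

    def _pop(self):
        # promote the bottom entry when the top is empty
        if self.ete is None and self.eteb is not None:
            self.ete, self.eteb = self.eteb, None

    def push_status(self, status):
        self.eteb = status     # status arriving on the back-track network
        self._pop()

    def read_status(self):
        status, self.ete = self.ete, None
        self._pop()            # promote any waiting ETEB entry
        return status
```

Two back-to-back statuses are thus held in order, and the CPU extracts them first-in, first-out.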
CPU instructions access the registers by means of the PRB bus. If the SDMA register is in use, the information is placed in the buffer SBDMA (58). If SBDMA (58) is also full, a flag is set in the CPU indicating that the send channel is full. (The same applies to SPTH and SBPTH.)
The send port vector SPORTVEC (72) is in a separate path in the send channel but is part of the SPTH register. SPORTVEC stores a bit pattern indicating through which output ports the transmission may be routed. The port vector is passed by means of the port route PORTRT bus to the routing logic shown in FIGURE 6. A PORTSRC line is asserted to indicate which channel or port is requesting the new route. If accepted, a vector, which is a one in a field of zeros, is sent from the routing logic via the output port select OPSEL bus to the send port select SPORTSEL register (74). The SPORTSEL register indicates the one selected output port.
The send channel sends an address packet, and/or data from the DMABUF (70), to the output port via the cut-through bus CUTB. The output port for the cut-through is selected by placing the port select vector SPORTSEL (74) on the port cut-through select bus PORTCUT.

The SPORTVEC vector is inverted and the inverted vector is placed in the alternate port vector register SPORTALT (76). If all attempted routes fail using the SPORTSEL register, the SPORTALT register is transferred to the SPORTSEL to provide an alternate route select attempt.
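As a sketch, the alternate vector is simply the bitwise complement of the original candidate vector over the available ports; the port count is passed as a parameter here purely for illustration.

```python
def alternate_vector(sportvec: int, nports: int) -> int:
    """Model of SPORTALT: the complement of SPORTVEC limited to the
    nports port positions, offering the previously excluded ports as
    a fallback route-selection attempt."""
    return ~sportvec & ((1 << nports) - 1)
```

So if the primary candidates were ports 0 and 2 of four, the alternate attempt offers ports 1 and 3.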
The output port allocated OPALL lines are activated to prevent another channel or port from interfering with the selected (allocated) port.
Input Port
Refer to FIGURE 4, which is a block diagram of an input port shown in FIGURE 1. Each input port includes an input data register, IDAT (78), an input data buffer, IBUFDAT (80), an input buffer command register, IBUFCMD (81), an input back-track data register, IBAKDAT (82), an identification difference register, IDDIF (84), an input port source register, IPORTSRC (86), an input port vector register, IPORTVEC (88), and an input port select register, IPORTSEL (90).
Pairs of bits that are shifted out of an output port of a corresponding hypercube neighbor node are shifted into the input port command register IBUFCMD (81) on the IPINS.
At the front of a packet is the packet type, which indicates the size of the packet. If it is a short packet, it is shifted into the middle of the IBUFDAT register (80). If it is a backtrack command, the bits are shifted into the IBAKDAT register (82). If an address is shifted in, it is compared with the address of the node, NODEID. The result is a difference vector that is loaded into the IDDIF register (84). If folding is enabled, the FOLDEN line is asserted and the ID bits corresponding to the folded port are used to modify the difference vector IDDIF accordingly. The contents of the IDDIF register are loaded into the input port vector register IPORTVEC (88), which is used to identify the next minimum-path ports through which a message may be routed. The IPORTVEC is sent to the routing logic of FIGURE 6 via the port route bus PORTRT. At the same time, the input port asserts its corresponding bit on PORTSRC, which is passed to the routing logic with the port routing.
The output port selected by the routing logic is indicated on the OPSELV bus, which is written into the IPORTSEL register. Also, the PORTSRC value is written into the SRCVEC register of the input port corresponding to the selected output port. If a back-track command is received at an input port via the IBAKDAT register, SRCVEC selects the output port to which the back-track data is sent.
Any time data is put out on the CUTB bus, the contents of the IPORTSEL register (90) or of the SRCVEC register are put out on the PORTCUT bus to select the output port to receive the data.
Output Port
Refer to FIGURE 5, which is a block diagram of an output port shown in FIGURE 1. Each output port includes an output data register, ODAT (92), an output data buffer, OBUFDAT (94), an output back-track data register, OBAKDAT (96), and an output acknowledge data register, OACKDAT (98).
An address or data packet arrives from an input port or send channel on the cut-through bus CUTB and is loaded into the output data register ODAT (92). The ODAT register is popped into the output buffer data register OBUFDAT (94) if it is not busy. If ODAT is full, an output port busy OPBSY line is asserted. The output back-track data register OBAKDAT (96) stores back-track commands. The output acknowledge data register OACKDAT (98) stores packet acknowledge commands. The OBUFDAT register (94), OBAKDAT register (96), and OACKDAT register (98) are shift registers that shift bits out of the OPINS every clock period, two pins (bits) per clock period.
Routing Regusters and Loqic Refer to FIGURE 6 which is block diagram of the routing registers and logic (16) shown in FIGURE 1. The routing registers include node identification register, NODEID
(100), termination register, TERMIN (102), fold enable register, FOLDEN (104), output port allocation register, OPALL
(108), output port busy register, OPBSY (110), alternate mask register, ALTMSK
(112), input output mask, IOMSK (114), input output select, IOSEL (116), OPORTEN (118), and routing find-first-one (FFO) logic (120).
The NODEID register (100) contains the node address of this processor.

The termination register TERMIN (102) indicates which input ports are terminal ports. Terminal ports do not compare address packets with the NODEID, and any address packet that arrives at such a port is accepted as having arrived at the target node.
The fold enable register FOLDEN (104) holds a vector which indicates which ports can be folded. FOLDEN is considered by the routing FFO (120) when performing a wormhole routing protocol, such that if the first port is not available but its folding partner is available, a route is set up using the folded port; or when performing a maze routing protocol, such that the folding partners of the PORTRT ports are also considered as potential routing candidates.
When a send channel or an input port is requesting a path, the PORTRT bus carries a vector which indicates all the oports from which the routing logic may choose to form the next link in the Xmission path. PORTSRC carries a vector which identifies from which channel the request comes; and the OPALL vector indicates which output ports have already been allocated.
For a wormhole routing protocol, a find first one (FFO) is performed with the routing FFO (120) on bits in the PORTRT vector, starting at the first bit position beyond the port (on the PORTSRC bus) from which a request came, and wrapping around to bit position 0 and beyond if necessary (this is a 'helical' search). The first 'one' bit indicates the output port through which the next link in the route must be taken. If folding is enabled for this port (FOLDEN), the folded output port corresponding to this one is also available for routing and is indicated in the output vector of the FFO. This vector is masked with OPORTEN and OPALL to generate the output port select vector on the OPSELV bus. If OPSELV is all zeroes, route selection has failed, i.e. the output port required to route the next link is unavailable, and the send channel or ioport must retry the routing request until the output port becomes available.
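The wormhole-case helical FFO can be sketched in Python. This is a behavioral model only; folding is omitted, and the masking order (FFO first, then OPORTEN and OPALL) follows the paragraph above:

```python
# Sketch of the wormhole-route FFO: scan PORTRT starting one bit beyond
# the requesting port, wrapping around ('helical' search), then mask
# the chosen bit with OPORTEN (enabled) and ~OPALL (unallocated).

def wormhole_select(portrt: int, src_port: int, opall: int,
                    oporten: int, num_ports: int = 18) -> int:
    """Return a one-hot OPSELV vector, or 0 if route selection failed."""
    for i in range(num_ports):
        p = (src_port + 1 + i) % num_ports   # helical scan order
        if portrt & (1 << p):
            # The first 'one' bit names the required output port; the
            # route fails if that port is disabled or already allocated.
            return (1 << p) & oporten & ~opall
    return 0

assert wormhole_select(0b101, 0, 0, 0x3FFFF) == 0b100   # first bit past port 0
assert wormhole_select(0b101, 2, 0, 0x3FFFF) == 0b001   # wraps around
assert wormhole_select(0b101, 0, 0b100, 0x3FFFF) == 0   # required port busy
```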
For a maze routing protocol, the PORTRT vector is first enhanced with any folded ports as indicated by the FOLDEN register. It is then masked with OPALL and OPORTEN, before the FFO is performed. As with the wormhole case, the FFO operation starts at the first bit position beyond that indicated with the PORTSRC bus and wraps around as needed ('helical' search).
Thus, for maze routing, the first available output port from the PORTRT vector will be selected and placed on the OPSELV bus. If none of the PORTRT ports is available, OPSELV will be zero and the route selection has failed. If the route request was from a Send Channel, the Send Channel will then place a route-rejection status in its ETE
queue and interrupt the CPU. If the route request was from an input port, a route-rejection command will be sent back to the previous node in the path, via the output port paired with the requesting input port.
Routing Arbiter and Signals

The routing arbiter and signals (20) shown in FIGURE 1 is a find-first-one (FFO) chain. The last port or Send Channel that was selected is saved. The next time a search is made, the search starts just beyond that last position and a round-robin type of priority search is conducted. Ports or Send Channels arbitrate for route select to get access to the routing logic (16), which will select an output port to form the next link in an Xmission path. The cut-through arbiter and signals logic (18) then is invoked.
Cut-through Arbiter and Signals

The cut-through arbiter and signals (18) shown in FIGURE 1 is a find-first-one (FFO) chain. Cut-through port allocation priority is similar to that described in U.S. Patent No.
5,367,636 to Colley et al which issued November 22, 1994, but port allocation priority is not hard-wired. The last port in the channel that was selected is saved. The next time a search is made, the search starts just beyond that last position and a round-robin type of priority search is conducted.
Maze Routing

Refer to FIGURE 7, which is a maze routing example in an eight-processor, dimension-3 hypercube network. A source (src) node (000) attempts a transmission to a destination or target node (111) by sending scout packets along route search paths indicated by broken lines. Scout packets encounter blocked links illustrated by the mark "+". Links from 001 to 101 and from 011 to 111 are blocked in this example. A route ready acknowledge path is illustrated by the solid line from node 111 through nodes 110 and 010, back to the source node 000. The message is then sent out to the target node 111 over the path as illustrated by the bold solid line.
Cut-through hardware

To support multi-path routing, the receive and send channels (12, 14) are logically independent of the communication ports (22). Each send and receive channel therefore needs cut-through logic to direct data to or from the selected port or ports.
This cut-through logic is similar to that described in U.S. Patent No. 5,367,636, replicated for each of the 18 ports and 8 send and 8 receive channels.
Route-rejection logic

As a scout packet searches out a path, it must recognize a blocked or busy channel and respond with a rejection packet that retraces and deallocates each channel in the nodes of the scout packet's path. The retracing is done using the same baktrak paths and logic used by end-to-end acknowledge (ETE-ack) packets.
Maze route-selection and retry logic

The routes from any node along a path are selected by performing an exclusive OR (XOR) of a destination node-ID with a node ID.
This is just like cut-through port selection for a fixed-router as described in U.S. Patent No. 5,367,636, but all selected ports not already allocated are potential paths, rather than just the lowest one. The lowest unallocated port is selected by the routing logic. A port rejection from an allocated path causes the corresponding cut-through cell to be invalidated and the next selected port to be tried. If no valid port selections remain, a rejection message is passed back to the previous node or Send Channel in the path. A scout that arrives successfully at the destination node is directed to the next available Receive Channel and then retraces its path as a path ack message. If the Send Channel receives a path-ack message, it can start transmitting the requested message along the selected path. If all potential paths are rejected, the CPU is interrupted, at which time random wait and retry is invoked by the software.
Refer to FIGURE 8 which is a graph of message latency versus percent of network transmitting data. An adaptive maze router is represented by solid lines and a fixed wormhole router is represented by broken lines. Three message mixes are plotted:
small (16 packets), medium (128 packets), and large (1024 packets). The vertical axis is message latency, which is the number of packet time units to deliver the first packet. The horizontal axis is the percent of the network that is transmitting data, that is, the percent of network bandwidth transmitting data.
The message mixes described in TABLE I are plotted for both routing types in the graph of FIGURE 8.
TABLE I
[TABLE I entries are not legible in the source scan.]

As shown in the graph of FIGURE 8, the maze router out-performs a fixed wormhole router in most situations.
Message Protocols

End-to-End reporting

A transmission may optionally hold a path until an end-to-end (ETE) acknowledge is received back at the source node from the destination (target) node. The ETE ack or ETE
nak is sent back along the same source to target path, but in the reverse direction, from target to source, as the transmission that was delivered. The target to source path uses companion ports that transmit in the other direction along a back-track routing network. The ETE
nak includes error status that indicates "parity-error", "rcv count overflow", or "flushed". The ETE ack includes a 6-bit status field set up by software at the receiver end. ETE packets are not queued behind other messages along the companion path, but are inserted between normal message packets using the back-track routing network. ETE packets are delivered to the send channel, at the source node, that initiated the transmission.
Packet Formats

Messages are delivered via packets of different types. Every data transmission begins with a 32-bit address packet, followed by 72-bit data packets, which include 64 bits of data and 8 bits of packet overhead. Message data must be double-word aligned in memory.
Commands are delivered in 18-bit packets and are generated and interpreted by the hardware only.
Figure 9 is a diagram of an address (scout) packet (32 bits):

Start bits - 2
Packet type - 2
Node Address - 18
Forward bit - 1
Routing type - 3
Reserved - 2
Acknowledge - 2
Parity - 2

Figure 10 is a diagram of a data packet (72 bits):

Start bits - 2
Packet type - 2
Data - 64
Acknowledge - 2
Parity - 2

FIGURE 11 is a diagram of a command packet (18 bits):

Start bits - 2
Packet type - 2
Command - 4
Status - 6
Acknowledge - 2
Parity - 2

Routing types:
Bit 0 indicates "oblivious" routing, i.e. only a single route is possible at any intermediate node; non-adaptive.

Bit 1 indicates "progressive" routing, i.e. data wormholes behind the address - there is no circuit probe (scout packet).

Bit 2 indicates "alternate" routing, i.e. mis-route to non-minimum-path neighbors when further routing is otherwise blocked.
000 = "maze" routing: exhaustive back-track using a circuit probe (scout packet)
001 = "helix" routing: n minimum-path oblivious routes tried from the sender, using a circuit probe
010 = RESERVED
011 = oblivious wormhole routing (the only non-adaptive routing type)
100 = "alternate maze": maze until the source is fully blocked, then mis-route and maze through neighbor nodes, in turn as necessary
101 = "alternate helix": helix until the source is fully blocked, then mis-route and helix along non-minimum paths
110 = "hydra": maze progressively (take the 1st available minimum-path port at each intermediate node, wormholing data behind) until all paths are blocked, then mis-route and maze from the blocked node
111 = "oblivious hydra": oblivious wormhole (take only the 1st minimum-path port at each intermediate node, wormholing data behind) until the path is blocked, then mis-route and maze from the blocked node.
Packet types:
00 = address
01 = data
10 = bak-trak routing command (rt rej, rcv rej, fwd rej, rt ack, ETE ack, ETE nak)
11 = fwd message command (EOM, EOT, flush, reset)

Commands:

stat   cmd
xxxxxx 0000 = packet acknowledge
xxxxxx 0001 = route ack (path ack)
xxxxxx 0010 = route rejected (blocked) (path rejection)
xxxxxx 0011 = reserved
xxxxx0 0100 = rcv channel rejected or hydra route rejected
xxxxx1 0100 = parity_err flushed back
xxxxxx 0101 = forwarded route rejected
ssssss 0110 = ETE ack (ssssss = rcv code)
ssrrrr 0111 = ETE nack (rrrr = error status; ss = rcv code)
xxxxxx 1000 = EOM
xxxxx0 1001 = EOT - no ETE requested
xxxxx1 1001 = EOT - ETE requested
xxxxxx 101x = reserved
xxxxxx 1100 = reset CPU
xxxxxx 1101 = reset node
xxxxx0 1110 = flush_path
xxxxx1 1110 = parity_err flushed forward
xxxxxx 1111 = reserved

Node Addressing

Processor IDs and destination addresses are 18-bit unique values, specifying one of 256K
possible nodes in the system. The 18-bit physical node address of a target node is included in the address packet at the head of a message transmission, and as part of a "scout" packet (circuit probe) when a maze route is being established.
Logical to physical node address conversion, node address checking, and port vector calculation for a new Transmission Send are done directly by system software.
Message Routing

A message can be routed any of seven ways, as listed under the routing types table in the packet formats section above. A programmer selects the routing method via the routing-type field in the operand of the "Set Path" or "Send" instruction.
Oblivious wormhole routing is a "fixed" routing scheme. A message from a given source to a given destination takes exactly one unique predetermined routing path. The path taken is the lowest-order uphill minimum-length path. The message, node address followed immediately by message data, worms its way toward the destination node, not knowing if the path is free and never backing out, but blocking on busy ports as it encounters them and continuing on when the busy port frees up.
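The unique predetermined path of oblivious wormhole routing can be modeled as dimension-order traversal. The following Python sketch is an illustration only, assuming port i corresponds to hypercube dimension i (lowest-order bits corrected first):

```python
# Model of the fixed "lowest-order uphill minimum-length path": correct
# the differing address bits from the lowest dimension upward.

def oblivious_path(src: int, dst: int) -> list[int]:
    """Node sequence of the unique lowest-order minimum-length path."""
    path, node = [src], src
    diff = src ^ dst
    d = 0
    while diff:
        if diff & 1:               # dimension d differs: traverse that link
            node ^= (1 << d)
            path.append(node)
        diff >>= 1
        d += 1
    return path

assert oblivious_path(0b000, 0b111) == [0b000, 0b001, 0b011, 0b111]
```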
Maze routing in accordance with the present invention is an adaptive routing scheme. For a message from node A to node B, all minimum-length paths between the two nodes are searched one at a time (actually, paths in which the first leg is non-minimum may optionally be tried also) by a single-packet scout, starting with the lowest uphill path and doing a depth-first helical traversal of the minimum-path graph until a free path to the destination is found.
The successful arrival of a scout packet at the destination establishes the path. Then, once a path acknowledge packet is delivered back to the sender, this reserved path is used to transmit the message. If no free path is found, however, an interrupt is generated at the source node, whereupon the software may retry the path search after an appropriate delay or use alternate routing (and/or using a different set of first-leg paths).
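The depth-first search of the minimum-path graph can be modeled in Python. This is a behavioral sketch, not the hardware: it uses the blocked links of FIGURE 7 and a simple lowest-port-first order rather than the exact helical order:

```python
# Behavioral sketch of maze routing: a scout does a depth-first
# traversal of the minimum-path graph, backing out of a node (path
# rejection) when every minimum-path link from it is blocked.

def maze_route(src: int, dst: int, blocked: set, dim: int = 3):
    """Return a free minimum-length node path from src to dst, or None."""
    def scout(node, path):
        if node == dst:
            return path                      # scout reached the target
        diff = node ^ dst
        for d in range(dim):                 # lowest uphill link first
            if diff & (1 << d):
                nxt = node ^ (1 << d)
                if frozenset((node, nxt)) not in blocked:
                    found = scout(nxt, path + [nxt])
                    if found:
                        return found
        return None                          # all paths blocked: reject
    return scout(src, [src])

# FIGURE 7: links 001-101 and 011-111 are blocked; the scout finds
# the path 000 -> 010 -> 110 -> 111 shown by the solid line.
blocked = {frozenset((0b001, 0b101)), frozenset((0b011, 0b111))}
assert maze_route(0b000, 0b111, blocked) == [0b000, 0b010, 0b110, 0b111]
```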
Maze routing protocol

In a maze router, a transmission from a source (sender) node to a destination (target) node cannot be accomplished until a path is established from a Send Channel at the source node to a Receive Channel at the target node. The path is established as follows:
At the Sender node

Each of the source's Send channels has associated with it a send_port vector (SPORTVEC), provided to it by the software via a Send instruction, which indicates the output ports of the sender's node through which the routing will initially be attempted. These ports may or may not

start minimum-length paths. This first hop may thus route non-minimally, while all subsequent hops will take only minimum paths to the target. In other words, the maze router does an exhaustive search of minimum paths between a set of nodes, that set including the source node and/or some number of its immediate accessible neighbors, and the target node.
A scout packet, including the node address of the target node, is sent out the first of the source's selected output ports which is enabled and free, and is thus delivered to a neighboring node, the next node in a potential path to the target node. The path from the Send Channel to the selected output port is now locked, reserved for the pending transmission, unless a path rejection packet is subsequently received on the corresponding input port. If the selected output port receives a path rejection packet, because all paths beyond the next node are blocked, a new output port from the send_port vector will be selected, if available, and the scout packet sent out that port. When no more send_port vector output ports are available, because they were blocked or rejected, an "all_paths blocked" status is pushed into the ETE
queue for the respective Send channel, the CPU is interrupted, and the Send channel goes into a wait state, waiting for software to clear it. If, however, a path acknowledge packet is received, it is passed back to the Send_DMA Channel that initiated the search and the selected path remains reserved for the subsequent transmission, which can now be started.
At the Target node

A node that receives a scout packet at one of its input ports first compares the target node address from the scout packet with its own node ID. If they match, the scout packet has found the target node. If a receive channel is free, the scout packet is delivered to it and a path acknowledge packet is sent all the way back to the source (sender) node, retracing the successful legs of the scout's path. If a receive channel is not available, a path rejection packet, encoded as a "rcv channel-unavailable" command, is sent back to the source node via the established path and the input port is freed-up.
At Intermediate nodes

If a receiving node's node ID does not match the target node address, then this node is an intermediate node and it will attempt to deliver the scout packet to the next (neighboring) node along a minimum-length path to the target node. The XOR of the target node address with this node's node ID, the Hamming distance between them, indicates which output ports connect to minimum paths, and is latched in the IPORTVEC register as the cut-through vector. The output port paired with this input port, i.e. the link back to the node from where the scout just came, is disqualified from the cut-through vector, thus preventing any cycles that could be caused by non-minimal routes (which are allowed on the first hop). If folding is enabled, the bits corresponding to the folded partners of the cut-through vector are also asserted. The scout packet is sent out the first of the cut-through output ports, starting beyond this input port, which is enabled and free. The path from the input port to the selected output port is reserved for the pending transmission, unless and until a path rejection packet is received on the output port's companion input port. If a path rejection packet is received, because all minimum paths beyond the next node are blocked, a new cut-through port will be selected, if available, and the scout packet sent out that port. When no more cut-through ports are available from this node, a path rejection packet is sent to the previous node, the one from which the scout packet got here, and the input port is freed-up. If, however, a path acknowledge packet is received, it is passed back to the source node via the established path and the selected path remains reserved for the subsequent transmission.
The above process is continued recursively until a free path to the target is found and established, or until all desired paths from the source node have been tried and failed.
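The intermediate-node cut-through vector calculation can be sketched as follows. This is an illustrative model (folding omitted); the assumption that the companion port of the incoming link occupies the bit position of that link's dimension is the sketch's, not the specification's:

```python
# Model of the intermediate-node IPORTVEC: XOR marks the minimum-path
# output ports, and the port paired with the incoming link is
# disqualified so the scout never routes straight back.

def cut_through_vector(node_id: int, target: int, in_port: int) -> int:
    """Cut-through vector latched at an intermediate node."""
    return (node_id ^ target) & ~(1 << in_port)

# Scout for target 111 arriving at node 001 on the dimension-0 link:
assert cut_through_vector(0b001, 0b111, 0) == 0b110
```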
Path Cmd packets:
A scout returns path rejection status to the previous node, or path found status to the source node, by sending back a path cmd packet. Path cmd packets are sent back along a path using the path's "companion" ports, just like an ETE packet. There are two kinds of path cmd packets. A "path acknowledge" packet, indicating that the scout has established a path to the destination node, is delivered all the way back to the source, leaving the path established for the subsequent transmission. A "path rejection" packet, indicating that the scout has been completely blocked at an intermediate node, is delivered to the previous node in the path, clearing the path (this last hop) along the way. A new path from that node may now be tried or, if no new paths remain open from that node, it will in turn send a "path-rejection" packet to its antecedent node. If it has no antecedent node, i.e. it is the source node, the rejection packet is placed into the ETE queue, the Send DMA channel goes into a wait state, and the CPU is interrupted.
Routing Retry Using an Alternate Send Port Vector

If the routing logic fails to find a path using the given send_port vector, an alternative set of paths may optionally be attempted before interrupting the CPU.
When alternate routing is enabled, and after the initial set of routes has failed, the initial send_port vector is inverted and ANDed with the alternate port mask to create a new send_port vector. Then, a second attempt is made at finding a route, through neighboring nodes that were not used in the initial try. If the alternate routes also fail, the CPU is then interrupted in the usual manner.
Non-minimum paths through alternate send ports are exactly two hops longer than minimum, since all routing is minimum after the first hop. If a source and destination node are separated in j dimensions, the minimum path distance is j hops and the alternate path distance is j+2 hops.
Attempting alternate routes can be especially important for transmissions to target nodes that are only a short distance away. For example, there is only one minimum-length path to a connected neighbor, yet by attempting routes through all the other neighbors, there are a total of n unique paths to any nearest neighbor in a cube of dimension n as described by the alternate mask.
There is one Alternate Port Mask per node, but alternate routing is enabled on a per-transmission basis (a bit in the path-setup operand of the SEND instruction).
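The alternate send_port vector derivation described above is a single bit-vector operation; a minimal Python sketch (the 18-bit width is taken from the port count):

```python
# New send_port vector after the initial routes fail: the initial
# vector inverted, ANDed with the alternate port mask (ALTMSK).

def alternate_sportvec(initial: int, altmsk: int, width: int = 18) -> int:
    """Alternate SPORTVEC selecting neighbors not used in the first try."""
    return ~initial & ((1 << width) - 1) & altmsk

# Ports 0-2 failed; the alternate mask opens the remaining ports 3-5:
assert alternate_sportvec(0b000111, 0b111111, 6) == 0b111000
```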
Folding

Folding increases the number of output ports available for routing a message in a non-maximum-size system. Any of the connections from the lower 8 output ports to the corresponding input ports of nearest neighbor nodes can be duplicated on the upper 8 output ports, in reverse order, to the same nearest neighbor nodes. In other words, any subset of the interconnect network can be duplicated on otherwise unused upper ports.
If folding is enabled (see FOLDEN register, figure 6), then when a port vector (PORTVEC) is calculated at an intermediate node, any selected ports that are folded will enable their respective companion ports to also be selected into the port vector.
At any hop of a wormhole route, either of the two folded ports, that duplicate the link for the desired dimension, may be used. Folding thus greatly improves the chances of a wormhole route finding its way to the target with minimal or no blocking.
For a maze route, folding increases the number of minimum-path links that can be tried at each hop, and thus improves the chances of finding an open path.
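The folding augmentation of a port vector can be sketched in Python. The pairing "lower port p duplicates on upper port 15 - p" is this sketch's reading of "in reverse order" and is an assumption; the real pairing is hardware-defined:

```python
# Model of folding at port-vector calculation time: each selected
# lower port whose FOLDEN bit is set also enables its folded
# companion (assumed here to be port 15 - p) in the vector.

def apply_folding(portvec: int, folden: int) -> int:
    """Augment a port vector with folded companion ports."""
    out = portvec
    for p in range(8):
        if (portvec & (1 << p)) and (folden & (1 << p)):
            out |= 1 << (15 - p)     # assumed reverse-order pairing
    return out

# Ports 0 and 1 selected with folding enabled gain ports 15 and 14:
assert apply_folding(0b11, 0b1111_1111) == 0b1100_0000_0000_0011
```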
Forwarding

The maze router finds a route to the forwarding node, reserves that path, then transmits the next address (fetched from the message data) to that node, whereupon the address is maze-routed from there to the new node. This can be repeated as long as new addresses are forwarded, or until a route cannot be found, in which case the entire path is unraveled and deallocated and a "forward route rejected" command is delivered to the send channel's ETE
queue. On the other hand, if a path to the final target node is established, the message data is then transmitted normally from the source to the target.
Communication Direct Memory Access (DMA) Channels

A message is transmitted from a contiguous block of physical memory at the sender to a contiguous block of physical memory at the receiver, in increments of double-words (64 bits).
To provide memory access and message and path control at both ends of the transmission, there are eight Send DMA Channels and eight Receive DMA Channels at each processor.


DMA channels are set up with the appropriate SEND or RECEIVE instruction. A Set DMA instruction is also provided to assist in setting up the DMA operand of the SEND or RECEIVE instruction. The SEND and RECEIVE operands provide path control, messaging parameters, addresses, etc. for the DMA channels and routing logic.
In order to reduce page-mode page-break limitations on DMA memory bandwidth, each channel, send or receive, buffers up to 32 bytes of data. This corresponds to 4 double-word (64-bit) memory accesses. Messages must be aligned on double-word boundaries and sized in double-word-multiples.
Send DMA
Each Send channel has associated with it a physical memory address and a message length, stored in its DMA register, as well as a destination node ID and a send_port vector, stored in its Path register. The Send channels are double-buffered, such that the DMA
and Path control descriptors of the next message can be setup while the current one is being transmitted.
Communications software can use this feature to hide messaging overhead and to efficiently implement send-chaining.
After a Send channel has been setup for a new transmission, it first enters the routing state to establish a path to the target node. The path is established once the address packet is transmitted to the output port, if routing progressively, or when a path acknowledge packet is received by the channel, if routing maze.
If the node address is forwarded, the send channel enters the forwarding state and transmits address packets from the message data until the last address packet is not marked as forwarded. If routing maze, the channel waits for a path acknowledge after each address is transmitted.
Once a Send channel establishes a path to the target node, it commences reading the message data from memory and transmitting it along the path to the target node. As the message data is fetched, the memory address is incremented and the message length is decremented, until the length counter reaches zero. When the send counter reaches zero, an End-of-Message (EOM) or End-of-Transmission (EOT) packet is sent, depending on the EOT-enable bit of the channel setup.
If it's an EOM, the DMA register is cleared and a new one popped in from the Send buffer. If it's an EOT and ETE is not enabled, the DMA and Path registers are both cleared and reloaded from the Send buffer. If it's an EOT and ETE is enabled, the Send channel is not cleared in any way, but waits for the ETE packet. When the ETE packet arrives, it is pushed into the ETE
Queue, and the Send channel (both registers) is cleared. The Send channel then moves on directly to the next transmission (pops the Send buffer) if it's ready.
Whenever the Send buffer is popped due to an EOM or EOT condition, the CPU is also interrupted to indicate that a free Send channel is now available. ETE also generates an interrupt if interrupt is enabled.
When maze routing, the ETE queue is also pushed with status information if a route could not be found to the target node. In this case, the path rdy bit is cleared, an ETE
interrupt is raised, but the DMA channel is not popped, cleared, or reloaded. A programmer can subsequently clear the Send channel by writing to the corresponding DMA register.
An ongoing Send transmission can be stopped by clearing the DMA rdy bit in the channel's DMA register. This stops the transmission, but leaves it in the transmitting state. The DMA rdy bit can be cleared by writing a 1 to the respective bit, corresponding to the send channel, of the Send rdy register (see Send Channel Status Registers).
A blocked or stopped Send transmission can be flushed by writing a 1 to the respective bit, corresponding to the send channel, of the Send transmission rdy register (see Send Channel Status Registers).
When a message is flushed, a flush-cmd packet traverses the allocated path, clearing and deallocating the path behind it.

End-to-End Queue

For each Send channel there is an End-to-End (ETE) queue, into which ETE
status, from the target node's receive channel, or route rejection or error status is pushed.
When status is pushed into the ETE queue, an ETE interrupt is generated. The queue is 2 entries deep and a processor register, one for each send channel, contains both entries. A
programmer can read an ETE queue, without side effects, via a RDPR instruction. The programmer can then clear an ETE entry by writing a zero into its valid bit, via a WRPR instruction (though they must be read together, each entry in the queue can be written separately). When the first entry is cleared (popped) in this way, the second entry is automatically copied into its place and cleared. The Send channel cannot start a new transmission while the ETE Queue is full.
Send Operation

FIGURE 12 is a flow diagram of a send operation. From an idle state, the send channel enters the routing state (200). The first unallocated output port is selected from the send port vector (202). If a port is selected (204), the flow proceeds to block (206). The send channel allocates the selected port, and sends the address packet out of the selected output port (208). The send channel then waits for routing status to be returned via the route command (210).
When the route command arrives, the status (212) is either route rejected or route established.
If at block (212) the status is route rejected, the send channel clears the corresponding bit in the send port vector, clears port select, and deallocates the output port it had allocated at block (206). If the send port vector is now reduced to 0, and alternate routing is not enabled (205), or if enabled but this is not the first pass (207) through the sequence, the send channel pushes route rej status onto the ETE queue and if interrupt is enabled, the send channel interrupts the CPU (218). The send channel then enters the idle state (220).
If at block (212) the route is established, route ready is set (222) and the forward bit is checked (223). If the forward bit is set, the forwarding state is entered (225). If not, the message transmission state is entered (224). The send channel transmits data to the target node until the message count is 0.
If at block (204) a port is not selected, the flow proceeds to decision block (205). If alternate routing is enabled, and this is a first pass through the flow sequence (207), the SPORTVEC is made equal to an inverted version of the initial send_port vector (209). Thus, when all initially attempted routes fail using the initial SPORTVEC, the inverted version provides an alternate route select attempt as the flow proceeds to block (202). The first unallocated output port is selected from the now inverted send port vector (202). If a port is selected (204), the flow proceeds to block (206). If a port is not selected (204), the flow proceeds to block (205).
Alternate routing is enabled (205), but this is not the first pass (207) through the sequence, so the flow proceeds to block (218). The send channel pushes route rej status onto the ETE queue and, if interrupt is enabled, the send channel interrupts the CPU (218). The send channel then enters the idle state (220).
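The FIGURE 12 flow can be modeled as a loop in Python. This is a behavioral sketch: try_port stands in for sending a scout and awaiting the route command, and the AND with ALTMSK on the alternate pass follows the earlier alternate-routing description (an assumption relative to the flow diagram, which mentions only inversion):

```python
# Sketch of the send-channel routing loop of FIGURE 12: try each port
# in the send_port vector, clearing bits on rejection; after the
# initial vector is exhausted, one alternate pass may be attempted.

def send_route(sportvec, altmsk, try_port, alt_enabled, width=18):
    """Return the established output port, or None (route rej -> ETE)."""
    mask = (1 << width) - 1
    vec = sportvec
    for first_pass in (True, False):
        p = 0
        while vec:
            if vec & (1 << p):
                if try_port(p):
                    return p                 # route ready (222)
                vec &= ~(1 << p)             # clear bit, deallocate (214)
            p = (p + 1) % width
        if not (alt_enabled and first_pass):
            break
        vec = ~sportvec & mask & altmsk      # alternate vector (209)
    return None   # push route rej to ETE queue, interrupt CPU (218)
```

For example, if ports 0 and 1 reject but alternate port 4 accepts, send_route returns 4 after trying ports 0, 1, 2, 3, 4 in order.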
Receive DMA
Each Receive channel has associated with it a physical memory address and a message length (also called the receive count), stored in its respective DMA register. It also has a rcv status register that includes error status and the receive code. As a message flows through the channel, the address increments and the message length decrements, until the length counter reaches zero or until an EOM/EOT packet is received.
If a Receive channel receives an EOM or EOT before the counter has reached zero, or immediately after it reached zero, the message has successfully completed and the channel returns to the idle state, clearing dma rdy. If no receive errors occurred during the reception, a rcv rdy interrupt is raised. Otherwise, a rcv err interrupt is raised.
For example, if a parity error is detected anywhere along the transmission path, a parity_err flush message is delivered forward to the receive channel of the target (as well as back to the send channel of the sender). The parity error or flush bits in the receive status field are set and the target CPU is interrupted with a rcv err interrupt by the receive channel.

If the receive counter reaches zero, the message should be complete and the next packet should be an EOM or EOT. If it is not, the rcv count overflow flag in the receive status field is set, and all further packets are ignored, i.e. simply shifted into oblivion, until an EOM or EOT is received, at which point a rcv err interrupt is generated. The counter wraps and continues to decrement (the address does not increment), thus providing a way for a programmer to calculate how far the message overflowed.
A programmer can read the receive status, message count, etc. at any time, by simply reading the processor registers associated with the channel.
Scatter/Gather at the Receive Channel
To facilitate fast "gather" functions at the receiver, the programmer can optionally set the "ignore EOM" flag at the receive channel for a given transmission (see the Receive instruction description). Thus, the sender may gather disjoint bundles of data, as individual messages, into a single transmission, and the receiver can be set up to ignore the message boundaries for the length of the entire transmission, and thus store the bundles sequentially in a single DMA operation, rather than taking an interrupt and setting up a new receive DMA after every message.
To implement a "scatter" function, the programmer can optionally set the "force EOM" flag at the receive channel. Thus, the sender may deliver a sequential block of data in one message, and the receiver can be set up to force message boundaries for sub-lengths of the transmission, and thus scatter the data in sub-blocks to different areas in memory. The receive channel is set up with a length shorter than the incoming message, and when the length counter drops to zero, the receive channel treats it as an EOM and blocks the incoming data until new DMA parameters are set up by the programmer. This is especially useful for DMAing a message across virtual page boundaries that may map to disjoint physical memory pages.
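The effect of the two flags can be sketched together. The flag names follow the text; the packet model and function name are illustrative assumptions:

```python
def deliver(packets, ignore_eom=False, force_eom_len=None):
    """Sketch of the receive-channel "ignore EOM" (gather) and "force EOM"
    (scatter) flags. Returns the list of DMA blocks as stored in memory."""
    blocks, current = [], []
    for p in packets:
        if p == "EOM":
            if ignore_eom:
                continue                        # gather: run messages together
            blocks.append(current)
            current = []
            continue
        current.append(p)
        if force_eom_len and len(current) == force_eom_len:
            blocks.append(current)              # scatter: forced boundary at
            current = []                        # the programmed sub-length
    if current:
        blocks.append(current)
    return blocks
```

With `ignore_eom` set, two sender messages land in one contiguous block; with `force_eom_len`, one message is split into fixed-size sub-blocks.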
Routing From an Input Port
FIGURE 13 is a flow diagram of address packet input port operation. The input port receives an address packet (300) and computes the exclusive OR of the address in the address packet with the Node ID of this node (302). The result is ID diff. If ID diff is 0 or if the input port is designated as a terminal (304), then the flow proceeds to block (322). If not, then the flow proceeds to block (306).
At block (306) the port vector (portVec) is generated and used to select the first unallocated output port (308).
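Blocks (302) and (306)-(308) can be sketched as follows. The bit-vector encoding, in which bit n of portVec corresponds to output port n, is an assumption made for the sketch; the patent does not specify the encoding at this point:

```python
def id_diff(node_id, target_addr):
    # Block (302): exclusive OR of the packet address with this node's ID.
    # A result of 0 means this node is the target.
    return node_id ^ target_addr

def first_unallocated(port_vec, allocated):
    """Block (308): select the first output port whose bit is set in the
    port vector and that is not already allocated. Returns the port
    number, or None if no candidate port remains."""
    bit = 0
    while port_vec >> bit:
        if (port_vec >> bit) & 1 and bit not in allocated:
            return bit
        bit += 1
    return None
```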
At block (310), if a port is not selected, then the input port sends a route reject command via the output port paired with this input port (335), and waits for a new address packet (336).
If a port is selected (310), then an address packet is forwarded to the next node via the selected output port (312) and the port is allocated. The transmission path through this node is now setup and the input port waits for routing status that will be supplied by an incoming route command (314). A route command (316) will either indicate that the route is rejected or that the route is ready. If rejected, the flow proceeds to block (318). If ready, the flow proceeds to block (330).
At block (318), the receive channel clears the corresponding bit in the port vector, clears port select, and deallocates the output port allocated at block (312). The input port selects the next unallocated output port from the port vector (308) via the routing logic, and the flow proceeds as described above.
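The retry loop across blocks (308)-(318) amounts to backtracking over the port vector. A hedged sketch, with `try_port` standing in for the forward-and-await-route-command step (the callback is an illustration, not a patent mechanism):

```python
def route_with_backtrack(port_vec, try_port):
    """Try each unallocated output port named in the port vector in turn;
    on a route reject, clear that port's bit (block 318) and try the next.
    try_port(port) returns True when the route command says route ready.
    Returns the port used, or None if every candidate is rejected."""
    vec = port_vec
    while vec:
        port = (vec & -vec).bit_length() - 1  # lowest set bit: next candidate
        if try_port(port):
            return port                       # route ready (block 330)
        vec &= vec - 1                        # reject: clear this port's bit
    return None                               # send route reject back (335)
```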
At decision block (304), if the node ID is equal to the address in the address packet or this port is terminal, then this node is the target node and the flow proceeds to block (322).
At block (322) the port vector (portVec) is generated and used to select the first ready receive channel (324). If a channel is selected (326), then the input port allocates a receive channel to receive the message (328). The input port sends a route ready (route rdy) command via the output port paired with this input port (330) and waits for message data to arrive (332).

At block (326), if a channel is not selected, then the input port sends a route reject command via the output port paired with this input port (335) and waits for a new address packet (336).
End-to-End Reporting
FIGURE 14 illustrates end-to-end acknowledge. At the source node (350), the send channel sends a message packet out of an output port (352) to an intermediate node (355) that receives the message at an input port (354). The message is sent by the intermediate node out of an output port (356). The message travels from node to node until the target node (358) receives the message packet. A receive channel is allocated (362) and an ETE ack message is sent back over the same path by using the output ports that are paired with the respective input ports in the path (ports 361, 353, and 351). The message path is held until the ETE ack is received at the source node, and receive status is returned with the ETE ack. For each Send channel there is an End-to-End (ETE) queue, into which ETE status is pushed. When End-to-End status is pushed into the ETE queue, a Send rdy or ETE interrupt is generated, depending on the status.
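The ack's return route can be sketched as a traversal of the held path in reverse, leaving each node through the output port paired with the input port the message arrived on. The port numbers and the `pair` map below are illustrative, not taken from FIGURE 14:

```python
def ete_ack_path(forward_input_ports, pair):
    """Return the sequence of output ports the ETE ack uses to retrace the
    message path. `forward_input_ports` lists the input ports the message
    arrived on, source-to-target order; `pair` maps each input port to its
    paired output port. The ack walks the path in reverse."""
    return [pair[p] for p in reversed(forward_input_ports)]
```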
FIGURE 15 is a maze route timing diagram wherein there are no blocked links.
FIGURE 16 is a maze route timing diagram wherein there are blocked links and wherein backtracking is invoked.
While the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (24)

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. In a network of interconnected nodes;
each node in said network being assigned a unique identification (ID);
a sending node;
said sending node originating an address packet having a target node address of a target node;
each node in said network including comparing means for comparing said target node address with an ID of said node;
said comparing means creating a first condition provided that said ID is not equal to said target node address, indicating that a node is an intermediate node and, alternatively, a second condition provided that said ID is equal to said target node address, indicating that a node is said target node;
a plurality of input ports at each of said nodes;
a plurality of output ports at each of said nodes;
each one of said input ports being paired with a corresponding one of said output ports;

control means at one input port of said input ports of a particular node for receiving said address packet transmitted from an output port of a previous node of said interconnected nodes;
allocating means, operative upon occurrence of said first condition indicating that said particular node is an intermediate node, for allocating to said one input port, one of said output ports of said particular node, but excluding the output port paired with the input port over which said address packet is received;
an improvement characterized by:
means at said particular node for establishing a path to a next node upon occurrence of said first condition at said particular node, resulting in an established path from said sending node through said particular node; and, means for sending a path command packet from said particular node back to said sending node, said path command packet being sent out of a particular node output port paired with a particular node input port over which said address packet was received.
2. The improvement of claim 1 further characterized by:
said path command packet being encoded to return path rejection status to a previous node, indicating that a path is not established at said particular node, or path found status to said sending node, indicating that a path is established to said target node through said particular node.
3. The improvement of claim 1 further characterized by:
means, operative upon occurrence of said second condition indicating that said particular node is a target node, for sending a path command packet from said target node back to said sending node;
said path command packet being sent out of a target node output port paired with a target node input port over which said address packet was received;

said path command packet being encoded to return path found status to said sending node, indicating that a path is established to said target node through said intermediate node;
each intermediate node in an established path sending said path command packet to a previous node out of an output port paired with an input port over which said message packet was received.
4. The improvement of claim 1 further characterized by:
a port vector;
said allocating means selecting a first unallocated output port from said port vector upon occurrence of said first condition; and, forwarding means for forwarding said address packet to said next node over said first unallocated output port.
5. The improvement of claim 4 further characterized by:
said means for establishing a path to a next node including means for sequentially selecting unallocated output ports from said port vector upon occurrence of said first condition; and, means for encoding said path command packet to return path rejection status to a previous node over an output port paired with said input port on which said address packet was received, upon a condition that selecting unallocated output ports from said port vector results in no path being established to said next node.
6. The improvement of claim 4 further characterized by:
a fold enable vector indicating which of said plurality of output ports can be folded resulting in a folding partner for a chosen output port, said folding partner being an output port through which a message may be routed;

such that if said chosen output port is not available for routing but said folding partner is available, said folding partner is considered as a potential routing candidate.
7. The improvement of claim 3 further characterized by:
sending an end to end acknowledge command packet out of said target node output port paired with said target node input port on which a message packet is received, upon a condition that a message transmission has completed.
8. The improvement of claim 7 further characterized by:
disconnecting an intermediate node output port from an intermediate node input port paired with said intermediate node output port, in response to said end to end acknowledge command packet.
9. The improvement of claim 1 further characterized by:
a plurality of send channels at said sender node;
each of said send channels having associated with it a send port vector;
said send port vector indicating selected output ports of said sender node through which routing is to be attempted;
a scout packet;
said scout packet being sent out of a first output port of said selected output ports of said sender node, and thereby delivered to an intermediate node, said intermediate node being a next node in a potential path to said target node;
means for locking a path from said send channel to said first of said selected output ports of said sender node, so that said output port is reserved for a pending transmission;
and, means for unlocking said path from said send channel to said first output port upon a condition that a path rejection packet is subsequently received on a corresponding input port paired with said first output port.
10. The improvement of claim 9 further characterized by:
inverting said send port vector to provide an alternate send port vector upon a condition that a routing status is route rejected; and, using said alternate send port vector to select a first unallocated output port from said alternate send port vector to provide said selected output port.
11. The improvement of claim 9 further characterized by:
means for selecting a new output port from said send port vector upon said condition that a path rejection packet is received on said corresponding input port paired with said first output port; and, means for sending said scout packet out said new output port.
12. The improvement of claim 3 further characterized by:
means for starting a transmission from said sender node to said target node over a path established from a send channel at said sender node to a receive channel at said target node, upon a condition that a path acknowledge packet is received at said sender node on said corresponding input port paired with said output port.

13. In a network of interconnected nodes;
each node in said network being assigned a unique identification (ID) and including a plurality of input ports and a plurality of output ports;
a sending node originating an address packet having a target node address of a target node;
each node in said network including comparing means for comparing said target node address with a unique identification (ID) of said node;
a method comprising steps of:
A. creating a first condition provided that said ID
is not equal to said target node address, indicating that a node is an intermediate node and, alternatively, a second condition provided that said ID is equal to said target node address, indicating that a node is said target node;
B. receiving, at one input port of a particular node, said address packet transmitted from a previous node of said interconnected nodes;
C. allocating, upon occurrence of said first condition indicating that said particular node is an intermediate node, to said one input port, one of said output ports of said particular node, but excluding an output port paired with an input port over which said address packet is received, said particular node establishing a path to a next node upon occurrence of said first condition at said particular node, resulting in an established path from said sending node through said particular node; and, D. sending a path command packet from said particular node back to said sending node out of a particular node output port paired with a particular node input port over which said address packet is received.
14. The method of claim 13 further comprising steps of:
E. encoding said path command packet to return path rejection status to a previous node, indicating that a path is not established at said particular node, or path found status to said sending node, indicating that a path is established to said target node through said particular node.
15. The method of claim 13 further comprising steps of:
E. sending, upon occurrence of said second condition indicating that said particular node is a target node, a path command packet from said target node back to said sending node out of a target node output port paired with a target node input port over which said address packet was received;
F. encoding said path command packet to return path found status to said sending node, indicating that a path is established to said target node through said intermediate node; and, G. sending from each intermediate node in an established path said command message to a previous node out of an output port paired with an input port over which said message packet was received.
16. The method of claim 13 further comprising steps of:
E. providing a port vector;
F. selecting a first unallocated output port from said port vector upon occurrence of said first condition; and, G. forwarding means for forwarding said address packet to said next node over said first unallocated output port.
17. The method of claim 16 further comprising steps of:
H. sequentially selecting unallocated output ports from said port vector upon occurrence of said first condition; and, I. sending said path command packet encoded to return path rejection status to a previous node over an output port paired with said input port on which said address packet was received, upon a condition that the step of sequentially selecting unallocated output ports from said port vector results in no path being established to said next node.
18. The method of claim 16 further comprising steps of:
H. providing a fold enable vector indicating which of said plurality of output ports can be folded resulting in a folding partner for a chosen output port, said folding partner being an output port through which a message may be routed;
and, I. using said folding partner as potential routing candidate upon a condition that said chosen output port is not available for routing but said folding partner is available.
19. The method of claim 16 further comprising steps of:

H. sending an end to end acknowledge command packet out of said target node output port paired with said target node input port on which a message packet is received, upon a condition that a message transmission has completed.
20. The method of claim 15 further comprising steps of:
I. disconnecting an intermediate node output port from an intermediate node input port paired with said intermediate node output port, in response to said end to end acknowledge command packet.
21. The method of claim 13 further comprising steps of:
E. providing a plurality of send channels at said sender node;
each of said send channels having associated with it a send port vector;
said send port vector indicating selected output ports of said sender node through which routing is to be attempted;
F. sending a scout packet out of a first output port of said selected output ports of said sender node, and thereby delivered to an intermediate node, said intermediate node being a next node in a potential path to said target node;
G. locking a path from said send channel to said first of said selected output ports of said sender node, so that said output port is reserved for a pending transmission; and, H. unlocking said path from said send channel to said first output port upon a condition that a path rejection packet is subsequently received on a corresponding input port paired with said first output port.

22. The method of claim 21 further comprising steps of:
I. inverting said send port vector to provide an alternate send port vector upon a condition that a routing status is route rejected; and, J. using said alternate send port vector to select a first unallocated output port from said alternate send port vector to provide said selected output port.
23. The method of claim 22 further comprising steps of:
I. selecting a new output port from said send port vector upon said condition that a path rejection packet is received on said corresponding input port paired with said first output port; and, J. sending said scout packet out said new output port.
24. The method of claim 15 further comprising steps of:
H. starting a transmission from said sender node to said target node over a path established from a send channel at said sender node to a receive channel at said target node, upon a condition that a path acknowledge packet is received at said sender node on said corresponding input port paired with said first output port.
CA002196567A 1994-08-01 1995-07-28 Network communication unit using an adaptive router Expired - Fee Related CA2196567C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US08/283,572 1994-08-01
US08/283,572 US5638516A (en) 1994-08-01 1994-08-01 Parallel processor that routes messages around blocked or faulty nodes by selecting an output port to a subsequent node from a port vector and transmitting a route ready signal back to a previous node
PCT/US1995/009474 WO1996004604A1 (en) 1994-08-01 1995-07-28 Network communication unit using an adaptive router

Publications (2)

Publication Number Publication Date
CA2196567A1 CA2196567A1 (en) 1996-02-15
CA2196567C true CA2196567C (en) 2001-03-13

Family

ID=23086669

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002196567A Expired - Fee Related CA2196567C (en) 1994-08-01 1995-07-28 Network communication unit using an adaptive router

Country Status (9)

Country Link
US (1) US5638516A (en)
EP (1) EP0774138B1 (en)
JP (1) JP3586281B2 (en)
KR (1) KR100244512B1 (en)
AT (1) ATE173101T1 (en)
AU (1) AU694255B2 (en)
CA (1) CA2196567C (en)
DE (1) DE69505826T2 (en)
WO (1) WO1996004604A1 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6449730B2 (en) 1995-10-24 2002-09-10 Seachange Technology, Inc. Loosely coupled mass storage computer cluster
US5862312A (en) 1995-10-24 1999-01-19 Seachange Technology, Inc. Loosely coupled mass storage computer cluster
US5761534A (en) * 1996-05-20 1998-06-02 Cray Research, Inc. System for arbitrating packetized data from the network to the peripheral resources and prioritizing the dispatching of packets onto the network
JP2993444B2 (en) * 1996-10-25 1999-12-20 日本電気株式会社 Connection setting and recovery method in ATM network
US6230200B1 (en) * 1997-09-08 2001-05-08 Emc Corporation Dynamic modeling for resource allocation in a file server
US6064647A (en) * 1998-05-13 2000-05-16 Storage Technology Corporation Method and system for sending frames around a head of line blocked frame in a connection fabric environment
US6611874B1 (en) * 1998-09-16 2003-08-26 International Business Machines Corporation Method for improving routing distribution within an internet and system for implementing said method
AU755189B2 (en) * 1999-03-31 2002-12-05 British Telecommunications Public Limited Company Progressive routing in a communications network
US6378014B1 (en) * 1999-08-25 2002-04-23 Apex Inc. Terminal emulator for interfacing between a communications port and a KVM switch
JP3667585B2 (en) * 2000-02-23 2005-07-06 エヌイーシーコンピュータテクノ株式会社 Distributed memory type parallel computer and its data transfer completion confirmation method
EP1281142A4 (en) * 2000-03-07 2006-01-11 Invinity Systems Corp Inventory control system and methods
US6996538B2 (en) 2000-03-07 2006-02-07 Unisone Corporation Inventory control system and methods
US6681250B1 (en) * 2000-05-03 2004-01-20 Avocent Corporation Network based KVM switching system
US7461150B1 (en) * 2000-07-19 2008-12-02 International Business Machines Corporation Technique for sending TCP messages through HTTP systems
US6738842B1 (en) * 2001-03-29 2004-05-18 Emc Corporation System having plural processors and a uni-cast/broadcast communication arrangement
US20020199205A1 (en) * 2001-06-25 2002-12-26 Narad Networks, Inc Method and apparatus for delivering consumer entertainment services using virtual devices accessed over a high-speed quality-of-service-enabled communications network
US7899924B2 (en) * 2002-04-19 2011-03-01 Oesterreicher Richard T Flexible streaming hardware
US20040006635A1 (en) * 2002-04-19 2004-01-08 Oesterreicher Richard T. Hybrid streaming platform
US20040006636A1 (en) * 2002-04-19 2004-01-08 Oesterreicher Richard T. Optimized digital media delivery engine
TW200532454A (en) * 2003-11-12 2005-10-01 Gatechange Technologies Inc System and method for message passing fabric in a modular processor architecture
WO2006012418A2 (en) * 2004-07-21 2006-02-02 Beach Unlimited Llc Distributed storage architecture based on block map caching and vfs stackable file system modules
JP4729570B2 (en) * 2004-07-23 2011-07-20 ビーチ・アンリミテッド・エルエルシー Trick mode and speed transition
US8427489B2 (en) * 2006-08-10 2013-04-23 Avocent Huntsville Corporation Rack interface pod with intelligent platform control
US8009173B2 (en) * 2006-08-10 2011-08-30 Avocent Huntsville Corporation Rack interface pod with intelligent platform control
US8095769B2 (en) * 2008-08-19 2012-01-10 Freescale Semiconductor, Inc. Method for address comparison and a device having address comparison capabilities
JP2014092722A (en) * 2012-11-05 2014-05-19 Yamaha Corp Sound generator
US9514083B1 (en) * 2015-12-07 2016-12-06 International Business Machines Corporation Topology specific replicated bus unit addressing in a data processing system
JP2018025912A (en) * 2016-08-09 2018-02-15 富士通株式会社 Communication method, communication program and information processing device
JP2021157604A (en) * 2020-03-27 2021-10-07 株式会社村田製作所 Data communication device and data communication module

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5282270A (en) * 1990-06-06 1994-01-25 Apple Computer, Inc. Network device location using multicast
US5150464A (en) * 1990-06-06 1992-09-22 Apple Computer, Inc. Local area network device startup process
US5367636A (en) * 1990-09-24 1994-11-22 Ncube Corporation Hypercube processor network in which the processor identification numbers of two processors connected to each other through port number n, vary only in the nth bit
JPH06500655A (en) * 1990-10-03 1994-01-20 スィンキング マシンズ コーポレーション parallel computer system
US5471623A (en) * 1991-02-26 1995-11-28 Napolitano, Jr.; Leonard M. Lambda network having 2m-1 nodes in each of m stages with each node coupled to four other nodes for bidirectional routing of data packets between nodes
US5151900A (en) * 1991-06-14 1992-09-29 Washington Research Foundation Chaos router system
US5471589A (en) * 1993-09-08 1995-11-28 Unisys Corporation Multiprocessor data processing system having nonsymmetrical channel(x) to channel(y) interconnections

Also Published As

Publication number Publication date
CA2196567A1 (en) 1996-02-15
AU3150595A (en) 1996-03-04
AU694255B2 (en) 1998-07-16
ATE173101T1 (en) 1998-11-15
DE69505826D1 (en) 1998-12-10
DE69505826T2 (en) 1999-05-27
EP0774138A1 (en) 1997-05-21
KR100244512B1 (en) 2000-02-01
EP0774138B1 (en) 1998-11-04
WO1996004604A1 (en) 1996-02-15
US5638516A (en) 1997-06-10
JPH10507015A (en) 1998-07-07
JP3586281B2 (en) 2004-11-10

Similar Documents

Publication Publication Date Title
CA2196567C (en) Network communication unit using an adaptive router
US5347450A (en) Message routing in a multiprocessor computer system
EP0169208B1 (en) Self-routing packet switching network
US5181017A (en) Adaptive routing in a parallel computing system
US5367636A (en) Hypercube processor network in which the processor identification numbers of two processors connected to each other through port number n, vary only in the nth bit
US5450578A (en) Method and apparatus for automatically routing around faults within an interconnect system
EP0391583B1 (en) Dual-path computer interconnect system with four-ported packet memory control
US5020020A (en) Computer interconnect system with transmit-abort function
US6453406B1 (en) Multiprocessor system with fiber optic bus interconnect for interprocessor communications
US5175733A (en) Adaptive message routing for multi-dimensional networks
JP2566681B2 (en) Multi-processing system
JP3657428B2 (en) Storage controller
US5187780A (en) Dual-path computer interconnect system with zone manager for packet memory
US5398317A (en) Synchronous message routing using a retransmitted clock signal in a multiprocessor computer system
US7643477B2 (en) Buffering data packets according to multiple flow control schemes
KR100259276B1 (en) Interconnection network having extendable bandwidth
US5594866A (en) Message routing in a multi-processor computer system with alternate edge strobe regeneration
US6385657B1 (en) Chain transaction transfers between ring computer systems coupled by bridge modules
US8885673B2 (en) Interleaving data packets in a packet-based communication system
Scott The SCX channel: A new, supercomputer-class system interconnect
US5495589A (en) Architecture for smart control of bi-directional transfer of data
JP4179303B2 (en) Storage system
Lu et al. A fault-tolerant multistage combining network
CN114157401A (en) Retransmission buffer device supporting long and short message formats
JPH0624361B2 (en) Data transmission method

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed