CA2168035A1 - Variable latency cut-through bridging - Google Patents

Variable latency cut-through bridging

Info

Publication number
CA2168035A1
Authority
CA
Canada
Prior art keywords
bridge
variable
point
data packet
packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002168035A
Other languages
French (fr)
Inventor
Bernard N. Daines
Lazar Birenbaum
Richard J. Hausman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CA2168035A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46 - Interconnection of networks
    • H04L12/4604 - LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/462 - LAN interconnection over a bridge based backbone
    • H04L12/4625 - Single bridge functionality, e.g. connection of two networks over a single bridge

Abstract

A variable latency cut through bridge (210) for selectively forwarding data packets (10) within a network (310) of computers (312), the variable latency cut through bridge (210) employing a variable latency bridging method wherein the latency factor of the variable latency cut through bridge (210) is set according to the position of a variable threshold point (428). The variable threshold point (428) is optionally set to within a rapid drop off portion (520) of a probability line (514) describing the probability that the data packet (10) is bad as a function of the amount of the packet (10) which has been examined within the variable latency cut through bridge (210).

Description

WO 95/04970 PCT/US94/08656

VARIABLE LATENCY CUT-THROUGH BRIDGING
TECHNICAL FIELD
The present invention relates generally to the field of computer science and more particularly to an improved device and method for communicating between computers. The predominant current usage of the variable threshold network packeting method is in computer networks wherein a number of individual computers are interconnected for the sharing of programs and data.

BACKGROUND ART

The interconnection of computers such that programs and data can be shared among a network of computers is presently a subject of much interest. A number of different methods and means for communicating program and/or file data between computers have been devised, and some of these have developed into standards which allow for the interconnection of computer devices which are in compliance with such standards. A specification for one such convention is found in the Institute of Electrical and Electronics Engineers ("IEEE") standard 802.3. This standard specifies the protocol for a Local Area Network ("LAN") communications method which is commonly referred to as "Ethernet" or, more descriptively, as "carrier sense, multiple access with collision detection" ("CSMA/CD").

Groups of computers connected via LANs in general and Ethernet in particular can be broken into segments or separate LANs on an application and/or a geographical basis. Each segment or LAN can consist of one or more computers. The segments and LANs may be connected together in a topology by switching elements employing a variety of information forwarding schemes. Each segment of an interconnected LAN is electrically distinct but logically continuous in that information transmitted from one computer to another appears on all segments of a network. Connected LANs are not only electrically distinct but are also logically separate in that information is selectively forwarded from one LAN of an interconnected network to some subset of the other LANs of the network, depending upon the topology of the segments and information forwarding schemes of the network switching elements.
In Ethernet, as in several other computer intercommunication methods, information is communicated in units sometimes referred to as "packets". An Ethernet packet is depicted in Fig. 1 and is designated therein by the general reference character 10. The standardized Ethernet packet 10 has a preamble 12 which is 64 bits in length, a destination address 14 which is 48 bits in length, a source address 16 which is 48 bits in length, a length/type field 18 which is 16 bits in length and a data field 20 which is variable in length from a minimum of 46 eight bit bytes to a maximum of 1500 bytes. Following the data field 20 in the packet 10 is a 4 byte (32 bit) frame check sequence ("FCS") 22. The packet 10 is transmitted serially beginning at a "head" 24 and ending at a "tail" 26 thereof.
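The field sizes just described can be summarized in a short sketch. This is an illustrative Python fragment, not part of the patent; the field names are assumptions, while the sizes follow the IEEE 802.3 figures given above.

```python
# Field sizes (in bytes) of the standardized Ethernet packet 10; the
# names are illustrative, the values follow the description above.
ETHERNET_FIELDS = [
    ("preamble", 8),             # 64 bits
    ("destination_address", 6),  # 48 bits
    ("source_address", 6),       # 48 bits
    ("length_type", 2),          # 16 bits
    # data field: 46..1500 bytes, variable
    ("fcs", 4),                  # 32-bit frame check sequence
]

DATA_MIN, DATA_MAX = 46, 1500

def frame_length(data_bytes: int) -> int:
    """Total on-the-wire length of a packet carrying `data_bytes` of data."""
    if not DATA_MIN <= data_bytes <= DATA_MAX:
        raise ValueError("data field must be 46..1500 bytes")
    fixed = sum(size for _, size in ETHERNET_FIELDS)
    return fixed + data_bytes

print(frame_length(DATA_MIN))  # shortest legal packet: 72 bytes
print(frame_length(DATA_MAX))  # longest legal packet: 1526 bytes
```

The fixed overhead sums to 26 bytes, so legal packets range from 72 to 1526 bytes including the preamble.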
In CSMA/CD (Ethernet), computers and switching elements having a packet 10 destined for a particular computer of the network "listen" for the appropriate segment of a LAN to be quiet before transmitting the packet 10. This feature is to avoid interference on the segment and is the "carrier sense" aspect of CSMA/CD. "Multiple access" relates to the distributed nature of the decision making among the computers and switching elements that access a particular LAN.

Despite the carrier sense function it is, nevertheless, possible for more than one computer or switching element to have a packet 10 ready to send to a LAN at precisely the same time. In such an instance, when both units sense quiet on the segment, both begin to transmit at the same time. Each of these transmitting computers and/or switching elements will then detect that a "collision" has occurred and will abort its respective transmission. The resulting incomplete (improperly formed) packets 10 are known as "runts".
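Because an aborted transmission leaves a fragment shorter than any legal packet, a runt can be recognized purely by length. A minimal sketch, assuming the standard 802.3 minimum frame length of 64 bytes excluding the preamble; the function name is invented for illustration.

```python
# Minimum legal 802.3 frame length excluding the preamble:
# 6 (dest) + 6 (src) + 2 (length/type) + 46 (min data) + 4 (FCS) = 64 bytes.
MIN_FRAME_BYTES = 64

def is_runt(received_bytes: int) -> bool:
    """A fragment shorter than the minimum legal frame can only be the
    remnant of an aborted (collided) transmission, i.e. a runt."""
    return received_bytes < MIN_FRAME_BYTES

print(is_runt(12))   # short collision fragment
print(is_runt(64))   # minimum complete frame
```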

Various different types of switching elements have been utilized to electrically interconnect LANs and segments of LANs. For example, a "repeater" is a simple switching element which interconnects segments of a LAN. The function of a repeater is merely to receive a data stream from one segment of the LAN and forward it on to the other connected segments of the LAN. The carrier sense and collision detect functions of CSMA/CD take place on all segments of a LAN simultaneously, with all computers and switching elements listening for quiet and/or detecting collisions in parallel. All the segments of a LAN interconnected by repeaters are said to be in the same "collision domain", since only one packet 10 can traverse the LAN at a time no matter what the arrangement of the segments of the LAN. Multiple repeaters can connect numerous segments into a single LAN.
A "bridge" is a somewhat more sophisticated switching element in that it directs data streams between LANs and can, in fact, forward more than one packet 10 at a time with the restriction, discussed above, that only one packet 10 at a time is allowed on each of the connected LANs, whether it be transmitting or receiving. Packets received from LANs are directed to their intended destinations by selecting which of the LAN(s) are to receive a particular packet 10. Given the description of the packet 10 previously discussed herein, it can be appreciated that a bridge must have some buffering capability, as it cannot ascertain the intended destination of a packet 10 at least until the destination address 14 is received and interpreted. A so called "standard bridge" receives the packet 10 into its buffer before forwarding it. A "cut through bridge" attempts to speed up the process by beginning to forward the packet 10 before it is fully received (typically, as soon as the destination address 14 is received at the bridge). However, it may not be possible to forward the packet 10 as soon as the destination address 14 is received, since the destination LAN may not be quiet (for example, because another computer or switching element of the destination LAN is transmitting, or for any of various other reasons). Therefore, a bridge should have the capability of buffering substantially more than one packet 10 so that packets 10 can be queued for subsequent sending therefrom. Furthermore, a bridge may be required to retransmit a packet 10 if there is a collision in the destination LAN. This "buffering" in the bridge is required so as to avoid "reflecting" the collision to the source LAN.
The scheme discussed above may seem to be rather simple in description, but it becomes somewhat more complicated in practice. For example, since a number of devices may be competing for access to a particular network LAN there will, as previously mentioned, occur collisions of data resulting in the creation of incomplete packets 10 known as runts. Under heavy load conditions or in a large network, runts can occupy a significant portion of the available network traffic capability. A runt occurs because each device involved in a collision stops transmitting when the collision is detected, generally after only a portion of its packet 10 is transmitted.
A "dumb" bridge attempts to forward all packets 10 which it receives. A "filtering" bridge, on the other hand, attempts to identify packets 10 which, for one reason or another, should not be forwarded to a particular segment. Not forwarding ("filtering out") those packets 10 which should not be forwarded from one LAN to another reduces the traffic overhead in the network, leaving more bandwidth available for the complete packets 10 which should be forwarded. This filtering also affects the delay a packet 10 faces in being forwarded to a particular LAN, in that the less bandwidth is consumed by unwanted packets 10, the more often a packet 10 can be forwarded from a source LAN to a destination LAN immediately (without being queued).
Bridges may "choose" which packets 10 to forward to a particular LAN based on a comparison of the destination address 14 of each packet 10 with some accumulated history data relating to the source addresses 16 of packets 10 previously seen from that LAN. Thus, in the case of a bridge, a packet is (generally) forwarded only to the LAN where the destination address 14 of a packet 10 matches a source address 16 of previous packets 10 seen on that LAN. This "destination address filtering" also reduces traffic on various segments of the network, thus increasing overall performance. Another of the several potential reasons why a packet should not be forwarded is that it is a runt. U.S. Patent No. 4,679,193 issued to Jensen et al. discloses a Runt Packet Filter for filtering out such runts in particular applications.
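The destination address filtering just described can be sketched as a small "learning" table: the bridge records which LAN each source address was heard on, then forwards a packet only toward the LAN where its destination was previously seen. The class and method names below are assumptions for illustration, not the patent's design.

```python
class LearningBridge:
    """Forward a packet only to the LAN where its destination address was
    previously seen as a source address; flood all other LANs when the
    destination is unknown; filter when source and destination share a LAN."""

    def __init__(self, lans):
        self.lans = set(lans)
        self.table = {}  # station address -> LAN it was last heard on

    def handle(self, src_lan, src_addr, dst_addr):
        self.table[src_addr] = src_lan      # learn where the sender lives
        dst_lan = self.table.get(dst_addr)
        if dst_lan is None:                 # unknown destination: flood
            return self.lans - {src_lan}
        if dst_lan == src_lan:              # local traffic: filter out
            return set()
        return {dst_lan}                    # known destination: forward

bridge = LearningBridge(["LAN-A", "LAN-B"])
bridge.handle("LAN-A", src_addr="station1", dst_addr="station2")  # floods
print(bridge.handle("LAN-B", src_addr="station2", dst_addr="station1"))
```

After the first packet teaches the bridge that station1 lives on LAN-A, the reply is forwarded selectively to LAN-A alone.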
It can be appreciated in light of the prior discussion that there exist a number of "trade offs" in the operation of prior art network systems. How much of a packet 10 the bridge must receive prior to beginning to forward the packet 10 is known as the "latency" of the bridge. The longer the latency, the longer is the time delay involved in forwarding a packet 10 and, of course, it is desirable to reduce this delay as much as possible in order to speed up communications. On the other hand, to attempt to reduce this delay by allowing a bridge to begin transmission before an entire packet 10 is received, and thus before the packet 10 can be verified as being a complete packet 10 that should indeed be forwarded, will result in the improper forwarding of at least some packets 10. This, of course, will only slow down the system, in that not only is time taken in improperly forwarding a packet 10, but also other packets 10 may be queued behind the improper packet 10 which should and could have been immediately forwarded were the bridge not occupied in forwarding the improper packet 10.

Because of these conflicting considerations, prior art cut through bridges have been designed to provide a latency which allows the bridge to filter out only a relatively small percentage of the improper packets 10. Such prior art filtering, as discussed above, has been accomplished primarily based on characteristics of the packets 10 found in the preamble 12 and/or the destination address 14. Since the preamble 12 and the destination address 14 occur early in the packets 10, the simple prior art filtering scheme does have the advantage that filtering packets 10 based upon these characteristics prevents a significant amount of clogging of the system, because many unwanted packets 10 can be quickly and easily rejected for forwarding. However, even after such prior art filtering as is described herein, there remain a great many packets 10 which, according to prior art methods, are, but should not be, forwarded.
Clearly, it would be desirable to eliminate the forwarding of as many improper packets as possible without increasing latency in the bridge to be longer than is absolutely necessary. However, to the inventors' knowledge, no prior art method has succeeded in optimizing throughput of bridges by providing an optimal bridge latency. Moreover, this problem is exacerbated by the fact that what might be an optimal latency in one application of a bridge might well not be optimal in another application. Indeed, the "optimal" latency may even change in a fixed application as changes are made in the structure or usage of the system.

DISCLOSURE OF INVENTION
Accordingly, it is an object of the present invention to provide a method and means for optimizing the latency period within a bridge.

It is another object of the present invention to provide a method and means which can adapt a bridge for maximum throughput in a variety of different network configurations.

It is still another object of the present invention to provide a method and means by which network communication among computer devices is maximized.

It is yet another object of the present invention to provide a method and means for eliminating as many improper data packets as is practical without unduly delaying the forwarding of proper data packets.
Briefly, the preferred embodiment of the present invention is a cut through bridge with a variable latency. Since a large percentage of the improper packets 10 are runts, and since runts can be identified after only a small portion of the packet 10 is received at the bridge (given that a collision, if one has occurred, will be detected soon after the relevant packet 10 has begun to be forwarded), the inventive bridge begins sending after the threshold of most runts. However, there are a number of other improper packets 10 in addition to the runts which should also not be forwarded. In the worst case, a packet 10 may not be identified as being improper until the FCS 22 is encountered. It should be noted that the solution of filtering out only runts, while it eliminates a high percentage of improper packets 10, eliminates only the shortest packets 10, while the greatest time delay is involved in the forwarding of longer improper packets 10. According to the inventive method, after a determination is made as to a threshold cut off point for the network in which a bridge is installed, provision is made for varying the latency of the bridge, from time to time, to optimize throughput on the network for the existing circumstances.
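The variable latency method amounts to a simple per-packet decision rule: hold the packet until the threshold point is reached, filter it the moment an error is seen, and begin forwarding otherwise. The sketch below is an illustrative assumption (the function, its return values and the 64-byte example threshold are invented), not the patent's implementation.

```python
def forward_decision(bytes_received, error_seen, threshold):
    """Return "filter" if an error (e.g. a collision fragment) has been
    detected, "forward" once the variable threshold point is reached,
    and "wait" while the packet is still below the threshold."""
    if error_seen:
        return "filter"
    if bytes_received >= threshold:
        return "forward"
    return "wait"

# With a 64-byte threshold, fragments whose collision was detected before
# the threshold are filtered, while clean packets cut through early.
print(forward_decision(70, error_seen=False, threshold=64))
print(forward_decision(30, error_seen=True, threshold=64))
print(forward_decision(30, error_seen=False, threshold=64))
```

Raising `threshold` trades added latency for a lower probability of forwarding a bad packet, which is exactly the balance the variable threshold point is meant to tune.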
An advantage of the present invention is that throughput on a network is improved.

Yet another advantage of the present invention is that a bridge can operate at optimal efficiency even as the requirements of the application vary.

Still another advantage of the present invention is that a proper balance can be achieved between delays caused by bridge latency and delays caused by the forwarding of improper packets.

Yet another advantage of the present invention is that a network is not clogged with an excess of improper packets, nor does the network unnecessarily delay packets in order to minimize such improper packets.

These and other objects and advantages of the present invention will become clear to those skilled in the art in view of the description of the best presently known modes of carrying out the invention and the industrial applicability of the preferred embodiments as described herein and as illustrated in the several figures of the drawing.

BRIEF DESCRIPTION OF THE DRAWING

Fig. 1 is a block diagram of a standard Ethernet packet;

Fig. 2 is a block diagram of a variable latency bridge according to the present invention;

Fig. 3 is a block diagram of a simple computer network having therein the variable latency bridge of Fig. 2;

Fig. 4 is a block diagram of an Ethernet packet, similar to that shown in Fig. 1, showing a variable threshold point;

Fig. 5 is a graph showing the probability that an Ethernet packet is bad charted against the amount of the packet which has been analyzed;

Fig. 6 is an equally preferred alternate embodiment of the inventive variable latency bridge;

Fig. 7 is an example of the inventive variable latency bridge in use in a single link segment Ethernet; and

Fig. 8 is an example of the inventive variable latency bridge in use in a maximally configured Ethernet.

BEST MODE FOR CARRYING OUT INVENTION

The best presently known mode for carrying out the invention is a variable latency cut through bridge. The predominant expected usage of the inventive variable latency cut through bridge is in the interconnection of LANs of computer devices, particularly in local area networks wherein the maximization of throughput of data packets is desirable. The variable latency cut through bridge connects the LANs making up the overall network.
The variable latency bridge of the presently preferred embodiment of the present invention is illustrated in a functional block diagram in Fig. 2 and is designated therein by the general reference character 210. The variable latency cut through bridge 210 described herein is adapted for use with the standardized Ethernet communications packet 10 described in Fig. 1 herein, although the invention is equally applicable to other communications protocols that use data packets or "frames".
The variable latency cut through bridge 210 has a buffer 212, a controller 214, an input port ("receiver") 216 and an output port ("transmitter") 218. The variable latency cut through bridge 210 described herein is a simplified unit in that it has only the single receiver 216 and the single transmitter 218. Further, the variable latency cut through bridge 210 described herein provides for the forwarding of the packets 10 (Fig. 1) in one direction only. One skilled in the art will recognize that the principles described herein could easily be utilized to build a more complex bridge by the provision of additional receivers 216 and/or transmitters 218 (with appropriate buffers 212 between them, as required), and that bidirectional communications could be accomplished using two iterations of the variable latency cut through bridge 210.
As can be appreciated by a practitioner in the field, an invention such as the one described herein can be accomplished primarily in hardware, in software, or in some combination thereof, the distinction between hardware and software in this context being more a matter of convenience and efficiency than a critical aspect of the inventive method of the variable latency cut through bridge 210. In the best presently known embodiment 210 of the present invention, handling, forwarding and filtering of the packets 10 is done in the hardware of the variable latency cut through bridge 210, with monitoring and associated functions in software. One skilled in the art, given an understanding of the inventive method as described herein, can readily accomplish a hardware/software combination for accomplishing the inventive method.
Fig. 3 is a block diagram of a computer network 310 having therein the variable latency cut through bridge 210 of Fig. 2. A plurality of computers 312 are connected to the variable latency cut through bridge 210 via a plurality of interconnecting cables 314. In the example of Fig. 3, a first computer 312a is indicated as transmitting to the variable latency cut through bridge 210, and the variable latency cut through bridge 210 is, in turn, shown forwarding data to a second computer 312b and a third computer 312c. In accordance with the present inventive method, data transmitted over the interconnecting cables 314 is in the form of the Ethernet packets 10 of Fig. 1.
Fig. 4 is a block diagram of the Ethernet packet 10 showing a variable threshold point 428. The variable threshold point 428 is that point in the Ethernet packet 10 at which the variable latency cut through bridge 210 (Fig. 2) begins to forward the Ethernet packet 10. According to the present inventive method, when a determination is made as to a proper location for a threshold point 428, the controller 214 causes the threshold point 428 to move to that location.
Fig. 5 is a graph representing the probability that an Ethernet packet 10 (Fig. 1) is a "bad" or improper packet on the Y axis 510, plotted against the amount of the Ethernet packet 10 that has been examined at the variable latency bridge 210 on the X axis 511. In this sense, "bad" Ethernet packets 10 are those that the variable latency bridge 210 should automatically filter out and not forward. As has been previously discussed herein, in Ethernet many bad packets 10 will be runts. However, bad Ethernet packets 10 also include those with errors in the FCS 22 and other locations within the Ethernet packet 10. The probability that a packet transmission will be involved in a collision, resulting in a runt, depends on what is referred to as the acquisition time for the transmitting station (the first computer 312a in the example of Fig. 3). This will be discussed in greater detail hereinafter in relation to the industrial applicability of the invention. The acquisition time will vary for each application.

As can be seen in the view of Fig. 5, a probability line 512 is highest at an initial point 513, which is a function of the specific acquisition time for the application. The initial point 513 corresponds to the head 24 of the Ethernet packet 10 (Fig. 1). This can be understood as a reflection of the fact that, since the variable latency cut through bridge 210 (Fig. 2) will reject any Ethernet packet 10 that is "bad" once such condition is discovered, the highest probability that the particular Ethernet packet 10 being examined is "bad" exists at the inception of the process, before the variable latency cut through bridge 210 has had an opportunity to examine any of the Ethernet packet 10. In such a case, no potential flaw locations have been eliminated and the maximum possible flaw locations remain; thus the maximum probability of errors exists.
Since in Ethernet collision processing all runts will be at least of a certain fixed length (such length varying with the application), a first portion 514 of the probability line 512 will be generally flat up to a minimum fragment size point 515 (which minimum fragment size point 515 corresponds to the minimum length of a runt). After the minimum fragment size point 515, the probability line 512 decreases continuously as more stations see the transmitted packet 10, until the transmitting station's network acquisition time, which is represented in the graph of Fig. 5 by an acquisition time point 516. Thereafter, the variable latency bridge 210 can no longer be assured that the received packet 10 is a collision fragment and cannot filter it for that reason. However, other errors (such as errors in the FCS 22) may be detected that would ideally cause filtering, and the resulting probability does not go to zero until the packet is fully received (probability line end point 517). It can be readily understood that at the end point 517 of the probability line 512 the probability that the Ethernet packet 10 is "bad" is essentially zero for the present purposes, the entire Ethernet packet having been examined within the variable latency cut through bridge 210. That is, were an error (or other reason for not forwarding it) discovered within the Ethernet packet 10, the variable latency cut through bridge 210 would have rejected the Ethernet packet 10 and examination would not have progressed to the tail 26. Since some reasons for not forwarding a packet 10 may not be discoverable until the entire packet 10 is examined, there will be a distinct drop off of the probability line 512 at the end point 517.
Note that the initial point 513, the acquisition time point 516 and the end point 517 will be different for different transmitting stations, and that the position of the end point 517 will also depend upon the size of the particular packet 10 being received. The shape of the graph of Fig. 5 is only an example, with specific values of the points 513, 515, 516 and 517 thereof being a function of the particular application. Indeed, the shape of the declining probability line 512 may well not even be linear (at least in portions), although it is assuredly monotonically decreasing.

Of particular significance is that, regardless of the application, there will be three points (the minimum fragment size point 515, the acquisition time point 516 and the end point 517) at which the probability line 512 drops off markedly. These are shown in the graph of Fig. 5 as rapid drop off portions 520 of the probability line 512. Since an object of the variable latency bridge 210 is to position the variable threshold point 428 (Fig. 4) to balance overall latency (the X axis 511 of Fig. 5) against minimization of forwarded junk (the Y axis 510 of Fig. 5), the rapid drop off portions 520 are good candidates for the variable threshold point 428. It should be noted that the minimum fragment size point 515 will always occur before the destination address 14 (Fig. 1) is received and cannot, therefore, be used as a position for the variable threshold point 428 where filtering based upon the destination address 14 is desired. Nevertheless, the minimum fragment size point 515 could be useful where the variable latency bridge 210 is not required to filter based upon the destination address 14.
As will be discussed in more detail hereinafter in relation to the industrial applicability of the invention, determination of the values of the points 513, 515, 516 and 517 of the probability line 512 can be achieved either analytically or empirically, and either statically or dynamically. Analytically, the worst case values for a network of maximum size with the variable latency bridge 210 at one extreme thereof can be calculated. Empirically, network traffic may be monitored at the point at which the variable latency bridge 210 is (or would be) operating to establish the values. In a more sophisticated future version, the variable latency bridge 210 may itself monitor its received traffic, determine the values empirically, and adjust its values in a dynamic fashion.
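One hypothetical way to perform the empirical determination just mentioned: record the byte offset at which each bad packet was detected, then pick the smallest threshold that would have filtered a target fraction of them before forwarding began. The function name, the coverage figure and the sample data below are all invented for illustration; a real bridge would gather these offsets from monitored traffic.

```python
def choose_threshold(bad_packet_offsets, coverage=0.9):
    """Return the smallest latency (in bytes) at which `coverage` of the
    observed bad packets had already been detected, so that waiting that
    long filters them before forwarding begins."""
    ordered = sorted(bad_packet_offsets)
    index = max(0, int(coverage * len(ordered)) - 1)
    return ordered[index]

# Invented sample: byte offsets at which monitored bad packets were
# detected (mostly short collision fragments, one late FCS error).
observed = [12, 20, 20, 33, 40, 48, 55, 60, 61, 400]
print(choose_threshold(observed))
```

On this sample the 90th-percentile offset is 61 bytes; the lone late FCS error would require waiting for nearly the whole packet, which is precisely the latency cost the variable threshold avoids paying.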
It should be noted that, while the example of Fig. 5 is drawn in relation to an Ethernet packet 10, the principles illustrated are applicable to any packet network in which the probability of a packet's being filtered varies over the packet's length.
It should be noted that the rapid drop off portion 520 is by no means the only position to which the variable threshold point 428 might be set. It should further be noted that it is a feature of the present inventive variable latency bridge 210 that the variable threshold point 428 may be set according to criteria established to maximize the efficiency of any type of network 310 in which the variable latency bridge 210 might be employed. The setting of the variable threshold point 428 to correspond to the rapid drop off portion 520 is, in the best presently known embodiment 210 of the present invention, an initial "best guess" as to what might be an optimal setting for the variable threshold point 428. As stated previously, the actual location of the rapid drop off portion 520 can readily be empirically determined for a particular application or, more generally, for applications of particular types. It is anticipated that the present inventors, as well as others, will develop improved methods and means for determining the optimal location for the variable threshold point 428. The actual method currently employed by the inventors to set the variable threshold point 428 will be presented in more detail hereinafter in relation to the industrial applicability of the invention.
It will be of interest to those practicing the present invention to note that while the probability of the generation of "junk" (that is, improper packets) is a function of the sending unit (the first computer 312a in the example of Fig. 3), the sensitivity to such "junk" (that is, the amount of harm to efficient throughput that is caused when such junk gets into the network 310) is generally a function of the receiving equipment (the second computer 312b and/or the third computer 312c in the example of Fig. 3). That being the case, it is anticipated that the determination of an "optimal" variable threshold point 428 may require some feedback from the receiving equipment (the second computer 312b and/or the third computer 312c in the example of Fig. 3).
As stated previously herein, the variable latency cut through bridge 210 described herein is a "bare bones" example intended to illustrate the invention. For example, one skilled in the art will recognize that the variable latency cut through bridge 210 might also be equipped to include a buffer clearing means (not shown) for clearing the buffer 212 between iterations of the packet 10, additional buffers (not shown) for buffering several of the packets 10 (as discussed previously herein in relation to the prior art) and/or other conventional appurtenances and features.
Fig. 6 is a block diagram of an equally preferred alternate embodiment 610 of the inventive variable latency cut through bridge. While the first preferred embodiment 210, as previously stated, is a very simple example to best illustrate the principle of the invention, the equally preferred alternate embodiment 610 of Fig. 6 is somewhat more complex in order to illustrate the movement of a data packet 10 within the variable latency bridge according to the present inventive method. In the example of Fig. 6, the variable latency bridge has a plurality (two in the present example) of receivers 216 and a plurality (two in the present example) of transmitters 218. Like the first preferred embodiment 210, the equally preferred alternate embodiment 610 of the present invention also has a buffer 212 and a controller 214. The data packets 10 travel between the various aspects of the equally preferred alternate embodiment 610 of the invention on a data bus 612. The buffer 212 is divided into a plurality (six, in the example of Fig. 6) of packet buffer slots 614.
The controller also has associated therewith a plurality (one for each transmitter 218) of first-in-first-out ("FIFO") memories 616. The FIFOs 616 are configured to contain packet buffer numbers or pointers to the packet buffer slots 614 of the buffer 212.
A packet 10 received by a receiver 216 from a source LAN (not shown in the view of Fig. 6) is assigned by the controller 214 to a particular packet buffer slot 614 in the buffer 212. As the bytes of the packet 10 (not including the preamble 12) are received by the receiver 216, they are transferred over the data bus 612 and stored sequentially in their assigned packet buffer slot 614. Other packets 10 being received by other receivers 216 will have their bytes of data stored in other assigned packet buffer slots 614 using the controller 214 and the data bus 612 on an interleaved or "time division multiplexed" basis. Each entire packet 10, whether a full packet 10 or a "runt", will be stored in a packet buffer slot 614 of the buffer 212.

The controller 214 monitors the various received packets 10 as they are transferred on the data bus 612 and examines the relevant portions with respect to making a decision as to where and when to forward the packet 10. For example, the destination address 14 will generally be of interest to the controller 214, as will be the number of bytes of the packet 10 which have been transferred at any point in time. When the number of bytes determined by the current position of the variable threshold point 428 has been transferred on the data bus 612, the controller 214 will attempt to begin transmission of the packet 10 through the one or more of the transmitters 218 selected by the controller 214 (for example, that transmitter 218 which is associated with the packet's destination address 14).
The controller 214 will examine the FIFOs 616 associated with each of the transmitters 218 selected to forward the packet 10 and, if it is empty, the transmission can be started on that transmitter 218 immediately. If the FIFO 616 of a selected transmitter 218 is not empty, the number of the packet buffer slot 614 assigned to the incoming packet 10 will be entered into the appropriate FIFO 616 to enable later transmission. Indeed, the number of the packet buffer slot 614 is entered into the FIFO 616 even if that FIFO 616 is empty (and transmission can begin immediately) so that in the case of a transmit collision the packet 10 can be retransmitted. It should be noted that, in some occurrences, a valid position for the variable threshold point 428 may be such that the entire packet 10 is received before any attempt is made to transmit. When a transmission is successfully completed, the number of the packet buffer slot 614 is removed from the FIFO 616 and that packet buffer slot 614 can be used to store yet another incoming packet 10.
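The slot and FIFO bookkeeping described above can be modeled in a few lines of software. The following is an illustrative sketch only, not the patent's hardware implementation; the class and method names are assumptions made for the example.

```python
from collections import deque


class BridgeController:
    """Toy model of the controller 214: a pool of free packet buffer slots 614
    plus one FIFO 616 of slot numbers per transmitter 218 (names illustrative)."""

    def __init__(self, num_slots=6, num_transmitters=2):
        self.free_slots = deque(range(num_slots))                # unused slots 614
        self.fifos = [deque() for _ in range(num_transmitters)]  # FIFOs 616

    def assign_slot(self):
        """Assign an incoming packet 10 to a packet buffer slot 614."""
        return self.free_slots.popleft()

    def threshold_reached(self, slot, tx):
        """Called once the variable threshold point 428 worth of bytes has
        arrived.  The slot number is entered into the FIFO even when the FIFO
        is empty, so the packet can be retransmitted after a collision.
        Returns True if transmission may begin immediately."""
        fifo = self.fifos[tx]
        can_start_now = len(fifo) == 0   # empty FIFO: cut-through starts at once
        fifo.append(slot)
        return can_start_now

    def transmission_complete(self, tx):
        """On successful transmission, remove the slot number from the FIFO
        and free that slot to store another incoming packet."""
        self.free_slots.append(self.fifos[tx].popleft())
```

An empty FIFO means cut-through transmission can start at once, while a non-empty FIFO queues the slot number for later transmission, which are exactly the two cases the text distinguishes.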
As is shown above, in great part, the variable latency cut through bridge 210 according to the present invention resembles prior art conventional cut through bridges in many respects. Among the substantial differences is the inclusion of a variable threshold point for adjusting the latency of the bridge. No significant changes of materials are envisioned nor are any special constructions required.
Various modifications may be made to the invention without altering its value or scope. For example, although the variable latency cut through bridge 210 described herein is relatively simple in structure, the inventive method can be used in combination with most features of existing prior art network systems. Also, as previously mentioned herein, although the best presently known embodiment 210 of the present invention is adapted for use with standard Ethernet, one skilled in the art could readily adapt the invention for use with essentially any type of communications means which utilizes data packets and for which the probability of a bad packet varies with the amount of the packet received.
All of the above are only some of the examples of available embodiments of the present invention. Those skilled in the art will readily observe that numerous other modifications and alterations may be made without departing from the spirit and scope of the invention. Accordingly, the above disclosure is not intended as limiting and the appended claims are to be interpreted as encompassing the entire scope of the invention.

The variable latency cut through bridge is adapted to be widely used in computer network communications. The predominant current usages are for the interconnection of computers and computer peripheral devices within networks and for the interconnection of several computer networks.
The variable latency cut through bridges of the present invention may be utilized in any application wherein conventional computer interconnection bridging devices are used. A significant area of improvement is in the inclusion of the variable threshold point 428 and associated aspects of the invention as described herein.
The inventive variable latency bridge 210 is used in a network in much the same manner as have been conventional prior art cut through bridges, with a potentially significant increase in efficiency in almost all applications. The setting of the variable threshold point 428 may be made either statically or dynamically. In the static case, a setting is made through a user configuration of the variable latency bridge 210. In this case, the setting would remain unchanged during the operation of the variable latency bridge 210, or until the setting is modified through an explicit action of a user reconfiguring the variable latency bridge 210.
In the case of dynamically setting the variable threshold point 428, decision making logic within the variable latency bridge 210 (heuristic based learning) will be applied, as will be discussed in more detail hereinafter, to modify the setting of the variable threshold point 428 over time to accomplish tuning to minimize errors or to maximize throughput, or to maximize responsiveness to changing conditions of the application within which the variable latency bridge 210 is running.
The inventors have found that static assignment of the variable threshold point 428 may effectively be based on characteristics of the network segments attached to the bridge and on characteristics of the network controllers of devices on those segments. For example, if all controllers on those segments are such that unwanted packets ("junk") are readily discarded without impact on the computer containing the controller (as is the case with many Ethernet controllers in personal computers and workstations today), then the impact of junk is purely loss of bandwidth on the segment. In this case, and where segment bandwidth utilization is generally low, a very low threshold setting may be considered to be highly effective.
On segments where junk has a more negative impact, or where bandwidth is at a premium, more effective settings may require consideration of the rapid drop off portion 520 of the probability line 514 of Fig. 5. The location of the rapid drop off portion 520 is predictable based on the fact that proper deference behavior on an Ethernet precludes collisions outside the so called "collision window", which is the period of time beginning with the start of packet transmission and continuing for a period equal to the maximum round trip signal propagation time from end to end of a maximally configured network segment. It is reasonable to expect the vast majority of junk to be collision fragments whose length will not exceed this collision window length.
An example of the use of the rapid drop off portion 520 of the probability line 514 to set the variable threshold point 428 in a point-to-point ("private channel") Ethernet is as follows: A private channel Ethernet is one comprised of only two controllers: one at a station and one at a hub. When a variable latency cut-through bridge is employed as the hub for such a segment, collision fragments can arise only from collisions occurring when both the bridge and the station begin transmission at around the same time. In such cases, the reception (and possible forwarding) by the variable latency bridge 210 of the fragment can be precluded by the knowledge possessed by the variable latency bridge 210 of its own participation in the collision. Thus, the collision fragments which cause the high initial probabilities of receiving junk (illustrated by the high initial point 516 of the probability line 514 of Fig. 5) will not be present. This suggests use of a very low cut-through latency threshold for such connections.
An example of the use of the rapid drop off portion 520 of the probability line 514 to set the variable threshold point 428 in a single link segment thin coax Ethernet 710 is depicted in Fig. 7. For an attachment from the variable latency bridge to the single segment thin coax Ethernet 710, it can reasonably be expected that an effective threshold setting will be just past the collision window indicated by the maximum round trip propagation time on such a segment.
The latest such collision would arise when a first station 712 located very near the variable latency bridge 210, and very near one end of a (185 meter maximum length) cable 713, experiences a last possible moment collision with a second station 714 located at the far end of the cable 713. The time calculation would be as follows (note that for a 10 Megabit per second Ethernet, 1 bit time = 100 nanoseconds {100 ns}): At time=T0, signal from the first station 712 is on the cable 713 at the first station 712 and (for all practical purposes, as this example has been defined) at the variable latency bridge 210. At time=T1, the signal has propagated the full length of the cable 713 to the second station 714. At time=T1+T2, the second station 714 controller senses the signal and has just released the first bit of its own packet 10 onto the cable 713, causing a collision condition. At time=T1+T2+T3, the collision combination of signals first arrives back at the first station 712 and at the variable latency bridge 210. At time=T1+T2+T3+T4, the last of the collided signal from the second station 714 reaches the first station 712 and the variable latency bridge 210, at which time the variable latency bridge 210 may determine that the packet 10 transmitted from the second station 714 is a runt.
Given the above maximum error time scenario, calculation of T1 (which is also equal to T3) may be made from the cable length, the speed of light, and the specified cable light speed factor (0.65) of the cable 713 as follows:
T1 = T3 = (185 m / 0.65c) = 9.5 bit times (where c is the speed of light in meters per second). Calculation of the worst case time for T2 has been made based on IEEE 802.3 Ethernet standard worst case delay values.
This is T2=22.14 bit times.
Calculation of T4 is based on the specified minimum collision fragment, which is 64 bits of preamble 12 and start of frame delimiter, followed by a 32 bit jam pattern, for a total of 96 bit times.
Thus, the worst case collision window is:
T1+T2+T3+T4 = 9.5+22.14+9.5+96 = 137.14 bit times (or 13.714 microseconds). For the example of Fig. 7, a good candidate for the setting of the variable threshold point 428 (which begins measuring only after the 64 bits of preamble 12) is:
137.14-64 = 73.14 bit times, or between 9 and 10 bytes into the received packet.
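The arithmetic above can be reproduced in a few lines of code. This is a sketch of the calculation only; the 802.3 delay constants are taken directly from the text, while the function and variable names are assumptions made for the example.

```python
BIT_TIME_NS = 100.0      # 10 Mb/s Ethernet: 1 bit time = 100 ns
C = 3.0e8                # speed of light, meters per second
VELOCITY_FACTOR = 0.65   # specified cable light speed factor of the cable 713


def one_way_bit_times(cable_meters):
    """One-way propagation delay along the cable, in bit times."""
    seconds = cable_meters / (VELOCITY_FACTOR * C)
    return seconds * 1e9 / BIT_TIME_NS


T1 = T3 = round(one_way_bit_times(185.0), 1)  # ~9.5 bit times each way
T2 = 22.14        # 802.3 worst case delay, bit times
T4 = 96.0         # minimum collision fragment (64-bit preamble + 32-bit jam)

collision_window = T1 + T2 + T3 + T4   # 137.14 bit times
threshold_bits = collision_window - 64 # threshold counting starts past the preamble
threshold_bytes = threshold_bits / 8   # between 9 and 10 bytes into the packet
```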
For an attachment from the variable latency bridge 210 to a maximally configured Ethernet segment 810 as depicted in Fig. 8, it is again assumed (at least initially) that an effective position for the variable threshold point 428 would be just past the collision window indicated by the maximum round trip propagation time on such a segment. This maximal configuration 810 has 5 full length cable runs 713a through 713e attached with a plurality (four, in the case of the maximally configured Ethernet segment 810) of maximally delaying repeaters 816. In this case, the "latest" collision detection would arise when a first end station 818 located very near one end of the first cable 713a and very near the variable latency bridge 210 experiences a last possible moment collision with a second end station 820 located at the far end of the fifth cable 713e, through all four repeaters 816. Given the general practice, it can be assumed in the example of Fig. 8 that the interior cables 713b, 713c and 713d are "thick coax" cabling, and the "end run" cables 713a and 713e are "thin coax". The time calculation for this example is much the same as in the example of Fig. 7, except that the propagation times (T1 and T3) are quite a bit larger. Also, the propagation back of the collision is subject to potentially larger delays within the repeaters 816 than is the propagation forward, so T3 will be larger than T1. Again using 802.3 worst case delay specifications, these propagation delays are calculated to be T1=182.48 bit times and T3=222.48 bit times.
The other components of this calculation remain as in the example of Fig. 7, revealing the worst case window to be:
T1+T2+T3+T4 = 182.48+22.14+222.48+96 = 523.1 bit times (or 52.3 microseconds). A good candidate for the setting of the variable threshold point 428 in the example of Fig. 8 would be 523.1-64 = 459.1 bit times, or between 57 and 58 bytes into the received packet.
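The Fig. 8 numbers follow the same arithmetic, using the 802.3 worst case values quoted above. This is a sketch of the calculation, not a normative implementation; variable names are illustrative.

```python
# Worst case one-way delays for the maximally configured segment 810,
# in bit times, per the 802.3 figures quoted in the text.
T1 = 182.48   # forward propagation, station 818 toward station 820
T2 = 22.14    # far-end controller sense-and-respond delay
T3 = 222.48   # return propagation, larger due to repeater 816 delays
T4 = 96.0     # minimum collision fragment (64-bit preamble + 32-bit jam)

collision_window = T1 + T2 + T3 + T4   # 523.1 bit times, i.e. 52.31 microseconds
threshold_bits = collision_window - 64 # 459.1 bit times past the preamble
threshold_bytes = threshold_bits / 8   # between 57 and 58 bytes into the packet
```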
As previously mentioned, the variable threshold point 428 of the variable latency bridge 210 certainly need not remain fixed during operation of the variable latency bridge 210. In order to maximize overall data throughput, a small percentage of errors being forwarded may be preferable to overly delaying the cut through operation. The specific acceptable percentage of errors may be enforced using simple heuristic logic to periodically adjust the variable threshold point 428 based on the number of packets 10 which the variable latency bridge 210 has been forwarding and the number of "junk" packets among the good packets. More specifically, if PE is the maximum acceptable percentage of errors which it is decided will be tolerated in forwarding the packets 10, and the variable latency bridge 210 maintains counts of packets 10 forwarded (PF) and the number of those forwarded which, subsequent to forwarding, were found to be errored packets (EP), then every time PF reaches some sample size (such as 10,000) the variable latency bridge 210 could (in hardware or software) compute EP divided by PF, and compare the resulting percentage to PE. If the ratio is greater than PE, the threshold position would be increased, to seek to reduce the forwarded error rate. If the ratio is less than PE, the threshold value would be decreased, since a higher error rate is considered acceptable. The two counts would then be reset for the next sample period.
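The sampling heuristic just described can be sketched as a small software model. The class name, the adjustment step size, and the counter handling are illustrative assumptions; the text specifies only the EP/PF comparison against PE and the reset of the two counts.

```python
class ThresholdTuner:
    """Periodically adjusts the variable threshold point 428 so that the
    forwarded error ratio EP/PF converges toward the acceptable limit PE."""

    def __init__(self, pe, sample_size=10_000, step_bits=8):
        self.pe = pe                  # maximum acceptable forwarded error ratio
        self.sample_size = sample_size
        self.step_bits = step_bits    # illustrative adjustment step (assumption)
        self.pf = 0                   # packets forwarded this sample period
        self.ep = 0                   # of those, found errored after forwarding

    def packet_forwarded(self, was_errored, threshold):
        """Record one forwarded packet; return the possibly adjusted threshold."""
        self.pf += 1
        if was_errored:
            self.ep += 1
        if self.pf >= self.sample_size:
            ratio = self.ep / self.pf
            if ratio > self.pe:
                threshold += self.step_bits   # too much junk: delay cut-through
            elif ratio < self.pe:
                threshold -= self.step_bits   # headroom: cut through sooner
            self.pf = self.ep = 0             # reset counts for the next period
        return threshold
```

Feeding a sample period in which 5% of the forwarded packets were errored, against a PE of 1%, moves the threshold deeper into the packet, as the text prescribes.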
Since the variable latency cut through bridges of the present invention may be readily constructed and are compatible with existing computer equipment, it is expected that they will be acceptable in the industry as substitutes for conventional bridges. For these and other reasons, it is expected that the utility and industrial applicability of the invention will be both significant in scope and long-lasting in duration.

Claims (19)

In the Claims:
1. A bridge for a computer network system, the network having a plurality of computer devices interconnected by a plurality of data cables wherein data packets are transmitted over the data cables, the bridge comprising:
a buffer for temporarily holding the data packets; and
a controller for controlling said buffer such that the data packets are forwarded out of said buffer upon command from said controller, wherein:
said controller variably sets a latency threshold of the buffer, the latency threshold being that portion of each data packet which is received by the buffer prior to said controller commanding said buffer to forward that data packet.
2. The bridge of claim 1, wherein:
the latency threshold is set to be within a rapid drop off portion of a probability function, the probability function describing the probability that a data packet is bad as a function of the amount of the data packet which has been examined.
3. The bridge of claim 2, wherein:
the probability function is empirically determined.
4. A method for improving the efficiency of a computer network having a plurality of network segments therein, wherein data is communicated in the form of data packets, the method comprising:
providing a bridge between the segments of the network, said bridge being configured to attempt to begin forwarding each data packet when that data packet is received by the bridge and verified as being one which should be forwarded up to a variable threshold point of that data packet; and setting the variable threshold point such that said bridge begins to forward each data packet after at least a portion of that data packet has been verified as being one which should be forwarded and before all of the data packet has been verified as being one which should be forwarded.
5. The method of claim 4, wherein:
the variable threshold point is varied as said bridge is used.
6. The method of claim 4, wherein:
the variable threshold point is set at a point such that any runts will have been rejected as being bad before the variable threshold point is reached.
7. The method of claim 4, wherein:
the variable threshold point is set according to empirically gathered data.
8. The method of claim 4, wherein:
the variable threshold point is set as a function of a probability line, the probability line being represented by a graph plotting a probability value that the data packet should not be forwarded against an amount of the data packet that has been examined within said bridge.
9. The method of claim 8, wherein:
the variable threshold point is set to within a rapid drop off portion of the probability line, the rapid drop off portion being a portion of the probability line wherein the probability value of the probability line drops markedly toward zero.
10. A method for forwarding data packets within a bridge of a computer network, comprising:
setting a variable latency point within the bridge such that an amount of each data packet which is received at the bridge before the bridge attempts to forward that data packet is variable according to the position of the variable latency point.
11. The method of claim 10, wherein:
said variable latency point is set after a preamble of the data packet.
12. The method of claim 10, wherein:
said variable latency point is set such that essentially all runts will be rejected before said variable latency point is reached in each data packet, a runt being an incomplete data packet resulting from an aborted attempt to transmit that data packet.
13. The method of claim 10, wherein:
said variable latency point is adjustable according to data obtained during the operation of the bridge.
14. The method of claim 10, wherein:
said variable latency point is set by software from a computer.
15. The method of claim 14, wherein:
the computer is connected to the computer network through the bridge.
16. The method of claim 14, wherein:
the computer retains information concerning the packets for optimizing the position of said variable latency point.

17. The method of claim 10, wherein:
said variable latency point is reset from time to time as demands of the computer network vary.
18. The method of claim 10, wherein:
said variable latency point is set according to a calculation of an acquisition time of an application.
19. The method of claim 10, wherein:
the variable latency point is set according to empirically determined data gathered during operation of the bridge.
CA002168035A 1993-08-06 1994-07-27 Variable latency cut-through bridging Abandoned CA2168035A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/103,439 US5598581A (en) 1993-08-06 1993-08-06 Variable latency cut through bridge for forwarding packets in response to user's manual adjustment of variable latency threshold point while the bridge is operating
US08/103,439 1993-08-06

Publications (1)

Publication Number Publication Date
CA2168035A1 true CA2168035A1 (en) 1995-02-16

Family

ID=22295190

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002168035A Abandoned CA2168035A1 (en) 1993-08-06 1994-07-27 Variable latency cut-through bridging

Country Status (6)

Country Link
US (2) US5598581A (en)
EP (1) EP0712515A1 (en)
JP (1) JPH09509289A (en)
AU (1) AU680031B2 (en)
CA (1) CA2168035A1 (en)
WO (1) WO1995004970A1 (en)

US6781956B1 (en) 1999-09-17 2004-08-24 Cisco Technology, Inc. System and method for prioritizing packetized data from a distributed control environment for transmission through a high bandwidth link
US6798746B1 (en) 1999-12-18 2004-09-28 Cisco Technology, Inc. Method and apparatus for implementing a quality of service policy in a data communications network
US6606628B1 (en) 2000-02-14 2003-08-12 Cisco Technology, Inc. File system for nonvolatile memory
US7660902B2 (en) * 2000-11-20 2010-02-09 Rsa Security, Inc. Dynamic file access control and management
US20020083189A1 (en) * 2000-12-27 2002-06-27 Connor Patrick L. Relay of a datagram
GB2388501A (en) * 2002-05-09 2003-11-12 Sony Uk Ltd Data packet and clock signal transmission via different paths
WO2004066562A1 (en) * 2003-01-24 2004-08-05 Fujitsu Limited Data transmission apparatus
US20050213595A1 (en) * 2004-03-23 2005-09-29 Takeshi Shimizu Limited cyclical redundancy checksum (CRC) modification to support cut-through routing
US8281031B2 (en) 2005-01-28 2012-10-02 Standard Microsystems Corporation High speed ethernet MAC and PHY apparatus with a filter based ethernet packet router with priority queuing and single or multiple transport stream interfaces
US20070055798A1 (en) * 2005-08-31 2007-03-08 Ain Jonathan W Apparatus and method to adjust one or more input/output parameters for a computing system
US8799633B2 (en) 2011-02-11 2014-08-05 Standard Microsystems Corporation MAC filtering on ethernet PHY for wake-on-LAN

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4399531A (en) * 1980-09-29 1983-08-16 Rockwell International Corporation Distributed digital data communications network
GB8407102D0 (en) * 1984-03-19 1984-04-26 Int Computers Ltd Interconnection of communications networks
US4679193A (en) * 1985-11-14 1987-07-07 Hewlett Packard Company Runt packet filter
EP0245765B1 (en) * 1986-05-14 1993-09-22 Mitsubishi Denki Kabushiki Kaisha Data transfer control system
US4771391A (en) * 1986-07-21 1988-09-13 International Business Machines Corporation Adaptive packet length traffic control in a local area network
US4769810A (en) * 1986-12-31 1988-09-06 American Telephone And Telegraph Company, At&T Bell Laboratories Packet switching system arranged for congestion control through bandwidth management
US4769811A (en) * 1986-12-31 1988-09-06 American Telephone And Telegraph Company, At&T Bell Laboratories Packet switching system arranged for congestion control
US4926415A (en) * 1987-02-04 1990-05-15 Kabushiki Kaisha Toshiba Local area network system for efficiently transferring messages of different sizes
US4839891A (en) * 1987-07-24 1989-06-13 Nec Corporation Method for controlling data flow
US5123091A (en) * 1987-08-13 1992-06-16 Digital Equipment Corporation Data processing system and method for packetizing data from peripherals
US4841527A (en) * 1987-11-16 1989-06-20 General Electric Company Stabilization of random access packet CDMA networks
US4860003A (en) * 1988-05-27 1989-08-22 Motorola, Inc. Communication system having a packet structure field
US4922503A (en) * 1988-10-28 1990-05-01 Infotron Systems Corporation Local area network bridge
US4891803A (en) * 1988-11-07 1990-01-02 American Telephone And Telegraph Company Packet switching network
US5020058A (en) * 1989-01-23 1991-05-28 Stratacom, Inc. Packet voice/data communication system having protocol independent repetitive packet suppression
US5117486A (en) * 1989-04-21 1992-05-26 International Business Machines Corp. Buffer for packetizing block of data with different sizes and rates received from first processor before transferring to second processor
US5088091A (en) * 1989-06-22 1992-02-11 Digital Equipment Corporation High-speed mesh connected local area network
US5193151A (en) * 1989-08-30 1993-03-09 Digital Equipment Corporation Delay-based congestion avoidance in computer networks
US5247517A (en) * 1989-10-20 1993-09-21 Novell, Inc. Method and apparatus for analyzing networks
US5014265A (en) * 1989-11-30 1991-05-07 At&T Bell Laboratories Method and apparatus for congestion control in a data network
US5103446A (en) * 1990-11-09 1992-04-07 Moses Computers, Inc. Local area network adaptive throughput control for instantaneously matching data transfer rates between personal computer nodes
WO1992016066A1 (en) * 1991-02-28 1992-09-17 Stratacom, Inc. Method and apparatus for routing cell messages using delay
DE69225822T2 (en) * 1991-03-12 1998-10-08 Hewlett Packard Co Diagnostic method of data communication networks based on hypotheses and conclusions
GB9111524D0 (en) * 1991-05-29 1991-07-17 Hewlett Packard Co Data storage method and apparatus
US5404353A (en) * 1991-06-28 1995-04-04 Digital Equipment Corp. Dynamic defer technique for traffic congestion control in a communication network bridge device
US5313454A (en) * 1992-04-01 1994-05-17 Stratacom, Inc. Congestion control for cell networks
US5303302A (en) * 1992-06-18 1994-04-12 Digital Equipment Corporation Network packet receiver with buffer logic for reassembling interleaved data packets
US5307345A (en) * 1992-06-25 1994-04-26 Digital Equipment Corporation Method and apparatus for cut-through data packet transfer in a bridge device
US5598581A (en) * 1993-08-06 1997-01-28 Cisco Systems, Inc. Variable latency cut through bridge for forwarding packets in response to user's manual adjustment of variable latency threshold point while the bridge is operating
US5473607A (en) * 1993-08-09 1995-12-05 Grand Junction Networks, Inc. Packet filtering for data networks
US5491687A (en) * 1994-09-28 1996-02-13 International Business Machines Corporation Method and system in a local area network switch for dynamically changing operating modes

Also Published As

Publication number Publication date
AU680031B2 (en) 1997-07-17
AU7409794A (en) 1995-02-28
WO1995004970A1 (en) 1995-02-16
JPH09509289A (en) 1997-09-16
US5598581A (en) 1997-01-28
EP0712515A1 (en) 1996-05-22
US5737635A (en) 1998-04-07

Similar Documents

Publication Publication Date Title
US5598581A (en) Variable latency cut through bridge for forwarding packets in response to user's manual adjustment of variable latency threshold point while the bridge is operating
US6192422B1 (en) Repeater with flow control device transmitting congestion indication data from output port buffer to associated network node upon port input buffer crossing threshold level
US5351241A (en) Twisted pair ethernet hub for a star local area network
EP0529774B1 (en) Method and apparatus for traffic congestion control in a communication network bridge device
US5436902A (en) Ethernet extender
US6198722B1 (en) Flow control method for networks
US5568476A (en) Method and apparatus for avoiding packet loss on a CSMA/CD-type local area network using receive-sense-based jam signal
US5859837A (en) Flow control method and apparatus for ethernet packet switched hub
US6252849B1 (en) Flow control using output port buffer allocation
US6026095A (en) Method and apparatus for controlling latency and jitter in shared CSMA/CD (repeater) environment
US6091725A (en) Method for traffic management, traffic prioritization, access control, and packet forwarding in a datagram computer network
JP3160350B2 (en) Communication network control method
CA2277097C (en) Buffered repeater with early filling of transmit buffer
US5796738A (en) Multiport repeater with collision detection and jam signal generation
KR19990021934A (en) 802.3 Media Access Control and Associated Signal Scheme for Dual Ethernet
JPH11501196A (en) Method and apparatus for automatic retransmission of packets in a network adapter
US8693492B2 (en) Quality of service half-duplex media access controller
US6111890A (en) Gigabuffer lite repeater scheme
US6370115B1 (en) Ethernet device and method for applying back pressure
GB2355374A (en) Packet forwarding device with selective packet discarding when paused
JPH11239163A (en) Inter-LAN flow control method and switch
Cisco Troubleshooting Ethernet
US8880759B2 (en) Apparatus and method for fragmenting transmission data
JPH0779253A (en) Packet switchboard
WO1997011540A1 (en) Method and apparatus for controlling flow of incoming data packets by target node on an ethernet network

Legal Events

Date Code Title Description
FZDE Discontinued