US20050041684A1 - Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment - Google Patents


Info

Publication number
US20050041684A1
US20050041684A1 (Application US10/900,793)
Authority
US
United States
Prior art keywords
backplane
chassis
processing
module
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/900,793
Inventor
Alastair Reynolds
Douglas Carson
George Lunn
William MacIsaac
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agilent Technologies Inc
Original Assignee
Agilent Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB9923143A external-priority patent/GB2354883B/en
Priority claimed from GB9923142A external-priority patent/GB2354905B/en
Application filed by Agilent Technologies Inc filed Critical Agilent Technologies Inc
Priority to US10/900,793 priority Critical patent/US20050041684A1/en
Assigned to AGILENT TECHNOLOGIES, INC. reassignment AGILENT TECHNOLOGIES, INC. ASSIGNMENT BY OPERATION OF LAW Assignors: AGILENT TECHNOLOGIES UK LIMITED, CARSON, DOUGLAS JOHN, HEWLETT-PACKARD LIMITED, LUNN, GEORGE CROWTHER, MACISAAC, WILLIAM ROSS, REYNOLDS, ALASTAIR
Publication of US20050041684A1 publication Critical patent/US20050041684A1/en
Assigned to AGILENT TECHNOLOGIES, INC. reassignment AGILENT TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AGILENT TECHNOLOGIES UK LIMITED, CARSON, DOUGLAS JOHN, HEWLETT-PACKARD LIMITED, LUNN, GEORGE CROWTHER, MACISAAC, WILLIAM ROSS, REYNOLDS, ALASTAIR
Abandoned legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/55: Prevention, detection or correction of errors
    • H04L 49/555: Error detection

Definitions

  • the invention relates to telecommunications networks, and in particular to apparatus and systems for monitoring traffic in broadband networks.
  • network element connectivity can be achieved using optical fibre bearers to carry data and voice traffic.
  • a widely-used monitoring system for SS7 signalling networks is acceSS7 TM from Agilent Technologies (and previously from Hewlett-Packard).
  • An instrument extracts all the SS7 packetised signals at Signalling Transfer Points (STPs), which are packet switches analogous to IP routers, that route messages between end points in SS7 networks.
  • the need can be seen for similar monitoring systems able to cope with combined IP/PSTN networks, especially at gateways where the two protocols meet.
  • a problem arises, however, in the quantity of data that needs to be processed for the monitoring of IP traffic.
  • Internet Protocol networks there is no out of band signalling network separate from the data traffic itself.
  • Networks such as these may be monitored using instruments (generally referred to as probes) by making a passive optical connection to the optical fibre bearer using an optical splitter.
  • this approach cannot be considered without due attention to the optical power budget of the bearer, as the optical splitters are lossy devices.
  • it may be desirable to monitor the same bearer many times or to monitor the same bearer twice as part of a backup strategy for redundancy purposes. With available instrumentation, this implies a multiplication of the losses, and also disruption to the bearers as each new splitter is installed. Issues of upgrading the transmitter and/or receiver arise as losses mount up.
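  • By way of illustration only, the optical power budget effect of cascaded taps can be estimated as in the sketch below; the launch power, receiver sensitivity and splitter insertion loss figures are assumptions chosen for the example, not values from this disclosure.

```python
# Illustrative optical power budget for cascaded monitoring taps.
# All figures are assumptions for the example only.

TX_POWER_DBM = -2.0        # assumed transmitter launch power
RX_SENSITIVITY_DBM = -18.0 # assumed receiver sensitivity
FIBRE_LOSS_DB = 4.0        # assumed end-to-end fibre/connector loss
SPLITTER_LOSS_DB = 3.5     # assumed insertion loss of one 50/50 tap

def margin_after_taps(n_taps: int) -> float:
    """Remaining power margin (dB) at the receiver after n_taps splitters."""
    received = TX_POWER_DBM - FIBRE_LOSS_DB - n_taps * SPLITTER_LOSS_DB
    return received - RX_SENSITIVITY_DBM

for n in range(5):
    print(f"{n} taps: margin = {margin_after_taps(n):+.1f} dB")
# With these assumed figures the margin goes negative by the fourth tap,
# which is why replicating and switching a single tapped signal matters.
```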
  • Such a hardware platform should be as flexible as possible to allow for changes in telecommunications technology and utilise standard building blocks to ensure cross platform compatibility.
  • In the USA, standards are set by the American National Standards Institute (ANSI) and Bellcore, which differ from those of Europe as set by the European Telecommunications Standards Institute (ETSI). Versions of SS7 may also vary from country to country, owing to the flexibility of the standard, although the ITU standard is generally used at international gateways.
  • the USA Bellcore Network Equipment-Building System (NEBS) is of particular relevance to rack-mounted telecommunications equipment as it provides design standards for engineering construction and should be taken into account when designing network monitoring equipment.
  • the typical general purpose chassis provides a rack-mounted enclosure in which a backplane supports and interconnects a number of cPCI cards, including a processor card and peripheral cards, to form a functional system.
  • the cards are generally oriented vertically, with power supply (PSU) modules located above or below. Fans force air through the enclosure from bottom to top for cooling the modules.
  • a peripheral card may have input and output (I/O) connections on its front panel.
  • I/O connections may be arranged at the rear of the enclosure, using a special “transition card”. Examples of rack widths in common use are 19 inch (483 mm) and 23 inch (584 mm). The siting of racks in telecommunications equipment rooms implies an enclosure depth should be little over 12 inches (305 mm).
  • cPCI and VME standard processor cards and compatible peripheral cards are already 205 mm deep (including mountings) and the conventional interface card mounted behind the back plane adds another 130 mm.
  • While parts of the connector pin-outs for cPCI products are standardised, different vendors use the other connectors differently for management bus signals and for LAN connections. These variations must also be accommodated by dedicated interconnect, and designs will often assume that cards from only a single vendor are used.
  • the invention provides a rack-mountable enclosure comprising a housing and interconnection backplane for the mounting and interconnection of a plurality of card-shaped processing modules and at least one interface module, the interface module being arranged to provide a plurality of external connectors and to transport signals via the backplane between each external connection and an individual processing module, wherein:
  • This arrangement allows a compact housing to contain several processing modules and to receive a corresponding number of external connections, in a more compact and functionally dense manner than known instrument chassis designs.
  • the location of the power supply module behind the backplane saves height and/or width in the rack.
  • the enclosure may be constructed so that the processor modules lie generally horizontally when the enclosure is rack mounted. Air paths may be defined through the enclosure so as to pass from end to end thereof, along and between the processor modules and, if necessary, the power supply and interface modules. Fans may be included, optionally in a redundant configuration, to ensure adequate air flow to cool the various components of the enclosure.
  • the interface module may change the format, for example to multiplex several of the external signals onto a single pair of conductors in the backplane.
  • the enclosure and modules will find particular application wherever a large quantity of data needs to be processed at speed, and reduced by filtering and aggregation to provide information for use elsewhere.
  • the switching module and interface module may provide for re-routing one of said signals from an external input connector to an additional output connector, to allow processing in another enclosure.
  • the number of external input connectors may exceed the capacity of processing modules that can be accommodated, or may match it.
  • the backplane may separately provide local bus interconnections for communication between the processing modules.
  • Said local bus interconnections may include a processor-peripheral parallel bus, for example cPCI.
  • the processing module locations may be subdivided into groups, each group receiving a set of separately pluggable modules which together co-operate for processing of a given external signal.
  • the backplane may in particular provide a plurality of independent local buses, each for communication between the modules of one group.
  • the groups may each include a first processor module having specific capability for a type of input signal (such as IP packet data) to be analysed, and a second processor module of generic type for receiving partially processed data from the first processor module, and for further processing and reducing said data for onward communication.
  • a first interface module, being the one referred to above, is for the signals to be processed (which broadly could mean input signals to be analysed or output signals being generated).
  • a second interface module is provided for communication for control and management purposes, such as the onward communication of the processing results via LAN.
  • the external outputs may be connections to a computer Local Area Network (LAN), which can also provide for remote control and configuration of the processing modules.
  • the LAN connections in the backplane can be unique to each module, and can further be duplicated for each module.
  • all modules can communicate via a common LAN.
  • the backplane may provide a dedicated location for a management module for selective routing of the LAN or other output communications from the external connectors to the processing modules.
  • the backplane may provide:
  • the enclosure and backplane may further provide a location for a communication and management module to provide one or more of the following functions:
  • the first aspect of the invention provides a rack-mountable enclosure comprising a housing, a power supply module, a fan assembly and an interconnection backplane for the mounting and interconnection of a plurality of card-shaped processing modules, wherein the processing modules in use are arranged to lie generally horizontally in front of the backplane and generally parallel with one another, the power supply module is located behind the backplane, and the fan assembly is located to left or right of the processing modules (in use, as viewed from the front) so as to provide a generally horizontal airflow between them.
  • a shared interface module or modules for providing external connections to the backplane and hence to all of the processing modules may also be located behind the backplane.
  • the cPCI standard defines a number of physical connectors to be present on the backplane, but only two of these (J1, J2) are specified as to their pin functions.
  • where the second processing modules mentioned above are generic processor cards based for example on Pentium (TM of Intel Corp.) microprocessors, different card vendors use the remaining connectors differently for communication and management signals such as SMB and LAN connections.
  • a multi-processor equipment enclosure provides a housing and a backplane providing locations for a plurality of processing modules, and further providing a plurality of locations for a configuration module corresponding to respective processing module locations, each configuration module adapting the routing of communication and management signals via the backplane, in accordance with the vendor-specific implementation of the processing module.
  • the configuration module locations may be on the backplane, or on another card connected to the backplane.
  • a communication and management module is provided at a specific location, and the configuration module locations are provided on the management module.
  • the type sensing protocols may for example be implemented via geographic address lines in the standardised portion of a compact PCI backplane.
  • known chassis designs and backplanes do not provide for several channels of signals to be monitored by independent processing sub-systems within the same chassis, especially when each monitoring unit processor in fact requires more than one card slot for its implementation.
  • a first processing module is dedicated to a first stage of data acquisition and processing, where the sheer quantity of broadband data would defeat a general-purpose processor card, and a second processing module of generic type performs further processing and onward reporting of the data processed by the first processing module.
  • a computer equipment chassis provides a housing and backplane providing locations for at least four independent processing sub-systems, each processing sub-system comprising first and second processing modules separately mounted on the backplane at adjacent locations, wherein the backplane provides at least four independent CPU-peripheral interfaces, each extending only between the adjacent locations of said first and second processing modules, the first processing module operating as a peripheral and the second processing module operating as host.
  • the switching unit may further be operable to connect the same incoming channel simultaneously to more than one channel processor.
  • the same bearer can therefore be monitored in different ways, without the need for another physical tap.
  • the channel processors may be in the form of modules mounted and interconnected on a common backplane.
  • the switching unit may comprise a further module mounted on said backplane.
  • the external input connectors may be provided by a common interface module separate from or integrated with the switching unit.
  • the external communication connectors may be connected to the channel processors via a communication management module and via the backplane.
  • the external communication connectors and communication management module may optionally provide for said onward communication to be implemented over plural independent networks for redundancy. Redundancy of the networks may extend to each channel processor itself providing two or more network connections.
  • the backplane provides an independent connection between each respective channel processor and the communication management module. This provides better redundancy than shared network communication.
  • the channel processors may each comprise a self-contained sub-system of host and peripheral processing modules interconnected via a CPU-peripheral interface in the backplane, the backplane providing a separate peripheral interface for each channel processor.
  • the interconnection may in particular comprise a parallel peripheral interface such as cPCI.
  • the above systems will typically further comprise one or more multi-channel optical power splitters, for tapping into active optical communications bearers to obtain the said incoming signals for the monitoring apparatuses.
  • the redundancy and adaptability within the monitoring system reduces the need for multiple monitoring taps, preserving the integrity of the network.
  • Such a device allows multiple monitoring applications to be performed on a network signal with only one optical tap being inserted in the physical bearer or the operating network. Redundancy in the monitoring equipment can be provided, also with the single bearer tap. Change in the configuration of the monitoring equipment can be implemented without disturbing the bearer operation, or even the other monitoring applications.
  • the replicating device may further comprise one or more additional optical outputs, and a selector device for selecting which of the input signals is replicated at said additional output. This selection can be useful in particular in response to fault situations and planned outages within the network monitoring equipment.
  • FIG. 1 shows a model of a typical ATM network.
  • FIG. 2 shows a data collection and packet processing apparatus connected to a physical telecommunications network via a LAN/WAN interconnect.
  • FIG. 3 shows the basic functional architecture of a novel network probe apparatus, as featured in FIG. 2 .
  • FIG. 4 shows a simple network monitoring system which can be implemented using the apparatus of the type shown in FIG. 3 .
  • FIG. 8 shows an example of daisy chaining the probe chassis of FIG. 7 giving 8+1 redundancy.
  • FIG. 10 shows a second means of increasing processing power by linking more than one chassis together.
  • FIG. 11 shows a signal replicating device (referred to as a Broadband Bridging Isolator (BBI)) for use in a network monitoring system.
  • FIG. 12 shows a typical configuration of a network monitoring system using the BBI of FIG. 11 and several probe apparatuses.
  • FIGS. 13A and 13B illustrate a process of upgrading the processing power of a network monitoring system without interrupting operation.
  • FIG. 15A shows the general physical layout of modules in a specific network probe apparatus implemented in a novel chassis and backplane.
  • FIG. 15B is a front view of the chassis and backplane of FIG. 15A with all modules removed, showing the general layout of connectors and interconnections in the backplane.
  • FIG. 15C is a rear view of the chassis and backplane of FIG. 15A with all modules removed, showing the general layout of connectors and interconnections in the backplane, and showing in cut-away form the location of a power supply module.
  • FIG. 16 shows in block schematic form the interconnections between modules in the apparatus of FIGS. 15 A-C.
  • FIG. 17 is a block diagram showing in more detail a cross-point switch module in the apparatus of FIG. 16 , and its interconnections with other modules.
  • FIG. 18 is a block diagram showing in more detail a packet processor module in the apparatus of FIG. 16 , and its interconnections with other modules.
  • FIG. 19 is a block diagram showing in more detail a combined LAN and chassis management card in the apparatus of FIG. 16 .
  • FIG. 1 shows a model of a telecommunication network 10 based on asynchronous transfer mode (ATM) bearers. Possible monitoring points on various bearers in the network are shown at 20 and elsewhere.
  • Each bearer is generally an optical fibre carrying packetised data with both routing information and data “payload” travelling in the same data stream.
  • “bearer” is used to mean the physical media that carries the data and is distinct from a “link”, which in this context is defined to mean a logical stream of data. Many links of data may be multiplexed onto a single bearer.
  • channel refers to a link (as defined), or “channel” may be used to refer to one of a number of virtual channels being carried over one link, which comprises the logical connection between two subscribers, or between a subscriber and a service provider. Note that such “channels” within the larger telecommunications network should not be confused with the monitoring channels within the network probe apparatus of the embodiments to be described hereinafter.
  • the payload may comprise voice traffic and/or other data. Different protocols may be catered for, with examples showing connections to Frame Relay Gateway, ATM and DSLAM equipment being illustrated. User-Network traffic 22 and Network-Network traffic 24 are shown here as dashed lines and solid lines respectively.
  • In FIG. 2 , various elements 25 - 60 of a data collection and packet processing system distributed at different sites are provided for monitoring bearers L 1 -L 8 etc. of a telecommunications network.
  • the bearers in the examples herein operate in pairs L 1 , L 2 etc. for bi-directional traffic, but this is not universal, nor is it essential to the invention.
  • Each pair is conveniently monitored by a separate probe unit 25 , by means of optical splitters S 1 , S 2 etc. inserted in the physical bearers.
  • one probe unit 25 which monitors bearers L 1 and L 2 , is connected to a local area network (LAN) 60 , along with other units at the same site.
  • the probe unit 25 on an ATM/IP network must examine a vast quantity of data, and can be programmed to filter the data by a Virtual Channel (VC) as a means of reducing the onboard processing load. Filtering by IP address can be used to the same effect in the case of IP over SDH and other such optical networks. Similar techniques can be used for other protocols.
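  • As a hedged illustration of such filtering, the sketch below keeps only records matching a configured set of virtual channels or IP addresses; the record layout and field names are assumptions made for the example, not the probe's actual data structures.

```python
# Minimal sketch of probe-side filtering by VC or IP address (assumed record format).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    vpi_vci: Optional[int]   # present for ATM traffic
    ip_src: Optional[str]    # present for IP traffic
    payload: bytes

MONITORED_VCS = {42, 77}                 # assumed VCs of interest
MONITORED_IPS = {"10.0.0.5", "10.0.0.9"} # assumed IP addresses of interest

def keep(rec: Record) -> bool:
    """Return True if the record should be passed on for full processing."""
    if rec.vpi_vci is not None:
        return rec.vpi_vci in MONITORED_VCS
    if rec.ip_src is not None:
        return rec.ip_src in MONITORED_IPS
    return False

print(keep(Record(vpi_vci=42, ip_src=None, payload=b"cell")))   # True
print(keep(Record(vpi_vci=None, ip_src="10.0.0.1", payload=b""))) # False
```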
  • Site processors 40 collate and aggregate the large quantity of information gathered by the probe units, and pass the results via a Wide Area Network (WAN) 30 to a central site 65 .
  • this information may be used for network planning and operations. It may alternatively be used for billing according to the volume of monitored traffic per subscriber or service provider or other applications.
  • FIG. 3 shows the basic functional architecture of a multi-channel optical fibre telecommunications probe apparatus 50 combining several individual probe units, into a more flexible system than has hitherto been available.
  • the network monitoring apparatus shown receives N bearer signals 70 such as may be available from N optical fibre splitters. These enter a cross-point switch 80 capable of routing each signal to any of M individual and independently-replaceable probe units 90 .
  • Each probe unit corresponds in functionality broadly with the unit 25 shown in FIG. 2 .
  • An additional external output 85 from the cross-point switch 80 is routed to an external connector. This brings important benefits, as will be described below.
  • the cross-point switch 80 and interconnections shown in FIG. 3 may be implemented using different technologies, for example using passive optical or optoelectronic cross-points.
  • High speed networks, for example OC48, require electrical path lengths to be as short as possible.
  • An optical switch would therefore be desirable, deferring as much as possible the conversion to electrical.
  • the optical switching technology is not yet fully mature. Therefore the present proposal is to have an electrical implementation for the cross-point switch 80 , with the signals converted from optical to electrical at the point of entry into the probe apparatus 50 .
  • the scale of an optoelectronic installation will be limited by the complexities of the cross-point switch and size of the probe unit.
  • the choice of interconnect technology (for example between electrical and optical), is generally dependent on signal bandwidth-distance product. For example, in the case of high bandwidth/speed standards such as OC3, OC12, or OC48, inter-rack connections may be best implemented using optical technology.
  • A detailed implementation of the probe apparatus in a specific embodiment will be described with reference to FIGS. 14 to 19 .
  • a novel chassis arrangement for multi-channel processing products is described, with reference to FIGS. 15A-15C , which may find application in fields beyond telecommunications monitoring.
  • applications of the multi-channel probe architecture will be described, with reference to FIGS. 4 to 13 .
  • FIG. 4 shows a simple monitoring application which can be implemented using the apparatus of the type shown in FIG. 3 .
  • the cross-point switch 80 is integrated into the probe chassis 100 together with up to four independently operating probe units.
  • each probe unit ( 90 in FIG. 3 ) is formed by a packet processor module 150 and single board computer SBC 160 as previously described.
  • SBC 160 in each probe unit has the capacity to analyse and report the data collected by the two packet processors.
  • Other modules included in the chassis provide LAN interconnections for onward reporting of results, probe management, power supply and cooling (these modules are not shown in FIGS. 4 to 13 B).
  • FIG. 6 shows a larger redundant system comprising four primary probe apparatuses (chassis 100 - 1 to 100 - 4 ), and a backup chassis 130 which operates in the event of a failure in one of the primary chassis.
  • In this large redundant system there are 16 duplex bearers being monitored.
  • Each external input pair of the backup chassis is connected to receive a duplex bearer signal from the external optical output 85 of a respective one of the primary apparatuses 100 - 1 to 100 - 4 .
  • a spare probe unit within the backup chassis can take over the out-of-service unit's function. Assuming all inputs and all of the primary probe units are operational in normal circumstances, we may say that 4:1 redundancy is provided.
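  • A minimal sketch of the 4:1 failover bookkeeping implied here is given below; the unit names and data structures are illustrative assumptions, and the real re-routing is performed by commanding the cross-point switch 80 of the affected chassis.

```python
# Illustrative 4:1 redundancy bookkeeping (assumed data model, not the apparatus firmware).

primary_units = {f"chassis{c}/probe{p}": "ok" for c in range(1, 5) for p in range(1, 5)}
spare_units = [f"backup/probe{p}" for p in range(1, 5)]
backup_assignment = {}  # failed primary unit -> spare probe unit in backup chassis

def on_failure(unit: str) -> str:
    """Mark a primary probe unit failed and route its channel to a spare."""
    primary_units[unit] = "failed"
    if unit in backup_assignment:
        return backup_assignment[unit]
    if not spare_units:
        raise RuntimeError("no spare capacity left in backup chassis")
    spare = spare_units.pop(0)
    backup_assignment[unit] = spare
    # In the real apparatus this is where the cross-point switch of the
    # affected primary chassis would copy the bearer signal to its external
    # output 85, feeding the backup chassis 130.
    return spare

print(on_failure("chassis2/probe3"))   # -> backup/probe1
```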
  • the chassis containing the optical interfaces can if desired have redundant communications and/or power supply units (PSUs) and adopt a “hot swap” strategy to permit rapid replacement of any hardware failures.
  • Hot swap in this context means the facility to unplug one module of a probe unit within the apparatus and replace it with another without interrupting the operation or functionality of the other probe units. Higher levels of protection can be provided on top of this, if desired, as described below with reference to FIGS. 15 A-C and 16 .
  • FIG. 7 shows a modified probe apparatus which provides an additional optical input 170 to the cross-point switch 80 .
  • the cross-point switch 80 has inputs for more bearer signals than can be monitored by the probe units within its chassis.
  • the cross-point switch 80 has outputs for more signals than can be monitored by the probe units within the chassis.
  • These additional inputs 170 and outputs 85 can be used to connect a number of probe chassis together in a “daisy chain”, to provide extra redundancy and/or processing power.
  • copies of the bearer signals received at daisy chain inputs 170 are routed to the external outputs 85 . Any other routing can be commanded, however, either from within the apparatus or from outside via the LAN (not shown).
  • FIG. 8 shows an example of daisy chaining the probe chassis to give 8:1 redundancy.
  • the four primary probe chassis 100 - 1 to 100 - 4 are connected in pairs ( 100 - 1 & 100 - 2 and 100 - 3 & 100 - 4 ).
  • the external outputs 85 on the first chassis of each pair are connected to the daisy chain inputs 170 on the second chassis.
  • the external outputs 85 of the second chassis are connected to inputs of a spare or backup chassis 130 as before.
  • These connections can carry the signal through to the spare chassis when there has been a failure in a probe unit in either the first or second chassis in each pair.
  • the backup chassis 130 still has two spare pairs of external inputs. Accordingly, the system could be extended to accommodate a further four chassis (up to sixteen further probe units, and up to thirty-two further bearer signals), with the single backup chassis 130 providing some redundancy for all of them.
  • two chassis 100 - 1 and 100 - 2 are fully loaded with four probe units 90 each. External signals for all eight probe units are received at 140 from a single duplex bearer.
  • the cross-point switch 80 is used to replicate these signals to every probe unit 90 within the chassis 100 - 1 , and also to the external outputs 85 of the first chassis 100 - 1 . These outputs in turn are connected to one pair of inputs 144 of the second chassis 100 - 2 . Within the second chassis, the same signals are replicated again and applied to all four probe units, and (optionally) to the external outputs 85 of the second chassis 100 - 2 .
  • all eight probe units are able to apply their processing power to the same pair of signals, without tapping into the bearer more than once.
  • in this way the processing power applied to a bearer is scaleable, practically to as much as is needed.
  • one output 85 of a first chassis might be connected to one input 170 of a second chassis, while the other output 85 is connected to an input 170 of a third chassis.
  • This arrangement can be repeated if desired to form a bi-directional ring of apparatuses, forming a kind of “optical bus”.
  • the probe apparatus described above allow the system designer to achieve N+1 redundancy by using the cross-point switch 80 to internally re-route a bearer to a spare processor, or to another chassis.
  • some types of failure (e.g. in the chassis power supply) would affect all probe units in a chassis at once; it is possible to reduce such a risk by providing N+1 PSU redundancy, as described elsewhere herein.
  • FIG. 11 shows an optional signal replicating device for use in conjunction with the probe apparatus described above, or other monitoring apparatus.
  • This device will be referred to as a Broadband Bridging Isolator (BBI).
  • Broadband Bridging Isolator can be scaled to different capacities, and to provide additional fault tolerance independently of the probe apparatuses described above.
  • the basic unit comprises a signal replicator 175 .
  • an (optical) input 176 is converted at 177 to an electrical signal, which is then replicated and converted at 179 etc. to produce a number of identical optical output signals at outputs 178 - 1 etc.
  • BBI 172 takes a single tap input 176 from a bearer being monitored and distributes this to multiple monitoring devices, for example probe apparatuses of the type shown in FIGS. 3 to 10 .
  • the standby selector 180 allows any of the input signals to be switched to a standby chassis.
  • the number of outputs that are duplicated from each input is not critical.
  • a typical implementation may provide four, eight or sixteen replicators 175 in a relatively small rack mountable chassis, each having (for example) four outputs per input.
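  • The behaviour of such a replicator and its standby selector can be modelled roughly as below; the class and field names are assumptions for illustration only.

```python
# Behavioural sketch of the Broadband Bridging Isolator (assumed names).

class BBI:
    def __init__(self, n_inputs: int = 8, fanout: int = 4):
        self.n_inputs = n_inputs
        self.fanout = fanout          # replicated outputs per input (178-1, 178-2, ...)
        self.standby_input = None     # which input the standby output currently follows

    def outputs_for(self, signal_by_input: dict) -> dict:
        """Map each driven input to its replicated output signals."""
        outs = {}
        for i, signal in signal_by_input.items():
            # O/E conversion (177), electrical replication, E/O conversion (179)
            outs[i] = [signal] * self.fanout
        return outs

    def select_standby(self, input_index: int) -> None:
        """Route a chosen input to the standby output, e.g. on probe failure (180)."""
        if not 0 <= input_index < self.n_inputs:
            raise ValueError("no such input")
        self.standby_input = input_index

bbi = BBI(n_inputs=4, fanout=4)
print(bbi.outputs_for({0: "bearer L1 tap"}))
bbi.select_standby(0)
```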
  • although the concepts here are described in terms of optical bearers, the same concepts could be applied to high speed electrical bearers (e.g. E3, DS3 and STM-1e).
  • each optical tap reduces the strength of the optical signal reaching the receiver.
  • adding a tap may require boosting the signal on the operational bearer.
  • Network operators do not want to disrupt their operational networks unless they have to.
  • the BBI allows different monitoring apparatuses for different applications to be connected, and removed and re-configured without affecting the operational bearer, hence the name “isolator”.
  • the BBI can even be used to re-generate this signal by feeding one of the outputs back into the network, so that the BBI becomes part of the operational network.
  • FIG. 12 shows a system configuration using BBIs and two separate probe chassis 100 - 1 and 100 - 2 implementing separate monitoring applications.
  • the two application chassis may be operated by different departments within the network operator's organisation.
  • a third, spare probe chassis 130 is shared in a standby mode.
  • This example uses two BBIs 172 to monitor a duplex bearer pair shown at L 1 , L 2 , and other bearers not shown.
  • Splitters S 1 and S 2 respectively provide tap input signals from L 1 , L 2 to the inputs 176 of the separate BBIs.
  • Each BBI duplicates the signal at its input 176 to two outputs 178 , in the manner described above with reference to FIG. 11 .
  • the two four-way BBIs 172 are used to handle the two halves L 1 and L 2 of the duplex bearer separately.
  • the two halves of the same duplex bearer are handled by different BBIs.
  • Three further duplex bearers (L 3 -L 8 , say, not shown in the drawing), are connected to the remaining inputs of the BBIs 172 in a similar fashion.
  • any one of the bearers can be switched through to the standby chassis 130 in the event of a failure of a probe unit in one of the main probe chassis 100 - 1 , 100 - 2 . It will be appreciated that, if there is a failure of a complete probe chassis, then only one of the bearers can be switched through to the standby probe. In a larger system with, say, 16 duplex bearers, four main probe chassis and two standby chassis, the bearers distributed by each BBI can be shared around the probe chassis so that each probe chassis processes one bearer from each BBI. Then all four bearers can be switched to the standby probe in the event of a complete chassis failure.
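  • The bearer-sharing arrangement just described can be sketched as follows, assuming four BBIs each distributing four bearers to four main chassis; the counts and labels are illustrative only.

```python
# Sketch of the bearer-to-chassis assignment suggested above: each of four
# main chassis takes one bearer from each of four BBIs, so the loss of a
# whole chassis affects exactly one bearer per BBI, and every affected
# bearer can be re-routed through its BBI's standby outputs.

N_BBIS = 4
BEARERS_PER_BBI = 4
N_CHASSIS = 4

assignment = {f"chassis{c + 1}": [] for c in range(N_CHASSIS)}
for b in range(N_BBIS):
    for k in range(BEARERS_PER_BBI):
        chassis = f"chassis{(k % N_CHASSIS) + 1}"
        assignment[chassis].append(f"BBI{b + 1}/bearer{k + 1}")

for chassis, bearers in assignment.items():
    print(chassis, bearers)
# chassis1 carries bearer1 from every BBI, chassis2 carries bearer2, and so on.
```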
  • the BBI offers increased resilience for users particularly when they have multiple departments wanting to look at the same bearers.
  • the size of the BBI used is not critical and practical considerations will influence the number of inputs and outputs.
  • the BBI could provide inputs for 16 duplex bearers, each being distributed to two or three outputs with four standby outputs. Where multiple standby circuits are used each will be capable of being independently switched to any of the inputs.
  • FIGS. 13A and 13B illustrate a process of upgrading the processing power of a network monitoring system without interrupting operation, using the facilities of the replicating devices (BBIs 172 ) and probe chassis described above.
  • FIG. 13A shows an example of an “existing” system with one probe chassis 100 - 1 .
  • Four duplex bearer signals are applied to inputs 140 of the chassis. Via the internal cross-point switch 80 , each bearer signal is routed to one probe unit 90 .
  • a broadband bridging isolator BBI.
  • Each bearer signal is received from a tap in the actual bearer (not shown) at a BBI input 176 .
  • the same bearer signal is replicated at BBI outputs 178 - 1 , 178 - 2 etc.
  • the first set of outputs 178 - 1 are connected to the inputs 140 of the probe chassis.
  • the second set of outputs 178 - 2 are not used in the initial configuration.
  • FIG. 13B shows an expanded system, which includes a second probe chassis 100 - 2 also loaded with four probe units 90 . Consequently there are now provided two probe units per bearer, increasing the processing power available per bearer. It is a simple task to migrate from the original configuration in FIG. 13A to the new one shown in FIG. 13B :
  • the hardware and methods used in these steps can be arranged to comply with “hot-swap” standards as defined earlier.
  • the system of FIGS. 13A and 13B may further provide automatic sensing of the removal (or failure) of a probe unit (or entire chassis), and automatic re-configuration of switches and re-programming of probe units to resume critical monitoring functions with minimum delay.
  • the engineer would instruct the re-programming prior to any planned removal of a probe unit module.
  • a further level of protection which allows completely uninterrupted operation with minimum staff involvement, is to sense the unlocking of a processing card prior to actual removal, to reconfigure other units to take over the functions of the affected module, and then to signal to the engineer that actual removal is permitted. This will be illustrated further below with reference to FIG. 15A .
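  • One possible realisation of this lever-sensed sequence is sketched below; the state names and callback hooks are assumptions, not the actual firmware of the apparatus.

```python
# Hedged sketch of the lever-sensed hot-swap sequence described above.

from enum import Enum, auto

class SlotState(Enum):
    IN_SERVICE = auto()
    RECONFIGURING = auto()   # lever unlocked, traffic being migrated
    SAFE_TO_REMOVE = auto()  # visual indicator lit for the engineer

class Slot:
    def __init__(self, name: str, migrate_channels, light_indicator):
        self.name = name
        self.state = SlotState.IN_SERVICE
        self._migrate = migrate_channels     # re-routes this slot's monitoring channels
        self._light = light_indicator        # drives the front-panel signal

    def on_lever_unlocked(self) -> None:
        """Thumb-lever switch opened: migrate work before the card leaves."""
        self.state = SlotState.RECONFIGURING
        self._migrate(self.name)             # e.g. command the cross-point switch
        self.state = SlotState.SAFE_TO_REMOVE
        self._light(self.name, on=True)      # tell the engineer removal is permitted

slot = Slot("F2",
            migrate_channels=lambda s: print(f"re-routing channels of {s}"),
            light_indicator=lambda s, on: print(f"{s}: removal permitted = {on}"))
slot.on_lever_unlocked()
```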
  • a network interface module 200 provides optical fibre connectors for the incoming bearer signals EXT 1 - 8 ( 70 - 1 to 70 -N in FIG. 3 ), and performs optical to electrical conversion.
  • a cross-point switch 80 provides a means of linking these connections to appropriate probe units 90 .
  • Each input of a probe unit can be regarded as a separate monitoring channel CH 1 , CH 2 etc. As mentioned previously, each probe unit may in fact accept plural signals for processing simultaneously, and these may or may not be selectable independently, or grouped into larger monitoring channels. Additional optical outputs EXT 9 , 10 are provided to act as “spare” outputs (corresponding to 85 in FIG. 3 ).
  • each probe unit 90 controls the cross-point switch 80 to feed its inputs (forming channel CH 1 , 2 , 3 or 4 etc.) with a bearer signal selected from among the incoming signals EXT 1 - 8 .
  • This selection may be pre-programmed in the apparatus, or may be set by remote command over a LAN.
  • Each probe unit ( 90 ) is implemented in two parts, which may conveniently be realised as a specialised packet processor 150 and a general purpose single board computer SBC 160 module. There are provided four packet processors 150 - 1 to 150 - 4 , together capable of filtering and pre-processing eight half duplex bearer signals at full rate, and four SBCs 160 capable of further processing the results obtained by the packet processors.
  • LAN and chassis management modules 230 , 235 (which in the implementation described later are combined on a single card) provide central hardware platform management and onward communication of the processing results. For this onward communication, multiple redundant LAN interfaces are provided between every SBC 160 and the LAN management module 230 across the backplane.
  • the LAN management function has four LAN inputs (one from each SBC) and four LAN outputs (for redundancy) to the monitoring LAN network. Multiple connections are provided because different SBC manufacturers use different pins on their connectors. For any particular manufacturer there is normally only one connection between the SBC 160 and the LAN management module 230 .
  • the dual redundant LAN interfaces are provided for reliability in reporting the filtered and processed data to the next level of aggregation (site processor 40 in FIG. 2 ).
  • Each outgoing LAN interface is connectable to a completely independent network, LANA or LANB to ensure reporting in case of LAN outages. In case of dual outages, the apparatus has buffer space for a substantial quantity of reporting data.
  • the chassis management module 235 oversees monitoring and wiring functions via (for example) an I 2 C bus using various protocols. Although I 2 C is normally defined as a shared bus system, each probe unit for reliability has its own I 2 C connection direct to the management module.
  • the management module can also instruct the cross-point switch to activate the “spare” output (labelled as monitoring channels CH 9 , 10 and optical outputs EXT 9 , 10 ) when it detects failure of one of the probe unit modules. This operation can also be carried out under instruction via LAN.
  • FIGS. 15A , B and C show how the probe architecture of FIG. 14 can be implemented with a novel chassis, in a particularly compact and reliable manner.
  • To support the network probe architecture for this embodiment there is also provided a custom backplane 190 .
  • FIG. 16 shows which signals are carried by the backplane, and which modules provide the external connections. Similar reference signs are used as in FIG. 14 , where possible.
  • the network probe apparatus again has eight external optical terminals for signals EXT 1 - 8 to be monitored. These are received at a network interface module 200 .
  • a cross-point switch module 80 receives eight corresponding electrical signals EXT 1 ′- 8 ′ from module 200 through the backplane 190 .
  • Switch 80 has ten signal outputs, forming eight monitoring channels CH 1 - 8 plus two external outputs (CH 9 , 10 ).
  • Four packet processor modules 150 - 1 to 150 - 4 receive pairs of these channels CH 1 , 2 , CH 3 , 4 etc. respectively.
  • The CH 9 , 10 signals are fed back to the network interface module 200 , and reproduced in optical form at external terminals EXT 9 , 10 . All internal connections just mentioned are made via transmission lines in the backplane 190 .
  • Each packet processor is paired with a respective SBC 160 - 1 to 160 - 4 by individual cPCI bus connections in the backplane.
  • a LAN & Chassis Management module 230 is provided, which is connected to the other modules by I 2 C buses in the backplane, and by LAN connections.
  • a LAN interface module 270 provides external LAN connections for the onward reporting of processing results.
  • a fan assembly 400 for cooling and a power supply (PSU) module 420 are also provided.
  • chassis 100 carries a backplane 190 and provides support and interconnections for various processing modules.
  • the processing modules are arranged in slots to the “front” of the backplane, and space behind the backplane in a telecommunications application is occupied by specialised interconnect.
  • This specialised interconnect may include further removable I/O cards referred to as “transition cards”.
  • the power supply and fans are generally located above and/or below the main card space, and the cards (processing modules) are arranged vertically in a vertical airflow.
  • the power supply module (PSU) 420 is located in a shallow space behind the backplane 190 .
  • the processing modules 150 - 1 , 160 - 1 etc. at the front of the backplane are, moreover, arranged to lie horizontally, with their long axes parallel to the front panel.
  • the cooling fans 400 are placed to one side of the chassis. Airflow enters the chassis at the front at 410 and flows horizontally over the components to be cooled, before exiting at the rear at 412 .
  • This arrangement gives the chassis a high cooling capability while at the same time not extending the size of the chassis beyond the desired dimensions.
  • the outer dimensions and front flange of the housing allow the chassis to be mounted on a standard 19 inch (483 mm) equipment rack, with just 5 U height.
  • the horizontal arrangement allows the space occupied by the enclosure to be matched to the number of processor slots required by the application.
  • with a conventional vertical card arrangement, by contrast, a chassis which provides ten slots must be just as high as one which provides twenty slots, and additional height must be allowed for airflow arrangements at top and bottom.
  • As shown in FIGS. 15B and 15C , there are ten card slots labelled F 1 -F 10 on the front side of the backplane 190 .
  • the front slot dimensions correspond to those of the cPCI standard, which also defines up to five standard electrical connectors referred to generally as J1 to J5, as marked in FIGS. 15B and 15C .
  • connectors J1 and J2 have 110 pins each, and the functions of these are specified in the cPCI standard (version PICMG 2.0 R2.1 (March 1st 1998)).
  • Eight of the front slots support the Packet Processor/SBC cards in pairs.
  • the cards are removable using ‘hot swap’ techniques, as previously outlined, using thumb levers 195 to lock/unlock the cards and to signal that a card is to be inserted/removed.
  • the other two front slots F 9 and F 10 are used for cross-point switch 80 and LAN/Management card 230 respectively.
  • Slots F 1 to F 8 comply with the cPCI standard insofar as connectors J1, J2, J3 and J5 are concerned. Other bus standards such as VME could also be used.
  • the other slots F 9 and F 10 are unique to this design.
  • cPCI connections are standard and the connectivity, routing and termination requirements are taken from the cPCI standard specification. Keying requirements are also taken from the cPCI standard.
  • the cPCI bus does not connect all modules, however: it is split into four independent buses CPCI 1 - 4 to form four self-contained host-peripheral processing sub-systems. Failure of any packet processor/SBC combination will not affect the other three probe units.
  • Each of the cards is hot-swappable and will automatically recover from any reconfiguration. Moreover, by providing switches responsive to operation of the thumb levers 195 , prior to physical removal of the card, the system can be warned of impending removal of a module. This warning can be used to trigger automatic re-routing of the affected monitoring channel(s).
  • the engineer replacing the card can be instructed to await a visual signal on the front panel of the card or elsewhere, before completing the removal of the card. This signal can be sent by the LAN/Management module 270 , or by a remote controlling site.
  • This scheme allows easy operation for the engineer, without any interruption of the monitoring functions, and without special steps to command the re-routing. Such commands might otherwise require the co-ordination of actions at the local site with staff at a central site, or at best the same engineer might be required to move between the chassis being worked upon and a nearby PC workstation.
  • the J4 position in the backplane is customised to route high integrity network signals (labelled “RF” in FIG. 15B ). These are transported on custom connections not within cPCI standards.
  • FIG. 15B shows schematically how these connectors transport the bearer signals in monitoring channels CH 1 etc. from the cross-point switch 80 in slot F 9 to the appropriate packet processors 150 - 1 etc. in slots F 2 , F 4 , F 6 , F 8 .
  • the external bearer signals EXT 1 ′- 8 ′ in electrical form can be seen passing through the backplane from the cross-point switch 80 (in slot F 9 ) to the network interface module 200 (B 1 ). These high speed, high-integrity signals are carried via appropriately designed transmission lines in the printed wiring of the backplane 190 .
  • the backplane also carries I 2 C buses (SMB protocol) and the LAN wiring. These are carried to each SBC 160 - 1 etc. either in the J3 position or the J5 position, depending on the manufacturer of the particular SBC, as described later.
  • the LAN interface module 270 provides the apparatus with two external LAN ports for communications to the next layer of data processing/aggregation, for example a site processor.
  • Connectivity is achieved using two LANs (A and B) at 100 BaseT for a cardcage.
  • the LAN I/O can be arranged to provide redundant connection to the external host computer 40 . This may be done, for example, by using four internal LAN connections and four external LAN connections routed via different segments of the LAN 60 . It is therefore possible to switch any SBC to either of the LAN connections such that any SBC may be on any one connection or split between connections. This arrangement may be changed dynamically according to circumstances, as in the case of an error occurring, and allows different combinations of load sharing and redundancy. Additionally, this allows the probe processors to communicate with each other without going on the external LAN. However, this level of redundancy in the LAN connection cannot be achieved if the total data from the probe processors exceeds the capacity of any one external LAN connection. A sketch of this load-sharing and failover behaviour is given below.
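  • The following sketch illustrates the kind of load-sharing and failover decision described above, assuming notional per-SBC reporting rates and a 100 Mbit/s uplink per external LAN; the figures and names are illustrative only.

```python
# Illustrative model of switching SBC uplinks between LAN A and LAN B
# (figures and names assumed; the real routing is done by LAN switches 500A/B).

SBC_RATE_MBPS = {"SBC1": 30, "SBC2": 45, "SBC3": 25, "SBC4": 40}  # assumed loads
LAN_CAPACITY_MBPS = 100  # assumed 100BaseT uplink capacity

def assign(active_lans):
    """Spread SBC traffic across the LANs that are currently up."""
    if not active_lans:
        raise RuntimeError("no external LAN available: buffer results locally")
    lans = {lan: 0 for lan in active_lans}
    plan = {}
    for sbc, rate in sorted(SBC_RATE_MBPS.items(), key=lambda kv: -kv[1]):
        lan = min(lans, key=lans.get)        # least-loaded active LAN
        lans[lan] += rate
        plan[sbc] = lan
    if any(load > LAN_CAPACITY_MBPS for load in lans.values()):
        print("warning: a single LAN cannot carry all traffic, redundancy lost")
    return plan

print(assign(["LANA", "LANB"]))   # normal operation: load shared
print(assign(["LANB"]))           # LAN A outage: all SBCs fail over to LAN B
```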
  • FIG. 17 is a block diagram of the cross-point switch 80 and shows also the network line interfaces 300 (RX) and 310 (TX) provided on the network interface module 200 .
  • Ten individually configurable multiplexers (selectors) M are provided, each freely selecting one of the eight inputs.
  • Each monitoring channel (CH 1 - 8 ) and hence each packet processor 150 can receive any of the eight incoming network signals (EXT 1 ′- 8 ′).
  • the outputs to the packet processors are via the backplane 190 (position J4, FIG. 15B as described above) and may follow, amongst others, DS3/OC3/OC12/OC48 electrical standards or utilise a suitable proprietary interface.
  • Each packet processor module 150 controls its own pair of multiplexers M directly.
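  • A minimal software model of this 8-input, 10-output multiplexer arrangement is sketched below; the representation is an assumption made for illustration, the real switch being a hardware device.

```python
# Minimal model of the 8-in/10-out cross-point switch: ten multiplexers,
# each selecting one of the eight electrical inputs EXT1'-8'.

N_INPUTS, N_OUTPUTS = 8, 10   # CH1-8 to packet processors, CH9-10 to EXT9, 10

class CrossPoint:
    def __init__(self):
        self.select = [0] * N_OUTPUTS    # multiplexer settings, 0 = unrouted

    def route(self, output_ch: int, input_ext: int) -> None:
        """Connect monitoring channel output_ch (1-10) to input EXT input_ext (1-8)."""
        if not (1 <= output_ch <= N_OUTPUTS and 1 <= input_ext <= N_INPUTS):
            raise ValueError("channel or input out of range")
        self.select[output_ch - 1] = input_ext

    def snapshot(self, inputs):
        """inputs: list of 8 signals; returns the 10 routed output signals."""
        return [inputs[sel - 1] if sel else None for sel in self.select]

xp = CrossPoint()
xp.route(1, 3)     # packet processor 150-1, channel CH1, monitors EXT3
xp.route(9, 3)     # the same bearer is also copied to spare output EXT9
print(xp.snapshot([f"EXT{i}'" for i in range(1, 9)]))
```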
  • FIG. 18 is a block diagram of one of the Packet Processor modules 150 of the apparatus.
  • the main purpose of packet processor (PP) 150 is to capture data from the network interface. This data is then processed, analysed and filtered before being sent to a SBC via a local cPCI bus.
  • Packet processor 150 complies with Compact PCI Hot Swap specification PICMG-2.1 R 1.0, mentioned above. Packet Processor 150 here described is designed to work up to 622 Mbit/s using a Sonet/SDH frame structure carrying ATM cells using AAL 5 Segmentation And Reassembly (SAR). Other embodiments can be employed using the same architecture, for example to operate at OC48 (2.4 Gbit/s).
  • the chassis as described supports four such Packet Processor/SBC pairs, and each packet processor comprises two processing means to handle multiple bearer signals (multiple monitoring channels).
  • It is possible for the Packet Processor 150 to filter the incoming data. This is essential due to the very high speed of the broadband network interfaces being monitored, such as would be the case for OC-3 and above.
  • the incoming signals are processed by the Packet Processor, this generally taking the form of time stamping the data and performing filtering based on appropriate fields in the data. Different fields can be chosen accordingly, for example ATM cells by VPI/VCI (VC) number, IP by IP address, or filtering can be based on other, user defined fields. It is necessary to provide the appropriate means to recover the clock and data from the incoming signal, as the means needed varies dependent on link media and coding schemes used. In a typical example using ATM, ATM cells are processed by VPI/VCI (VC) number.
  • the Packet Processor is provided with means 320 to recover the clock and data from the incoming signal bit stream.
  • the data is then ‘deframed’ at a transmission convergence sub-layer 330 to extract the ATM cells.
  • the ATM cells are then time-stamped 340 and then buffered in a First In First Out (FIFO) buffer 350 to smooth the rate of burst type data. Cells from this FIFO buffer are then passed sequentially to an ATM cell processor 360 .
  • the packet processor can store ATM cells to allow it to re-assemble cells into a message—a Protocol Data Unit (PDU). Only when the PDU has been assembled will it be sent to the SBC. Before assembly, the VC of a cell is checked to ascertain what actions should be taken, for example, to discard cell, assemble PDU, or pass on the raw cell.
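  • The per-VC actions and PDU reassembly described above might be modelled as in the sketch below; the cell representation is simplified and the field names are assumptions.

```python
# Hedged sketch of per-VC cell handling and AAL5-style reassembly
# (cell layout simplified; names are assumptions, not the hardware design).

from collections import defaultdict

ACTIONS = {}                       # vc -> "discard" | "assemble" | "raw" (set by the SBC)
partial = defaultdict(bytearray)   # vc -> payload bytes accumulated so far

def on_cell(vc: int, payload: bytes, end_of_pdu: bool, timestamp: float, send_to_sbc):
    """Process one time-stamped cell taken from the FIFO buffer."""
    action = ACTIONS.get(vc, "discard")
    if action == "discard":
        return
    if action == "raw":
        send_to_sbc(("cell", vc, timestamp, payload))
        return
    # action == "assemble": accumulate until the PDU is complete
    partial[vc] += payload
    if end_of_pdu:
        send_to_sbc(("pdu", vc, timestamp, bytes(partial.pop(vc))))

ACTIONS[42] = "assemble"
on_cell(42, b"first-cell ", end_of_pdu=False, timestamp=0.001, send_to_sbc=print)
on_cell(42, b"last-cell", end_of_pdu=True, timestamp=0.002, send_to_sbc=print)
```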
  • PDU Protocol Data Unit
  • Data is transferred into the SBC memory using cPCI DMA transfers to a data buffer 38 .
  • This ensures the very high data throughput that may be required if large amounts of data are being stored.
  • the main limitation in the amount of data that is processed will be due to the applications software that processes it. It is therefore the responsibility of the Packet Processor 150 to carry out as much pre-processing of the data as possible so that only that data which is relevant is passed up into the application domain.
  • the first function of the Packet Processor 150 is to locate the instructions for processing the VC (virtual channel) to which the cell belongs. To do this it must convert the very large VPI/VCI of the cell into a manageable pointer to its associated processing instructions (VC # key). This is done using a hashing algorithm implemented by hash generator 390 , which in turn uses a VC hash table, as sketched below. Processor 150 , having located the instructions, can then process the cell.
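  • A rough software analogue of the VPI/VCI hashing and VC hash table lookup follows; the hash function and table size are assumptions for illustration, not the hardware's actual algorithm.

```python
# Sketch of the VPI/VCI-to-key lookup: hash the VPI/VCI pair into a small
# table whose entries point at the per-VC processing instructions.

TABLE_SIZE = 4096
vc_hash_table = [[] for _ in range(TABLE_SIZE)]   # bucket -> [(vpi_vci_word, vc_key)]
vc_instructions = {}                              # vc_key -> processing instructions

def hash_vc(vpi: int, vci: int) -> int:
    word = (vpi << 16) | vci                      # combine VPI/VCI into one word
    return (word * 2654435761) % TABLE_SIZE       # Knuth-style multiplicative hash (assumed)

def add_vc(vpi: int, vci: int, vc_key: int, instructions: str) -> None:
    vc_hash_table[hash_vc(vpi, vci)].append(((vpi << 16) | vci, vc_key))
    vc_instructions[vc_key] = instructions

def lookup(vpi: int, vci: int):
    word = (vpi << 16) | vci
    for entry, key in vc_hash_table[hash_vc(vpi, vci)]:
        if entry == word:
            return vc_instructions[key]
    return None                                   # unknown VC: e.g. discard

add_vc(vpi=1, vci=42, vc_key=0, instructions="assemble AAL5 PDUs")
print(lookup(1, 42))
```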
  • a time stamping function 340 can be synchronised to an external GPS time signal or can be adjusted by the SBC 160 .
  • the SBC can also configure and monitor the ‘deframer’ (e.g. set up frame formats and monitor alarms) as well as select the optical inputs (EXT 1 - 8 ) to be monitored.
  • Packet Processor 150 provides all of the necessary cPCI interface functions.
  • Each packet processor board 150 - 1 etc. is removable without disconnecting power from the chassis. This board will not impact the performance of other boards in the chassis other than the associated SBC.
  • the microprocessor notifies the presence or absence of the packet processor and processes any signal loss conditions generated by the Packet Processor.
  • the SBC module 160 is not shown in detail herein, being a general-purpose processing module, examples including the Motorola CPV5350, FORCE CPCI-730, and SMT NAPA.
  • the SBC 160 is a flexible, programmable device. In this specific embodiment two such devices may exist on one cPCI card, in the form of “piggyback” modules (PMCs).
  • the 100 BaseT interfaces, disk memory etc. may also be in the form of PMCs.
  • Communications are via the cPCI bus (J1/J2) on the input side and via the LAN port on the output side, and all other connections are via the backplane at the rear, except for diagnostic purposes, for which an RS-232 port is provided at the front.
  • FIG. 19 is a block diagram of the combined LAN and chassis management card for the network probe as has been described.
  • Module 230 performs a number of key management functions, although the probe units 150 / 160 can be commanded independently from a remote location, via the LAN interface.
  • the card firstly provides a means for routing probe units SMB and LAN connections, including dual independent LAN switches 500 A and 500 B to route the LAN connections with redundancy and sufficient bandwidth to the outside world.
  • FPGA Field Programmable Gate Array
  • the chassis carries, at some locations, cPCI processor modules from a choice of selected vendors, but these are coupled via cPCI bus to special peripheral cards. While such cards are known in principle, and the processor-peripheral bus is fully specified, the apparatus described does not have a conventional interconnect arrangement for the broadband signals, multiple redundant LAN connections and so forth. Even for the same functions, such as the LAN signals and I 2 C/SMB protocol for hardware monitoring, different SBC vendors place the relevant signals on different pins of the cPCI connector set; in particular, they may be on certain pins of J3 with some vendors, and at various locations in J5 with others.
  • the “Geographic Address” pins defined in the cPCI connector specifications may be available for signalling (under control of a start-up program) which type of SBC 160 is in a given slot. The routing of SMB, LAN and other signals can then be switched electronically under control of programs in the LAN & management card 230 .
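  • The start-up configuration step described here might look roughly like the sketch below; the vendor profiles and pin assignments are invented placeholders purely to illustrate the idea of switching the SMB/LAN routing per slot.

```python
# Hedged sketch of slot configuration at start-up: a code for each slot
# (imagined as derived from the geographic-address pins plus a probe of the
# fitted card) selects where that card's SMB and LAN signals are picked up.
# Vendor names and connector choices are illustrative assumptions only.

VENDOR_PROFILES = {
    "vendor_a": {"lan": "J3", "smb": "J3"},
    "vendor_b": {"lan": "J5", "smb": "J5"},
}

def configure_slot(slot: str, detected_vendor: str, set_mux) -> None:
    """Program the management card's routing for one SBC slot."""
    profile = VENDOR_PROFILES.get(detected_vendor)
    if profile is None:
        raise ValueError(f"slot {slot}: unknown SBC type '{detected_vendor}'")
    set_mux(slot, "LAN", profile["lan"])
    set_mux(slot, "SMB", profile["smb"])

def print_mux(slot, signal, connector):
    print(f"{slot}: pick up {signal} on connector {connector}")

for slot, vendor in {"F1": "vendor_a", "F3": "vendor_b"}.items():
    configure_slot(slot, vendor, print_mux)
```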
  • the invention in any of its aspects is not limited to the specific embodiments disclosed herein.
  • the invention is in no way limited to any particular type of processor, type of network to be monitored, protocol, choice of physical interconnect, choice of peripheral bus (cPCI v. VME, parallel v. serial etc.), number of bearers per chassis, number of bearers per monitoring channel, or number of monitoring channels per probe unit.

Abstract

A multi-channel network monitoring apparatus has input connectors for network signals to be monitored and four channel processors in a rack-mountable chassis/enclosure for receiving and processing a respective pair of incoming signals to produce monitoring results. Each processor operates independently of the others and is replaceable without interrupting their operation. LAN connectors enable onward communication of the monitoring results. A cross-point switch routes each incoming signal to a selected processor and can re-route a channel to another processor in the event of processor outage. Each processor has a self-contained sub-system of processing modules interconnected via a CPU-peripheral interface in a backplane, which provides a separate peripheral interface for each processor. The backplane provides locations for processors to lie horizontally across a major portion of the backplane area facing the front of the enclosure, and a location for an interface module over a minor portion of that area facing the rear, so as to provide external connectors at the rear of the enclosure. A power supply module is positioned over another portion of the backplane area, on the same side as the interface module. The location of the power supply module behind the backplane saves height and/or width in the rack.

Description

    INTRODUCTION
  • The invention relates to telecommunications networks, and in particular to apparatus and systems for monitoring traffic in broadband networks.
  • In telecommunication networks, network element connectivity can be achieved using optical fibre bearers to carry data and voice traffic.
  • Data traffic on public telecommunication networks is expected to exceed voice traffic with Internet Protocol (IP) emerging as one data networking standard, in conjunction with Asynchronous Transfer Mode (ATM) systems. Voice over IP is also becoming an important application for many Internet service providers with IP switches connecting IP networks to the public telephony network (PSTN). IP can be carried over a Sonet transport layer, either with or without ATM. In order to inter-operate with the PSTN, IP switches are also capable of inter-working with SS7, the common signalling system for telecommunications networks, as defined by the International Telecommunications Union (ITU) standard for the exchange of signalling messages over a common signalling network.
  • Different protocols are used to set up calls according to network type and supported services. The signalling traffic carries messages to set up calls between the necessary network nodes. In response to the SS7 messages, an appropriate link through the transport network is established, to carry the actual data and voice traffic (the payload data) for the duration of each call. Traditional SS7 links are time division multiplexed, so that the same physical bearer may be carrying the signalling and the payload data. The SS7 network is effectively an example of an “out of band” signalling network, because the signalling is readily separated from the payload. For ATM and IP networks, however, the signalling and payload data are statistically multiplexed on the same bearer. In the case of statistical multiplexing, the receiver has to examine each message/cell to decide if it is carrying signalling or payload data. One protocol similar to SS7 used in such IP networks is known as Gateway Control Protocol (GCP).
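  • The per-cell examination that statistical multiplexing forces on a monitoring receiver can be illustrated with a minimal sketch (Python is used here purely for illustration and forms no part of the apparatus described). It assumes, by way of example only, that signalling is carried on the well-known ATM signalling channel VPI 0/VCI 5; real deployments may provision signalling on other virtual channels.

      # Illustrative only: classify each 53-byte ATM cell as signalling or
      # payload by its virtual path/channel identifiers (UNI header format).
      SIGNALLING_CHANNELS = {(0, 5)}            # example set of (VPI, VCI)

      def parse_header(cell: bytes):
          vpi = ((cell[0] & 0x0F) << 4) | (cell[1] >> 4)
          vci = ((cell[1] & 0x0F) << 12) | (cell[2] << 4) | (cell[3] >> 4)
          return vpi, vci

      def classify(cell: bytes) -> str:
          vpi, vci = parse_header(cell)
          return "signalling" if (vpi, vci) in SIGNALLING_CHANNELS else "payload"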
  • The monitoring of networks and their traffic is a fundamental operational requirement. The “health” of the network must be monitored, to detect and ideally anticipate failures, overloads and so forth. Monitoring is also crucial to billing of usage charges, both to end users and between service providers. The reliability (percentage availability) of monitoring equipment is a prime concern for service providers and users, and many applications such as billing require “high availability” monitoring systems, such that outages, due to breakdown or maintenance, must be made extremely rare.
  • A widely-used monitoring system for SS7 signalling networks is acceSS7 ™ from Agilent Technologies (and previously from Hewlett-Packard). An instrument extracts all the SS7 packetised signals at Signalling Transfer Points (STPs), which are packet switches analogous to IP routers, that route messages between end points in SS7 networks. The need can be seen for similar monitoring systems able to cope with combined IP/PSTN networks, especially at gateways where the two protocols meet. A problem arises, however, in the quantity of data that needs to be processed for the monitoring of IP traffic. In Internet Protocol networks, there is no out of band signalling network separate from the data traffic itself. Rather, routing information is embedded in the packet headers of the data transport network itself, and the full data stream has to be processed by the monitoring equipment to extract the necessary information as to network health, billing etc. Moreover, IP communication is not based on allocating each “call” with a link of fixed bandwidth for the duration of the call: rather bandwidth is allocated by packets on demand, in a link shared with any number of other data streams.
  • Accordingly, there is a need for a new kind of monitoring equipment capable of grabbing the vast volume of data flowing in the IP network bearers, and of processing it fast enough to extract and analyse the routing and other information crucial to the monitoring function. The requirements of extreme reliability mentioned above apply equally in the new environment.
  • Networks such as these may be monitored using instruments (generally referred to as probes) by making a passive optical connection to the optical fibre bearer using an optical splitter. However, this approach cannot be considered without due attention to the optical power budget of the bearer, as the optical splitters are lossy devices. In addition to this, it may be desirable to monitor the same bearer many times or to monitor the same bearer twice as part of a backup strategy for redundancy purposes. With available instrumentation, this implies a multiplication of the losses, and also disruption to the bearers as each new splitter is installed. Issues of upgrading the transmitter and/or receiver arise as losses mount up.
  • The inventors have analysed acceSS7 network monitoring systems (the analysis being unpublished at the present filing date). This analysis shows that the reasons for lack of availability of the system can be broken down into three broad categories: unplanned outages, such as software defects; planned outages, such as software and hardware upgrades; and hardware failures. Further analysis shows that the majority of operational hours lost are caused by planned and unplanned maintenance, while hardware failures have a relatively minor effect. Increasing the redundancy of disk drives, power supplies and the like, although psychologically comforting, can do relatively little to improve system availability. The greatest scope for reducing operational hours lost, and hence increasing availability, is in the category of planned outages.
  • In order to implement a reliable monitoring system it would therefore be advantageous to have an architecture with redundancy, allowing for spare probe units, that is tolerant of both probe failure and probe reconfiguration and that also provides software redundancy.
  • Monitoring equipment designed for this purpose does not currently exist. Service providers may therefore use stand-alone protocol analysers which are tools really intended for the network commissioning stage. These usually terminate the fibre bearer, in place of the product being installed, or they plug into a specific test port on the product under test. Specific test software is then needed for each product. Manufacturers have alternatively built diagnostic capability into the network equipment itself, but each perceives the problems differently, leading to a lack of uniformity, and actual monitoring problems, as opposed to perceived problems, may not be addressed.
  • Further considerations include the physical environment needed to house such processing architecture. Such a hardware platform should be as flexible as possible to allow for changes in telecommunications technology and utilise standard building blocks to ensure cross platform compatibility. For example, there exist standards in the USA, as set out by the American National Standards Institute (ANSI) and Bellcore, which differ from those of Europe as set by the European Telecommunications Standards Institute (ETSI). Versions of SS7 may also vary from country to country, owing to the flexibility of the standard, although the ITU standard is generally used at international gateways. The USA Bellcore Network Equipment-Building System (NEBS) is of particular relevance to rack-mounted telecommunications equipment as it provides design standards for engineering construction and should be taken into account when designing network monitoring equipment. Such standards impose limitations such as connectivity and physical dimensions upon equipment and, consequently, on cooling requirements and aisle spacing of network rack equipment.
  • It is known that standard processing modules conforming for example to the cPCI standard are suitable for use in telecommunication applications. The further standard H.110 provides a bus for multiplexing baseband telephony signals in the same backplane as the cPCI bus. Even with Intel Pentium™ or similar processors, however, such arrangements do not currently accommodate the computing power needed for the capture and analysis of broadband packet data. Examples of protocols and data rates to be accommodated on the monitored bearers in future equipment are DS3 (44 Mbit/s), OC3 (155 Mbit/s), OC12 (622 Mbit/s) and OC48 (2.4 Gbit/s). Aside from the volume of data to be handled, conventional chassis for housing such modules also fail to support probe architectures of the type currently desired, both in terms of processing capability and to the extent that their dimensions do not suit the layout of telecommunication equipment rooms, such as may be designed to NEBS, that would allow them to co-reside with network equipment.
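  • The scale of the data volumes implied by the line rates just quoted can be gauged with a short calculation (Python, illustrative only; nominal rates are used and framing overhead is ignored for simplicity).

      # Sustained data volume per monitored bearer at nominal line rate.
      RATES_MBIT_S = {"DS3": 44, "OC3": 155, "OC12": 622, "OC48": 2400}
      ATM_CELL_BYTES = 53

      for name, mbit_s in RATES_MBIT_S.items():
          bytes_per_s = mbit_s * 1e6 / 8
          cells_per_s = bytes_per_s / ATM_CELL_BYTES
          print(f"{name}: {bytes_per_s / 1e6:.0f} Mbyte/s, "
                f"~{cells_per_s / 1e6:.2f} million ATM cells/s")
      # e.g. OC48: ~300 Mbyte/s, roughly 5.7 million ATM cells per second,
      # well beyond what a single general-purpose processor card can
      # capture and analyse on its own.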
  • For example, the typical general purpose chassis provides a rack-mounted enclosure in which a backplane supports and interconnects a number of cPCI cards, including a processor card and peripheral cards, to form a functional system. The cards are generally oriented vertically, with power supply (PSU) modules located above or below. Fans force air through the enclosure from bottom to top for cooling the modules. A peripheral card may have input and output (I/O) connections on its front panel. Alternatively, I/O connections may be arranged at the rear of the enclosure, using a special “transition card”. Examples of rack widths in common use are 19 inch (483 mm) and 23 inch (584 mm). The siting of racks in telecommunications equipment rooms implies that the enclosure depth should be little more than 12 inches (305 mm). However, cPCI and VME standard processor cards and compatible peripheral cards are already 205 mm deep (including mountings), and the conventional interface card mounted behind the backplane adds another 130 mm. Moreover, although parts of the connector pin-outs for cPCI products are standardised, different vendors use the other connectors differently for management bus signals and for LAN connections. Dedicated interconnect must be adapted to these variations, and designs will often assume that cards from only a single vendor are used.
  • In a first aspect the invention provides a rack-mountable enclosure comprising a housing and interconnection backplane for the mounting and interconnection of a plurality of card-shaped processing modules and at least one interface module, the interface module being arranged to provide a plurality of external connectors and to transport signals via the backplane between each external connection and an individual processing module, wherein:
      • said backplane provides locations for said processing modules to lie across a major portion of the backplane area facing a front side of the enclosure;
      • said backplane provides a location for said interface module over a minor portion of the backplane area facing a rear side of the enclosure, so as to provide said external connectors at the rear of the enclosure; and
      • a power supply module for powering the modules within the enclosure is positioned over another portion of the backplane area, on the same side as the interface module.
  • This arrangement allows a compact housing to contain several processing modules and to receive a corresponding number of external connections, in a more compact and functionally dense manner than known instrument chassis designs. In particular, the location of the power supply module behind the backplane saves height and/or width in the rack.
  • It will be understood that “front” and “rear” are used for convenience, and their meanings can be reversed. One particular benefit of the specified arrangement is that all external connectors (and hence the associated cabling) can be located on one side of the enclosure, allowing consistent access for all cables at the rear in the crowded equipment rooms common to telecommunication and other installations. Using cPCI standard processor and peripheral cards, the depth of the enclosure can be kept within or close to 12 inches (305 mm), no greater than the surrounding telecommunication equipment.
  • The enclosure may be constructed so that the processor modules lie generally horizontally when the enclosure is rack mounted. Air paths may be defined through the enclosure so as to pass from end to end thereof, along and between the processor modules and, if necessary, the power supply and interface modules. Fans may be included, optionally in a redundant configuration, to ensure adequate air flow to cool the various components of the enclosure.
  • The external connectors may provide inputs, outputs or both. In a telecommunications network probe application, the transport of data in the backplane will generally be inward, from the external connectors to the processing modules. In particular, external input connectors may be provided by the interface module for broadband telecommunications signals, with high bandwidth interconnections provided in the backplane. In principle, the backplane could include optical interconnects. With present technology, however, any necessary optical to electrical conversion will more likely be included in the interface module. In other applications, for example process control or computer telephony, the transport may be in both directions, or outwards only. The transport via the backplane may be in essentially the same format in which it arrives. Alternatively, the interface module may change the format, for example to multiplex several of the external signals onto a single pair of conductors in the backplane. The enclosure and modules will find particular application wherever a large quantity of data needs to be processed at speed, and reduced by filtering and aggregation to provide information for use elsewhere.
  • For flexibility and particularly for redundancy in fault and maintenance situations, the enclosure may provide a location for at least one switching module, whereby routing of signals between the external connectors and individual processing modules can be varied. The switching module may in particular comprise a cross-point switch, in accordance with another aspect of the invention, set forth in more detail elsewhere. It is assumed in that case that the processing modules are “hot-swappable”, so that operation of other modules is unaffected by module replacement. The switching module may be operable to route signals between one external connector and a plurality of processing modules. This allows increased processing capacity to be provided for each external connector, whether this is used for redundancy or merely to add processing functionality.
  • For additional redundancy in larger systems, the switching module and interface module may provide for re-routing one of said signals from an external input connector to an additional output connector, to allow processing in another enclosure. The number of external input connectors may exceed the capacity of processing modules that can be accommodated, or may match it.
  • The backplane may separately provide local bus interconnections for communication between the processing modules. Said local bus interconnections may include a processor-peripheral parallel bus, for example cPCI. The processing module locations may be subdivided into groups, each group receiving a set of separately pluggable modules which together co-operate for processing of a given external signal. The backplane may in particular provide a plurality of independent local buses, each for communication between the modules of one group. The groups may each include a first processor module having specific capability for a type of input signal (such as IP packet data) to be analysed, and a second processor module of generic type for receiving partially processed data from the first processor module, and for further processing and reducing said data for onward communication.
  • The first and second processing modules can be regarded as packet and probe processor modules respectively, each such pair forming a self-contained probe unit. It will be understood that each probe processor card may be served by more than one packet processor card, and references to pairs should not be construed as excluding the presence of a further packet processor module in any group.
  • In the specific embodiments disclosed herein, two separate interface modules are provided at the rear side of the backplane. A first interface module, being the one referred to above, is for the signals to be processed (which broadly could mean input signals to be analysed or output signals being generated). A second interface module is provided for communication for control and management purposes, such as the onward communication of the processing results via LAN. These modules could of course be combined in one physical module, or further sub-divided, according to design requirements.
  • The external outputs may be connections to a computer Local Area Network (LAN), which can also provide for remote control and configuration of the processing modules. For redundancy of operation, the LAN connections in the backplane can be unique to each module, and can further be duplicated for each module. Alternatively, all modules can communicate via a common LAN. The backplane may provide a dedicated location for a management module for selective routing of the LAN or other output communications from the external connectors to the processing modules.
  • The backplane may further provide a communication bus connecting all modules, for management functions including for example power and cooling management. Said interconnections may for example include an I2C or SMB bus carrying standard protocols. For improved redundancy, separate buses may be provided for each sub-system.
  • Combining the above features, according to a particular embodiment of the invention in its first aspect, the backplane may provide:
      • a plurality of pairs of processing module locations, each pair comprising adjacent first and second processing module locations;
      • a plurality of independent communication buses each extending between the first processing module location and second processing module location of a respective one of said pairs;
      • a plurality of independent interconnections each for bringing a different external input signal from said interface module to a respective one of said first processing module locations;
      • one or a plurality of independent interconnections for bringing communication signals from said second processing module locations to a second interface module.
  • The enclosure and backplane may further provide a location for a communication and management module to provide one or more of the following functions:
      • routing of processing module communication and management signals;
      • communication (e.g. LAN) switching to route communications from the processing modules to the outside world with sufficient redundancy and bandwidth;
      • “magic packet” handling, to allow remote resetting of the modules within the enclosure; and
      • environmental control, controlling fan speed in response to operating temperatures sensed on each module (a simple control loop of this kind is sketched below this list).
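  • A minimal sketch of the environmental control function is given below (Python, for illustration only). The helpers read_module_temperatures() and set_fan_speed(), and the threshold values, are assumptions made for the purposes of the example rather than features of the management module itself; in practice temperatures would be gathered over the I2C/SMB management bus described elsewhere.

      import time

      FAN_MIN, FAN_MAX = 40, 100      # fan drive, percent (illustrative)
      T_LOW, T_HIGH = 35.0, 60.0      # degrees C (illustrative thresholds)

      def fan_control_loop(read_module_temperatures, set_fan_speed, period_s=5):
          # read_module_temperatures() -> {slot: degrees C} for occupied slots
          # set_fan_speed(percent)     -> drives the fan assembly
          while True:
              temps = read_module_temperatures()
              hottest = max(temps.values(), default=T_LOW)
              # Linear ramp between the two thresholds, clamped to the limits.
              frac = (hottest - T_LOW) / (T_HIGH - T_LOW)
              frac = min(max(frac, 0.0), 1.0)
              set_fan_speed(FAN_MIN + frac * (FAN_MAX - FAN_MIN))
              time.sleep(period_s)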
  • Alternatively, the first aspect of the invention provides a rack-mountable enclosure comprising a housing, a power supply module, a fan assembly and an interconnection backplane for the mounting and interconnection of a plurality of card-shaped processing modules, wherein the processing modules in use are arranged to lie generally horizontally in front of the backplane and generally parallel with one another, the power supply module is located behind the backplane, and the fan assembly is located to left or right of the processing modules (in use, as viewed from the front) so as to provide a generally horizontal airflow between them.
  • A shared interface module or modules for providing external connections to the backplane and hence to all of the processing modules may also be located behind the backplane.
  • It is noted at this point that the cPCI standard defines a number of physical connectors to be present on the backplane, but only two of these (J1, J2) are specified as to their pin functions. Although the second processing modules mentioned above are generic processor cards based for example on Pentium (™ of Intel Corp.) microprocessors, different card vendors use the remaining connectors differently for communication and management signals such as SMB and LAN connections.
  • According to a second aspect of the invention a multi-processor equipment enclosure provides a housing and a backplane providing locations for a plurality of processing modules, and further providing a plurality of locations for a configuration module corresponding to respective processing module locations, each configuration module adapting the routing of communication and management signals via the backplane, in accordance with the vendor-specific implementation of the processing module.
  • The configuration module locations may be on the backplane, or on another card connected to the backplane. In the preferred embodiment, a communication and management module is provided at a specific location, and the configuration module locations are provided on the management module.
  • In an alternative solution according to the second aspect of the invention, a multi-processor equipment enclosure provides a housing and a backplane providing interconnect for a plurality of processing modules and a management module, the backplane interconnect including generic portions standardised over a range of processing modules and other portions specific to different processing modules within said range, wherein said management module is arranged to sense automatically the specific type of processing module using protocols implemented by the modules via connections in the generic portion of the interconnect, and to route communication and management signals via the backplane, in accordance with the specific implementation of each processing module.
  • The type sensing protocols may for example be implemented via geographic address lines in the standardised portion of a compact PCI backplane.
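  • One possible form of such type sensing and routing is sketched below (Python, for illustration only). The vendor codes, pin locations and helper functions read_geographic_address() and set_mux() are invented for the example and do not correspond to any particular SBC product.

      # Hypothetical routing profiles: where each vendor's SBC presents its
      # LAN and SMB signals on the non-standardised cPCI connectors.
      SBC_PROFILES = {
          0x01: {"LAN_A": "J3-18", "LAN_B": "J3-19", "SMB": "J3-22"},
          0x02: {"LAN_A": "J5-03", "LAN_B": "J5-04", "SMB": "J5-07"},
      }

      def configure_slot(slot, read_geographic_address, set_mux):
          vendor_code = read_geographic_address(slot)   # sensed via GA lines
          profile = SBC_PROFILES.get(vendor_code)
          if profile is None:
              raise ValueError(f"slot {slot}: unrecognised SBC type {vendor_code:#x}")
          for signal, location in profile.items():
              set_mux(slot, signal, location)           # program routing switches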
  • It is noted that known chassis designs and backplanes do not provide for several channels of signals to be monitored by independent processing sub-systems within the same chassis, especially when each monitoring unit processor in fact requires more than one card slot for its implementation. In particular, for monitoring of broadband communication signals in IP or similar protocols, it is presently necessary to provide a first processing module dedicated to a first stage of data acquisition and processing, where the sheer quantity of broadband data would defeat a general-purpose processor card, and a second processing module of generic type, for further processing and onward reporting of the data processed by the first processing module.
  • According to a third aspect of the invention a computer equipment chassis provides a housing and backplane providing locations for at least four independent processing sub-systems, each processing sub-system comprising first and second processing modules separately mounted on the backplane at adjacent locations, wherein the backplane provides at least four independent CPU-peripheral interfaces, each extending only between the adjacent locations of said first and second processing modules, the first processing module operating as a peripheral and the second processing module operating as host.
  • The enclosure and backplane may further provide a location for a multi-channel interface module providing external connections for all of the processing sub-systems, the backplane routing signals from the interface module to the appropriate processing sub-systems. The enclosure and backplane may further provide a location for a switching module, such that each external connection can be routed and re-routed to different processing sub-systems.
  • The backplane may further provide interconnections between the channel processors for communication externally of the enclosure. The enclosure and backplane may further provide a management module location for routing of said communication from the channel processors to external connectors. Said interconnections may form part of a computer local area network (LAN). The enclosure and backplane may in fact provide multiple redundant network connections in order that said onward communication can continue in the event of a network outage.
  • The inventors have recognised that, particularly because passive optical splitters have extremely high reliability, a probe architecture which provides for replication and redundancy in the monitoring system after the splitter would allow all the desired functionality and reliability to be achieved, without multiple physical taps in the network bearer, and hence without excessive power loss and degradation in the system being monitored.
  • In a fourth aspect the invention provides a multi-channel network monitoring apparatus for the monitoring of traffic in a broadband telecommunications network, the apparatus comprising:
      • a plurality of external input connectors for receipt of network signals to be monitored;
      • a plurality of channel processors mounted within a chassis, each for receiving and processing a respective incoming signal to produce monitoring results for onward communication, the incoming network signals individually or in groups forming channels for the purposes of the monitoring apparatus, each channel processor being arranged to operate independently of the others and being replaceable without interrupting their operation;
      • one or more external communication connectors for onward communication of said monitoring results from the channel processors; and
      • a switching unit;
        wherein the external input connectors are connected to the channel processors via said switching unit, the switching unit in use routing each incoming signal to a selected channel processor and being operable to re-route an incoming channel to another selected channel processor in the event of processor outage.
  • The switching unit may further be operable to connect the same incoming channel simultaneously to more than one channel processor. The same bearer can therefore be monitored in different ways, without the need for another physical tap.
  • The channel processors may be in the form of modules mounted and interconnected on a common backplane. The switching unit may comprise a further module mounted on said backplane. The external input connectors may be provided by a common interface module separate from or integrated with the switching unit.
  • The external communication connectors may be connected to the channel processors via a communication management module and via the backplane. The external communication connectors and communication management module may optionally provide for said onward communication to be implemented over plural independent networks for redundancy. Redundancy of the networks may extend to each channel processor itself providing two or more network connections. In the particular embodiments described, the backplane provides an independent connection between each respective channel processor and the communication management module. This provides better redundancy than shared network communication.
  • The channel processors may each comprise a self-contained sub-system of host and peripheral processing modules interconnected via a CPU-peripheral interface in the backplane, the backplane providing a separate peripheral interface for each channel processor. The interconnection may in particular comprise a parallel peripheral interface such as cPCI.
  • The backplane and card-like modules may be provided in a single rack-mount chassis, which may also house a power supply and cooling fans. These may be arranged internally in accordance with the first aspect of the invention, as set forth.
  • The switching unit may be operable to route any incoming signal to any of the channel processors. The switching unit may further provide for routing any of the incoming channels to a further external connector, for processing by a channel processor separate from the chassis.
  • The invention yet further provides a network monitoring system wherein a first group of multi-channel network monitoring apparatuses according to the fourth aspect of the invention as set forth above are connected to receive a plurality of incoming signals, wherein the switching unit of each apparatus in the first group provides for routing any of its incoming channels to a further external connector, the system further comprising at least one further multi-channel network monitoring apparatus according to the fourth aspect of the invention as set forth above, connected to receive incoming channels from said further external connectors of the first group of apparatuses, the further apparatus thereby providing back-up in the event of a channel processor failure or replacement within the first group of apparatuses.
  • The invention yet further provides a network monitoring system wherein a plurality of multi-channel network monitoring apparatuses according to the fourth aspect of the invention as set forth above are connected to a larger plurality of incoming channels via multiplexing means, the total number of channel processors within the monitoring apparatuses being greater than the number of incoming channels at any given time, such that any incoming channel can be routed by the multiplexing means and appropriate switching unit to an idle channel processor of one of the monitoring apparatuses. This allows the system to continue monitoring all channels in the event of failure or replacement of any channel processor.
  • The number of channel processors may be greater than the number of incoming channels by at least the number of channel processors in each monitoring apparatus. This allows the system to continue monitoring all channels in the event of failure or replacement of one complete apparatus.
  • The multiplexing means may be formed by optical switches, while the switching units within each monitoring apparatus operate on signals after conversion to electrical form. Alternatively, the multiplexing means may include electronic switches, while inputs and outputs are converted to and from optical form for ease of interconnection between separate enclosures. In principle, the conversion from optical to electrical form could happen at any point, from the network tap point to the processing module itself.
  • The above systems will typically further comprise one or more multi-channel optical power splitters, for tapping into active optical communications bearers to obtain the said incoming signals for the monitoring apparatuses. The redundancy and adaptability within the monitoring system reduces the need for multiple monitoring taps, preserving the integrity of the network.
  • In a fifth aspect the invention provides a multi-channel replicating device for broadband optical signals, the device comprising one or more modules having:
      • a first plurality of input connectors for receiving broadband optical signals;
      • a larger plurality of output connectors for broadband optical signals;
      • means for replicating each received broadband optical signal to a plurality of said output connectors without digital processing.
  • Such a device allows multiple monitoring applications to be performed on a network signal with only one optical tap being inserted in the physical bearer of the operating network. Redundancy in the monitoring equipment can be provided, also with the single bearer tap. Changes in the configuration of the monitoring equipment can be implemented without disturbing the bearer operation, or even the other monitoring applications.
  • The replicating means may in particular involve components for optical to electrical conversion and back to optical again.
  • The replicating device may further comprise one or more additional optical outputs, and a selector device for selecting which of the input signals is replicated at said additional output. This selection can be useful in particular in response to fault situations and planned outages within the network monitoring equipment.
  • The invention in the fifth aspect further provides a telecommunications network monitoring system comprising:
      • an optical splitting device, providing a tap signal for monitoring signals carried by a bearer in a broadband telecommunications network;
      • a plurality of network monitoring units, each for receiving and analysing signals from a broadband optical bearer; and
      • a signal replicating device according to the fifth aspect of the invention as set forth above, the signal replicating device being connected so as to receive said optical tap signal, and to provide replicas of said optical tap signal to inputs of two or more of said network monitoring units.
    BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
  • FIG. 1 shows a model of a typical ATM network.
  • FIG. 2 shows a data collection and packet processing apparatus connected to a physical telecommunications network via a LAN/WAN interconnect.
  • FIG. 3 shows the basic functional architecture of a novel network probe apparatus, as featured in FIG. 2.
  • FIG. 4 shows a simple network monitoring system which can be implemented using the apparatus of the type shown in FIG. 3.
  • FIG. 5 shows another application of the apparatus of FIG. 4 giving 3+1 redundancy.
  • FIG. 6 shows a larger redundant network monitoring system including a backup apparatus.
  • FIG. 7 shows an example of a modified probe apparatus permitting a “daisy chain” configuration to provide extra redundancy and/or processing power.
  • FIG. 8 shows an example of daisy chaining the probe chassis of FIG. 7 giving 8+1 redundancy.
  • FIG. 9 shows a further application of the probe apparatus giving added processing power per bearer.
  • FIG. 10 shows a second means of increasing processing power by linking more than one chassis together.
  • FIG. 11 shows a signal replicating device (referred to as a Broadband Bridging Isolator (BBI)) for use in a network monitoring system.
  • FIG. 12 shows a typical configuration of a network monitoring system using the BBI of FIG. 11 and several probe apparatuses.
  • FIGS. 13A and 13B illustrate a process of upgrading the processing power of a network monitoring system without interrupting operation.
  • FIG. 14 is a functional schematic diagram of a generalised network probe apparatus showing the functional relationships between the major modules of the apparatus.
  • FIG. 15A shows the general physical layout of modules in a specific network probe apparatus implemented in a novel chassis and backplane.
  • FIG. 15B is a front view of the chassis and backplane of FIG. 15A with all modules removed, showing the general layout of connectors and interconnections in the backplane.
  • FIG. 15C is a rear view of the chassis and backplane of FIG. 15A with all modules removed, showing the general layout of connectors and interconnections in the backplane, and showing in cut-away form the location of a power supply module.
  • FIG. 16 shows in block schematic form the interconnections between modules in the apparatus of FIGS. 15A-C.
  • FIG. 17 is a block diagram showing in more detail a cross-point switch module in the apparatus of FIG. 16, and its interconnections with other modules.
  • FIG. 18 is a block diagram showing in more detail a packet processor module in the apparatus of FIG. 16, and its interconnections with other modules.
  • FIG. 19 is a block diagram showing in more detail a combined LAN and chassis management card in the apparatus of FIG. 16.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • Background
  • FIG. 1 shows a model of a telecommunication network 10 based on asynchronous transfer mode (ATM) bearers. Possible monitoring points on various bearers in the network are shown at 20 and elsewhere. Each bearer is generally an optical fibre carrying packetised data with both routing information and data “payload” travelling in the same data stream. Here “bearer” is used to mean the physical media that carries the data and is distinct from a “link”, which in this context is defined to mean a logical stream of data. Many links of data may be multiplexed onto a single bearer. These definitions are provided for consistency in the present description, however, and should not be taken to imply any limitation of the applicability of the techniques disclosed, or on the scope of the invention defined in the appended claims. Those skilled in the art will sometimes use the term “channel” to refer to a link (as defined), or “channel” may be used to refer to one of a number of virtual channels being carried over one link, which comprises the logical connection between two subscribers, or between a subscriber and a service provider. Note that such “channels” within the larger telecommunications network should not be confused with the monitoring channels within the network probe apparatus of the embodiments to be described hereinafter.
  • The payload may comprise voice traffic and/or other data. Different protocols may be catered for, with examples showing connections to Frame Relay Gateway, ATM and DSLAM equipment being illustrated. User-Network traffic 22 and Network-Network traffic 24 are shown here as dashed lines and solid lines respectively.
  • In FIG. 2 various elements 25-60 of a data collection and packet processing system distributed at different sites are provided for monitoring bearers L1-L8 etc. of a telecommunications network. The bearers in the examples herein operate in pairs L1, L2 etc. for bi-directional traffic, but this is not universal, nor is it essential to the invention. Each pair is conveniently monitored by a separate probe unit 25, by means of optical splitters S1, S2 etc. inserted in the physical bearers. For example, one probe unit 25, which monitors bearers L1 and L2, is connected to a local area network (LAN) 60, along with other units at the same site. The probe unit 25 on an ATM/IP network must examine a vast quantity of data, and can be programmed to filter the data by a Virtual Channel (VC) as a means of reducing the onboard processing load. Filtering by IP address can be used to the same effect in the case of IP over SDH and other such optical networks. Similar techniques can be used for other protocols. Site processors 40 collate and aggregate the large quantity of information gathered by the probe units, and pass the results via a Wide Area Network (WAN) 30 to a central site 65. Here this information may be used for network planning and operations. It may alternatively be used for billing according to the volume of monitored traffic per subscriber or service provider or other applications.
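  • A minimal sketch of this filtering stage is given below (Python, for illustration only; the class and method names are assumptions, not part of the apparatus). Only traffic on virtual channels of interest is passed on for analysis, which is how the onboard processing load is reduced; filtering by IP address would follow the same pattern.

      class VCFilter:
          """Pass only cells on configured (VPI, VCI) channels to the SBC."""

          def __init__(self, channels_of_interest):
              self.channels = set(channels_of_interest)   # {(vpi, vci), ...}
              self.passed = 0
              self.dropped = 0

          def offer(self, vpi, vci, cell):
              if (vpi, vci) in self.channels:
                  self.passed += 1
                  return cell        # forward to the probe processor (SBC)
              self.dropped += 1
              return None            # discard to reduce the SBC's load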
  • The term “probe unit” is used herein to refer to a functionally self-contained sub-system designed to carry out the required analysis for a bearer, or for a pair or larger group of bearers. Each probe unit may include separate modules to carry out such operations as filtering the packets of interest and then interpreting the actual packets or performing other data analysis.
  • In accordance with current trends, it is assumed in this description that the links to be monitored carry Internet Protocol (IP) traffic over passive optical networks (PONs) comprising optical fibre bearers. Connection to such a network can only really be achieved through use of passive optical splitters S1, S2 etc. Passive splitters have advantages such as high reliability, comparatively small dimensions, various connection configurations and the fact that no power or element management resources are required. An optical splitter in such a situation works by paring off a percentage of the optical power in a bearer to a test port, the percentage being variable according to hardware specifications.
  • A number of issues are raised when insertion of such a device is considered. For example, sufficient receiver power margin must remain both at the test device and at the through port to the rest of the network. It becomes necessary to consider the most economic method of monitoring the bearer in the presence of a reduced test port power budget, while limiting the optical power needed by the monitoring probe, and whether the network would have to be re-configured as a result of inserting the device.
  • Consequently, inserting a power splitter to monitor a network frequently requires an increase in launch power. This entails upgrading the transmit laser assembly and installing an optical attenuator where needed to reduce optical power into the through path to normal levels. Such an upgrade would ideally only be performed once.
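  • The power budget considerations can be illustrated with a short calculation (Python, for illustration only). All figures below are example values chosen for the sketch, not specifications of any bearer, splitter or probe.

      launch_dbm        = -2.0    # transmitter launch power
      span_loss_db      = 6.0     # existing fibre and connector loss
      rx_sensitivity    = -28.0   # network receiver sensitivity
      tap_through_db    = 1.0     # 90/10 splitter loss on the through path
      tap_port_db       = 10.5    # loss to the 10% test port
      probe_sensitivity = -20.0   # monitoring probe receiver sensitivity

      through_margin = launch_dbm - span_loss_db - tap_through_db - rx_sensitivity
      test_margin    = launch_dbm - tap_port_db - probe_sensitivity

      print(f"margin at network receiver after tap: {through_margin:.1f} dB")
      print(f"margin at probe test port:            {test_margin:.1f} dB")
      # A negative margin on either path is what forces the launch power
      # upgrade (and through-path attenuator) discussed above.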
  • For these reasons it is not desirable to probe, for example, an ATM network more than once on any given bearer. Nevertheless, it would be desirable to have the ability for multiple probing devices to be connected to the same bearer, that is, have multiple outputs from the optical interface. The different probes may be monitoring different parameters. In addition, however, any network monitoring system must offer a high degree of availability, and multiple probes are desirable in the interests of redundancy. The probe apparatuses and ancillary equipment described below allow the implementation of such a network monitoring system which can be maintained and expanded with simple procedures, with minimal disruption to the network itself and to the monitoring applications.
  • Network Probe System—General Architecture
  • FIG. 3 shows the basic functional architecture of a multi-channel optical fibre telecommunications probe apparatus 50 combining several individual probe units into a more flexible system than has hitherto been available. The network monitoring apparatus shown receives N bearer signals 70 such as may be available from N optical fibre splitters. These enter a cross-point switch 80 capable of routing each signal to any of M individual and independently-replaceable probe units 90. Each probe unit corresponds in functionality broadly with the unit 25 shown in FIG. 2. An additional external output 85 from the cross-point switch 80 is routed to an external connector. This brings important benefits, as will be described below.
  • The cross-point switch 80 and interconnections shown in FIG. 3 may be implemented using different technologies, for example using passive optical or optoelectronic cross-points. High speed networks, for example OC48, require electrical path lengths to be as short as possible. An optical switch would therefore be desirable, deferring as much as possible the conversion to electrical form. However, optical switching technology is not yet fully mature. Therefore the present proposal is to have an electrical implementation of the cross-point switch 80, with the signals converted from optical to electrical at the point of entry into the probe apparatus 50. The scale of an optoelectronic installation will be limited by the complexity of the cross-point switch and the size of the probe unit. The choice of interconnect technology (for example between electrical and optical) is generally dependent on the signal bandwidth-distance product. For example, in the case of high bandwidth/speed standards such as OC3, OC12, or OC48, inter-rack connections may be best implemented using optical technology.
  • Detailed implementation of the probe apparatus in a specific embodiment will be described in more detail with reference to FIGS. 14 to 19. As part of this, a novel chassis arrangement for multi-channel processing products is described, with reference to FIGS. 15A-15C, which may find application in fields beyond telecommunications monitoring. First, however, applications of the multi-channel probe architecture will be described, with reference to FIGS. 4 to 13.
  • FIG. 4 shows a simple monitoring application which can be implemented using the apparatus of the type shown in FIG. 3. The cross-point switch 80 is integrated into the probe chassis 100 together with up to four independently operating probe units. In this implementation of the architecture each probe unit (90 in FIG. 3) is formed by packet processor modules 150 and a single board computer (SBC) 160, as previously described. Two packet processors 150 are provided in each probe unit 90. Each packet processor can receive and process the signal of one half-duplex bearer. The SBC 160 in each probe unit has the capacity to analyse and report the data collected by the two packet processors. Other modules included in the chassis provide LAN interconnections for onward reporting of results, probe management, power supply, and cooling (these modules are not shown in FIGS. 4 to 13B).
  • In this application example a single, fully loaded chassis 100 is used with no redundancy to monitor eight single (four duplex) bearers connected at 140 to the external optical inputs of the apparatus (inputs 70-1 to 70-N in FIG. 3). The cross-point switch external outputs 85 are shown but not used in this configuration. Applications of these outputs are explained for example in the description of FIGS. 6, 8, 10 and 12 below.
  • FIG. 5 shows an alternative application of the apparatus giving 3+1 redundancy. Here three duplex bearer signals are applied to external inputs 140 of the probe apparatus chassis 100, while the fourth pair of inputs 142 is unused. Within the chassis there are thus three primary probe units 90 plus a fourth, spare probe unit 120. The cross-point switch 80 can be used to switch any of the other bearers to this spare probe unit in the event of a failure in another probe unit. Already by integrating the cross-point switch and several probe units in a single chassis, a scaleable packet processor redundancy down to 1:1 is achieved without the overhead associated with an external cross-point switch. Since electrical failures cause only a very minor proportion of outages, redundancy within the chassis is valuable, with the added bonus that complex wiring outside the chassis may be avoided. One or more processors within each chassis can be spare at a given time, and switched instantaneously and/or remotely if one of the other probe units becomes inoperative.
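  • The routing behaviour of the 3+1 arrangement can be modelled with the following sketch (Python, illustrative only; the class and method names are assumptions, not those of the apparatus). Three bearer inputs are routed to probe units 0-2, unit 3 is held as a spare, and a failure is handled by re-routing the affected bearer to the spare.

      class CrossPointRouting:
          def __init__(self, spare_unit):
              self.routes = {}          # bearer input index -> probe unit index
              self.spare = spare_unit

          def connect(self, bearer_input, probe_unit):
              self.routes[bearer_input] = probe_unit

          def fail_over(self, failed_unit):
              # Re-route whichever bearer the failed probe unit was serving.
              for bearer_input, unit in self.routes.items():
                  if unit == failed_unit:
                      self.routes[bearer_input] = self.spare
                      return bearer_input
              return None

      switch = CrossPointRouting(spare_unit=3)
      for bearer in range(3):
          switch.connect(bearer, bearer)    # probe units 0-2 active, 3 spare
      switch.fail_over(failed_unit=1)       # bearer 1 now served by unit 3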
  • FIG. 6 shows a larger redundant system comprising four primary probe apparatuses (chassis 100-1 to 100-4), and a backup chassis 130 which operates in the event of a failure in one of the primary chassis. In this example of a large redundant system there are 16 duplex bearers being monitored. Each external input pair of the backup chassis is connected to receive a duplex bearer signal from the external optical output 85 of a respective one of the primary apparatuses 100-1 to 100-4. By this arrangement, in the event of a single probe unit failure in one of the primary apparatuses, a spare probe unit within the backup chassis can take over the out-of-service unit's function. Assuming all inputs and all of the primary probe units are operational in normal circumstances, we may say that 4:1 redundancy is provided.
  • Recognising that in this embodiment only one optical interface is connected to the bearer under test, the chassis containing the optical interfaces can if desired have redundant communications and/or power supply units (PSUs) and adopt a “hot swap” strategy to permit rapid replacement of any hardware failures. “Hot swap” in this context means the facility to unplug one module of a probe unit within the apparatus and replace it with another without interrupting the operation or functionality of the other probe units. Higher levels of protection can be provided on top of this, if desired, as described below with reference to FIGS. 15A-C and 16.
  • FIG. 7 shows a modified probe apparatus which provides an additional optical input 170 to the cross-point switch 80. In other words, the cross-point switch 80 has inputs for more bearer signals than can be monitored by the probe units within its chassis. At the same time, with the external optical outputs 85, the cross-point switch 80 has outputs for more signals than can be monitored by the probe units within the chassis. These additional inputs 170 and outputs 85 can be used to connect a number of probe chassis together in a “daisy chain”, to provide extra redundancy and/or processing power. By default, in the present embodiment, copies of the bearer signals received at daisy chain inputs 170 are routed to the external outputs 85. Any other routing can be commanded, however, either from within the apparatus or from outside via the LAN (not shown).
  • FIG. 8 shows an example of daisy chaining the probe chassis to give 8:1 redundancy. The four primary probe chassis 100-1 to 100-4 are connected in pairs (100-1 & 100-2 and 100-3 & 100-4). The external outputs 85 on the first chassis of each pair are connected to the daisy chain inputs 170 on the second chassis. The external outputs 85 of the second chassis are connected to inputs of a spare or backup chassis 130 as before. These connections can carry the signal through to the spare chassis when there has been a failure in a probe unit in either the first or second chassis of each pair. Unlike the arrangement of FIG. 6, however, it will be seen that the backup chassis 130 still has two spare pairs of external inputs. Accordingly, the system could be extended to accommodate a further four chassis (up to sixteen further probe units, and up to thirty-two further bearer signals), with the single backup chassis 130 providing some redundancy for all of them.
  • For applications that involve processor intensive tasks it may be desirable to increase the processing power available to monitor each bearer. This may be achieved by various different configurations, and the degree of redundancy can be varied at the same time to suit each application.
  • FIG. 9 illustrates how it is possible to increase the processing power available for any given bearer by reconfiguring the probe units. In this configuration only two duplex bearer signals 140 are connected to the chassis 100. Two inputs 142 are unused. Within the cross-point switch 80 each bearer signal is duplicated and routed to two probe units 90. This doubles the processing power available for each of the bearers 140. This may be for different applications (for example routine billing and fraud detection), or for more complex analysis on the same application. Each packet processor (150, FIG. 4) and SBC (160) will be programmed according to the application desired. In particular, each packet processor, while receiving and processing all the data carried by an associated bearer, will be programmed to filter the data and to pass on only those packets, cells, or header information which is needed by the SBC for a particular monitoring task. The ability to provide redundancy via the external outputs 85 still remains.
  • FIG. 10 illustrates a second method of increasing processing power, namely connecting more than one chassis together in a daisy chain or similar arrangement. Concerning the “daisy chain” inputs 170, FIG. 10 also illustrates how a similar effect can be achieved using the unmodified apparatus (FIG. 3), provided the apparatus is not monitoring its full complement of bearer signals. The external inputs 140 can thus be connected to the external outputs 85 of the previous chassis, instead of the special inputs 170.
  • In the configuration of FIG. 10, two chassis 100-1 and 100-2 are fully loaded with four probe units 90 each. External signals for all eight probe units are received at 140 from a single duplex bearer. The cross-point switch 80 is used to replicate these signals to every probe unit 90 within the chassis 100-1, and also to the external outputs 85 of the first chassis 100-1. These outputs in turn are connected to one pair of inputs 144 of the second chassis 100-2. Within the second chassis, the same signals are replicated again and applied to all four probe units, and (optionally) to the external outputs 85 of the second chassis 100-2.
  • Thus, all eight probe units are able to apply their processing power to the same pair of signals, without tapping into the bearer more than once. By adding further chassis in such a daisy chain, the arrangement is scaleable to practically as much processing power as is needed.
  • The examples given are by way of illustration only, showing how, using the chassis architecture described, it is possible to provide the user with the processing power needed and the redundancy to maintain operation of the system in the event of faults and planned outages. It will be appreciated that numerous different configurations are possible, besides those described.
  • For example, it is also possible to envisage a bidirectional daisy chain arrangement. Here, one output 85 of a first chassis might be connected to one input 170 of a second chassis, while the other output 85 is connected to an input 170 of a third chassis. This arrangement can be repeated if desired to form a bi-directional ring of apparatuses, forming a kind of “optical bus”.
  • The probe apparatus described above allow the system designer to achieve N+1 redundancy by using the cross-point switch 80 to internally re-route a bearer to a spare processor, or to another chassis. On the other hand, it will be recognised that some types of failure (e.g. in the chassis power supply) will disrupt operation of all of the processors in the chassis. It is possible to reduce such a risk by providing N+1 PSU redundancy, as described elsewhere herein.
  • Broadband Bridging Isolator
  • FIG. 11 shows an optional signal replicating device for use in conjunction with the probe apparatus described above, or other monitoring apparatus. This device will be referred to as a Broadband Bridging Isolator (BBI). The BBI can be scaled to different capacities, and can provide additional fault tolerance independently of the probe apparatuses described above. The basic unit comprises a signal replicator 175. For each unit, an (optical) input 176 is converted at 177 to an electrical signal, which is then replicated and converted at 179 etc. to produce a number of identical optical output signals at outputs 178-1 etc.
  • Also provided within BBI 172 are one or more standby selectors (multiplexers) 180 (one only shown). Each selector 180 receives replicas of the input signals and can select from these a desired one to be replicated at a selector optical output 182. An additional input 186 (shown in broken lines) may be provided which passes to the selector 180 without being replicated, to permit “daisy chain” connection.
  • In use, BBI 172 takes a single tap input 176 from a bearer being monitored and distributes this to multiple monitoring devices, for example probe apparatuses of the type shown in FIGS. 3 to 10. For reliability, the standby selector 180 allows any of the input signals to be switched to a standby chassis.
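  • The BBI's behaviour can be summarised in the following functional sketch (Python, illustrative only; the real device performs the replication in optoelectronic hardware with no digital processing, and the class and method names are not part of the device).

      class BroadbandBridgingIsolator:
          def __init__(self, num_inputs, fan_out):
              self.num_inputs = num_inputs
              self.fan_out = fan_out
              self.standby_selection = None     # input routed to standby output

          def select_standby(self, input_index):
              self.standby_selection = input_index

          def replicate(self, input_index, signal):
              """Return the replicated copies and, if selected, a standby copy."""
              copies = [signal] * self.fan_out
              standby = signal if self.standby_selection == input_index else None
              return copies, standby

      bbi = BroadbandBridgingIsolator(num_inputs=4, fan_out=4)
      bbi.select_standby(2)    # e.g. bearer input 2 diverted to the spare chassis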
  • The number of outputs that are duplicated from each input is not critical. A typical implementation may provide four, eight or sixteen replicators 175 in a relatively small rack mountable chassis, each having (for example) four outputs per input. Although the concepts here are described in terms of optical bearers, the same concepts could be applied to high speed electrical bearers (e.g. E3, DS3 and STM1e).
  • The reasons for distributing the signal could be for multiple applications, duplication for reliability, load sharing or a combination of all three. It is important that only one tap need be made in the operational bearer. As described in the introductory part of this specification, each optical tap reduces the strength of the optical signal reaching the receiver. In marginal conditions, adding a tap may require boosting the signal on the operational bearer. Network operators do not want to disrupt their operational networks unless they have to. The BBI allows different monitoring apparatuses for different applications to be connected, and removed and re-configured without affecting the operational bearer, hence the name “isolator”. The BBI can even be used to re-generate this signal by feeding one of the outputs back into the network, so that the BBI becomes part of the operational network.
  • The number of bearer signals that are switched through the standby selector 180 will depend on the user's requirements—this number corresponds effectively to “N” in the phrase “N+1 redundancy”. The number of standby selectors in each BBI is not critical. Adding more means that more bearers can be switched should there be a failure.
  • The BBI must have high reliability since, when operational in a monitoring environment, it is an essential component in the monitoring of data, providing the only bridging link between the signal bearers and the probe chassis. No digital processing of the bearer signal is performed in the BBI, which can thus be made entirely of the simplest and most reliable optoelectronic components. When technology permits, in terms of cost and reliability, there may be an “all-optical” solution, which avoids conversion to electrical form and back to optical. Presently, however, the state of the art favours the optoelectronic solution detailed here. The BBI can be powered from a redundant power supply to ensure continuous operation. The number of bearers handled on a single card can be kept small so that in the event of a failure the number of bearers impacted is small. The control of the standby switch can be by an external control processor.
  • FIG. 12 shows a system configuration using BBIs and two separate probe chassis 100-1 and 100-2 implementing separate monitoring applications. The two application chassis may be operated by different departments within the network operator's organisation. A third, spare probe chassis 130 is shared in a standby mode. This example uses two BBIs 172 to monitor a duplex bearer pair shown at L1, L2, and other bearers not shown. Splitters S1 and S2 respectively provide tap input signals from L1, L2 to the inputs 176 of the separate BBIs. Each BBI duplicates the signal at its input 176 to two outputs 178, in the manner described above with reference to FIG. 11. For improved fault tolerance, the two four-way BBIs 172 are used to handle the half-duplex bearers L1 and L2 separately. In other words, the two halves of the same duplex bearer are handled by different BBIs. Three further duplex bearers (L3-L8, say, not shown in the drawing) are connected to the remaining inputs of the BBIs 172 in a similar fashion.
  • Using the standby selector 180, any one of the bearers can be switched through to the standby chassis 130 in the event of a failure of a probe unit in one of the main probe chassis 100-1, 100-2. It will be appreciated that, if there is a failure of a complete probe chassis, then only one of the bearers can be switched through to the standby probe. In a larger system with, say, 16 duplex bearers, four main probe chassis and two standby chassis, the bearers distributed by each BBI can be shared around the probe chassis so that each probe chassis processes one bearer from each BBI. Then all four bearers can be switched to the standby probe in the event of a complete chassis failure, as illustrated in the sketch below.
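  • The effect of sharing each BBI's outputs across different probe chassis can be pictured with a short sketch. The following Python fragment is purely illustrative (the function names and the four-BBI/four-chassis "diagonal" assignment are assumptions based on the example above, not part of the patent); it shows that when every chassis carries at most one bearer from each BBI, a complete chassis failure requires only one switchover per BBI.

```python
# Illustrative sketch: bearer-to-chassis assignment in which each probe
# chassis handles at most one bearer from any given BBI, so that a complete
# chassis failure can be absorbed by switching one bearer per BBI to standby.

from collections import defaultdict

NUM_BBIS = 4          # assumed: four BBIs, each distributing four bearers
NUM_CHASSIS = 4       # assumed: four main probe chassis

# assignment[(bbi, output)] = chassis index; a simple "diagonal" spread
assignment = {
    (bbi, out): (bbi + out) % NUM_CHASSIS
    for bbi in range(NUM_BBIS)
    for out in range(NUM_CHASSIS)
}

def switchovers_needed(failed_chassis):
    """Return, per BBI, how many bearers must move to the standby chassis."""
    per_bbi = defaultdict(int)
    for (bbi, out), chassis in assignment.items():
        if chassis == failed_chassis:
            per_bbi[bbi] += 1
    return dict(per_bbi)

if __name__ == "__main__":
    for failed in range(NUM_CHASSIS):
        needs = switchovers_needed(failed)
        # With the diagonal spread, every BBI contributes exactly one bearer,
        # which a single standby selector per BBI can accommodate.
        assert all(count == 1 for count in needs.values())
        print(f"chassis {failed} fails -> {needs}")
```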
  • It will be seen that the BBI offers increased resilience for users particularly when they have multiple departments wanting to look at the same bearers. The size of the BBI used is not critical and practical considerations will influence the number of inputs and outputs. For example, the BBI could provide inputs for 16 duplex bearers, each being distributed to two or three outputs with four standby outputs. Where multiple standby circuits are used each will be capable of being independently switched to any of the inputs.
  • FIGS. 13A and 13B illustrate a process of upgrading the processing power of a network monitoring system without interrupting operation, using the facilities of the replicating devices (BBIs 172) and probe chassis described above. FIG. 13A shows an example of an "existing" system with one probe chassis 100-1. Four duplex bearer signals are applied to inputs 140 of the chassis. Via the internal cross-point switch 80, each bearer signal is routed to one probe unit 90. With a view to further upgrades and fault tolerance, a broadband bridging isolator (BBI) 172 is also provided. Each bearer signal is received from a tap in the actual bearer (not shown) at a BBI input 176. The same bearer signal is replicated at BBI outputs 178-1, 178-2 etc. The first set of outputs 178-1 are connected to the inputs 140 of the probe chassis. The second set of outputs 178-2 are not used in the initial configuration.
  • FIG. 13B shows an expanded system, which includes a second probe chassis 100-2 also loaded with four probe units 90. Consequently there are now provided two probe units per bearer, increasing the processing power available per bearer. It is a simple task to migrate from the original configuration in FIG. 13A to the new one shown in FIG. 13B:
      • Step 1—Install the extra chassis 100-2 with the probe units, establishing the appropriate power supply and LAN communications.
      • Step 2—Connect two of the duplicate BBI outputs 186 to inputs of the extra chassis 100-2. (All four could be connected for redundancy if desired.)
      • Step 3—Configure the new chassis 100-2 and probe units to monitor the two bearer signals in accordance with the desired applications.
      • Step 4—Re-configure the original chassis to cease monitoring the corresponding two bearer signals of the first set of outputs 178-1 (188 in FIG. 13B). (The processing capacity freed in the original chassis 100-1 can then be assigned expanded monitoring of the two duplex bearer signals which remain connected to the BBI outputs 178-1.)
      • Step 5—Remove the connections 188 no longer being used. (These connections could be left for redundancy if desired.)
  • In this example the processing power has been doubled from one probe unit per bearer to two probe units per bearer but it can be seen that such a scheme could be easily extended by connecting further chassis. At no point has the original monitoring capacity been lost, and at no point have the bearers themselves (not shown) been disrupted. Thus, for example, a module of one probe unit can be removed for upgrade while other units continue their own operations. If there is spare capacity, one of the other units can step in to provide the functionality of the unit being replaced. After Step 2, the entire first chassis 100-1 could be removed and replaced while the second chassis 100-2 steps in to perform its functions. Variations on this method are practically infinite, and can also be used for other types of migration, such as when increasing system reliability.
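  • The five-step migration lends itself to expression as a simple orchestration script. The sketch below is hypothetical (the chassis objects, method names and the two-bearer reassignment are assumptions, not the patent's software interface); it simply mirrors the ordering constraint that the new chassis is configured before the original chassis releases the bearers, so monitoring coverage is never lost.

```python
# Hypothetical orchestration of the FIG. 13A -> 13B migration.
# No real chassis API is implied; the point is the ordering: the new chassis
# is brought up and configured before the original chassis stops monitoring.

class ProbeChassis:
    def __init__(self, name, monitored=None):
        self.name = name
        self.monitored = set(monitored or [])

    def configure(self, bearers):
        self.monitored |= set(bearers)
        print(f"{self.name}: now monitoring {sorted(self.monitored)}")

    def release(self, bearers):
        self.monitored -= set(bearers)
        print(f"{self.name}: released {sorted(bearers)}")


def migrate(original, extra, bearers_to_move, all_bearers):
    # Step 1: extra chassis installed (power/LAN assumed already established).
    # Step 2: duplicate BBI outputs connected to the extra chassis inputs.
    # Step 3: configure the new chassis before touching the original one.
    extra.configure(bearers_to_move)
    # Coverage check: every bearer is monitored somewhere at all times.
    assert all_bearers <= original.monitored | extra.monitored
    # Step 4: only now does the original chassis cease monitoring them.
    original.release(bearers_to_move)
    assert all_bearers <= original.monitored | extra.monitored
    # Step 5: removal of redundant cabling is a physical action, not shown.


if __name__ == "__main__":
    bearers = {"L1", "L2", "L3", "L4"}
    chassis_1 = ProbeChassis("100-1", monitored=bearers)
    chassis_2 = ProbeChassis("100-2")
    migrate(chassis_1, chassis_2, {"L3", "L4"}, bearers)
```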
  • The hardware and methods used in these steps can be arranged to comply with “hot-swap” standards as defined earlier. The system of FIGS. 13A and 13B, and of course any of the systems described above, may further provide automatic sensing of the removal (or failure) of a probe unit (or entire chassis), and automatic re-configuration of switches and re-programming of probe units to resume critical monitoring functions with minimum delay. Preferably, of course, the engineer would instruct the re-programming prior to any planned removal of a probe unit module. A further level of protection, which allows completely uninterrupted operation with minimum staff involvement, is to sense the unlocking of a processing card prior to actual removal, to reconfigure other units to take over the functions of the affected module, and then to signal to the engineer that actual removal is permitted. This will be illustrated further below with reference to FIG. 15A.
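  • One possible realisation of the "unlock before removal" protection is a small state machine driven by the lever switch. The sketch below is an assumption-laden illustration (the event names, the re-routing callback and the front-panel signal are invented for clarity); the essential point taken from the text is that re-configuration completes before the engineer is told that removal may proceed.

```python
# Hypothetical state machine for protected hot removal of a probe unit module.
# Events and callbacks are illustrative; only the ordering reflects the text:
# unlock sensed -> channels re-routed elsewhere -> "safe to remove" signalled.

from enum import Enum, auto


class SlotState(Enum):
    ACTIVE = auto()
    RE_ROUTING = auto()
    SAFE_TO_REMOVE = auto()


class HotSwapGuard:
    def __init__(self, slot, reroute_channels, show_removal_ok):
        self.slot = slot
        self.state = SlotState.ACTIVE
        self.reroute_channels = reroute_channels   # e.g. re-program a spare unit
        self.show_removal_ok = show_removal_ok     # e.g. light a front-panel LED

    def on_lever_unlocked(self):
        if self.state is not SlotState.ACTIVE:
            return
        self.state = SlotState.RE_ROUTING
        self.reroute_channels(self.slot)           # must finish first...
        self.state = SlotState.SAFE_TO_REMOVE
        self.show_removal_ok(self.slot)            # ...then permit removal


if __name__ == "__main__":
    guard = HotSwapGuard(
        slot="F3",
        reroute_channels=lambda s: print(f"re-routing channels away from {s}"),
        show_removal_ok=lambda s: print(f"{s}: removal permitted"),
    )
    guard.on_lever_unlocked()
```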
  • Multi-channel Probe Apparatus—Functional arrangement
  • FIG. 14 is a functional block schematic diagram of a multi-channel probe apparatus suitable for implementing the systems shown in FIGS. 4 to 13A and 13B. Like numerals depict like elements. All of the modules shown in FIG. 14 and their interconnections are ideally separately replaceable, and housed within a self-contained enclosure of standard rack-mount dimensions. The actual physical configuration of the network probe unit modules in a chassis with special backplane will be described later.
  • A network interface module 200 provides optical fibre connectors for the incoming bearer signals EXT 1-8 (70-1 to 70-N in FIG. 3), and performs optical to electrical conversion. A cross-point switch 80 provides a means of linking these connections to appropriate probe units 90. Each input of a probe unit can be regarded as a separate monitoring channel CH1, CH2 etc. As mentioned previously, each probe unit may in fact accept plural signals for processing simultaneously, and these may or may not be selectable independently, or grouped into larger monitoring channels. Additional optical outputs EXT 9,10 are provided to act as "spare" outputs (corresponding to 85 in FIG. 4). In the embodiment, each probe unit 90 controls the cross-point switch 80 to feed its inputs (forming channel CH1, 2, 3 or 4 etc.) with a bearer signal selected from among the incoming signals EXT 1-8. This selection may be pre-programmed in the apparatus, or may be set by remote command over a LAN. Each probe unit 90 is implemented in two parts, which may conveniently be realised as a specialised packet processor 150 and a general purpose single board computer (SBC) 160 module. There are provided four packet processors 150-1 to 150-4, each capable of filtering and pre-processing eight half duplex bearer signals at full rate, and four SBCs 160 capable of further processing the results obtained by the packet processors. The packet processors 150 comprise dedicated data processing hardware, while the SBCs can be implemented using industry standard processors or other general purpose processing modules. The packet processors 150 are closely coupled by individual peripheral buses to their respective SBCs 160 so as to form self-contained processing systems, each packet processor acting as a peripheral to its "host" SBC. Each packet processor 150 carries out high speed, time-critical cell and packet processing, including data aggregation and filtering. A second level of aggregation is carried out in the SBC 160.
  • LAN and chassis management modules 230, 235 (which in the implementation described later are combined on a single card) provide central hardware platform management and onward communication of the processing results. For this onward communication, multiple redundant LAN interfaces are provided between every SBC 160 and the LAN management module 230 across the backplane. The LAN management function has four LAN inputs (one from each SBC) and four LAN outputs (for redundancy) to the monitoring LAN network. Multiple connections are provided because different SBC manufacturers use different pin assignments on their connectors. For any particular manufacturer there is normally only one connection between the SBC 160 and the LAN management module 230. The dual redundant LAN interfaces are provided for reliability in reporting the filtered and processed data to the next level of aggregation (site processor 40 in FIG. 2). This next level can be located remotely. Each outgoing LAN interface is connectable to a completely independent network, LAN A or LAN B, to ensure reporting in case of LAN outages. In case of dual outages, the apparatus has buffer space for a substantial quantity of reporting data.
  • The chassis management module 235 oversees monitoring and wiring functions via (for example) an I2C bus using various protocols. Although I2C is normally defined as a shared bus system, each probe unit has, for reliability, its own I2C connection direct to the management module. The management module can also instruct the cross-point switch to activate the "spare" output (labelled as monitoring channels CH9,10 and optical outputs EXT 9,10) when it detects failure of one of the probe unit modules. This operation can also be carried out under instruction via LAN.
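  • A minimal sketch of this failure handling is given below, under the assumption that loss of a probe unit is detected (for example over its dedicated I2C link) and answered by steering the affected bearer to one of the spare outputs CH9/CH10 feeding a standby chassis. The function and parameter names are illustrative, not taken from the patent.

```python
# Illustrative failure handling in the chassis management function: on loss of
# a probe unit, steer the bearer it was monitoring to a spare cross-point
# output (CH9 or CH10) feeding the standby chassis. Names are assumptions.

SPARE_CHANNELS = ["CH9", "CH10"]


def handle_probe_failure(failed_channel, channel_to_bearer, crosspoint_set):
    """Route the failed channel's bearer to the first free spare output."""
    bearer = channel_to_bearer[failed_channel]
    for spare in SPARE_CHANNELS:
        if spare not in channel_to_bearer:
            crosspoint_set(spare, bearer)          # program the multiplexer
            channel_to_bearer[spare] = bearer
            return spare
    raise RuntimeError("no spare output available")


if __name__ == "__main__":
    routing = {"CH1": "EXT1", "CH2": "EXT2"}
    used = handle_probe_failure(
        "CH2", routing,
        crosspoint_set=lambda ch, ext: print(f"cross-point: {ext} -> {ch}"),
    )
    print(f"bearer re-routed via {used}")
```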
  • The network probe having the architecture described above must be realised in a physical environment capable of fulfilling the functional specification and of meeting other hardware platform requirements, such as those of the telecommunications environment in which it is to be deployed. A novel chassis (or "cardcage") configuration has been developed to meet these requirements within a compact rack-mountable enclosure. The chassis is deployed as a fundamental component of the data collection and processing system.
  • Multi-channel Probe Apparatus—Physical Implementation
  • FIGS. 15A, B and C show how the probe architecture of FIG. 14 can be implemented with a novel chassis, in a particularly compact and reliable manner. To support the network probe architecture for this embodiment there is also provided a custom backplane 190. FIG. 16 shows which signals are carried by the backplane, and which modules provide the external connections. Similar reference signs are used as in FIG. 14, where possible.
  • Referring to FIG. 16 for an overview of the functional architecture, the similarities with the architecture of FIG. 14 will be apparent. The network probe apparatus again has eight external optical terminals for signals EXT 1-8 to be monitored. These are received at a network interface module 200. A cross-point switch module 80 receives eight corresponding electrical signals EXT 1′-8′ from module 200 through the backplane 190. Switch 80 has ten signal outputs, forming eight monitoring channels CH1-8 plus two external outputs (CH9,10). Four packet processor modules 150-1 to 150-4 receive pairs of these channels CH1,2, CH3,4 etc. respectively. CH9,10 signals are fed back to the network interface module 200, and reproduced in optical form at external terminals EXT 9,10. All internal connections just mentioned are made through the backplane via transmission lines in the backplane 190. Each packet processor is paired with a respective SBC 160-1 to 160-4 by individual cPCI bus connections in the backplane.
  • A LAN & Chassis Management module 230 is provided, which is connected to the other modules by I2C buses in the backplane, and by LAN connections. A LAN interface module 270 provides external LAN connections for the onward reporting of processing results. Also provided is a fan assembly 400 for cooling and a power supply (PSU) module 420.
  • Referring to the views in FIG. 15A, chassis 100 carries a backplane 190 and provides support and interconnections for various processing modules. Conventionally, the processing modules are arranged in slots to the "front" of the backplane, and the space behind the backplane in a telecommunications application is occupied by specialised interconnect. This specialised interconnect may include further removable I/O cards referred to as "transition cards". The power supply and fans are generally located above and/or below the main card space, and the cards (processing modules) are arranged vertically in a vertical airflow. These factors make for a very tall enclosure, and one which is far deeper than the ideal of around 300 mm in the NEBS environment. The present chassis features significant departures from the conventional design, which result in a compact and particularly shallow enclosure.
  • In the present chassis, the power supply module (PSU) 420 is located in a shallow space behind the backplane 190. The processing modules 150-1, 160-1 etc. at the front of the backplane are, moreover, arranged to lie horizontally, with their long axes parallel to the front panel. The cooling fans 400 are placed to one side of the chassis. Airflow enters the chassis at the front at 410 and flows horizontally over the components to be cooled, before exiting at the rear at 412. This arrangement gives the chassis a high cooling capability while at the same time not extending the size of the chassis beyond the desired dimensions. The outer dimensions and front flange of the housing allow the chassis to be mounted on a standard 19 inch (483 mm) equipment rack, with just 5U height. Since the width of the enclosure is fixed by standard rack dimensions, but the height is freely selectable, the horizontal arrangement allows the space occupied by the enclosure to be matched to the number of processor slots required by the application. In the known vertical orientation, a chassis which provides ten slots must be just as high as one which provides twenty slots, and additional height must be allowed for airflow arrangements at top and bottom.
  • Referring also to FIGS. 15B and 15C, there are ten card slots labelled F1-F10 on the front side of the backplane 190. There are two shallow slots B1 and B2 to the rear of the backplane 190, back-to-back with F9 and F10 respectively. The front slot dimensions correspond to those of the cPCI standard, which also defines up to five standard electrical connectors referred to generally as J1 to J5, as marked in FIGS. 15B and 15C. It will be known to the skilled reader that connectors J1 and J2 have 110 pins each, and the functions of these are specified in the cPCI standard (version PICMG 2.0 R2.1, May 1st 1998).
  • Other connector positions are used differently by different manufacturers. Eight of the front slots (F1-F8) support the Packet Processor/SBC cards in pairs. The cards are removable using 'hot swap' techniques, as previously outlined, using thumb levers 195 to lock/unlock the cards and to signal that a card is to be inserted/removed. The other two front slots F9 and F10 are used for the cross-point switch 80 and LAN/Management card 230 respectively. Slots F1 to F8 comply with the cPCI standard insofar as connectors J1, J2, J3 and J5 are concerned. Other bus standards such as VME could also be used. The other slots F9 and F10 are unique to this design. All of the cPCI connections are standard, and the connectivity, routing and termination requirements are taken from the cPCI standard specification. Keying requirements are also taken from the cPCI standard. The cPCI bus does not connect all modules, however: it is split into four independent buses CPCI1-4 to form four self-contained host-peripheral processing sub-systems. Failure of any packet processor/SBC combination will not affect the other three probe units.
  • Each of the cards is hot-swappable and will automatically recover from any reconfiguration. Moreover, by providing switches responsive to operation of the thumb levers 195, prior to physical removal of the card, the system can be warned of impending removal of a module. This warning can be used to trigger automatic re-routing of the affected monitoring channel(s). The engineer replacing the card can be instructed to await a visual signal on the front panel of the card or elsewhere, before completing the removal of the card. This signal can be sent by the LAN/Management module 230, or by a remote controlling site. This scheme allows easy operation for the engineer, without any interruption of the monitoring functions, and without special steps to command the re-routing. Such commands might otherwise require the co-ordination of actions at the local site with staff at a central site, or at best the same engineer might be required to move between the chassis being worked upon and a nearby PC workstation.
  • As mentioned above, the upper two front slots (F10, F9) hold the LAN & Management module 230 and the cross-point switch 80 respectively. Slot B1 (behind F9) carries a Network Transition card forming network interface module 200, while the LAN interface 270 in slot B2 (behind F10) carries the LAN connectors. All external connections to the apparatus are provided by special transition cards in these rear slots, and routed through the backplane. No cabling needs to reach the rear of the individual probe unit slots directly. No cabling at all is required to the front of the enclosure. This not only keeps the exterior of the housing tidy, but also leaves a clear volume behind the backplane which can be occupied by the PSU 420, shown cut away in FIG. 15C, yielding a substantial space saving over conventional designs and giving greater ease of maintenance. The rear slot positions B1, B2 are slightly wider, to accommodate the PSU connectors 422.
  • The J4 position in the backplane is customised to route high integrity network signals (labelled "RF" in FIG. 15B). These are transported on custom connections not within the cPCI standards. FIG. 15B shows schematically how these connectors transport the bearer signals in monitoring channels CH1 etc. from the cross-point switch 80 in slot F9 to the appropriate packet processors 150-1 etc. in slots F2, F4, F6, F8. The external bearer signals EXT1′-8′ in electrical form can be seen passing through the backplane from the cross-point switch 80 (in slot F9) to the network interface module 200 (B1). These high speed, high-integrity signals are carried via appropriately designed transmission lines in the printed wiring of the backplane 190. The variation in transmission delay between channels in the chassis is not significant for the applications envisaged. However, in order to avoid phase errors it is still important to ensure that each half of any differential signal is routed from its source to its destination with essentially equal delay. To ensure this, the delays through the backplane and cross-point switch combination must be matched for each set of signals routed to the packet processors. It is important to note that these monitoring channels are carried independently on point-to-point connections, rather than through any shared bus such as is provided in the H.110 protocol for computer telephony.
  • The backplane also carries I2C buses (SMB protocol) and the LAN wiring. These are carried to each SBC 160-1 etc. either in the J3 position or the J5 position, depending on the manufacturer of the particular SBC, as described later. The LAN interface module 270 provides the apparatus with two external LAN ports for communications to the next layer of data processing/aggregation, for example a site processor.
  • Connectivity is achieved using two LANs (A and B) at 100 BaseT for a cardcage. The LAN I/O can be arranged to provide redundant connection to the external host computer 40. This may be done, for example, by using four internal LAN connections and four external LAN connections routed via different segments of the LAN 60. It is therefore possible to switch any SBC to either of the LAN connections, such that any SBC may be on any one connection or split between connections. This arrangement may be changed dynamically according to circumstances, as in the case of an error occurring, and allows different combinations of load sharing and redundancy. Additionally, this allows the probe processors to communicate with each other without going on the external LAN. However, this level of redundancy in the LAN connection cannot be achieved if the total data from the probe processors exceeds the capacity of any one external LAN connection.
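  • The flexibility described here, where each SBC can report over LAN A or LAN B and the split can change dynamically, can be sketched as a simple assignment problem. The data rates and helper names in the sketch below are assumptions for illustration only; the one constraint taken from the text is that fully redundant (dual-homed) reporting is possible only while the aggregate rate fits on a single external connection.

```python
# Illustrative assignment of SBC reporting streams to external LANs A and B.
# Rates and names are assumed. Redundant operation is only offered when the
# aggregate rate fits on one 100 BaseT link, as the text notes; otherwise the
# streams are load-shared across the available LANs.

LINK_CAPACITY_MBPS = 100.0   # assumed 100 BaseT external connection


def plan_lan_assignment(sbc_rates_mbps, lan_a_up=True, lan_b_up=True):
    total = sum(sbc_rates_mbps.values())
    active = [lan for lan, up in (("A", lan_a_up), ("B", lan_b_up)) if up]
    if not active:
        return {sbc: "buffer locally" for sbc in sbc_rates_mbps}

    if total <= LINK_CAPACITY_MBPS:
        # Everything fits on one link, so each SBC can fail over to the other
        # LAN without loss: report on the first active LAN.
        return {sbc: f"LAN {active[0]} (redundant)" for sbc in sbc_rates_mbps}

    # Otherwise, load-share: greedily spread SBCs across the active LANs.
    load = {lan: 0.0 for lan in active}
    plan = {}
    for sbc, rate in sorted(sbc_rates_mbps.items(), key=lambda kv: -kv[1]):
        lan = min(load, key=load.get)
        load[lan] += rate
        plan[sbc] = f"LAN {lan} (load shared)"
    return plan


if __name__ == "__main__":
    rates = {"SBC1": 60.0, "SBC2": 40.0, "SBC3": 30.0, "SBC4": 20.0}
    for sbc, where in plan_lan_assignment(rates).items():
        print(sbc, "->", where)
```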
  • An external timing port (not shown in FIG. 16) is additionally provided for accurately time-stamping the data in the packet processor. The signal is derived from any suitable source, for example a GPS receiver giving a 1 pulse per second input. It is also possible to generate this signal using one of the Packet Processor cards, where one Packet Processor becomes a master card and the others can synchronise to it.
  • The individual modules will now be described in detail, with reference to FIGS. 17 to 19. This will further clarify the inter-relationships between them, and the role of the backplane 190 and chassis 100.
  • Cross-Point Switch Module 80
  • FIG. 17 is a block diagram of the cross-point switch 80 and also shows the network line interfaces 300 (RX) and 310 (TX) provided on the network interface module 200. There are eight optical line receiver interfaces 300 provided within module 200. There are thus eight bearer signals, which are conditioned on the transition card (module 200) and transmitted in electrical form EXT1′-8′ directly through the backplane 190 to the cross-point switch card 80. Ten individually configurable multiplexers (selectors) M are provided, each freely selecting one of the eight inputs. Each monitoring channel (CH1-8), and hence each packet processor 150, can receive any of the eight incoming network signals (EXT1′-8′).
  • The outputs to the packet processors (CH1-CH4) are via the backplane 190 (position J4, FIG. 15B as described above) and may follow, amongst others, DS3/OC3/OC12/OC48 electrical standards or utilise a suitable proprietary interface. Each packet processor module 150 controls its own pair of multiplexers M directly.
  • The external optical outputs EXT 9,10 are provided via transmit interface 310 of the module 200 for connecting to a spare chassis (as in FIG. 8). These outputs can be configured to be any of the eight inputs, using a further pair of multiplexers M which are controlled by the LAN/Management Module 230. In this way, the spare processor or chassis 130 mentioned above can be activated in case of processor failure. In an alternative implementation, the selection of these external output signals CH9 and CH10 can be performed entirely on the network interface module 200, without passing through the backplane or the cross-point switch module 80.
  • Although functionally each multiplexer M of the cross-point switch is described and shown as being controlled by a respective packet processor 150, in the present embodiment this control is conducted via the LAN & management module 230. Commands or requests for a particular connection can be sent to the LAN & management module from the packet processor (or associated SBC 160) via the LAN connections, or I2C buses, provided in connectors J3 or J5.
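  • A simple software model can make the cross-point arrangement concrete. The class and method names below are assumptions (the real switch is hardware programmed via the management module); the model captures only what the text states: ten independent multiplexers, each selecting one of the eight electrical inputs, with the two spare outputs reserved to the management function.

```python
# Simple model of the cross-point switch: ten independent multiplexers, each
# selecting one of the eight electrical bearer inputs EXT1'-8'. Channels CH1-8
# serve the packet processors; CH9/CH10 are the spare outputs controlled by
# the management module. Class and method names are illustrative assumptions.

class CrossPointSwitch:
    INPUTS = [f"EXT{i}'" for i in range(1, 9)]
    CHANNELS = [f"CH{i}" for i in range(1, 11)]

    def __init__(self):
        self.selection = {ch: None for ch in self.CHANNELS}

    def select(self, channel, ext_input, requester):
        if channel not in self.selection:
            raise ValueError(f"unknown channel {channel}")
        if ext_input not in self.INPUTS:
            raise ValueError(f"unknown input {ext_input}")
        # CH9/CH10 are the spare outputs, reserved to the management module.
        if channel in ("CH9", "CH10") and requester != "management":
            raise PermissionError("spare outputs are management-controlled")
        self.selection[channel] = ext_input


if __name__ == "__main__":
    xps = CrossPointSwitch()
    xps.select("CH1", "EXT3'", requester="packet-processor-1")
    xps.select("CH9", "EXT3'", requester="management")   # feed standby chassis
    print(xps.selection)
```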
  • Packet Processor Module 150
  • FIG. 18 is a block diagram of one of the Packet Processor modules 150 of the apparatus. The main purpose of the packet processor (PP) 150 is to capture data from the network interface. This data is then processed, analysed and filtered before being sent to an SBC via a local cPCI bus. Packet processor 150 complies with the Compact PCI Hot Swap specification PICMG-2.1 R 1.0, mentioned above. The Packet Processor 150 described here is designed to work at up to 622 Mbit/s using a SONET/SDH frame structure carrying ATM cells using AAL5 Segmentation And Reassembly (SAR). Other embodiments can be employed using the same architecture, for example to operate at OC48 (2.4 Gbit/s).
  • The following description makes reference to a single “half” of the two-channel packet processor module 150, and to a single Packet Processor/SBC pair only (single channel). The chassis as described supports four such Packet Processor/SBC pairs, and each packet processor comprises two processing means to handle multiple bearer signals (multiple monitoring channels).
  • It is possible for the Packet Processor 150 to filter the incoming data. This is essential due to the very high speed of the broadband network interfaces being monitored, such as OC-3 and above. The incoming signals are processed by the Packet Processor, this generally taking the form of time stamping the data and performing filtering based on appropriate fields in the data. Different fields can be chosen accordingly: for example, ATM cells can be filtered by VPI/VCI (VC) number, IP packets by IP address, or filtering can be based on other, user-defined fields. It is necessary to provide appropriate means to recover the clock and data from the incoming signal, as the means needed varies depending on the link media and coding schemes used. In a typical example using ATM, ATM cells are processed by VPI/VCI (VC) number. The Packet Processor is provided with means 320 to recover the clock and data from the incoming signal bit stream. The data is then 'deframed' at a transmission convergence sub-layer 330 to extract the ATM cells. The ATM cells are then time-stamped 340 and buffered in a First In First Out (FIFO) buffer 350 to smooth the rate of burst-type data. Cells from this FIFO buffer are then passed sequentially to an ATM cell processor 360. The packet processor can store ATM cells to allow it to re-assemble cells into a message—a Protocol Data Unit (PDU). Only when the PDU has been assembled will it be sent to the SBC. Before assembly, the VC of a cell is checked to ascertain what actions should be taken, for example to discard the cell, assemble a PDU, or pass on the raw cell.
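  • A highly simplified software model of this per-cell processing is given below. It is not the packet processor's firmware (the real implementation is dedicated hardware); the cell fields, the per-VC action table and the end-of-PDU marker are assumptions chosen to mirror the stages just described: timestamp, per-VC action lookup, and reassembly with only completed PDUs passed upward.

```python
# Toy model of the per-cell stages described above: timestamp, look up the
# action configured for the cell's VC, then discard, forward raw, or
# accumulate until an end-of-PDU marker completes reassembly (AAL5-style).
# Field names and the action table are illustrative assumptions.

import time
from collections import defaultdict

# Per-VC instructions, keyed by (VPI, VCI): "discard", "raw" or "reassemble".
vc_actions = {(0, 32): "reassemble", (0, 33): "raw", (1, 40): "discard"}
partial_pdus = defaultdict(list)     # (VPI, VCI) -> payload fragments


def process_cell(vpi, vci, payload, end_of_pdu):
    """Return (kind, data) to hand to the SBC, or None if nothing to forward."""
    stamped = {"ts": time.time(), "vpi": vpi, "vci": vci, "payload": payload}
    action = vc_actions.get((vpi, vci), "discard")   # unconfigured VCs dropped
    if action == "discard":
        return None
    if action == "raw":
        return ("raw_cell", stamped)
    # "reassemble": buffer fragments until the end-of-PDU cell arrives.
    partial_pdus[(vpi, vci)].append(payload)
    if not end_of_pdu:
        return None
    pdu = b"".join(partial_pdus.pop((vpi, vci)))
    return ("pdu", {"ts": stamped["ts"], "vpi": vpi, "vci": vci, "pdu": pdu})


if __name__ == "__main__":
    print(process_cell(0, 32, b"hello ", end_of_pdu=False))   # buffered -> None
    print(process_cell(0, 32, b"world", end_of_pdu=True))     # completed PDU
    print(process_cell(0, 33, b"raw", end_of_pdu=False))      # raw cell passed on
```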
  • Data is transferred into the SBC memory using cPCI DMA transfers to a data buffer 38. This ensures the very high data throughput that may be required if large amounts of data are being stored. The main limitation in the amount of data that is processed will be due to the applications software that processes it. It is therefore the responsibility of the Packet Processor 150 to carry out as much pre-processing of the data as possible so that only that data which is relevant is passed up into the application domain.
  • The first function of the Packet Processor 150 is to locate the instructions for processing the VC (virtual channel or circuit) to which the cell belongs. To do this it must convert the very large VPI/VCI of the cell into a manageable pointer to its associated processing instructions (VC # key). This is done using a hashing algorithm by hash generator 390, which in turn uses a VC hash table. Processor 150, having located the instructions, can then process the cell.
  • Processing the cell involves updating status information for the particular VC (e.g. cell count) and forwarding the cell and any associated information (e.g. “Protocol Data Unit (PDU) received”) to the SBC 160 if required. By reading the status of a particular VC, the processor can vary its action depending on the current status of that VC (e.g. providing summary information after first cell received). Cell processor 360 also requires certain configurable information which is applicable to all of its processing functions regardless of VC (e.g. buffer sizes) and this ‘global’ configuration is accessible via a global configuration store.
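  • The hashing step can be pictured with a short sketch. The hash function, table size and context layout below are assumptions (the patent does not specify them, and the real lookup is in hardware); the point is only that the wide VPI/VCI pair is reduced to a small key that indexes the per-VC processing instructions and status.

```python
# Illustrative VPI/VCI -> VC-context lookup using a small hash table with
# chaining. The hash function and table size are assumptions. Each context
# holds the per-VC processing instructions and status (e.g. a cell count).

TABLE_SIZE = 1024                            # assumed power-of-two table size
vc_table = [[] for _ in range(TABLE_SIZE)]   # buckets of (vpi, vci, context)


def vc_hash(vpi, vci):
    # Any reasonable mixing of the VPI and VCI fields would do here.
    return ((vpi << 16) ^ vci) % TABLE_SIZE


def add_vc(vpi, vci, instructions):
    context = {"instructions": instructions, "cell_count": 0}
    vc_table[vc_hash(vpi, vci)].append((vpi, vci, context))
    return context


def lookup_vc(vpi, vci):
    for v, c, context in vc_table[vc_hash(vpi, vci)]:
        if (v, c) == (vpi, vci):
            return context
    return None                              # unconfigured VC


if __name__ == "__main__":
    add_vc(0, 32, instructions="assemble_pdu")
    ctx = lookup_vc(0, 32)
    ctx["cell_count"] += 1                   # status update, e.g. cell count
    print(ctx)
    print(lookup_vc(5, 99))                  # None: no instructions for this VC
```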
  • A time stamping function 340 can be synchronised to an external GPS time signal or can be adjusted by the SBC 160. The SBC can also configure and monitor the ‘deframer’ (e.g. set up frame formats and monitor alarms) as well as select the optical inputs (EXT 1-8) to be monitored. Packet Processor 150 provides all of the necessary cPCI interface functions.
  • Each packet processor board 150-1 etc. is removable without disconnecting power from the chassis. Removal of this board will not impact the performance of other boards in the chassis, other than the associated SBC. The microprocessor notifies the presence or absence of the packet processor and processes any signal loss conditions generated by the Packet Processor.
  • Single Board Computer (SBC) Modules 160
  • The SBC module 160 is not shown in detail herein, being a general-purpose processing module; examples include the Motorola CPV5350, FORCE CPCI-730, and SMT NAPA. The SBC 160 is a flexible, programmable device. In this specific embodiment two such devices may exist on one cPCI card, in the form of "piggyback" modules (PMCs). The 100 BaseT interfaces, disk memory etc. may also be in the form of PMCs. As already described, communications via the cPCI bus (J1/J2) on the input side, communications via the LAN port on the output side, and all other connections are made via the backplane at the rear; for diagnostic purposes an RS-232 port is provided at the front.
  • LAN & Chassis Management Module 230
  • FIG. 19 is a block diagram of the combined LAN and chassis management card for the network probe as has been described. Module 230 performs a number of key management functions, although the probe units 150/160 can be commanded independently from a remote location, via the LAN interface. The card firstly provides a means for routing the probe units' SMB and LAN connections, including dual independent LAN switches 500A and 500B to route the LAN connections with redundancy and sufficient bandwidth to the outside world.
  • On the chassis management side, a Field Programmable Gate Array (FPGA) 510 within this module performs the following functions:
      • (520) I2C and SMB communications, with reference to chassis configuration storage registers 530
      • (540) ‘magic packet’ handling, for resetting the modules remotely in the event that the higher level network protocols “hang up”;
      • (550) environmental control and monitoring functions (fan speed, PSU and CPU temperatures) to ensure optimal operating conditions for the chassis, and preferably also to minimise unnecessary power consumption and fan noise.
  • A hardware watchdog feature 560 is also included to monitor the activity of all modules and take appropriate action in the event that any of them becomes inactive or unresponsive. This includes the ability to reset modules.
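  • The watchdog behaviour can be summarised in a few lines. The timeout value, the liveness source and the reset hook in the sketch below are assumptions; the text specifies only that inactive or unresponsive modules are detected and may be reset.

```python
# Illustrative hardware-watchdog behaviour: each module must refresh its
# liveness indication within a timeout or it is reset. Timeout, clock source
# and the reset hook are assumptions; the real watchdog lives in the FPGA.

import time

WATCHDOG_TIMEOUT_S = 5.0                    # assumed timeout


class Watchdog:
    def __init__(self, modules, reset_module):
        self.last_seen = {m: time.monotonic() for m in modules}
        self.reset_module = reset_module

    def kick(self, module):
        """Called whenever activity is observed from a module."""
        self.last_seen[module] = time.monotonic()

    def check(self):
        now = time.monotonic()
        for module, seen in self.last_seen.items():
            if now - seen > WATCHDOG_TIMEOUT_S:
                self.reset_module(module)   # e.g. assert a reset line
                self.last_seen[module] = now


if __name__ == "__main__":
    wd = Watchdog(["SBC1", "SBC2"], reset_module=lambda m: print("reset", m))
    wd.kick("SBC1")
    wd.check()                              # no resets: all within the timeout
```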
  • Finally, the management module implements at 580 a "Multivendor Interconnect", whereby differences in the usage of cPCI connector pins (or those of whatever standard is adopted) between a selection of processor vendors can be accommodated.
  • As mentioned previously, the chassis carries at some locations cPCI processor modules from a choice of selected vendors, but these are coupled via the cPCI bus to special peripheral cards. While such cards are known in principle, and the processor-peripheral bus is fully specified, the apparatus described does not have a conventional interconnect arrangement for the broadband signals, multiple redundant LAN connections and so forth. Even for the same functions, such as the LAN signals and the I2C/SMB protocol for hardware monitoring, different SBC vendors place the relevant signals on different pins of the cPCI connector set; in particular, they may be on certain pins in J3 with some vendors, and at various locations in J5 with others. Conventionally, this means that the system designer has to restrict the user's choice of SBC modules to those of one vendor, or a group of vendors who have adopted the same pin assignment for LAN and SMB functions, besides the standard assignments for J1 and J2, which are specified for all cPCI products.
  • To overcome this obstacle, a modular Multivendor Interconnect (MVI) solution may be applied. The MVI module 580 is effectively four product-specific configuration cards that individually route the LAN and SMB signals received from each SBC 160-1 etc. to the correct locations on the LAN/Management card. One MVI card exists for each processor. These are carried piggyback on the LAN/Management module 230, and each is accessible from the front panel of the enclosure. The backplane in locations J3 and J5 includes sufficient connectors, pins and interconnections between the modules to satisfy a number of different possible SBC types. Needless to say, when replacing a processor card with one of a different type, the corresponding MVI configuration card needs exchanging also.
  • An alternative scheme to switch the card connection automatically based on vendor ID codes read via the backplane can also be envisaged. In a particular embodiment, for example, the “Geographic Address” pins defined in the cPCI connector specifications may be available for signalling (under control of a start-up program) which type of SBC 160 is in a given slot. The routing of SMB, LAN and other signals can then be switched electronically under control of programs in the LAN & management card 230.
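  • The automatic alternative can be sketched as a lookup from a board identifier (read, for example, via the Geographic Address pins at start-up) to a signal-routing table selecting which connector and pins carry LAN and SMB for that slot. All identifiers and pin numbers below are invented for illustration; no real vendor mapping is implied.

```python
# Illustrative automatic Multivendor Interconnect selection: a board ID read
# at start-up selects which connector (J3 or J5) and which pins carry the LAN
# and SMB signals for that slot. All IDs and pin numbers here are invented.

SIGNAL_MAPS = {
    "vendor_a": {"connector": "J3", "lan_pins": (1, 2, 3, 4), "smb_pins": (10, 11)},
    "vendor_b": {"connector": "J5", "lan_pins": (7, 8, 9, 10), "smb_pins": (20, 21)},
}


def configure_slot_routing(slot, read_board_id, apply_routing):
    """Select and apply the per-vendor signal routing for one SBC slot."""
    board_id = read_board_id(slot)          # e.g. via Geographic Address pins
    try:
        routing = SIGNAL_MAPS[board_id]
    except KeyError:
        raise RuntimeError(f"slot {slot}: unknown SBC type {board_id!r}")
    apply_routing(slot, routing)            # e.g. program analogue switches
    return routing


if __name__ == "__main__":
    chosen = configure_slot_routing(
        "F1",
        read_board_id=lambda s: "vendor_a",
        apply_routing=lambda s, r: print(f"{s}: route LAN/SMB via {r['connector']}"),
    )
    print(chosen)
```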
  • Conclusion
  • Those skilled in the art will recognise that the invention in any of its aspects is not limited to the specific embodiments disclosed herein. In particular, unless specified in the claims, the invention is in no way limited to any particular type of processor, type of network to be monitored, protocol, choice of physical interconnect, choice of peripheral bus (cPCI v. VME, parallel v. serial etc.), number of bearers per chassis, number of bearers per monitoring channel, number of monitoring channels per probe unit.
  • The fact that independent processor subsystems are arranged in the chassis allows multiple data paths from the telecommunications network to the LAN network, thereby providing inherent redundancy. On the other hand, for other applications such as computer telephony, reliability and availability may not be as critical as in the applications addressed by the present embodiment. For such applications, a similar chassis arrangement but with an H.110 bus in the backplane may be very useful. Similarly, the cPCI bus, I2C bus and/or LAN interconnect may be shared among all the modules.
  • Each aspect of the invention mentioned above is to be considered as independent, such that the probe functional architecture can be used irrespective of the chassis configuration, and vice versa. On the other hand, the reader will recognise that the specific combination of these features offers a highly desirable instrumentation system, which provides the desired functionality, reliability and availability levels in a compact and scalable architecture.
  • In the specific embodiments described herein, each probe unit comprising first and second processor modules (the packet processor and SBC respectively) is configured to monitor simplex and duplex bearers. The invention, in any of its aspects, is not limited to such embodiments. In particular, each probe unit may be adapted to process one or more individual bearer signals. In the case of lower speed protocol signals the bearer signals can be multiplexed together (for example within the cross-point switch module 80 or network interface module 200) to take full advantage of the internal bandwidth of the architecture.

Claims (11)

1-12. (canceled)
13. An enclosure as claimed in claim 25, together with a processing sub-system comprising one processing module having specialised capability for processing a specific type of input signal to be analysed, and another processing module of general purpose type for receiving partially processed data from the first processing module and for further processing and reducing said data for onward communication.
14-24. (canceled)
25. A computer equipment chassis comprising a housing and a backplane providing locations for a plurality of independent processing sub-systems, each processing sub-system comprising first and second processing modules to be separately mounted on the backplane at adjacent locations, the backplane providing at least four independent host-peripheral interfaces, each extending only between the adjacent locations for said first and second processing modules, and each being configured such that in operation the first processing module operates as a peripheral and the second processing module operates as host.
26. A chassis as claimed in claim 25, wherein the housing and backplane further provide a location for a multi-channel interface module providing external connections for all of the processing sub-systems, the backplane routing signals from the interface module to the appropriate sub-systems.
27. A chassis as claimed in claim 26, wherein said housing and backplane further provide a location for a switching module, such that each external connection can be routed and re-routed to different processing sub-systems.
28. A chassis as claimed in claim 25, wherein the backplane further provides interconnections to the locations for the processing sub-systems for communication externally of the housing.
29. A chassis as claimed in claim 28, wherein said housing and backplane further provide a management module location for routing of said communication from the locations for the processing sub-systems to external connectors.
30-50. (canceled)
51. A chassis as claimed in claim 25, wherein the host-peripheral interfaces are cPCI interfaces.
52. A chassis as claimed in claim 25, wherein there are at least four independent processing sub-systems and host-peripheral interfaces.
US10/900,793 1999-10-01 2004-07-27 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment Abandoned US20050041684A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/900,793 US20050041684A1 (en) 1999-10-01 2004-07-27 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
GB9923143A GB2354883B (en) 1999-10-01 1999-10-01 Chassis for processing sub-systems
GB9923142.5 1999-10-01
GB9923143.3 1999-10-01
GB9923142A GB2354905B (en) 1999-10-01 1999-10-01 Multi-channel network monitoring apparatus,signal replicating device,and systems including such apparatus and devices
US09/672,593 US6925052B1 (en) 1999-10-01 2000-09-28 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment
US10/900,793 US20050041684A1 (en) 1999-10-01 2004-07-27 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/672,593 Division US6925052B1 (en) 1999-10-01 2000-09-28 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment

Publications (1)

Publication Number Publication Date
US20050041684A1 true US20050041684A1 (en) 2005-02-24

Family

ID=34796852

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/672,593 Expired - Lifetime US6925052B1 (en) 1999-10-01 2000-09-28 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment
US10/900,793 Abandoned US20050041684A1 (en) 1999-10-01 2004-07-27 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/672,593 Expired - Lifetime US6925052B1 (en) 1999-10-01 2000-09-28 Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment

Country Status (1)

Country Link
US (2) US6925052B1 (en)

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020105966A1 (en) * 2000-11-17 2002-08-08 Ronak Patel Backplane interface adapter with error control and redundant fabric
US20050147121A1 (en) * 2003-12-29 2005-07-07 Gary Burrell Method and apparatus to double LAN service unit bandwidth
US20050175018A1 (en) * 2003-05-15 2005-08-11 Wong Yuen F. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US20050226148A1 (en) * 2004-04-12 2005-10-13 Nortel Networks Limited Method and apparatus for enabling redundancy in a network element architecture
US20060062233A1 (en) * 2000-12-19 2006-03-23 Chiaro Networks Ltd. System and method for router queue and congestion management
US20060160508A1 (en) * 2005-01-18 2006-07-20 Ibm Corporation Method and apparatus for scheduling wireless LAN traffic
US7187687B1 (en) 2002-05-06 2007-03-06 Foundry Networks, Inc. Pipeline method and system for switching packets
US20070081553A1 (en) * 2005-10-12 2007-04-12 Finisar Corporation Network tap device powered by power over ethernet
US20070081549A1 (en) * 2005-10-12 2007-04-12 Finisar Corporation Network tap/aggregator configured for power over ethernet operation
US20070110088A1 (en) * 2005-11-12 2007-05-17 Liquid Computing Corporation Methods and systems for scalable interconnect
US20070147271A1 (en) * 2005-12-27 2007-06-28 Biswajit Nandy Real-time network analyzer
US20070171966A1 (en) * 2005-11-15 2007-07-26 Light Greta L Passive tap and associated system for tapping network data
US20070174492A1 (en) * 2005-11-15 2007-07-26 Light Greta L Passive network tap for tapping network data
US20070208876A1 (en) * 2002-05-06 2007-09-06 Davis Ian E Method and apparatus for efficiently processing data packets in a computer network
US20070253420A1 (en) * 2000-11-17 2007-11-01 Andrew Chang Backplane interface adapter
US20070288690A1 (en) * 2006-06-13 2007-12-13 Foundry Networks, Inc. High bandwidth, high capacity look-up table implementation in dynamic random access memory
US20080049742A1 (en) * 2006-08-22 2008-02-28 Deepak Bansal System and method for ecmp load sharing
US20080095064A1 (en) * 2006-10-19 2008-04-24 Angel Molina Method and apparatus for improved non-intrusive monitoring functions
US20080225859A1 (en) * 1999-01-12 2008-09-18 Mcdata Corporation Method for scoring queued frames for selective transmission through a switch
US20090282322A1 (en) * 2007-07-18 2009-11-12 Foundry Networks, Inc. Techniques for segmented crc design in high speed networks
US20090279441A1 (en) * 2007-01-11 2009-11-12 Foundry Networks, Inc. Techniques for transmitting failure detection protocol packets
US20090279559A1 (en) * 2004-03-26 2009-11-12 Foundry Networks, Inc., A Delaware Corporation Method and apparatus for aggregating input data streams
US20090279423A1 (en) * 2006-11-22 2009-11-12 Foundry Networks, Inc. Recovering from Failures Without Impact on Data Traffic in a Shared Bus Architecture
US20090282148A1 (en) * 2007-07-18 2009-11-12 Foundry Networks, Inc. Segmented crc design in high speed networks
US7649885B1 (en) * 2002-05-06 2010-01-19 Foundry Networks, Inc. Network routing system for enhanced efficiency and monitoring capability
US7657703B1 (en) 2004-10-29 2010-02-02 Foundry Networks, Inc. Double density content addressable memory (CAM) lookup scheme
US7738450B1 (en) 2002-05-06 2010-06-15 Foundry Networks, Inc. System architecture for very fast ethernet blade
US7817540B1 (en) * 2002-05-08 2010-10-19 Cisco Technology, Inc. Method and apparatus for N+1 RF switch with passive working path and active protection path
US7830884B2 (en) 2002-05-06 2010-11-09 Foundry Networks, Llc Flexible method for processing data packets in a network routing system for enhanced efficiency and monitoring capability
US8090901B2 (en) 2009-05-14 2012-01-03 Brocade Communications Systems, Inc. TCAM management approach that minimize movements
US8149839B1 (en) 2007-09-26 2012-04-03 Foundry Networks, Llc Selection of trunk ports and paths using rotation
US8190881B2 (en) 2007-10-15 2012-05-29 Foundry Networks Llc Scalable distributed web-based authentication
US20120295674A1 (en) * 2009-11-17 2012-11-22 By Light Professional IT Services Systems, methods and devices for convergent communications
US8448162B2 (en) 2005-12-28 2013-05-21 Foundry Networks, Llc Hitless software upgrades
US8599850B2 (en) 2009-09-21 2013-12-03 Brocade Communications Systems, Inc. Provisioning single or multistage networks using ethernet service instances (ESIs)
US20140032748A1 (en) * 2012-07-25 2014-01-30 Niksun, Inc. Configurable network monitoring methods, systems, and apparatus
US8730961B1 (en) 2004-04-26 2014-05-20 Foundry Networks, Llc System and method for optimizing router lookup
US20140371883A1 (en) * 2013-06-13 2014-12-18 Dell Products L.P. System and method for switch management
US8935568B2 (en) 2012-07-27 2015-01-13 Dell Products, Lp System and method of replicating virtual machines for live migration between data centers
US9104645B2 (en) 2012-07-27 2015-08-11 Dell Products, Lp System and method of replicating virtual machines for live migration between data centers
CN111065005A (en) * 2019-12-31 2020-04-24 汇智智能科技有限公司 Intelligent Internet of things edge gateway
US20210364311A1 (en) * 2018-09-06 2021-11-25 Google Llc Displaying Personalized Landmarks in a Mapping Application
US20220038171A1 (en) * 2020-07-28 2022-02-03 Atc Technologies, Llc Single Frequency Network (SFN) for Broadcast/Multicast application on a Spotbeam Satellite

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7843963B1 (en) * 2000-10-17 2010-11-30 Sprint Communications Company L.P. Probe device for determining channel information in a broadband wireless system
KR100434348B1 (en) * 2000-12-27 2004-06-04 엘지전자 주식회사 special resource multiplexing device of the inteligent network system and controlling method therefore
US6904005B2 (en) * 2001-03-22 2005-06-07 International Business Machines Corporation Power supply apparatus and method using same
US7599293B1 (en) 2002-04-25 2009-10-06 Lawrence Michael Bain System and method for network traffic and I/O transaction monitoring of a high speed communications network
US20030202462A1 (en) * 2002-04-29 2003-10-30 Smith David B. Method and apparatus for fail over protection in a voice over internet communication system
US7403525B2 (en) * 2002-05-15 2008-07-22 Broadcom Corporation Efficient routing of packet data in a scalable processing resource
US7152107B2 (en) * 2002-08-07 2006-12-19 Hewlett-Packard Development Company, L.P. Information sharing device
US8266271B2 (en) * 2002-09-10 2012-09-11 Jds Uniphase Corporation Propagation of signals between devices for triggering capture of network data
KR100495876B1 (en) * 2002-11-25 2005-06-16 유앤아이 주식회사 bone fixation appratus and assembling method and tool
US7827248B2 (en) * 2003-06-13 2010-11-02 Randy Oyadomari Discovery and self-organization of topology in multi-chassis systems
US7130275B2 (en) * 2003-06-16 2006-10-31 Motorola, Inc. Extended automatic protection switching arrangement
US8190722B2 (en) * 2003-06-30 2012-05-29 Randy Oyadomari Synchronization of timestamps to compensate for communication latency between devices
US7308705B2 (en) * 2003-08-29 2007-12-11 Finisar Corporation Multi-port network tap
US7373430B2 (en) * 2003-12-24 2008-05-13 Nokia Corporation Cluster accelerator network interface with filter
US8369218B2 (en) * 2004-04-28 2013-02-05 Net Optics, Inc. Zero-interrupt network tap
US7460375B2 (en) * 2004-05-07 2008-12-02 Rackable Systems, Inc. Interface assembly
KR100603599B1 (en) * 2004-11-25 2006-07-24 한국전자통신연구원 Apparatus and Method for Redundancy Control of Redundancy Switch Board
US8320242B2 (en) * 2004-12-24 2012-11-27 Net Optics, Inc. Active response communications network tap
DE102005007062B4 (en) * 2005-02-16 2007-07-19 Siemens Ag Method for transmitting signaling data between peripheral devices of a switching system
US7760859B2 (en) * 2005-03-07 2010-07-20 Net Optics, Inc. Intelligent communications network tap port aggregator
US8264960B2 (en) * 2005-05-31 2012-09-11 Broadcom Corporation Method and system for sharing AV/record resources in a programmable transport demultiplexer and PVR engine
US8571053B2 (en) * 2005-05-31 2013-10-29 Broadcom Corporation Method and system for architecture of a fast programmable transport demultiplexer using double buffered approach
US8098657B2 (en) * 2005-05-31 2012-01-17 Broadcom Corporation System and method for providing data commonality in a programmable transport demultiplexer engine
US7697537B2 (en) * 2006-03-21 2010-04-13 Broadcom Corporation System and method for using generic comparators with firmware interface to assist video/audio decoders in achieving frame sync
US20070248318A1 (en) * 2006-03-31 2007-10-25 Rodgers Stephane W System and method for flexible mapping of AV vs record channels in a programmable transport demultiplexer/PVR engine
US7958396B2 (en) * 2006-05-19 2011-06-07 Microsoft Corporation Watchdog processors in multicore systems
US8094576B2 (en) 2007-08-07 2012-01-10 Net Optic, Inc. Integrated switch tap arrangement with visual display arrangement and methods thereof
US7903576B2 (en) * 2007-08-07 2011-03-08 Net Optics, Inc. Methods and arrangement for utilization rate display
US7898984B2 (en) 2007-08-07 2011-03-01 Net Optics, Inc. Enhanced communication network tap port aggregator arrangement and methods thereof
US7773529B2 (en) 2007-12-27 2010-08-10 Net Optic, Inc. Director device and methods thereof
US9048884B2 (en) * 2008-05-02 2015-06-02 Lockheed Martin Corporation Magnetic based short range communications device, system and method
US9813448B2 (en) 2010-02-26 2017-11-07 Ixia Secured network arrangement and methods thereof
US9019863B2 (en) 2010-02-26 2015-04-28 Net Optics, Inc. Ibypass high density device and methods thereof
US9749261B2 (en) 2010-02-28 2017-08-29 Ixia Arrangements and methods for minimizing delay in high-speed taps
US8902735B2 (en) 2010-02-28 2014-12-02 Net Optics, Inc. Gigabits zero-delay tap and methods thereof
US8755293B2 (en) * 2010-02-28 2014-06-17 Net Optics, Inc. Time machine device and methods thereof
US8798431B2 (en) 2012-06-01 2014-08-05 Telefonaktiebolaget L M Ericsson (Publ) Fine-grained optical shuffle interconnect topology migration
US9819436B2 (en) 2013-08-26 2017-11-14 Coriant Operations, Inc. Intranodal ROADM fiber management apparatuses, systems, and methods

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4978953A (en) * 1988-11-22 1990-12-18 Technology 80, Inc. Device for monitoring multiple digital data channels
US5163052A (en) * 1989-10-12 1992-11-10 Ncr Corporation High reliability computer diagnostics system
US5457729A (en) * 1993-03-15 1995-10-10 Symmetricom, Inc. Communication network signalling system link monitor and test unit
US5522042A (en) * 1994-01-28 1996-05-28 Cabletron Systems, Inc. Distributed chassis agent for distributed network management
US5560033A (en) * 1994-08-29 1996-09-24 Lucent Technologies Inc. System for providing automatic power control for highly available n+k processors
US5649100A (en) * 1994-08-25 1997-07-15 3Com Corporation Network backplane interface having a network management section for managing and configuring networks on the backplane based upon attributes established in a parameter table
US5751932A (en) * 1992-12-17 1998-05-12 Tandem Computers Incorporated Fail-fast, fail-functional, fault-tolerant multiprocessor system
US5771225A (en) * 1993-09-20 1998-06-23 Fujitsu Limited System for adjusting characteristic of protection unit when switching from plural working units having various characteristics to protection unit
US6012151A (en) * 1996-06-28 2000-01-04 Fujitsu Limited Information processing apparatus and distributed processing control method
US6021111A (en) * 1995-12-07 2000-02-01 Nec Corporation Unit switching apparatus with failure detection
US6081503A (en) * 1997-10-01 2000-06-27 Lucent Technologies Inc. Control architecture using an embedded signal status protocol
US6324608B1 (en) * 1997-05-13 2001-11-27 Micron Electronics Method for hot swapping of network components
US20020032842A1 (en) * 1997-11-19 2002-03-14 Shigeo Kawauchi Data acquisition apparatus and memory controller
US6484126B1 (en) * 1997-06-06 2002-11-19 Westinghouse Electric Company Llc Digital plant protection system with engineered safety features component control system
US20030033393A1 (en) * 2001-08-07 2003-02-13 Larson Thane M. System and method for providing network address information in a server system
US6532089B1 (en) * 1998-08-20 2003-03-11 Nec Corporation Optical cross-connect, method of switching over optical path, optical ADM, and optical cross-connect network system
US6636922B1 (en) * 1999-03-17 2003-10-21 Adaptec, Inc. Methods and apparatus for implementing a host side advanced serial protocol
US6859882B2 (en) * 1990-06-01 2005-02-22 Amphus, Inc. System, method, and architecture for dynamic server power management and dynamic workload management for multi-server environment
US6988221B2 (en) * 1998-12-18 2006-01-17 Triconex Method and apparatus for processing control using a multiple redundant processor control system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3280300D1 (en) 1981-11-26 1991-03-07 Toshiba Kawasaki Kk OPTICAL DATA TRANSFER SYSTEM.
DE4024739A1 (en) 1990-08-03 1992-02-06 Siemens Ag TRANSMISSION DEVICE FOR TRANSMITTING NEWS AND ADDITIONAL SIGNALS
GB9405771D0 (en) 1994-03-23 1994-05-11 Plessey Telecomm Telecommunications system protection scheme
GB2292820A (en) 1994-08-25 1996-03-06 Michael Victor Rodrigues Multi-compatible computer with slot-in mother-cards
WO1999046671A1 (en) 1998-03-10 1999-09-16 Quad Research High speed fault tolerant mass storage network information server
NL1011398C1 (en) 1999-02-26 1999-04-13 Koninkl Kpn Nv Optical splitter for telecommunications network
GB2352064A (en) 1999-07-13 2001-01-17 Thomson Training & Simulation Multi-processor system with PCI backplane

US7974208B2 (en) 2000-12-19 2011-07-05 Foundry Networks, Inc. System and method for router queue and congestion management
US7738450B1 (en) 2002-05-06 2010-06-15 Foundry Networks, Inc. System architecture for very fast ethernet blade
US20070208876A1 (en) * 2002-05-06 2007-09-06 Davis Ian E Method and apparatus for efficiently processing data packets in a computer network
US8671219B2 (en) 2002-05-06 2014-03-11 Foundry Networks, Llc Method and apparatus for efficiently processing data packets in a computer network
US20110002340A1 (en) * 2002-05-06 2011-01-06 Foundry Networks, Inc. Pipeline method and system for switching packets
US7813367B2 (en) 2002-05-06 2010-10-12 Foundry Networks, Inc. Pipeline method and system for switching packets
US7187687B1 (en) 2002-05-06 2007-03-06 Foundry Networks, Inc. Pipeline method and system for switching packets
US20100246588A1 (en) * 2002-05-06 2010-09-30 Foundry Networks, Inc. System architecture for very fast ethernet blade
US8989202B2 (en) 2002-05-06 2015-03-24 Foundry Networks, Llc Pipeline method and system for switching packets
US7649885B1 (en) * 2002-05-06 2010-01-19 Foundry Networks, Inc. Network routing system for enhanced efficiency and monitoring capability
US7830884B2 (en) 2002-05-06 2010-11-09 Foundry Networks, Llc Flexible method for processing data packets in a network routing system for enhanced efficiency and monitoring capability
US20090279548A1 (en) * 2002-05-06 2009-11-12 Foundry Networks, Inc. Pipeline method and system for switching packets
US8170044B2 (en) 2002-05-06 2012-05-01 Foundry Networks, Llc Pipeline method and system for switching packets
US7817540B1 (en) * 2002-05-08 2010-10-19 Cisco Technology, Inc. Method and apparatus for N+1 RF switch with passive working path and active protection path
US8718051B2 (en) 2003-05-15 2014-05-06 Foundry Networks, Llc System and method for high speed packet transmission
US20050175018A1 (en) * 2003-05-15 2005-08-11 Wong Yuen F. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US9461940B2 (en) 2003-05-15 2016-10-04 Foundry Networks, Llc System and method for high speed packet transmission
US8811390B2 (en) 2003-05-15 2014-08-19 Foundry Networks, Llc System and method for high speed packet transmission
US20050147121A1 (en) * 2003-12-29 2005-07-07 Gary Burrell Method and apparatus to double LAN service unit bandwidth
US7573898B2 (en) * 2003-12-29 2009-08-11 Fujitsu Limited Method and apparatus to double LAN service unit bandwidth
US9338100B2 (en) 2004-03-26 2016-05-10 Foundry Networks, Llc Method and apparatus for aggregating input data streams
US8493988B2 (en) 2004-03-26 2013-07-23 Foundry Networks, Llc Method and apparatus for aggregating input data streams
US20090279559A1 (en) * 2004-03-26 2009-11-12 Foundry Networks, Inc., A Delaware Corporation Method and apparatus for aggregating input data streams
US7817659B2 (en) 2004-03-26 2010-10-19 Foundry Networks, Llc Method and apparatus for aggregating input data streams
US20050226148A1 (en) * 2004-04-12 2005-10-13 Nortel Networks Limited Method and apparatus for enabling redundancy in a network element architecture
US8730961B1 (en) 2004-04-26 2014-05-20 Foundry Networks, Llc System and method for optimizing router lookup
US7657703B1 (en) 2004-10-29 2010-02-02 Foundry Networks, Inc. Double density content addressable memory (CAM) lookup scheme
US7953922B2 (en) 2004-10-29 2011-05-31 Foundry Networks, Llc Double density content addressable memory (CAM) lookup scheme
US20100100671A1 (en) * 2004-10-29 2010-04-22 Foundry Networks, Inc. Double density content addressable memory (cam) lookup scheme
US7953923B2 (en) 2004-10-29 2011-05-31 Foundry Networks, Llc Double density content addressable memory (CAM) lookup scheme
US20060160508A1 (en) * 2005-01-18 2006-07-20 Ibm Corporation Method and apparatus for scheduling wireless LAN traffic
US8117299B2 (en) * 2005-01-18 2012-02-14 Lenovo (Singapore) Pte. Ltd. Method and apparatus for scheduling wireless LAN traffic
US7809960B2 (en) 2005-10-12 2010-10-05 Cicchetti Christopher J Network tap device powered by power over ethernet
US7809476B2 (en) * 2005-10-12 2010-10-05 Cicchetti Christopher J Network tap/aggregator configured for power over ethernet operation
US20070081553A1 (en) * 2005-10-12 2007-04-12 Finisar Corporation Network tap device powered by power over ethernet
US20070081549A1 (en) * 2005-10-12 2007-04-12 Finisar Corporation Network tap/aggregator configured for power over ethernet operation
US20070110088A1 (en) * 2005-11-12 2007-05-17 Liquid Computing Corporation Methods and systems for scalable interconnect
WO2007144698A2 (en) * 2005-11-12 2007-12-21 Liquid Computing Corporation Methods and systems for scalable interconnect
WO2007144698A3 (en) * 2005-11-12 2008-07-10 Liquid Computing Corp Methods and systems for scalable interconnect
US7860034B2 (en) 2005-11-15 2010-12-28 Light Greta L Receive only physical interface device IC used in a passive network tap
US8027277B2 (en) 2005-11-15 2011-09-27 Jds Uniphase Corporation Passive network tap for tapping network data
US20080013467A1 (en) * 2005-11-15 2008-01-17 Finisar Corporation Passive Network Tap With Digital Signal Processing for Separating Signals
US7860033B2 (en) 2005-11-15 2010-12-28 Light Greta L Passive network tap with bidirectional coupler and associated splitting methods
US7787400B2 (en) 2005-11-15 2010-08-31 Light Greta L Passive network tap with digital signal processing for separating signals
US7778207B2 (en) 2005-11-15 2010-08-17 Light Greta L Passive tap and associated system for tapping network data
US20070253349A1 (en) * 2005-11-15 2007-11-01 Finisar Corporation Passive Network Tap With Bidirectional Coupler and Associated Splitting Methods
US20070174492A1 (en) * 2005-11-15 2007-07-26 Light Greta L Passive network tap for tapping network data
US20080014879A1 (en) * 2005-11-15 2008-01-17 Finisar Corporation Receive Only Physical Interface Device IC Used In A Passive Network Tap
US20070171966A1 (en) * 2005-11-15 2007-07-26 Light Greta L Passive tap and associated system for tapping network data
US20100091664A1 (en) * 2005-12-27 2010-04-15 Biswajit Nandy Real-time network analyzer
US20070147271A1 (en) * 2005-12-27 2007-06-28 Biswajit Nandy Real-time network analyzer
US7636318B2 (en) * 2005-12-27 2009-12-22 Solana Networks Inc. Real-time network analyzer
US8737235B2 (en) 2005-12-27 2014-05-27 Cavesson Software Llc Real-time network analyzer
US9378005B2 (en) 2005-12-28 2016-06-28 Foundry Networks, Llc Hitless software upgrades
US8448162B2 (en) 2005-12-28 2013-05-21 Foundry Networks, Llc Hitless software upgrades
US20070288690A1 (en) * 2006-06-13 2007-12-13 Foundry Networks, Inc. High bandwidth, high capacity look-up table implementation in dynamic random access memory
US7903654B2 (en) 2006-08-22 2011-03-08 Foundry Networks, Llc System and method for ECMP load sharing
US20080049742A1 (en) * 2006-08-22 2008-02-28 Deepak Bansal System and method for ecmp load sharing
US20110044340A1 (en) * 2006-08-22 2011-02-24 Foundry Networks, Llc System and method for ecmp load sharing
US20080095064A1 (en) * 2006-10-19 2008-04-24 Angel Molina Method and apparatus for improved non-intrusive monitoring functions
US8000321B2 (en) * 2006-10-19 2011-08-16 Alcatel Lucent Method and apparatus for improved non-intrusive monitoring functions
US8238255B2 (en) 2006-11-22 2012-08-07 Foundry Networks, Llc Recovering from failures without impact on data traffic in a shared bus architecture
US9030943B2 (en) 2006-11-22 2015-05-12 Foundry Networks, Llc Recovering from failures without impact on data traffic in a shared bus architecture
US20090279423A1 (en) * 2006-11-22 2009-11-12 Foundry Networks, Inc. Recovering from Failures Without Impact on Data Traffic in a Shared Bus Architecture
US9112780B2 (en) 2007-01-11 2015-08-18 Foundry Networks, Llc Techniques for processing incoming failure detection protocol packets
US20090279441A1 (en) * 2007-01-11 2009-11-12 Foundry Networks, Inc. Techniques for transmitting failure detection protocol packets
US8395996B2 (en) 2007-01-11 2013-03-12 Foundry Networks, Llc Techniques for processing incoming failure detection protocol packets
US20090279541A1 (en) * 2007-01-11 2009-11-12 Foundry Networks, Inc. Techniques for detecting non-receipt of fault detection protocol packets
US8155011B2 (en) 2007-01-11 2012-04-10 Foundry Networks, Llc Techniques for using dual memory structures for processing failure detection protocol packets
US20090279440A1 (en) * 2007-01-11 2009-11-12 Foundry Networks, Inc. Techniques for processing incoming failure detection protocol packets
US7978614B2 (en) 2007-01-11 2011-07-12 Foundry Networks, LLC Techniques for detecting non-receipt of fault detection protocol packets
US20090282322A1 (en) * 2007-07-18 2009-11-12 Foundry Networks, Inc. Techniques for segmented crc design in high speed networks
US8037399B2 (en) 2007-07-18 2011-10-11 Foundry Networks, Llc Techniques for segmented CRC design in high speed networks
US8271859B2 (en) 2007-07-18 2012-09-18 Foundry Networks Llc Segmented CRC design in high speed networks
US20090282148A1 (en) * 2007-07-18 2009-11-12 Foundry Networks, Inc. Segmented crc design in high speed networks
US8509236B2 (en) 2007-09-26 2013-08-13 Foundry Networks, Llc Techniques for selecting paths and/or trunk ports for forwarding traffic flows
US8149839B1 (en) 2007-09-26 2012-04-03 Foundry Networks, Llc Selection of trunk ports and paths using rotation
US8190881B2 (en) 2007-10-15 2012-05-29 Foundry Networks Llc Scalable distributed web-based authentication
US8667268B2 (en) 2007-10-15 2014-03-04 Foundry Networks, Llc Scalable distributed web-based authentication
US8799645B2 (en) 2007-10-15 2014-08-05 Foundry Networks, LLC. Scalable distributed web-based authentication
US8090901B2 (en) 2009-05-14 2012-01-03 Brocade Communications Systems, Inc. TCAM management approach that minimize movements
US8599850B2 (en) 2009-09-21 2013-12-03 Brocade Communications Systems, Inc. Provisioning single or multistage networks using ethernet service instances (ESIs)
US9166818B2 (en) 2009-09-21 2015-10-20 Brocade Communications Systems, Inc. Provisioning single or multistage networks using ethernet service instances (ESIs)
US20120295674A1 (en) * 2009-11-17 2012-11-22 By Light Professional IT Services Systems, methods and devices for convergent communications
US20140032748A1 (en) * 2012-07-25 2014-01-30 Niksun, Inc. Configurable network monitoring methods, systems, and apparatus
US8935568B2 (en) 2012-07-27 2015-01-13 Dell Products, Lp System and method of replicating virtual machines for live migration between data centers
US9104645B2 (en) 2012-07-27 2015-08-11 Dell Products, Lp System and method of replicating virtual machines for live migration between data centers
US20140371883A1 (en) * 2013-06-13 2014-12-18 Dell Products L.P. System and method for switch management
US9477276B2 (en) * 2013-06-13 2016-10-25 Dell Products L.P. System and method for switch management
US10318315B2 (en) 2013-06-13 2019-06-11 Dell Products L.P. System and method for switch management
US20210364311A1 (en) * 2018-09-06 2021-11-25 Google Llc Displaying Personalized Landmarks in a Mapping Application
US11821747B2 (en) * 2018-09-06 2023-11-21 Google Llc Displaying personalized landmarks in a mapping application
CN111065005A (en) * 2019-12-31 2020-04-24 汇智智能科技有限公司 Intelligent Internet of things edge gateway
US20220038171A1 (en) * 2020-07-28 2022-02-03 Atc Technologies, Llc Single Frequency Network (SFN) for Broadcast/Multicast application on a Spotbeam Satellite
US11909503B2 (en) * 2020-07-28 2024-02-20 Atc Technologies, Llc Single frequency network (SFN) for broadcast/multicast application on a spotbeam satellite

Also Published As

Publication number Publication date
US6925052B1 (en) 2005-08-02

Similar Documents

Publication Publication Date Title
US6925052B1 (en) Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment
US7766692B2 (en) Cable interconnect systems with cable connectors implementing storage devices
EP1139674B1 (en) Signaling server
US7406038B1 (en) System and method for expansion of computer network switching system without disruption thereof
US7453870B2 (en) Backplane for switch fabric
US6678268B1 (en) Multi-interface point-to-point switching system (MIPPSS) with rapid fault recovery capability
US20110262135A1 (en) Method and apparatus for increasing overall aggregate capacity of a network
CA2357944A1 (en) Multi-subshelf control system and method for a network element
Banwell et al. Physical design issues for very large ATM switching systems
GB2354883A (en) Enclosure for multi-processor equipment
GB2354905A (en) Network monitoring
Cisco Multiprotocol FastPAD Frame Relay Access Products
Cisco Product Overview
Cisco Product Overview
Cisco General Description
Cisco Product Overview
Cisco General Description
Cisco General Description
Cisco General Description
Cisco General Description
Cisco General Description
Cisco Hardware Description
Cisco Cisco AccessPath-TS3 Model 531 Product Overview
Cisco Hardware Description
Cisco General Description

Legal Events

Date Code Title Description
AS Assignment

Owner name: AGILENT TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT BY OPERATION OF LAW;ASSIGNORS:HEWLETT-PACKARD LIMITED;REYNOLDS, ALASTAIR;CARSON, DOUGLAS JOHN;AND OTHERS;REEL/FRAME:015658/0814

Effective date: 20040424

AS Assignment

Owner name: AGILENT TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HEWLETT-PACKARD LIMITED;REYNOLDS, ALASTAIR;CARSON, DOUGLAS JOHN;AND OTHERS;REEL/FRAME:016897/0584

Effective date: 20010424

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION