CA1267226A - Fault recovery in a distributed processing system - Google Patents

Fault recovery in a distributed processing system

Info

Publication number
CA1267226A
Authority
CA
Canada
Prior art keywords
processors
processor
logical
heartbeat messages
identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA000526244A
Other languages
French (fr)
Inventor
James Edward Vandendorpe
Donald Walter Brown
James William Leth
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc
Application granted
Publication of CA1267226A
Anticipated expiration
Expired - Fee Related (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/006 - Identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2023 - Failover techniques
    • G06F11/2028 - Failover techniques eliminating a faulty processor or activating a spare
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q3/00 - Selecting arrangements
    • H04Q3/42 - Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker
    • H04Q3/54 - Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised
    • H04Q3/545 - Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised using a stored programme
    • H04Q3/54541 - Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised using a stored programme using multi-processor systems
    • H04Q3/5455 - Multi-processor, parallelism, distributed systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q3/00 - Selecting arrangements
    • H04Q3/42 - Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker
    • H04Q3/54 - Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised
    • H04Q3/545 - Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised using a stored programme
    • H04Q3/54575 - Software application
    • H04Q3/54591 - Supervision, e.g. fault localisation, traffic measurements, avoiding errors, failure recovery, monitoring, statistical analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 - Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0751 - Error or fault detection not based on redundancy
    • G06F11/0754 - Error or fault detection not based on redundancy by exceeding limits
    • G06F11/0757 - Error or fault detection not based on redundancy by exceeding limits by exceeding a time limit, i.e. time-out, e.g. watchdogs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 - Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0751 - Error or fault detection not based on redundancy
    • G06F11/0763 - Error or fault detection not based on redundancy by bit configuration check, e.g. of formats or tags
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2038 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2046 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share persistent storage

Abstract

A fault recovery method for a distributed processing system in which a message called a heartbeat is broadcast among the processors once during each major processing cycle. The heartbeat message indicates the physical (PMP) and logical (LMP) identity of the transmitting processor with respect to the system arrangement, as well as the processor's present operational state. By monitoring the heartbeats from other processors, spare processors can autonomously take over the functions of failed processors without being required to consult or obtain the approval of an executive processor. The new physical location (PMP) of a replaced processor is automatically recorded by the other processors. The method applies to duplex standby and resource pool configurations as well as to sparing arrangements.

Description

FAULT RECOVERY IN A DISTRIBUTED PROCESSING SYSTEM

Technical Field
This invention relates to fault recovery in multiple-processor systems and, more specifically, to recovery mechanisms for such systems where the processors continually monitor heartbeats from the other processors and each processor is capable of taking autonomous recovery action in response to a failure to receive heartbeats, advantageously without the overall guidance of an executive processor.
Background of the Invention
Even though the trend toward distributed processing has been a factor of increasing significance in system design from the initial development of the microprocessor, most current distributed systems employ centralized maintenance and configuration control.
A typical system is the distributed signal processing system disclosed in U.S. Patent 4,412,281, in which redundant elements comprising signal processors, mass memories and input-output controllers are interconnected by redundant busses. One signal processor element in the system is initially designated as the executive and assigns processing tasks from a mass memory to other elements. When a failure is detected, the executive verifies the failure, isolates the faulty element and reassigns the task to another spare element. If another element is not available, the executive reconfigures the system to permit degraded operation using the available elements. The executive element is fault monitored by one of the other elements, which is capable of assuming the role of executive as required. The individual elements are addressed by the executive using a virtual addressing technique for each element.


In such an "executive-controlled" system, the executive processing is very complex because the executive is required to track the operational status of all other elements of the system as well as typically "approving" any software changes to be implemented in the other system elements. In addition, the recovery algorithm implemented by the executive frequently depends in a fixed way on the exact topology of the system. As such, system configuration changes generally result in substantial and time-consuming modifications of the executable recovery code used by the executive.
In view of the foregoing, two recognized problems in the art are the complexity and inflexibility that result when an executive processor controls fault recovery in an otherwise distributed processing system.
Summary of the Invention
The aforementioned problems are solved and a technical advance is achieved in accordance with the principles of the invention in an illustrative distributed processing system where the responsibility for fault recovery is advantageously distributed among the system processors rather than being controlled by an executive processor. Each processor continually monitors heartbeat messages broadcast from the other processors, and a processor responds to a failure to receive heartbeat messages from another processor by autonomously assuming the functions and logical system identity of the failed processor, significantly without consulting or obtaining the approval of an executive processor, but rather by simply reading local sparing tables defining the sparing relationships among the various system processors. All processors are automatically notified of the change when the processor assumes its new logical identity and begins broadcasting heartbeat messages defining such new identity.


An illustrative fault recovery method in accordance with the invention is used in an exemplary distributed processing arrangement including a number of processors, each having a logical identity defining the functions performed by that processor with respect to the arrangement. According to the method, each processor repeatedly broadcasts heartbeat messages to other processors. Each such heartbeat message defines the logical identity of the processor broadcasting the heartbeat message. At least one of the processors maintains a status table defining the logical identities of other processors based on received heartbeat messages. Upon failing to receive heartbeat messages defining one of the logical identities defined in the status table, the at least one processor assumes such logical identity for performing the functions of a processor having that logical identity.
Each of the processors also has an associated sparing table defining the logical identities that the processor can assume. A processor assumes the logical identity of another processor only after reading the sparing table and determining that such assumption of logical identity is allowed.
The exemplary distributed processing arrangement also includes a database processor. In order for one processor to assume the logical identity of another processor, communication with the database processor must be effected and the information defining the functions of the other processor must be downloaded by the database processor.
In addition to defining logical identity, the heartbeat messages also define the physical identity and present processor state of the processor broadcasting the heartbeat message. A processor assuming a new logical identity repeatedly broadcasts heartbeat messages defining both its unchanged physical identity and the assumed logical identity to other processors. Thus the other processors are automatically informed of the change of logical identity.
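Concretely, a heartbeat message needs to carry only these three fields. The following sketch, in C (the language the detailed description later names for the module programs), shows one plausible layout; the struct and field names are illustrative assumptions, as the patent does not specify a wire format.

    #include <stdint.h>

    /* The four processor states named in the detailed description. */
    enum proc_state {
        PS_ACTIVE,
        PS_STANDBY,
        PS_OUT_OF_SERVICE,
        PS_UNEQUIPPED
    };

    /* Hypothetical layout of a HeartBeat.Pulse message: the text
     * specifies only that it defines the sender's logical identity
     * (LMP), physical identity (PMP), and present processor state. */
    struct heartbeat_msg {
        uint16_t lmp;    /* logical identity of the sender */
        uint16_t pmp;    /* physical identity of the sender */
        uint8_t  state;  /* one of enum proc_state */
    };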
The fault recovery method of the present invention is applicable when two of the processors operate in a duplex standby mode of operation.
The invention is further applicable in an arrangement comprising a resource pool of processors and at least one other processor that needs to select a processor from the resource pool. Each of the pool of processors repeatedly transmits heartbeat messages to the other processor. The heartbeat messages each define the present processor state of the processor transmitting the heartbeat message. The other processor maintains a status table, based on the received heartbeat messages, defining the present processor state of each processor in the pool. The other processor makes its selection of a processor from the resource pool based on the processor state defined by the status table. In particular, the other processor selects a processor that is defined by the status table as being in an active processor state. When a given processor of the resource pool terminates the transmission of heartbeat messages, the other processor changes its status table to define the present state of the given processor as being out-of-service.
In an alternative embodiment of the invention, the processors can have multiple logical identities. Accordingly, a given processor can maintain its original logical identity and perform its original functions and, in addition, assume the functions and logical identity of a failed processor.
In accordance with one aspect of the invention there is provided, in an arrangement comprising a plurality of processors interconnected for message communication and each having a logical identity defining functions performed by that processor with respect to said arrangement, a fault recovery method comprising each of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define the logical identity of the processor broadcasting the heartbeat message, at least one of said processors maintaining an associated status table defining the logical identities of others of said processors based on heartbeat messages received therefrom, and said at least one of said processors, upon failing to receive heartbeat messages defining one of said logical identities defined in said status table, initiating performance of the functions defined by said one of said logical identities.
In accordance with another aspect of the invention there is provided a distributed processing arrangement comprising a plurality of processors interconnected for message communication and each having a logical identity defining functions performed by that processor with respect to said arrangement, wherein each of said processors comprises means for repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define the logical identity of said each processor, and wherein at least one of said processors further comprises means for maintaining a status table defining the logical identities of others of said processors based on heartbeat messages received therefrom, and means responsive to a failure to receive heartbeat messages defining one of said logical identities defined in said status table, for initiating performance of the functions defined by said one of said logical identities.
Brief Description of the Drawing
A more complete understanding of the invention may be obtained from a consideration of the following description when read in conjunction with the drawing in which:


FIGS. 1 through 3, when arranged in accordance with FIG. 4, present a block diagram of an exemplary distributed processing arrangement used to illustrate the fault recovery method of the present invention;
FIG. 5 is a state diagram showing the four possible processor states for the processors in the arrangement of FIGS. 1 through 3 and the transitions between states;
FIG. 6 illustrates the software structure of a call control processor included in the arrangement of FIGS. 1 through 3;
FIG. 7 illustrates the software structure of a database processor included in the arrangement of FIGS. 1 through 3;
FIG. 8 is a message sequence diagram defining the messages that are transmitted between a database processor and a standby call control processor in the arrangement of FIGS. 1 through 3 during the process of the standby call control processor assuming the role of a failed active call control processor;
FIG. 9 is a message sequence diagram defining the messages that are transmitted between a database processor and a previously unequipped call control processor in the arrangement of FIGS. 1 through 3 during the process of equipping the call control processor for service as a standby processor;
FIGS. 10 through 15 are various tables important in implementing the fault recovery mechanism of the present invention;
FIG. 16 is a block diagram representing an addition to the arrangement of FIGS. 1 through 3 comprising a resource pool of trunk processors connected to a remote central office;
FIGS. 17 and 18 are tables related to the resource pool of FIG. 16; and


FIGS. 19 through 33 are flow charts defining the programs of various software modules shown in FIGS. 6 and 7.
Detailed Description
FIGS. 1 through 3, arranged in accordance with FIG. 4, present a block diagram of an exemplary distributed processing arrangement 100 in accordance with the invention. Arrangement 100 illustratively comprises a distributed control switching system and includes a plurality of processor modules, referred to herein simply as processors. The processors are interconnected by a processor interconnect mechanism 10, illustratively a CSMA/CD bus, such as the Ethernet bus, capable of both selective and broadcast transmission of packets to the other processors connected thereto. Four types of processors are shown in the exemplary arrangement 100 as being interconnected by bus 10: 1) line processors such as processors 21 through 24 serving corresponding user teleterminals 11 through 14 (although such line processors could also each serve a multiplicity of user teleterminals), 2) call control processors such as processors 31 through 37 used to direct the various phases of call processing in a switching system, 3) billing processors such as processors 41 through 45, each connected to an associated tape device 51 through 55 used for the storage of billing information related to telephone calls in a switching system, and 4) database processors such as processors 61 and 62, both connected to an office configuration database 60, illustratively a disc.
Database processors 61 and 62 are used by the various other processors of arrangement 100 to obtain programs and data from database 60 to define the various processor functions required in a distributed control switching system.


Bus 10 is used both to convey the packetized voice and data between the user teleterminals, e.g., 11 through 14, and also to convey the inter-processor control messages required to control calls. For example, if line processor 21 detects a service request and receives a destination address or directory number for a call from user teleterminal 11, a number of control messages are exchanged between line processor 21 and ones of the call control processors 31, 32, and 33 to translate the directory number to determine the desired destination and to conduct various call-related activities concerning the originating end of the call.
Further control messages are then conveyed between ones of the call control processors 31, 32 and 33 and line processor 24 to establish the terminating portion of the call if the determined destination is user teleterminal 14. Such activities include the assignment of a logical channel, for example, to be used to convey packetized voice for the call, and storing the appropriate physical-to-logical translations required by the originating and terminating line processors to establish a virtual circuit. In addition, message communication is required with billing processor 41 to establish the necessary call records in order to properly bill the call. Further message communication is required at call termination.
Each processor of arrangement 100 has a physical identity (PMP) defining the physical location and identification of the processor with respect to arrangement 100. Each processor also has a logical identity (LMP) defining the logical function to be performed by that processor with respect to arrangement 100. The LMP and PMP for various processors of a representative processor configuration are shown in FIGS. 2 and 3 in the lower left corner of the processors. For example, billing processor 41 is indicated to have a logical identity (LMP) of 1 and a physical identity (PMP) of 425. Although not shown in FIG. 1, line processors 21 through 24 also have associated logical and physical identities for communicating with the other processors of arrangement 100.
Each processor of arrangement 100 is in one of four possible states: 1) active, indicating that the processor is running application programs, 2) standby, indicating that the processor is ready to replace a failed active processor, 3) out-of-service, indicating that the processor is available for maintenance or diagnostics, and 4) unequipped, indicating that the processor has been halted or physically disconnected.
Processor state transitions are cyclic. Changes from one state to another are either automatic, as a consequence of a hardware failure, or manual. The state transition diagram shown in FIG. 5 illustrates how the changes occur.
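Since FIG. 5 is not reproduced in this text, its exact edge set has to be inferred. The C sketch below encodes one plausible cyclic progression, assuming the order unequipped to out-of-service to standby to active suggested by the equipping scenario described later, with failures or manual action moving a processor back; it is an illustration, not the patent's diagram.

    #include <stdbool.h>

    enum proc_state { PS_UNEQUIPPED, PS_OUT_OF_SERVICE, PS_STANDBY, PS_ACTIVE };

    /* Assumed edge set for the cyclic state diagram of FIG. 5:
     * forward transitions follow the equip/restore scenarios in the
     * text; backward transitions model hardware failure or manual
     * removal. */
    static bool transition_allowed(enum proc_state from, enum proc_state to)
    {
        switch (from) {
        case PS_UNEQUIPPED:     return to == PS_OUT_OF_SERVICE;
        case PS_OUT_OF_SERVICE: return to == PS_STANDBY || to == PS_UNEQUIPPED;
        case PS_STANDBY:        return to == PS_ACTIVE || to == PS_OUT_OF_SERVICE;
        case PS_ACTIVE:         return to == PS_UNEQUIPPED || to == PS_OUT_OF_SERVICE;
        }
        return false;
    }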
In the exemplary arrangement 100, the line processors 21 through 24 are all in the active state, and each forms a distinct failure group because the required individual circuits and connections to the user teleterminals are not duplicated. Line processor 21 comprises a processor 21-1 and associated memory 21-4 (including, for example, both random access memory (RAM) and read only memory (ROM) as required). A direct memory access (DMA) unit 21-2 is included as a means of conveniently reading memory 21-4. Line processor 21 further includes a codec 21-6 which converts information from the analog speech format presented from the handset of user teleterminal 11 to the conventional 64 kilobits per second PCM speech representation. Such encoded speech information, as well as digital information received directly from the keyboard or switchhook of user teleterminal 11, is transmitted to bus 10, by the cooperation of processor 21-1 and memory 21-4, in the form of packets via a bus interface 21-3 (of conventional design for interfacing bus 10). The various units comprising line processor 21 are interconnected by an internal bus 21-5 in a conventional manner. Packets received from bus 10, including headers defining that the packets are to be received by processor 21, are transmitted via bus interface 21-3 for storage by memory 21-4. Such packets are further processed and transmitted either as encoded 64 kilobits per second speech samples to codec 21-6, for subsequent transmission in analog form to the handset of user teleterminal 11, or as digital information for display by user teleterminal 11 or to control other elements, e.g., a ringer, of user teleterminal 11. Each of the other line processors 22 through 24 is of similar construction. (Of course, if the user teleterminals are equipped to directly handle speech in the digital, 64 kilobits per second PCM format, the codecs, such as codec 21-6 in line processor 21, are not required.)
Of the seven call control processors 31 through 37 included in arrangement 100, three, 31 through 33, are in the active state; three, 35 through 37, are in the standby state; and one, 34, is in the unequipped state. Each call control processor has the same general architecture as line processor 21 except that the connections and elements needed in line processor 21 to interface with user teleterminal 11 are not required in the call control processors. Call control processor 31, for example, includes processor 31-1, memory 31-4, DMA 31-2, bus interface 31-3, and internal bus 31-5.
Of the five billing processors 41 through 45 included in arrangement 100, one, 41, is in the active state; two, 44 and 45, are in the standby state; one, 43, is in the out-of-service state; and one, 42, is in the unequipped state. Each billing processor is similar in construction to the line and call control processors except that a billing processor must be interfaced with its associated tape unit. Billing processor 41, for example, includes processor 41-1, memory 41-4, DMA 41-2, bus interface 41-3, and internal bus 41-5, and is interfaced to associated tape unit 51. Billing processors 42 through 45 have associated tape units 52 through 55.
Of the two database processors 61 and 62, processor 62 is in the active state and processor 61 is in the standby state. The database processors are similar in construction to the processors of other types already described except that both database processors 61 and 62 are connected to the same office configuration database 60. Database processor 61, for example, includes processor 61-1, memory 61-4, DMA 61-2, bus interface 61-3, and internal bus 61-5, and is interfaced to the office configuration database 60. (For reliability reasons, it may be desirable to duplicate database 60 and provide access from each of the database processors 61 and 62 to the duplicate database.)
The software structure of each of the types of processors comprising arrangement 100 consists of a plurality of software modules, e.g., program processes or software objects, that communicate with each other via control messages. A program process comprises a collection of procedures, each performing some subtask of the process. Associated with a process is a block of memory called a process control block, which stores data applicable to the entire process, and a block of memory called a stack, which stores data useful to the individual procedures of the process. A software object is an instance of an abstract data type comprising an aggregate of logically related data and functions. The present invention is independent of the implementation technique used for the software modules.
Typical of the software structure of the various processors is that shown in FIG. 6 for call control processor 31. A processor communication module 31-10 includes the programs necessary for the other software modules of processor 31 to convey information to and from bus 10 for either broadcast or selective communication with other processors. Processor communication module 31-10 is the only software module that addresses other processors using their physical identity (PMP) rather than their logical identity (LMP).
Processor 31 further includes a cold start module 31-20 used for the initialization of processor 31, and further includes a heartbeat module 31-30 which includes the programs necessary for generating and responding to heartbeat messages conveyed among the processors of arrangement 100 in accordance with the invention. A timer module 31-40 is used to control the periodic activities such as heartbeat generation and checking within heartbeat module 31-30. Processor communication module 31-10 and heartbeat module 31-30 both have access to data stored in a data structure 31-50. Such data includes data tables referred to herein as an LMPstatusPMP table and a SpareGroup table. Examples of such tables are shown in FIGS. 10 through 13, and descriptions of their use in accordance with the invention are included herein. The software modules shown in FIG. 6 for call control processor 31 are only the basic modules required for processor operation.
Other application-dependent modules required for call control processor 31 to perform its specific functions as part of a distributed control switching system are not shown in FIG. 6.
The basic software modules shown in FIG. 6 for call control processor 31 are typical of the software modules included in the line processors, e.g., processor 21, the billing processors, e.g., processor 41, as well as the database processors, e.g., processor 62. However, as shown in FIG. 7, database processor 62 further includes, in addition to the corresponding modules 62-10, 62-20, 62-30, 62-40 and data structure 62-50, a pump up module 62-60 which accesses the data of office configuration database 60 to, among other things, pump up or download other processors with the programs required to perform specific functions. Pump up module 62-60 also accesses a PMPtoFMP table (FIG. 14) and an LMPconfig table (FIG. 15) stored in database 60 and used in accordance with the invention in a manner described herein.
Sparing Arrangement
Central to the present invention, each processor in arrangement 100 periodically, e.g., at 0.2 second intervals, broadcasts a heartbeat message on bus 10. The heartbeat message defines the physical identity (PMP), logical identity (LMP), and processor state (active, standby or out-of-service) of the transmitting processor. Each processor receives each heartbeat message transmitted by the other processors and uses the information contained therein to update its LMPstatusPMP table (FIG. 10), by modifying the appropriate LMP, PMP, or Actual State entries in the table, as well as to reset a "leaky bucket" counter. In the present example, the "leaky bucket" counter is represented by the Pulse Count entry in the LMPstatusPMP table, and the Pulse Count entry is reset to five when a heartbeat message is received. The Pulse Count entry in the LMPstatusPMP table is decremented at regular intervals. When a processor either fails or for some other reason chooses to stop the transmission of its heartbeat messages, the Pulse Count entry in the LMPstatusPMP tables of all other processors is, over time, decremented to zero. In response to the Pulse Count entry being reduced to zero, each of the other processors accesses its SpareGroup table to determine whether that processor is defined as a spare processor for the failed processor. If the SpareGroup table of a given processor defines that it is a spare processor for the failed processor, the given processor assumes the logical identity of the failed processor in a manner described in detail herein.
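The two tables can be pictured as follows. In this C sketch the column names (PMP, Pulse Count, Actual State) and the reset value of five come from the text, while the table size and the SpareGroup representation are assumptions made for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_LMP     64  /* illustrative table size */
    #define PULSE_RESET  5  /* reset value given in the text */

    enum proc_state { PS_ACTIVE, PS_STANDBY, PS_OUT_OF_SERVICE, PS_UNEQUIPPED };

    /* One row of the LMPstatusPMP table (FIG. 10), indexed by LMP. */
    struct lmp_status {
        uint16_t        pmp;          /* last reported physical identity */
        int             pulse_count;  /* "leaky bucket" counter */
        enum proc_state actual_state;
    };

    /* SpareGroup table (FIGS. 11-13): which LMPs this processor may
     * spare for, represented here as a membership flag per LMP. */
    static struct lmp_status lmp_status_pmp[MAX_LMP];
    static bool can_spare_for[MAX_LMP];

    /* Invoked when the Pulse Count for some LMP has drained to zero. */
    static bool should_assume_identity(int failed_lmp)
    {
        return can_spare_for[failed_lmp];
    }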


Consider as an example that call control processors 31, 32, and 33 of arrangement 100 are in the active state. The SpareGroup tables for each of the processors 31 through 33 define, as in FIG. 11, that those processors are not defined as spare processors for any other processor. Additionally assume that call control processor 37, which is in the standby state, is designated as the primary spare call control processor. This designation as the primary spare is reflected in the SpareGroup table (FIG. 12) stored by call control processor 37 (LMP 3), where call control processor 37 is designated as a spare processor for call control processor 31 (LMP 10), call control processor 32 (LMP 9), as well as call control processor 33 (LMP 5). Finally, assume that call control processor 36, which is also in the standby state, is designated as the secondary spare call control processor. This designation is reflected in the SpareGroup table (FIG. 13) stored by call control processor 36 (LMP 4) in that call control processor 36 is defined in that table as the spare for only the primary spare call control processor 37 (LMP 3) and no other processors.
Continuing the example, assume that call control processor 31 (LMP 10) fails and subsequently discontinues its periodic transmission of heartbeat messages on bus 10. The Pulse Count entry in the LMPstatusPMP tables of each of the processors in arrangement 100 subsequently is reduced to zero. However, only the primary spare call control processor 37 determines, by reading its SpareGroup table, that it is the designated spare processor for the failed processor 31. Accordingly, processor 37 assumes the logical identity of the failed processor 31.
The control messages required between the various software modules of call control processor 37 and database processor 62, in order that processor 37 can assume the functions of the failed processor 31, are denoted in FIG. 8 by the characters a through j. The control message (a) represents the final pulse decrementing the Pulse Count entry in the LMPstatusPMP table of processor 37 to zero, thereby triggering the reading of the SpareGroup table and the determination that processor 37 is to assume the logical identity of processor 31. Heartbeat module 37-30 transmits a PumpUp.RestoreLMP message (b), also referred to herein as a download request message, to the pump up module 62-60 in database processor 62 indicating that processor 37 is to be downloaded with the base programs for processor 31. Heartbeat module 37-30 also continues to broadcast its HeartBeat.Pulse message (c) defining its logical identity as LMP 3. Heartbeat module 37-30 further transmits a message (d) to timer module 37-40 to reset the timer. In response to the PumpUp.RestoreLMP message (b), pump up module 62-60 in database processor 62 transmits a ColdStart.RestoreLMP(S-record) message (e) to cold start module 37-20 in call control processor 37. The S-record is a standard unit of downloaded information comprising both program text and data. Of course, in the present example, the downloaded information comprises the programs and the initial data required to perform the functions previously performed by the failed call control processor 31. Upon the successful receipt of an S-record, cold start module 37-20 returns an acknowledgment message (f) to pump up module 62-60. The process of downloading S-records and subsequently acknowledging their receipt by cold start module 37-20 continues until all the program text and data required by call control processor 37 to function as processor 31 has been transferred.
Once the downloading is complete, pump up module 62-60 transmits a ColdStart.ChangeIdentity message (g) to cold start module 37-20. The ColdStart.ChangeIdentity message includes information defining the new logical identity LMP 10, the required active state of processor 37, and the SpareGroup table for LMP 10, which defines that processor 37 is not a spare for any other processor. In response to the ColdStart.ChangeIdentity message, cold start module 37-20 transmits a HeartBeat.Start message (h) to heartbeat module 37-30 defining the new logical identity LMP 10, the unchanged physical identity PMP 722, as well as the required active state of processor 37. In response to the HeartBeat.Start message, heartbeat module 37-30 broadcasts (via the processor communication module) a heartbeat message (i) defining the new logical identity LMP 10 of call control processor 37 to the other processors. Heartbeat module 37-30 then transmits a reset message (j) to timer module 37-40, and timer module 37-40 begins timing another cycle.
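The download phase of this exchange (messages b, e, and f) reduces to a request/acknowledge loop. Below is a hedged C sketch of the spare's side; the message-passing helpers are hypothetical stand-ins for the processor communication module, and the last-record flag is an assumption, since the text leaves the termination signal to the database processor.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical S-record container: one unit of downloaded program
     * text and data, per the description. */
    struct s_record {
        size_t        len;
        unsigned char payload[256];
        bool          last;   /* assumed end-of-download marker */
    };

    /* Hypothetical stand-ins for the processor communication module. */
    extern void send_restore_request(int my_lmp, int failed_lmp); /* PumpUp.RestoreLMP    */
    extern void wait_for_s_record(struct s_record *rec);          /* ColdStart.RestoreLMP */
    extern void load_into_memory(const struct s_record *rec);
    extern void ack_s_record(int my_lmp);                         /* PumpUp.AckSrecord    */

    /* Spare-side download loop: request the failed LMP's image, then
     * load and acknowledge S-records until the image is complete.
     * The database processor then sends ColdStart.ChangeIdentity. */
    void restore_lmp(int my_lmp, int failed_lmp)
    {
        struct s_record rec;

        send_restore_request(my_lmp, failed_lmp);
        do {
            wait_for_s_record(&rec);
            load_into_memory(&rec);
            ack_s_record(my_lmp);
        } while (!rec.last);
    }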
Recall that call control processor 36 was designated as the secondary spare call control processor in arrangement 100. Since call control processor 37 is now transmitting heartbeat messages defining LMP 10 rather than LMP 3, the Pulse Count entry for LMP 3 in the LMPstatusPMP table of processor 36 will be decremented to zero. Processor 36 will therefore read its SpareGroup table and will determine that it is designated as the spare processor for LMP 3. This determination will trigger a sequence of events similar to that just described in order that processor 36 can assume the logical identity LMP 3 as the new primary spare call control processor. Note that the downloading of S-records may be substantially reduced compared to the previous scenario, since processor 36 will only be functioning as a spare processor whereas processor 37 assumed the role of an active processor. If call control processor 35 were designated as a tertiary spare, the process would repeat again as processor 35 assumed the new logical identity LMP 4.

In the present example, the same sparing strategy is used for the billing processors. Billing processor 41 is designated as the active billing processor with processors 45 and 44 designated as the primary and secondary spares, respectively.
Duplex Standby Operation
The database processors 61 and 62, on the other hand, operate in a duplex standby mode, with processor 62 designated as the active database processor but with processor 61 ready to become the active processor whenever needed. Processor 61 already has stored therein the program text and data necessary to function as an active database processor. In duplex standby operation, processor 61 monitors all messages received by processor 62 and reproduces the computations of processor 62. Assume that active database processor 62 fails and terminates the transmission of heartbeat messages. The standby database processor 61 will detect this condition (by having the Pulse Count entry for LMP 13 in its LMPstatusPMP table go to zero) and will subsequently assume the role of the active database processor, i.e., by assuming the logical identity LMP 13. The other processors of arrangement 100 will receive heartbeat messages from the now active database processor 61 and will record the new logical identity (LMP 13) and the active status in their LMPstatusPMP table entries for processor 61 (PMP 877).
Adding a New Processor
As a continuation of the previous example concerning the call control processors, assume that, because of the failure of call control processor 31, it now becomes necessary for reliability reasons to add or "equip" call control processor 34 to become operational in arrangement 100. The scenario required to equip processor 34 is shown in FIG. 9, with the messages or actions relating to the software modules of call control processor 34 and database processor 62 (again assumed to be the active database processor) denoted by the letters A through S. First, the hardware reset button of processor 34 is manually pressed (A) and an interrupt-driven ColdStart.Initialize procedure is invoked. Cold start module 34-20 transmits a message (B) to heartbeat module 34-30, which responds by invoking its HeartBeat.Initialize routine enabling the reception of HeartBeat.Pulse messages from other processors. The periodic broadcast of heartbeat messages from processor 34 is not yet enabled. Cold start module 34-20 then broadcasts a PumpUp.ColdStart message (C) defining the physical identity of processor 34, PMP 312. Although the message is broadcast, only pump up module 62-60 accepts the message, and it uses the PMPtoFMP (FIG. 14) and LMPconfig (FIG. 15) tables stored in office configuration database 60 to determine the logical identity or LMP that call control processor 34 is to assume. The replaceable processors of arrangement 100 are classified in families representing hardware equivalence classes of processors. In arrangement 100, the billing processors are in family FMP 1, the call control processors are in family FMP 2, the database processors are in family FMP 3, and the line processors are in family FMP 4. Pump up module 62-60 determines, based on the PMP 312 of processor 34, that the PMPtoFMP table (FIG. 14) defines processor 34 to be in family FMP 2, the family of call control processors. Then, using the LMPstatusPMP table (FIG. 10) and the LMPconfig table (FIG. 15), LMP 8 is selected as the next unequipped processor in family FMP 2. Pump up module 62-60 then transmits a message (D) to heartbeat module 62-30 which is a local HeartBeat.Pulse message defining LMP 8, PMP 312, and out-of-service as the initial state of processor 34. This allows heartbeat module 62-30 to complete the appropriate entries for the new processor in its LMPstatusPMP table. Pump up module 62-60 then transmits a HeartBeat.Start message (E) to heartbeat module 62-30 defining the LMP 13, PMP 812, and active state of database processor 62. This results in the broadcast of the HeartBeat.Pulse message (F) to all other processors defining the LMP, PMP, and present processor state of database processor 62. Processor 34 receives the HeartBeat.Pulse message and includes entries appropriately defining database processor 62 in its LMPstatusPMP table, which allows processor 34 to communicate with processor 62. Heartbeat module 62-30 transmits a message (G) to timer module 62-40 such that database processor 62 continues to periodically broadcast its heartbeat messages. Pump up module 62-60 transmits a ColdStart.IdentifyOCD message (H) to cold start module 34-20 defining LMP 13, the logical identity of active database processor 62. Pump up module 62-60 then transmits a ColdStart.ChangeIdentity message (I) to cold start module 34-20 defining LMP 8, the out-of-service state, and the appropriate SpareGroup table. In response, cold start module 34-20 transmits a message (J) to heartbeat module 34-30 to invoke its HeartBeat.Start routine to initiate the transmission of heartbeat messages by processor 34 defining LMP 8, PMP 312, and the out-of-service state. Message (K) in FIG. 9 represents the first such heartbeat message. Heartbeat module 34-30 then transmits a reset message (L) to timer module 34-40 such that the timer will periodically trigger the transmission of heartbeat messages. Pump up module 62-60 then transmits a ColdStart.Restore message (M) to cold start module 34-20 including an S-record comprising program text or data required by processor 34 to operate as a spare call control processor. Cold start module 34-20 returns an acknowledgement message (N) to pump up module 62-60 upon the successful receipt of the S-record. The messages (M) and (N) continue to be exchanged until all the necessary program text and data has been downloaded to processor 34. The letter (O) in FIG. 9 indicates that a craftsperson may interact through a craft interface processor (not shown) and monitor diagnostic messages during routine testing of processor 34. Assume for the purposes of this example that processor 34 passes all diagnostic tests and is to be placed in the standby state. The craft interface processor then transmits a ColdStart.ChangeIdentity message (P) to cold start module 34-20 defining LMP 8, the new standby state, and the SpareGroup table appropriate for processor 34. In response, processor 34 changes from the out-of-service state to the standby state. Cold start module 34-20 then transmits a local HeartBeat.Start message (Q) to heartbeat module 34-30 such that the appropriate entries can be made in the LMPstatusPMP table. Heartbeat module 34-30 then broadcasts its first new heartbeat message (R) informing the other processors of its readiness to serve as a standby call control processor. Heartbeat module 34-30 then transmits a reset message (S) to timer module 34-40 such that the heartbeat messages are triggered at the appropriate periodic intervals.
Resource Pools
FIG. 16 shows an addition to arrangement 100 comprising three trunk processors 71, 72, and 73, each connecting one or more trunks from a remote central office 200 to bus 10 of arrangement 100. Trunk processors 71, 72, and 73 form a resource pool since any one of them offers equivalent services to the other processors such as call control processor 32. For example, if during the progress of a call, call control processor 32 determines that the call is to be completed to remote central office 200, processor 32 can select any one of the trunk processors 71, 72 or 73 to establish the connection. As with the other processors of arrangement 100, the trunk processors each broadcast heartbeat messages on bus 10 defining the LMP, PMP and actual processor state. A separate entry for each of the trunk processors 71, 72 and 73 is maintained in all LMPstatusPMP tables based on the received heartbeat messages. For example, a portion of the LMPstatusPMP table stored by call control processor 32 is shown in FIG. 17. Whenever call control processor 32 needs to complete a call to remote central office 200, it selects one of the three LMPs 15, 16, or 17 and communicates with the trunk processor having the selected LMP to complete the call. As with the other processors of arrangement 100, when trunk processor 72, for example, fails, it terminates the transmission of its heartbeat messages. Each of the other processors of arrangement 100 detects such termination, and when the Pulse Count entry in the LMPstatusPMP table is reduced to zero, trunk processor 72 (having LMP 16 and PMP 941) is considered out-of-service as shown in FIG. 18. Thereafter, call control processor 32 will select one of the trunk processors 71 or 73 to complete calls to remote central office 200.
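The selection step amounts to scanning the pool's rows in the status table for an active member. A minimal C sketch follows, assuming the fixed LMP range 15 through 17 of the example and a first-active selection policy; the text requires only that an active pool member be chosen.

    #define MAX_LMP 64

    enum proc_state { PS_ACTIVE, PS_STANDBY, PS_OUT_OF_SERVICE, PS_UNEQUIPPED };

    /* Actual State column of the LMPstatusPMP table, indexed by LMP. */
    static enum proc_state actual_state[MAX_LMP];

    #define TRUNK_LMP_FIRST 15  /* trunk pool LMPs 15-17, per FIG. 17 */
    #define TRUNK_LMP_LAST  17

    /* Return the LMP of an active trunk processor, or -1 if the pool
     * is exhausted.  First-active selection is an illustrative policy. */
    static int select_trunk_processor(void)
    {
        for (int lmp = TRUNK_LMP_FIRST; lmp <= TRUNK_LMP_LAST; lmp++)
            if (actual_state[lmp] == PS_ACTIVE)
                return lmp;
        return -1;
    }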
The programs comprising the processor communication, cold start, heartbeat, and pump up modules are written, for example, in the well-known "C" programming language. Flow charts describing the various functions performed by the modules are shown in FIGS. 19 through 33.
The flow chart of FIG. 19 describes the response of the heartbeat module in processor LMP A to the receipt of a HeartBeat.PulseMessage(LMP B, PMP B, State B) from processor LMP B. During blocks 1101, 1102, and 1103, the LMPstatusPMP table entries for LMP B are set as follows: (1) the PMP entry for LMP B is set equal to PMP B in accordance with the message, (2) the Pulse Count entry for LMP B is set to the reset value, e.g., five, and (3) the Actual State entry for LMP B is set to State B in accordance with the message.
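Blocks 1101 through 1103 amount to three table writes. A minimal C sketch, reusing the row layout assumed in the earlier sketches:

    #include <stdint.h>

    #define PULSE_RESET 5

    struct lmp_status { uint16_t pmp; int pulse_count; uint8_t actual_state; };
    extern struct lmp_status lmp_status_pmp[];  /* indexed by LMP */

    /* FIG. 19: refresh the sender's row on receipt of
     * HeartBeat.PulseMessage(LMP B, PMP B, State B). */
    void heartbeat_pulse(uint16_t lmp_b, uint16_t pmp_b, uint8_t state_b)
    {
        lmp_status_pmp[lmp_b].pmp          = pmp_b;        /* block 1101 */
        lmp_status_pmp[lmp_b].pulse_count  = PULSE_RESET;  /* block 1102 */
        lmp_status_pmp[lmp_b].actual_state = state_b;      /* block 1103 */
    }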
The flow chart of FIGS. 20 and 21 describes the response of the heartbeat module in processor LMP A to the receipt of a timeout message from the timer module. During block 1201, a determination is made, based on the value of a Recovery Flag, of whether a recovery operation is in progress. If recovery is not in progress, execution proceeds to decision block 1202 to determine whether a subsequent decision block 1203 has been executed for all elements of the LMPstatusPMP table. Assuming a negative determination, execution proceeds to block 1203, during which the ith element of the LMPstatusPMP table is checked and non-zero Pulse Count entries are decremented. Execution proceeds to decision block 1204, where a determination is made of whether the Pulse Count entry is in fact already equal to zero. If it is, execution proceeds to block 1205, where the Actual State entry is changed to the unequipped state. Execution proceeds to decision block 1206, where a determination is made, based on the SpareGroup table, whether processor LMP A can spare for the failed processor. A negative determination in either of the decision blocks 1204 or 1206 results in a branch back to decision block 1202. If the determination made in block 1206 is positive, i.e., LMP A can spare for the failed processor, execution proceeds to blocks 1207 and 1208 (FIG. 21), where the Recovery Flag is set to True and LMP B is set equal to the logical identity of the failed LMP. Execution proceeds to block 1209 and the message PumpUp.RestoreLMP(LMP A, LMP B) is sent to the active database processor (LMP OCD). This logs the intention of LMP A to assume the identity of LMP B. Execution proceeds from block 1209 (as well as from decision blocks 1201 or 1202) to block 1210, during which processor LMP A's heartbeat message, HeartBeat.PulseMessage(LMP A, PMP A, State A), is broadcast to all processors. Execution proceeds from block 1210 to block 1211 and the heartbeat timer (implemented by the timer module) is reset.
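The whole timeout path can be summarized in one routine. The C sketch below follows the flow-chart description block by block; the helper functions and the in-memory representation are assumptions, not the patent's code.

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_LMP 64

    enum proc_state { PS_ACTIVE, PS_STANDBY, PS_OUT_OF_SERVICE, PS_UNEQUIPPED };

    struct lmp_status { uint16_t pmp; int pulse_count; enum proc_state actual_state; };

    extern struct lmp_status lmp_status_pmp[MAX_LMP];
    extern bool spare_group_contains(int lmp);              /* SpareGroup lookup */
    extern void send_pump_up_restore(int lmp_a, int lmp_b); /* to LMP OCD        */
    extern void broadcast_pulse(int lmp, int pmp, enum proc_state st);
    extern void reset_heartbeat_timer(void);

    static bool recovery_flag;  /* blocks 1201 and 1207 */

    /* FIGS. 20 and 21: timeout handling in processor LMP A. */
    void heartbeat_timeout(int lmp_a, int pmp_a, enum proc_state state_a)
    {
        if (!recovery_flag) {                                   /* block 1201 */
            for (int i = 0; i < MAX_LMP; i++) {                 /* block 1202 */
                if (lmp_status_pmp[i].pulse_count > 0) {        /* block 1203 */
                    lmp_status_pmp[i].pulse_count--;
                    continue;
                }
                /* block 1204: Pulse Count already zero - treat as failed */
                lmp_status_pmp[i].actual_state = PS_UNEQUIPPED; /* block 1205 */
                if (spare_group_contains(i)) {                  /* block 1206 */
                    recovery_flag = true;                       /* block 1207 */
                    send_pump_up_restore(lmp_a, i);             /* blocks 1208-1209 */
                    break;
                }
            }
        }
        broadcast_pulse(lmp_a, pmp_a, state_a);                 /* block 1210 */
        reset_heartbeat_timer();                                /* block 1211 */
    }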

The flow chart of FIG. 22 describes the response of the heartbeat module in processor LMP A to the receipt of the HeartBeat.Start(LMP A, PMP A, State A) message, typically from the cold start module also in processor LMP A. Execution begins with block 1301, where the Recovery Flag is set to False. Execution then proceeds to block 1302, during which processor LMP A's heartbeat message, HeartBeat.PulseMessage(LMP A, PMP A, State A), is broadcast to the other processors. Execution then proceeds to block 1303 and the heartbeat timer (implemented by the timer module) is reset.
The flow chart of FIG. 23 describes the initialization of a heartbeat module. During block 1401, all the elements of the LMPstatusPMP table are cycled through and are set to the initial settings: PMP=0, Pulse Count=0, and Actual State=Unequipped.
The flow chart of FIG. 24 describes the initialization of an unequipped processor. During block 2101, the processor's PMP is determined based, for example, on switch settings within the processor or on a ROM entry. PMP A is set equal to the determined PMP. Execution proceeds through blocks 2102 and 2103, where the heartbeat module initialization procedure and the processor communication module initialization procedure are respectively invoked. Then, during block 2104, a PumpUp.ColdStart(PMP A) message is broadcast. By convention, only the active database processor will respond.
The flow chart of FIG. 25 describes the response of the cold start module to the receipt of a ColdStart.ChangeIdentity(LMP, State, SpareGroup) message by processor LMP A. During block 2201, the SpareGroup for LMP A is defined to be the SpareGroup received in the message. During block 2202, the logical identity of processor LMP A is changed to the LMP received in the message. During block 2203, the state of the processor is set to be the state received in the message. Execution then proceeds to block 2204 and the transmission of heartbeat messages is begun by invoking the local routine HeartBeat.Start(LMP A, PMP A, State A).
The flow chart of FIG. 26 describes the response of the cold start module to the receipt of a ColdStart.RestoreLMP(S-record) message. During block 2301, the program text and data included in the received S-record are loaded into the associated memory of the processor. Execution then proceeds to block 2302 and the message PumpUp.AckSrecord(LMP A) is transmitted to the database processor (LMP OCD) to acknowledge the receipt of the S-record.
The flow chart of FIGS. 27 and 28 describes the response of the pump up module to the receipt of a PumpUp.ColdStart(PMP B) message. (Recall that only database processors have pump up modules.) The logical identity of the active database processor is referred to as LMP OCD. During block 3101, the HeartBeat.Start(LMP OCD, PMP OCD, Active) message is sent to the heartbeat module. (This results in the heartbeat message being broadcast by the database processor.) During block 3102, the appropriate family of processors FMP B is determined from the PMPtoFMP table using PMP B as the key. Execution then proceeds to block 3103 and the LMPconfig table is searched sequentially, beginning with the last entry for FMP B and progressing toward the first. The search is stopped at the first FMP B entry with an Actual State in the LMPstatusPMP table that is not unequipped. (The search is also stopped if the last FMP B entry is unequipped.) Execution then proceeds to decision block 3104, and the next entry in the LMPconfig table that is unequipped is selected and a determination is made of whether the selected entry also refers to FMP B. If not, execution proceeds to block 3105 and an office configuration database error is reported. The unequipped processor must be restarted after the database has been corrected. If a positive determination is made in decision block 3104, execution proceeds to block 3106 (FIG. 28) and the selected LMP B is assigned as the next entry in the LMPconfig table. Execution proceeds to block 3107, where the HeartBeat.PulseMessage(LMP B, PMP B, out-of-service) message is locally transmitted to the heartbeat module of the database processor so that the database processor can subsequently refer to LMP B. Execution proceeds to block 3108, where the HeartBeat.Start(LMP OCD, PMP OCD, Active) message is transmitted to the heartbeat module of the database processor so that it immediately broadcasts its own heartbeat message. Then, during blocks 3109 and 3110, the messages ColdStart.IdentifyOCD(LMP OCD) and ColdStart.ChangeIdentity(LMP B, Out-Of-Service, SpareGroup B) are transmitted to processor LMP B. Finally, in block 3111, the message ColdStart.RestoreLMP(S-record) is transmitted to begin loading processor LMP B with the required program text and data.
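The LMP assignment in blocks 3102 through 3106 is essentially a keyed table search. The C sketch below simplifies the backward search described above to a forward scan for the family's next unequipped entry (equivalent when equipped entries are contiguous in sparing order); the table shapes and helper names are assumptions.

    #include <stdint.h>

    #define MAX_LMP 64

    enum proc_state { PS_ACTIVE, PS_STANDBY, PS_OUT_OF_SERVICE, PS_UNEQUIPPED };

    /* Hypothetical forms of the configuration tables: PMPtoFMP maps a
     * physical identity to its hardware family (FIG. 14), and LMPconfig
     * lists the LMPs of each family in sparing order (FIG. 15). */
    extern int pmp_to_fmp(uint16_t pmp);
    struct lmp_config_entry { int lmp; int fmp; };
    extern struct lmp_config_entry lmp_config[];
    extern int lmp_config_entries;
    extern enum proc_state actual_state[MAX_LMP];  /* from LMPstatusPMP */

    /* Pick the next unequipped LMP of the new processor's family, or
     * return -1 to report an office configuration database error. */
    static int assign_lmp(uint16_t pmp_b)
    {
        int fmp_b = pmp_to_fmp(pmp_b);

        for (int i = 0; i < lmp_config_entries; i++) {
            if (lmp_config[i].fmp != fmp_b)
                continue;
            if (actual_state[lmp_config[i].lmp] == PS_UNEQUIPPED)
                return lmp_config[i].lmp;
        }
        return -1;
    }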
The flow chart of FIG. 29 describes the response of the pump up module of a database processor to the receipt of the PumpUp.RestoreLMP(LMP B, LMP C) message. The message indicates to the database processor that processor LMP B intends to assume the logical identity of processor LMP C. During block 3201, the downloading of the program text and data required by processor LMP C is begun by transmitting a ColdStart.RestoreLMP(S-record) message to processor LMP B.
~ The flow chart ofi FIGS. 30 and 31 describes the response of the pump up module of a database processor to the receipt of the PumpUp.AckSrecord (LMP
B~ LMP C) acknowledgment message. Execution begins with block 3301 during which a determination is made o~
whether the S-record being acknowledged was the last S-record required. If not, execution proceeds to block - 25 ~

3302 during which the next S-Lecord is requested by transmitting the message ColdStart.RestoreLMP(S-record) to processor LMP B. ~owever, if it is determined in block 3301, that the last S-record has been received, execution proceeds to block 3303 (FIG. 31) where a value of i is selected such that L~P C is the ith entry in the LMPconfig table. This selection is ~ade in order to determine the required state~ of the processor from the LMPconfig table. Execution then proceeds to block 3304 where the state of processor L~P C is assigned to be the required state for the ith entry of the LMPconfig table.
Further, a SpareGroup table corresponding to LMP C is read from database 60. The ordering of entries in the LMPconfig table must correspond to the ordering of processor sparing. Execution proceeds to decision block 3305 where a determination is made of whether the actual state of processor LMP B as defined by the LMPstatusPMP
table is the standby state. If it is, execution proceeds to block 3307 and the message ColdStart.ChangeIdentity(LMP C, State C, SpareGroup C) is transmitted to the cold start module of processor LMP
B. If a negative determination is made in block 3305, execution proceeds to block 3306 during which diagnostic tests may be performed by a craftsperson. If the processor LMP B passes the tests, the craftsperson may manually cause the message ColdStart.ChangeIdentity(LMP
C, State C, SpareGroup C) to be sent to the cold start module of processor LMP B.
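As a rough illustration of the pump-up exchange of FIGS. 29 through 31, the sketch below models the database-processor side in Python; the send() transport, the s_records iterator, and the table shapes are assumptions made for the example, not interfaces defined by the patent.

    # Database-processor side of the pump-up exchange (assumed helpers).
    def on_restore_lmp(lmp_b, s_records, send):
        # PumpUp.RestoreLMP: LMP B intends to assume LMP C's identity, so the
        # download of program text and data begins (block 3201).
        send(lmp_b, ("ColdStart.RestoreLMP", next(s_records)))

    def on_ack_s_record(lmp_b, lmp_c, s_records, lmp_config, lmp_status_pmp,
                        spare_groups, send):
        record = next(s_records, None)
        if record is not None:                       # blocks 3301-3302
            send(lmp_b, ("ColdStart.RestoreLMP", record))
            return
        # Blocks 3303-3304: required state from LMP C's LMPconfig entry, and
        # the corresponding SpareGroup table read from database 60.
        entry = next(e for e in lmp_config if e["lmp"] == lmp_c)
        state_c = entry["required_state"]
        spare_group_c = spare_groups[lmp_c]
        # Block 3305: only a processor whose actual state is standby changes
        # identity automatically; otherwise a craftsperson runs diagnostics
        # (block 3306) and may send the message manually.
        if lmp_status_pmp[lmp_b]["actual_state"] == "standby":
            send(lmp_b, ("ColdStart.ChangeIdentity", lmp_c, state_c, spare_group_c))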
The flow chart of FIG. 32 describes the initialization of the processor communication module.
During block 4001, interrupt vectors are set up and programs are initialized to read, transmit, and buffer inter-processor messages. The interrupt vectors are stored in a memory table of subroutines to be preemptively performed whenever an external input/output event occurs.
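The vector table of block 4001 can be pictured with the small Python model below; a real implementation installs machine-level interrupt vectors, so the dispatch dictionary, handler names, and message buffers here are purely illustrative assumptions.

    # Interrupt vectors modeled as a dispatch table (illustrative only).
    IRQ_RECEIVE, IRQ_TRANSMIT = 0, 1
    interrupt_vectors = {}                # the memory table of subroutines
    inbound, outbound = [], []            # inter-processor message buffers

    def install_vector(irq, handler):
        interrupt_vectors[irq] = handler

    install_vector(IRQ_RECEIVE, inbound.append)   # read and buffer a message
    install_vector(IRQ_TRANSMIT, lambda _: outbound and outbound.pop(0))

    def on_interrupt(irq, event=None):
        # Preemptively perform the stored subroutine for this I/O event.
        return interrupt_vectors[irq](event)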

The flow chart of FIG. 33 describes the response of a cold start module in a processor being initially equipped for service to a ColdStart.IdentifyOCD(LMP Sender) message from the pump up module in the active database processor. During block 2401, the LMP Sender identity received in the message is assigned as the identity of the active database processor (LMP OCD).
It is to be understood that the above-described fault recovery methods and distributed processing arrangements are merely illustrative of the principles of the present invention and that other methods and arrangements may be devised by those skilled in the art without departing from the spirit and scope of the invention. For example, in the fault recovery methods previously described, only processors in a standby state are allowed to replace failed active processors. In an alternative method, for example in an arrangement including no standby processors, one active processor having a given logical identity defining the performance of functions in the arrangement can in addition assume the logical identity and perform the functions of a failed active processor. In such a case, the operational processor broadcasts separate heartbeat messages for each logical identity, and distinct entries are stored in each of the LMPstatusPMP tables of the other processors in the arrangement.
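A minimal sketch of this alternative, assuming a broadcast() callback and a simple dictionary message shape (both illustrative rather than drawn from the patent):

    # One operational processor carrying several logical identities, with a
    # separate heartbeat broadcast for each identity it has assumed.
    class MultiIdentityProcessor:
        def __init__(self, physical_id, logical_id, broadcast):
            self.physical_id = physical_id
            self.logical_ids = [logical_id]     # its own identity to start
            self.broadcast = broadcast

        def assume_identity(self, failed_logical_id):
            # Take over the functions of a failed active processor.
            self.logical_ids.append(failed_logical_id)

        def tick(self):
            # One heartbeat per logical identity, so the other processors keep
            # distinct entries in their LMPstatusPMP tables for each identity.
            for lmp in self.logical_ids:
                self.broadcast({"lmp": lmp, "pmp": self.physical_id,
                                "state": "active"})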


Claims (37)

Claims:
1. In an arrangement comprising a plurality of processors interconnected for message communication and each having a logical identity defining functions performed by that processor with respect to said arrangement, a fault recovery method comprising each of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define the logical identity of the processor broadcasting the heartbeat message, at least one of said processors maintaining an associated status table defining the logical identities of others of said processors based on heartbeat messages received therefrom and said at least one of said processors, upon failing to receive heartbeat messages defining one of said logical identities defined in said status table, initiating performance of the functions defined by said one of said logical identities.
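The method of claim 1 can be sketched in a few lines of Python; the timeout-based scan, the callback, and the injectable clock are assumptions made for illustration, and the heartbeat here carries only the logical identity (claims 9 and 29 add the physical identity and present processor state to the same message).

    import time

    # Per-processor bookkeeping for claim 1: a status table mapping each
    # logical identity to the time of its last received heartbeat message.
    class FaultMonitor:
        def __init__(self, timeout, take_over, clock=time.monotonic):
            self.timeout = timeout      # seconds allowed between heartbeats
            self.take_over = take_over  # initiate the lost identity's functions
            self.clock = clock
            self.status = {}            # logical identity -> last heartbeat

        def on_heartbeat(self, logical_id):
            self.status[logical_id] = self.clock()

        def scan(self):
            for logical_id, last in list(self.status.items()):
                if self.clock() - last > self.timeout:
                    del self.status[logical_id]
                    self.take_over(logical_id)   # the initiating step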
2. A method in accordance with claim 1 wherein said at least one of said processors has an associated sparing table defining logical identities each defining functions that said at least one of said processors can perform, said method further comprising said at least one of said processors, upon failing to receive heartbeat messages defining said one of said logical identities and prior to said initiating step, reading said sparing table, and said at least one of said processors determining based on said read sparing table, whether said at least one of said processors can perform the functions defined by said one of said logical identities, wherein said initiating step is performed only upon a determination that said at least one of said processors can perform the functions defined by said one of said logical identities.
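Claim 2's check slots in ahead of that takeover; in this fragment the sparing table is assumed to be a set of logical identities the processor can assume.

    # Guard the takeover of claim 1 with the sparing table of claim 2.
    def guarded_take_over(logical_id, sparing_table, take_over):
        # Read the sparing table; initiate the functions only upon a
        # determination that this processor can perform them.
        if logical_id in sparing_table:
            take_over(logical_id)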
3. A method in accordance with claim 2 wherein said arrangement further comprises a database processor interconnected for message communication with said plurality of processors, wherein said initiating step further comprises said at least one of said processors transmitting a download request message to said database processor, and wherein said method further comprises said database processor responding to said download request message by downloading information needed by said at least one of said processors to perform the functions defined by said one of said logical identities.
4. A method in accordance with claim 3 wherein said initiating step further comprises said at least one of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define said one of said logical identities.
5. A method in accordance with claim 1 wherein said arrangement further comprises a database processor interconnected for message communication with said plurality of processors, wherein said initiating step further comprises said at least one of said processors transmitting a download request message to said database processor, and wherein said method further comprises said database processor responding to said download request message by downloading information needed by said at least one of said processors to perform the functions defined by said one of said logical identities.
6. A method in accordance with claim 1 wherein said initiating step further comprises said at least one of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define said one of said logical identities.
7. A method in accordance with claim 1 wherein said at least one of said processors and one of said processors having said one of said logical identities, operate in a duplex standby mode of operation.
8. A method in accordance with claim 1 wherein each of said plurality of processors can have multiple logical identities, wherein said at least one of said processors has a given logical identity prior to said initiating step, and wherein said initiating step further comprises said at least one of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define said given logical identity, and said at least one of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define said one of said logical identities.
9. In an arrangement comprising a plurality of processors interconnected for message communication and each having a physical identity with respect to said arrangement and a logical identity defining functions performed by that processor with respect to said arrangement, a fault recovery method comprising each of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define the physical identity and the logical identity of the processor broadcasting the heartbeat message, each of said processors maintaining an associated status table defining the physical and logical identities of others of said processors based on heartbeat messages received therefrom, and a given one of said processors, upon failing to receive heartbeat messages defining one of the logical identities defined in the status table associated with said given processor, initiating performance of the functions defined by said one of said logical identities.
10. A method in accordance with claim 9 wherein said initiating step further comprises said given processor repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define the physical identity of said given processor and said one of said logical identities.
11. A method in accordance with claim 10 further comprising ones of said processors responding to said heartbeat messages defining the physical identity of said given processor and said one of said logical identities, by updating their associated status tables to define said given processor as having said one of said logical identities.
12. In an arrangement comprising a resource pool of processors and at least one other processor interconnected for message communication, a method for use by said other processor for selecting a processor from said resource pool comprising each of said resource pool of processors repeatedly transmitting heartbeat messages to said other processor, which heartbeat messages each define a present processor state of the processor transmitting the heartbeat message, said other processor maintaining based on heartbeat messages received from said resource pool of processors, a status table defining a present processor state of each of said resource pool of processors and said other processor selecting a processor from said resource pool based on the processor state defined by said status table for said selected processor.
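Claims 12 through 15 describe selection from a resource pool keyed by processor state; a sketch under assumed state names and a plain dictionary status table:

    # Status table keyed by pool processor (state names are assumptions).
    def on_pool_heartbeat(status_table, processor, state):
        status_table[processor] = state              # record the present state

    def on_heartbeat_timeout(status_table, processor):
        status_table[processor] = "out-of-service"   # the update of claim 14

    def select_from_pool(status_table):
        # Claim 13: choose a processor the table shows as active.
        return next((p for p, s in status_table.items() if s == "active"), None)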
13. A method in accordance with claim 12 wherein said selecting step comprises said other processor selecting a processor from said resource pool that is defined as being in an active processor state by said status table.
14. A method in accordance with claim 12 further comprising the following steps prior to said selecting step:
one of said resource pool of processors terminating the transmission of heartbeat messages and said other processor, upon failing to receive heartbeat messages from said one of said resource pool of processors, changing said status table to define the present state of said one of said resource pool of processors as being out-of-service.
15. A method in accordance with claim 14 wherein said selecting step comprises said other processor selecting a processor from said resource pool that is defined as being in an active processor state by said status table.
16. In an arrangement comprising a plurality of processors interconnected for message communication and including N active processors and at least one spare processor, N being a positive integer greater than one, each of said plurality of processors having a logical identity defining functions performed by that processor with respect to said arrangement, a method of recovering from a failure of any one of said N active processors comprising each of said plurality of processors repeatedly broadcasting heartbeat messages to others of said plurality of processors, each of said plurality of processors monitoring the receipt of heartbeat messages from others of said plurality of processors, said one processor terminating its broadcasting of heartbeat messages, and said spare processor, upon failing to receive heartbeat messages from said one processor, initiating performance of the functions defined by the logical identity of said one processor.
17. A method in accordance with claim 16 wherein said arrangement further comprises a database processor interconnected for message communication with said plurality of processors, and wherein said at least one of said processors further comprises means for transmitting a download request message to said database processor, and wherein said database processor comprises means responsive to said download request message for downloading information needed by said at least one of said processors to perform the functions defined by said one of said logical identities.
18. In an arrangement comprising a plurality of processors interconnected for message communication and including N active processors, at least one primary spare processor, and at least one secondary spare processor, N being a positive integer greater than one, each of said plurality of processors having a logical identity defining functions performed by that processor with respect to said arrangement, a method of recovering from a failure of any one of said N active processors comprising each of said plurality of processors repeatedly broadcasting heartbeat messages to others of said plurality of processors, each of said plurality of processors monitoring the receipt of heartbeat messages from others of said plurality of processors, said one processor terminating its broadcasting of heartbeat messages, said primary spare processor, upon failing to receive heartbeat messages from said one processor, initiating performance of the functions defined by the logical identity of said one processor, and upon said primary spare processor initiating performance of the functions defined by the logical identity of said one processor, said secondary spare processor initiating performance of the functions defined by the logical identity of said primary spare processor.
19. In an arrangement comprising a plurality of processors interconnected for message communication and including at least one active processor, at least one primary spare processor, and at least one secondary spare processor, each of said plurality of processors having a logical identity defining functions performed by that processor with respect to said arrangement, a method of recovering from a failure of said one active processor comprising each of said plurality of processors repeatedly broadcasting heartbeat messages to others of said plurality of processors, which heartbeat messages each define the logical identity of the processor broadcasting the heartbeat message, each of said plurality of processors monitoring the receipt of heartbeat messages from others of said plurality of processors, said one active processor terminating its broadcasting of heartbeat messages, upon failing to receive heartbeat messages from said one active processor, said primary spare processor terminating its broadcasting of heartbeat messages defining the logical identity of said primary spare processor and initiating performance of the functions defined by the logical identity of said one active processor, said secondary spare processor, upon failing to receive heartbeat messages defining the logical identity of said primary spare processor, initiating performance of the functions defined by the logical identity of said primary spare processor.
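Claims 18 and 19 chain the same monitoring: the secondary spare watches the primary spare's own logical identity, so a takeover by the primary cascades. A deterministic walk-through reusing the FaultMonitor sketch given after claim 1, with a manual clock assumed for the example:

    def cascade_demo():
        t, log = [0.0], []
        clock = lambda: t[0]
        primary = FaultMonitor(5.0, lambda lid: log.append("primary assumes " + lid), clock)
        secondary = FaultMonitor(5.0, lambda lid: log.append("secondary assumes " + lid), clock)
        primary.on_heartbeat("LMP-A")    # last heartbeat of an active processor
        t[0] = 6.0
        secondary.on_heartbeat("LMP-P")  # primary spare still broadcasts its identity
        primary.scan()                   # timeout on LMP-A: the primary spare assumes
                                         # it and stops broadcasting LMP-P heartbeats
        t[0] = 12.0
        secondary.scan()                 # timeout on LMP-P: the secondary assumes it
        return log                       # ["primary assumes LMP-A", "secondary assumes LMP-P"]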
20. In an arrangement comprising a plurality of processors interconnected for message communication and each having a logical identity defining the functions performed by that processor with respect to said arrangement, a fault recovery method comprising each of said processors repeatedly broadcasting heartbeat messages to others of said processors, said heartbeat messages each defining the logical identity of the processor broadcasting the heartbeat message, any one of said processors terminating the broadcasting of its heartbeat messages and another of said processors, upon failing to receive heartbeat messages from said any one of said processors, initiating performance of the functions defined by the logical identity of said any one of said processors.
21. A method in accordance with claim 20 wherein each of said processors has an associated sparing table defining logical identities each defining functions that said each processor can perform, said method further comprising said another of said processors, upon failing to receive heartbeat messages from said any one of said processors and prior to said initiating step, reading the sparing table associated with said another of said processors, and said another of said processors determining based on said read sparing table, whether said another of said processors can assume said logical identity of said any one of said processors, wherein said initiating step is performed only upon a determination that said another of said processors can perform the functions defined by said logical identity of said any one of said processors.
22. A method in accordance with claim 20 wherein said initiating step further comprises said another of said processors repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define said logical identity of said any one of said processors.
23. A distributed processing arrangement comprising a plurality of processors interconnected for message communication and each having a logical identity defining functions performed by that processor with respect to said arrangement, wherein each of said processors comprises means for repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define the logical identity of said each processor, and wherein at least one of said processors further comprises means for maintaining a status table defining the logical identities of others of said processors based on heartbeat messages received therefrom, and means responsive to a failure to receive heartbeat messages defining one of said logical identities defined in said status table, for initiating performance of the functions defined by said one of said logical identities.
24. An arrangement in accordance with claim 23 wherein said at least one of said processors further comprises means for storing a sparing table defining logical identities each defining functions that said at least one of said processors can perform, and means for reading said sparing table, and means for determining based on said read sparing table, whether said at least one of said processors can perform the functions defined by said one of said logical identities, and wherein said at least one of said processors initiates performance of the functions defined by said one of said logical identities only upon a determination by said determining means that said at least one of said processors can perform the functions defined by said one of said logical identities.
25. An arrangement in accordance with claim 24 further comprising a database processor interconnected for message communication with said plurality of processors, and wherein said at least one of said processors further comprises means for transmitting a download request message to said database processor, and wherein said database processor comprises means responsive to said download request message for downloading information needed by said at least one of said processors to perform the functions defined by said one of said logical identities.
26. An arrangement in accordance with claim 25 wherein the broadcasting means of said at least one of said processors initiates, upon said at least one of said processors initiating performance of the functions defined by said one of said logical identities, the repeated broadcasting of heartbeat messages to others of said processors, which heartbeat messages each define said one of said logical identities.
27. An arrangement in accordance with claim 23 further comprising a database processor interconnected for message communication with said plurality of processors, and wherein said at least one of said processors further comprises means for transmitting a download request message to said database processor, and wherein said database processor comprises means responsive to said download request message for downloading information needed by said at least one of said processors to perform the functions defined by said one of said logical identities.
28. An arrangement in accordance with claim 23 wherein the broadcasting means of said at least one of said processors initiates, upon said at least one of said processors initiating performance of the functions defined by said one of said logical identities, the repeated broadcasting of heartbeat messages to others of said processors, which heartbeat messages each define said one of said logical identities.
29. An arrangement in accordance with claim 23 wherein each of said plurality of processors has a physical identity with respect to said arrangement, wherein each of said plurality of processors is in one of a plurality of processor states, and wherein the heartbeat messages broadcast by the broadcasting means of each of said plurality of processors, each define the logical identity, physical identity, and present processor state of the processor broadcasting the heartbeat message.
30. An arrangement in accordance with claim 23 wherein said arrangement is included in a distributed control switching system.
31. A distributed processing arrangement comprising a plurality of processors interconnected for message communication and each having a physical identity with respect to said arrangement and a logical identity defining functions performed by that processor with respect to said arrangement, wherein each of said processors comprises means for repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define the physical identity and the logical identity of the processor broadcasting the heartbeat message, and means for maintaining an associated status table defining the physical and logical identities of others of said processors based on heartbeat messages received therefrom, and wherein a given one of said processors further comprises means responsive to a failure to receive heartbeat messages defining one of the logical identities defined in the status table associated with said given processor, for initiating performance of the functions defined by said one of said logical identities.
32. An arrangement in accordance with claim 31 wherein the broadcasting means of said given processor initiates, upon said given processor initiating performance of the functions defined by said one of said logical identities, the repeated broadcasting of heartbeat messages to others of said processors, which heartbeat messages each define the physical identity of said given processor and said one of said logical identities.
33. An arrangement in accordance with claim 32 wherein the maintaining means of each of said processors is responsive to the receipt of said heartbeat messages defining the physical identity of said given processor and said one of said logical identities, for updating the status table associated with said each processor to define said given processor as having said one of said logical identities.
34. A distributed processing arrangement comprising a resource pool of processors and at least one other processor interconnected for message communication, each of said resource pool of processors comprising means for repeatedly transmitting heartbeat messages to said other processor, which heartbeat messages each define a present processor state of the processor transmitting the heartbeat message, and said other processor comprising means for maintaining based on heartbeat messages received from said resource pool of processors, a status table defining a present processor state of each of said resource pool of processors, and means for selecting a processor from said resource pool based on the processor state defined by said status table for the selected processor.
35. A distributed processing arrangement comprising a plurality of processors interconnected for message communication and each having a logical identity defining functions performed by that processor with respect to said arrangement, wherein each of said processors comprises means for repeatedly broadcasting heartbeat messages to others of said processors, said heartbeat messages each defining the logical identity of the processor broadcasting the heartbeat message, means responsive to a termination in receiving heartbeat messages from any one of said processors, for initiating performance of the functions defined by the logical identity of said any one of said processors.
36. An arrangement in accordance with claim 35 wherein said each processor further comprises means for storing an associated sparing table defining logical identities each defining functions that said each processor can perform, means responsive to said termination, for reading the sparing table associated with said each processor, and means for determining based on said read sparing table, whether said each processor can perform the functions defined by said logical identity of said any one of said processors, wherein said initiating means is responsive to said determining means for initiating performance of the functions defined by said logical identity of said any one of said processors only when said determining means determines that said each processor can perform the functions defined by said logical identity of said any one of said processors.
37. An arrangement in accordance with claim 35 wherein said initiating means further comprises means for repeatedly broadcasting heartbeat messages to others of said processors, which heartbeat messages each define said logical identity of said any one of said processors.
CA000526244A 1985-12-27 1986-12-23 Fault recovery in a distributed processing system Expired - Fee Related CA1267226A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US814,115 1985-12-27
US06/814,115 US4710926A (en) 1985-12-27 1985-12-27 Fault recovery in a distributed processing system

Publications (1)

Publication Number Publication Date
CA1267226A true CA1267226A (en) 1990-03-27

Family

ID=25214204

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000526244A Expired - Fee Related CA1267226A (en) 1985-12-27 1986-12-23 Fault recovery in a distributed processing system

Country Status (5)

Country Link
US (1) US4710926A (en)
EP (1) EP0230029B1 (en)
JP (1) JPH07104793B2 (en)
CA (1) CA1267226A (en)
DE (1) DE3688526T2 (en)

Families Citing this family (110)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0789337B2 (en) * 1985-10-30 1995-09-27 株式会社日立製作所 Distributed file recovery method
EP0228559A1 (en) * 1985-12-17 1987-07-15 BBC Brown Boveri AG Fault-tolerant multi-computer arrangement
CA1269178A (en) * 1987-01-30 1990-05-15 John David Morton Automatic refresh of operating parameters in equipment with volatile storage
US5065311A (en) * 1987-04-20 1991-11-12 Hitachi, Ltd. Distributed data base system of composite subsystem type, and method fault recovery for the system
US4827499A (en) * 1987-06-12 1989-05-02 American Telephone And Telegraph Company At&T Bell Laboratories Call control of a distributed processing communications switching system
US4881230A (en) * 1987-10-05 1989-11-14 Ibm Corporation Expert system for processing errors in a multiplex communications system
US4817092A (en) * 1987-10-05 1989-03-28 International Business Machines Threshold alarms for processing errors in a multiplex communications system
US4873687A (en) * 1987-10-05 1989-10-10 Ibm Corporation Failing resource manager in a multiplex communication system
JPH01147727A (en) * 1987-12-04 1989-06-09 Hitachi Ltd Fault restoring method for on-line program
EP0358785B1 (en) * 1988-09-12 1993-11-24 Siemens Aktiengesellschaft Device for operating a redundant multiprocessor system for the control of an electronic signal mechanism in the train signal technique
US5551035A (en) * 1989-06-30 1996-08-27 Lucent Technologies Inc. Method and apparatus for inter-object communication in an object-oriented program controlled system
US5377322A (en) * 1989-07-19 1994-12-27 Hitachi, Ltd. Information handling method and system utilizing multiple interconnected processors and controllers
JP2755437B2 (en) * 1989-07-20 1998-05-20 富士通株式会社 Continuous operation guarantee processing method of communication control program
US5105420A (en) * 1990-04-09 1992-04-14 At&T Bell Laboratories Method and apparatus for reconfiguring interconnections between switching system functional units
US5165031A (en) * 1990-05-16 1992-11-17 International Business Machines Corporation Coordinated handling of error codes and information describing errors in a commit procedure
US5327532A (en) * 1990-05-16 1994-07-05 International Business Machines Corporation Coordinated sync point management of protected resources
JP3293839B2 (en) * 1990-05-16 2002-06-17 インターナショナル・ビジネス・マシーンズ・コーポレーション Computer system that adjusts the commit scope to the unit of work
US5319773A (en) * 1990-05-16 1994-06-07 International Business Machines Corporation Asynchronous resynchronization of a commit procedure
US5261089A (en) * 1990-05-16 1993-11-09 International Business Machines Corporation Optimization of commit procedures by utilizing a two-phase commit procedure only when necessary
JP2691081B2 (en) * 1990-05-16 1997-12-17 インターナショナル・ビジネス・マシーンズ・コーポレイション Computer network
US5319774A (en) * 1990-05-16 1994-06-07 International Business Machines Corporation Recovery facility for incomplete sync points for distributed application
US5276876A (en) * 1990-05-16 1994-01-04 International Business Machines Corporation Registration of resources for commit procedures
US5206952A (en) * 1990-09-12 1993-04-27 Cray Research, Inc. Fault tolerant networking architecture
DE4112334A1 (en) * 1991-04-16 1992-10-22 Bosch Gmbh Robert Multiprocessor system for control of motor vehicle - provides functions for ABS and engine operation controlled with monitoring to identify cold and warm start conditions following fault identification
US5309448A (en) * 1992-01-03 1994-05-03 International Business Machines Corporation Methods and systems for alarm correlation and fault localization in communication networks
US5377195A (en) * 1992-04-02 1994-12-27 Telefonaktiebolaget L M Ericsson Leaky bucket for supervision in industrial processes
US5471631A (en) * 1992-10-19 1995-11-28 International Business Machines Corporation Using time stamps to correlate data processing event times in connected data processing units
US5513354A (en) * 1992-12-18 1996-04-30 International Business Machines Corporation Fault tolerant load management system and method
US5640513A (en) * 1993-01-22 1997-06-17 International Business Machines Corporation Notification of disconnected service machines that have stopped running
US5371883A (en) * 1993-03-26 1994-12-06 International Business Machines Corporation Method of testing programs in a distributed environment
US5408649A (en) * 1993-04-30 1995-04-18 Quotron Systems, Inc. Distributed data access system including a plurality of database access processors with one-for-N redundancy
US5812757A (en) * 1993-10-08 1998-09-22 Mitsubishi Denki Kabushiki Kaisha Processing board, a computer, and a fault recovery method for the computer
US5574718A (en) * 1994-07-01 1996-11-12 Dsc Communications Corporation Signal protection and monitoring system
US5560033A (en) * 1994-08-29 1996-09-24 Lucent Technologies Inc. System for providing automatic power control for highly available n+k processors
US5530939A (en) * 1994-09-29 1996-06-25 Bell Communications Research, Inc. Method and system for broadcasting and querying a database using a multi-function module
US5675723A (en) * 1995-05-19 1997-10-07 Compaq Computer Corporation Multi-server fault tolerance using in-band signalling
US5696895A (en) * 1995-05-19 1997-12-09 Compaq Computer Corporation Fault tolerant multiple network servers
US5822512A (en) * 1995-05-19 1998-10-13 Compaq Computer Corporation Switching control in a fault tolerant system
US5682470A (en) * 1995-09-01 1997-10-28 International Business Machines Corporation Method and system for achieving collective consistency in detecting failures in a distributed computing system
SE515348C2 (en) * 1995-12-08 2001-07-16 Ericsson Telefon Ab L M Processor redundancy in a distributed system
GB2308040A (en) * 1995-12-09 1997-06-11 Northern Telecom Ltd Telecommunications system
US5978933A (en) * 1996-01-11 1999-11-02 Hewlett-Packard Company Generic fault tolerant platform
EP0882266A1 (en) 1996-02-20 1998-12-09 Intergraph Corporation High-availability super server
US6032271A (en) * 1996-06-05 2000-02-29 Compaq Computer Corporation Method and apparatus for identifying faulty devices in a computer system
US6148377A (en) * 1996-11-22 2000-11-14 Mangosoft Corporation Shared memory computer networks
US6026474A (en) * 1996-11-22 2000-02-15 Mangosoft Corporation Shared client-side web caching using globally addressable memory
US7136903B1 (en) 1996-11-22 2006-11-14 Mangosoft Intellectual Property, Inc. Internet-based shared file service with native PC client access and semantics and distributed access control
US5909540A (en) * 1996-11-22 1999-06-01 Mangosoft Corporation System and method for providing highly available data storage using globally addressable memory
US5987506A (en) * 1996-11-22 1999-11-16 Mangosoft Corporation Remote access and geographically distributed computers in a globally addressable storage environment
FR2760160B1 (en) * 1997-02-27 1999-03-26 Alsthom Cge Alcatel METHOD FOR ELECTING THE ACTIVE STATION IN A SECURE INFORMATION PROCESSING SYSTEM
US5983371A (en) * 1997-07-11 1999-11-09 Marathon Technologies Corporation Active failure detection
US6006206A (en) * 1997-09-08 1999-12-21 Reuters Limited Data health monitor for financial information communications networks
US6173411B1 (en) 1997-10-21 2001-01-09 The Foxboro Company Method and system for fault-tolerant network connection switchover
US6799224B1 (en) 1998-03-10 2004-09-28 Quad Research High speed fault tolerant mass storage network information server
US6115829A (en) * 1998-04-30 2000-09-05 International Business Machines Corporation Computer system with transparent processor sparing
US6189112B1 (en) 1998-04-30 2001-02-13 International Business Machines Corporation Transparent processor sparing
US6260155B1 (en) 1998-05-01 2001-07-10 Quad Research Network information server
US6370656B1 (en) * 1998-11-19 2002-04-09 Compaq Information Technologies, Group L. P. Computer system with adaptive heartbeat
US6389551B1 (en) 1998-12-17 2002-05-14 Steeleye Technology, Inc. Method of preventing false or unnecessary failovers in a high availability cluster by using a quorum service
EP1142279B1 (en) * 1998-12-21 2004-09-22 Siemens Aktiengesellschaft Method for detecting errors occurring in at least one electric unit, especially a switching oriented unit
US6581166B1 (en) 1999-03-02 2003-06-17 The Foxboro Company Network fault detection and recovery
US6378084B1 (en) * 1999-03-29 2002-04-23 Hewlett-Packard Company Enclosure processor with failover capability
JP3892998B2 (en) * 1999-09-14 2007-03-14 富士通株式会社 Distributed processing device
JP3651353B2 (en) * 2000-04-04 2005-05-25 日本電気株式会社 Digital content reproduction system and digital content distribution system
US6735717B1 (en) * 2000-04-13 2004-05-11 Gnp Computers, Inc. Distributed computing system clustering model providing soft real-time responsiveness and continuous availability
US6865157B1 (en) * 2000-05-26 2005-03-08 Emc Corporation Fault tolerant shared system resource with communications passthrough providing high availability communications
US7263476B1 (en) 2000-06-12 2007-08-28 Quad Research High speed information processing and mass storage system and method, particularly for information and application servers
US6854072B1 (en) 2000-10-17 2005-02-08 Continuous Computing Corporation High availability file server for providing transparent access to all data before and after component failover
WO2002058337A1 (en) * 2001-01-19 2002-07-25 Openwave Systems, Inc. Computer solution and software product to establish error tolerance in a network environment
US20040153714A1 (en) * 2001-01-19 2004-08-05 Kjellberg Rikard M. Method and apparatus for providing error tolerance in a network environment
US20020124201A1 (en) * 2001-03-01 2002-09-05 International Business Machines Corporation Method and system for log repair action handling on a logically partitioned multiprocessing system
US7206309B2 (en) * 2001-03-27 2007-04-17 Nortel Networks Limited High availability packet forward apparatus and method
JP4491167B2 (en) * 2001-04-27 2010-06-30 富士通株式会社 Backup system for management device in communication system
JP2002344484A (en) * 2001-05-21 2002-11-29 Nec Corp Method and system for restoring connection of network
US7743126B2 (en) * 2001-06-28 2010-06-22 Hewlett-Packard Development Company, L.P. Migrating recovery modules in a distributed computing environment
KR20030056290A (en) * 2001-12-28 2003-07-04 한국전자통신연구원 A Process error Recovery Technique by the Duplication System and Process
US6889339B1 (en) * 2002-01-30 2005-05-03 Verizon Serivces Corp. Automated DSL network testing software tool
JP4107083B2 (en) * 2002-12-27 2008-06-25 株式会社日立製作所 High-availability disk controller, its failure handling method, and high-availability disk subsystem
US7269623B2 (en) * 2003-01-09 2007-09-11 Raytheon Company System and method for distributed multimodal collaboration using a tuple-space
US7194655B2 (en) * 2003-06-12 2007-03-20 International Business Machines Corporation Method and system for autonomously rebuilding a failed server and a computer system utilizing the same
US7389411B2 (en) * 2003-08-29 2008-06-17 Sun Microsystems, Inc. Secure transfer of host identities
GB2405965B (en) * 2003-08-29 2005-11-02 Sun Microsystems Inc Transferring system identities
US7444396B2 (en) * 2003-08-29 2008-10-28 Sun Microsystems, Inc. Transferring system identities
KR100703488B1 (en) * 2004-06-01 2007-04-03 삼성전자주식회사 Method and apparatus for state transition backup router in a router redundancy system
JP4315057B2 (en) * 2004-06-02 2009-08-19 ソニー株式会社 Information processing apparatus, information processing method, and program
US7681104B1 (en) 2004-08-09 2010-03-16 Bakbone Software, Inc. Method for erasure coding data across a plurality of data stores in a network
US7681105B1 (en) * 2004-08-09 2010-03-16 Bakbone Software, Inc. Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network
US7627614B2 (en) * 2005-03-03 2009-12-01 Oracle International Corporation Lost write detection and repair
ITTO20050439A1 (en) * 2005-06-23 2006-12-24 Magneti Marelli Sistemi Elettr CIRCUIT ARRANGEMENT OF A CAN NETWORK NODE.
US20070041554A1 (en) * 2005-08-12 2007-02-22 Sbc Knowledge Ventures L.P. Method and system for comprehensive testing of network connections
US20080126748A1 (en) * 2006-09-01 2008-05-29 Capps Louis B Multiple-Core Processor
US8074109B1 (en) * 2006-11-14 2011-12-06 Unisys Corporation Third-party voting to select a master processor within a multi-processor computer
JP4433006B2 (en) * 2007-07-04 2010-03-17 株式会社デンソー Multi-core abnormality monitoring device
US20100211637A1 (en) * 2009-02-17 2010-08-19 Nokia Corporation Method and apparatus for providing shared services
US9081653B2 (en) 2011-11-16 2015-07-14 Flextronics Ap, Llc Duplicated processing in vehicles
US20140156120A1 (en) * 2012-11-30 2014-06-05 Electro-Motive Diesel, Inc. Monitoring and failover operation within locomotive distributed control system
US10152500B2 (en) 2013-03-14 2018-12-11 Oracle International Corporation Read mostly instances
US9767178B2 (en) 2013-10-30 2017-09-19 Oracle International Corporation Multi-instance redo apply
US9892153B2 (en) 2014-12-19 2018-02-13 Oracle International Corporation Detecting lost writes
US10061664B2 (en) * 2015-01-15 2018-08-28 Cisco Technology, Inc. High availability and failover
US10360116B2 (en) * 2015-02-13 2019-07-23 International Business Machines Corporation Disk preservation and failure prevention in a raid array
US11657037B2 (en) 2015-10-23 2023-05-23 Oracle International Corporation Query execution against an in-memory standby database
US10747752B2 (en) 2015-10-23 2020-08-18 Oracle International Corporation Space management for transactional consistency of in-memory objects on a standby database
US10275272B2 (en) * 2016-06-20 2019-04-30 Vmware, Inc. Virtual machine recovery in shared memory architecture
US10698771B2 (en) 2016-09-15 2020-06-30 Oracle International Corporation Zero-data-loss with asynchronous redo shipping to a standby database
US10891291B2 (en) 2016-10-31 2021-01-12 Oracle International Corporation Facilitating operations on pluggable databases using separate logical timestamp services
US11475006B2 (en) 2016-12-02 2022-10-18 Oracle International Corporation Query and change propagation scheduling for heterogeneous database systems
US10691722B2 (en) 2017-05-31 2020-06-23 Oracle International Corporation Consistent query execution for big data analytics in a hybrid database
US10459782B2 (en) * 2017-08-31 2019-10-29 Nxp Usa, Inc. System and method of implementing heartbeats in a multicore system
US10595088B2 (en) * 2018-03-28 2020-03-17 Neulion, Inc. Systems and methods for bookmarking during live media streaming

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3991406A (en) * 1963-12-31 1976-11-09 Bell Telephone Laboratories, Incorporated Program controlled data processing system
US3828321A (en) * 1973-03-15 1974-08-06 Gte Automatic Electric Lab Inc System for reconfiguring central processor and instruction storage combinations
US4257097A (en) * 1978-12-11 1981-03-17 Bell Telephone Laboratories, Incorporated Multiprocessor system with demand assignable program paging stores
US4363093A (en) * 1980-03-10 1982-12-07 International Business Machines Corporation Processor intercommunication system
US4451916A (en) * 1980-05-12 1984-05-29 Harris Corporation Repeatered, multi-channel fiber optic communication network having fault isolation system
US4412281A (en) * 1980-07-11 1983-10-25 Raytheon Company Distributed signal processing system
US4371754A (en) * 1980-11-19 1983-02-01 Rockwell International Corporation Automatic fault recovery system for a multiple processor telecommunications switching control
JPS5810258A (en) * 1981-07-13 1983-01-20 Hitachi Ltd Configuration controller for computer system
US4438494A (en) * 1981-08-25 1984-03-20 Intel Corporation Apparatus of fault-handling in a multiprocessing system
JPS5854470A (en) * 1981-09-28 1983-03-31 Toshiba Corp Controlling system for constitution of multiple electronic computer
JPS5884349A (en) * 1981-11-12 1983-05-20 Toshiba Corp Controlling system for constitution of computer
DE3215177A1 (en) * 1982-04-23 1983-10-27 Hartmann & Braun Ag, 6000 Frankfurt MONITORING SYSTEM FOR ONE OR MULTIPLE, SIMILAR DESIGN PROCESS STATIONS
US4634110A (en) * 1983-07-28 1987-01-06 Harris Corporation Fault detection and redundancy management system
JPS6061850A (en) * 1983-09-12 1985-04-09 インタ−ナショナル ビジネス マシ−ンズ コ−ポレ−ション Computer system
US4718005A (en) * 1984-05-03 1988-01-05 International Business Machines Corporation Distributed control of alias name usage in networks
US4589066A (en) * 1984-05-31 1986-05-13 General Electric Company Fault tolerant, frame synchronization for multiple processor systems
AU591057B2 (en) * 1984-06-01 1989-11-30 Digital Equipment Corporation Local area network for digital data processing system
US4633467A (en) * 1984-07-26 1986-12-30 At&T Bell Laboratories Computer system fault recovery based on historical analysis
US4635184A (en) * 1984-12-17 1987-01-06 Combustion Engineering, Inc. Distributed control with mutual spare switch over capability
US4718002A (en) * 1985-06-05 1988-01-05 Tandem Computers Incorporated Method for multiprocessor communications

Also Published As

Publication number Publication date
EP0230029A2 (en) 1987-07-29
US4710926A (en) 1987-12-01
DE3688526T2 (en) 1993-12-09
JPS62177634A (en) 1987-08-04
EP0230029B1 (en) 1993-06-02
DE3688526D1 (en) 1993-07-08
EP0230029A3 (en) 1989-01-25
JPH07104793B2 (en) 1995-11-13

Similar Documents

Publication Publication Date Title
CA1267226A (en) Fault recovery in a distributed processing system
US5805785A (en) Method for monitoring and recovery of subsystems in a distributed/clustered system
US4628508A (en) Computer of processor control systems
US6728746B1 (en) Computer system comprising a plurality of machines connected to a shared memory, and control method for a computer system comprising a plurality of machines connected to a shared memory
CA1227875A (en) Computer system fault recovery based on historical analysis
US5748883A (en) Distributed device status in a clustered system environment
EP0165934B1 (en) Control for a multiprocessing system program process
US5179695A (en) Problem analysis of a node computer with assistance from a central site
US5121486A (en) Network control system for dynamically switching a logical connection between an identified terminal device and an indicated processing unit
US20040001449A1 (en) System and method for supporting automatic protection switching between multiple node pairs using common agent architecture
JP2636179B2 (en) Common control redundant system switching method
US7424640B2 (en) Hybrid agent-oriented object model to provide software fault tolerance between distributed processor nodes
US5461609A (en) Packet data network switch having internal fault detection and correction
EP0423773A2 (en) Emergency resumption processing apparatus for an information processing system
JP3025732B2 (en) Control method of multiplex computer system
WO1994000925A1 (en) Fault tolerant radio communication system controller
CN100490343C (en) A method and device for realizing switching between main and backup units in communication equipment
JPH08234968A (en) Program execution management system and method
CA1269141A (en) Task synchronization arrangement and method for remote duplex processors
JP3042034B2 (en) Failure handling method
KR19990075606A (en) Protocol and System Transfer Priority Management Method for System Redundancy
JPH05143505A (en) Method for controlling terminal
JPH0418743B2 (en)
JPH02244940A (en) Protocol conversion system
JP2000293404A (en) System and method for job execution processing

Legal Events

Date Code Title Description
MKLA Lapsed