CA2077772A1 - Method for fault diagnosis by assessment of confidence measure - Google Patents

Method for fault diagnosis by assessment of confidence measure

Info

Publication number
CA2077772A1
Authority
CA
Canada
Prior art keywords
test
failure
value
fault
persistence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002077772A
Other languages
French (fr)
Inventor
Douglas Charles Doskocil
Alan Mark Offt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
General Electric Co
Original Assignee
General Electric Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by General Electric Co
Publication of CA2077772A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2273Test methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S706/00Data processing: artificial intelligence
    • Y10S706/902Application using ai with detail of the ai system
    • Y10S706/911Nonmedical diagnostics

Abstract

Abstract of the Invention

A method for enabling a diagnostic system to assess, within time constraints, the health of the host system during operation, and to detect and isolate system faults during maintenance, with reduced potential for false alarms (due to intermittent real faults and system noise) and for apparent misdiagnosis of the host system health. A Diagnostics by Confidence Measure Assessment (DCMA) process provides a confidence measure for each system test failure assessment, resulting from both the use of specialized persistence processing on many test results from a single source and the use of specialized corroboration processing on many test results from different sources.

Description

METHOD FOR FAULT DIAGNOSIS BY ASSESSMENT OF CONFIDENCE MEASURE

The present invention relates to diagnostic testing methods and, more particularly, to methods for determining the confidence level of any detected and corroborated persistent fault.
Background of the Invention

In system engineering usage, a fault may be defined as any physical condition which causes an object to fail to perform in a required manner; thus, a failure is an inability of an object to perform its desired function. Failures are detected by evaluation of test results, i.e. the results of comparing a measurement (itself defined as a sample of a signal of interest) to some predetermined operational limits. The primary objective and challenge of a diagnostic system, then, is to obtain simultaneously high levels of both the coverage and accuracy of fault detection in a system being diagnosed. In fact, a fault detection (FD) effectiveness parameter can be defined, as the product of fault detection coverage and fault detection accuracy, and is a measure of the diagnostic system's ability to detect all potential faults. It is desirable to simultaneously increase the probability of detecting faults in equipment when a fault exists, while reducing the probability of declaring a fault when one does not exist. Increased fault detection, the ability to detect more faults than a previous capability, can be the result of either increased fault coverage, e.g. the presence of more test points in the same equipment, or greater detection accuracy, e.g. the implementation of better tests or processing. Conversely, decreased fault detection leads to missing more real faults, and is almost never desirable.

A false alarm is defined as a fault indication (by Built-In Test or other monitoring circuitry) where no fault exists. However, the user community extends the definition of false alarm to include activity which does not correct the causative fault; this may include actions such as isolating a fault to the wrong module or the inability to reproduce the fault during maintenance. Both false-alarm actions result in maintenance actions which do not correct the actual fault: the user's perception is that the causative fault does not exist. Similarly, detection of a temporary or transient real fault is also considered an error. Consider the fault isolation process as applied to an aircraft: if a real fault is detected while the plane is in flight, but cannot be duplicated during ground maintenance, then the maintenance staff considers that fault to be a false alarm. Such a condition is most often caused by intermittent behavior of the system in use, due to factors including overheating, part fatigue and corrosion, poor calibration, noise and the like. Since the plane is not stressed in the same manner while on the ground, these temporary real faults either disappear or cannot be duplicated; however, continued use of the unrepaired plane is not always a desirable alternative.
Due to the possibility of serious consequences if faults are not properly diagnosed, there have been many past attempts to provide diagnostic systems with ever increasing levels of fault detection effectiveness. Some systems have tried to increase effectiveness by changing test limits and, in some cases, by checking for repeated results. Changing test measurement limits generates mixed results in a system having measurement variations. Noise in either, or both, of the system-under-test (SUT) and the associated diagnostic system can cause proper measurements, taken in a correctly operating system, to lie in the region of assessed failures, while similar measurements of a failed system may lie in the region of correct operation.
If it is desired to increase fault detection by tightening test limits (e.g. allowing a smaller measurement variation from the mean value before existence of a fault is declared), then the test threshold must move toward the mean value of the measurement. However, since more noisy measurements lie outside the limit of correct operation, the resulting diagnostic system will declare more false alarms. Conversely, decreasing false alarms by changing only a measurement limit will allow more measurement variation (e.g. allow movement of the test limit farther from the measurement mean value) before declaration of a fault condition occurs. However, use of this fault threshold location decreases the diagnostic system's ability to detect a fault, since a noiseless measurement would have to deviate more from its intended location for correct operation before a fault is detected. Accordingly, a new technique is required to simultaneously increase fault detection while reducing false alarms. This new technique is desirably compatible with new, multi-level, integrated diagnostic systems and also desirably capable of an increased probability of detecting faults while reducing, if not eliminating, false alarms and intermittent real faults. Thus, we desire to provide a new method for fault diagnosis which will substantially reduce or eliminate the effects of intermittent faults, noisy measurements, potential false alarms, and out-of-tolerance conditions, while providing flexibility of system changes and implementation in a standardized architecture (capable of replication and similar usage in different system portions and the like).

Finally, the high level of fault detection without false alarms must be achieved in a timely manner, in order to facilitate operator and system response to the failure. It is therefore not acceptable to perform extensive post-processing of test results if such processing will require use of time beyond the system time constraints.
Prior Art

Many different diagnostic concepts have previously been tried and found wanting:
a. Treed Fault Analysis - the traditional, deductive fault analysis method. Tests on the prime item are run sequentially to verify that inputs, power supply voltages and other equipment states are correct, so that a fault may be isolated. Fault isolation flows deductively from an observed or measured failure indication through a sequential set of tests that searches and sequentially eliminates all equipment faults that could have produced that indication. Decisions are made on a binary Pass/Fail basis: if a test passes, one deductive path of two is taken; if the test fails, another deductive path is taken. In both decision legs, more tests are performed and (binary) decisions made until only one fault could have caused the failure indication. Our new fault diagnosis methodology will differ from a Treed Fault Analysis (TFA) in three primary areas: the new method will use a graded fault indication, called a confidence measure, instead of a binary Pass/Fail decision; the deductive part of our new method will operate in parallel fashion rather than in the sequential form of the TFA; and the deductive part of our new method will numerically combine the graded indications of a fault to arrive at a decision, rather than use a test Pass/Fail indication to make decisions.
b. M of N Fault Filtering Decisions - a first-generation method of filtering fault indications that are gathered serially from the same test point in order to make a fault decision. N samples of the equipment's state are gathered (generally in a sliding window); if M of these N samples indicate a failed test (measurement outside a limit), then the decision is made that a real failure has occurred. For example, if M=3 of N=5 samples indicate a failure, then the test has failed. Notice that the false alarm reduction method of forcing a test to pass three consecutive times before declaring a fault falls into this category. Our new diagnostic method differs from M-of-N processing at least by utilization of a different process for the persistence analysis of serially-gathered data; we utilize operations unknown to the M-of-N processing form, such as averaging of multiple-sample differences and the like.
c. Module Intelligent Test Equipment (MITE) - a diagnostic concept, developed at the Automated Systems Department of General Electric Company, that diagnoses the health of equipment by looking for correct operating states of the prime equipment, rather than by looking for faults and fault symptoms. Decisions are made from the combined probability of failure obtained from test results to assess equipment states. Among other objectives, this method attempts to prevent an improper fault diagnosis when a fault and its symptoms have not been identified in a Failure Modes and Effects Criticality Analysis (FMECA). We have retained the use, during fault detection, of correct operating states as a basis for decisions, but then utilize the new concept of confidence measure, rather than a probability of failure indication.
d. Abductive Reasoning - a model-based diagnostic method apparently originated by Abtech Corporation of Charlottesville, VA. In its execution, abductive reasoning samples the input and output states of an equipment being diagnosed. These input states are then passed through the equipment model; the outputs of the model are compared to the actual output samples from the equipment being diagnosed. If the differences between corresponding equipment and model outputs are sufficiently large, a fault is declared. This approach may be unique in the architecture of the model (a multiple-input, multiple-output, third-order, cross-coupled-input polynomial) and in the AIM program, which generates model information about the equipment to be diagnosed by calculating the coefficients of the polynomials from expected output states of the equipment being diagnosed when the equipment is presented with a range of inputs. Models may be developed at different levels and combined to synthesize more complicated models. We prefer to not use a model as the basis of our diagnostic system.
e. Diagnostic Expert Systems - computer programs that logically combine operational and fault information about the system in order to diagnose equipment faults. The decision-making process (i.e. hypothesize a fault and then search for the system states that can cause that fault, or observe symptoms and then search for faults that match those symptoms) is part of the expert system software. The information used to reach decisions, and the symptoms that are associated with a fault, are entered into tables. The software operates on this table-resident data to make logical conclusions based on the state of the SUT equipment. We will continue to operate on information tables in our new method, but will, at least, add numerical and logical combinations of the graded indications of equipment states to arrive at our fault decisions.
f. Neural Networks applied to Diagnostic Systems - a relatively new form that combines diagnostic information, represented at a very low level (i.e. digital bits), and makes a decision based on (bit) pattern recognition techniques, which we prefer not to use in our new method.

Brief Summary of the Invention

In accordance with the invention, a DCMA method for operating a diagnostic processor (a microcontroller or general purpose computer, which is either embedded in portions of the system-under-test or is resident on its own module and connected to the SUT) interacts with the system-under-test through test points and other (inherent or unique) monitoring devices within that system. The method for diagnosing the failure condition during operation and maintenance of the associated system, using a plurality of system test points, comprises the steps of: performing a sequence of each of a plurality of individual tests upon the system to evoke a like sequence of responses at a designated configuration of test points; determining a persistence factor T for a sequential set of a plurality N of at least one selected test response; converting the T factor to a confidence measure CM for that set of sequential test responses; determining at least one failure mode based upon all of the selected test responses; and corroborating the determined failure mode by comparison to other data obtained from the system, prior to reporting the existence of that mode for the system.
Brief Description of the Drawings

Figure 1 is a schematic block diagram of a system under test, interconnected with a diagnostic processor utilizing the novel methods of the present invention;

Figure 2 is a graph illustrating the noise-included probability density functions of failed and correctly-operating systems;

Figures 3a-3c are graphic examples of how different degrees of persistence can operate with differing measurements to obtain different degrees of test confidence;

Figure 4 is a test method logic flow diagram for the method of the present invention;

Figure 5a is a schematic block diagram of a portion of a system to be tested, for illustration purposes, using the present invention; and

Figures 5b and 5c are time-coordinated graphs respectively illustrating a series of measurements and the Confidence Measures evaluated for a running set of N of those measurements.

Detailed Description of the Invention

Referring initially to Figure 1, a diagnostic processor 10 may be a microcontroller, microcomputer or other general purpose computational element or hard-wired subsystem and the like, programmed to carry out our novel DCMA method, as hereinbelow described, on a system 11 under test (SUT).

The system receives system inputs at an input port 11a and provides, responsive thereto, system outputs at an output port 11b; during operation, the system provides various test signals, each at an associated one of a plurality n of test points TP1-TPn. Each test point is coupled to a corresponding one of test outputs 11-1 through 11-n, and thence to a corresponding one of test inputs 10-1 through 10-n of the diagnostic processor. The processor may also receive timing and test state number inputs via a timing input port 10a. The processor performs a failure mode evaluation, to assess whether or not a failure has occurred in the system (and the prevailing system conditions if there is a failure), and provides its results, and a measure indicative of the confidence in that evaluation, at an output port 10b, for subsequent use as desired.
Referring now to Figure 2, it will be seen why the confidence measure, which is a numerical result used to represent both the persistence and corroboration of failure evaluation results, is used to reduce the susceptibility of the diagnostic process to noise effects. The graph has an abscissa 14 scaled in terms of test value, with an ordinate 15 scaled in terms of the probability p of any particular test value occurring for a particular test. Curve 17 is the well-known Gaussian probability density function of any one test result occurring in a properly operating real system (i.e. a system having some normal, non-zero amount of noise); the bell-shaped curve 17 peaks at the expected mean value 17m of the test result. If a threshold value 18 is used to establish a pass-fail test criterion (i.e. with test passage for all values above value 18 and test failure for all values below value 18), then there will be a region 17a in which a failure may be diagnosed, due to noise effects, even though the system, by definition, is operating correctly - area 17a represents undesired false alarms. Similarly, on a curve 19 of the probability density function of test results in a known-failed system, the system noise will likely cause failure signals to be detected as values falling in a portion 19a, which results in undesired test passage. Thus, one sees that it is highly desirable to reduce, if not remove, the effects of test value noise on the diagnostic process.
One proposed solution to the noise problem is to average a set of several measurements for the identical test condition. As seen in Figures 3a-3c, there is a further problem in the confidence which one can place on any set of results: the measurement acceptability area 20a is bounded by both an unacceptable area 20b, for out-of-limit results below a lower limit 21a of operation, and an unacceptable area 20c, for out-of-limit results above an upper operational limit 21b. The proximity of the plural test measurements 22 to either limit 21 is not the only characteristic to be considered. Thus, both set 22 (Figure 3a) and set 23 (Figure 3b) have the same average value (with a set mean value 22m or 23m at the same distance D from the center value 21c of the acceptable band 20a), but set 22 has a relatively small measurement error band ε, caused by noise and the like, while the measurement error band ε' of set 23 is larger (i.e., ε < ε'). Based on the greater possibility of a noise-induced false alarm in the wider error band of set 23, we say that we have a "low" confidence for set 23, i.e. have lower confidence that set 23 has measured a passing value within allowable limits, and have a "high" confidence for set 22, i.e. have higher confidence that set 22 has measured a passing value within allowable limits. Similarly, for another set 24 (Figure 3c) of an equal number of measurements of the same parameter, with an error band about as big as that of set 23 (i.e. ε ≈ ε'), we can say that we have high confidence for set 24 and low confidence for set 23, because the set 24 mean value 24m is so much closer to the within-limits center value 21c: the difference D' between values 24m and 21c is less than the distance D between the set 23 mean value 23m and center value 21c, and D' is much less than the within-limits tolerance band half-distance (value 21c to limit 21a or 21b). Thus, the confidence in a measurement set should be related to the persistence of the measurements, and is a combination of at least noise and separation factors.
In accordance with the invention, we utilize several new processing methods to reduce false alarms and increase fault detections; we call the general methodology "Diagnostics by Confidence Measure Assessment" (DCMA). Several features we have developed are:

1. The use of a confidence measure as a test result indication.

2. The use of specialized persistence processing on many test results from a single source.

3. The use of specialized corroboration processing on many test results from different sources.

Confidence Measure

Our Confidence Measure (CM) is a numerical indication, between -1 and +1, defining how well a test passes its limits of operation (when CM is a positive value) and/or how well a test detects a specific failure (when CM is a negative value). Thus, a resulting CM of -1 indicates that a test has been failed with 100% confidence, while a CM of +1 indicates that there is 100% confidence that the passing of a test was correct.

The intent of the Confidence Measure is to provide a graded indication of test results, so that the state of a unit may be interpreted more accurately than just a Pass or Fail. In addition, it provides a mechanism with which to combine many test results, using corroboration processing, in order to provide a more accurate assessment of a unit's health.

Persistence

Persistence is defined as the DCMA process of using more than one sample of test results from a single source to calculate a confidence measure in the conclusion inferred from the test being carried out.

Persistence tends to produce a stable measurement and indicate the certainty that a set of measurements of the same parameter, at the same test point, are within operating limits, so as to eliminate both short-term intermittencies and measurement noise. Persistence may be a serial evaluation of measurements obtained from the same source.
Referring to Figure 4, the diagrammed step-by-step flow of a presently preferred embodiment of our method is commanded to start (step 26) and then enters the individual testing subroutine 28, wherein, for each one of a plurality M of different tests, various inputs stimulate (step 30) the SUT, so that responses can be measured (step 32) at the various test points TPi, for 1 ≤ i ≤ n. Each test can be a standardized implementation, with a set-up selected to exercise certain preselected paths through each equipment with known stimuli, using predetermined built-in-test (BIT) configurations. Each measured response is a signal of interest which is compared (step 34) to the known value for a fully-operational SUT; the difference results are normalized to test limits and reported (step 36) to the persistence subroutine 38 as one of a stream of measurements.
Persistence processing occurs for each individual test; in the illustrated embodiment, a specific persistence processing method operates on the last "N" test result samples of the stream. The number of samples and the operational limits can be standardized, in accordance with predetermined parameter tables. During processing, the average of the last N measurements, using the present sample and the (N-1) previous samples, is calculated and subtracted from the closest limit NL of correct operation (steps 40 and 42); the same result can be obtained, with greater difficulty in implementation and execution, by reversing the order of operations and subtracting each measured value from its nearest limit before averaging the differences. At the same time, a standard deviation σ from this average difference is calculated (step 44) for the same "N" samples. A T value is calculated (step 46) by dividing the resultant average difference by the standard deviation σ and multiplying the result by the square root of the number of samples. The calculated T value is converted to a confidence measure CM (step 48) by using a prestored parameter look-up table (step 50) which maps ranges of T values to confidence measures associated with the specific test.
Thus:

T = [(AVE(M) - NL) / σ(M)] · √N, taken over N samples,

and CM = f(T),

where:
CM is the confidence measure in the test result;
T is an intermediate T measure;
M is the measured sample from the SUT;
NL is the limit of correct operation that is nearest the measurement in the range of operation;
N is the number of sequential samples; and
f(T) is a table whose contents map T values to Confidence Measures.
The speed and resulting confidence of this persistence technique can be adjusted by changing the number N of samples used to make a decision and the threshold NL for hypothesizing a fault. Thus, persistence processing of "raw" measurement samples tends to eliminate intermittent fault indications, compensate for measurement "noise", and provide a variable test reporting latency dependent upon both the noise level and the proximity of the measurement to its acceptable operational limits, with a confidence value result dependent upon all of these factors.
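By way of illustration only, the persistence processing of steps 40-50 might be sketched in Python as follows. This is a minimal sketch, not the patented implementation: the T-to-CM mapping table, the limit values, and the sign convention (negative CM for an out-of-limit average) are hypothetical stand-ins for the prestored per-test parameter tables.

import math
import statistics

# Hypothetical T-to-CM table (step 50): pairs of (lower T bound, CM value),
# loosely matched to the Figure 5 narrative (T near 7.9 -> CM about +0.8,
# T near 3.0 -> CM about +0.3). A real system would load per-test tables.
T_TO_CM = [(7.0, 0.8), (5.0, 0.6), (3.0, 0.3), (1.0, 0.1), (0.0, 0.0)]

def confidence_measure(samples, lower_limit, upper_limit):
    """Persistence over the last N samples from a single test point."""
    n = len(samples)
    avg = sum(samples) / n                                # step 40: average of last N
    nl = min((lower_limit, upper_limit),
             key=lambda lim: abs(avg - lim))              # nearest limit NL
    diff = abs(nl - avg)                                  # step 42: distance to NL
    sigma = statistics.pstdev(samples)                    # step 44: standard deviation
    passing = lower_limit < avg < upper_limit
    if sigma == 0.0:                                      # perfectly stable sample set
        return 1.0 if passing else -1.0
    t = (diff / sigma) * math.sqrt(n)                     # step 46: T value
    cm = next(c for t_min, c in T_TO_CM if t >= t_min)    # step 48: table lookup
    return cm if passing else -cm                         # failed tests report negative CM

# Echoing Figure 5b: eight tightly clustered in-limit samples near +5.15 VDC,
# against a +5.5 VDC upper limit, yield a large T and the top-band CM of +0.8.
window = [5.12, 5.15, 5.18, 5.14, 5.16, 5.15, 5.13, 5.17]
print(confidence_measure(window, lower_limit=4.5, upper_limit=5.5))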

Corroboration

Corroboration is defined as the DCMA process of using more than one sample of test results from different sources to calculate a confidence measure in the conclusion drawn from a SUT failure. Generally, more than one positive test result is needed for the corroboration process to produce a high-confidence result. If two test points are sequentially located along a test route, then, using corroboration, one may say that the route has failed when test results indicate that the signal measured at both of the sequential test points has failed with high confidence.

The totality of test results TRj, for 1 ≤ j ≤ M, can be combined, often with parallel processing, to find at least one failure mode FMk, for 1 ≤ k ≤ R, by passage through a predetermined failure mode/test results matrix 52, based on the test results TRj which indicate that at least one failure has occurred. One of the failure modes can be the absence of failures. These various failure mode indicators FMk are processed in subroutine 54 for reportage of corroborated FMs. Subroutine 54 arithmetically and logically combines these test results, represented by confidence measures, to generate a numerical evaluation (the confidence measure, between -1 and +1) about the status of that failure mode.

The process (step 56) of combining test results uses mathematical operators (+, -, ×, /), logical operators (AND, OR, NOT, COMPLEMENT), and constants as tools to detect and verify faults, along with data as to which test source was used to produce the current response data. The result of this combination is then compared (step 58) to a pre-defined threshold. If the result exceeds the threshold (a YES decision), step 60 is entered and a failure is declared via a Failure Report Rx, so that an action associated with this Rx indication can be executed. If all Failure Mode assessments indicate that no faults are present, then a periodic "No Faults" report is sent (step 62) and the subroutine made to return to its beginning, to corroborate the next set of failure mode data received from the matrix 52. If the result does not exceed the threshold (a NO decision), step 64 is entered, a "No Fault" indication is kept enabled for this set of failure modes, and the action will be returned to corroborate the next failure mode found.
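The loop of steps 56-64 might be outlined as below. This is a sketch under stated assumptions: the combine argument stands in for the step 56 operator processing, and the failure sense of "exceeds the threshold" is taken here as a corroborated confidence measure at or below a negative threshold.

def corroborate_and_report(failure_modes, combine, threshold=-0.7):
    """Steps 56-64: combine each failure mode's test results and compare
    the corroborated confidence measure to a pre-defined threshold."""
    reports = []
    for mode, test_results in failure_modes.items():
        result = combine(test_results)      # step 56: arithmetic/logical combination
        if result <= threshold:             # step 58: threshold comparison (YES)
            reports.append(f"Failure Report R: {mode}")   # step 60: declare failure
        # step 64 (NO): keep "No Fault" enabled and corroborate the next mode
    return reports if reports else ["No Faults"]          # step 62: periodic report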
A separate Corroboration process must be designed for each system failure mode, by choosing candidate test results for evaluation of each different mode (tests with different sources, etc.) and then developing methods to combine these test results to reach a failure conclusion. The various combinations of all failure modes may be grouped into the matrix 52 format as part of the DCMA. The matrix is a convenient method to manage all of the known failure modes. If all test results for a built-in test (BIT) level are arranged in columns of the matrix, and failure modes for the same BIT level are arranged in rows of the matrix, then entries in the body of the matrix will "mark" those test results which contribute to verifying any one failure mode, where each row in the matrix is a test vector of results (confidence measures) which uniquely identifies a failure mode. The mechanics of corroborating reports combines confidence measure entries in the failure mode/test results matrix to validate faults and detect out-of-tolerance conditions. An example of such a matrix (containing numerical test results) is:

              Test Results
              TR1     TR2     TR3     TR4   ...   TRn
Failures
FM1          -0.2      0     +0.7      0
FM2          +0.3    -0.8    +0.5    +0.1
FM3          +0.5      0     -0.5    -0.7

Thus, FM1 (the first failure mode) uses the first and third test results (TR1 and TR3) to verify this mode, FM2 (the second failure mode) uses the first four test results TR1 through TR4 to verify a failure, and so forth. Any intersection, of a failure mode row and a test result column, which is without a confidence measure, or which has a zero quantity therein, indicates that the particular test result is not required to reach that failure mode decision. This matrix gives a quick, visual method to determine if all failures can be isolated by a unique set of tests.
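To make the matrix mechanics concrete, the sketch below represents the example matrix as rows of confidence-measure entries; the values are copied from the illustrative table above, and zero entries mark test results not used by a mode.

# Failure mode / test results matrix: one row per failure mode FM1..FM3,
# one column per test result TR1..TR4.
FM_MATRIX = {
    "FM1": [-0.2,  0.0, +0.7,  0.0],
    "FM2": [+0.3, -0.8, +0.5, +0.1],
    "FM3": [+0.5,  0.0, -0.5, -0.7],
}

def tests_used(mode):
    """Test result columns (1-based) that contribute to verifying a mode."""
    return [j + 1 for j, w in enumerate(FM_MATRIX[mode]) if w != 0.0]

def uniquely_isolatable():
    """The quick visual check in code: every mode needs a distinct test set."""
    signatures = [tuple(tests_used(m)) for m in FM_MATRIX]
    return len(signatures) == len(set(signatures))

print(tests_used("FM1"))      # [1, 3]: FM1 is verified by TR1 and TR3
print(uniquely_isolatable())  # True: each failure mode has a unique test vector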
Corroboration processing comprises combining test results (confidence measures) using arithmetic operators, logical operators and constants, as specified in a Failure Mode defining statement or equation, which we call a Failure Mode Operator Line. This line may operate on the test results of one failure mode in a stack-oriented process, as specified by pointers to test results, operator tokens, constants and evaluation criteria listed on the line. An Operator Line result is a numerical value which is compared to the evaluation criteria to make a failure mode decision and generate the confidence measure CM value in that decision. In comparison to a binary process, the low confidences in a number of test results which have just passed their limits of correct operation could be combined to correctly assess a system failure, since the graded confidence in each test result contributes to the failure mode evaluation procedure. It will be understood that the common algebraic method, reverse Polish notation or any other desired method can be used for the Operator Line calculations.
Corroboration operates in two modes: foreground and background. The foreground mode is evoked when a received test report indicates a failure (negative confidence measure). That test report would then be used as an index that points to all failure modes in the Failure Mode/Test Results matrix which use that test result as an entry. Corroboration processing then evaluates this restricted subset of failure modes to quickly corroborate the reported failed-test indication, using all other required test reports.
In the background mode, corroboration processing operates on the present test result entries in the matrix to assess potential out-of-tolerance conditions. In this mode, it sequentially evaluates all failure modes to determine if the combination of test results in any one failure mode indicates degraded (or failed) performance. With a view to operating in this mode, the failure mode/test results matrix might best be called a functional operation matrix, and could be thought of as containing values of correct SUT operation rather than values which indicate failures.
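The two modes might be organized as below, a sketch assuming the dictionary matrix representation from the earlier example, with an evaluate function standing in for the Failure Mode Operator Line processing.

def failed_test_index(matrix):
    """Index from each test result column to the failure modes using it."""
    index = {}
    for mode, weights in matrix.items():
        for j, w in enumerate(weights):
            if w != 0.0:
                index.setdefault(j, []).append(mode)
    return index

def foreground(matrix, evaluate, test_reports, failed_column):
    """Foreground: a failing test report (negative CM) restricts corroboration
    to the failure modes that use that test result as an entry."""
    candidates = failed_test_index(matrix).get(failed_column, [])
    return {mode: evaluate(mode, test_reports) for mode in candidates}

def background(matrix, evaluate, test_reports):
    """Background: sequentially evaluate all failure modes against present
    test result entries to catch degraded, out-of-tolerance performance."""
    return {mode: evaluate(mode, test_reports) for mode in matrix}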
In either mode, if a corroborated system failure report R is issued from step 60, a predetermined course of action (system shut-down, manual intervention request, automatic switch-over to a redundant subsystem, and the like) can be carried out as part of subsequent step 60.

Example

Referring now to Figures 5a-5c, we consider the problem of confidence and persistence in a set of measurements of a subsystem powered by an intermittent power supply. An equipment 70 has a plurality S of sensor means 72, such as sensors 74-1 through 74-S, each powered by its own power supply means 76, such as means 76-1 through 76-S; control inputs to each sensor, and output of sensed data from each sensor, are provided through an associated interface INTF means 78-1 through 78-S. Each of the units (sensor 74, power supply 76 and INTF means 78) may have a built-in-test controller BITC means 80, to establish test conditions, so that test result data can be provided from associated test points 82, responsive to commands on a common test and measurement (T&M) bus 84. The sensors provide their information to a sensor-data-processing subsystem 86, including a built-in-test subsystem processor 88, coupled to all of the BITC units 80 via T&M bus 84, and a subsystem power supply 90, providing operating potential(s) to all of the various subsystem stages 92-98. Each of the main interface MITF means 92 and the subsequent A1, A2, A3, ... stages may have a BITC means 80, and have test points 82-a, ..., 82-h, ..., 82-p, ... from which data is sent to test processor 88, which includes the functions of diagnostic processor 10 and the DCMA methodology.
The operating +5 volt DC power to stage A2 is monitored by the 82-h test point "h", which, for a sequence of twenty test/measurement intervals, sends back the measurement data represented by datums 100-1 through 100-20 in Figure 5b; note the intermittent nature of measurement 100-7 (perhaps caused by a noise "spike" at the A2 power input terminal). Having predeterminately chosen a sample size N=8, the confidence measure CM for a traveling N-sample set has a different persistence for N measurement intervals after the intermittent measurement 100-7, and (as shown in Figure 5c) the higher CM level 102-a (prior to the intermittent datum 100-7) is changed to the lowered level 102-b of the Confidence Measure for N (=8) intervals after receipt of the spike; the CM level 102-c returns to a higher level once the effect of the spike's disturbance on the persistence is removed by the sample window traveling beyond the out-of-limit measurement. By way of illustration only, if the measurements 100-1 through 100-6 and 100-8 through 100-20 all range from about 5.12 to about 5.18 VDC, and the nearer upper limit 104 is set at +5.5 VDC, then the high confidence measure levels 102-a and 102-c are about +0.8, for a T value of (5.5-5.15)/(0.125)·√8 ≈ 7.9; responsive to a voltage spike 100-7 of about 6.0 VDC, the T value drops to (5.5-5.225)/(0.27)·√8 ≈ 3.0, and the Confidence Measure falls to the lower level 102-b of about +0.3, for the eight measurement intervals associated with measurements 100-8 through 100-15. The persistence can be easily interpreted by reference to the following chart:

P value   Measurement State                              Report
+1.0      Stable and inside limits                       "Good Health",
+0.7      Within 3σ of variation inside limits           for +1 ≥ P ≥ +0.3
+0.6      Inside of, but close to, limit
+0.1      Has variation greater than the average         Send result to
          difference from limit                          Corroboration for
 0.0      Recently changed significantly                 further analysis,
-0.1      Oscillating significantly                      for +0.3 > P > -0.3
-0.6      Outside of, but close to, limit                Test failure,
-0.7      Within 3σ of variation outside limits          for -0.3 ≥ P > -1
-1.0      Stable and outside limits
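The chart's three bands can be read directly as a routing rule; a minimal sketch, with band edges taken from the chart above:

def disposition(p):
    """Route a persistence value P per the chart above."""
    if p >= 0.3:
        return "Good Health"
    if p > -0.3:
        return "Send result to Corroboration for further analysis"
    return "Test failure"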
The above example illustrates the use of the DCMA method's Persistence and Confidence Measure aspects; the same system of Figure 5a will be used to illustrate the Corroboration aspect features of: multiple test result use for evaluation of failure mode status; failure verification via information obtained from various locations in a module/subsystem/system; fault isolation; conclusion production from a set of low-confidence test results; the foreground mode evaluation of failure modes which use a reported test; and the background mode continuous evaluation of all failure modes to determine possible out-of-tolerance conditions. Detection of a fault is illustrated by detection of the failure of the A1 assembly output by the out-of-limit test response at test point 82-c, and the subsequent corroboration by measurement of an out-of-limit condition at the A2 input test point 82-e. Isolation of a fault can be illustrated, for the failure of assembly A2, by testing the A2 input, at test point 82-e, and determining that the signal there is within limits, then corroborating this "okay" condition by determining that the module control signal at A2 control test point 82-f is also correct; when the A2 output amplitude test point 82-h is measured and found to be outside the limits of normal operation, and corroborated by testing the next assembly A3 input amplitude at test point 82-k and determining operation there is also outside of limits, the fault is isolated to assembly A2. If the conditions at respective test points 82-e, 82-f, 82-h and 82-k are respectively named TRa, TRb, TRc and TRd, then a Failure Mode Operator Line (FMOL) for this isolation example can be written as:
[MAX of (MIN(NEG((2·TRa + TRb)/3) OR 0.0))]
OR
[MAX of (TRc OR TRd)]
≤ -0.7

where the test results TRx, for a ≤ x ≤ d, are the Confidence Measure values for that particular test point response. The following table is one example illustrating the use of Corroborative processing, with only those corroborated CMs at or below -0.7 being assessed as faults (note that in this example, only a failed output test will result in reportage of a fault):

FMOL: MAX OF( MIN(NEG((2·TR1 + TR2)/3) OR 0.0) OR MAX OF(TR3 OR TR4) ) ≤ -0.7

IN ORDER OF EXECUTION: TR1, 2, ×, TR2, +, 3, /, NEG, 0.0, MIN, TR3, TR4, MAX, MAX: ≤ -0.7

                     TR1    ×2    +TR2    /3    NEG    MIN   TR3   TR4   MAX   MAX   FAILURE?
NO FAILURE          +0.7  +1.4   +2.1   +0.7  -0.7   -0.7  +0.7  +0.7  +0.7  +0.7   no
OUTPUT FAILS        +0.7  +1.4   +2.1   +0.7  -0.7   -0.7  -0.7  -0.7  -0.7  -0.7   FAILURE
INPUT FAILS         -0.7  -1.4   -0.7   -0.2  +0.2    0.0  -0.7   0.0   0.0  +0.0   no
CONTROL FAILS       +0.7  +1.4   +0.7   +0.2  -0.2   -0.2  -0.7  -0.2  -0.2  -0.2   no
BIT MONITOR FAILS   +0.7  +1.4   +2.1   +0.7  -0.7   -0.7  -0.7  +0.7  +0.7  +0.7   no
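The table rows can be reproduced by a direct transcription of this Operator Line; a sketch only, assuming the at-or-below -0.7 criterion, which is what makes the OUTPUT FAILS row a declared failure:

def fmol(tra, trb, trc, trd, threshold=-0.7):
    """A2 isolation example: MAX(MIN(NEG((2*TRa + TRb)/3), 0.0), MAX(TRc, TRd)),
    with a failure declared when the result is at or below the threshold."""
    left = min(-(2.0 * tra + trb) / 3.0, 0.0)   # input/control leg, clamped to <= 0
    right = max(trc, trd)                       # output / next-assembly-input leg
    result = round(max(left, right), 3)         # round to tame float artifacts
    return result, result <= threshold

# The five scenarios of the table (TRa input, TRb control, TRc output, TRd next input):
print(fmol(+0.7, +0.7, +0.7, +0.7))  # NO FAILURE:        (+0.7, False)
print(fmol(+0.7, +0.7, -0.7, -0.7))  # OUTPUT FAILS:      (-0.7, True)
print(fmol(-0.7, +0.7, -0.7,  0.0))  # INPUT FAILS:       ( 0.0, False)
print(fmol(+0.7, -0.7, -0.7, -0.2))  # CONTROL FAILS:     (-0.2, False)
print(fmol(+0.7, +0.7, -0.7, +0.7))  # BIT MONITOR FAILS: (+0.7, False)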
While several examples of our novel DCMA methodology are described herein, those skilled in the art will now understand that many modifications and variations can be made within the spirit of our invention. It is therefore our intent to be limited only by the scope of the appended claims, and not by the details and instrumentalities presented by way of description of the exemplary embodiments.

Claims (15)

1. A method for diagnosing the failure condition during operation and maintenance of a system having a plurality of test points, comprising the steps of:
(a) performing a sequence of each of a plurality of individual tests upon the system to evoke a like sequence of responses at a designated configuration of test points;
(b) determining a persistence factor T for a sequential set of a plurality N of at least one selected test response;
(c) converting the T factor to a confidence measure CM for that set of sequential test responses;
(d) determining at least one failure mode based upon all of the selected test responses; and (e) corroborating the determined failure mode by comparison to other data obtained from the system, prior to reporting the existence of that mode for the system.
2. The method of claim 1, wherein step (a) includes the steps of:
providing each of a plurality of sets of test stimuli to the system; measuring a response to each stimulus at each of a predetermined pattern of test points; and comparing each response measurement to a predetermined desired response.
3. The method of claim 2, wherein step (a) further includes the step of:
normalizing each response measurement prior to reporting the normalized data to the persistence determination step.
4. The method of claim 1, wherein step (b) includes the step of determining the persistence only for test response data measured for test inputs from a single source.
5. The method of claim 4, wherein step (b) further includes the step of comparing each test response to a closest one of a predetermined set of limits for correct operation of the system during that test.
6. The method of claim 5, wherein step (b) further includes the steps of:
averaging a predetermined number (n) of samples of the same test response at the same test point; finding the difference between the sample average and the closest correct-operation test limit for that test; determining a standard deviation from the average difference and the sample number (n); and calculating the T
factor value from the average difference and standard deviation values.
7. The method of claim 6, wherein the T factor calculating step includes the steps of: dividing the resulting average difference value by the resulting standard deviation value; and multiplying the result by a factor relating to thenumber (n) of samples used in the averaging step.
8. The method of claim 7, wherein the multiplying step factor is the square root (√n) of the number of samples used in the average.
9. The method of claim 6, wherein step (b) further includes the step of using a sliding average of the last (n) samples, including the present sample and the last (n-1) samples at the same test point.
10. The method of claim 1, wherein step (c) includes the step of obtaining a CM value for the calculated persistence factor value by addressing a preestablished look-up table mapping each possible persistence value to an associated confidence measure value.
11. The method of claim 1, wherein step (d) includes the step of determining that no failure modes exist when there are no persistently incorrect measurements within the system being tested.
12. The method of claim 1, wherein step (d) includes the steps of:
repeating steps (a)-(c) for a different test source; and calculating a new confidence measure for the joint set of test responses from all sources used.
13. The method of claim 12, wherein step (e) further includes the steps of: combining into a final failure mode result R the failure mods data found forall input source repetitions; comparing the combined result R against a pre-defined threshold; and declaring a system failure to exist if R exceeds the threshold.
14. The method of claim 13, wherein step (e) includes the step of taking a pre-established course of action responsive to declaration of a system failure.
15. The invention as defined in any of the preceding claims including any further features of novelty disclosed.
CA002077772A 1991-10-24 1992-09-09 Method for fault diagnosis by assessment of confidence measure Abandoned CA2077772A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/782,191 US5293323A (en) 1991-10-24 1991-10-24 Method for fault diagnosis by assessment of confidence measure
US782,191 1991-10-24

Publications (1)

Publication Number Publication Date
CA2077772A1 true CA2077772A1 (en) 1993-04-25

Family

ID=25125282

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002077772A Abandoned CA2077772A1 (en) 1991-10-24 1992-09-09 Method for fault diagnosis by assessment of confidence measure

Country Status (2)

Country Link
US (1) US5293323A (en)
CA (1) CA2077772A1 (en)

Families Citing this family (89)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2685526B1 (en) * 1991-12-20 1994-02-04 Alcatel Nv CONNECTION NETWORK WITH MONITORING SENSORS AND DIAGNOSTIC SYSTEM, AND METHOD OF ESTABLISHING DIAGNOSTICS FOR SUCH A NETWORK.
EP0557628B1 (en) * 1992-02-25 1999-06-09 Hewlett-Packard Company Circuit testing system
US5819028A (en) * 1992-06-10 1998-10-06 Bay Networks, Inc. Method and apparatus for determining the health of a network
US5566091A (en) * 1994-06-30 1996-10-15 Caterpillar Inc. Method and apparatus for machine health inference by comparing two like loaded components
US5500941A (en) * 1994-07-06 1996-03-19 Ericsson, S.A. Optimum functional test method to determine the quality of a software system embedded in a large electronic system
JPH0855029A (en) * 1994-08-09 1996-02-27 Komatsu Ltd Inference device for cause
US5570376A (en) * 1994-10-05 1996-10-29 Sun Microsystems, Inc. Method and apparatus for identifying faults within a system
DE59503378D1 (en) * 1994-10-26 1998-10-01 Siemens Ag METHOD FOR ANALYZING A MEASURED VALUE AND MEASURED VALUE ANALYZER FOR IMPLEMENTING THE METHOD
US5748497A (en) * 1994-10-31 1998-05-05 Texas Instruments Incorporated System and method for improving fault coverage of an electric circuit
US5655074A (en) * 1995-07-06 1997-08-05 Bell Communications Research, Inc. Method and system for conducting statistical quality analysis of a complex system
GB9608953D0 (en) * 1996-04-29 1996-07-03 Pulp Paper Res Inst Automatic control loop monitoring and diagnostics
US5768501A (en) * 1996-05-28 1998-06-16 Cabletron Systems Method and apparatus for inter-domain alarm correlation
US5923834A (en) * 1996-06-17 1999-07-13 Xerox Corporation Machine dedicated monitor, predictor, and diagnostic server
US5867505A (en) * 1996-08-07 1999-02-02 Micron Technology, Inc. Method and apparatus for testing an integrated circuit including the step/means for storing an associated test identifier in association with integrated circuit identifier for each test to be performed on the integrated circuit
US5799148A (en) * 1996-12-23 1998-08-25 General Electric Company System and method for estimating a measure of confidence in a match generated from a case-based reasoning system
GB2327553B (en) 1997-04-01 2002-08-21 Porta Systems Corp System and method for telecommunications system fault diagnostics
US6636841B1 (en) 1997-04-01 2003-10-21 Cybula Ltd. System and method for telecommunications system fault diagnostics
JP3778652B2 (en) * 1997-04-18 2006-05-24 株式会社日立製作所 Log data collection management method and apparatus
DE19723079C1 (en) * 1997-06-02 1998-11-19 Bosch Gmbh Robert Fault diagnosis device for automobile
US5950147A (en) * 1997-06-05 1999-09-07 Caterpillar Inc. Method and apparatus for predicting a fault condition
US5949676A (en) * 1997-07-30 1999-09-07 Allen-Bradley Company Llc Method and system for diagnosing the behavior of a machine controlled by a discrete event control system
US6785636B1 (en) * 1999-05-14 2004-08-31 Siemens Corporate Research, Inc. Fault diagnosis in a complex system, such as a nuclear plant, using probabilistic reasoning
AU5156800A (en) 1999-05-24 2000-12-12 Aprisma Management Technologies, Inc. Service level management
US7069185B1 (en) 1999-08-30 2006-06-27 Wilson Diagnostic Systems, Llc Computerized machine controller diagnostic system
US6442511B1 (en) * 1999-09-03 2002-08-27 Caterpillar Inc. Method and apparatus for determining the severity of a trend toward an impending machine failure and responding to the same
US6532426B1 (en) 1999-09-17 2003-03-11 The Boeing Company System and method for analyzing different scenarios for operating and designing equipment
US6480809B1 (en) 1999-09-23 2002-11-12 Intel Corporation Computer system monitoring
US6618691B1 (en) * 2000-08-28 2003-09-09 Alan J Hugo Evaluation of alarm settings
US6574537B2 (en) 2001-02-05 2003-06-03 The Boeing Company Diagnostic system and method
US6966015B2 (en) * 2001-03-22 2005-11-15 Micromuse, Ltd. Method and system for reducing false alarms in network fault management systems
JP2005531935A (en) 2001-07-12 2005-10-20 アトルア テクノロジーズ インコーポレイテッド Method and system for biometric image assembly from multiple partial biometric frame scans
US6907430B2 (en) 2001-10-04 2005-06-14 Booz-Allen Hamilton, Inc. Method and system for assessing attacks on computer networks using Bayesian networks
US7093168B2 (en) * 2002-01-22 2006-08-15 Honeywell International, Inc. Signal validation and arbitration system and method
US6909960B2 (en) * 2002-10-31 2005-06-21 United Technologies Corporation Method for performing gas turbine performance diagnostics
US6751536B1 (en) * 2002-12-04 2004-06-15 The Boeing Company Diagnostic system and method for enabling multistage decision optimization for aircraft preflight dispatch
US7451021B2 (en) * 2003-05-06 2008-11-11 Edward Wilson Model-based fault detection and isolation for intermittently active faults with application to motion-based thruster fault detection and isolation for spacecraft
US7206965B2 (en) * 2003-05-23 2007-04-17 General Electric Company System and method for processing a new diagnostics case relative to historical case data and determining a ranking for possible repairs
US7584420B2 (en) * 2004-02-12 2009-09-01 Lockheed Martin Corporation Graphical authoring and editing of mark-up language sequences
US7801702B2 (en) * 2004-02-12 2010-09-21 Lockheed Martin Corporation Enhanced diagnostic fault detection and isolation
US20050240555A1 (en) * 2004-02-12 2005-10-27 Lockheed Martin Corporation Interactive electronic technical manual system integrated with the system under test
US20050223288A1 (en) * 2004-02-12 2005-10-06 Lockheed Martin Corporation Diagnostic fault detection and isolation
US7257515B2 (en) * 2004-03-03 2007-08-14 Hewlett-Packard Development Company, L.P. Sliding window for alert generation
US7440862B2 (en) * 2004-05-10 2008-10-21 Agilent Technologies, Inc. Combining multiple independent sources of information for classification of devices under test
KR100862407B1 (en) * 2004-07-06 2008-10-08 인텔 코오퍼레이션 System and method to detect errors and predict potential failures
US7409594B2 (en) 2004-07-06 2008-08-05 Intel Corporation System and method to detect errors and predict potential failures
US7415328B2 (en) * 2004-10-04 2008-08-19 United Technologies Corporation Hybrid model based fault detection and isolation system
US20060120181A1 (en) * 2004-10-05 2006-06-08 Lockheed Martin Corp. Fault detection and isolation with analysis of built-in-test results
US20060085692A1 (en) * 2004-10-06 2006-04-20 Lockheed Martin Corp. Bus fault detection and isolation
US7899646B2 (en) * 2004-11-02 2011-03-01 Agilent Technologies, Inc. Method for comparing a value to a threshold in the presence of uncertainty
US20080052281A1 (en) * 2006-08-23 2008-02-28 Lockheed Martin Corporation Database insertion and retrieval system and method
US20060229777A1 (en) * 2005-04-12 2006-10-12 Hudson Michael D System and methods of performing real-time on-board automotive telemetry analysis and reporting
US7427025B2 (en) * 2005-07-08 2008-09-23 Lockheed Marlin Corp. Automated postal voting system and method
US7599688B2 (en) * 2005-11-29 2009-10-06 Alcatel-Lucent Usa Inc. Methods and apparatus for passive mid-stream monitoring of real-time properties
US7752468B2 (en) 2006-06-06 2010-07-06 Intel Corporation Predict computing platform memory power utilization
US7643916B2 (en) 2006-06-14 2010-01-05 Spx Corporation Vehicle state tracking method and apparatus for diagnostic testing
US8762165B2 (en) 2006-06-14 2014-06-24 Bosch Automotive Service Solutions Llc Optimizing test procedures for a subject under test
US8428813B2 (en) 2006-06-14 2013-04-23 Service Solutions Us Llc Dynamic decision sequencing method and apparatus for optimizing a diagnostic test plan
US9081883B2 (en) 2006-06-14 2015-07-14 Bosch Automotive Service Solutions Inc. Dynamic decision sequencing method and apparatus for optimizing a diagnostic test plan
US7865278B2 (en) * 2006-06-14 2011-01-04 Spx Corporation Diagnostic test sequence optimization method and apparatus
US8423226B2 (en) * 2006-06-14 2013-04-16 Service Solutions U.S. Llc Dynamic decision sequencing method and apparatus for optimizing a diagnostic test plan
US20070293998A1 (en) * 2006-06-14 2007-12-20 Underdal Olav M Information object creation based on an optimized test procedure method and apparatus
US20100324376A1 (en) * 2006-06-30 2010-12-23 Spx Corporation Diagnostics Data Collection and Analysis Method and Apparatus
GB2440355A (en) * 2006-07-27 2008-01-30 Rolls Royce Plc Method of Monitoring a System to Determine Probable Faults.
US8024610B2 (en) * 2007-05-24 2011-09-20 Palo Alto Research Center Incorporated Diagnosing intermittent faults
US20090216401A1 (en) * 2008-02-27 2009-08-27 Underdal Olav M Feedback loop on diagnostic procedure
US20090216584A1 (en) * 2008-02-27 2009-08-27 Fountain Gregory J Repair diagnostics based on replacement parts inventory
US8239094B2 (en) * 2008-04-23 2012-08-07 Spx Corporation Test requirement list for diagnostic tests
JP4489128B2 (en) * 2008-04-23 2010-06-23 株式会社日立製作所 Apparatus and method for monitoring a computer system
US8417432B2 (en) * 2008-04-30 2013-04-09 United Technologies Corporation Method for calculating confidence on prediction in fault diagnosis systems
US20100017092A1 (en) * 2008-07-16 2010-01-21 Steven Wayne Butler Hybrid fault isolation system utilizing both model-based and empirical components
US8650411B2 (en) * 2008-09-07 2014-02-11 Schweitzer Engineering Laboratories Inc. Energy management for an electronic device
WO2010027559A1 (en) * 2008-09-07 2010-03-11 Schweitzer Engineering Laboratories, Inc. Energy management for an electronic device
US8648700B2 (en) * 2009-06-23 2014-02-11 Bosch Automotive Service Solutions Llc Alerts issued upon component detection failure
US8386849B2 (en) * 2010-01-29 2013-02-26 Honeywell International Inc. Noisy monitor detection and intermittent fault isolation
US8862433B2 (en) 2010-05-18 2014-10-14 United Technologies Corporation Partitioning of turbomachine faults
US8621305B2 (en) 2010-07-08 2013-12-31 Honeywell International Inc. Methods systems and apparatus for determining whether built-in-test fault codes are indicative of an actual fault condition or a false alarm
DE102011079034A1 (en) 2011-07-12 2013-01-17 Siemens Aktiengesellschaft Control of a technical system
TWI825537B (en) * 2011-08-01 2023-12-11 以色列商諾威股份有限公司 Optical measurement system
US9386529B2 (en) 2012-09-06 2016-07-05 Schweitzer Engineering Laboratories, Inc. Power management in a network of stationary battery powered control, automation, monitoring and protection devices
NO342992B1 (en) * 2015-06-17 2018-09-17 Roxar Flow Measurement As Method of measuring metal loss from equipment in process systems
US10558559B2 (en) * 2016-07-25 2020-02-11 Oracle International Corporation Determining a test confidence metric for a testing application
US10459025B1 (en) 2018-04-04 2019-10-29 Schweitzer Engineering Laboratories, Inc. System to reduce start-up times in line-mounted fault detectors
US11397198B2 (en) 2019-08-23 2022-07-26 Schweitzer Engineering Laboratories, Inc. Wireless current sensor
US11105834B2 (en) 2019-09-19 2021-08-31 Schweitzer Engineering Laboratories, Inc. Line-powered current measurement device
WO2021102037A1 (en) * 2019-11-21 2021-05-27 Conocophillips Company Well annulus pressure monitoring
US11449407B2 (en) 2020-05-28 2022-09-20 Bank Of America Corporation System and method for monitoring computing platform parameters and dynamically generating and deploying monitoring packages
CN112445685A (en) * 2020-11-27 2021-03-05 平安普惠企业管理有限公司 Method, device and storage medium for dynamically updating alarm threshold
CN113486586B (en) * 2021-07-06 2023-09-05 新奥新智科技有限公司 Device health state evaluation method and device, computer device and storage medium
CN117349129B (en) * 2023-12-06 2024-03-29 广东无忧车享科技有限公司 Abnormal optimization method and system for vehicle sales process service system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4517468A (en) * 1984-04-30 1985-05-14 Westinghouse Electric Corp. Diagnostic system and method
US4644479A (en) * 1984-07-31 1987-02-17 Westinghouse Electric Corp. Diagnostic apparatus
US4847795A (en) * 1987-08-24 1989-07-11 Hughes Aircraft Company System for diagnosing defects in electronic assemblies
US4985857A (en) * 1988-08-19 1991-01-15 General Motors Corporation Method and apparatus for diagnosing machines
US5099436A (en) * 1988-11-03 1992-03-24 Allied-Signal Inc. Methods and apparatus for performing system fault diagnosis
US5130936A (en) * 1990-09-14 1992-07-14 Arinc Research Corporation Method and apparatus for diagnostic testing including a neural network for determining testing sufficiency

Also Published As

Publication number Publication date
US5293323A (en) 1994-03-08

Similar Documents

Publication Publication Date Title
CA2077772A1 (en) Method for fault diagnosis by assessment of confidence measure
CA2387929C (en) Method and apparatus for diagnosing difficult to diagnose faults in a complex system
US6240343B1 (en) Apparatus and method for diagnosing an engine using computer based models in combination with a neural network
US6567795B2 (en) Artificial neural network and fuzzy logic based boiler tube leak detection systems
US6226760B1 (en) Method and apparatus for detecting faults
Grimmelius et al. Three state-of-the-art methods for condition monitoring
CN112308147B (en) Rotary machinery fault diagnosis method based on multi-source domain anchor adapter integrated migration
JPH08202444A (en) Method and device for diagnosing abnormality of machine facility
KR920011084B1 (en) Elevator testing apparatus
Lu et al. Application of autoassociative neural network on gas-path sensor data validation
SE463338B (en) SETTING TO MONITOR AND / OR DIAGNOSTIC CURRENT OPERATING CONDITIONS WITH COMPLIED MACHINES
CN115268417A (en) Self-adaptive ECU fault diagnosis control method
CN106339720B (en) A kind of abatement detecting method of automobile engine
Simpson et al. System testability assessment for integrated diagnostics
CN109990803A (en) The method, apparatus of method, apparatus and the sensor processing of detection system exception
Kavuri et al. Combining pattern classification and assumption-based techniques for process fault diagnosis
Ruff et al. Consideration of failure diagnosis in conceptual design of mechanical systems
Harris Human performance testing
CN114943258A (en) Fault diagnosis method and system based on DTW and small sample learning
Martin et al. Diagnostics of a coolant system via neural networks
CN111881988B (en) Heterogeneous unbalanced data fault detection method based on minority class oversampling method
Chin et al. A method of fault signature extraction for improved diagnosis
Wilkinson MIND: an inside look at an expert system for electronic diagnosis
CN113094816A (en) Method for constructing comprehensive working condition vibration spectrum and long-life test spectrum of armored vehicle
Sheppard et al. Managing conflict in system diagnosis

Legal Events

Date Code Title Description
FZDE Discontinued