US20060111927A1 - System, method and program for estimating risk of disaster in infrastructure - Google Patents


Info

Publication number
US20060111927A1
Authority
US
United States
Prior art keywords
polynomial
infrastructure
disasters
previous
risk
Legal status
Abandoned
Application number
US11/272,299
Inventor
Etienne Sereville
Current Assignee
Kyndryl Inc
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DE SEREVILLE, ETIENNE
Publication of US20060111927A1 publication Critical patent/US20060111927A1/en
Priority to US11/700,509 priority Critical patent/US20070162135A1/en
Priority to US12/316,789 priority patent/US7988735B2/en
Priority to US14/260,852 priority patent/US9747151B2/en
Priority to US15/636,884 priority patent/US10725850B2/en
Assigned to KYNDRYL, INC. reassignment KYNDRYL, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/079 Root cause analysis, i.e. error or fault diagnosis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0635 Risk analysis of enterprise or organisation activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 Insurance

Definitions

  • Program 500 then uses an approximation of Tchebychev's polynomials to create a modified polynomial 250 using points which have been identified as peaks and the start and end point.
  • Program 500 further modifies polynomial 250 by repeating the process described above to identify peaks. In this case there would be no further improvement, but in other cases the process will preserve only the highest peaks.
  • polynomial curves 340 show two collections of disaster information for two organizations (first origin and second origin) with each disaster 310 shown as a point on the polynomial curve 340 .
  • Program 500 identifies represented peaks 320 by the process described above to identify peaks from recovered data points.
  • Each polynomial curve 340 has ends 330 .
  • the polynomial curves 450 represent the two polynomial curves of FIG. 3 ( 340 ).
  • the first origin has disaster points 420 and the second origin has disaster points 430 .
  • Program 500 identifies peaks and ends of each of the polynomial curves 450 and extracts represented peaks.
  • the new ends 440 are the ends from either of the polynomial curves 450 which are of greater gravity or greater extremity of time.
  • Program 500 then uses the represented peaks from each polynomial curve 450 along with the new ends 440 to generate a merged polynomial 460 which represents disaster from the combined information of the first and second origin.
  • a data logger 602 enables information, typically consisting of logged events, to be collected from an infrastructure network 604 .
  • the information from the data logger 602 is stored in a data storage 606 .
  • a disaster identification program 608 assesses the logged events to determine whether the event is deemed a disaster. For example, if the logged event indicates a failure of system hardware or software it may be logged as a disaster.
  • a disaster gravity program 610 assesses each identified disaster generating disaster data. For example, as described previously, a disaster may be assigned a value between “1” and “10” corresponding to level of impact on the infrastructure 604 . The disaster data is then inputted to Tchebychev analysis program 500 as described previously.
  • the Tchebychev analysis program generates a risk analysis equation or data.
  • Program 500 then analyzes the risk analysis data to identify one or more high risk disaster events. For example, after the Tchebychev analysis program 500 has completed the risk analysis, program 500 typically identifies a number of peaks corresponding to high risk events 612 . These peaks/events can be identified as disasters which generate significant risk to the infrastructure 604 . Measures can then be automatically, or otherwise, taken to minimise further risk.
  • the computer system 20 could instigate additional services on other computers or servers of the network 604 to provide additional redundancy to cope with a particular high risk event.
  • the high risk events 612 can also be displayed on a computer screen, or any type of visual display unit, to allow a user to view and obtain more information about the high risk events 612 . In this manner, a disaster of greatest potential risk can be identified automatically.
  • the present invention may be embodied in a computer program (including program modules 608 , 610 , 500 and 612 ) comprising instructions which, when executed in computer 20 , perform the functions of the system or method as described above.
  • the computer 20 includes a standard CPU 12 , operating system 14 , RAM 16 and ROM 18 .
  • the program modules 608 , 610 , 500 and 612 can be loaded into computer 20 from a computer readable medium such as a magnetic disk or tape, optical medium, DVD, or network download media (such as including a TCP/IP adapter card 21 ).
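The merging of two origins' curves described for FIG. 4, taking the represented peaks from each polynomial curve 450 plus new ends 440 at the greater extremity of time, could be sketched as below. The function name and the sample numbers are illustrative assumptions; the patent does not spell out the implementation:

```python
def merge_origins(peaks_a, peaks_b, ends_a, ends_b):
    """Combine the represented peaks of two polynomial curves and choose
    new ends at the greater extremity of time from either curve,
    returning the point set from which a merged polynomial would be
    regenerated. Points are (time, gravity) pairs."""
    new_start = min([ends_a[0], ends_b[0]], key=lambda p: p[0])   # earliest time
    new_finish = max([ends_a[1], ends_b[1]], key=lambda p: p[0])  # latest time
    return sorted(set([new_start] + peaks_a + peaks_b + [new_finish]))

# Hypothetical represented peaks and ends for two origins:
merged = merge_origins(
    peaks_a=[(2, 6.0), (5, 8.0)],
    peaks_b=[(3, 7.0)],
    ends_a=((0, 1.0), (6, 2.0)),
    ends_b=((1, 2.0), (8, 3.0)),
)
# merged spans the earliest start (0, 1.0) to the latest end (8, 3.0)
```

The merged point set would then be passed back through the Tchebychev approximation to obtain the merged polynomial 460.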

Abstract

Method, system and computer program for estimating risk of a future disaster of an infrastructure. Times of previous, respective disasters of the infrastructure are identified. Respective severities of the previous disasters are determined. Risk of a future disaster of the infrastructure is estimated by determining a relationship between the previous disasters, their respective severities and their respective times of occurrence. The risk can be estimated by generating a polynomial linking severity and time of occurrence of each of the previous disasters. The polynomial can be generated by approximating a Tchebychev polynomial.

Description

    TECHNICAL FIELD
  • The present invention relates to estimation of disasters in infrastructures, such as computer networks.
  • BACKGROUND
  • Risk analysis predicts likelihood of disasters, such as severe failures of an Information Technology (“IT”) infrastructure, that an organization may face, and the consequences of such failures. IT disasters, such as an e-mail server failure or other computer network failure, can impact the organization's ability to operate efficiently.
  • Known cindynic theory (science of danger) is applicable in different domains. For example, cindynics has been used to detect industrial risks and can also be used in the area of computer network (including computer hardware and software) risks. According to the modern theory of description, a hazardous situation (cindynic situation) has been defined if the field of the “hazards study” is clearly identified by limits in time (life span), limits in space (boundaries), and limits in the participants' networks involved and by the perspective of the observer studying the system. At this stage of the known development of the sciences of hazards, the perspective can follow five main dimensions.
  • A first dimension comprises memory, history and statistics (a space of statistics). The first dimension consists of all the information contained in databases of large institutions constituting feedback from experience (for example, electricity of France power plants, Air France flights incidents, forest fires monitored by the Sophia Antipolis center of the Ecole des Mines de Paris, and claims data gathered by insurers and reinsurers).
  • A second dimension comprises representations and models drawn from the facts (a space of models). The second dimension is the scientific body of knowledge that allows computation of possible effects using physical principles, chemical principles, material resistance, propagation, contagion, explosion and geo-cindynic principles (for example, inundation, volcanic eruptions, earthquakes, landslides, tornadoes and hurricanes).
  • A third dimension comprises goals & objectives (a space of goals). The third dimension requires a precise definition by all the participants and networks involved in the cindynic situation of their reasons for living, acting and working. It is arduous to clearly express why participants act as they do and what motivates them. For example, there are two common objectives for risk management—“survival” and “continuity of customer (public) service”. These two objectives lead to fundamentally different cindynic attitudes. The organization, or its environment, will have to harmonize these two conflicting goals.
  • A fourth dimension comprises norms, laws, rules, standards, deontology, compulsory or voluntary, controls, etc. (a space of rules). The fourth dimension comprises all the normative set of rules that makes life possible in a given society. For example, society determined a need for a traffic code when there were enough automobiles to make it impossible to rely on the courtesy of each individual driver; the code is compulsory and makes driving on the road reasonably safe and predictable. The rules for behaving in society are aimed at reducing the risk of injuring other people and establishing a society. On the other hand, there are situations in which the codification is not yet clarified. For example, skiers on the same ski-slope may have different skiing techniques and endanger each other. In addition, some skiers use equipment not necessarily compatible with the safety of others (cross-country skis, mono-skis, etc.)
  • A fifth dimension comprises value systems (a space of values). The fifth dimension is the set of fundamental objectives and values shared by a group of individuals or other collective participants involved in a cindynic situation. For example, protection of a nation from an invader was a fundamental objective and value, and meant protection of the physical resources as well as the shared heritage or values. Protection of such values may lead the population to accept heavy sacrifices.
  • A number of general principles, called axioms, have been developed within cindynics. The cindynic axioms explain the emergence of dissonances and deficits.
  • CINDYNIC AXIOM 1—RELATIVITY: The perception of danger varies according to each participant's situation.
  • Therefore, there is no “objective” measure of danger. This principle is the basis for the concept of situation.
  • CINDYNIC AXIOM 2—CONVENTION: The measures of risk (traditionally measured by the vector Frequency-Severity) depend on convention between participants.
  • CINDYNIC AXIOM 3—GOALS DEPENDENCY: Goals can directly impact the assessment of risks. The participants may have conflicting perceived objectives. It is essential to try to define and prioritise the goals of the various participants involved in the situation. Insufficient clarification of goals is a current pitfall in complex systems.
  • CINDYNIC AXIOM 4—AMBIGUITY: There is usually a lack of clarity in the five dimensions previously mentioned. A major task of prevention is to reduce these ambiguities.
  • CINDYNIC AXIOM 5—AMBIGUITY REDUCTION: Accidents and catastrophes are accompanied by brutal transformations in the five dimensions. The reduction of ambiguity (or contradictions) of the content of the five dimensions will happen when they are excessive. This reduction can be involuntary and brutal, resulting in an accident, or voluntary and progressive achieved through a prevention process.
  • CINDYNIC AXIOM 6—CRISIS: A crisis results from a tear in the social cloth. This means a dysfunction in the networks of the participants involved in a given situation. Crisis management may comprise an emergency reconstitution of networks.
  • CINDYNIC AXIOM 7—AGO-ANTAGONISTIC CONFLICT: Any therapy is inherently dangerous. Human actions and medications are accompanied by inherent dangers. There is always a curing aspect, reducing danger (cindynolitic), and an aggravating factor, creating new danger (cindynogenetic).
  • The main utility of these principles is to reduce time lost in unproductive discussions on the following subjects:
      • How accurate are the quantitative evaluations of catastrophes—Quantitative measures result from conventions, scales or unit of measures (axiom 2); and
      • Negative effects of proposed prevention measures—In any action positive and negative impacts are intertwined (axiom 7).
        Consequently, Risk Analysis, viewed through cindynic theory, takes into account the frequency with which a disaster appears (probability), and its real impact on the participant or organization (damage).
  • FIG. 1 shows a known “Farmer” curve where disasters are placed on a graph showing the relationship between probability and damage.
  • Disaster study is a part of Risk Analysis; its aim is to follow the disaster evolution. Damages are rated in terms of cost or rate over time. Let “d” denote the damage of a given disaster and “f” denote the frequency of such a disaster. From a quantitative point of view, it is common to define a rating “R” of the associated risk as: R = d × f. In practice, often, the perception of risk is such that the relevance given to the damaging consequences “d” is far greater than that given to the probability of occurrence “f”, so that “R = d × f” is slightly modified to: R = d^k × f, with k > 1. So, numerically larger values of risk are associated with larger consequences.
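As a quick numerical illustration of this weighting (the numbers are hypothetical, not taken from the patent), the rating R = d^k × f with k > 1 can be computed as:

```python
def risk_rating(damage, frequency, k=1.5):
    # Weighted rating R = d**k * f; with k > 1, high-damage
    # disasters dominate the rating even when they are rare.
    return damage ** k * frequency

# Hypothetical disasters: damage on a 1-10 scale, frequency per year.
frequent_minor = risk_rating(damage=2, frequency=5)  # ~14.1
rare_severe = risk_rating(damage=9, frequency=1)     # 27.0
```

With k = 1 these two events would rate 10 and 9 respectively; raising k to 1.5 makes the rare but severe disaster rate higher, which is the stated intent of the modification.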
  • Disasters are normally identified by IT infrastructure components. These components follow rules or parameters and may generate log traces. Typically, disaster information is represented in the form of log files. The disaster rating and scale are relative rather than absolute. The scale may be, for example, values between “1” and “10”: “1” being a minor disaster of minimal impact to the disaster data group and “10” being a major disaster having widespread impact. The logging function depends on the needs of monitoring systems and data volumes and, in some cases, on delays due to legal obligations.
  • The known Risk Analysis uses a simple comparison between values found by the foregoing operations, in order to extract statistics. Also, a full Risk Analysis of an IT infrastructure required a one-to-one analysis of all the data held on disasters. By comparing each disaster with each of the other disasters it was possible to calculate the likelihood of further disasters. This process is computationally expensive and also requires a significant amount of a computer's Random Access Memory (RAM).
  • An object of the present invention is to estimate risk of disaster of an infrastructure.
  • Another object of the present invention is to facilitate estimation of risk of disaster of an infrastructure.
  • SUMMARY OF THE INVENTION
  • The present invention is directed to a method, system and computer program for estimating risk of a future disaster of an infrastructure. Times of previous, respective disasters of the infrastructure are identified. Respective severities of the previous disasters are determined. Risk of a future disaster of the infrastructure is estimated by determining a relationship between the previous disasters, their respective severities and their respective times of occurrence.
  • In accordance with a feature of the present invention, the risk is estimated by generating a polynomial linking severity and time of occurrence of each of the previous disasters. The polynomial can be generated by approximating a Tchebychev polynomial.
  • In accordance with other features of the present invention, the risk is also estimated by modifying the polynomial by extracting peaks in a curve representing the polynomial, regenerating the polynomial using the extracted peaks and repeating the modifying step until a number of extracted peaks is less than or equal to a predetermined value.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example of a prior art Farmer's curve.
  • FIG. 2 illustrates the result of using the Tchebychev polynomial approximation.
  • FIG. 3 illustrates two polynomial curves showing the collected disaster information from a first origin and a second origin.
  • FIG. 4 illustrates the combining of the polynomial curves of FIG. 3 according to an embodiment of the invention.
  • FIG. 5 is a flow diagram, including a flowchart and a block diagram, illustrating a program and system for generating polynomials according to the present invention.
  • FIG. 6 illustrates a system according to the present invention for estimating risk of disaster of an infrastructure.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention will now be described in detail with reference to the Figures. A Tchebychev analysis program 500 (shown in FIGS. 5 and 6) executing in a risk estimation computer 20 generates a continuous polynomial curve with a corresponding polynomial equation. Program 500 takes derivatives of the polynomial equation. Where the derivative of the continuous curve is null, the curve is at a peak or trough; the peaks correspond to local maxima of the risk. The construction of the polynomial equation is shown below.
    For 1 ≤ i, j ≤ n, a Tchebychev polynomial having “n” points is given by: P_n(x) = Σ_{i=1..n} y_i · Π_{j=1..n, j≠i} (x − x_j) / (x_i − x_j)
    For example, to calculate the polynomial between two points, Point1 and Point2, having coordinates (x_1, y_1) and (x_2, y_2) respectively in space (x, y), the formula is: n = 2, P_2(x) = y_1 (x − x_2)/(x_1 − x_2) + y_2 (x − x_1)/(x_2 − x_1)
    where P_2(x_1) = y_1 and P_2(x_2) = y_2.
    To calculate the polynomial between 3 points, Point1(x_1, y_1), Point2(x_2, y_2) and Point3(x_3, y_3), the formula is: n = 3, P_3(x) = y_1 (x − x_2)(x − x_3)/((x_1 − x_2)(x_1 − x_3)) + y_2 (x − x_1)(x − x_3)/((x_2 − x_1)(x_2 − x_3)) + y_3 (x − x_1)(x − x_2)/((x_3 − x_1)(x_3 − x_2))
    where P_3(x_1) = y_1, P_3(x_2) = y_2 and P_3(x_3) = y_3.
    The Tchebychev polynomial is a continuous curve between “n” points.
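    The n-point construction above can be evaluated directly by summing the weighted basis products. The sketch below is illustrative, not part of the patent; the function name `interpolate` is a hypothetical helper:

```python
def interpolate(points, x):
    """Evaluate the n-point interpolating polynomial P_n at x.

    points -- list of (x_i, y_i) pairs with distinct x_i values.
    Implements P_n(x) = sum_i y_i * prod_{j != i} (x - x_j)/(x_i - x_j).
    """
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis factor for node i
        total += term
    return total

# The curve passes exactly through each supplied point:
# interpolate([(1, 1), (2, 4), (3, 9)], 2) == 4
```

    Evaluating between the supplied points gives the continuous curve used for the derivative analysis.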
  • Referring to FIG. 5, Tchebychev analysis program 500 receives identified disasters data 510 from an infrastructure, which is input to a Tchebychev approximation module 520. The Tchebychev module 520 calculates a polynomial from the identified disasters data 510. The polynomial is input to a derivative module 530, which identifies candidate peaks and troughs as the points having a null derivative. The points having a null derivative are forwarded to a peaks (or tops) module 540. The peaks module 540 identifies the peaks by examining the sign of the derivative before and after each of the identified points: where the sign of the derivative is positive before and negative after an identified point, a peak has been found. A filter module 550 counts the number of identified peaks and compares this count to a predetermined maximum. If there are more identified peaks than the maximum, the identified peaks are input back to the Tchebychev module 520 and the process is repeated. If the number of peaks is less than or equal to the maximum, the process stops (step 560).
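  The loop through modules 520-550 can be sketched in simplified, discrete form. This is a hypothetical rendering: the patent's modules operate on the continuous polynomial and its derivative, whereas the sketch below applies the same positive-before/negative-after test directly to sampled (time, severity) points:

```python
def local_peaks(points):
    """Indices whose y value exceeds both neighbours -- the discrete
    analogue of a null derivative that is positive before and
    negative after the point (peaks module 540)."""
    return [i for i in range(1, len(points) - 1)
            if points[i - 1][1] < points[i][1] > points[i + 1][1]]

def filter_peaks(points, max_peaks):
    """Repeat the extract-and-regenerate loop of FIG. 5 until the
    number of surviving peaks is <= max_peaks (filter module 550)."""
    while True:
        peaks = local_peaks(points)
        if len(peaks) <= max_peaks:
            return [points[i] for i in peaks]
        # Keep only the peaks plus the curve's two ends, then iterate;
        # minor peaks are eliminated on each pass.
        points = [points[0]] + [points[i] for i in peaks] + [points[-1]]
```

  With each pass, only the dominant peaks survive, mirroring how the regenerated polynomial retains fewer, higher peaks.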
  • FIG. 2 illustrates an example of results produced by program 500. An identified disasters trace 210 plots the severity of each disaster against its time of occurrence. Program 500 generates an approximation of Tchebychev's polynomials to obtain a first polynomial equation represented by a first polynomial curve 220. Program 500 then takes derivatives of the first polynomial equation to identify the points at which the derivative is equal to zero. Null derivative points 230 correspond to peaks and troughs on the polynomial curve. Program 500 identifies peaks by analyzing each null derivative point 230: if the values of the polynomial 220 before and after a null derivative point 230 are lower than the polynomial value at that point, a peak is identified. In this example, program 500 also identifies the extracted peaks 240 from the polynomial 220 through comparison with the identified disasters trace 210. Where a null derivative point 230 is identified as a peak, program 500 compares it to the values of the identified disasters trace 210 before and after that point. Thus, program 500 identifies the extracted peaks 240 in FIG. 2. For example, point A is one of extracted peaks 240, B is the null derivative point 230 preceding A, and C is the null derivative point 230 following A. The derivative is positive between B and A, and negative between A and C, so point A is a peak. Furthermore, the values of the identified disasters trace 210 before and after point A are less than the value at point A. Therefore point A is an extracted peak 240.
  • Program 500 then uses an approximation of Tchebychev's polynomials to create a modified polynomial 250 from the points identified as peaks together with the start and end points. Program 500 further modifies polynomial 250 by repeating the peak-identification process described above. In this example there would be no further improvement, but in other cases the repetition preserves only the highest peaks.
  • Referring now to FIG. 3, polynomial curves 340 show two collections of disaster information for two organizations (a first origin and a second origin), with each disaster 310 shown as a point on its polynomial curve 340. Program 500 identifies represented peaks 320 from the recovered data points by the peak-identification process described above. Each polynomial curve 340 has ends 330.
  • Referring now to FIG. 4, the polynomial curves 450 represent the two polynomial curves 340 of FIG. 3. The first origin has disaster points 420 and the second origin has disaster points 430. Program 500 identifies the peaks and ends of each of the polynomial curves 450 and extracts the represented peaks. The new ends 440 are the ends, taken from either of the polynomial curves 450, of greater gravity or greater extremity of time. Program 500 then uses the represented peaks from each polynomial curve 450, together with the new ends 440, to generate a merged polynomial 460 which represents the disasters from the combined information of the first and second origins.
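  The merge of FIG. 4 then amounts to pooling the represented peaks from both origins and choosing new ends. The following is a hypothetical sketch in which the ends of widest time extent are kept (the patent also allows choosing ends by greater gravity); points are (time, gravity) pairs:

```python
def merge_curves(peaks_a, ends_a, peaks_b, ends_b):
    """Combine represented peak points from two origins into one set.

    ends_a / ends_b -- (start_point, end_point) tuples for each curve.
    The new ends are the earliest start and the latest end across both
    curves (greater extremity of time); the result, sorted by time, is
    ready to be re-interpolated into the merged polynomial.
    """
    start = min(ends_a[0], ends_b[0], key=lambda p: p[0])
    end = max(ends_a[1], ends_b[1], key=lambda p: p[0])
    return sorted(set(peaks_a + peaks_b + [start, end]))
```

  Feeding the merged point set back through the polynomial approximation yields the single curve 460 covering both origins.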
  • Referring now to FIG. 6, a data logger 602 collects information, typically consisting of logged events, from an infrastructure network 604. The information from the data logger 602 is stored in a data storage 606. A disaster identification program 608 assesses the logged events to determine whether each event is deemed a disaster. For example, if a logged event indicates a failure of system hardware or software, it may be logged as a disaster. A disaster gravity program 610 assesses each identified disaster, generating disaster data. For example, as described previously, a disaster may be assigned a value between "1" and "10" corresponding to its level of impact on the infrastructure 604. The disaster data is then input to Tchebychev analysis program 500 as described previously. The Tchebychev analysis program generates a risk analysis equation or data. Program 500 then analyzes the risk analysis data to identify one or more high risk disaster events. For example, after the Tchebychev analysis program 500 has completed the risk analysis, program 500 typically identifies a number of peaks corresponding to high risk events 612. These peaks/events can be identified as disasters which pose significant risk to the infrastructure 604. Measures can then be taken, automatically or otherwise, to minimize further risk. For example, the computer system 20 could instigate additional services on other computers or servers of the network 604 to provide additional redundancy to cope with a particular high risk event. The high risk events 612 can also be displayed on a computer screen, or any type of visual display unit, to allow a user to view and obtain more information about the high risk events 612. In this manner, the disaster of greatest potential risk can be identified automatically.
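  The front end of FIG. 6 (modules 608 and 610) might look like the following. The event fields and the 1-to-10 gravity rule here are illustrative assumptions, since the patent leaves the classification criteria open:

```python
def identify_disasters(events):
    """Select logged events deemed disasters (module 608).

    Hypothetical rule: any event recording a hardware or software
    failure counts as a disaster.
    """
    return [e for e in events
            if e.get("kind") in ("hw_failure", "sw_failure")]

def assign_gravity(disaster):
    """Assign a gravity value between 1 and 10 (module 610).

    Illustrative scale: gravity grows with the number of affected
    components, clamped to the 1-10 range used in the text.
    """
    return max(1, min(10, disaster.get("affected", 1)))
```

  The resulting (time, gravity) pairs are what the Tchebychev analysis program 500 consumes.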
  • The present invention may be embodied in a computer program (including program modules 608, 610, 500 and 612) comprising instructions which, when executed in computer 20, perform the functions of the system or method as described above. The computer 20 includes a standard CPU 12, operating system 14, RAM 16 and ROM 18. The program modules 608, 610, 500 and 612 can be loaded into computer 20 from a computer readable medium such as a magnetic disk or tape, optical medium or DVD, or downloaded over a network (such as via TCP/IP adapter card 21).
  • Improvements and modifications may be incorporated without departing from the scope of the present invention.

Claims (20)

1. A method of estimating risk of a future disaster of an infrastructure, said method comprising the steps of:
identifying times of previous, respective disasters of said infrastructure;
determining respective severities of said previous disasters; and
estimating risk of a future disaster of said infrastructure by determining a relationship between said previous disasters, their respective severities and their respective times of occurrence.
2. A method as claimed in claim 1, wherein the step of estimating risk comprises the step of generating a polynomial linking severity and time of occurrence of each of said previous disasters.
3. A method as claimed in claim 2, wherein the step of estimating risk comprises the step of modifying said polynomial by extracting peaks in a curve representing said polynomial, regenerating the polynomial using the extracted peaks and repeating the modifying step until a number of extracted peaks is less than or equal to a predetermined value.
4. A method as claimed in claim 2, wherein two or more of said previous disasters occur within a predetermined time period, and said polynomial is based on a most severe one of said two or more previous disasters.
5. A method as claimed in claim 2, wherein the step of generating a polynomial comprises the step of approximating a Tchebychev polynomial.
6. A method as claimed in claim 1, wherein the infrastructure is an IT infrastructure.
7. A method as claimed in claim 6, wherein the IT infrastructure comprises a network of computers, hardware and/or software components.
8. A method as claimed in claim 7, further comprising the step of sending instructions to other computers of the network to minimize occurrence of high risk disasters in the future.
9. A system for estimating risk of a future disaster of an infrastructure, said system comprising:
means for identifying times of previous, respective disasters of said infrastructure;
means for determining respective severities of said previous disasters; and
means for estimating risk of a future disaster of said infrastructure by determining a relationship between said previous disasters, their respective severities and their respective times of occurrence.
10. A system as claimed in claim 9, wherein the means for estimating risk comprises means for generating a polynomial linking severity and time of occurrence of each of said previous disasters.
11. A system as claimed in claim 10, wherein the means for estimating risk further comprises means for modifying said polynomial by extracting peaks in a curve representing said polynomial, regenerating the polynomial using the extracted peaks and repeating the modifying until a number of extracted peaks is less than or equal to a predetermined value.
12. A system as claimed in claim 10, wherein two or more of said previous disasters occur within a predetermined time period, and said polynomial is based on a most severe one of said two or more previous disasters.
13. A system as claimed in claim 10, wherein the means for generating a polynomial comprises means for approximating a Tchebychev polynomial.
14. A system as claimed in claim 9, wherein the infrastructure is an IT infrastructure.
15. A system as claimed in claim 14, wherein the IT infrastructure comprises a network of computers, hardware and/or software components.
16. A system as claimed in claim 15, further comprising means for sending instructions to other computers of the network to minimize occurrence of high risk disasters in the future.
17. A computer program product for estimating risk of a future disaster of an infrastructure, said computer program product comprising:
a computer readable medium;
first program instructions to identify times of previous, respective disasters of said infrastructure;
second program instructions to determine respective severities of said previous disasters; and
third program instructions to estimate risk of a future disaster of said infrastructure by determining a relationship between said previous disasters, their respective severities and their respective times of occurrence; and wherein
said first, second and third program instructions are stored on said medium.
18. A computer program product as claimed in claim 17, wherein said third program instructions estimate risk by generating a polynomial linking severity and time of occurrence of each of said previous disasters.
19. A computer program product as claimed in claim 18, wherein said third program instructions further estimate risk by modifying said polynomial by extracting peaks in a curve representing said polynomial, regenerating the polynomial using the extracted peaks and repeating the modifying step until a number of extracted peaks is less than or equal to a predetermined value.
20. A computer program product as claimed in claim 18, wherein said third program instructions generate the polynomial by approximating a Tchebychev polynomial.
US11/272,299 2004-11-19 2005-11-10 System, method and program for estimating risk of disaster in infrastructure Abandoned US20060111927A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/260,852 US9747151B2 (en) 2004-11-19 2014-04-24 System, method and program for estimating risk of disaster in infrastructure
US15/636,884 US10725850B2 (en) 2004-11-19 2017-06-29 Estimating risk to a computer network from a high risk failure that occurred on a first or second computer system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04300803.6 2004-11-19
EP04300803 2004-11-19

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/260,852 Continuation US9747151B2 (en) 2004-11-19 2014-04-24 System, method and program for estimating risk of disaster in infrastructure

Publications (1)

Publication Number Publication Date
US20060111927A1 true US20060111927A1 (en) 2006-05-25

Family

ID=36462011

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/272,299 Abandoned US20060111927A1 (en) 2004-11-19 2005-11-10 System, method and program for estimating risk of disaster in infrastructure
US14/260,852 Expired - Fee Related US9747151B2 (en) 2004-11-19 2014-04-24 System, method and program for estimating risk of disaster in infrastructure
US15/636,884 Active 2026-07-17 US10725850B2 (en) 2004-11-19 2017-06-29 Estimating risk to a computer network from a high risk failure that occurred on a first or second computer system

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/260,852 Expired - Fee Related US9747151B2 (en) 2004-11-19 2014-04-24 System, method and program for estimating risk of disaster in infrastructure
US15/636,884 Active 2026-07-17 US10725850B2 (en) 2004-11-19 2017-06-29 Estimating risk to a computer network from a high risk failure that occurred on a first or second computer system

Country Status (1)

Country Link
US (3) US20060111927A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090070170A1 (en) * 2007-09-12 2009-03-12 Krishnamurthy Natarajan System and method for risk assessment and management
US20150025933A1 (en) * 2013-07-22 2015-01-22 Alex Daniel Andelman Value at risk insights engine
US20150134399A1 (en) * 2013-11-11 2015-05-14 International Business Machines Corporation Information model for supply chain risk decision making
US9747151B2 (en) 2004-11-19 2017-08-29 International Business Machines Corporation System, method and program for estimating risk of disaster in infrastructure
US11818205B2 (en) 2021-03-12 2023-11-14 Bank Of America Corporation System for identity-based exposure detection in peer-to-peer platforms

Citations (10)

Publication number Priority date Publication date Assignee Title
US5500529A (en) * 1994-06-28 1996-03-19 Saint-Gobain/Norton Industrial Ceramics Corporation Apparatus and method for screening abnormal glow curves
US5594638A (en) * 1993-12-29 1997-01-14 First Opinion Corporation Computerized medical diagnostic system including re-enter function and sensitivity factors
US6363496B1 (en) * 1999-01-29 2002-03-26 The United States Of America As Represented By The Secretary Of The Air Force Apparatus and method for reducing duration of timeout periods in fault-tolerant distributed computer systems
US20050027571A1 (en) * 2003-07-30 2005-02-03 International Business Machines Corporation Method and apparatus for risk assessment for a disaster recovery process
US20050096953A1 (en) * 2003-11-01 2005-05-05 Ge Medical Systems Global Technology Co., Llc Methods and apparatus for predictive service for information technology resource outages
US20050144188A1 (en) * 2003-12-16 2005-06-30 International Business Machines Corporation Determining the impact of a component failure on one or more services
US20050154561A1 (en) * 2004-01-12 2005-07-14 Susan Legault Method for performing failure mode and effects analysis
US20050246590A1 (en) * 2004-04-15 2005-11-03 Lancaster Peter C Efficient real-time analysis method of error logs for autonomous systems
US20060100958A1 (en) * 2004-11-09 2006-05-11 Feng Cheng Method and apparatus for operational risk assessment and mitigation
US7221975B2 (en) * 2003-12-04 2007-05-22 Maquet Critical Care Ab Signal filtering using orthogonal polynomials and removal of edge effects

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
AU6265999A (en) * 1998-09-24 2000-04-10 Brigit Ananya Computer curve construction system and method
US20020147803A1 (en) 2001-01-31 2002-10-10 Dodd Timothy David Method and system for calculating risk in association with a security audit of a computer network
US7865427B2 (en) 2001-05-30 2011-01-04 Cybersource Corporation Method and apparatus for evaluating fraud risk in an electronic commerce transaction
US8484066B2 (en) 2003-06-09 2013-07-09 Greenline Systems, Inc. System and method for risk detection reporting and infrastructure
US20060111927A1 (en) 2004-11-19 2006-05-25 International Business Machines Corporation System, method and program for estimating risk of disaster in infrastructure

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
US5594638A (en) * 1993-12-29 1997-01-14 First Opinion Corporation Computerized medical diagnostic system including re-enter function and sensitivity factors
US5500529A (en) * 1994-06-28 1996-03-19 Saint-Gobain/Norton Industrial Ceramics Corporation Apparatus and method for screening abnormal glow curves
US6363496B1 (en) * 1999-01-29 2002-03-26 The United States Of America As Represented By The Secretary Of The Air Force Apparatus and method for reducing duration of timeout periods in fault-tolerant distributed computer systems
US20050027571A1 (en) * 2003-07-30 2005-02-03 International Business Machines Corporation Method and apparatus for risk assessment for a disaster recovery process
US20050096953A1 (en) * 2003-11-01 2005-05-05 Ge Medical Systems Global Technology Co., Llc Methods and apparatus for predictive service for information technology resource outages
US7221975B2 (en) * 2003-12-04 2007-05-22 Maquet Critical Care Ab Signal filtering using orthogonal polynomials and removal of edge effects
US20050144188A1 (en) * 2003-12-16 2005-06-30 International Business Machines Corporation Determining the impact of a component failure on one or more services
US20050154561A1 (en) * 2004-01-12 2005-07-14 Susan Legault Method for performing failure mode and effects analysis
US20050246590A1 (en) * 2004-04-15 2005-11-03 Lancaster Peter C Efficient real-time analysis method of error logs for autonomous systems
US20060100958A1 (en) * 2004-11-09 2006-05-11 Feng Cheng Method and apparatus for operational risk assessment and mitigation

Cited By (8)

Publication number Priority date Publication date Assignee Title
US9747151B2 (en) 2004-11-19 2017-08-29 International Business Machines Corporation System, method and program for estimating risk of disaster in infrastructure
US10725850B2 (en) 2004-11-19 2020-07-28 International Business Machines Corporation Estimating risk to a computer network from a high risk failure that occurred on a first or second computer system
US20090070170A1 (en) * 2007-09-12 2009-03-12 Krishnamurthy Natarajan System and method for risk assessment and management
SG151122A1 (en) * 2007-09-12 2009-04-30 Natarajan Krishnamurthy System and method for risk assessment and management
US20150025933A1 (en) * 2013-07-22 2015-01-22 Alex Daniel Andelman Value at risk insights engine
US9336503B2 (en) * 2013-07-22 2016-05-10 Wal-Mart Stores, Inc. Value at risk insights engine
US20150134399A1 (en) * 2013-11-11 2015-05-14 International Business Machines Corporation Information model for supply chain risk decision making
US11818205B2 (en) 2021-03-12 2023-11-14 Bank Of America Corporation System for identity-based exposure detection in peer-to-peer platforms

Also Published As

Publication number Publication date
US9747151B2 (en) 2017-08-29
US20170329661A1 (en) 2017-11-16
US20140325289A1 (en) 2014-10-30
US10725850B2 (en) 2020-07-28

Similar Documents

Publication Publication Date Title
US10725850B2 (en) Estimating risk to a computer network from a high risk failure that occurred on a first or second computer system
Petak Emergency management: A challenge for public administration
Jordan et al. Operational earthquake forecasting can enhance earthquake preparedness
Aguirre Homeland security warnings: Lessons learned and unlearned
Birnbaum et al. Research and evaluations of the health aspects of disasters, Part VIII: Risk, risk reduction, risk management, and capacity building
Rivera et al. Resilience
CN116480412A (en) Mine disaster rescue method and device
CN113642926B (en) Method and device for risk early warning, electronic equipment and storage medium
Reuter et al. Informing the Population: Mobile Warning Apps
Teodorescu On the responses of social networks' to external events
US20100287010A1 (en) System, method and program for managing disaster recovery
Gromek Societal dimension of disaster risk reduction. Conceptual framework
Lichte et al. A study on the influence of uncertainties in physical security risk analysis
CN104301330A (en) Trap network detection method based on abnormal behavior monitoring and member intimacy measurement
RU2612943C1 (en) Multilevel navigation and information vehicle monitoring system
Drabek Some emerging issues in emergency management
Urbánek et al. Crisis interfaces investigation at process model of critical infrastructure subject
Oreko et al. An intervention theoretic modeling approach on the performance assessment of Federal Road Safety Corps in road traffic casualty reduction in Nigeria
Aiyuda et al. Medical Authority's Trust as Mediator of Risk Perception on Haze Mitigation Efforts
Zlateva et al. A method for risk assessment from natural disasters using an actuarial model
CN114331182A (en) Risk assessment method and system for public security high-risk personnel
Henry et al. Scenario-Based Modeling of Community Evacuation Vulnerability.
Mousavi et al. Analysis of the Human Behavior trapped in the Fire Hazards based on Protective Action Decision Model [PADM](Case Study: Office high-rise buildings in Tehran)
Sanjarinia Earthquake crisis management with emphasis on Sarpol-e Zahab earthquake
Corotis Resilience: communities are more than a portfolio of buildings

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DE SEREVILLE, ETIENNE;REEL/FRAME:017099/0404

Effective date: 20051018

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION

AS Assignment

Owner name: KYNDRYL, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:058213/0912

Effective date: 20211118