US20160371616A1 - Individualized Predictive Model & Workflow for an Asset

Individualized Predictive Model & Workflow for an Asset

Info

Publication number
US20160371616A1
US20160371616A1 (application US14/744,369)
Authority
US
United States
Prior art keywords
asset
individualized
data
workflow
model
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/744,369
Inventor
Brad Nicholas
Jason Kolb
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uptake Technologies Inc
Original Assignee
Uptake Technologies Inc
Application filed by Uptake Technologies Inc
Priority to US14/744,362 (US10176279B2)
Assigned to UPTAKE TECHNOLOGIES, INC. (Assignor: NICHOLAS, BRAD)
Priority to US14/963,207 (US10254751B2)
Assigned to UPTAKE TECHNOLOGIES, INC. (Assignor: KOLB, JASON)
Priority to CN201680043854.5A (CN107851233A)
Priority to KR1020187001578A (KR20180011333A)
Priority to AU2016277850A (AU2016277850A1)
Priority to CA2989806A (CA2989806A1)
Priority to EP16812206.7A (EP3311345A4)
Priority to PCT/US2016/037247 (WO2016205132A1)
Priority to JP2017565106A (JP2018519594A)
Priority to US15/185,524 (US10579750B2)
Publication of US20160371616A1
Priority to US15/599,360 (US10878385B2)
Priority to US15/696,137 (US20180247239A1)
Priority to HK18111155.8A (HK1251701A1)
Current legal status: Abandoned


Classifications

    • G06F 11/079 Root cause analysis, i.e. error or fault diagnosis
    • G06F 11/0748 Error or fault processing in a remote unit communicating with a single-box computer node experiencing an error/fault
    • G01D 3/08 Indicating or recording apparatus with provision for safeguarding the apparatus, e.g. against abnormal operation, against breakdown
    • G01M 99/005 Testing of complete machines, e.g. washing-machines or mobile phones
    • G01M 99/008 Testing of complete machines by doing functionality tests
    • G05B 19/18 Numerical control [NC], i.e. automatically operating machines, in particular machine tools, by means of programme data in numerical form
    • G05B 23/024 Quantitative history assessment, e.g. mathematical relationships between available data; statistical classifiers (e.g. Bayesian networks, linear regression); neural networks
    • G05B 23/0254 Model-based fault detection based on a quantitative model, e.g. observer, Kalman filter, residual calculation, neural networks
    • G05B 23/0275 Fault isolation and identification, e.g. classify fault; estimate cause or root of failure
    • G06F 11/008 Reliability or availability analysis
    • G06F 11/0709 Error or fault processing in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • G06F 11/0721 Error or fault processing within a central processing unit [CPU]
    • G06F 11/0751 Error or fault detection not based on redundancy
    • G06F 11/0754 Error or fault detection not based on redundancy, by exceeding limits
    • G06F 11/0772 Means for error signaling, e.g. using interrupts, exception flags, dedicated error registers
    • G06F 11/0787 Storage of error reports, e.g. persistent data storage, storage using memory protection
    • G06F 11/0793 Remedial or corrective actions
    • G06F 11/2007 Active fault-masking where interconnections or communication control functionality are redundant, using redundant communication media
    • G06F 11/26 Functional testing
    • G06F 11/263 Generation of test inputs, e.g. test vectors, patterns or sequences
    • G06N 5/02 Knowledge representation; Symbolic representation
    • G06N 5/04 Inference or reasoning models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/06312 Adjustment or analysis of established resource schedule, e.g. resource or task levelling, or dynamic rescheduling
    • G06Q 10/0633 Workflow analysis
    • G06Q 10/067 Enterprise or organisation modelling
    • G06Q 10/20 Administration of product repair or maintenance
    • G06Q 50/04 Manufacturing
    • G06Q 50/08 Construction
    • G07C 5/008 Registering or indicating the working of vehicles, communicating information to a remotely located station
    • G07C 5/0808 Diagnosing performance data
    • G07C 5/0825 Indicating performance data, e.g. occurrence of a malfunction, using optical means
    • G08B 21/18 Status alarms
    • G08B 21/187 Machine fault alarms
    • H04L 45/22 Alternate routing
    • G06F 2201/85 Active fault masking without idle spares
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Definitions

  • Assets are ubiquitous in many industries. From locomotives that transfer cargo across countries to medical equipment that helps nurses and doctors save lives, assets serve an important role in everyday life. Depending on the role that an asset serves, its complexity and cost may vary. For instance, some assets may include multiple subsystems that must operate in harmony for the asset to function properly (e.g., an engine, transmission, etc. of a locomotive).
  • the current approach for monitoring assets generally involves an on-asset computer that receives signals from various sensors and/or actuators distributed throughout an asset that monitor the operating conditions of the asset.
  • the sensors and/or actuators may monitor parameters such as temperatures, voltages, and speeds, among other examples. If sensor and/or actuator signals from one or more of these devices reach certain values, the on-asset computer may then generate an abnormal-condition indicator, such as a “fault code,” which is an indication that an abnormal condition has occurred within the asset.
  • an abnormal condition may be a defect at an asset or component thereof, which may lead to a failure of the asset and/or component.
  • an abnormal condition may be associated with a given failure, or perhaps multiple failures, in that the abnormal condition is symptomatic of the given failure or failures.
  • a user typically defines the sensors and respective sensor values associated with each abnormal-condition indicator. That is, the user defines an asset's “normal” operating conditions (e.g., those that do not trigger fault codes) and “abnormal” operating conditions (e.g., those that trigger fault codes).
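  • As a hedged illustration of the user-defined triggering criteria just described, the sketch below checks sensor readings against assumed "normal" ranges and emits a fault code when a reading falls outside its range. The sensor names, ranges, and fault codes are invented for the example; the patent does not prescribe specific values.

```python
# Hypothetical threshold-based abnormal-condition detection.
# Sensor names, normal ranges, and fault codes are invented examples.

NORMAL_RANGES = {
    "engine_temp_c": (0.0, 105.0),
    "oil_pressure_kpa": (150.0, 700.0),
    "battery_voltage_v": (11.5, 14.8),
}

FAULT_CODES = {
    "engine_temp_c": "FAULT_101",
    "oil_pressure_kpa": "FAULT_205",
    "battery_voltage_v": "FAULT_310",
}

def check_readings(readings):
    """Return fault codes for readings outside their user-defined normal ranges."""
    faults = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES[name]
        if not low <= value <= high:
            faults.append(FAULT_CODES[name])
    return faults

# An overheating engine triggers its fault code.
print(check_readings({"engine_temp_c": 112.0,
                      "oil_pressure_kpa": 300.0,
                      "battery_voltage_v": 12.6}))  # ['FAULT_101']
```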
  • the indicator and/or sensor signals may be passed to a remote location where a user may receive some indication of the abnormal condition and/or sensor signals and decide whether to take action.
  • One action that the user might take is to assign a mechanic or the like to evaluate and potentially repair the asset.
  • the mechanic may connect a computing device to the asset and operate the computing device to cause the asset to utilize one or more local diagnostic tools to facilitate diagnosing the cause of the generated indicator.
  • While current asset-monitoring systems are generally effective at triggering abnormal-condition indicators, such systems are typically reactionary. That is, by the time the asset-monitoring system triggers an indicator, a failure within the asset may have already occurred (or is about to occur), which may lead to costly downtime, among other disadvantages. Additionally, due to the simplistic nature of on-asset abnormality-detection mechanisms in such asset-monitoring systems, current asset-monitoring approaches tend to involve a remote computing system performing monitoring computations for an asset and then transmitting instructions to the asset if a problem is detected. This may be disadvantageous due to network latency and/or infeasible when the asset moves outside of coverage of a communication network. Further still, due to the nature of local diagnostic tools stored on assets, current diagnosis procedures tend to be inefficient and cumbersome because a mechanic is required to cause the asset to utilize such tools.
  • a network configuration may include a communication network that facilitates communications between assets and a remote computing system.
  • the communication network may facilitate secure communications between assets and the remote computing system (e.g., via encryption or other security measures).
  • each asset may include multiple sensors and/or actuators distributed throughout the asset that facilitate monitoring operating conditions of the asset.
  • a number of assets may provide respective data indicative of each asset's operating conditions to the remote computing system, which may be configured to perform one or more operations based on the provided data.
  • the remote computing system may be configured to define and deploy to assets a predictive model and corresponding workflow (referred to herein as a “model-workflow pair”) that are related to the operation of the assets.
  • the assets may be configured to receive the model-workflow pair and utilize a local analytics device to operate in accordance with the model-workflow pair.
  • a model-workflow pair may cause an asset to monitor certain operating conditions and when certain conditions exist, modify a behavior that may help facilitate preventing an occurrence of a particular event.
  • a predictive model may receive as inputs sensor data from a particular set of asset sensors and output a likelihood that one or more particular events could occur at the asset within a particular period of time in the future.
  • a workflow may involve one or more operations that are performed based on the likelihood of the one or more particular events that is output by the model.
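  • To make that division of labor concrete, here is a minimal sketch of a model-workflow pair: a model that maps sensor values to a failure likelihood, and a workflow that acts on the output. The logistic form, feature weights, and 0.7 trigger threshold are assumptions for illustration, not details taken from the patent.

```python
import math

# Invented feature weights for a toy logistic failure model.
WEIGHTS = {"bias": -8.0, "engine_temp_c": 0.05, "vibration_g": 1.5}

def predictive_model(sensor_data):
    """Map sensor data to a likelihood (0..1) of a particular event occurring soon."""
    z = WEIGHTS["bias"]
    z += WEIGHTS["engine_temp_c"] * sensor_data["engine_temp_c"]
    z += WEIGHTS["vibration_g"] * sensor_data["vibration_g"]
    return 1.0 / (1.0 + math.exp(-z))

def workflow(likelihood, threshold=0.7):
    """Perform one or more operations based on the model output."""
    if likelihood >= threshold:          # assumed trigger range
        return "execute local diagnostic tool"
    return "continue normal operation"

likelihood = predictive_model({"engine_temp_c": 110.0, "vibration_g": 2.5})
print(round(likelihood, 3), "->", workflow(likelihood))
```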
  • the remote computing system may define an aggregate predictive model and corresponding workflow; individualized predictive models and corresponding workflows; or some combination thereof.
  • An “aggregate” model/workflow may refer to a model/workflow that is generic for a group of assets, while an “individualized” model/workflow may refer to a model/workflow that is tailored for a single asset or subgroup of assets from the group of assets.
  • the remote computing system may start by defining an aggregate, predictive model based on historical data for multiple assets. Utilizing data for multiple assets may facilitate defining a more accurate predictive model than utilizing operating data for a single asset.
  • the historical data that forms the basis of the aggregate model may include at least operating data that indicates operating conditions of a given asset.
  • operating data may include abnormal-condition data identifying instances when failures occurred at assets and/or sensor data indicating one or more physical properties measured at the assets at the time of those instances.
  • the data may also include environment data indicating environments in which assets have been operated and scheduling data indicating dates and times when assets were utilized, among other examples of asset-related data used to define the aggregate model-workflow pair.
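  • As a rough sketch of how these heterogeneous historical records might be assembled before defining the aggregate model, consider the structure below; every field name is an invented example rather than a schema from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class HistoricalRecord:
    """One asset-time slice combining the data types listed above (fields assumed)."""
    asset_id: str
    sensor_data: dict                                # measured physical properties
    failures: list = field(default_factory=list)     # abnormal-condition data
    environment: dict = field(default_factory=dict)  # e.g., ambient conditions
    schedule: dict = field(default_factory=dict)     # dates/times the asset ran

records = [
    HistoricalRecord("asset-1", {"engine_temp_c": 101.0},
                     failures=["overheat"], environment={"ambient_c": 38.0}),
    HistoricalRecord("asset-2", {"engine_temp_c": 88.0},
                     schedule={"utilized": "2015-06-01 04:00-12:00"}),
]
print(f"{len(records)} records available to define the aggregate model")
```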
  • the remote computing system may define an aggregate model that predicts the occurrence of particular events.
  • an aggregate model may output a probability that a failure will occur at an asset within a particular period of time in the future.
  • Such a model may be referred to herein as a “failure model.”
  • Other aggregate models may predict the likelihood that an asset will complete a task within a particular period of time in the future, among other example predictive models.
  • the remote computing system may then define an aggregate workflow that corresponds to the defined aggregate model.
  • a workflow may include one or more operations that an asset may perform based on a corresponding model. That is, the output of the corresponding model may cause the asset to perform workflow operations.
  • an aggregate model-workflow pair may be defined such that when the aggregate model outputs a probability within a particular range, an asset will execute a particular workflow operation, such as a local diagnostic tool.
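  • One way to express that pairing is a table mapping model-output ranges to workflow operations, sketched below; the ranges and operation names are invented, and the first matching range wins.

```python
# Hypothetical mapping from model-output ranges to workflow operations.
WORKFLOW_RULES = [
    (0.90, 1.00, "derate asset and alert operator"),
    (0.70, 0.90, "execute local diagnostic tool"),
    (0.00, 0.70, "no workflow operation"),
]

def select_operation(probability):
    """Return the operation for the first range containing the model output."""
    for low, high, operation in WORKFLOW_RULES:
        if low <= probability <= high:
            return operation
    raise ValueError("model output outside [0, 1]")

print(select_operation(0.75))  # 'execute local diagnostic tool'
```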
  • the remote computing system may transmit the pair to one or more assets.
  • the one or more assets may then operate in accordance with the aggregate model-workflow pair.
  • the remote computing system may be configured to further define an individualized predictive model and/or corresponding workflow for one or multiple assets.
  • the remote computing system may do so based on certain characteristics of each given asset, among other considerations.
  • the remote computing system may start with an aggregate model-workflow pair as a baseline and individualize one or both of the aggregate model and workflow for the given asset based on the asset's characteristics.
  • the remote computing system may be configured to determine asset characteristics that are related to the aggregate model-workflow pair (e.g., characteristics of interest). Examples of such characteristics may include asset age, asset usage, asset class (e.g., brand and/or model), asset health, and environment in which an asset is operated, among other characteristics.
  • the remote computing system may determine characteristics of the given asset that correspond to the characteristics of interest. Based at least on some of the given asset's characteristics, the remote computing system may be configured to individualize the aggregate model and/or corresponding workflow.
  • Defining an individualized model and/or workflow may involve the remote computing system making certain modifications to the aggregate model and/or workflow.
  • individualizing the aggregate model may involve changing model inputs, changing a model calculation, and/or changing a weight of a variable or output of a calculation, among other examples.
  • Individualizing the aggregate workflow may involve changing one or more operations of the workflow and/or changing the model output value or range of values that triggers the workflow, among other examples.
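  • A hedged sketch of that individualization step follows: starting from an aggregate configuration, the asset's characteristics adjust a variable's weight and the workflow's trigger range. The adjustment rule (older assets weight vibration more and trigger the workflow earlier) is an assumption for illustration.

```python
from copy import deepcopy

# Aggregate baseline: model weights plus the workflow's trigger threshold.
AGGREGATE_CONFIG = {
    "weights": {"bias": -8.0, "engine_temp_c": 0.05, "vibration_g": 1.5},
    "trigger_threshold": 0.7,
}

def individualize(aggregate_config, characteristics):
    """Derive an individualized model-workflow config from the aggregate baseline."""
    config = deepcopy(aggregate_config)
    if characteristics.get("age_years", 0) > 10:      # characteristic of interest
        config["weights"]["vibration_g"] *= 1.2       # change a variable's weight
        config["trigger_threshold"] = 0.6             # change the trigger range
    return config

print(individualize(AGGREGATE_CONFIG, {"age_years": 12, "class": "brand-X"}))
```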
  • the remote computing system may then transmit the individualized model and/or workflow to the given asset.
  • For whichever of the model or workflow was not individualized, the given asset may utilize the aggregate version.
  • the given asset may then operate in accordance with its individualized model-workflow pair.
  • a given asset may include a local analytics device that may be configured to cause the given asset to operate in accordance with a model-workflow pair provided by the remote computing system.
  • the local analytics device may be configured to utilize operating data generated by the asset sensors and/or actuators (e.g., data that is typically utilized for other asset-related purposes) to run the predictive model.
  • the local analytics device may execute the model and depending on the output of the model, may execute the corresponding workflow.
  • Executing the corresponding workflow may help facilitate preventing an undesirable event from occurring at the given asset.
  • the given asset may locally determine that an occurrence of a particular event is likely and may then execute a particular workflow to help prevent the occurrence of the event.
  • This may be particularly useful if communication between the given asset and remote computing system is hindered. For example, in some situations, a failure might occur before a command to take preventative actions reaches the given asset from the remote computing system.
  • the local analytics device may be advantageous in that it may generate the command locally, thereby avoiding any network latency or any issues arising from the given asset being “off-line.” As such, the local analytics device executing a model-workflow pair may facilitate causing the asset to adapt to its conditions.
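  • Pulling these bullets together, the local analytics device's role can be sketched as a loop that samples operating data, runs the model, and executes the workflow when warranted, all without the remote computing system in the path. The helper functions and threshold below are hypothetical stand-ins.

```python
import time

def read_sensor_data():
    """Hypothetical stand-in for sampling the asset's sensors and actuators."""
    return {"engine_temp_c": 95.0}

def toy_model(data):
    """Toy likelihood: scaled engine temperature (illustration only)."""
    return min(data["engine_temp_c"] / 120.0, 1.0)

def preventative_workflow(likelihood):
    print(f"likelihood {likelihood:.2f}: executing preventative workflow locally")

def local_execution_loop(cycles=3, trigger_threshold=0.7):
    """Run the model-workflow pair on the local analytics device."""
    for _ in range(cycles):
        likelihood = toy_model(read_sensor_data())
        if likelihood >= trigger_threshold:
            preventative_workflow(likelihood)  # no round trip to the remote system
        time.sleep(1.0)                        # assumed evaluation period

local_execution_loop()
```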
  • While a given asset is operating in accordance with a model-workflow pair, it may also continue to provide operating data to the remote computing system. Based at least on this data, the remote computing system may modify the aggregate model-workflow pair and/or one or more individualized model-workflow pairs. The remote computing system may make modifications for a number of reasons.
  • the remote computing system may modify a model and/or workflow if a new event occurred at an asset that the model did not previously account for.
  • the new event may be a new failure that had yet to occur at any of the assets whose data was used to define the aggregate model.
  • the remote computing system may modify a model and/or workflow if an event occurred at an asset under operating conditions that typically do not cause the event to occur. For instance, returning again to a failure model, the failure model or corresponding workflow may be modified if a failure occurred under operating conditions that had yet to cause the failure to occur in the past.
  • the remote computing system may modify a model and/or workflow if an executed workflow failed to prevent an occurrence of an event.
  • the remote computing system may modify the model and/or workflow if the output of the model caused an asset to execute a workflow aimed to prevent the occurrence of an event but the event occurred at the asset nonetheless.
  • Other examples of reasons for modifying a model and/or workflow are also possible.
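  • The modification triggers listed above reduce to simple predicates over incoming operating data, as in this assumed sketch; a real system would follow these checks with retraining or redefinition of the pair.

```python
def modification_reasons(event, known_events, predicted_likelihood,
                         workflow_executed, trigger_threshold=0.7):
    """Return reasons to modify a model-workflow pair (argument names assumed)."""
    reasons = []
    if event is None:
        return reasons  # nothing occurred; nothing to revisit
    if event not in known_events:
        reasons.append("new event the model did not previously account for")
    if predicted_likelihood < trigger_threshold:
        reasons.append("event occurred under conditions deemed unlikely to cause it")
    if workflow_executed:
        reasons.append("executed workflow failed to prevent the event")
    return reasons

print(modification_reasons("bearing failure", {"engine overheat"}, 0.2, True))
```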
  • the remote computing system may then distribute any modifications to the asset whose data caused the modification and/or to other assets in communication with the remote computing system.
  • the remote computing system may dynamically modify models and/or workflows and distribute these modifications to a whole fleet of assets based on operating conditions of an individual asset.
  • an asset and/or the remote computing system may be configured to dynamically adjust executing a predictive model and/or workflow.
  • the asset and/or remote computing system may be configured to detect certain events that trigger a change in responsibilities with respect to whether the asset and/or the remote computing system are executing a predictive model and/or workflow.
  • the asset may store the model-workflow pair in data storage but then may rely on the remote computing system to centrally execute part or all of the model-workflow pair.
  • the remote computing system may rely on the asset to locally execute part or all of the model-workflow pair.
  • the remote computing system and the asset may share in the responsibilities of executing the model-workflow pair.
  • certain events may occur that trigger the asset and/or remote computing system to adjust the execution of the predictive model and/or workflow.
  • the asset and/or remote computing system may detect certain characteristics of a communication network that couples the asset to the remote computing system. Based on the characteristics of the communication network, the asset may adjust whether it is locally executing a predictive model and/or workflow and the remote computing system may accordingly modify whether it is centrally executing the model and/or workflow. In this way, the asset and/or remote computing system may adapt to conditions of the asset.
  • the asset may detect an indication that a signal strength of a communication link between the asset and the remote computing system is relatively weak (e.g., the asset may determine that it is about to go “off-line”), that a network latency is relatively high, and/or that a network bandwidth is relatively low.
  • the asset may be programmed to take on responsibilities for executing the model-workflow pair that were previously being handled by the remote computing system.
  • the remote computing system may cease centrally executing some or all of the model-workflow pair. In this way, the asset may locally execute the predictive model and then, based on executing the predictive model, execute the corresponding workflow to potentially help prevent an occurrence of a failure at the asset.
  • the asset and/or the remote computing system may similarly adjust executing (or perhaps modify) a predictive model and/or workflow based on various other considerations. For example, based on the processing capacity of the asset, the asset may adjust locally executing a model-workflow pair and the remote computing system may accordingly adjust as well. In another example, based on the bandwidth of the communication network coupling the asset to the remote computing system, the asset may execute a modified workflow (e.g., transmitting data to the remote computing system according to a data-transmission scheme with a reduced transmission rate). Other examples are also possible.
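  • The hand-off logic described in these bullets amounts to a policy over observed network and asset conditions; the sketch below uses invented threshold values to show the shape of such a policy.

```python
def choose_execution_site(signal_strength_dbm, latency_ms,
                          bandwidth_kbps, local_cpu_headroom):
    """Decide where the model-workflow pair should execute (thresholds assumed)."""
    weak_link = (signal_strength_dbm < -100   # about to go "off-line"
                 or latency_ms > 500          # network latency relatively high
                 or bandwidth_kbps < 64)      # network bandwidth relatively low
    if weak_link and local_cpu_headroom > 0.2:  # asset has processing capacity
        return "asset (local execution)"
    return "remote computing system (central execution)"

print(choose_execution_site(-105, 80, 1000, 0.5))  # asset (local execution)
print(choose_execution_site(-70, 40, 5000, 0.5))   # remote computing system
```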
  • a computing system comprises at least one processor, a non-transitory computer-readable medium, and program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to: (a) receive operating data for a plurality of assets, wherein the plurality of assets comprises a first asset, (b) based on the received operating data, define an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets, (c) determine one or more characteristics of the first asset, (d) based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, define at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset, and (e) transmit to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • a non-transitory computer-readable medium having instructions stored thereon that are executable to cause a computing system to: (a) receive operating data for a plurality of assets, wherein the plurality of assets comprises a first asset, (b) based on the received operating data, define an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets, (c) determine one or more characteristics of the first asset, (d) based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, define at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset, and (e) transmit to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • a computer-implemented method comprises: (a) receiving operating data for a plurality of assets, wherein the plurality of assets comprises a first asset, (b) based on the received operating data, defining an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets, (c) determining one or more characteristics of the first asset, (d) based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, defining at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset, and (e) transmitting to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • FIG. 1 depicts an example network configuration in which example embodiments may be implemented.
  • FIG. 2 depicts a simplified block diagram of an example asset.
  • FIG. 3 depicts a conceptual illustration of example abnormal-condition indicators and triggering criteria.
  • FIG. 4 depicts a simplified block diagram of an example analytics system.
  • FIG. 5 depicts an example flow diagram of a definition phase that may be used for defining model-workflow pairs.
  • FIG. 6A depicts a conceptual illustration of an aggregate model-workflow pair.
  • FIG. 6B depicts a conceptual illustration of an individualized model-workflow pair.
  • FIG. 6C depicts a conceptual illustration of another individualized model-workflow pair.
  • FIG. 6D depicts a conceptual illustration of a modified model-workflow pair.
  • FIG. 7 depicts an example flow diagram of a modeling phase that may be used for defining a predictive model that outputs a health metric.
  • FIG. 8 depicts a conceptual illustration of data utilized to define a model.
  • FIG. 9 depicts an example flow diagram of a local-execution phase that may be used for locally executing a predictive model.
  • FIG. 10 depicts an example flow diagram of a modification phase that may be used for modifying model-workflow pairs.
  • FIG. 11 depicts an example flow diagram of an adjustment phase that may be used for adjusting execution of model-workflow pairs.
  • FIG. 12 depicts a flow diagram of an example method for defining and deploying an aggregate, predictive model and corresponding workflow.
  • FIG. 13 depicts a flow diagram of an example method for defining and deploying an individualized, predictive model and/or corresponding workflow.
  • FIG. 14 depicts a flow diagram of an example method for dynamically modifying the execution of model-workflow pairs.
  • FIG. 1 depicts an example network configuration 100 in which example embodiments may be implemented.
  • the network configuration 100 includes an asset 102, an asset 104, a communication network 106, a remote computing system 108 that may take the form of an analytics system, an output system 110, and a data source 112.
  • the communication network 106 may communicatively connect each of the components in the network configuration 100.
  • the assets 102 and 104 may communicate with the analytics system 108 via the communication network 106.
  • the assets 102 and 104 may communicate with one or more intermediary systems, such as an asset gateway (not pictured), that in turn communicates with the analytics system 108.
  • the analytics system 108 may communicate with the output system 110 via the communication network 106.
  • the analytics system 108 may communicate with one or more intermediary systems, such as a host server (not pictured), that in turn communicates with the output system 110.
  • the communication network 106 may facilitate secure communications between network components (e.g., via encryption or other security measures).
  • the assets 102 and 104 may take the form of any device configured to perform one or more operations (which may be defined based on the field) and may also include equipment configured to transmit data indicative of one or more operating conditions of the given asset.
  • an asset may include one or more subsystems configured to perform one or more respective operations. In practice, multiple subsystems may operate in parallel or sequentially in order for an asset to operate.
  • Example assets may include transportation machines (e.g., locomotives, aircraft, passenger vehicles, semi-trailer trucks, ships, etc.), industrial machines (e.g., mining equipment, construction equipment, factory automation, etc.), medical machines (e.g., medical imaging equipment, surgical equipment, medical monitoring systems, medical laboratory equipment, etc.), and utility machines (e.g., turbines, solar farms, etc.), among other examples.
  • the assets 102 and 104 may each be of the same type (e.g., a fleet of locomotives or aircraft, a group of wind turbines, or a set of MRI machines, among other examples) and perhaps may be of the same class (e.g., same brand and/or model). In other examples, the assets 102 and 104 may differ by type, by brand, by model, etc. The assets are discussed in further detail below with reference to FIG. 2.
  • the assets 102 and 104 may communicate with the analytics system 108 via the communication network 106 .
  • the communication network 106 may include one or more computing systems and network infrastructure configured to facilitate transferring data between network components.
  • the communication network 106 may be or may include one or more Wide-Area Networks (WANs) and/or Local-Area Networks (LANs), which may be wired and/or wireless and support secure communication.
  • the communication network 106 may include one or more cellular networks and/or the Internet, among other networks.
  • the communication network 106 may operate according to one or more communication protocols, such as LTE, CDMA, GSM, LPWAN, WiFi, Bluetooth, Ethernet, HTTP/S, TCP, CoAP/DTLS and the like. Although the communication network 106 is shown as a single network, it should be understood that the communication network 106 may include multiple, distinct networks that are themselves communicatively linked. The communication network 106 could take other forms as well.
  • the analytics system 108 may be configured to receive data from the assets 102 and 104 and the data source 112.
  • the analytics system 108 may include one or more computing systems, such as servers and databases, configured to receive, process, analyze, and output data.
  • the analytics system 108 may be configured according to a given dataflow technology, such as TPL Dataflow or NiFi, among other examples.
  • the analytics system 108 is discussed in further detail below with reference to FIG. 4.
  • the analytics system 108 may be configured to transmit data to the assets 102 and 104 and/or to the output system 110.
  • the particular data transmitted may take various forms and will be described in further detail below.
  • the output system 110 may take the form of a computing system or device configured to receive data and provide some form of output.
  • the output system 110 may take various forms.
  • the output system 110 may be or include an output device configured to receive data and provide an audible, visual, and/or tactile output in response to the data.
  • an output device may include one or more input interfaces configured to receive user input, and the output device may be configured to transmit data through the communication network 106 based on such user input. Examples of output devices include tablets, smartphones, laptop computers, other mobile computing devices, desktop computers, smart TVs, and the like.
  • output system 110 may take the form of a work-order system configured to output a request for a mechanic or the like to repair an asset.
  • output system 110 may take the form of a parts-ordering system configured to place an order for a part of an asset and output a receipt thereof. Numerous other output systems are also possible.
  • the data source 112 may be configured to communicate with the analytics system 108.
  • the data source 112 may be or include one or more computing systems configured to collect, store, and/or provide to other systems, such as the analytics system 108, data that may be relevant to the functions performed by the analytics system 108.
  • the data source 112 may be configured to generate and/or obtain data independently from the assets 102 and 104.
  • the data provided by the data source 112 may be referred to herein as “external data.”
  • the data source 112 may be configured to provide current and/or historical data.
  • the analytics system 108 may receive data from the data source 112 by “subscribing” to a service provided by the data source. However, the analytics system 108 may receive data from the data source 112 in other manners as well.
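  • A minimal sketch of that "subscribe" arrangement appears below; the interface is assumed, since the patent does not specify one.

```python
class DataSourceService:
    """Hypothetical data-source service offering a subscription interface."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, data):
        for callback in self._subscribers:
            callback(data)

source = DataSourceService()
source.subscribe(lambda data: print("analytics system received:", data))
source.publish({"weather": "snow", "ambient_c": -5.0})  # external data
```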
  • Examples of the data source 112 include environment data sources, asset-management data sources, and other data sources.
  • environment data sources provide data indicating some characteristic of the environment in which assets are operated.
  • environment data sources include weather-data servers, global navigation satellite systems (GNSS) servers, map-data servers, and topography-data servers that provide information regarding natural and artificial features of a given area, among other examples.
  • asset-management data sources provide data indicating events or statuses of entities (e.g., other assets) that may affect the operation or maintenance of assets (e.g., when and where an asset may operate or receive maintenance).
  • asset-management data sources include traffic-data servers that provide information regarding air, water, and/or ground traffic, asset-schedule servers that provide information regarding expected routes and/or locations of assets on particular dates and/or at particular times, defect detector systems (also known as “hotbox” detectors) that provide information regarding one or more operating conditions of an asset that passes in proximity to the defect detector system, part-supplier servers that provide information regarding parts that particular suppliers have in stock and prices thereof, and repair-shop servers that provide information regarding repair shop capacity and the like, among other examples.
  • Examples of other data sources include power-grid servers that provide information regarding electricity consumption and external databases that store historical operating data for assets, among other examples.
  • network configuration 100 is one example of a network in which embodiments described herein may be implemented. Numerous other arrangements are possible and contemplated herein. For instance, other network configurations may include additional components not pictured and/or more or less of the pictured components.
  • In FIG. 2, a simplified block diagram of an example asset 200 is depicted. Either or both of assets 102 and 104 from FIG. 1 may be configured like the asset 200.
  • the asset 200 may include one or more subsystems 202, one or more sensors 204, one or more actuators 205, a central processing unit 206, data storage 208, a network interface 210, a user interface 212, and a local analytics device 220, all of which may be communicatively linked by a system bus, network, or other connection mechanism.
  • the asset 200 may include additional components not shown and/or more or less of the depicted components.
  • the asset 200 may include one or more electrical, mechanical, and/or electromechanical components configured to perform one or more operations.
  • one or more components may be grouped into a given subsystem 202 .
  • a subsystem 202 may include a group of related components that are part of the asset 200.
  • a single subsystem 202 may independently perform one or more operations, or the single subsystem 202 may operate along with one or more other subsystems to perform one or more operations.
  • different types of assets, and even different classes of the same type of assets, may include different subsystems.
  • examples of subsystems 202 may include engines, transmissions, drivetrains, fuel systems, battery systems, exhaust systems, braking systems, electrical systems, signal processing systems, generators, gear boxes, rotors, and hydraulic systems, among numerous other subsystems.
  • examples of subsystems 202 may include scanning systems, motors, coil and/or magnet systems, signal processing systems, rotors, and electrical systems, among numerous other subsystems.
  • the asset 200 may be outfitted with various sensors 204 that are configured to monitor operating conditions of the asset 200 and various actuators 205 that are configured to interact with the asset 200 or a component thereof and monitor operating conditions of the asset 200 .
  • some of the sensors 204 and/or actuators 205 may be grouped based on a particular subsystem 202 .
  • the sensors from that group may be configured to monitor operating conditions of the particular subsystem 202 , and the actuators from that group may be configured to interact with the particular subsystem 202 in some way that may alter the subsystem's behavior based on those operating conditions.
  • a sensor 204 may be configured to detect a physical property, which may be indicative of one or more operating conditions of the asset 200 , and provide an indication, such as an electrical signal, of the detected physical property.
  • the sensors 204 may be configured to obtain measurements continuously, periodically (e.g., based on a sampling frequency), and/or in response to some triggering event.
  • the sensors 204 may be preconfigured with operating parameters for performing measurements and/or may perform measurements in accordance with operating parameters provided by the central processing unit 206 (e.g., sampling signals that instruct the sensors 204 to obtain measurements).
  • different sensors 204 may have different operating parameters (e.g., some sensors may sample based on a first frequency, while other sensors sample based on a second, different frequency).
  • the sensors 204 may be configured to transmit electrical signals indicative of a measured physical property to the central processing unit 206 .
  • the sensors 204 may continuously or periodically provide such signals to the central processing unit 206 .
  • sensors 204 may be configured to measure physical properties such as the location and/or movement of the asset 200 , in which case the sensors may take the form of GNSS sensors, dead-reckoning-based sensors, accelerometers, gyroscopes, pedometers, magnetometers, or the like.
  • various sensors 204 may be configured to measure other operating conditions of the asset 200 , examples of which may include temperatures, pressures, speeds, acceleration or deceleration rates, friction, power usages, fuel usages, fluid levels, runtimes, voltages and currents, magnetic fields, electric fields, presence or absence of objects, positions of components, and power generation, among other examples.
  • These are but a few examples of operating conditions that sensors may be configured to measure; additional or fewer sensors may be used depending on the industrial application or specific asset.
  • an actuator 205 may be configured similar in some respects to a sensor 204 . Specifically, an actuator 205 may be configured to detect a physical property indicative of an operating condition of the asset 200 and provide an indication thereof in a manner similar to the sensor 204 .
  • an actuator 205 may be configured to interact with the asset 200 , one or more subsystems 202 , and/or some component thereof.
  • an actuator 205 may include a motor or the like that is configured to move or otherwise control a component or system.
  • an actuator may be configured to measure a fuel flow and alter the fuel flow (e.g., restrict the fuel flow), or an actuator may be configured to measure a hydraulic pressure and alter the hydraulic pressure (e.g., increase or decrease the hydraulic pressure). Numerous other example interactions of an actuator are also possible and contemplated herein.
  • the central processing unit 206 may include one or more processors and/or controllers, which may take the form of a general- or special-purpose processor or controller.
  • the central processing unit 206 may be or include microprocessors, microcontrollers, application specific integrated circuits, digital signal processors, and the like.
  • the data storage 208 may be or include one or more non-transitory computer-readable storage media, such as optical, magnetic, organic, or flash memory, among other examples.
  • the central processing unit 206 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 208 to perform the operations of an asset described herein. For instance, as suggested above, the central processing unit 206 may be configured to receive respective sensor signals from the sensors 204 and/or actuators 205 . The central processing unit 206 may be configured to store sensor and/or actuator data in and later access it from the data storage 208 .
  • the central processing unit 206 may also be configured to determine whether received sensor and/or actuator signals trigger any abnormal-condition indicators, such as fault codes. For instance, the central processing unit 206 may be configured to store in the data storage 208 abnormal-condition rules, each of which include a given abnormal-condition indicator representing a particular abnormal condition and respective triggering criteria that trigger the abnormal-condition indicator. That is, each abnormal-condition indicator corresponds with one or more sensor and/or actuator measurement values that must be satisfied before the abnormal-condition indicator is triggered.
  • the asset 200 may be pre-programmed with the abnormal-condition rules and/or may receive new abnormal-condition rules or updates to existing rules from a computing system, such as the analytics system 108 .
  • the central processing unit 206 may be configured to determine whether received sensor and/or actuator signals trigger any abnormal-condition indicators. That is, the central processing unit 206 may determine whether received sensor and/or actuator signals satisfy any triggering criteria. When such a determination is affirmative, the central processing unit 206 may generate abnormal-condition data and may also cause the asset's user interface 212 to output an indication of the abnormal condition, such as a visual and/or audible alert. Additionally, the central processing unit 206 may log the occurrence of the abnormal-condition indicator being triggered in the data storage 208 , perhaps with a timestamp.
  • FIG. 3 depicts a conceptual illustration of example abnormal-condition indicators and respective triggering criteria for an asset.
  • FIG. 3 depicts a conceptual illustration of example fault codes.
  • table 300 includes columns 302 , 304 , and 306 that correspond to Sensor A, Actuator B, and Sensor C, respectively, and rows 308 , 310 , and 312 that correspond to Fault Codes 1 , 2 , and 3 , respectively.
  • Entries 314 then specify sensor criteria (e.g., sensor value thresholds) that correspond to the given fault codes.
  • Fault Code 1 will be triggered when Sensor A detects a rotational measurement greater than 135 revolutions per minute (RPM) and Sensor C detects a temperature measurement greater than 65° Celsius (C)
  • Fault Code 2 will be triggered when Actuator B detects a voltage measurement greater than 1000 Volts (V) and Sensor C detects a temperature measurement less than 55° C.
  • Fault Code 3 will be triggered when Sensor A detects a rotational measurement greater than 100 RPM, Actuator B detects a voltage measurement greater than 750 V, and Sensor C detects a temperature measurement greater than 60° C.
  • FIG. 3 is provided for purposes of example and explanation only and that numerous other fault codes and/or triggering criteria are possible and contemplated herein.
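  • For purposes of illustration only, the following Python sketch shows one possible way such abnormal-condition rules might be represented and evaluated on an asset; the thresholds mirror the example fault codes of FIG. 3, while the function and variable names are purely hypothetical.

```python
# Illustrative, non-limiting sketch of abnormal-condition rule evaluation.
# Thresholds follow the example fault codes of FIG. 3; names are hypothetical.

def exceeds(limit):
    return lambda value: value > limit

def below(limit):
    return lambda value: value < limit

# Triggering criteria corresponding to Fault Codes 1-3 of FIG. 3.
FAULT_CODE_RULES = {
    "Fault Code 1": {"Sensor A": exceeds(135), "Sensor C": exceeds(65)},
    "Fault Code 2": {"Actuator B": exceeds(1000), "Sensor C": below(55)},
    "Fault Code 3": {"Sensor A": exceeds(100), "Actuator B": exceeds(750),
                     "Sensor C": exceeds(60)},
}

def triggered_fault_codes(readings):
    """Return the fault codes whose criteria are all satisfied by the latest
    sensor/actuator readings (a dict mapping source name -> measured value)."""
    triggered = []
    for code, criteria in FAULT_CODE_RULES.items():
        if all(name in readings and test(readings[name])
               for name, test in criteria.items()):
            triggered.append(code)
    return triggered

# Example: Sensor A at 140 RPM and Sensor C at 70 C trigger Fault Code 1.
print(triggered_fault_codes({"Sensor A": 140, "Actuator B": 700, "Sensor C": 70}))
```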
  • the central processing unit 206 may be configured to carry out various additional functions for managing and/or controlling operations of the asset 200 as well.
  • the central processing unit 206 may be configured to provide instruction signals to the subsystems 202 and/or the actuators 205 that cause the subsystems 202 and/or the actuators 205 to perform some operation, such as modifying a throttle position.
  • the central processing unit 206 may be configured to modify the rate at which it processes data from the sensors 204 and/or the actuators 205 , or the central processing unit 206 may be configured to provide instruction signals to the sensors 204 and/or actuators 205 that cause the sensors 204 and/or actuators 205 to, for example, modify a sampling rate.
  • the central processing unit 206 may be configured to receive signals from the subsystems 202 , the sensors 204 , the actuators 205 , the network interfaces 210 , and/or the user interfaces 212 and based on such signals, cause an operation to occur. Further still, the central processing unit 206 may be configured to receive signals from a computing device, such as a diagnostic device, that cause the central processing unit 206 to execute one or more diagnostic tools in accordance with diagnostic rules stored in the data storage 208 . Other functionalities of the central processing unit 206 are discussed below.
  • the network interface 210 may be configured to provide for communication between the asset 200 and various network components connected to communication network 106 .
  • the network interface 210 may be configured to facilitate wireless communications to and from the communication network 106 and may thus take the form of an antenna structure and associated equipment for transmitting and receiving various over-the-air signals. Other examples are possible as well.
  • the network interface 210 may be configured according to a communication protocol, such as but not limited to any of those described above.
  • the user interface 212 may be configured to facilitate user interaction with the asset 200 and may also be configured to facilitate causing the asset 200 to perform an operation in response to user interaction.
  • Examples of user interfaces 212 include touch-sensitive interfaces, mechanical interfaces (e.g., levers, buttons, wheels, dials, keyboards, etc.), and other input interfaces (e.g., microphones), among other examples.
  • the user interface 212 may include or provide connectivity to output components, such as display screens, speakers, headphone jacks, and the like.
  • the local analytics device 220 may generally be configured to receive and analyze data and based on such analysis, cause one or more operations to occur at the asset 200 .
  • the local analytics device 220 may receive data from the sensors 204 and/or actuators 205 and based on such data, may provide instructions to the central processing unit 206 that cause the asset 200 to perform an operation.
  • the local analytics device 220 may enable the asset 200 to locally perform advanced analytics and associated operations, such as executing a predictive model and corresponding workflow, that might otherwise not be possible with the asset's other on-asset components. As such, the local analytics device 220 may help provide additional processing power and/or intelligence to the asset 200.
  • the local analytics device 220 may include a processing unit 222 , a data storage 224 , and a network interface 226 , all of which may be communicatively linked by a system bus, network, or other connection mechanism.
  • the processing unit 222 may include any of the components discussed above with respect to the central processing unit 206 .
  • the data storage 224 may be or include one or more non-transitory computer-readable storage media, which may take any of the forms of computer-readable storage media discussed above.
  • the processing unit 222 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 224 to perform the operations of a local analytics device described herein.
  • the processing unit 222 may be configured to receive respective sensor and/or actuator signals from the sensors 204 and/or actuators 205 and execute a predictive model-workflow pair based on such signals. Other functions are described below.
  • the network interface 226 may be the same or similar to the network interfaces described above. In practice, the network interface 226 may facilitate communication between the asset 200 and the analytics system 108 .
  • the local analytics device 220 may include and/or communicate with a user interface that may be similar to the user interface 212 .
  • the user interface may be located remotely from the local analytics device 220 (and the asset 200 ). Other examples are also possible.
  • asset 200 shown in FIG. 2 is but one example of a simplified representation of an asset and that numerous others are also possible.
  • other assets may include additional components not pictured and/or more or less of the pictured components.
  • a given asset may include multiple, individual assets that are operated in concert to perform operations of the given asset. Other examples are also possible.
  • the analytics system 400 may include one or more computing systems communicatively linked and arranged to carry out various operations described herein.
  • the analytics system 400 may include a data intake system 402 , a data science system 404 , and one or more databases 406 . These system components may be communicatively coupled via one or more wireless and/or wired connections, which may be configured to facilitate secure communications.
  • the data intake system 402 may generally function to receive and process data and output data to the data science system 404 .
  • the data intake system 402 may include one or more network interfaces configured to receive data from various network components of the network configuration 100 , such as the assets 102 and 104 , the output system 110 , and/or the data source 112 .
  • the data intake system 402 may be configured to receive analog signals, data streams, and/or network packets, among other examples.
  • the network interfaces may include one or more wired network interfaces, such as a port or the like, and/or wireless network interfaces, similar to those described above.
  • the data intake system 402 may be or include components configured according to a given dataflow technology, such as a NiFi receiver or the like.
  • the data intake system 402 may include one or more processing components configured to perform one or more operations.
  • Example operations may include compression and/or decompression, encryption and/or decryption, analog-to-digital and/or digital-to-analog conversion, filtration, and amplification, among other operations.
  • the data intake system 402 may be configured to parse, sort, organize, and/or route data based on data type and/or characteristics of the data.
  • the data intake system 402 may be configured to format, package, and/or route data based on one or more characteristics or operating parameters of the data science system 404 .
  • the data received by the data intake system 402 may take various forms.
  • the payload of the data may include a single sensor or actuator measurement, multiple sensor and/or actuator measurements and/or one or more abnormal-condition data. Other examples are also possible.
  • the received data may include certain characteristics, such as a source identifier and a timestamp (e.g., a date and/or time at which the information was obtained).
  • Another characteristic may include the location (e.g., GPS coordinates) at which the information was obtained.
  • Data characteristics may come in the form of signal signatures or metadata, among other examples.
  • the data science system 404 may generally function to receive (e.g., from the data intake system 402 ) and analyze data and based on such analysis, cause one or more operations to occur.
  • the data science system 404 may include one or more network interfaces 408 , a processing unit 410 , and data storage 412 , all of which may be communicatively linked by a system bus, network, or other connection mechanism.
  • the data science system 404 may be configured to store and/or access one or more application program interfaces (APIs) that facilitate carrying out some of the functionality disclosed herein.
  • the network interfaces 408 may be the same or similar to any network interface described above. In practice, the network interfaces 408 may facilitate communication (e.g., with some level of security) between the data science system 404 and various other entities, such as the data intake system 402 , the databases 406 , the assets 102 , the output system 110 , etc.
  • the processing unit 410 may include one or more processors, which may take any of the processor forms described above.
  • the data storage 412 may be or include one or more non-transitory computer-readable storage media, which may take any of the forms of computer-readable storage media discussed above.
  • the processing unit 410 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 412 to perform the operations of an analytics system described herein.
  • the processing unit 410 may be configured to perform analytics on data received from the data intake system 402 .
  • the processing unit 410 may be configured to execute one or more modules, which may each take the form of one or more sets of program instructions that are stored in the data storage 412 .
  • the modules may be configured to facilitate causing an outcome to occur based on the execution of the respective program instructions.
  • An example outcome from a given module may include outputting data into another module, updating the program instructions of the given module and/or of another module, and outputting data to a network interface 408 for transmission to an asset and/or the output system 110 , among other examples.
  • the databases 406 may generally function to receive (e.g., from the data science system 404 ) and store data.
  • each database 406 may include one or more non-transitory computer-readable storage media, such as any of the examples provided above.
  • the databases 406 may be separate from or integrated with the data storage 412 .
  • the databases 406 may be configured to store numerous types of data, some of which is discussed below. In practice, some of the data stored in the databases 406 may include a timestamp indicating a date and time at which the data was generated or added to the database. Moreover, data may be stored in a number of manners in the databases 406 . For instance, data may be stored in time sequence, in a tabular manner, and/or organized based on data source type (e.g., based on asset, asset type, sensor, sensor type, actuator, or actuator type) or abnormal-condition indicator, among other examples.
  • each block may represent a module or portion of program code that includes instructions that are executable by a processor to implement specific logical functions or steps in a process.
  • the program code may be stored on any type of computer-readable medium, such as non-transitory computer-readable media.
  • each block may represent circuitry that is wired to perform specific logical functions or steps in a process.
  • the blocks shown in the flow diagrams may be rearranged into different orders, combined into fewer blocks, separated into additional blocks, and/or removed based upon the particular embodiment.
  • the analytics system 108 generally receives data from multiple sources, perhaps simultaneously, and performs operations based on such aggregate received data.
  • the representative asset 102 may take various forms and may be configured to perform a number of operations.
  • the asset 102 may take the form of a locomotive that is operable to transfer cargo across the United States.
  • the sensors and/or actuators of the asset 102 may obtain data that reflects one or more operating conditions of the asset 102 .
  • the sensors and/or actuators may transmit the data to a processing unit of the asset 102 .
  • the processing unit may be configured to receive the data from the sensors and/or actuators.
  • the processing unit may receive sensor data from multiple sensors and/or actuator data from multiple actuators simultaneously or sequentially.
  • the processing unit may also be configured to determine whether the data satisfies triggering criteria that trigger any abnormal-condition indicators, such as fault codes.
  • the processing unit may be configured to perform one or more local operations, such as outputting an indication of the triggered indicator via a user interface.
  • the asset 102 may then transmit operating data to the analytics system 108 via a network interface of the asset 102 and the communication network 106 .
  • the asset 102 may transmit operating data to the analytics system 108 continuously, periodically, and/or in response to triggering events (e.g., abnormal conditions).
  • the asset 102 may transmit operating data periodically based on a particular frequency (e.g., daily, hourly, every fifteen minutes, once per minute, once per second, etc.), or the asset 102 may be configured to transmit a continuous, real-time feed of operating data.
  • the asset 102 may be configured to transmit operating data based on certain triggers, such as when sensor and/or actuator measurements satisfy triggering criteria for any abnormal-condition indicators.
  • the asset 102 may transmit operating data in other manners as well.
  • operating data for the asset 102 may include sensor data, actuator data, and/or abnormal-condition data.
  • the asset 102 may be configured to provide the operating data in a single data stream, while in other implementations the asset 102 may be configured to provide the operating data in multiple, distinct data streams.
  • the asset 102 may provide to the analytics system 108 a first data stream of sensor and/or actuator data and a second data stream of abnormal-condition data. Other possibilities also exist.
  • Sensor and actuator data may take various forms. For example, at times, sensor data (or actuator data) may include measurements obtained by each of the sensors (or actuators) of the asset 102, while at other times it may include measurements obtained by only a subset of the sensors (or actuators) of the asset 102.
  • the sensor and/or actuator data may include measurements obtained by the sensors and/or actuators associated with a given triggered abnormal-condition indicator.
  • For example, if a triggered fault code is Fault Code 1 from FIG. 3, then the sensor data may include raw measurements obtained by Sensors A and C.
  • the data may include measurements obtained by one or more sensors or actuators not directly associated with the triggered fault code.
  • the data may additionally include measurements obtained by Actuator B and/or other sensors or actuators.
  • the asset 102 may include particular sensor data in the operating data based on a fault-code rule or instruction provided by the analytics system 108, which may have, for example, determined that there is a correlation between what Actuator B measures and what caused Fault Code 1 to be triggered in the first place.
  • Other examples are also possible.
  • the data may include one or more sensor and/or actuator measurements from each sensor and/or actuator of interest based on a particular time of interest, which may be selected based on a number of factors.
  • the particular time of interest may be based on a sampling rate.
  • the particular time of interest may be based on the time at which an abnormal-condition indicator is triggered.
  • the data may include one or more respective sensor and/or actuator measurements from each sensor and/or actuator of interest (e.g., sensors and/or actuators directly and indirectly associated with the triggered indicator).
  • the one or more measurements may be based on a particular number of measurements or particular duration of time around the time of the triggered abnormal-condition indicator.
  • the sensors and actuators of interest might include Actuator B and Sensor C.
  • the one or more measurements may include the most recent respective measurements obtained by Actuator B and Sensor C prior to the triggering of the fault code (e.g., triggering measurements) or a respective set of measurements before, after, or about the triggering measurements.
  • a set of five measurements may include the five measurements before or after the triggering measurement (e.g., excluding the triggering measurement), the four measurements before or after the triggering measurement and the triggering measurement, or the two measurements before and the two after as well as the triggering measurement, among other possibilities.
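  • As a purely illustrative sketch, the following Python snippet shows one way such a set of measurements around a triggering measurement might be selected; the window sizes and data values are hypothetical examples.

```python
# Illustrative sketch: selecting a set of measurements around the triggering
# measurement, e.g., the two measurements before, the triggering measurement
# itself, and the two measurements after.

def measurement_window(measurements, trigger_index, before=2, after=2):
    """measurements: time-ordered list of values; trigger_index: index of the
    triggering measurement. Returns the `before` measurements preceding the
    trigger, the triggering measurement, and the `after` measurements after it."""
    start = max(0, trigger_index - before)
    end = min(len(measurements), trigger_index + after + 1)
    return measurements[start:end]

# Example: a five-measurement set around a triggering measurement at index 3.
print(measurement_window([10, 12, 11, 90, 13, 12, 11], trigger_index=3))
```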
  • the abnormal-condition data may take various forms.
  • the abnormal-condition data may include or take the form of an indicator that is operable to uniquely identify a particular abnormal condition that occurred at the asset 102 from all other abnormal conditions that may occur at the asset 102 .
  • the abnormal-condition indicator may take the form of an alphabetic, numeric, or alphanumeric identifier, among other examples.
  • the abnormal-condition indicator may take the form of a string of words that is descriptive of the abnormal condition, such as “Overheated Engine” or “Out of Fuel”, among other examples.
  • the analytics system 108 may be configured to receive operating data from one or more assets and/or data sources.
  • the data intake system may be configured to perform one or more operations to the received data and then relay the data to the data science system of the analytics system 108 .
  • the data science system may analyze the received data and based on such analysis, perform one or more operations.
  • the analytics system 108 may be configured to define predictive models and corresponding workflows based on received operating data for one or more assets and/or received external data related to the one or more assets.
  • the analytics system 108 may define model-workflow pairs based on various other data as well.
  • a model-workflow pair may include a set of program instructions that cause an asset to monitor certain operating conditions and carry out certain operations that help facilitate preventing the occurrence of a particular event suggested by the monitored operating conditions.
  • a predictive model may include one or more algorithms whose inputs are sensor and/or actuator data from one or more sensors and/or actuators of an asset and whose outputs are utilized to determine a probability that a particular event may occur at the asset within a particular period of time in the future.
  • a workflow may include one or more triggers (e.g., model output values) and corresponding operations that the asset carries out based on the triggers.
  • the analytics system 108 may be configured to define aggregate and/or individualized predictive models and/or workflows.
  • An “aggregate” model/workflow may refer to a model/workflow that is generic for a group of assets and defined without taking into consideration particular characteristics of the assets to which the model/workflow is deployed.
  • an “individualized” model/workflow may refer to a model/workflow that is specifically tailored for a single asset or a subgroup of assets from the group of assets and defined based on particular characteristics of the single asset or subgroup of assets to which the model/workflow is deployed.
  • the analytics system 108 may be configured to define an aggregate model-workflow pair based on aggregated data for a plurality of assets. Defining aggregate model-workflow pairs may be performed in a variety of manners.
  • FIG. 5 is a flow diagram 500 depicting one possible example of a definition phase that may be used for defining model-workflow pairs.
  • the example definition phase is described as being carried out by the analytics system 108 , but this definition phase may be carried out by other systems as well.
  • the flow diagram 500 is provided for sake of clarity and explanation and that numerous other combinations of operations may be utilized to define a model-workflow pair.
  • the analytics system 108 may begin by defining a set of data that forms the basis for a given predictive model (e.g., the data of interest).
  • the data of interest may derive from a number of sources, such as the assets 102 and 104 and the data source 112 , and may be stored in a database of the analytics system 108 .
  • the data of interest may include historical data for a particular set of assets from a group of assets or all of the assets from a group of assets (e.g., the assets of interest). Moreover, the data of interest may include measurements from a particular set of sensors and/or actuators from each of the assets of interest or from all of the sensors and/or actuators from each of the assets of interest. Further still, the data of interest may include data from a particular period of time in the past, such as two weeks' worth of historical data.
  • the data of interest may include a variety of types of data, which may depend on the given predictive model.
  • the data of interest may include at least operating data indicating operating conditions of assets, where the operating data is as discussed above in the Collection of Operating Data section.
  • the data of interest may include environment data indicating environments in which assets are typically operated and/or scheduling data indicating planned dates and times during which assets are to carry out certain tasks. Other types of data may also be included in the data of interest.
  • the data of interest may be defined in a number of manners.
  • the data of interest may be user-defined.
  • a user may operate an output system 110 that receives user inputs indicating a selection of certain data of interest, and the output system 110 may provide to the analytics system 108 data indicating such selections. Based on the received data, the analytics system 108 may then define the data of interest.
  • the data of interest may be machine-defined.
  • the analytics system 108 may perform various operations, such as simulations, to determine the data of interest that generates the most accurate predictive model. Other examples are also possible.
  • the analytics system 108 may be configured to, based on the data of interest, define an aggregate, predictive model that is related to the operation of assets.
  • an aggregate, predictive model may define a relationship between operating conditions of assets and a likelihood of an event occurring at the assets.
  • an aggregate, predictive model may receive as inputs sensor data from sensors of an asset and/or actuator data from actuators of the asset and output a probability that an event will occur at the asset within a certain amount of time into the future.
  • the event that the predictive model predicts may vary depending on the particular implementation.
  • the event may be a failure and so, the predictive model may be a failure model that predicts whether a failure will occur within a certain period of time in the future (failure models are discussed in detail below in the Health-Score Models & Workflows section).
  • the event may be an asset completing a task and so, the predictive model may predict the likelihood that an asset will complete a task on time.
  • the event may be a fluid or component replacement, and so, the predictive model may predict an amount of time before a particular asset fluid or component needs to be replaced.
  • the event may be a change in asset productivity, and so, the predictive model may predict the productivity of an asset during a particular period of time in the future.
  • the event may be the occurrence of a “leading indicator” event, which may indicate an asset behavior that differs from expected asset behaviors, and so, the predictive model may predict the likelihood of one or more leading indicator events occurring in the future.
  • Other examples of predictive models are also possible.
  • the analytics system 108 may define the aggregate, predictive model in a variety of manners. In general, this operation may involve utilizing one or more modeling techniques to generate a model that returns a probability between zero and one, such as a random forest technique, logistic regression technique, or other regression technique, among other modeling techniques. In a particular example implementation, the analytics system 108 may define the aggregate, predictive model in line with the below discussion referencing FIG. 7 . The analytics system 108 may define the aggregate model in other manners as well.
  • the analytics system 108 may be configured to define an aggregate workflow that corresponds to the defined model from block 504 .
  • a workflow may take the form of an action that is carried out based on a particular output of a predictive model.
  • a workflow may include one or more operations that an asset performs based on the output of the defined predictive model. Examples of operations that may be part of a workflow include an asset acquiring data according to a particular data-acquisition scheme, transmitting data to the analytics system 108 according to a particular data-transmission scheme, executing a local diagnostic tool, and/or modifying an operating condition of the asset, among other example workflow operations.
  • a particular data-acquisition scheme may indicate how an asset acquires data.
  • a data-acquisition scheme may indicate certain sensors and/or actuators from which the asset obtains data, such as a subset of sensors and/or actuators of the asset's plurality of sensors and actuators (e.g., sensors/actuators of interest).
  • a data-acquisition scheme may indicate an amount of data that the asset obtains from the sensors/actuators of interest and/or a sampling frequency at which the asset acquires such data.
  • Data-acquisition schemes may include various other attributes as well.
  • a particular data-acquisition scheme may correspond to a predictive model for asset health and may be adjusted to acquire more data and/or particular data (e.g., from particular sensors) based on a decreasing asset health.
  • a particular data-acquisition scheme may correspond to a leading-indicators predictive model and may be adjusted to modify the data acquired by asset sensors and/or actuators based on an increased likelihood of an occurrence of a leading indicator event that may signal that a subsystem failure might occur.
  • a particular data-transmission scheme may indicate how an asset transmits data to the analytics system 108 .
  • a data-transmission scheme may indicate a type of data (and may also indicate a format and/or structure of the data) that the asset should transmit, such as data from certain sensors or actuators, a number of data samples that the asset should transmit, a transmission frequency, and/or a priority-scheme for the data that the asset should include in its data transmission.
  • a particular data-acquisition scheme may include a data-transmission scheme or a data-acquisition scheme may be paired with a data-transmission scheme.
  • a particular data-transmission scheme may correspond to a predictive model for asset health and may be adjusted to transmit data less frequently based on an asset health that is above a threshold value. Other examples are also possible.
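  • The following Python sketch is a non-limiting illustration of how data-acquisition and data-transmission schemes might be represented and adjusted based on a model output such as an asset health metric; the field names, threshold, and adjustment rules are assumptions made for illustration only.

```python
# Illustrative sketch of hypothetical data-acquisition and data-transmission
# schemes and an example adjustment driven by a health-metric model output.

from dataclasses import dataclass
from typing import List

@dataclass
class DataAcquisitionScheme:
    sensors_of_interest: List[str]   # which sensors/actuators the asset reads
    sampling_hz: float               # sampling frequency

@dataclass
class DataTransmissionScheme:
    data_types: List[str]            # types of data the asset transmits
    transmit_every_s: float          # how often the asset transmits

def adjust_schemes(health, acquisition, transmission):
    """Hypothetical rule: when asset health falls below a threshold, sample more
    often and watch an additional sensor; otherwise, transmit less frequently."""
    if health < 0.75:
        acquisition.sampling_hz *= 2
        acquisition.sensors_of_interest.append("Sensor C")
    else:
        transmission.transmit_every_s *= 2
    return acquisition, transmission

acq = DataAcquisitionScheme(["Sensor A", "Actuator B"], sampling_hz=1.0)
tx = DataTransmissionScheme(["sensor data", "abnormal-condition data"],
                            transmit_every_s=60.0)
print(adjust_schemes(0.6, acq, tx))
```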
  • a local diagnostic tool may be a set of procedures or the like that are stored locally at an asset.
  • the local diagnostic tool may generally facilitate diagnosing a cause of a fault or failure at an asset.
  • a local diagnostic tool may pass test inputs into a subsystem of an asset or a portion thereof to obtain test results, which may facilitate diagnosing the cause of a fault or failure.
  • These local diagnostic tools are typically dormant on an asset and will not be executed unless the asset receives particular diagnostic instructions.
  • Other local diagnostic tools are also possible.
  • a particular local diagnostic tool may correspond to a predictive model for health of a subsystem of an asset and may be executed based on a subsystem health that is at or below a threshold value.
  • a workflow may involve modifying an operating condition of an asset.
  • one or more actuators of an asset may be controlled to facilitate modifying an operating condition of the asset.
  • Various operating conditions may be modified, such as a speed, temperature, pressure, fluid level, current draw, and power distribution, among other examples.
  • an operating-condition modification workflow may correspond to a predictive model for predicting whether an asset will complete a task on time and may cause the asset to increase its speed of travel based on a predicted completion percentage that is below a threshold value.
  • the aggregate workflow may be defined in a variety of manners.
  • the aggregate workflow may be user defined. Specifically, a user may operate a computing device that receives user inputs indicating selection of certain workflow operations, and the computing device may provide to the analytics system 108 data indicating such selections. Based on this data, the analytics system 108 may then define the aggregate workflow.
  • the aggregate workflow may be machine-defined.
  • the analytics system 108 may perform various operations, such as simulations, to determine a workflow that may facilitate determining a cause of the probability output by the predictive model and/or preventing an occurrence of an event predicted by the model.
  • Other examples of defining the aggregate workflow are also possible.
  • the analytics system 108 may define the triggers of the workflow.
  • a workflow trigger may be a value of the probability output by the predictive model or a range of values output by the predictive model.
  • a workflow may have multiple triggers, each of which may cause a different operation or operations to occur.
  • FIG. 6A is a conceptual illustration of an aggregate model-workflow pair 600 .
  • the aggregate model-workflow pair illustration 600 includes a column for model inputs 602 , model calculations 604 , model output ranges 606 , and corresponding workflow operations 608 .
  • the predictive model has a single input, data from Sensor A, and has two calculations, Calculations I and II. The output of this predictive model determines the workflow operation that is performed: if the output probability is less than or equal to 80%, workflow Operation 1 is performed; otherwise, workflow Operation 2 is performed.
  • Other example model-workflow pairs are possible and contemplated herein.
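  • As one non-limiting illustration, the aggregate model-workflow pair of FIG. 6A might be expressed in Python as follows; Calculations I and II are shown as hypothetical placeholder functions, since FIG. 6A does not specify their form.

```python
# Illustrative sketch of the aggregate model-workflow pair of FIG. 6A.
# The input is data from Sensor A; outputs at or below 80% trigger Operation 1
# and outputs above 80% trigger Operation 2. Calculations are placeholders.

def calculation_i(sensor_a_value):
    return min(max(sensor_a_value / 200.0, 0.0), 1.0)   # placeholder Calculation I

def calculation_ii(probability):
    return probability ** 2                              # placeholder Calculation II

def aggregate_model(sensor_a_value):
    """Return a probability between zero and one."""
    return calculation_ii(calculation_i(sensor_a_value))

def aggregate_workflow(model_output):
    if model_output <= 0.80:
        return "Operation 1"
    return "Operation 2"

print(aggregate_workflow(aggregate_model(150)))   # -> "Operation 1"
```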
  • the analytics system 108 may be configured to define individualized predictive models and/or workflows for assets, which may involve utilizing the aggregate model-workflow pair as a baseline. The individualization may be based on certain characteristics of assets. In this way, the analytics system 108 may provide a given asset a more accurate and robust model-workflow pair compared to the aggregate model-workflow pair.
  • the analytics system 108 may be configured to decide whether to individualize the aggregate model defined at block 504 for a given asset, such as the asset 102 .
  • the analytics system 108 may carry out this decision in a number of manners.
  • the analytics system 108 may be configured to define individualized predictive models by default. In other cases, the analytics system 108 may be configured to decide whether to define an individualized predictive model based on certain characteristics of the asset 102 . For example, in some cases, only assets of certain types or classes, or operated in certain environments, or that have certain health scores may receive an individualized predictive model. In yet other cases, a user may define whether an individualized model is defined for the asset 102 . Other examples are also possible.
  • the analytics system 108 may do so at block 510 . Otherwise, the analytics system 108 may proceed to block 512 .
  • the analytics system 108 may be configured to define an individualized predictive model in a number of manners.
  • the analytics system 108 may define an individualized predictive model based at least in part on one or more characteristics of the asset 102 .
  • the analytics system 108 may have determined one or more asset characteristics of interest that form the basis of individualized models. In practice, different predictive models may have different corresponding characteristics of interest.
  • the characteristics of interest may be characteristics that are related to the aggregate model-workflow pair.
  • the characteristics of interest may be characteristics that the analytics system 108 has determined influence the accuracy of the aggregate model-workflow pair. Examples of such characteristics may include asset age, asset usage, asset capacity, asset load, asset health (perhaps indicated by an asset health metric, discussed below), asset class (e.g., brand and/or model), and environment in which an asset is operated, among other characteristics.
  • the analytics system 108 may have determined the characteristics of interest in a number of manners. In one example, the analytics system 108 may have done so by performing one or more modeling simulations that facilitate identifying the characteristics of interest. In another example, the characteristics of interest may have been predefined and stored in the data storage of the analytics system 108. In yet another example, characteristics of interest may have been defined by a user and provided to the analytics system 108 via the output system 110. Other examples are also possible.
  • the analytics system 108 may determine characteristics of the asset 102 that correspond to the determined characteristics of interest. That is, the analytics system 108 may determine a type, value, existence or lack thereof, etc. of the asset 102's characteristics that correspond to the characteristics of interest. The analytics system 108 may perform this operation in a number of manners.
  • the analytics system 108 may be configured to perform this operation based on data originating from the asset 102 and/or the data source 112 .
  • the analytics system 108 may utilize operating data for the asset 102 and/or external data from the data source 112 to determine one or more characteristics of the asset 102 .
  • Other examples are also possible.
  • the analytics system 108 may define an individualized, predictive model by modifying the aggregate model.
  • the aggregate model may be modified in a number of manners.
  • the aggregate model may be modified by changing (e.g., adding, removing, re-ordering, etc.) one or more model inputs, changing one or more sensor and/or actuator measurement ranges that correspond to asset-operating limits (e.g., changing operating limits that correspond to “leading indicator” events), changing one or more model calculations, weighting (or changing a weight of) a variable or output of a calculation, utilizing a modeling technique that differs from that which was utilized to define the aggregate model, and/or utilizing a response variable that differs from that which was utilized to define the aggregate model, among other examples.
  • FIG. 6B is a conceptual illustration of an individualized model-workflow pair 610 .
  • the individualized model-workflow pair illustration 610 is a modified version of the aggregate model-workflow pair from FIG. 6A .
  • the individualized model-workflow pair illustration 610 includes a modified column for model inputs 612 and model calculations 614 and includes the original columns for model output ranges 606 and workflow operations 608 from FIG. 6A .
  • the individualized model has two inputs, data from Sensor A and Actuator B, and has two calculations, Calculations II and III.
  • the output ranges and corresponding workflow operations are the same as those of FIG. 6A .
  • the analytics system 108 may have defined the individualized model in this way based on determining that the asset 102 is, for example, relatively old and has relatively poor health, among other reasons.
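  • The following Python sketch illustrates, purely by way of example, how an aggregate model definition might be individualized by modifying its inputs and calculations based on asset characteristics such as age and health; the individualization rule and all names are hypothetical.

```python
# Illustrative sketch of individualizing an aggregate model definition in the
# spirit of FIG. 6B. The characteristic-based rule is a hypothetical example.

AGGREGATE_MODEL = {
    "inputs": ["Sensor A"],
    "calculations": ["Calculation I", "Calculation II"],
}

def individualize_model(aggregate, characteristics):
    """Return a copy of the aggregate model, modified for the given asset."""
    model = {key: list(value) for key, value in aggregate.items()}
    # Example rule: for a relatively old asset in relatively poor health, also
    # consider Actuator B and swap Calculation I for Calculation III.
    if (characteristics.get("age_years", 0) > 10
            and characteristics.get("health", 1.0) < 0.5):
        model["inputs"].append("Actuator B")
        model["calculations"] = ["Calculation II", "Calculation III"]
    return model

print(individualize_model(AGGREGATE_MODEL, {"age_years": 15, "health": 0.4}))
```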
  • individualizing the aggregate model may depend on the one or more characteristics of the given asset.
  • certain characteristics may affect the modification of the aggregate model differently than other characteristics.
  • the type, value, existence, or the like of a characteristic may affect the modification as well.
  • the asset age may affect a first part of the aggregate model, while an asset class may affect a second, different part of the aggregate model.
  • an asset age within a first range of ages may affect the first part of the aggregate model in a first manner
  • an asset age within a second range of ages, different from the first range may affect the first part of the aggregate model in a second, different manner.
  • Other examples are also possible.
  • individualizing the aggregate model may depend on considerations in addition to or alternatively to asset characteristics.
  • the aggregate model may be individualized based on sensor and/or actuator readings of an asset when the asset is known to be in a relatively good operating state (e.g., as defined by a mechanic or the like).
  • the analytics system 108 may be configured to receive an indication that the asset is in a good operating state (e.g., from a computing device operated by a mechanic) along with operating data from the asset. Based at least on the operating data, the analytics system 108 may then individualize the leading-indicator predictive model for the asset by modifying respective operating limits corresponding to “leading indicator” events.
  • Other examples are also possible.
  • the analytics system 108 may also be configured to decide whether to individualize a workflow for the asset 102 .
  • the analytics system 108 may carry out this decision in a number of manners.
  • the analytics system 108 may perform this operation in line with block 508 .
  • the analytics system 108 may decide whether to define an individualized workflow based on the individualized predictive model.
  • the analytics system 108 may decide to define an individualized workflow if an individualized predictive model was defined. Other examples are also possible.
  • the analytics system 108 may do so at block 514 . Otherwise, the analytics system 108 may end the definition phase.
  • the analytics system 108 may be configured to define an individualized workflow in a number of manners.
  • the analytics system 108 may define an individualized workflow based at least in part on one or more characteristics of the asset 102 .
  • the analytics system 108 may have determined one or more asset characteristics of interest that form the basis of an individualized workflow, which may have been determined in line with the discussion of block 510 .
  • these characteristics of interest may be characteristics that affect the efficacy of the aggregate workflow. Such characteristics may include any of the example characteristics discussed above. Other characteristics are possible as well.
  • the analytics system 108 may determine characteristics of the asset 102 that correspond to the determined characteristics of interest for an individualized workflow. In example implementations, the analytics system 108 may determine characteristics of the asset 102 in a manner similar to the characteristic determination discussed with reference to block 510 and in fact, may utilize some or all of that determination.
  • the analytics system 108 may individualize a workflow for the asset 102 by modifying the aggregate workflow.
  • the aggregate workflow may be modified in a number of manners.
  • the aggregate workflow may be modified by changing (e.g., adding, removing, re-ordering, replacing, etc.) one or more workflow operations (e.g., changing from a first data-acquisition scheme to a second scheme or changing from a particular data-acquisition scheme to a particular local diagnostic tool) and/or changing (e.g., increasing, decreasing, adding to, removing from, etc.) the corresponding model output value or range of values that triggers particular workflow operations, among other examples.
  • modification to the aggregate workflow may depend on the one or more characteristics of the asset 102 in a manner similar to the modification to the aggregate model.
  • FIG. 6C is a conceptual illustration of an individualized model-workflow pair 620 .
  • the individualized model-workflow pair illustration 620 is a modified version of the aggregate model-workflow pair from FIG. 6A .
  • the individualized model-workflow pair illustration 620 includes the original columns for model inputs 602 , model calculations 604 , and model output ranges 606 from FIG. 6A , but includes a modified column for workflow operations 628 .
  • the individualized model-workflow pair is similar to the aggregate model-workflow pair from FIG. 6A, except that when the output of the model is greater than 80%, workflow Operation 3 is triggered instead of Operation 2.
  • the analytics system 108 may have defined this individual workflow based on determining that the asset 102 , for example, operates in an environment that historically increases the occurrence of asset failures, among other reasons.
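  • By way of a further non-limiting illustration, the aggregate workflow might be individualized in Python along the following lines; the environment-based rule and operation names are hypothetical.

```python
# Illustrative sketch of individualizing the aggregate workflow in the spirit
# of FIG. 6C: the operation triggered by model outputs above 80% is replaced.

AGGREGATE_WORKFLOW = {"<=0.80": "Operation 1", ">0.80": "Operation 2"}

def individualize_workflow(aggregate, operates_in_harsh_environment):
    """Return a copy of the aggregate workflow, modified for the given asset."""
    workflow = dict(aggregate)
    if operates_in_harsh_environment:
        workflow[">0.80"] = "Operation 3"   # swap the operation triggered above 80%
    return workflow

print(individualize_workflow(AGGREGATE_WORKFLOW, True))
```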
  • the analytics system 108 may end the definition phase. At that point, the analytics system 108 may then have an individualized model-workflow pair for the asset 102 .
  • the analytics system 108 may be configured to define an individualized predictive model and/or corresponding workflow for a given asset without first defining an aggregate predictive model and/or corresponding workflow. Other examples are also possible.
  • the analytics system 108 may be configured to define predictive models and corresponding workflows associated with the health of assets.
  • one or more predictive models for monitoring the health of an asset may be utilized to output a health metric (e.g., “health score”) for an asset, which is a single, aggregated metric that indicates whether a failure will occur at a given asset within a given timeframe into the future (e.g., the next two weeks).
  • a health metric may indicate a likelihood that no failures from a group of failures will occur at an asset within a given timeframe into the future, or a health metric may indicate a likelihood that at least one failure from a group of failures will occur at an asset within a given timeframe into the future.
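  • As a simple, non-limiting illustration, if the individual failure models are assumed (for illustration only) to be independent, a health metric indicating the likelihood that no failure of interest occurs within the timeframe might be computed as follows.

```python
# Illustrative sketch: combining individual failure probabilities into a single
# health metric under a hypothetical independence assumption.

def health_metric(failure_probabilities):
    """failure_probabilities: likelihood of each failure of interest occurring
    within the given future timeframe (e.g., the next two weeks).
    Returns the likelihood that none of those failures occurs."""
    health = 1.0
    for p in failure_probabilities:
        health *= (1.0 - p)
    return health

# Example: three failure models output 2%, 5%, and 10%.
print(round(health_metric([0.02, 0.05, 0.10]), 3))   # approximately 0.838
```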
  • the predictive models utilized to output a health metric and the corresponding workflows may be defined as aggregate or individualized models and/or workflows, in line with the above discussion.
  • the analytics system 108 may be configured to define different predictive models that output different levels of health metrics and to define different corresponding workflows.
  • the analytics system 108 may define a predictive model that outputs a health metric for the asset as a whole (i.e., an asset-level health metric).
  • the analytics system 108 may define a respective predictive model that outputs a respective health metric for one or more subsystems of the asset (i.e., subsystem-level health metrics).
  • the outputs of each subsystem-level predictive model may be combined to generate an asset-level health metric.
  • Other examples are also possible.
  • FIG. 7 is a flow diagram 700 depicting one possible example of a modeling phase that may be used for defining a model that outputs a health metric.
  • the example modeling phase is described as being carried out by the analytics system 108 , but this modeling phase may be carried out by other systems as well.
  • the flow diagram 700 is provided for sake of clarity and explanation and that numerous other combinations of operations may be utilized to determine a health metric.
  • the analytics system 108 may begin by defining a set of the one or more failures that form the basis for the health metric (i.e., the failures of interest).
  • the one or more failures may be those failures that could render an asset (or a subsystem thereof) inoperable if they were to occur.
  • the analytics system 108 may take steps to define a model for predicting a likelihood of any of the failures occurring within a given timeframe in the future (e.g., the next two weeks).
  • the analytics system 108 may analyze historical operating data for a group of one or more assets to identify past occurrences of a given failure from the set of failures.
  • the analytics system 108 may identify a respective set of operating data that is associated with each identified past occurrence of the given failure (e.g., sensor and/or actuator data from a given timeframe prior to the occurrence of the given failure).
  • the analytics system 108 may analyze the identified sets of operating data associated with past occurrences of the given failure to define a relationship (e.g., a failure model) between (1) the values for a given set of operating metrics and (2) the likelihood of the given failure occurring within a given timeframe in the future (e.g., the next two weeks).
  • the defined relationships for each failure in the defined set (e.g., the individual failure models) may then be combined into the predictive model that outputs the health metric.
  • the analytics system 108 may also continue to refine the predictive model for the defined set of one or more failures by repeating steps 704 - 710 on the updated operating data.
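  • As a purely illustrative sketch, one of the modeling techniques mentioned above (logistic regression) might be applied in Python as follows to define such a relationship; scikit-learn is assumed to be available, and the feature values and labels are hypothetical.

```python
# Illustrative sketch: defining a relationship between operating metrics and
# the likelihood of a given failure occurring within a future timeframe, using
# logistic regression. Feature values and labels below are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds operating metrics from a historical window (e.g., mean Sensor A,
# mean Actuator B, max Sensor C); the label is 1 if the given failure occurred
# within the timeframe following that window, else 0.
X = np.array([[120, 800, 62], [90, 600, 50], [140, 950, 68], [85, 550, 48]])
y = np.array([1, 0, 1, 0])

failure_model = LogisticRegression().fit(X, y)

# Probability that the failure occurs within the timeframe, given current metrics.
print(failure_model.predict_proba([[130, 900, 66]])[0][1])
```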
  • the analytics system 108 may begin by defining a set of the one or more failures that form the basis for the health metric.
  • the analytics system 108 may perform this function in various manners.
  • the set of the one or more failures may be based on one or more user inputs.
  • the analytics system 108 may receive from a computing system operated by a user, such as the output system 110 , input data indicating a user selection of the one or more failures.
  • the set of one or more failures may be user-defined.
  • the set of the one or more failures may be based on a determination made by the analytics system 108 (e.g., machine-defined).
  • the analytics system 108 may be configured to define the set of one or more failures, which may occur in a number of manners.
  • the analytics system 108 may be configured to define the set of failures based on one or more characteristics of the asset 102 . That is, certain failures may correspond to certain characteristics, such as asset type, class, etc., of an asset. For example, each type and/or class of asset may have respective failures of interest.
  • the analytics system 108 may be configured to define the set of failures based on historical data stored in the databases of the analytics system 108 and/or external data provided by the data source 112 . For example, the analytics system 108 may utilize such data to determine which failures result in the longest repair-time and/or which failures are historically followed by additional failures, among other examples.
  • the set of one or more failures may be defined based on a combination of user inputs and determinations made by the analytics system 108 .
  • Other examples are also possible.
  • the analytics system 108 may analyze historical operating data for a group of one or more assets (e.g., abnormal-behavior data) to identify past occurrences of a given failure.
  • the group of the one or more assets may include a single asset, such as asset 102, or multiple assets of a same or similar type, such as a fleet of assets that includes the assets 102 and 104.
  • the analytics system 108 may analyze a particular amount of historical operating data, such as a certain amount of time's worth of data (e.g., a month's worth) or a certain number of data-points (e.g., the most recent thousand data-points), among other examples.
  • identifying past occurrences of the given failure may involve the analytics system 108 identifying the type of operating data, such as abnormal-condition data, that indicates the given failure.
  • a given failure may be associated with one or multiple abnormal-condition indicators, such as fault codes. That is, when the given failure occurs, one or multiple abnormal-condition indicators may be triggered. As such, abnormal-condition indicators may be reflective of an underlying symptom of a given failure.
  • the analytics system 108 may identify the past occurrences of the given failure in a number of manners. For instance, the analytics system 108 may locate, from historical operating data stored in the databases of the analytics system 108, abnormal-condition data corresponding to the abnormal-condition indicators associated with the given failure. Each located instance of abnormal-condition data would indicate an occurrence of the given failure. Based on this located abnormal-condition data, the analytics system 108 may identify a time at which a past failure occurred.
  • the analytics system 108 may identify a respective set of operating data that is associated with each identified past occurrence of the given failure.
  • the analytics system 108 may identify a set of sensor and/or actuator data from a certain timeframe around the time of the given occurrence of the given failure.
  • the set of data may be from a particular timeframe (e.g., two weeks) before, after, or around the given occurrence of the failure.
  • the set of data may be identified from a certain number of data-points before, after, or around the given occurrence of the failure.
  • the set of operating data may include sensor and/or actuator data from some or all of the sensors and actuators of the asset 102 .
  • the set of operating data may include data from sensors and/or actuators associated with an abnormal-condition indicator corresponding to the given failure.
  • FIG. 8 depicts a conceptual illustration of historical operating data that the analytics system 108 may analyze to facilitate defining a model.
  • Plot 800 may correspond to a segment of historical data that originated from some (e.g., Sensor A and Actuator B) or all of the sensors and actuators of the asset 102 .
  • the plot 800 includes time on the x-axis 802, measurement values on the y-axis 804, and sensor data 806 corresponding to Sensor A and actuator data 808 corresponding to Actuator B, each of which includes various data-points representing measurements at particular points in time, Ti.
  • the plot 800 includes an indication of an occurrence of a failure 810 that occurred at a past time, Tf (e.g., the "time of failure"), and an indication of an amount of time 812 before the occurrence of the failure, ΔT, from which sets of operating data are identified.
  • Tf − ΔT defines a timeframe 814 of data-points of interest.
  • the analytics system 108 may determine whether there are any remaining occurrences for which a set of operating data should be identified. In the event that there is a remaining occurrence, block 706 would be repeated for each remaining occurrence.
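  • As a concrete illustration of the operations at blocks 704 and 706, the following is a minimal sketch in Python (the patent does not prescribe a language); the fault codes, column names, and two-week ΔT value are hypothetical placeholders. It locates past occurrences of a given failure from abnormal-condition data and slices out the window of operating data from ΔT before each occurrence through the time of failure.

```python
from datetime import timedelta
import pandas as pd

# Hypothetical fault codes associated with the given failure, and a two-week window.
FAILURE_FAULT_CODES = {"FC-101", "FC-205"}
DELTA_T = timedelta(weeks=2)

def identify_failure_times(abnormal_df: pd.DataFrame) -> list:
    """Block 704: return the timestamps of past occurrences of the given failure.
    `abnormal_df` is assumed to have 'timestamp' and 'fault_code' columns."""
    mask = abnormal_df["fault_code"].isin(FAILURE_FAULT_CODES)
    return list(abnormal_df.loc[mask, "timestamp"])

def slice_operating_windows(operating_df: pd.DataFrame, failure_times: list) -> list:
    """Block 706: return one slice of sensor/actuator data per failure occurrence,
    covering the window [Tf - DELTA_T, Tf]."""
    windows = []
    for t_f in failure_times:
        in_window = (operating_df["timestamp"] >= t_f - DELTA_T) & \
                    (operating_df["timestamp"] <= t_f)
        windows.append(operating_df[in_window])
    return windows
```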
  • the analytics system 108 may analyze the identified sets of operating data associated with the past occurrences of the given failure to define a relationship (e.g., a failure model) between (1) a given set of operating metrics (e.g., a given set of sensor and/or actuator measurements) and (2) the likelihood of the given failure occurring within a given timeframe in the future (e.g., the next two weeks). That is, a given failure model may take as inputs sensor and/or actuator measurements from one or more sensors and/or actuators and output a probability that the given failure will occur within the given timeframe in the future.
  • a failure model may define a relationship between operating conditions of the asset 102 and the likelihood of a failure occurring.
  • a failure model may receive a number of other data inputs, also known as features, which are derived from the sensor and/or actuator signals.
  • Such features may include an average or range of values that were historically measured when a failure occurred, an average or range of value gradients (e.g., a rate of change in measurements) that were historically measured prior to an occurrence of a failure, a duration of time between failures (e.g., an amount of time or number of data-points between a first occurrence of a failure and a second occurrence of a failure), and/or one or more failure patterns indicating sensor and/or actuator measurement trends around the occurrence of a failure.
  • a failure model may be defined in a number of manners.
  • the analytics system 108 may define a failure model by utilizing one or more modeling techniques that return a probability between zero and one, which may take the form of any modeling techniques described above.
  • defining a failure model may involve the analytics system 108 generating a response variable based on the historical operating data identified at block 706 .
  • the analytics system 108 may determine an associated response variable for each set of sensor and/or actuator measurements received at a particular point in time.
  • the response variable may take the form of a data set associated with the failure model.
  • the response variable may indicate whether the given set of measurements is within any of the timeframes determined at block 706 . That is, a response variable may reflect whether a given set of data is from a time of interest about the occurrence of a failure.
  • the response variable may be a binary-valued response variable such that, if the given set of measurements is within any of the determined timeframes, the associated response variable is assigned a value of one, and otherwise, the associated response variable is assigned a value of zero.
  • response variables associated with sets of measurements that are within the timeframe 814 have a value of one (e.g., Yres at times Ti+3 through Ti+8), while response variables associated with sets of measurements outside the timeframe 814 have a value of zero (e.g., Yres at times Ti through Ti+2 and Ti+9 through Ti+10).
  • Other response variables are also possible.
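  • To make the binary-valued response variable concrete, here is a minimal sketch (continuing the hypothetical pandas structures from the earlier sketch) that assigns Yres = 1 to measurements falling inside any identified failure window, such as the timeframe 814, and Yres = 0 otherwise.

```python
import pandas as pd

def label_response_variable(operating_df: pd.DataFrame, windows: list) -> pd.Series:
    """Assign Y_res = 1 to measurements inside any failure window and 0 otherwise.
    `windows` are slices of `operating_df`, as produced by the earlier sketch."""
    y = pd.Series(0, index=operating_df.index, name="Y_res")
    for window in windows:
        y.loc[window.index] = 1  # data-points of interest around a failure
    return y
```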
  • the analytics system 108 may train the failure model with the historical operating data identified at block 706 and the generated response variable. Based on this training process, the analytics system 108 may then define the failure model that receives as inputs various sensor and/or actuator data and outputs a probability between zero and one that a failure will occur within a period of time equivalent to the timeframe used to generate the response variable.
  • training with the historical operating data identified at block 706 and the generated response variable may result in variable importance statistics for each sensor and/or actuator.
  • a given variable importance statistic may indicate the sensor's or actuator's relative effect on the probability that a given failure will occur within the period of time into the future.
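  • The patent does not name a particular modeling technique or library; as one hedged example, a scikit-learn random forest returns a probability between zero and one via predict_proba and exposes per-input variable importance statistics via feature_importances_. The sketch below assumes X is a table of historical operating data (one row per measurement time) and y is the response variable generated above.

```python
from sklearn.ensemble import RandomForestClassifier

def train_failure_model(X, y):
    """Train a model on historical operating data X (rows: measurement times,
    columns: sensor/actuator values or derived features) and response variable y."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model

def failure_probability(model, latest_measurements):
    """Probability (0..1) that the given failure occurs within the timeframe
    used to generate the response variable."""
    return float(model.predict_proba([latest_measurements])[0, 1])

def variable_importances(model, feature_names):
    """Relative effect of each sensor/actuator input on the output probability."""
    return dict(zip(feature_names, model.feature_importances_))
```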
  • the analytics system 108 may be configured to define a failure model based on one or more survival analysis techniques, such as a Cox proportional hazard technique.
  • the analytics system 108 may utilize a survival analysis technique in a manner similar in some respects to the above-discussed modeling technique, but the analytics system 108 may determine a survival time-response variable that indicates an amount of time from the last failure to a next expected event.
  • a next expected event may be either reception of sensor and/or actuator measurements or an occurrence of a failure, whichever occurs first.
  • This response variable may include a pair of values that are associated with each of the particular points in time at which measurements are received. The response variable may then be utilized to determine a probability that a failure will occur within the given timeframe in the future.
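  • As a sketch of the survival-analysis variant, the code below constructs the pair-valued response variable described above: for each point in time, the amount of time since the last failure and a flag indicating whether the event at that time was a failure or merely the reception of further measurements. A Cox proportional hazard model (not shown) could then be fit to these pairs; timestamps are assumed to be datetime objects.

```python
import pandas as pd

def survival_response_variable(measurement_times, failure_times):
    """For each point in time, pair the elapsed time since the last failure with
    a flag for whether the event at that time was a failure (1) or merely the
    reception of measurements (0). Times before the first failure have no elapsed value."""
    failure_set = set(failure_times)
    rows, last_failure = [], None
    for t in sorted(set(measurement_times) | failure_set):
        elapsed = (t - last_failure).total_seconds() if last_failure else None
        rows.append({"time": t,
                     "time_since_last_failure": elapsed,
                     "event": int(t in failure_set)})
        if t in failure_set:
            last_failure = t
    return pd.DataFrame(rows)
```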
  • the failure model may be defined based in part on external data, such as weather data and "hotbox" data, among other data. For instance, based on such data, the failure model may increase or decrease an output failure probability.
  • external data may be observed at points in time that do not coincide with times at which asset sensors and/or actuators obtain measurements.
  • for example, "hotbox" data may be collected at times at which a locomotive passes along a section of railroad track that is outfitted with hot box sensors, which may not coincide with the asset's measurement times.
  • the analytics system 108 may be configured to perform one or more operations to determine external data observations that would have been observed at times that correspond to the sensor measurement times.
  • the analytics system 108 may utilize the times of the external data observations and times of the measurements to interpolate the external data observations to produce external data values for times corresponding to the measurement times. Interpolation of the external data may allow external data observations or features derived therefrom to be included as inputs into the failure model. In practice, various techniques may be used to interpolate the external data with the sensor and/or actuator data, such as nearest-neighbor interpolation, linear interpolation, polynomial interpolation, and spline interpolation, among other examples.
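  • One hedged illustration of aligning external data with measurement times is linear interpolation with numpy, shown below; the timestamps and hotbox temperatures are made-up values, and nearest-neighbor, polynomial, or spline interpolation could be substituted.

```python
import numpy as np

def align_external_data(measurement_times, external_times, external_values):
    """Linearly interpolate external observations onto the asset's sensor
    measurement times so they can be used as failure-model inputs.
    Times are given as epoch seconds."""
    return np.interp(measurement_times, external_times, external_values)

# Hypothetical usage: hotbox temperatures observed at 0 s and 600 s,
# interpolated to measurement times at 0 s, 300 s, and 600 s.
aligned = align_external_data([0, 300, 600], [0, 600], [40.0, 44.0])
# aligned -> array([40., 42., 44.])
```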
  • the analytics system 108 may determine whether there are any remaining failures for which a failure model should be determined. In the event that there remains a failure for which a failure model should be determined, the analytics system 108 may repeat the loop of blocks 704 - 708 . In some implementations, the analytics system 108 may determine a single failure model that encompasses all of the failures defined at block 702 . In other implementations, the analytics system 108 may determine a failure model for each subsystem of the asset 102 , which may then be utilized to determine an asset-level failure model. Other examples are also possible.
  • the defined relationship for each failure in the defined set may then be combined into the model (e.g., the health-metric model) for predicting the overall likelihood of a failure occurring within the given timeframe in the future (e.g., the next two weeks). That is, the model receives as inputs sensor and/or actuator measurements from one or more sensors and/or actuators and outputs a single probability that at least one failure from the set of failures will occur within the given timeframe in the future.
  • the analytics system 108 may define the health-metric model in a number of manners, which may depend on the desired granularity of the health metric. That is, in instances where there are multiple failure models, the outcomes of the failure models may be utilized in a number of manners to obtain the output of the health-metric model. For example, the analytics system 108 may determine a maximum, median, or average from the multiple failure models and utilize that determined value as the output of the health-metric model.
  • determining the health-metric model may involve the analytics system 108 attributing a weight to individual probabilities output by the individual failure models. For instance, each failure from the set of failures may be considered equally undesirable, and so each probability may likewise be weighted the same in determining the health-metric model. In other instances, some failures may be considered more undesirable than others (e.g., more catastrophic or require longer repair time, etc.), and so those corresponding probabilities may be weighted more than others.
  • determining the health-metric model may involve the analytics system 108 utilizing one or more modeling techniques, such as a regression technique.
  • An aggregate response variable may take the form of the logical disjunction (logical OR) of the response variables (e.g., Yres in FIG. 8) from each of the individual failure models.
  • aggregate response variables associated with any set of measurements that occur within any timeframe determined at block 706 may have a value of one, while aggregate response variables associated with sets of measurements that occur outside any of the timeframes may have a value of zero.
  • Other manners of defining the health-metric model are also possible.
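  • The sketch below illustrates two of the aggregation options described above: combining individual failure probabilities into a single health-metric-model output (maximum, average, or weighted combination, with hypothetical weights), and forming the aggregate response variable as the logical OR of the individual models' response variables.

```python
def health_metric_model(individual_probabilities, weights=None, method="weighted"):
    """Combine individual failure-model outputs into a single probability that
    at least one failure occurs within the given timeframe.
    `individual_probabilities` maps failure name -> probability in [0, 1];
    `weights` (hypothetical) maps failure name -> relative undesirability."""
    probs = individual_probabilities
    if method == "max":
        return max(probs.values())
    if method == "average":
        return sum(probs.values()) / len(probs)
    if weights is None:
        weights = {name: 1.0 for name in probs}  # equally undesirable failures
    total = sum(weights.values())
    return sum(weights[name] * p for name, p in probs.items()) / total

def aggregate_response(individual_responses):
    """Logical OR of the individual failure models' response variables:
    1 if the measurements fall within any failure's timeframe, else 0."""
    return [int(any(values)) for values in zip(*individual_responses)]
```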
  • block 710 may be unnecessary.
  • the analytics system 108 may determine a single failure model, in which case the health-metric model may be the single failure model.
  • the analytics system 108 may be configured to update the individual failure models and/or the overall health-metric model.
  • the analytics system 108 may update a model daily, weekly, monthly, etc. and may do so based on a new portion of historical operating data from the asset 102 or from other assets (e.g., from other assets in the same fleet as the asset 102 ).
  • Other examples are also possible.
  • the analytics system 108 may deploy the defined model-workflow pair to one or more assets. Specifically, the analytics system 108 may transmit the defined predictive model and/or corresponding workflow to at least one asset, such as the asset 102 . The analytics system 108 may transmit model-workflow pairs periodically or based on triggering events, such as any modifications or updates to a given model-workflow pair.
  • the analytics system 108 may transmit only one of an individualized model or an individualized workflow. For example, in scenarios where the analytics system 108 defined only an individualized model or workflow, the analytics system 108 may transmit an aggregate version of the workflow or model along with the individualized model or workflow, or the analytics system 108 may not need to transmit an aggregate version if the asset 102 already has the aggregate version stored in data storage. In sum, the analytics system 108 may transmit (1) an individualized model and/or individualized workflow, (2) an individualized model and the aggregate workflow, (3) the aggregate model and an individualized workflow, or (4) the aggregate model and the aggregate workflow.
  • the analytics system 108 may have carried out some or all of the operations of blocks 702 - 710 of FIG. 7 for multiple assets to define model-workflow pairs for each asset.
  • the analytics system 108 may have additionally defined a model-workflow pair for the asset 104 .
  • the analytics system 108 may be configured to transmit respective model-workflow pairs to the assets 102 and 104 simultaneously or sequentially.
  • after a given asset, such as the asset 102, receives a model-workflow pair, the asset may operate in accordance with that pair.
  • each asset may include a local analytics device configured to store and run model-workflow pairs provided by the analytics system 108 .
  • when the local analytics device receives particular sensor and/or actuator data, it may input the received data into the predictive model and, depending on the output of the model, may execute one or more operations of the corresponding workflow.
  • a central processing unit of the asset 102 may execute the predictive model and/or corresponding workflow.
  • the local analytics device and central processing unit of the asset 102 may collaboratively execute the model-workflow pair. For instance, the local analytics device may execute the predictive model and the central processing unit may execute the workflow, or vice versa.
  • an asset executing a predictive model and, based on the resulting output, executing operations of the workflow may facilitate determining a cause or causes of the likelihood of a particular event occurring that is output by the model and/or may facilitate preventing a particular event from occurring in the future.
  • an asset may locally determine and take actions to help prevent an event from occurring, which may be beneficial in situations when reliance on the analytics system 108 to make such determinations and provide recommended actions is not efficient or feasible (e.g., when there is network latency, when network connection is poor, when the asset moves out of coverage of the communication network 106 , etc.).
  • FIG. 9 is a flow diagram 900 depicting one possible example of a local-execution phase that may be used for locally executing a predictive model.
  • the example local-execution phase will be discussed in the context of a health-metric model that outputs a health metric of an asset, but it should be understood that a same or similar local-execution phase may be utilized for other types of predictive models.
  • the example local-execution phase is described as being carried out by a local analytics device of the asset 102 , but this phase may be carried out by other devices and/or systems as well.
  • the flow diagram 900 is provided for the sake of clarity and explanation; numerous other combinations of operations and functions may be utilized to locally execute a predictive model.
  • the local analytics device may receive data that reflects the current operating conditions of the asset 102 .
  • the local analytics device may identify, from the received data, the set of operating data that is to be input into the model provided by the analytics system 108 .
  • the local analytics device may then input the identified set of operating data into the model and run the model to obtain a health metric for the asset 102 .
  • the local analytics device may also continue to update the health metric for the asset 102 by repeating the operations of blocks 902 - 906 based on the updated operating data.
  • the operations of blocks 902 - 906 may be repeated each time the local analytics device receives new data from sensors and/or actuators of the asset 102 or periodically (e.g., hourly, daily, weekly, monthly, etc.). In this way, local analytics devices may be configured to dynamically update health metrics, perhaps in real-time, as assets are used in operation.
  • the local analytics device may receive data that reflects the current operating conditions of the asset 102 .
  • data may include sensor data from one or more of the sensors of the asset 102 , actuator data from one or more actuators of the asset 102 , and/or it may include abnormal-condition data, among other types of data.
  • the local analytics device may identify, from the received data, the set of operating data that is to be input into the health-metric model provided by the analytics system 108 . This operation may be performed in a number of manners.
  • the local analytics device may identify the set of operating data inputs (e.g., data from particular sensors and/or actuators of interest) for the model based on a characteristic of the asset 102 , such as asset type or asset class, for which the health metric is being determined.
  • the identified set of operating data inputs may be sensor data from some or all of the sensors of the asset 102 and/or actuator data from some or all of the actuators of the asset 102.
  • the local analytics device may identify the set of operating data inputs based on the predictive model provided by the analytics system 108 . That is, the analytics system 108 may provide some indication to the asset 102 (e.g., either in the predictive model or in a separate data transmission) of the particular inputs for the model. Other examples of identifying the set of operating data inputs are also possible.
  • the local analytics device may then run the health-metric model. Specifically, the local analytics device may input the identified set of operating data into the model, which in turn determines and outputs an overall likelihood of at least one failure occurring within the given timeframe in the future (e.g., the next two weeks).
  • this operation may involve the local analytics device inputting particular operating data (e.g., sensor and/or actuator data) into one or more individual failure models of the health-metric model, which each may output an individual probability.
  • the local analytics device may then use these individual probabilities, perhaps weighting some more than others in accordance with the health-metric model, to determine the overall likelihood of a failure occurring within the given timeframe in the future.
  • the local analytics device may convert the probability of a failure occurring into the health metric that may take the form of a single, aggregated parameter that reflects the likelihood that no failures will occur at the asset 102 within the given timeframe in the future (e.g., two weeks).
  • converting the failure probability into the health metric may involve the local analytics device determining the complement of the failure probability.
  • the overall failure probability may take the form of a value ranging from zero to one; the health metric may be determined by subtracting that value from one.
  • Other examples of converting the failure probability into the health metric are also possible.
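  • Putting blocks 902 through 906 together, the following is a minimal sketch of a local-execution loop on the local analytics device; every name here is hypothetical, and the failure model is passed in as a callable that maps an input vector to the overall failure probability, whose complement is reported as the health metric.

```python
import time

def run_local_health_metric_loop(read_operating_data, model_inputs,
                                 failure_model, publish_health_metric,
                                 poll_seconds=60):
    """Minimal local-execution loop for blocks 902-906; all names are hypothetical.
    `read_operating_data` returns the latest measurements keyed by sensor or
    actuator name; `failure_model` maps an input vector to the overall failure
    probability for the given timeframe."""
    while True:
        data = read_operating_data()                  # block 902: receive operating data
        x = [data[name] for name in model_inputs]     # block 904: identify model inputs
        failure_prob = failure_model(x)               # block 906: run the health-metric model
        health_metric = 1.0 - failure_prob            # complement of the failure probability
        publish_health_metric(health_metric)
        time.sleep(poll_seconds)
```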
  • workflows may take various forms, and so workflows may be executed in a variety of manners.
  • the asset 102 may internally execute one or more operations that modify some behavior of the asset 102 , such as modifying a data-acquisition and/or -transmission scheme, executing a local diagnostic tool, modifying an operating condition of the asset 102 (e.g., modifying a velocity, acceleration, fan speed, propeller angle, air intake, etc. via one or more actuators of the asset 102 ), or outputting an indication, perhaps of a relatively low health metric or of recommended preventative actions that should be executed in relation to the asset 102 , at a user interface of the asset 102 or to an external computing system.
  • the asset 102 may transmit to a system on the communication network 106 , such as the output system 110 , an instruction to cause the system to carry out an operation, such as generating a work-order or ordering a particular part for a repair of the asset 102 .
  • Other examples of the asset 102 locally executing a workflow are also possible.
  • the analytics system 108 may carry out a modification phase during which the analytics system 108 modifies a deployed model and/or workflow based on new asset data. This phase may be performed for both aggregate and individualized models and workflows.
  • the asset 102 may provide operating data to the analytics system 108 and/or the data source 112 may provide to the analytics system 108 external data related to the asset 102 .
  • the analytics system 108 may modify the model and/or workflow for the asset 102 and/or the model and/or workflow for other assets, such as the asset 104 .
  • the analytics system 108 may share information learned from the behavior of the asset 102 .
  • FIG. 10 is a flow diagram 1000 depicting one possible example of a modification phase that may be used for modifying model-workflow pairs.
  • the example modification phase is described as being carried out by the analytics system 108 , but this modification phase may be carried out by other systems as well.
  • the flow diagram 1000 is provided for the sake of clarity and explanation; numerous other combinations of operations may be utilized to modify model-workflow pairs.
  • the analytics system 108 may receive data from which the analytics system 108 identifies an occurrence of a particular event.
  • the data may be operating data originating from the asset 102 or external data related to the asset 102 from the data source 112 , among other data.
  • the event may take the form of any of the events discussed above, such as a failure at the asset 102 .
  • the event may take the form of a new component or subsystem being added to the asset 102 .
  • Another event may take the form of a “leading indicator” event, which may involve sensors and/or actuators of the asset 102 generating data that differs, perhaps by a threshold differential, from the data identified at block 706 of FIG. 7 during the model-definition phase. This difference may indicate that the asset 102 has operating conditions that are above or below normal operating conditions for assets similar to the asset 102 .
  • Yet another event may take the form of an event that is followed by one or more leading indicator events.
  • the analytics system 108 may then modify the aggregate, predictive model and/or workflow and/or one or more individualized predictive models and/or workflows.
  • the analytics system 108 may determine whether to modify the aggregate, predictive model.
  • the analytics system 108 may determine to modify the aggregate, predictive model for a number of reasons.
  • the analytics system 108 may modify the aggregate, predictive model if the identified occurrence of the particular event was the first occurrence of this particular event for a plurality of assets including the asset 102 , such as the first time a particular failure occurred at an asset from a fleet of assets or the first time a particular new component was added to an asset from a fleet of assets.
  • the analytics system 108 may make a modification if data associated with the identified occurrence of the particular event is different from data that was utilized to originally define the aggregate model. For instance, the identified occurrence of the particular event may have occurred under operating conditions that had not previously been associated with an occurrence of the particular event (e.g., a particular failure might have occurred with associated sensor values not previously measured before with the particular failure). Other reasons for modifying the aggregate model are also possible.
  • if the analytics system 108 determines to modify the aggregate, predictive model, the analytics system 108 may do so at block 1006. Otherwise, the analytics system 108 may proceed to block 1008.
  • the analytics system 108 may modify the aggregate model based at least in part on the data related to the asset 102 that was received at block 1002 .
  • the aggregate model may be modified in various manners, such as any manner discussed above with reference to block 510 of FIG. 5 .
  • the aggregate model may be modified in other manners as well.
  • the analytics system 108 may then determine whether to modify the aggregate workflow.
  • the analytics system 108 may modify the aggregate workflow for a number of reasons.
  • the analytics system 108 may modify the aggregate workflow based on whether the aggregate model was modified at block 1004 and/or if there was some other change at the analytics system 108 .
  • the analytics system 108 may modify the aggregate workflow if the identified occurrence of the event at block 1002 occurred despite the asset 102 executing the aggregate workflow. For instance, if the workflow was aimed to help facilitate preventing the occurrence of the event (e.g., a failure) and the workflow was executed properly but the event still occurred nonetheless, then the analytics system 108 may modify the aggregate workflow. Other reasons for modifying the aggregate workflow are also possible.
  • if the analytics system 108 determines to modify the aggregate workflow, the analytics system 108 may do so at block 1010. Otherwise, the analytics system 108 may proceed to block 1012.
  • the analytics system 108 may modify the aggregate workflow based at least in part on the data related to the asset 102 that was received at block 1002 .
  • the aggregate workflow may be modified in various manners, such as any manner discussed above with reference to block 514 of FIG. 5 .
  • the aggregate workflow may be modified in other manners as well.
  • the analytics system 108 may be configured to modify one or more individualized models (e.g., for each of assets 102 and 104 ) and/or one or more individualized workflows (e.g., for one of asset 102 or asset 104 ) based at least in part on the data related to the asset 102 that was received at block 1002 .
  • the analytics system 108 may do so in a manner similar to blocks 1004 - 1010 .
  • the reasons for modifying an individualized model or workflow may differ from the reasons for the aggregate case.
  • the analytics system 108 may further consider the underlying asset characteristics that were utilized to define the individualized model and/or workflow in the first place.
  • the analytics system 108 may modify an individualized model and/or workflow if the identified occurrence of the particular event was the first occurrence of this particular event for assets with asset characteristics of the asset 102 .
  • Other reasons for modifying an individualized model and/or workflow are also possible.
  • FIG. 6D is a conceptual illustration of a modified model-workflow pair 630 .
  • the model-workflow pair illustration 630 is a modified version of the aggregate model-workflow pair from FIG. 6A .
  • the modified model-workflow pair illustration 630 includes the original column for model inputs 602 from FIG. 6A and includes modified columns for model calculations 634 , model output ranges 636 , and workflow operations 638 .
  • the modified predictive model has a single input, data from Sensor A, and has two calculations, Calculations I and III. If the output probability of the modified model is less than 75%, then workflow Operation 1 is performed. If the output probability is between 75% and 85%, then workflow Operation 2 is performed. And if the output probability is greater than 85%, then workflow Operation 3 is performed.
  • Other example modified model-workflow pairs are possible and contemplated herein.
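  • As an illustration of how the modified model output ranges of FIG. 6D might drive workflow selection, the sketch below maps an output probability to Operation 1, 2, or 3 using the thresholds described above; how the boundary values are assigned is an arbitrary choice for this sketch.

```python
def select_workflow_operation(output_probability):
    """Map the modified model's output probability to a workflow operation,
    using the ranges described for FIG. 6D."""
    if output_probability < 0.75:
        return "Operation 1"
    elif output_probability <= 0.85:
        return "Operation 2"
    else:
        return "Operation 3"
```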
  • the analytics system 108 may then transmit any model and/or workflow modifications to one or more assets.
  • the analytics system 108 may transmit a modified individualized model-workflow pair to the asset 102 (e.g., the asset whose data caused the modification) and a modified aggregate model to the asset 104 .
  • the analytics system 108 may dynamically modify models and/or workflows based on data associated with the operation of the asset 102 and distribute such modifications to multiple assets, such as the fleet to which the asset 102 belongs. Accordingly, other assets may benefit from the data originating from the asset 102 in that the other assets' local model-workflow pairs may be refined based on such data, thereby helping to create more accurate and robust model-workflow pairs.
  • the asset 102 and/or the analytics system 108 may be configured to dynamically adjust executing a model-workflow pair.
  • the asset 102 and/or the analytics system 108 may be configured to detect certain events that trigger a change in responsibilities with respect to whether the asset 102 and/or the analytics system 108 should be executing the predictive model and/or workflow.
  • both the asset 102 and the analytics system 108 may execute all or a part of a model-workflow pair on behalf of the asset 102 .
  • the asset 102 may store the model-workflow pair in data storage but then may rely on the analytics system 108 to centrally execute part or all of the model-workflow pair.
  • the asset 102 may provide at least sensor and/or actuator data to the analytics system 108 , which may then use such data to centrally execute a predictive model for the asset 102 .
  • the analytics system 108 may then execute the corresponding workflow or the analytics system 108 may transmit to the asset 102 the output of the model or an instruction for the asset 102 to locally execute the workflow.
  • the analytics system 108 may rely on the asset 102 to locally execute part or all of the model-workflow pair. Specifically, the asset 102 may locally execute part or all of the predictive model and transmit results to the analytics system 108 , which may then cause the analytics system 108 to centrally execute the corresponding workflow. Or the asset 102 may also locally execute the corresponding workflow.
  • the analytics system 108 and the asset 102 may share in the responsibilities of executing the model-workflow pair.
  • the analytics system 108 may centrally execute portions of the model and/or workflow, while the asset 102 locally executes the other portions of the model and/or workflow.
  • the asset 102 and analytics system 108 may transmit results from their respective executed responsibilities.
  • Other examples are also possible.
  • the asset 102 and/or the analytics system 108 may determine that the execution of the model-workflow pair should be adjusted. That is, one or both may determine that the execution responsibilities should be modified. This operation may occur in a variety of manners.
  • FIG. 11 is a flow diagram 1100 depicting one possible example of an adjustment phase that may be used for adjusting execution of a model-workflow pair.
  • the example adjustment phase is described as being carried out by the asset 102 and/or the analytics system 108, but this adjustment phase may be carried out by other systems as well.
  • the flow diagram 1100 is provided for the sake of clarity and explanation; numerous other combinations of operations may be utilized to adjust the execution of a model-workflow pair.
  • the asset 102 and/or the analytics system 108 may detect an adjustment factor (or potentially multiple) that indicates conditions that require an adjustment to the execution of the model-workflow pair.
  • Examples of such conditions include network conditions of the communication network 106 or processing conditions of the asset 102 and/or analytics system 108 , among other examples.
  • Example network conditions may include network latency, network bandwidth, signal strength of a link between the asset 102 and the communication network 106 , or some other indication of network performance, among other examples.
  • Example processing conditions may include processing capacity (e.g., available processing power), processing usage (e.g., amount of processing power being consumed) or some other indication of processing capabilities, among other examples.
  • detecting an adjustment factor may be performed in a variety of manners. For example, this operation may involve determining whether network (or processing) conditions reach one or more threshold values or whether conditions have changed in a certain manner. Other examples of detecting an adjustment factor are also possible.
  • detecting an adjustment factor may involve the asset 102 and/or the analytics system 108 detecting an indication that a signal strength of a communication link between the asset 102 and the analytics system 108 is below a threshold signal strength or has been decreasing at a certain rate of change.
  • the adjustment factor may indicate that the asset 102 is about to go “off-line.”
  • detecting an adjustment factor may additionally or alternatively involve the asset 102 and/or the analytics system 108 detecting an indication that network latency is above a threshold latency or has been increasing at a certain rate of change. Or the indication may be that a network bandwidth is below a threshold bandwidth or has been decreasing at a certain rate of change. In these examples, the adjustment factor may indicate that the communication network 106 is lagging.
  • detecting an adjustment factor may additionally or alternatively involve the asset 102 and/or the analytics system 108 detecting an indication that processing capacity is below a particular threshold or has been decreasing at a certain rate of change and/or that processing usage is above a threshold value or increasing at a certain rate of change.
  • the adjustment factor may indicate that processing capabilities of the asset 102 (and/or the analytics system 108 ) are low.
  • Other examples of detecting an adjustment factor are also possible.
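  • A minimal sketch of detecting adjustment factors is shown below; the threshold values are hypothetical and would in practice depend on the asset, the communication network 106, and the processing capabilities involved.

```python
# Hypothetical thresholds; real values would depend on the deployment.
MIN_SIGNAL_DBM = -90          # below this, the asset may be about to go "off-line"
MAX_LATENCY_MS = 500          # above this, the communication network is lagging
MIN_FREE_CPU_FRACTION = 0.2   # below this, local processing capacity is low

def detect_adjustment_factors(signal_dbm, latency_ms, free_cpu_fraction):
    """Return the adjustment factors, if any, indicating that execution
    responsibilities for the model-workflow pair should change."""
    factors = []
    if signal_dbm < MIN_SIGNAL_DBM:
        factors.append("weak_signal")
    if latency_ms > MAX_LATENCY_MS:
        factors.append("high_latency")
    if free_cpu_fraction < MIN_FREE_CPU_FRACTION:
        factors.append("low_processing_capacity")
    return factors
```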
  • the local execution responsibilities may be adjusted, which may occur in a number of manners.
  • the asset 102 may have detected the adjustment factor and then determined to locally execute the model-workflow pair or a portion thereof.
  • the asset 102 may then transmit to the analytics system 108 a notification that the asset 102 is locally executing the predictive model and/or workflow.
  • the analytics system 108 may have detected the adjustment factor and then transmitted an instruction to the asset 102 to cause the asset 102 to locally execute the model-workflow pair or a portion thereof. Based on the instruction, the asset 102 may then locally execute the model-workflow pair.
  • the central execution responsibilities may be adjusted, which may occur in a number of manners.
  • the central execution responsibilities may be adjusted based on the analytics system 108 detecting an indication that the asset 102 is locally executing the predictive model and/or the workflow.
  • the analytics system 108 may detect such an indication in a variety of manners.
  • the analytics system 108 may detect the indication by receiving from the asset 102 a notification that the asset 102 is locally executing the predictive model and/or workflow.
  • the notification may take various forms, such as binary or textual, and may identify the particular predictive model and/or workflow that the asset is locally executing.
  • the analytics system 108 may detect the indication based on received operating data for the asset 102 . Specifically, detecting the indication may involve the analytics system 108 receiving operating data for the asset 102 and then detecting one or more characteristics of the received data. From the one or more detected characteristics of the received data, the analytics system 108 may infer that the asset 102 is locally executing the predictive model and/or workflow.
  • detecting the one or more characteristics of the received data may be performed in a variety of manners.
  • the analytics system 108 may detect a type of the received data.
  • the analytics system 108 may detect a source of the data, such as a particular sensor or actuator that generated sensor or actuator data.
  • based on the detected type and/or source of the received data, the analytics system 108 may infer that the asset 102 is locally executing the predictive model and/or workflow.
  • the analytics system 108 may infer that the asset 102 is locally executing a predictive model and corresponding workflow that causes the asset 102 to acquire data from the particular sensor and transmit that data to the analytics system 108.
  • the analytics system 108 may detect an amount of the received data. The analytics system 108 may compare that amount to a certain threshold amount of data. Based on the amount reaching the threshold amount, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and/or workflow that causes the asset 102 to acquire an amount of data equivalent to or greater than the threshold amount. Other examples are also possible.
  • detecting the one or more characteristics of the received data may involve the analytics system 108 detecting a certain change in one or more characteristics of the received data, such as a change in the type of the received data, a change in the amount of data that is received, or a change in the frequency at which data is received.
  • a change in the type of the received data may involve the analytics system 108 detecting a change in the source of sensor data that it is receiving (e.g., a change in sensors and/or actuators that are generating the data provided to the analytics system 108 ).
  • detecting a change in the received data may involve the analytics system 108 comparing recently received data to data received in the past (e.g., an hour, day, week, etc. before a present time). In any event, based on detecting the change in the one or more characteristics of the received data, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and/or workflow that causes such a change to the data provided by the asset 102 to the analytics system 108 .
  • the analytics system 108 may detect an indication that the asset 102 is locally executing the predictive model and/or the workflow based on detecting the adjustment factor at block 1102 . For example, in the event that the analytics system 108 detects the adjustment factor at block 1102 , the analytics system 108 may then transmit to the asset 102 instructions that cause the asset 102 to adjust its local execution responsibilities and accordingly, the analytics system 108 may adjust its own central execution responsibilities. Other examples of detecting the indication are also possible.
  • the central execution responsibilities may be adjusted in accordance with the adjustment to the local execution responsibilities. For instance, if the asset 102 is now locally executing the predictive model, then the analytics system 108 may accordingly cease centrally executing the predictive model (and may or may not cease centrally executing the corresponding workflow). Further, if the asset 102 is locally executing the corresponding workflow, then the analytics system 108 may accordingly cease executing the workflow (and may or may not cease centrally executing the predictive model). Other examples are also possible.
  • the asset 102 and/or the analytics system 108 may continuously perform the operations of blocks 1102 - 1106 . And at times, the local and central execution responsibilities may be adjusted to facilitate optimizing the execution of model-workflow pairs.
  • the asset 102 and/or the analytics system 108 may perform other operations based on detecting an adjustment factor. For example, based on a condition of the communication network 106 (e.g., bandwidth, latency, signal strength, or another indication of network quality), the asset 102 may locally execute a particular workflow.
  • the particular workflow may be provided by the analytics system 108 based on the analytics system 108 detecting the condition of the communication network, may be already stored on the asset 102 , or may be a modified version of a workflow already stored on the asset 102 (e.g., the asset 102 may locally modify a workflow).
  • the particular workflow may include a data-acquisition scheme that increases or decreases a sampling rate and/or a data-transmission scheme that increases or decreases a transmission rate or amount of data transmitted to the analytics system 108 , among other possible workflow operations.
  • the asset 102 may determine that one or more detected conditions of the communication network have reached respective thresholds (e.g., indicating poor network quality). Based on such a determination, the asset 102 may locally execute a workflow that includes transmitting data according to a data-transmission scheme that reduces the amount and/or frequency of data the asset 102 transmits to the analytics system 108 . Other examples are also possible.
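  • As one hedged example of such a workflow, the sketch below chooses a data-transmission scheme based on detected network conditions, reducing the amount and frequency of transmitted data when the network appears to be of poor quality; the thresholds and scheme parameters are placeholders.

```python
def choose_transmission_scheme(latency_ms, bandwidth_kbps):
    """Pick a data-transmission scheme from detected network conditions.
    Returns (transmit_interval_seconds, max_payload_kb); thresholds and
    scheme parameters are placeholders."""
    poor_network = latency_ms > 500 or bandwidth_kbps < 64
    if poor_network:
        # Reduce the amount and frequency of data sent to the analytics system.
        return 300, 16
    return 30, 256
```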
  • in FIG. 12, a flow diagram is depicted illustrating an example method 1200 for defining and deploying an aggregate, predictive model and corresponding workflow that may be performed by the analytics system 108.
  • the operations illustrated by the blocks in the flow diagrams may be performed in line with the above discussion.
  • one or more operations discussed above may be added to a given flow diagram.
  • the method 1200 may involve the analytics system 108 receiving respective operating data for a plurality of assets (e.g., the assets 102 and 104 ).
  • the method 1200 may involve the analytics system 108 , based on the received operating data, defining a predictive model and a corresponding workflow (e.g., a failure model and corresponding workflow) that are related to the operation of the plurality of assets.
  • the method 1200 may involve the analytics system 108 transmitting to at least one asset of the plurality of assets (e.g., the asset 102 ) the predictive model and the corresponding workflow for local execution by the at least one asset.
  • FIG. 13 depicts a flow diagram of an example method 1300 for defining and deploying an individualized, predictive model and/or corresponding workflow that may be performed by the analytics system 108 .
  • the method 1300 may involve the analytics system 108 receiving operating data for a plurality of assets, where the plurality of assets includes at least a first asset (e.g., the asset 102 ).
  • the method 1300 may involve the analytics system 108 , based on the received operating data, defining an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets.
  • the method 1300 may involve the analytics system 108 determining one or more characteristics of the first asset.
  • the method 1300 may involve the analytics system 108 , based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, defining at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset.
  • the method 1300 may involve the analytics system 108 transmitting to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • FIG. 14 depicts a flow diagram of an example method 1400 for dynamically modifying the execution of model-workflow pairs that may be performed by the analytics system 108 .
  • the method 1400 may involve the analytics system 108 transmitting to an asset (e.g., the asset 102 ) a predictive model and corresponding workflow that are related to the operation of the asset for local execution by the asset.
  • the method 1400 may involve the analytics system 108 detecting an indication that the asset is locally executing at least one of the predictive model or the corresponding workflow.
  • the method 1400 may involve the analytics system 108 , based on the detected indication, modifying central execution by the computing system of at least one of the predictive model or the corresponding workflow.
  • another method for dynamically modifying the execution of model-workflow pairs may be performed by an asset (e.g., the asset 102 ).
  • such a method may involve the asset 102 receiving from a central computing system (e.g., the analytics system 108 ) a predictive model and corresponding workflow that are related to the operation of the asset 102 .
  • the method may also involve the asset 102 detecting an adjustment factor indicating one or more conditions associated with adjusting execution of the predictive model and the corresponding workflow.
  • the method may involve, based on the detected adjustment factor, (i) modifying local execution by the asset 102 of at least one of the predictive model or the corresponding workflow and (ii) transmitting to the central computing system an indication that the asset 102 is locally executing the at least one of the predictive model or the corresponding workflow to facilitate causing the central computing system to modify central execution by the computing system of at least one of the predictive model or the corresponding workflow.

Abstract

Disclosed herein are systems, devices, and methods related to assets and predictive models and corresponding workflows that are related to the operation of assets. In particular, examples involve defining and deploying aggregate, predictive models and corresponding workflows, defining and deploying individualized, predictive models and/or corresponding workflows, and dynamically adjusting the execution of model-workflow pairs.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application incorporates by reference U.S. Non-Provisional patent application Ser. No. 14/732,258, filed on Jun. 5, 2015, entitled Asset Health Score in its entirety. This application is also related to the following applications filed on the same day as the present application, each of which is incorporated by reference in its entirety: U.S. Non-Provisional patent application Ser. No. ______ (Attorney Docket No. Uptake-00012), entitled Aggregate Predictive Model & Workflow for Local Execution; and U.S. Non-Provisional patent application Ser. No. ______ (Attorney Docket No. Uptake-00013), entitled Dynamic Execution of Predictive Models & Workflows.
  • BACKGROUND
  • Today, machines (also referred to herein as “assets”) are ubiquitous in many industries. From locomotives that transfer cargo across countries to medical equipment that helps nurses and doctors to save lives, assets serve an important role in everyday life. Depending on the role that an asset serves, its complexity, and cost, may vary. For instance, some assets may include multiple subsystems that must operate in harmony for the asset to function properly (e.g., an engine, transmission, etc. of a locomotive).
  • Because of the key role that assets play in everyday life, it is desirable for assets to be repairable with limited downtime. Accordingly, some have developed mechanisms to monitor and detect abnormal conditions within an asset to facilitate repairing the asset, perhaps with minimal downtime.
  • OVERVIEW
  • The current approach for monitoring assets generally involves an on-asset computer that receives signals from various sensors and/or actuators distributed throughout an asset that monitor the operating conditions of the asset. As one representative example, if the asset is a locomotive, the sensors and/or actuators may monitor parameters such as temperatures, voltages, and speeds, among other examples. If sensor and/or actuator signals from one or more of these devices reach certain values, the on-asset computer may then generate an abnormal-condition indicator, such as a “fault code,” which is an indication that an abnormal condition has occurred within the asset.
  • In general, an abnormal condition may be a defect at an asset or component thereof, which may lead to a failure of the asset and/or component. As such, an abnormal condition may be associated with a given failure, or perhaps multiple failures, in that the abnormal condition is symptomatic of the given failure or failures. In practice, a user typically defines the sensors and respective sensor values associated with each abnormal-condition indicator. That is, the user defines an asset's “normal” operating conditions (e.g., those that do not trigger fault codes) and “abnormal” operating conditions (e.g., those that trigger fault codes).
  • After the on-asset computer generates an abnormal-condition indicator, the indicator and/or sensor signals may be passed to a remote location where a user may receive some indication of the abnormal condition and/or sensor signals and decide whether to take action. One action that the user might take is to assign a mechanic or the like to evaluate and potentially repair the asset. Once at the asset, the mechanic may connect a computing device to the asset and operate the computing device to cause the asset to utilize one or more local diagnostic tools to facilitate diagnosing the cause of the generated indicator.
  • While current asset-monitoring systems are generally effective at triggering abnormal-condition indicators, such systems are typically reactionary. That is, by the time the asset-monitoring system triggers an indicator, a failure within the asset may have already occurred (or is about to occur), which may lead to costly downtime, among other disadvantages. Additionally, due to the simplistic nature of on-asset abnormality-detection mechanisms in such asset-monitoring systems, current asset-monitoring approaches tend to involve a remote computing system performing monitoring computations for an asset and then transmitting instructions to the asset if a problem is detected. This may be disadvantageous due to network latency and/or infeasible when the asset moves outside of coverage of a communication network. Further still, due to the nature of local diagnostic tools stored on assets, current diagnosis procedures tend to be inefficient and cumbersome because a mechanic is required to cause the asset to utilize such tools.
  • The example systems, devices, and methods disclosed herein seek to help address one or more of these issues. In example implementations, a network configuration may include a communication network that facilitates communications between assets and a remote computing system. In practice, the communication network may facilitate secure communications between assets and the remote computing system (e.g., via encryption or other security measures).
  • As noted above, each asset may include multiple sensors and/or actuators distributed throughout the asset that facilitate monitoring operating conditions of the asset. A number of assets may provide respective data indicative of each asset's operating conditions to the remote computing system, which may be configured to perform one or more operations based on the provided data.
  • In example implementations, the remote computing system may be configured to define and deploy to assets a predictive model and corresponding workflow (referred to herein as a “model-workflow pair”) that are related to the operation of the assets. The assets may be configured to receive the model-workflow pair and utilize a local analytics device to operate in accordance with the model-workflow pair.
  • Generally, a model-workflow pair may cause an asset to monitor certain operating conditions and when certain conditions exist, modify a behavior that may help facilitate preventing an occurrence of a particular event. Specifically, a predictive model may receive as inputs sensor data from a particular set of asset sensors and output a likelihood that one or more particular events could occur at the asset within a particular period of time in the future. A workflow may involve one or more operations that are performed based on the likelihood of the one or more particular events that is output by the model.
  • In practice, the remote computing system may define an aggregate, predictive model and corresponding workflows, individualized, predictive models and corresponding workflows, or some combination thereof. An “aggregate” model/workflow may refer to a model/workflow that is generic for a group of assets, while an “individualized” model/workflow may refer to a model/workflow that is tailored for a single asset or subgroup of assets from the group of assets.
  • In example implementations, the remote computing system may start by defining an aggregate, predictive model based on historical data for multiple assets. Utilizing data for multiple assets may facilitate defining a more accurate predictive model than utilizing operating data for a single asset.
  • The historical data that forms the basis of the aggregate model may include at least operating data that indicates operating conditions of a given asset. Specifically, operating data may include abnormal-condition data identifying instances when failures occurred at assets and/or sensor data indicating one or more physical properties measured at the assets at the time of those instances. The data may also include environment data indicating environments in which assets have been operated and scheduling data indicating dates and times when assets were utilized, among other examples of asset-related data used to define the aggregate model-workflow pair.
  • Based on the historical data, the remote computing system may define an aggregate model that predicts the occurrence of particular events. In a particular example implementation, an aggregate model may output a probability that a failure will occur at an asset within a particular period of time in the future. Such a model may be referred to herein as a “failure model.” Other aggregate models may predict the likelihood that an asset will complete a task within a particular period of time in the future, among other example predictive models.
  • After defining the aggregate model, the remote computing system may then define an aggregate workflow that corresponds to the defined aggregate model. Generally, a workflow may include one or more operations that an asset may perform based on a corresponding model. That is, the output of the corresponding model may cause the asset to perform workflow operations. For instance, an aggregate model-workflow pair may be defined such that when the aggregate model outputs a probability within a particular range an asset will execute a particular workflow operation, such as a local diagnostic tool.
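  • As an illustration of such a pairing, the short sketch below maps a range of model output probabilities to a workflow operation such as executing a local diagnostic tool. The range bounds and the diagnostic operation are hypothetical examples, not values from the disclosure.

```python
# Illustrative aggregate workflow: when the model's output probability falls
# within a configured range, the asset executes the associated operation.

def run_local_diagnostic():
    print("Executing local diagnostic tool...")

AGGREGATE_WORKFLOW = {
    # (lower bound, upper bound) of model output -> operation to perform
    (0.30, 0.70): run_local_diagnostic,
}

def execute_workflow(probability: float) -> None:
    for (low, high), operation in AGGREGATE_WORKFLOW.items():
        if low <= probability < high:
            operation()

execute_workflow(0.55)   # falls within the configured range, so the tool runs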
  • After the aggregate model-workflow pair is defined, the remote computing system may transmit the pair to one or more assets. The one or more assets may then operate in accordance with the aggregate model-workflow pair.
  • In example implementations, the remote computing system may be configured to further define an individualized predictive model and/or corresponding workflow for one or multiple assets. The remote computing system may do so based on certain characteristics of each given asset, among other considerations. In example implementations, the remote computing system may start with an aggregate model-workflow pair as a baseline and individualize one or both of the aggregate model and workflow for the given asset based on the asset's characteristics.
  • In practice, the remote computing system may be configured to determine asset characteristics that are related to the aggregate model-workflow pair (e.g., characteristics of interest). Examples of such characteristics may include asset age, asset usage, asset class (e.g., brand and/or model), asset health, and environment in which an asset is operated, among other characteristics.
  • Then, the remote computing system may determine characteristics of the given asset that correspond to the characteristics of interest. Based at least on some of the given asset's characteristics, the remote computing system may be configured to individualize the aggregate model and/or corresponding workflow.
  • Defining an individualized model and/or workflow may involve the remote computing system making certain modifications to the aggregate model and/or workflow. For example, individualizing the aggregate model may involve changing model inputs, changing a model calculation, and/or changing a weight of a variable or output of a calculation, among other examples. Individualizing the aggregate workflow may involve changing one or more operations of the workflow and/or changing the model output value or range of values that triggers the workflow, among other examples.
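  • One way to picture this individualization step is sketched below: the aggregate pair serves as the baseline, and a model weight and the workflow's trigger range are adjusted based on asset characteristics. The characteristic names and adjustment rules are invented for illustration.

```python
# Hypothetical individualization of an aggregate model-workflow pair.
import copy

def individualize(aggregate_pair: dict, characteristics: dict) -> dict:
    pair = copy.deepcopy(aggregate_pair)
    # Example: an older asset may warrant a more sensitive workflow trigger.
    if characteristics.get("age_years", 0) > 10:
        pair["trigger_range"] = (0.20, 0.70)
    # Example: weight a temperature input more heavily for an asset operated
    # in a high-temperature environment.
    if characteristics.get("environment") == "high-temperature":
        pair["model_weights"]["temperature"] *= 1.5
    return pair

aggregate = {"model_weights": {"temperature": 1.0, "rpm": 1.0},
             "trigger_range": (0.30, 0.70)}
individual = individualize(aggregate, {"age_years": 12,
                                       "environment": "high-temperature"})
```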
  • After defining an individualized model and/or workflow for the given asset, the remote computing system may then transmit the individualized model and/or workflow to the given asset. In a scenario where only one of the model or workflow is individualized, the given asset may utilize the aggregate version of the model or workflow that is not individualized. The given asset may then operate in accordance with its individualized model-workflow pair.
  • In example implementations, a given asset may include a local analytics device that may be configured to cause the given asset to operate in accordance with a model-workflow pair provided by the remote computing system. The local analytics device may be configured to utilize operating data generated by the asset sensors and/or actuators (e.g., data that is typically utilized for other asset-related purposes) to run the predictive model. When the local analytics device receives certain operating data, it may execute the model and, depending on the output of the model, may execute the corresponding workflow.
  • Executing the corresponding workflow may help facilitate preventing an undesirable event from occurring at the given asset. In this way, the given asset may locally determine that an occurrence of a particular event is likely and may then execute a particular workflow to help prevent the occurrence of the event. This may be particularly useful if communication between the given asset and remote computing system is hindered. For example, in some situations, a failure might occur before a command to take preventative actions reaches the given asset from the remote computing system. In such situations, the local analytics device may be advantageous in that it may generate the command locally, thereby avoiding any network latency or any issues arising from the given asset being “off-line.” As such, the local analytics device executing a model-workflow pair may facilitate causing the asset to adapt to its conditions.
  • While a given asset is operating in accordance with a model-workflow pair, the given asset may also continue to provide operating data to the remote computing system. Based at least on this data, the remote computing system may modify the aggregate model-workflow pair and/or one or more individualized model-workflow pairs. The remote computing system may make modifications for a number of reasons.
  • In one example, the remote computing system may modify a model and/or workflow if a new event occurred at an asset that the model did not previously account for. For instance, in a failure model, the new event may be a new failure that had yet to occur at any of the assets whose data was used to define the aggregate model.
  • In another example, the remote computing system may modify a model and/or workflow if an event occurred at an asset under operating conditions that typically do not cause the event to occur. For instance, returning again to a failure model, the failure model or corresponding workflow may be modified if a failure occurred under operating conditions that had yet to cause the failure to occur in the past.
  • In yet another example, the remote computing system may modify a model and/or workflow if an executed workflow failed to prevent an occurrence of an event. Specifically, the remote computing system may modify the model and/or workflow if the output of the model caused an asset to execute a workflow aimed to prevent the occurrence of an event but the event occurred at the asset nonetheless. Other examples of reasons for modifying a model and/or workflow are also possible.
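  • These three reasons might be captured by checks along the lines of the sketch below; the event-record fields and helper names are assumptions for illustration.

```python
# Hypothetical checks for flagging a model-workflow pair for modification.

def needs_modification(event: dict, known_event_types: set,
                       conditions_considered_benign: bool,
                       workflow_was_executed: bool) -> bool:
    # 1. A new event that the model did not previously account for.
    if event["type"] not in known_event_types:
        return True
    # 2. The event occurred under conditions that typically do not cause it.
    if conditions_considered_benign:
        return True
    # 3. An executed workflow failed to prevent the event's occurrence.
    if workflow_was_executed:
        return True
    return False
```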
  • The remote computing system may then distribute any modifications to the asset whose data caused the modification and/or to other assets in communication with the remote computing system. In this way, the remote computing system may dynamically modify models and/or workflows and distribute these modifications to a whole fleet of assets based on operating conditions of an individual asset.
  • In some example implementations, an asset and/or the remote computing system may be configured to dynamically adjust executing a predictive model and/or workflow. In particular, the asset and/or remote computing system may be configured to detect certain events that trigger a change in responsibilities with respect to whether the asset and/or the remote computing system are executing a predictive model and/or workflow.
  • For instance, in some cases, after the asset receives a model-workflow pair from the remote computing system, the asset may store the model-workflow pair in data storage but then may rely on the remote computing system to centrally execute part or all of the model-workflow pair. On the other hand, in other cases, the remote computing system may rely on the asset to locally execute part or all of the model-workflow pair. In yet other cases, the remote computing system and the asset may share in the responsibilities of executing the model-workflow pair.
  • In any event, at some point in time, certain events may occur that trigger the asset and/or remote computing system to adjust the execution of the predictive model and/or workflow. For instance, the asset and/or remote computing system may detect certain characteristics of a communication network that couples the asset to the remote computing system. Based on the characteristics of the communication network, the asset may adjust whether it is locally executing a predictive model and/or workflow and the remote computing system may accordingly modify whether it is centrally executing the model and/or workflow. In this way, the asset and/or remote computing system may adapt to conditions of the asset.
  • In a particular example, the asset may detect an indication that a signal strength of a communication link between the asset and the remote computing system is relatively weak (e.g., the asset may determine that it is about to go “off-line”), that a network latency is relatively high, and/or that a network bandwidth is relatively low. Accordingly, the asset may be programmed to take on responsibilities for executing the model-workflow pair that were previously being handled by the remote computing system. In turn, the remote computing system may cease centrally executing some or all of the model-workflow pair. In this way, the asset may locally execute the predictive model and then, based on executing the predictive model, execute the corresponding workflow to potentially help prevent an occurrence of a failure at the asset.
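  • A simple decision rule of this kind might look like the sketch below; the specific thresholds for signal strength, latency, and bandwidth are illustrative assumptions.

```python
# Hypothetical check an asset might use to decide whether to take over
# execution of the model-workflow pair from the remote computing system.

def should_execute_locally(signal_strength_dbm: float,
                           latency_ms: float,
                           bandwidth_kbps: float) -> bool:
    weak_signal = signal_strength_dbm < -100   # asset may be about to go "off-line"
    high_latency = latency_ms > 500
    low_bandwidth = bandwidth_kbps < 50
    return weak_signal or high_latency or low_bandwidth

if should_execute_locally(-105, 120, 200):
    print("Asset assumes local execution; remote system may cease central execution.")
```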
  • Moreover, in some implementations, the asset and/or the remote computing system may similarly adjust executing (or perhaps modify) a predictive model and/or workflow based on various other considerations. For example, based on the processing capacity of the asset, the asset may adjust locally executing a model-workflow pair and the remote computing system may accordingly adjust as well. In another example, based on the bandwidth of the communication network coupling the asset to the remote computing system, the asset may execute a modified workflow (e.g., transmitting data to the remote computing system according to a data-transmission scheme with a reduced transmission rate). Other examples are also possible.
  • As discussed above, examples provided herein are related to deployment and execution of predictive models. In one aspect, a computing system is provided. The computing system comprises at least one processor, a non-transitory computer-readable medium, and program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to: (a) receive operating data for a plurality of assets, wherein the plurality of assets comprises a first asset, (b) based on the received operating data, define an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets, (c) determine one or more characteristics of the first asset, (d) based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, define at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset, and (e) transmit to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • In another aspect, a non-transitory computer-readable medium is provided having instructions stored thereon that are executable to cause a computing system to: (a) receive operating data for a plurality of assets, wherein the plurality of assets comprises a first asset, (b) based on the received operating data, define an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets, (c) determine one or more characteristics of the first asset, (d) based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, define at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset, and (e) transmit to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • In yet another aspect, a computer-implemented method is provided. The method comprises: (a) receiving operating data for a plurality of assets, wherein the plurality of assets comprises a first asset, (b) based on the received operating data, defining an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets, (c) determining one or more characteristics of the first asset, (d) based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, defining at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset, and (e) transmitting to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • One of ordinary skill in the art will appreciate these as well as numerous other aspects in reading the following disclosure.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts an example network configuration in which example embodiments may be implemented.
  • FIG. 2 depicts a simplified block diagram of an example asset.
  • FIG. 3 depicts a conceptual illustration of example abnormal-condition indicators and triggering criteria.
  • FIG. 4 depicts a simplified block diagram of an example analytics system.
  • FIG. 5 depicts an example flow diagram of a definition phase that may be used for defining model-workflow pairs.
  • FIG. 6A depicts a conceptual illustration of an aggregate model-workflow pair.
  • FIG. 6B depicts a conceptual illustration of an individualized model-workflow pair.
  • FIG. 6C depicts a conceptual illustration of another individualized model-workflow pair.
  • FIG. 6D depicts a conceptual illustration of a modified model-workflow pair.
  • FIG. 7 depicts an example flow diagram of a modeling phase that may be used for defining a predictive model that outputs a health metric.
  • FIG. 8 depicts a conceptual illustration of data utilized to define a model.
  • FIG. 9 depicts an example flow diagram of a local-execution phase that may be used for locally executing a predictive model.
  • FIG. 10 depicts an example flow diagram of a modification phase that may be used for modifying model-workflow pairs.
  • FIG. 11 depicts an example flow diagram of an adjustment phase that may be used for adjusting execution of model-workflow pairs.
  • FIG. 12 depicts a flow diagram of an example method for defining and deploying an aggregate, predictive model and corresponding workflow.
  • FIG. 13 depicts a flow diagram of an example method for defining and deploying an individualized, predictive model and/or corresponding workflow.
  • FIG. 14 depicts a flow diagram of an example method for dynamically modifying the execution of model-workflow pairs.
  • DETAILED DESCRIPTION
  • The following disclosure makes reference to the accompanying figures and several exemplary scenarios. One of ordinary skill in the art will understand that such references are for the purpose of explanation only and are therefore not meant to be limiting. Part or all of the disclosed systems, devices, and methods may be rearranged, combined, added to, and/or removed in a variety of manners, each of which is contemplated herein.
  • I. Example Network Configuration
  • Turning now to the figures, FIG. 1 depicts an example network configuration 100 in which example embodiments may be implemented. As shown, the network configuration 100 includes an asset 102, an asset 104, a communication network 106, a remote computing system 108 that may take the form of an analytics system, an output system 110, and a data source 112.
  • The communication network 106 may communicatively connect each of the components in the network configuration 100. For instance, the assets 102 and 104 may communicate with the analytics system 108 via the communication network 106. In some cases, the assets 102 and 104 may communicate with one or more intermediary systems, such as an asset gateway (not pictured), that in turn communicates with the analytics system 108. Likewise, the analytics system 108 may communicate with the output system 110 via the communication network 106. In some cases, the analytics system 108 may communicate with one or more intermediary systems, such as a host server (not pictured), that in turn communicates with the output system 110. Many other configurations are also possible. In example cases, the communication network 106 may facilitate secure communications between network components (e.g., via encryption or other security measures).
  • In general, the assets 102 and 104 may take the form of any device configured to perform one or more operations (which may be defined based on the field) and may also include equipment configured to transmit data indicative of one or more operating conditions of the given asset. In some examples, an asset may include one or more subsystems configured to perform one or more respective operations. In practice, multiple subsystems may operate in parallel or sequentially in order for an asset to operate.
  • Example assets may include transportation machines (e.g., locomotives, aircraft, passenger vehicles, semi-trailer trucks, ships, etc.), industrial machines (e.g., mining equipment, construction equipment, factory automation, etc.), medical machines (e.g., medical imaging equipment, surgical equipment, medical monitoring systems, medical laboratory equipment, etc.), and utility machines (e.g., turbines, solar farms, etc.), among other examples. Those of ordinary skill in the art will appreciate that these are but a few examples of assets and that numerous others are possible and contemplated herein.
  • In example implementations, the assets 102 and 104 may each be of the same type (e.g., a fleet of locomotives or aircraft, a group of wind turbines, or a set of MRI machines, among other examples) and perhaps may be of the same class (e.g., same brand and/or model). In other examples, the assets 102 and 104 may differ by type, by brand, by model, etc. The assets are discussed in further detail below with reference to FIG. 2.
  • As shown, the assets 102 and 104, and perhaps the data source 112, may communicate with the analytics system 108 via the communication network 106. In general, the communication network 106 may include one or more computing systems and network infrastructure configured to facilitate transferring data between network components. The communication network 106 may be or may include one or more Wide-Area Networks (WANs) and/or Local-Area Networks (LANs), which may be wired and/or wireless and support secure communication. In some examples, the communication network 106 may include one or more cellular networks and/or the Internet, among other networks. The communication network 106 may operate according to one or more communication protocols, such as LTE, CDMA, GSM, LPWAN, WiFi, Bluetooth, Ethernet, HTTP/S, TCP, CoAP/DTLS and the like. Although the communication network 106 is shown as a single network, it should be understood that the communication network 106 may include multiple, distinct networks that are themselves communicatively linked. The communication network 106 could take other forms as well.
  • As noted above, the analytics system 108 may be configured to receive data from the assets 102 and 104 and the data source 112. Broadly speaking, the analytics system 108 may include one or more computing systems, such as servers and databases, configured to receive, process, analyze, and output data. The analytics system 108 may be configured according to a given dataflow technology, such as TPL Dataflow or NiFi, among other examples. The analytics system 108 is discussed in further detail below with reference to FIG. 4.
  • As shown, the analytics system 108 may be configured to transmit data to the assets 102 and 104 and/or to the output system 110. The particular data transmitted may take various forms and will be described in further detail below.
  • In general, the output system 110 may take the form of a computing system or device configured to receive data and provide some form of output. The output system 110 may take various forms. In one example, the output system 110 may be or include an output device configured to receive data and provide an audible, visual, and/or tactile output in response to the data. In general, an output device may include one or more input interfaces configured to receive user input, and the output device may be configured to transmit data through the communication network 106 based on such user input. Examples of output devices include tablets, smartphones, laptop computers, other mobile computing devices, desktop computers, smart TVs, and the like.
  • Another example of the output system 110 may take the form of a work-order system configured to output a request for a mechanic or the like to repair an asset. Yet another example of the output system 110 may take the form of a parts-ordering system configured to place an order for a part of an asset and output a receipt thereof. Numerous other output systems are also possible.
  • The data source 112 may be configured to communicate with the analytics system 108. In general, the data source 112 may be or include one or more computing systems configured to collect, store, and/or provide to other systems, such as the analytics system 108, data that may be relevant to the functions performed by the analytics system 108. The data source 112 may be configured to generate and/or obtain data independently from the assets 102 and 104. As such, the data provided by the data source 112 may be referred to herein as “external data.” The data source 112 may be configured to provide current and/or historical data. In practice, the analytics system 108 may receive data from the data source 112 by “subscribing” to a service provided by the data source. However, the analytics system 108 may receive data from the data source 112 in other manners as well.
  • Examples of the data source 112 include environment data sources, asset-management data sources, and other data sources. In general, environment data sources provide data indicating some characteristic of the environment in which assets are operated. Examples of environment data sources include weather-data servers, global navigation satellite systems (GNSS) servers, map-data servers, and topography-data servers that provide information regarding natural and artificial features of a given area, among other examples.
  • In general, asset-management data sources provide data indicating events or statuses of entities (e.g., other assets) that may affect the operation or maintenance of assets (e.g., when and where an asset may operate or receive maintenance). Examples of asset-management data sources include traffic-data servers that provide information regarding air, water, and/or ground traffic; asset-schedule servers that provide information regarding expected routes and/or locations of assets on particular dates and/or at particular times; defect detector systems (also known as “hotbox” detectors) that provide information regarding one or more operating conditions of an asset that passes in proximity to the defect detector system; part-supplier servers that provide information regarding parts that particular suppliers have in stock and prices thereof; and repair-shop servers that provide information regarding repair-shop capacity and the like, among other examples.
  • Examples of other data sources include power-grid servers that provide information regarding electricity consumption and external databases that store historical operating data for assets, among other examples. One of ordinary skill in the art will appreciate that these are but a few examples of data sources and that numerous others are possible.
  • It should be understood that the network configuration 100 is one example of a network in which embodiments described herein may be implemented. Numerous other arrangements are possible and contemplated herein. For instance, other network configurations may include additional components not pictured and/or more or less of the pictured components.
  • II. Example Asset
  • Turning to FIG. 2, a simplified block diagram of an example asset 200 is depicted. Either or both of assets 102 and 104 from FIG. 1 may be configured like the asset 200. As shown, the asset 200 may include one or more subsystems 202, one or more sensors 204, one or more actuators 205, a central processing unit 206, data storage 208, a network interface 210, a user interface 212, and a local analytics device 220, all of which may be communicatively linked by a system bus, network, or other connection mechanism. One of ordinary skill in the art will appreciate that the asset 200 may include additional components not shown and/or more or less of the depicted components.
  • Broadly speaking, the asset 200 may include one or more electrical, mechanical, and/or electromechanical components configured to perform one or more operations. In some cases, one or more components may be grouped into a given subsystem 202.
  • Generally, a subsystem 202 may include a group of related components that are part of the asset 200. A single subsystem 202 may independently perform one or more operations or the single subsystem 202 may operate along with one or more other subsystems to perform one or more operations. Typically, different types of assets, and even different classes of the same type of assets, may include different subsystems.
  • For instance, in the context of transportation assets, examples of subsystems 202 may include engines, transmissions, drivetrains, fuel systems, battery systems, exhaust systems, braking systems, electrical systems, signal processing systems, generators, gear boxes, rotors, and hydraulic systems, among numerous other subsystems. In the context of a medical machine, examples of subsystems 202 may include scanning systems, motors, coil and/or magnet systems, signal processing systems, rotors, and electrical systems, among numerous other subsystems.
  • As suggested above, the asset 200 may be outfitted with various sensors 204 that are configured to monitor operating conditions of the asset 200 and various actuators 205 that are configured to interact with the asset 200 or a component thereof and monitor operating conditions of the asset 200. In some cases, some of the sensors 204 and/or actuators 205 may be grouped based on a particular subsystem 202. In this way, the group of sensors 204 and/or actuators 205 may be configured to monitor operating conditions of the particular subsystem 202, and the actuators from that group may be configured to interact with the particular subsystem 202 in some way that may alter the subsystem's behavior based on those operating conditions.
  • In general, a sensor 204 may be configured to detect a physical property, which may be indicative of one or more operating conditions of the asset 200, and provide an indication, such as an electrical signal, of the detected physical property. In operation, the sensors 204 may be configured to obtain measurements continuously, periodically (e.g., based on a sampling frequency), and/or in response to some triggering event. In some examples, the sensors 204 may be preconfigured with operating parameters for performing measurements and/or may perform measurements in accordance with operating parameters provided by the central processing unit 206 (e.g., sampling signals that instruct the sensors 204 to obtain measurements). In examples, different sensors 204 may have different operating parameters (e.g., some sensors may sample based on a first frequency, while other sensors sample based on a second, different frequency). In any event, the sensors 204 may be configured to transmit electrical signals indicative of a measured physical property to the central processing unit 206. The sensors 204 may continuously or periodically provide such signals to the central processing unit 206.
  • For instance, sensors 204 may be configured to measure physical properties such as the location and/or movement of the asset 200, in which case the sensors may take the form of GNSS sensors, dead-reckoning-based sensors, accelerometers, gyroscopes, pedometers, magnetometers, or the like.
  • Additionally, various sensors 204 may be configured to measure other operating conditions of the asset 200, examples of which may include temperatures, pressures, speeds, acceleration or deceleration rates, friction, power usages, fuel usages, fluid levels, runtimes, voltages and currents, magnetic fields, electric fields, presence or absence of objects, positions of components, and power generation, among other examples. One of ordinary skill in the art will appreciate that these are but a few example operating conditions that sensors may be configured to measure. Additional or fewer sensors may be used depending on the industrial application or specific asset.
  • As suggested above, an actuator 205 may be configured similar in some respects to a sensor 204. Specifically, an actuator 205 may be configured to detect a physical property indicative of an operating condition of the asset 200 and provide an indication thereof in a manner similar to the sensor 204.
  • Moreover, an actuator 205 may be configured to interact with the asset 200, one or more subsystems 202, and/or some component thereof. As such, an actuator 205 may include a motor or the like that is configured to move or otherwise control a component or system. In a particular example, an actuator may be configured to measure a fuel flow and alter the fuel flow (e.g., restrict the fuel flow), or an actuator may be configured to measure a hydraulic pressure and alter the hydraulic pressure (e.g., increase or decrease the hydraulic pressure). Numerous other example interactions of an actuator are also possible and contemplated herein.
  • Generally, the central processing unit 206 may include one or more processors and/or controllers, which may take the form of a general- or special-purpose processor or controller. In particular, in example implementations, the central processing unit 206 may be or include microprocessors, microcontrollers, application-specific integrated circuits, digital signal processors, and the like. In turn, the data storage 208 may be or include one or more non-transitory computer-readable storage media, such as optical, magnetic, organic, or flash memory, among other examples.
  • The central processing unit 206 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 208 to perform the operations of an asset described herein. For instance, as suggested above, the central processing unit 206 may be configured to receive respective sensor signals from the sensors 204 and/or actuators 205. The central processing unit 206 may be configured to store sensor and/or actuator data in and later access it from the data storage 208.
  • The central processing unit 206 may also be configured to determine whether received sensor and/or actuator signals trigger any abnormal-condition indicators, such as fault codes. For instance, the central processing unit 206 may be configured to store in the data storage 208 abnormal-condition rules, each of which includes a given abnormal-condition indicator representing a particular abnormal condition and respective triggering criteria that trigger the abnormal-condition indicator. That is, each abnormal-condition indicator corresponds with one or more sensor and/or actuator measurement values that must be satisfied before the abnormal-condition indicator is triggered. In practice, the asset 200 may be pre-programmed with the abnormal-condition rules and/or may receive new abnormal-condition rules or updates to existing rules from a computing system, such as the analytics system 108.
  • In any event, the central processing unit 206 may be configured to determine whether received sensor and/or actuator signals trigger any abnormal-condition indicators. That is, the central processing unit 206 may determine whether received sensor and/or actuator signals satisfy any triggering criteria. When such a determination is affirmative, the central processing unit 206 may generate abnormal-condition data and may also cause the asset's user interface 212 to output an indication of the abnormal condition, such as a visual and/or audible alert. Additionally, the central processing unit 206 may log the occurrence of the abnormal-condition indicator being triggered in the data storage 208, perhaps with a timestamp.
  • FIG. 3 depicts a conceptual illustration of example abnormal-condition indicators and respective triggering criteria for an asset. In particular, FIG. 3 depicts a conceptual illustration of example fault codes. As shown, table 300 includes columns 302, 304, and 306 that correspond to Sensor A, Actuator B, and Sensor C, respectively, and rows 308, 310, and 312 that correspond to Fault Codes 1, 2, and 3, respectively. Entries 314 then specify sensor criteria (e.g., sensor value thresholds) that correspond to the given fault codes.
  • For example, Fault Code 1 will be triggered when Sensor A detects a rotational measurement greater than 135 revolutions per minute (RPM) and Sensor C detects a temperature measurement greater than 65° Celsius (C). Fault Code 2 will be triggered when Actuator B detects a voltage measurement greater than 1000 Volts (V) and Sensor C detects a temperature measurement less than 55° C. Fault Code 3 will be triggered when Sensor A detects a rotational measurement greater than 100 RPM, Actuator B detects a voltage measurement greater than 750 V, and Sensor C detects a temperature measurement greater than 60° C. One of ordinary skill in the art will appreciate that FIG. 3 is provided for purposes of example and explanation only and that numerous other fault codes and/or triggering criteria are possible and contemplated herein.
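  • The triggering criteria of table 300 can be expressed directly as rules, as in the following sketch (Sensor A in RPM, Actuator B in Volts, Sensor C in °C); the encoding itself is an illustrative assumption.

```python
# Encoding of the example fault-code triggering criteria from FIG. 3.
FAULT_CODE_RULES = {
    "Fault Code 1": lambda a, b, c: a > 135 and c > 65,
    "Fault Code 2": lambda a, b, c: b > 1000 and c < 55,
    "Fault Code 3": lambda a, b, c: a > 100 and b > 750 and c > 60,
}

def triggered_fault_codes(sensor_a_rpm, actuator_b_volts, sensor_c_celsius):
    return [code for code, rule in FAULT_CODE_RULES.items()
            if rule(sensor_a_rpm, actuator_b_volts, sensor_c_celsius)]

# Example: 110 RPM, 800 V, and 62° C satisfy only Fault Code 3's criteria.
print(triggered_fault_codes(110, 800, 62))   # ['Fault Code 3']
```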
  • Referring back to FIG. 2, the central processing unit 206 may be configured to carry out various additional functions for managing and/or controlling operations of the asset 200 as well. For example, the central processing unit 206 may be configured to provide instruction signals to the subsystems 202 and/or the actuators 205 that cause the subsystems 202 and/or the actuators 205 to perform some operation, such as modifying a throttle position. Additionally, the central processing unit 206 may be configured to modify the rate at which it processes data from the sensors 204 and/or the actuators 205, or the central processing unit 206 may be configured to provide instruction signals to the sensors 204 and/or actuators 205 that cause the sensors 204 and/or actuators 205 to, for example, modify a sampling rate. Moreover, the central processing unit 206 may be configured to receive signals from the subsystems 202, the sensors 204, the actuators 205, the network interfaces 210, and/or the user interfaces 212 and based on such signals, cause an operation to occur. Further still, the central processing unit 206 may be configured to receive signals from a computing device, such as a diagnostic device, that cause the central processing unit 206 to execute one or more diagnostic tools in accordance with diagnostic rules stored in the data storage 208. Other functionalities of the central processing unit 206 are discussed below.
  • The network interface 210 may be configured to provide for communication between the asset 200 and various network components connected to communication network 106. For example, the network interface 210 may be configured to facilitate wireless communications to and from the communication network 106 and may thus take the form of an antenna structure and associated equipment for transmitting and receiving various over-the-air signals. Other examples are possible as well. In practice, the network interface 210 may be configured according to a communication protocol, such as but not limited to any of those described above.
  • The user interface 212 may be configured to facilitate user interaction with the asset 200 and may also be configured to facilitate causing the asset 200 to perform an operation in response to user interaction. Examples of user interfaces 212 include touch-sensitive interfaces, mechanical interfaces (e.g., levers, buttons, wheels, dials, keyboards, etc.), and other input interfaces (e.g., microphones), among other examples. In some cases, the user interface 212 may include or provide connectivity to output components, such as display screens, speakers, headphone jacks, and the like.
  • The local analytics device 220 may generally be configured to receive and analyze data and based on such analysis, cause one or more operations to occur at the asset 200. In particular, the local analytics device 220 may receive data from the sensors 204 and/or actuators 205 and based on such data, may provide instructions to the central processing unit 206 that cause the asset 200 to perform an operation.
  • In practice, the local analytics device 220 may enable the asset 200 to locally perform advanced analytics and associated operations, such as executing a predictive model and corresponding workflow, that the other on-asset components may otherwise be unable to perform. As such, the local analytics device 220 may help provide additional processing power and/or intelligence to the asset 200.
  • As shown, the local analytics device 220 may include a processing unit 222, a data storage 224, and a network interface 226, all of which may be communicatively linked by a system bus, network, or other connection mechanism. The processing unit 222 may include any of the components discussed above with respect to the central processing unit 206. In turn, the data storage 224 may be or include one or more non-transitory computer-readable storage media, which may take any of the forms of computer-readable storage media discussed above.
  • The processing unit 222 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 224 to perform the operations of a local analytics device described herein. For instance, the processing unit 222 may be configured to receive respective sensor and/or actuator signals from the sensors 204 and/or actuators 205 and execute a predictive model-workflow pair based on such signals. Other functions are described below.
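  • For instance, a single local-execution step might proceed along the lines of the sketch below; the function names and the threshold-style trigger are assumptions for illustration.

```python
# Hypothetical local-execution step for the local analytics device.

def local_execution_step(read_signals, model, workflow, trigger=0.7):
    readings = read_signals()        # latest sensor/actuator values
    likelihood = model(readings)     # probability of the event occurring
    if likelihood >= trigger:
        workflow(readings)           # e.g., alert the operator, adjust an actuator
    return likelihood
```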
  • The network interface 226 may be the same or similar to the network interfaces described above. In practice, the network interface 226 may facilitate communication between the asset 200 and the analytics system 108.
  • In some example implementations, the local analytics device 220 may include and/or communicate with a user interface that may be similar to the user interface 212. In practice, the user interface may be located remotely from the local analytics device 220 (and the asset 200). Other examples are also possible.
  • One of ordinary skill in the art will appreciate that the asset 200 shown in FIG. 2 is but one example of a simplified representation of an asset and that numerous others are also possible. For instance, other assets may include additional components not pictured and/or more or less of the pictured components. Moreover, a given asset may include multiple, individual assets that are operated in concert to perform operations of the given asset. Other examples are also possible.
  • III. Example Analytics System
  • Referring now to FIG. 4, a simplified block diagram of an example analytics system 400 is depicted. As suggested above, the analytics system 400 may include one or more computing systems communicatively linked and arranged to carry out various operations described herein. Specifically, as shown, the analytics system 400 may include a data intake system 402, a data science system 404, and one or more databases 406. These system components may be communicatively coupled via one or more wireless and/or wired connections, which may be configured to facilitate secure communications.
  • The data intake system 402 may generally function to receive and process data and output data to the data science system 404. As such, the data intake system 402 may include one or more network interfaces configured to receive data from various network components of the network configuration 100, such as the assets 102 and 104, the output system 110, and/or the data source 112. Specifically, the data intake system 402 may be configured to receive analog signals, data streams, and/or network packets, among other examples. As such, the network interfaces may include one or more wired network interfaces, such as a port or the like, and/or wireless network interfaces, similar to those described above. In some examples, the data intake system 402 may be or include components configured according to a given dataflow technology, such as a NiFi receiver or the like.
  • The data intake system 402 may include one or more processing components configured to perform one or more operations. Example operations may include compression and/or decompression, encryption and/or decryption, analog-to-digital and/or digital-to-analog conversion, filtration, and amplification, among other operations. Moreover, the data intake system 402 may be configured to parse, sort, organize, and/or route data based on data type and/or characteristics of the data. In some examples, the data intake system 402 may be configured to format, package, and/or route data based on one or more characteristics or operating parameters of the data science system 404.
  • In general, the data received by the data intake system 402 may take various forms. For example, the payload of the data may include a single sensor or actuator measurement, multiple sensor and/or actuator measurements, and/or one or more items of abnormal-condition data. Other examples are also possible.
  • Moreover, the received data may include certain characteristics, such as a source identifier and a timestamp (e.g., a date and/or time at which the information was obtained). For instance, a unique identifier (e.g., a computer-generated alphabetic, numeric, or alphanumeric identifier, or the like) may be assigned to each asset, and perhaps to each sensor and actuator. Such identifiers may be operable to identify the asset, sensor, or actuator from which data originates. In some cases, another characteristic may include the location (e.g., GPS coordinates) at which the information was obtained. Data characteristics may come in the form of signal signatures or metadata, among other examples.
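  • Such a record might be structured along the lines of the following sketch; the field names are assumptions for illustration.

```python
# Hypothetical operating-data record carrying the characteristics described above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class OperatingDataRecord:
    source_id: str    # unique identifier of the originating asset, sensor, or actuator
    timestamp: str    # date and/or time at which the information was obtained
    value: float      # the measurement itself
    location: Optional[Tuple[float, float]] = None   # e.g., GPS coordinates
```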
  • The data science system 404 may generally function to receive (e.g., from the data intake system 402) and analyze data and based on such analysis, cause one or more operations to occur. As such, the data science system 404 may include one or more network interfaces 408, a processing unit 410, and data storage 412, all of which may be communicatively linked by a system bus, network, or other connection mechanism. In some cases, the data science system 404 may be configured to store and/or access one or more application program interfaces (APIs) that facilitate carrying out some of the functionality disclosed herein.
  • The network interfaces 408 may be the same or similar to any network interface described above. In practice, the network interfaces 408 may facilitate communication (e.g., with some level of security) between the data science system 404 and various other entities, such as the data intake system 402, the databases 406, the assets 102, the output system 110, etc.
  • The processing unit 410 may include one or more processors, which may take any of the processor forms described above. In turn, the data storage 412 may be or include one or more non-transitory computer-readable storage media, which may take any of the forms of computer-readable storage media discussed above. The processing unit 410 may be configured to store, access, and execute computer-readable program instructions stored in the data storage 412 to perform the operations of an analytics system described herein.
  • In general, the processing unit 410 may be configured to perform analytics on data received from the data intake system 402. To that end, the processing unit 410 may be configured to execute one or more modules, which may each take the form of one or more sets of program instructions that are stored in the data storage 412. The modules may be configured to facilitate causing an outcome to occur based on the execution of the respective program instructions. An example outcome from a given module may include outputting data into another module, updating the program instructions of the given module and/or of another module, and outputting data to a network interface 408 for transmission to an asset and/or the output system 110, among other examples.
  • The databases 406 may generally function to receive (e.g., from the data science system 404) and store data. As such, each database 406 may include one or more non-transitory computer-readable storage media, such as any of the examples provided above. In practice, the databases 406 may be separate from or integrated with the data storage 412.
  • The databases 406 may be configured to store numerous types of data, some of which is discussed below. In practice, some of the data stored in the databases 406 may include a timestamp indicating a date and time at which the data was generated or added to the database. Moreover, data may be stored in a number of manners in the databases 406. For instance, data may be stored in time sequence, in a tabular manner, and/or organized based on data source type (e.g., based on asset, asset type, sensor, sensor type, actuator, or actuator type) or abnormal-condition indicator, among other examples.
  • IV. Example Operations
  • The operations of the example network configuration 100 depicted in FIG. 1 will now be discussed in further detail below. To help describe some of these operations, flow diagrams may be referenced to describe combinations of operations that may be performed. In some cases, each block may represent a module or portion of program code that includes instructions that are executable by a processor to implement specific logical functions or steps in a process. The program code may be stored on any type of computer-readable medium, such as non-transitory computer-readable media. In other cases, each block may represent circuitry that is wired to perform specific logical functions or steps in a process. Moreover, the blocks shown in the flow diagrams may be rearranged into different orders, combined into fewer blocks, separated into additional blocks, and/or removed based upon the particular embodiment.
  • The following description may reference examples where a single data source, such as the asset 102, provides data to the analytics system 108 that then performs one or more functions. It should be understood that this is done merely for sake of clarity and explanation and is not meant to be limiting. In practice, the analytics system 108 generally receives data from multiple sources, perhaps simultaneously, and performs operations based on such aggregate received data.
  • A. Collection of Operating Data
  • As mentioned above, the representative asset 102 may take various forms and may be configured to perform a number of operations. In a non-limiting example, the asset 102 may take the form of a locomotive that is operable to transfer cargo across the United States. While in transit, the sensors and/or actuators of the asset 102 may obtain data that reflects one or more operating conditions of the asset 102. The sensors and/or actuators may transmit the data to a processing unit of the asset 102.
  • The processing unit may be configured to receive the data from the sensors and/or actuators. In practice, the processing unit may receive sensor data from multiple sensors and/or actuator data from multiple actuators simultaneously or sequentially. As discussed above, while receiving this data, the processing unit may also be configured to determine whether the data satisfies triggering criteria that trigger any abnormal-condition indicators, such as fault codes. In the event the processing unit determines that one or more abnormal-condition indicators are triggered, the processing unit may be configured to perform one or more local operations, such as outputting an indication of the triggered indicator via a user interface.
  • The asset 102 may then transmit operating data to the analytics system 108 via a network interface of the asset 102 and the communication network 106. In operation, the asset 102 may transmit operating data to the analytics system 108 continuously, periodically, and/or in response to triggering events (e.g., abnormal conditions). Specifically, the asset 102 may transmit operating data periodically based on a particular frequency (e.g., daily, hourly, every fifteen minutes, once per minute, once per second, etc.), or the asset 102 may be configured to transmit a continuous, real-time feed of operating data. Additionally or alternatively, the asset 102 may be configured to transmit operating data based on certain triggers, such as when sensor and/or actuator measurements satisfy triggering criteria for any abnormal-condition indicators. The asset 102 may transmit operating data in other manners as well.
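  • A transmission policy combining the periodic and trigger-based behaviors described above might look like the following sketch; the interval and trigger check are illustrative assumptions.

```python
# Hypothetical policy for deciding when to transmit operating data.
import time

def should_transmit(last_sent_ts: float, interval_s: float,
                    triggered_fault_codes: list) -> bool:
    periodic_due = (time.time() - last_sent_ts) >= interval_s
    triggered = len(triggered_fault_codes) > 0   # an abnormal-condition indicator fired
    return periodic_due or triggered
```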
  • In practice, operating data for the asset 102 may include sensor data, actuator data, and/or abnormal-condition data. In some implementations, the asset 102 may be configured to provide the operating data in a single data stream, while in other implementations the asset 102 may be configured to provide the operating data in multiple, distinct data streams. For example, the asset 102 may provide to the analytics system 108 a first data stream of sensor and/or actuator data and a second data stream of abnormal-condition data. Other possibilities also exist.
  • Sensor and actuator data may take various forms. For example, at times, sensor data (or actuator data) may include measurements obtained by each of the sensors (or actuators) of the asset 102, while at other times it may include measurements obtained by only a subset of the sensors (or actuators) of the asset 102.
  • Specifically, the sensor and/or actuator data may include measurements obtained by the sensors and/or actuators associated with a given triggered abnormal-condition indicator. For example, if a triggered fault code is Fault Code 1 from FIG. 3, then sensor data may include raw measurements obtained by Sensors A and C. Additionally or alternatively, the data may include measurements obtained by one or more sensors or actuators not directly associated with the triggered fault code. Continuing with the last example, the data may additionally include measurements obtained by Actuator B and/or other sensors or actuators. In some examples, the asset 102 may include particular sensor data in the operating data based on a fault-code rule or instruction provided by the analytics system 108, which may have, for example, determined that there is a correlation between that which Actuator B is measuring and that which caused Fault Code 1 to be triggered in the first place. Other examples are also possible.
  • Further still, the data may include one or more sensor and/or actuator measurements from each sensor and/or actuator of interest based on a particular time of interest, which may be selected based on a number of factors. In some examples, the particular time of interest may be based on a sampling rate. In other examples, the particular time of interest may be based on the time at which an abnormal-condition indicator is triggered.
  • In particular, based on the time at which an abnormal-condition indicator is triggered, the data may include one or more respective sensor and/or actuator measurements from each sensor and/or actuator of interest (e.g., sensors and/or actuators directly and indirectly associated with the triggered indicator). The one or more measurements may be based on a particular number of measurements or particular duration of time around the time of the triggered abnormal-condition indicator.
  • For example, if a triggered fault code is Fault Code 2 from FIG. 3, the sensors and actuators of interest might include Actuator B and Sensor C. The one or more measurements may include the most recent respective measurements obtained by Actuator B and Sensor C prior to the triggering of the fault code (e.g., triggering measurements) or a respective set of measurements before, after, or about the triggering measurements. For example, a set of five measurements may include the five measurements before or after the triggering measurement (e.g., excluding the triggering measurement), the four measurements before or after the triggering measurement and the triggering measurement, or the two measurements before and the two after as well as the triggering measurement, among other possibilities.
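  • Selecting such a window of measurements around the triggering measurement might be done as in the sketch below, which is an illustrative assumption rather than the disclosure's prescribed method.

```python
# Hypothetical selection of measurements around a triggering measurement.

def window_around_trigger(measurements, trigger_index, before=2, after=2):
    # Returns the measurements from `before` samples before the trigger to
    # `after` samples after it, including the triggering measurement.
    start = max(0, trigger_index - before)
    end = min(len(measurements), trigger_index + after + 1)
    return measurements[start:end]

# Example: two before, two after, plus the triggering measurement at index 6.
print(window_around_trigger(list(range(10)), trigger_index=6))   # [4, 5, 6, 7, 8]
```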
  • Similar to sensor and actuator data, the abnormal-condition data may take various forms. In general, the abnormal-condition data may include or take the form of an indicator that is operable to uniquely identify a particular abnormal condition that occurred at the asset 102 from all other abnormal conditions that may occur at the asset 102. The abnormal-condition indicator may take the form of an alphabetic, numeric, or alphanumeric identifier, among other examples. Moreover, the abnormal-condition indicator may take the form of a string of words that is descriptive of the abnormal condition, such as “Overheated Engine” or “Out of Fuel”, among other examples.
  • The analytics system 108, and in particular, the data intake system of the analytics system 108, may be configured to receive operating data from one or more assets and/or data sources. The data intake system may be configured to perform one or more operations to the received data and then relay the data to the data science system of the analytics system 108. In turn, the data science system may analyze the received data and based on such analysis, perform one or more operations.
  • B. Defining Predictive Models & Workflows
  • As one example, the analytics system 108 may be configured to define predictive models and corresponding workflows based on received operating data for one or more assets and/or received external data related to the one or more assets. The analytics system 108 may define model-workflow pairs based on various other data as well.
  • In general, a model-workflow pair may include a set of program instructions that cause an asset to monitor certain operating conditions and carry out certain operations that help facilitate preventing the occurrence of a particular event suggested by the monitored operating conditions. Specifically, a predictive model may include one or more algorithms whose inputs are sensor and/or actuator data from one or more sensors and/or actuators of an asset and whose outputs are utilized to determine a probability that a particular event may occur at the asset within a particular period of time in the future. In turn, a workflow may include one or more triggers (e.g., model output values) and corresponding operations that the asset carries out based on the triggers.
  • As suggested above, the analytics system 108 may be configured to define aggregate and/or individualized predictive models and/or workflows. An “aggregate” model/workflow may refer to a model/workflow that is generic for a group of assets and defined without taking into consideration particular characteristics of the assets to which the model/workflow is deployed. On the other hand, an “individualized” model/workflow may refer to a model/workflow that is specifically tailored for a single asset or a subgroup of assets from the group of assets and defined based on particular characteristics of the single asset or subgroup of assets to which the model/workflow is deployed. These different types of models/workflows and the operations performed by the analytics system 108 to define them are discussed in further detail below.
  • 1. Aggregate Models & Workflows
  • In example implementations, the analytics system 108 may be configured to define an aggregate model-workflow pair based on aggregated data for a plurality of assets. Defining aggregate model-workflow pairs may be performed in a variety of manners.
  • FIG. 5 is a flow diagram 500 depicting one possible example of a definition phase that may be used for defining model-workflow pairs. For purposes of illustration, the example definition phase is described as being carried out by the analytics system 108, but this definition phase may be carried out by other systems as well. One of ordinary skill in the art will appreciate that the flow diagram 500 is provided for sake of clarity and explanation and that numerous other combinations of operations may be utilized to define a model-workflow pair.
  • As shown in FIG. 5, at block 502, the analytics system 108 may begin by defining a set of data that forms the basis for a given predictive model (e.g., the data of interest). The data of interest may derive from a number of sources, such as the assets 102 and 104 and the data source 112, and may be stored in a database of the analytics system 108.
• The data of interest may include historical data for a particular set of assets from a group of assets or all of the assets from a group of assets (e.g., the assets of interest). Moreover, the data of interest may include measurements from a particular set of sensors and/or actuators from each of the assets of interest or from all of the sensors and/or actuators from each of the assets of interest. Further still, the data of interest may include data from a particular period of time in the past, such as two weeks' worth of historical data.
• The data of interest may include a variety of types of data, which may depend on the given predictive model. In some instances, the data of interest may include at least operating data indicating operating conditions of assets, where the operating data is as discussed above in the Collection of Operating Data section. Additionally, the data of interest may include environment data indicating environments in which assets are typically operated and/or scheduling data indicating planned dates and times during which assets are to carry out certain tasks. Other types of data may also be included in the data of interest.
  • In practice, the data of interest may be defined in a number of manners. In one example, the data of interest may be user-defined. In particular, a user may operate an output system 110 that receives user inputs indicating a selection of certain data of interest, and the output system 110 may provide to the analytics system 108 data indicating such selections. Based on the received data, the analytics system 108 may then define the data of interest.
  • In another example, the data of interest may be machine-defined. In particular, the analytics system 108 may perform various operations, such as simulations, to determine the data of interest that generates the most accurate predictive model. Other examples are also possible.
  • Returning to FIG. 5, at block 504, the analytics system 108 may be configured to, based on the data of interest, define an aggregate, predictive model that is related to the operation of assets. In general, an aggregate, predictive model may define a relationship between operating conditions of assets and a likelihood of an event occurring at the assets. Specifically, an aggregate, predictive model may receive as inputs sensor data from sensors of an asset and/or actuator data from actuators of the asset and output a probability that an event will occur at the asset within a certain amount of time into the future.
  • The event that the predictive model predicts may vary depending on the particular implementation. For example, the event may be a failure and so, the predictive model may be a failure model that predicts whether a failure will occur within a certain period of time in the future (failure models are discussed in detail below in the Health-Score Models & Workflows section). In another example, the event may be an asset completing a task and so, the predictive model may predict the likelihood that an asset will complete a task on time. In other examples, the event may be a fluid or component replacement, and so, the predictive model may predict an amount of time before a particular asset fluid or component needs to be replaced. In yet other examples, the event may be a change in asset productivity, and so, the predictive model may predict the productivity of an asset during a particular period of time in the future. In one other example, the event may be the occurrence of a “leading indicator” event, which may indicate an asset behavior that differs from expected asset behaviors, and so, the predictive model may predict the likelihood of one or more leading indicator events occurring in the future. Other examples of predictive models are also possible.
  • In any event, the analytics system 108 may define the aggregate, predictive model in a variety of manners. In general, this operation may involve utilizing one or more modeling techniques to generate a model that returns a probability between zero and one, such as a random forest technique, logistic regression technique, or other regression technique, among other modeling techniques. In a particular example implementation, the analytics system 108 may define the aggregate, predictive model in line with the below discussion referencing FIG. 7. The analytics system 108 may define the aggregate model in other manners as well.
  • At block 506, the analytics system 108 may be configured to define an aggregate workflow that corresponds to the defined model from block 504. In general, a workflow may take the form of an action that is carried out based on a particular output of a predictive model. In example implementations, a workflow may include one or more operations that an asset performs based on the output of the defined predictive model. Examples of operations that may be part of a workflow include an asset acquiring data according to a particular data-acquisition scheme, transmitting data to the analytics system 108 according to a particular data-transmission scheme, executing a local diagnostic tool, and/or modifying an operating condition of the asset, among other example workflow operations.
• A particular data-acquisition scheme may indicate how an asset acquires data. In particular, a data-acquisition scheme may indicate certain sensors and/or actuators from which the asset obtains data, such as a subset of sensors and/or actuators of the asset's plurality of sensors and actuators (e.g., sensors/actuators of interest). Further, a data-acquisition scheme may indicate an amount of data that the asset obtains from the sensors/actuators of interest and/or a sampling frequency at which the asset acquires such data. Data-acquisition schemes may include various other attributes as well. In a particular example implementation, a particular data-acquisition scheme may correspond to a predictive model for asset health and may be adjusted to acquire more data and/or particular data (e.g., from particular sensors) based on a decreasing asset health. Or a particular data-acquisition scheme may correspond to a leading-indicators predictive model and may be adjusted to modify data acquired by asset sensors and/or actuators based on an increased likelihood of an occurrence of a leading indicator event that may signal that a subsystem failure might occur.
  • A particular data-transmission scheme may indicate how an asset transmits data to the analytics system 108. Specifically, a data-transmission scheme may indicate a type of data (and may also indicate a format and/or structure of the data) that the asset should transmit, such as data from certain sensors or actuators, a number of data samples that the asset should transmit, a transmission frequency, and/or a priority-scheme for the data that the asset should include in its data transmission. In some cases, a particular data-acquisition scheme may include a data-transmission scheme or a data-acquisition scheme may be paired with a data-transmission scheme. In some example implementations, a particular data-transmission scheme may correspond to a predictive model for asset health and may be adjusted to transmit data less frequently based on an asset health that is above a threshold value. Other examples are also possible.
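• To make the scheme attributes above concrete, here is a hedged Python sketch that models acquisition and transmission schemes as configuration objects; all field names and the health-threshold rule are illustrative assumptions, not an API defined by the disclosure:

```python
# Hypothetical sketch of data-acquisition/-transmission schemes as
# configuration objects; field names are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class DataAcquisitionScheme:
    sensors_of_interest: List[str]   # which sensors/actuators to sample
    sampling_hz: float               # sampling frequency
    samples_per_batch: int           # amount of data to obtain

@dataclass
class DataTransmissionScheme:
    data_types: List[str]            # e.g., raw signals, fault codes
    samples_per_transmission: int
    transmit_interval_s: float       # transmission frequency
    priority: str = "normal"         # priority scheme for queued data

# A pairing that might accompany a health-metric model: as health drops
# below a threshold, acquire data faster and transmit more often.
def adjust_for_health(acq, tx, health_metric, threshold=0.75):
    if health_metric < threshold:
        acq.sampling_hz *= 2
        tx.transmit_interval_s /= 2
    return acq, tx
```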
  • As suggested above, a local diagnostic tool may be a set of procedures or the like that are stored locally at an asset. The local diagnostic tool may generally facilitate diagnosing a cause of a fault or failure at an asset. In some cases, when executed, a local diagnostic tool may pass test inputs into a subsystem of an asset or a portion thereof to obtain test results, which may facilitate diagnosing the cause of a fault or failure. These local diagnostic tools are typically dormant on an asset and will not be executed unless the asset receives particular diagnostic instructions. Other local diagnostic tools are also possible. In one example implementation, a particular local diagnostic tool may correspond to a predictive model for health of a subsystem of an asset and may be executed based on a subsystem health that is at or below a threshold value.
  • Lastly, a workflow may involve modifying an operating condition of an asset. For instance, one or more actuators of an asset may be controlled to facilitate modifying an operating condition of the asset. Various operating conditions may be modified, such as a speed, temperature, pressure, fluid level, current draw, and power distribution, among other examples. In a particular example implementation, an operating-condition modification workflow may correspond to a predictive model for predicting whether an asset will complete a task on time and may cause the asset to increase its speed of travel based on a predicted completion percentage that is below a threshold value.
• In any event, the aggregate workflow may be defined in a variety of manners. In one example, the aggregate workflow may be user-defined. Specifically, a user may operate a computing device that receives user inputs indicating selection of certain workflow operations, and the computing device may provide to the analytics system 108 data indicating such selections. Based on this data, the analytics system 108 may then define the aggregate workflow.
  • In another example, the aggregate workflow may be machine-defined. In particular, the analytics system 108 may perform various operations, such as simulations, to determine a workflow that may facilitate determining a cause of the probability output by the predictive model and/or preventing an occurrence of an event predicted by the model. Other examples of defining the aggregate workflow are also possible.
  • In defining the workflow corresponding to the predictive model, the analytics system 108 may define the triggers of the workflow. In example implementations, a workflow trigger may be a value of the probability output by the predictive model or a range of values output by the predictive model. In some cases, a workflow may have multiple triggers, each of which may cause a different operation or operations to occur.
• To illustrate, FIG. 6A is a conceptual illustration of an aggregate model-workflow pair 600. As shown, the aggregate model-workflow pair illustration 600 includes a column for model inputs 602, model calculations 604, model output ranges 606, and corresponding workflow operations 608. In this example, the predictive model has a single input, data from Sensor A, and has two calculations, Calculations I and II. The output of this predictive model affects the workflow operation that is performed. If the output probability is less than or equal to 80%, then workflow Operation 1 is performed. Otherwise, workflow Operation 2 is performed. Other example model-workflow pairs are possible and contemplated herein.
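• The structure of FIG. 6A might be sketched in Python as follows; the bodies of Calculations I and II are placeholders (the disclosure does not define them), and only the input-calculation-trigger-operation structure is the point:

```python
# Minimal sketch of the model-workflow pair in FIG. 6A. The calculation
# bodies are stand-ins; what matters is the structure: one input,
# chained calculations, a probability output, and trigger ranges.

def calculation_i(x):
    return x / 100.0                  # placeholder for Calculation I

def calculation_ii(x):
    return min(1.0, max(0.0, x))      # placeholder for Calculation II

def predictive_model(sensor_a_value):
    # Single input (Sensor A), two chained calculations, probability out.
    return calculation_ii(calculation_i(sensor_a_value))

def workflow(probability):
    # Triggers: output <= 0.80 runs Operation 1, otherwise Operation 2.
    if probability <= 0.80:
        return "Operation 1"
    return "Operation 2"

print(workflow(predictive_model(75)))   # Operation 1
print(workflow(predictive_model(95)))   # Operation 2
```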
  • 2. Individualized Models & Workflows
  • In another aspect, the analytics system 108 may be configured to define individualized predictive models and/or workflows for assets, which may involve utilizing the aggregate model-workflow pair as a baseline. The individualization may be based on certain characteristics of assets. In this way, the analytics system 108 may provide a given asset a more accurate and robust model-workflow pair compared to the aggregate model-workflow pair.
  • In particular, returning to FIG. 5, at block 508, the analytics system 108 may be configured to decide whether to individualize the aggregate model defined at block 504 for a given asset, such as the asset 102. The analytics system 108 may carry out this decision in a number of manners.
  • In some cases, the analytics system 108 may be configured to define individualized predictive models by default. In other cases, the analytics system 108 may be configured to decide whether to define an individualized predictive model based on certain characteristics of the asset 102. For example, in some cases, only assets of certain types or classes, or operated in certain environments, or that have certain health scores may receive an individualized predictive model. In yet other cases, a user may define whether an individualized model is defined for the asset 102. Other examples are also possible.
  • In any event, if the analytics system 108 decides to define an individualized predictive model for the asset 102, the analytics system 108 may do so at block 510. Otherwise, the analytics system 108 may proceed to block 512.
  • At block 510, the analytics system 108 may be configured to define an individualized predictive model in a number of manners. In example implementations, the analytics system 108 may define an individualized predictive model based at least in part on one or more characteristics of the asset 102.
  • Before defining the individualized predictive model for the asset 102, the analytics system 108 may have determined one or more asset characteristics of interest that form the basis of individualized models. In practice, different predictive models may have different corresponding characteristics of interest.
  • In general, the characteristics of interest may be characteristics that are related to the aggregate model-workflow pair. For instance, the characteristics of interest may be characteristics that the analytics system 108 has determined influence the accuracy of the aggregate model-workflow pair. Examples of such characteristics may include asset age, asset usage, asset capacity, asset load, asset health (perhaps indicated by an asset health metric, discussed below), asset class (e.g., brand and/or model), and environment in which an asset is operated, among other characteristics.
• The analytics system 108 may have determined the characteristics of interest in a number of manners. In one example, the analytics system 108 may have done so by performing one or more modeling simulations that facilitate identifying the characteristics of interest. In another example, the characteristics of interest may have been predefined and stored in the data storage of the analytics system 108. In yet another example, the characteristics of interest may have been defined by a user and provided to the analytics system 108 via the output system 110. Other examples are also possible.
• In any event, after determining the characteristics of interest, the analytics system 108 may determine characteristics of the asset 102 that correspond to the determined characteristics of interest. That is, the analytics system 108 may determine the type, value, existence or absence, etc. of the asset 102's characteristics that correspond to the characteristics of interest. The analytics system 108 may perform this operation in a number of manners.
• For example, the analytics system 108 may be configured to perform this operation based on data originating from the asset 102 and/or the data source 112. In particular, the analytics system 108 may utilize operating data for the asset 102 and/or external data from the data source 112 to determine one or more characteristics of the asset 102. Other examples are also possible.
  • Based on the determined one or more characteristics of the asset 102, the analytics system 108 may define an individualized, predictive model by modifying the aggregate model. The aggregate model may be modified in a number of manners. For example, the aggregate model may be modified by changing (e.g., adding, removing, re-ordering, etc.) one or more model inputs, changing one or more sensor and/or actuator measurement ranges that correspond to asset-operating limits (e.g., changing operating limits that correspond to “leading indicator” events), changing one or more model calculations, weighting (or changing a weight of) a variable or output of a calculation, utilizing a modeling technique that differs from that which was utilized to define the aggregate model, and/or utilizing a response variable that differs from that which was utilized to define the aggregate model, among other examples.
  • To illustrate, FIG. 6B is a conceptual illustration of an individualized model-workflow pair 610. Specifically, the individualized model-workflow pair illustration 610 is a modified version of the aggregate model-workflow pair from FIG. 6A. As shown, the individualized model-workflow pair illustration 610 includes a modified column for model inputs 612 and model calculations 614 and includes the original columns for model output ranges 606 and workflow operations 608 from FIG. 6A. In this example, the individualized model has two inputs, data from Sensor A and Actuator B, and has two calculations, Calculations II and III. The output ranges and corresponding workflow operations are the same as those of FIG. 6A. The analytics system 108 may have defined the individualized model in this way based on determining that the asset 102 is, for example, relatively old and has relatively poor health, among other reasons.
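• A hedged sketch of this kind of individualization, mirroring the FIG. 6B outcome, might look like the following; the age and health rules are invented for illustration only:

```python
# Hypothetical individualization in the spirit of FIG. 6B: start from
# an aggregate model description and modify inputs and calculations
# based on determined asset characteristics.

aggregate_model = {
    "inputs": ["Sensor A"],
    "calculations": ["Calculation I", "Calculation II"],
}

def individualize_model(model, characteristics):
    model = {k: list(v) for k, v in model.items()}   # copy the baseline
    if characteristics.get("age_years", 0) > 10:
        # Older assets: add an actuator input.
        model["inputs"].append("Actuator B")
    if characteristics.get("health", 1.0) < 0.5:
        # Poor health: replace Calculation I with Calculation III.
        model["calculations"] = ["Calculation II", "Calculation III"]
    return model

asset_102 = {"age_years": 14, "health": 0.4}
print(individualize_model(aggregate_model, asset_102))
# {'inputs': ['Sensor A', 'Actuator B'],
#  'calculations': ['Calculation II', 'Calculation III']}
```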
  • In practice, individualizing the aggregate model may depend on the one or more characteristics of the given asset. In particular, certain characteristics may affect the modification of the aggregate model differently than other characteristics. Further, the type, value, existence, or the like of a characteristic may affect the modification as well. For example, the asset age may affect a first part of the aggregate model, while an asset class may affect a second, different part of the aggregate model. And an asset age within a first range of ages may affect the first part of the aggregate model in a first manner, while an asset age within a second range of ages, different from the first range, may affect the first part of the aggregate model in a second, different manner. Other examples are also possible.
  • In some implementations, individualizing the aggregate model may depend on considerations in addition to or alternatively to asset characteristics. For instance, the aggregate model may be individualized based on sensor and/or actuator readings of an asset when the asset is known to be in a relatively good operating state (e.g., as defined by a mechanic or the like). More particularly, in an example of a leading-indicator predictive model, the analytics system 108 may be configured to receive an indication that the asset is in a good operating state (e.g., from a computing device operated by a mechanic) along with operating data from the asset. Based at least on the operating data, the analytics system 108 may then individualize the leading-indicator predictive model for the asset by modifying respective operating limits corresponding to “leading indicator” events. Other examples are also possible.
  • Returning to FIG. 5, at block 512, the analytics system 108 may also be configured to decide whether to individualize a workflow for the asset 102. The analytics system 108 may carry out this decision in a number of manners. In some implementations, the analytics system 108 may perform this operation in line with block 508. In other implementations, the analytics system 108 may decide whether to define an individualized workflow based on the individualized predictive model. In yet another implementation, the analytics system 108 may decide to define an individualized workflow if an individualized predictive model was defined. Other examples are also possible.
  • In any event, if the analytics system 108 decides to define an individualized workflow for the asset 102, the analytics system 108 may do so at block 514. Otherwise, the analytics system 108 may end the definition phase.
  • At block 514, the analytics system 108 may be configured to define an individualized workflow in a number of manners. In example implementations, the analytics system 108 may define an individualized workflow based at least in part on one or more characteristics of the asset 102.
  • Before defining the individualized workflow for the asset 102, similar to defining the individualized predictive model, the analytics system 108 may have determined one or more asset characteristics of interest that form the basis of an individualized workflow, which may have been determined in line with the discussion of block 510. In general, these characteristics of interest may be characteristics that affect the efficacy of the aggregate workflow. Such characteristics may include any of the example characteristics discussed above. Other characteristics are possible as well.
  • Similar again to block 510, the analytics system 108 may determine characteristics of the asset 102 that correspond to the determined characteristics of interest for an individualized workflow. In example implementations, the analytics system 108 may determine characteristics of the asset 102 in a manner similar to the characteristic determination discussed with reference to block 510 and in fact, may utilize some or all of that determination.
  • In any event, based on the determined one or more characteristics of the asset 102, the analytics system 108 may individualize a workflow for the asset 102 by modifying the aggregate workflow. The aggregate workflow may be modified in a number of manners. For example, the aggregate workflow may be modified by changing (e.g., adding, removing, re-ordering, replacing, etc.) one or more workflow operations (e.g., changing from a first data-acquisition scheme to a second scheme or changing from a particular data-acquisition scheme to a particular local diagnostic tool) and/or changing (e.g., increasing, decreasing, adding to, removing from, etc.) the corresponding model output value or range of values that triggers particular workflow operations, among other examples. In practice, modification to the aggregate workflow may depend on the one or more characteristics of the asset 102 in a manner similar to the modification to the aggregate model.
• To illustrate, FIG. 6C is a conceptual illustration of an individualized model-workflow pair 620. Specifically, the individualized model-workflow pair illustration 620 is a modified version of the aggregate model-workflow pair from FIG. 6A. As shown, the individualized model-workflow pair illustration 620 includes the original columns for model inputs 602, model calculations 604, and model output ranges 606 from FIG. 6A, but includes a modified column for workflow operations 628. In this example, the individualized model-workflow pair is similar to the aggregate model-workflow pair from FIG. 6A, except that when the output of the model is greater than 80%, workflow Operation 3 is triggered instead of Operation 2. The analytics system 108 may have defined this individualized workflow based on determining that the asset 102, for example, operates in an environment that historically increases the occurrence of asset failures, among other reasons.
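• An analogous hypothetical sketch for FIG. 6C swaps the operation bound to the greater-than-80% trigger range; the `harsh_environment` characteristic is an invented stand-in for the environment-based reasoning described above:

```python
# Hypothetical workflow individualization: keep the aggregate trigger
# ranges but rebind the operation for the > 80% range.

aggregate_workflow = {
    (0.00, 0.80): "Operation 1",
    (0.80, 1.01): "Operation 2",
}

def individualize_workflow(workflow, characteristics):
    workflow = dict(workflow)
    if characteristics.get("harsh_environment"):
        workflow[(0.80, 1.01)] = "Operation 3"   # swap in Operation 3
    return workflow

print(individualize_workflow(aggregate_workflow, {"harsh_environment": True}))
```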
  • After defining the individualized workflow, the analytics system 108 may end the definition phase. At that point, the analytics system 108 may then have an individualized model-workflow pair for the asset 102.
  • In some example implementations, the analytics system 108 may be configured to define an individualized predictive model and/or corresponding workflow for a given asset without first defining an aggregate predictive model and/or corresponding workflow. Other examples are also possible.
  • 3. Health-Score Models & Workflows
  • In a particular implementation, as mentioned above, the analytics system 108 may be configured to define predictive models and corresponding workflows associated with the health of assets. In example implementations, one or more predictive models for monitoring the health of an asset may be utilized to output a health metric (e.g., “health score”) for an asset, which is a single, aggregated metric that indicates whether a failure will occur at a given asset within a given timeframe into the future (e.g., the next two weeks). In particular, a health metric may indicate a likelihood that no failures from a group of failures will occur at an asset within a given timeframe into the future, or a health metric may indicate a likelihood that at least one failure from a group of failures will occur at an asset within a given timeframe into the future.
  • In practice, the predictive models utilized to output a health metric and the corresponding workflows may be defined as aggregate or individualized models and/or workflows, in line with the above discussion.
  • Moreover, depending on the desired granularity of the health metric, the analytics system 108 may be configured to define different predictive models that output different levels of health metrics and to define different corresponding workflows. For example, the analytics system 108 may define a predictive model that outputs a health metric for the asset as a whole (i.e., an asset-level health metric). As another example, the analytics system 108 may define a respective predictive model that outputs a respective health metric for one or more subsystems of the asset (i.e., subsystem-level health metrics). In some cases, the outputs of each subsystem-level predictive model may be combined to generate an asset-level health metric. Other examples are also possible.
  • In general, defining a predictive model that outputs a health metric may be performed in a variety of manners. FIG. 7 is a flow diagram 700 depicting one possible example of a modeling phase that may be used for defining a model that outputs a health metric. For purposes of illustration, the example modeling phase is described as being carried out by the analytics system 108, but this modeling phase may be carried out by other systems as well. One of ordinary skill in the art will appreciate that the flow diagram 700 is provided for sake of clarity and explanation and that numerous other combinations of operations may be utilized to determine a health metric.
  • As shown in FIG. 7, at block 702, the analytics system 108 may begin by defining a set of the one or more failures that form the basis for the health metric (i.e., the failures of interest). In practice, the one or more failures may be those failures that could render an asset (or a subsystem thereof) inoperable if they were to occur. Based on the defined set of failures, the analytics system 108 may take steps to define a model for predicting a likelihood of any of the failures occurring within a given timeframe in the future (e.g., the next two weeks).
  • In particular, at block 704, the analytics system 108 may analyze historical operating data for a group of one or more assets to identify past occurrences of a given failure from the set of failures. At block 706, the analytics system 108 may identify a respective set of operating data that is associated with each identified past occurrence of the given failure (e.g., sensor and/or actuator data from a given timeframe prior to the occurrence of the given failure). At block 708, the analytics system 108 may analyze the identified sets of operating data associated with past occurrences of the given failure to define a relationship (e.g., a failure model) between (1) the values for a given set of operating metrics and (2) the likelihood of the given failure occurring within a given timeframe in the future (e.g., the next two weeks). Lastly, at block 710, the defined relationship for each failure in the defined set (e.g., the individual failure models) may then be combined into a model for predicting the overall likelihood of a failure occurring.
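• One hedged way to sketch the block-by-block control flow of FIG. 7 in Python is shown below; the helper bodies are trivial placeholders (a real implementation would train a model on the identified operating data), and all function names are ours, not the patent's:

```python
# Hedged skeleton of the FIG. 7 modeling phase. "history" is a list of
# per-time records; only the control flow mirrors blocks 702-710.

def find_past_occurrences(failure, history):                    # block 704
    return [rec["time"] for rec in history
            if failure in rec.get("fault_codes", [])]

def operating_data_window(time_of_failure, history, delta_t=14):  # block 706
    return [rec for rec in history
            if time_of_failure - delta_t <= rec["time"] <= time_of_failure]

def fit_failure_model(windows):                                  # block 708
    # Placeholder: a real implementation would train on the windows.
    p = min(1.0, 0.1 * len(windows))
    return lambda operating_data: p

def combine_failure_models(models):                              # block 710
    # Simplest combination: report the largest individual probability.
    return lambda operating_data: max(m(operating_data) for m in models)

def define_health_metric_model(failures_of_interest, history):   # block 702
    models = []
    for failure in failures_of_interest:
        occurrences = find_past_occurrences(failure, history)
        windows = [operating_data_window(t, history) for t in occurrences]
        models.append(fit_failure_model(windows))
    return combine_failure_models(models)
```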
  • As the analytics system 108 continues to receive updated operating data for the group of one or more assets, the analytics system 108 may also continue to refine the predictive model for the defined set of one or more failures by repeating steps 704-710 on the updated operating data.
  • The functions of the example modeling phase illustrated in FIG. 7 will now be described in further detail. Starting with block 702, as noted above, the analytics system 108 may begin by defining a set of the one or more failures that form the basis for the health metric. The analytics system 108 may perform this function in various manners.
  • In one example, the set of the one or more failures may be based on one or more user inputs. Specifically, the analytics system 108 may receive from a computing system operated by a user, such as the output system 110, input data indicating a user selection of the one or more failures. As such, the set of one or more failures may be user-defined.
  • In other examples, the set of the one or more failures may be based on a determination made by the analytics system 108 (e.g., machine-defined). In particular, the analytics system 108 may be configured to define the set of one or more failures, which may occur in a number of manners.
  • For instance, the analytics system 108 may be configured to define the set of failures based on one or more characteristics of the asset 102. That is, certain failures may correspond to certain characteristics, such as asset type, class, etc., of an asset. For example, each type and/or class of asset may have respective failures of interest.
  • In another instance, the analytics system 108 may be configured to define the set of failures based on historical data stored in the databases of the analytics system 108 and/or external data provided by the data source 112. For example, the analytics system 108 may utilize such data to determine which failures result in the longest repair-time and/or which failures are historically followed by additional failures, among other examples.
  • In yet other examples, the set of one or more failures may be defined based on a combination of user inputs and determinations made by the analytics system 108. Other examples are also possible.
• At block 704, for each of the failures from the set of failures, the analytics system 108 may analyze historical operating data for a group of one or more assets (e.g., abnormal-behavior data) to identify past occurrences of a given failure. The group of the one or more assets may include a single asset, such as asset 102, or multiple assets of a same or similar type, such as a fleet of assets that includes the assets 102 and 104. The analytics system 108 may analyze a particular amount of historical operating data, such as a certain amount of time's worth of data (e.g., a month's worth) or a certain number of data-points (e.g., the most recent thousand data-points), among other examples.
  • In practice, identifying past occurrences of the given failure may involve the analytics system 108 identifying the type of operating data, such as abnormal-condition data, that indicates the given failure. In general, a given failure may be associated with one or multiple abnormal-condition indicators, such as fault codes. That is, when the given failure occurs, one or multiple abnormal-condition indicators may be triggered. As such, abnormal-condition indicators may be reflective of an underlying symptom of a given failure.
• After identifying the type of operating data that indicates the given failure, the analytics system 108 may identify the past occurrences of the given failure in a number of manners. For instance, the analytics system 108 may locate, from historical operating data stored in the databases of the analytics system 108, abnormal-condition data corresponding to the abnormal-condition indicators associated with the given failure. Each instance of located abnormal-condition data would indicate an occurrence of the given failure. Based on this located abnormal-condition data, the analytics system 108 may identify a time at which a past failure occurred.
  • At block 706, the analytics system 108 may identify a respective set of operating data that is associated with each identified past occurrence of the given failure. In particular, the analytics system 108 may identify a set of sensor and/or actuator data from a certain timeframe around the time of the given occurrence of the given failure. For example, the set of data may be from a particular timeframe (e.g., two weeks) before, after, or around the given occurrence of the failure. In other cases, the set of data may be identified from a certain number of data-points before, after, or around the given occurrence of the failure.
  • In example implementations, the set of operating data may include sensor and/or actuator data from some or all of the sensors and actuators of the asset 102. For example, the set of operating data may include data from sensors and/or actuators associated with an abnormal-condition indicator corresponding to the given failure.
• To illustrate, FIG. 8 depicts a conceptual illustration of historical operating data that the analytics system 108 may analyze to facilitate defining a model. Plot 800 may correspond to a segment of historical data that originated from some (e.g., Sensor A and Actuator B) or all of the sensors and actuators of the asset 102. As shown, the plot 800 includes time on the x-axis 802, measurement values on the y-axis 804, and sensor data 806 corresponding to Sensor A and actuator data 808 corresponding to Actuator B, each of which includes various data-points representing measurements at particular points in time, Ti. Moreover, the plot 800 includes an indication of an occurrence of a failure 810 that occurred at a past time, Tf (e.g., “time of failure”), and an indication of an amount of time 812 before the occurrence of the failure, ΔT, from which sets of operating data are identified. As such, the interval from Tf−ΔT to Tf defines a timeframe 814 of data-points of interest.
  • Returning to FIG. 7, after the analytics system 108 identifies the set of operating data for the given occurrence of the given failure (e.g., the occurrence at Tf), the analytics system 108 may determine whether there are any remaining occurrences for which a set of operating data should be identified. In the event that there is a remaining occurrence, block 706 would be repeated for each remaining occurrence.
  • Thereafter, at block 708, the analytics system 108 may analyze the identified sets of operating data associated with the past occurrences of the given failure to define a relationship (e.g., a failure model) between (1) a given set of operating metrics (e.g., a given set of sensor and/or actuator measurements) and (2) the likelihood of the given failure occurring within a given timeframe in the future (e.g., the next two weeks). That is, a given failure model may take as inputs sensor and/or actuator measurements from one or more sensors and/or actuators and output a probability that the given failure will occur within the given timeframe in the future.
  • In general, a failure model may define a relationship between operating conditions of the asset 102 and the likelihood of a failure occurring. In some implementations, in addition to raw data signals from sensors and/or actuators of the asset 102, a failure model may receive a number of other data inputs, also known as features, which are derived from the sensor and/or actuator signals. Such features may include an average or range of values that were historically measured when a failure occurred, an average or range of value gradients (e.g., a rate of change in measurements) that were historically measured prior to an occurrence of a failure, a duration of time between failures (e.g., an amount of time or number of data-points between a first occurrence of a failure and a second occurrence of a failure), and/or one or more failure patterns indicating sensor and/or actuator measurement trends around the occurrence of a failure. One of ordinary skill in the art will appreciate that these are but a few example features that can be derived from sensor and/or actuator signals and that numerous other features are possible.
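• For illustration, a few such derived features might be computed from a window of raw measurements as in the following sketch (the feature names and sample window are assumptions):

```python
# Hedged sketch of derived features of the kind listed above, computed
# from a window of raw sensor measurements with numpy.
import numpy as np

def derive_features(window):
    window = np.asarray(window, dtype=float)
    gradients = np.diff(window)            # rate of change between samples
    return {
        "mean": window.mean(),             # average value over the window
        "range": window.max() - window.min(),
        "mean_gradient": gradients.mean(), # average rate of change
        "max_gradient": np.abs(gradients).max(),
    }

print(derive_features([70, 71, 73, 77, 83, 90, 99]))
```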
  • In practice, a failure model may be defined in a number of manners. In example implementations, the analytics system 108 may define a failure model by utilizing one or more modeling techniques that return a probability between zero and one, which may take the form of any modeling techniques described above.
  • In a particular example, defining a failure model may involve the analytics system 108 generating a response variable based on the historical operating data identified at block 706. Specifically, the analytics system 108 may determine an associated response variable for each set of sensor and/or actuator measurements received at a particular point in time. As such, the response variable may take the form of a data set associated with the failure model.
• The response variable may indicate whether the given set of measurements is within any of the timeframes determined at block 706. That is, a response variable may reflect whether a given set of data is from a time of interest about the occurrence of a failure. The response variable may be a binary-valued response variable such that, if the given set of measurements is within any of the determined timeframes, the associated response variable is assigned a value of one, and otherwise, the associated response variable is assigned a value of zero.
• Returning to FIG. 8, a conceptual illustration of a response variable vector, Yres, is shown on the plot 800. As shown, response variables associated with sets of measurements that are within the timeframe 814 have a value of one (e.g., Yres at times Ti+3 through Ti+8), while response variables associated with sets of measurements outside the timeframe 814 have a value of zero (e.g., Yres at times Ti through Ti+2 and Ti+9 through Ti+10). Other response variables are also possible.
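• A minimal sketch of generating such a binary response variable, reproducing the FIG. 8 pattern, might look like the following; the helper name and sample times are hypothetical:

```python
# Hedged sketch of the binary response variable described above: a
# measurement time gets a 1 when it falls inside any window of width
# delta_t ending at a failure time Tf, and a 0 otherwise.

def response_variable(measurement_times, failure_times, delta_t):
    y = []
    for t in measurement_times:
        in_window = any(tf - delta_t <= t <= tf for tf in failure_times)
        y.append(1 if in_window else 0)
    return y

times = list(range(11))                 # Ti ... Ti+10
print(response_variable(times, failure_times=[8], delta_t=5))
# [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0]  -- matches the FIG. 8 pattern
```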
  • Continuing in the particular example of defining a failure model based on a response variable, the analytics system 108 may train the failure model with the historical operating data identified at block 706 and the generated response variable. Based on this training process, the analytics system 108 may then define the failure model that receives as inputs various sensor and/or actuator data and outputs a probability between zero and one that a failure will occur within a period of time equivalent to the timeframe used to generate the response variable.
  • In some cases, training with the historical operating data identified at block 706 and the generated response variable may result in variable importance statistics for each sensor and/or actuator. A given variable importance statistic may indicate the sensor's or actuator's relative effect on the probability that a given failure will occur within the period of time into the future.
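• As a hedged illustration of this training step, the following uses scikit-learn's random forest (one of the modeling techniques named above) on synthetic data; the sensor names, data, and response variable are fabricated solely for the example:

```python
# Minimal training sketch: X rows are sets of sensor/actuator
# measurements; y is the binary response variable described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # columns: Sensor A, Actuator B, Sensor C
y = (X[:, 0] + 0.5 * X[:, 2] > 1).astype(int)   # synthetic response variable

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Probability that a failure occurs within the timeframe, for new data:
p_failure = model.predict_proba([[0.9, -0.2, 1.4]])[0, 1]

# Variable importance statistics, one per sensor/actuator input:
for name, imp in zip(["Sensor A", "Actuator B", "Sensor C"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```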
• Additionally or alternatively, the analytics system 108 may be configured to define a failure model based on one or more survival analysis techniques, such as a Cox proportional hazard technique. The analytics system 108 may utilize a survival analysis technique in a manner similar in some respects to the above-discussed modeling technique, but the analytics system 108 may determine a survival time-response variable that indicates an amount of time from the last failure to a next expected event. A next expected event may be either reception of sensor and/or actuator measurements or an occurrence of a failure, whichever occurs first. This response variable may include a pair of values that are associated with each of the particular points in time at which measurements are received. The response variable may then be utilized to determine a probability that a failure will occur within the given timeframe in the future.
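• A sketch of this survival-analysis alternative, assuming the third-party lifelines library's Cox proportional hazards fitter, might look like the following; the column names and toy data are assumptions, with each duration/event pair corresponding to the pair of values described above:

```python
# Hedged sketch using lifelines' Cox proportional hazards fitter.
# "duration" is the time since the last failure to the next event;
# "event" flags whether that event was a failure (1) or merely another
# set of received measurements (0).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "duration":   [5.0, 12.0, 3.5, 9.0, 7.5, 11.0],
    "event":      [1,   0,    1,   0,   1,   0],
    "sensor_a":   [98,  71,  103,  88,  75,  70],
    "actuator_b": [0.8, 0.3, 0.9,  0.6, 0.2, 0.4],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # hazard ratios per sensor/actuator covariate
```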
• In some example implementations, the failure model may be defined based in part on external data, such as weather data and “hotbox” data, among other data. For instance, based on such data, the failure model may increase or decrease an output failure probability.
  • In practice, external data may be observed at points in time that do not coincide with times at which asset sensors and/or actuators obtain measurements. For example, the times at which “hotbox” data is collected (e.g., times at which a locomotive passes along a section of railroad track that is outfitted with hot box sensors) may be in disagreement with sensor and/or actuator measurement times. In such cases, the analytics system 108 may be configured to perform one or more operations to determine external data observations that would have been observed at times that correspond to the sensor measurement times.
  • Specifically, the analytics system 108 may utilize the times of the external data observations and times of the measurements to interpolate the external data observations to produce external data values for times corresponding to the measurement times. Interpolation of the external data may allow external data observations or features derived therefrom to be included as inputs into the failure model. In practice, various techniques may be used to interpolate the external data with the sensor and/or actuator data, such as nearest-neighbor interpolation, linear interpolation, polynomial interpolation, and spline interpolation, among other examples.
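• For instance, linear interpolation of sparse external observations onto sensor measurement times might be sketched with numpy as follows (the “hotbox” values here are invented):

```python
# Hedged sketch: linearly interpolating sparse external ("hotbox")
# observations onto the asset's sensor measurement times so both can
# feed the failure model. numpy.interp performs 1-D linear
# interpolation; the other techniques named above would swap in here.
import numpy as np

sensor_times = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
hotbox_times = np.array([0.5, 3.2, 5.8])   # misaligned observation times
hotbox_temps = np.array([41.0, 47.5, 52.0])

aligned = np.interp(sensor_times, hotbox_times, hotbox_temps)
print(aligned)   # one interpolated hotbox value per sensor measurement time
```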
  • Returning to FIG. 7, after the analytics system 108 determines a failure model for a given failure from the set of failures defined at block 702, the analytics system 108 may determine whether there are any remaining failures for which a failure model should be determined. In the event that there remains a failure for which a failure model should be determined, the analytics system 108 may repeat the loop of blocks 704-708. In some implementations, the analytics system 108 may determine a single failure model that encompasses all of the failures defined at block 702. In other implementations, the analytics system 108 may determine a failure model for each subsystem of the asset 102, which may then be utilized to determine an asset-level failure model. Other examples are also possible.
  • Lastly, at block 710, the defined relationship for each failure in the defined set (e.g., the individual failure models) may then be combined into the model (e.g., the health-metric model) for predicting the overall likelihood of a failure occurring within the given timeframe in the future (e.g., the next two weeks). That is, the model receives as inputs sensor and/or actuator measurements from one or more sensors and/or actuators and outputs a single probability that at least one failure from the set of failures will occur within the given timeframe in the future.
  • The analytics system 108 may define the health-metric model in a number of manners, which may depend on the desired granularity of the health metric. That is, in instances where there are multiple failure models, the outcomes of the failure models may be utilized in a number of manners to obtain the output of the health-metric model. For example, the analytics system 108 may determine a maximum, median, or average from the multiple failure models and utilize that determined value as the output of the health-metric model.
  • In other examples, determining the health-metric model may involve the analytics system 108 attributing a weight to individual probabilities output by the individual failure models. For instance, each failure from the set of failures may be considered equally undesirable, and so each probability may likewise be weighted the same in determining the health-metric model. In other instances, some failures may be considered more undesirable than others (e.g., more catastrophic or require longer repair time, etc.), and so those corresponding probabilities may be weighted more than others.
  • In yet other examples, determining the health-metric model may involve the analytics system 108 utilizing one or more modeling techniques, such as a regression technique. An aggregate response variable may take the form of the logical disjunction (logical OR) of the response variables (e.g., Yres in FIG. 8) from each of the individual failure models. For example, aggregate response variables associated with any set of measurements that occur within any timeframe determined at block 706 (e.g., the timeframe 814 of FIG. 8) may have a value of one, while aggregate response variables associated with sets of measurements that occur outside any of the timeframes may have a value of zero. Other manners of defining the health-metric model are also possible.
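• The aggregation options described above might be sketched as follows; these three functions (maximum, weighted average, and an at-least-one combination that assumes independent failure events) are illustrative choices, not the patent's prescribed method:

```python
# Hedged sketches of aggregating individual failure-model outputs into
# a single health-metric model output.
def aggregate_max(probs):
    return max(probs)                    # most pessimistic failure wins

def aggregate_weighted(probs, weights):
    # More undesirable failures get larger weights.
    s = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / s

def aggregate_any(probs):
    # Probability at least one failure occurs, treating the individual
    # failure events as independent (a simplifying assumption).
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

probs = [0.10, 0.25, 0.05]               # outputs of three failure models
print(aggregate_max(probs),
      aggregate_weighted(probs, [1, 3, 1]),
      aggregate_any(probs))
```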
  • In some implementations, block 710 may be unnecessary. For example, as discussed above, the analytics system 108 may determine a single failure model, in which case the health-metric model may be the single failure model.
  • In practice, the analytics system 108 may be configured to update the individual failure models and/or the overall health-metric model. The analytics system 108 may update a model daily, weekly, monthly, etc. and may do so based on a new portion of historical operating data from the asset 102 or from other assets (e.g., from other assets in the same fleet as the asset 102). Other examples are also possible.
  • C. Deploying Models & Workflows
  • After the analytics system 108 defines a model-workflow pair, the analytics system 108 may deploy the defined model-workflow pair to one or more assets. Specifically, the analytics system 108 may transmit the defined predictive model and/or corresponding workflow to at least one asset, such as the asset 102. The analytics system 108 may transmit model-workflow pairs periodically or based on triggering events, such as any modifications or updates to a given model-workflow pair.
  • In some cases, the analytics system 108 may transmit only one of an individualized model or an individualized workflow. For example, in scenarios where the analytics system 108 defined only an individualized model or workflow, the analytics system 108 may transmit an aggregate version of the workflow or model along with the individualized model or workflow, or the analytics system 108 may not need to transmit an aggregate version if the asset 102 already has the aggregate version stored in data storage. In sum, the analytics system 108 may transmit (1) an individualized model and/or individualized workflow, (2) an individualized model and the aggregate workflow, (3) the aggregate model and an individualized workflow, or (4) the aggregate model and the aggregate workflow.
• In practice, the analytics system 108 may have carried out some or all of the operations of blocks 502-514 of FIG. 5 for multiple assets to define model-workflow pairs for each asset. For example, the analytics system 108 may have additionally defined a model-workflow pair for the asset 104. The analytics system 108 may be configured to transmit respective model-workflow pairs to the assets 102 and 104 simultaneously or sequentially.
  • D. Local Execution by Asset
  • A given asset, such as the asset 102, may be configured to receive a model-workflow pair or a portion thereof and operate in accordance with the received model-workflow pair. That is, the asset 102 may store in data storage the model-workflow pair and input into the predictive model data obtained by sensors and/or actuators of the asset 102 and at times, execute the corresponding workflow based on the output of the predictive model.
  • In practice, various components of the asset 102 may execute the predictive model and/or corresponding workflow. For example, as discussed above, each asset may include a local analytics device configured to store and run model-workflow pairs provided by the analytics system 108. When the local analytics device receives particular sensor and/or actuator data, it may input the received data into the predictive model and depending on the output of the model, may execute one or more operations of the corresponding workflow.
• In another example, a central processing unit of the asset 102 that is separate from the local analytics device may execute the predictive model and/or corresponding workflow. In yet other examples, the local analytics device and the central processing unit of the asset 102 may collaboratively execute the model-workflow pair. For instance, the local analytics device may execute the predictive model and the central processing unit may execute the workflow, or vice versa.
• In general, an asset executing a predictive model and, based on the resulting output, executing operations of the corresponding workflow may facilitate determining a cause or causes of the event likelihood output by the model and/or may facilitate preventing a particular event from occurring in the future. In executing a workflow, an asset may locally determine and take actions to help prevent an event from occurring, which may be beneficial in situations when reliance on the analytics system 108 to make such determinations and provide recommended actions is not efficient or feasible (e.g., when there is network latency, when the network connection is poor, when the asset moves out of coverage of the communication network 106, etc.).
  • In practice, an asset may execute a predictive model in a variety of manners, which may be dependent on the particular predictive model. FIG. 9 is a flow diagram 900 depicting one possible example of a local-execution phase that may be used for locally executing a predictive model. The example local-execution phase will be discussed in the context of a health-metric model that outputs a health metric of an asset, but it should be understood that a same or similar local-execution phase may be utilized for other types of predictive models. Moreover, for purposes of illustration, the example local-execution phase is described as being carried out by a local analytics device of the asset 102, but this phase may be carried out by other devices and/or systems as well. One of ordinary skill in the art will appreciate that the flow diagram 900 is provided for sake of clarity and explanation and that numerous other combinations of operations and functions may be utilized to locally execute a predictive model.
  • As shown in FIG. 9, at block 902, the local analytics device may receive data that reflects the current operating conditions of the asset 102. At block 904, the local analytics device may identify, from the received data, the set of operating data that is to be input into the model provided by the analytics system 108. At block 906, the local analytics device may then input the identified set of operating data into the model and run the model to obtain a health metric for the asset 102.
  • As the local analytics device continues to receive updated operating data for the asset 102, the local analytics device may also continue to update the health metric for the asset 102 by repeating the operations of blocks 902-906 based on the updated operating data. In some cases, the operations of blocks 902-906 may be repeated each time the local analytics device receives new data from sensors and/or actuators of the asset 102 or periodically (e.g., hourly, daily, weekly, monthly, etc.). In this way, local analytics devices may be configured to dynamically update health metrics, perhaps in real-time, as assets are used in operation.
  • The functions of the example local-execution phase illustrated in FIG. 9 will now be described in further detail. At block 902, the local analytics device may receive data that reflects the current operating conditions of the asset 102. Such data may include sensor data from one or more of the sensors of the asset 102, actuator data from one or more actuators of the asset 102, and/or it may include abnormal-condition data, among other types of data.
  • At block 904, the local analytics device may identify, from the received data, the set of operating data that is to be input into the health-metric model provided by the analytics system 108. This operation may be performed in a number of manners.
• In one example, the local analytics device may identify the set of operating data inputs (e.g., data from particular sensors and/or actuators of interest) for the model based on a characteristic of the asset 102, such as asset type or asset class, for which the health metric is being determined. In some cases, the identified set of operating data inputs may be sensor data from some or all of the sensors of the asset 102 and/or actuator data from some or all of the actuators of the asset 102.
  • In another example, the local analytics device may identify the set of operating data inputs based on the predictive model provided by the analytics system 108. That is, the analytics system 108 may provide some indication to the asset 102 (e.g., either in the predictive model or in a separate data transmission) of the particular inputs for the model. Other examples of identifying the set of operating data inputs are also possible.
  • At block 906, the local analytics device may then run the health-metric model. Specifically, the local analytics device may input the identified set of operating data into the model, which in turn determines and outputs an overall likelihood of at least one failure occurring within the given timeframe in the future (e.g., the next two weeks).
  • In some implementations, this operation may involve the local analytics device inputting particular operating data (e.g., sensor and/or actuator data) into one or more individual failure models of the health-metric model, which each may output an individual probability. The local analytics device may then use these individual probabilities, perhaps weighting some more than others in accordance with the health-metric model, to determine the overall likelihood of a failure occurring within the given timeframe in the future.
• After determining the overall likelihood of a failure occurring, the local analytics device may convert the probability of a failure occurring into the health metric, which may take the form of a single, aggregated parameter that reflects the likelihood that no failures will occur at the asset 102 within the given timeframe in the future (e.g., two weeks). In example implementations, converting the failure probability into the health metric may involve the local analytics device determining the complement of the failure probability. Specifically, the overall failure probability may take the form of a value ranging from zero to one; the health metric may then be determined by subtracting that value from one. Other examples of converting the failure probability into the health metric are also possible.
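• A hedged end-to-end sketch of this local-execution path (blocks 902-906 plus the complement conversion) follows; the stub model and field names are hypothetical:

```python
# Hedged sketch of the local-execution loop ending in the complement
# conversion described above. The model is a stub standing in for the
# health-metric model deployed by the analytics system.

def health_metric_model(operating_data):
    # Stub: pretend the deployed model maps inputs to a failure probability.
    return min(1.0, 0.002 * operating_data["engine_temp"])

def run_local_execution(operating_data, inputs_of_interest):
    # Block 904: pick out the model's inputs from the received data.
    model_inputs = {k: operating_data[k] for k in inputs_of_interest}
    # Block 906: run the model to get the overall failure probability.
    p_failure = health_metric_model(model_inputs)
    # Convert to the health metric: likelihood that NO failure occurs.
    return 1.0 - p_failure

data = {"engine_temp": 210.0, "fuel_level": 0.6}    # block 902: received data
print(run_local_execution(data, ["engine_temp"]))    # 0.58
```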
  • After an asset locally executes a predictive model, the asset may then execute a corresponding workflow based on the resulting output of the executed predictive model. As mentioned above, workflows may take various forms and so, workflows may be executed in a variety of manners.
  • For example, the asset 102 may internally execute one or more operations that modify some behavior of the asset 102, such as modifying a data-acquisition and/or -transmission scheme, executing a local diagnostic tool, modifying an operating condition of the asset 102 (e.g., modifying a velocity, acceleration, fan speed, propeller angle, air intake, etc. via one or more actuators of the asset 102), or outputting an indication, perhaps of a relatively low health metric or of recommended preventative actions that should be executed in relation to the asset 102, at a user interface of the asset 102 or to an external computing system.
  • In another example, the asset 102 may transmit to a system on the communication network 106, such as the output system 110, an instruction to cause the system to carry out an operation, such as generating a work-order or ordering a particular part for a repair of the asset 102. Other examples of the asset 102 locally executing a workflow are also possible.
  • E. Model/Workflow Modification Phase
  • In another aspect, the analytics system 108 may carry out a modification phase during which the analytics system 108 modifies a deployed model and/or workflow based on new asset data. This phase may be performed for both aggregate and individualized models and workflows.
  • In particular, as a given asset (e.g., the asset 102) operates in accordance with a model-workflow pair, the asset 102 may provide operating data to the analytics system 108 and/or the data source 112 may provide to the analytics system 108 external data related to the asset 102. Based at least on this data, the analytics system 108 may modify the model and/or workflow for the asset 102 and/or the model and/or workflow for other assets, such as the asset 104. In modifying models and/or workflows for other assets, the analytics system 108 may share information learned from the behavior of the asset 102.
  • In practice, the analytics system 108 may make modifications in a number of manners. FIG. 10 is a flow diagram 1000 depicting one possible example of a modification phase that may be used for modifying model-workflow pairs. For purposes of illustration, the example modification phase is described as being carried out by the analytics system 108, but this modification phase may be carried out by other systems as well. One of ordinary skill in the art will appreciate that the flow diagram 1000 is provided for the sake of clarity and explanation and that numerous other combinations of operations may be utilized to modify model-workflow pairs.
  • As shown in FIG. 10, at block 1002, the analytics system 108 may receive data from which the analytics system 108 identifies an occurrence of a particular event. The data may be operating data originating from the asset 102 or external data related to the asset 102 from the data source 112, among other data. The event may take the form of any of the events discussed above, such as a failure at the asset 102.
  • In other example implementations, the event may take the form of a new component or subsystem being added to the asset 102. Another event may take the form of a “leading indicator” event, which may involve sensors and/or actuators of the asset 102 generating data that differs, perhaps by a threshold differential, from the data identified at block 706 of FIG. 7 during the model-definition phase. This difference may indicate that the asset 102 has operating conditions that are above or below normal operating conditions for assets similar to the asset 102. Yet another event may take the form of an event that is followed by one or more leading indicator events.
  • Based on the identified occurrence of the particular event and/or the underlying data (e.g., operating data and/or external data related to the asset 102), the analytics system 108 may then modify the aggregate, predictive model and/or workflow and/or one or more individualized predictive models and/or workflows. In particular, at block 1004, the analytics system 108 may determine whether to modify the aggregate, predictive model. The analytics system 108 may determine to modify the aggregate, predictive model for a number of reasons.
  • For example, the analytics system 108 may modify the aggregate, predictive model if the identified occurrence of the particular event was the first occurrence of this particular event for a plurality of assets including the asset 102, such as the first time a particular failure occurred at an asset from a fleet of assets or the first time a particular new component was added to an asset from a fleet of assets.
  • In another example, the analytics system 108 may make a modification if data associated with the identified occurrence of the particular event is different from data that was utilized to originally define the aggregate model. For instance, the identified occurrence of the particular event may have occurred under operating conditions that had not previously been associated with an occurrence of the particular event (e.g., a particular failure might have occurred with associated sensor values not previously measured before with the particular failure). Other reasons for modifying the aggregate model are also possible.
  • If the analytics system 108 determines to modify the aggregate, predictive model, the analytics system 108 may do so at block 1006. Otherwise, the analytics system 108 may proceed to block 1008.
  • At block 1006, the analytics system 108 may modify the aggregate model based at least in part on the data related to the asset 102 that was received at block 1002. In example implementations, the aggregate model may be modified in various manners, such as any manner discussed above with reference to block 510 of FIG. 5. In other implementations, the aggregate model may be modified in other manners as well.
  • At block 1008, the analytics system 108 may then determine whether to modify the aggregate workflow. The analytics system 108 may modify the aggregate workflow for a number of reasons.
  • For example, the analytics system 108 may modify the aggregate workflow based on whether the aggregate model was modified at block 1004 and/or if there was some other change at the analytics system 108. In other examples, the analytics system 108 may modify the aggregate workflow if the identified occurrence of the event at block 1002 occurred despite the asset 102 executing the aggregate workflow. For instance, if the workflow was aimed at helping to prevent the occurrence of the event (e.g., a failure) and was executed properly, but the event still occurred, then the analytics system 108 may modify the aggregate workflow. Other reasons for modifying the aggregate workflow are also possible.
  • If the analytics system 108 determines to modify the aggregate workflow, the analytics system 108 may do so at block 1010. Otherwise, the analytics system 108 may proceed to block 1012.
  • At block 1010, the analytics system 108 may modify the aggregate workflow based at least in part on the data related to the asset 102 that was received at block 1002. In example implementations, the aggregate workflow may be modified in various manners, such as any manner discussed above with reference to block 514 of FIG. 5. In other implementations, the aggregate workflow may be modified in other manners as well.
  • At blocks 1012 through 1018, the analytics system 108 may be configured to modify one or more individualized models (e.g., for each of assets 102 and 104) and/or one or more individualized workflows (e.g., for one of asset 102 or asset 104) based at least in part on the data related to the asset 102 that was received at block 1002. The analytics system 108 may do so in a manner similar to blocks 1004-1010.
  • However, the reasons for modifying an individualized model or workflow may differ from the reasons for the aggregate case. For instance, the analytics system 108 may further consider the underlying asset characteristics that were utilized to define the individualized model and/or workflow in the first place. In a particular example, the analytics system 108 may modify an individualized model and/or workflow if the identified occurrence of the particular event was the first occurrence of this particular event for assets with asset characteristics of the asset 102. Other reasons for modifying an individualized model and/or workflow are also possible.
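  • A rough Python sketch of these modification decisions (blocks 1004 and 1012 through 1018) is given below; the event records, the fleet-history structures, and the "novel operating conditions" test are assumptions introduced only to make the reasoning concrete.

```python
# Hedged sketch of deciding whether to modify the aggregate model and an
# individualized model after an event is identified. Record structures and
# the novelty test are illustrative assumptions.

def should_modify_aggregate_model(event, fleet_event_history, training_conditions):
    # First occurrence of this event type anywhere in the fleet?
    first_for_fleet = all(e["type"] != event["type"] for e in fleet_event_history)
    # Did the event occur under operating conditions not seen when the
    # aggregate model was originally defined?
    novel_conditions = event["conditions"] not in training_conditions
    return first_for_fleet or novel_conditions

def should_modify_individualized_model(event, asset_characteristics, history_by_characteristics):
    # First occurrence of this event type for assets sharing these characteristics?
    prior = history_by_characteristics.get(asset_characteristics, [])
    return all(e["type"] != event["type"] for e in prior)

event = {"type": "bearing_failure", "conditions": ("high_load", "high_temp")}
print(should_modify_aggregate_model(event, [], [("low_load", "normal_temp")]))  # True
print(should_modify_individualized_model(event, ("age>5y",), {}))               # True
```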
  • To illustrate, FIG. 6D is a conceptual illustration of a modified model-workflow pair 630. Specifically, the model-workflow pair illustration 630 is a modified version of the aggregate model-workflow pair from FIG. 6A. As shown, the modified model-workflow pair illustration 630 includes the original column for model inputs 602 from FIG. 6A and includes modified columns for model calculations 634, model output ranges 636, and workflow operations 638. In this example, the modified predictive model has a single input, data from Sensor A, and has two calculations, Calculations I and III. If the output probability of the modified model is less than 75%, then workflow Operation 1 is performed. If the output probability is between 75% and 85%, then workflow Operation 2 is performed. And if the output probability is greater than 85%, then workflow Operation 3 is performed. Other example modified model-workflow pairs are possible and contemplated herein.
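  • The range-to-operation mapping described for FIG. 6D can be expressed in a few lines of Python; only the probability ranges come from the example above, while the operation bodies below are placeholders.

```python
# The modified pair of FIG. 6D maps the model's output probability to one of
# three workflow operations. The operation bodies are placeholders.

def operation_1(): print("performing Operation 1")
def operation_2(): print("performing Operation 2")
def operation_3(): print("performing Operation 3")

def select_workflow_operation(output_probability):
    if output_probability < 0.75:
        return operation_1
    elif output_probability <= 0.85:
        return operation_2
    else:
        return operation_3

select_workflow_operation(0.80)()   # performing Operation 2
```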
  • Returning to FIG. 10, at block 1020, the analytics system 108 may then transmit any model and/or workflow modifications to one or more assets. For example, the analytics system 108 may transmit a modified individualized model-workflow pair to the asset 102 (e.g., the asset whose data caused the modification) and a modified aggregate model to the asset 104. In this way, the analytics system 108 may dynamically modify models and/or workflows based on data associated with the operation of the asset 102 and distribute such modifications to multiple assets, such as the fleet to which the asset 102 belongs. Accordingly, other assets may benefit from the data originating from the asset 102 in that the other assets' local model-workflow pairs may be refined based on such data, thereby helping to create more accurate and robust model-workflow pairs.
  • F. Dynamic Execution of Model/Workflow
  • In another aspect, the asset 102 and/or the analytics system 108 may be configured to dynamically adjust executing a model-workflow pair. In particular, the asset 102 and/or the analytics system 108 may be configured to detect certain events that trigger a change in responsibilities with respect to whether the asset 102 and/or the analytics system 108 should be executing the predictive model and/or workflow.
  • In operation, both the asset 102 and the analytics system 108 may execute all or a part of a model-workflow pair on behalf of the asset 102. For example, after the asset 102 receives a model-workflow pair from the analytics system 108, the asset 102 may store the model-workflow pair in data storage but then may rely on the analytics system 108 to centrally execute part or all of the model-workflow pair. In particular, the asset 102 may provide at least sensor and/or actuator data to the analytics system 108, which may then use such data to centrally execute a predictive model for the asset 102. Based on the output of the model, the analytics system 108 may then execute the corresponding workflow or the analytics system 108 may transmit to the asset 102 the output of the model or an instruction for the asset 102 to locally execute the workflow.
  • In other examples, the analytics system 108 may rely on the asset 102 to locally execute part or all of the model-workflow pair. Specifically, the asset 102 may locally execute part or all of the predictive model and transmit results to the analytics system 108, which may then cause the analytics system 108 to centrally execute the corresponding workflow. Or the asset 102 may also locally execute the corresponding workflow.
  • In yet other examples, the analytics system 108 and the asset 102 may share in the responsibilities of executing the model-workflow pair. For instance, the analytics system 108 may centrally execute portions of the model and/or workflow, while the asset 102 locally executes the other portions of the model and/or workflow. The asset 102 and the analytics system 108 may then exchange the results of their respective execution responsibilities. Other examples are also possible.
  • At some point in time, the asset 102 and/or the analytics system 108 may determine that the execution of the model-workflow pair should be adjusted. That is, one or both may determine that the execution responsibilities should be modified. This operation may occur in a variety of manners.
  • FIG. 11 is a flow diagram 1100 depicting one possible example of an adjustment phase that may be used for adjusting execution of a model-workflow pair. For purposes of illustration, the example adjustment phase is described as being carried out by the asset 102 and/or the analytics system 108, but this adjustment phase may be carried out by other systems as well. One of ordinary skill in the art will appreciate that the flow diagram 1100 is provided for the sake of clarity and explanation and that numerous other combinations of operations may be utilized to adjust the execution of a model-workflow pair.
  • At block 1102, the asset 102 and/or the analytics system 108 may detect an adjustment factor (or potentially multiple) that indicates conditions that require an adjustment to the execution of the model-workflow pair. Examples of such conditions include network conditions of the communication network 106 or processing conditions of the asset 102 and/or analytics system 108, among other examples. Example network conditions may include network latency, network bandwidth, signal strength of a link between the asset 102 and the communication network 106, or some other indication of network performance, among other examples. Example processing conditions may include processing capacity (e.g., available processing power), processing usage (e.g., amount of processing power being consumed) or some other indication of processing capabilities, among other examples.
  • In practice, detecting an adjustment factor may be performed in a variety of manners. For example, this operation may involve determining whether network (or processing) conditions reach one or more threshold values or whether conditions have changed in a certain manner. Other examples of detecting an adjustment factor are also possible.
  • In particular, in some cases, detecting an adjustment factor may involve the asset 102 and/or the analytics system 108 detecting an indication that a signal strength of a communication link between the asset 102 and the analytics system 108 is below a threshold signal strength or has been decreasing at a certain rate of change. In this example, the adjustment factor may indicate that the asset 102 is about to go “off-line.”
  • In another case, detecting an adjustment factor may additionally or alternatively involve the asset 102 and/or the analytics system 108 detecting an indication that network latency is above a threshold latency or has been increasing at a certain rate of change. Or the indication may be that a network bandwidth is below a threshold bandwidth or has been decreasing at a certain rate of change. In these examples, the adjustment factor may indicate that the communication network 106 is lagging.
  • In yet other cases, detecting an adjustment factor may additionally or alternatively involve the asset 102 and/or the analytics system 108 detecting an indication that processing capacity is below a particular threshold or has been decreasing at a certain rate of change and/or that processing usage is above a threshold value or increasing at a certain rate of change. In such examples, the adjustment factor may indicate that processing capabilities of the asset 102 (and/or the analytics system 108) are low. Other examples of detecting an adjustment factor are also possible.
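  • The following Python sketch illustrates one way such adjustment factors might be detected at block 1102 from threshold checks on network and processing conditions; the specific threshold values, units, and condition names are assumptions, and rate-of-change checks could be added in the same style.

```python
# Illustrative detection of adjustment factors from network and processing
# conditions (block 1102). Threshold values and condition names are assumptions.

THRESHOLDS = {
    "signal_strength_dbm": -85,    # below this, the asset may soon go off-line
    "latency_ms": 500,             # above this, the network is lagging
    "bandwidth_kbps": 64,          # below this, the network is lagging
    "cpu_headroom_pct": 10,        # below this, processing capacity is low
}

def detect_adjustment_factors(conditions):
    """Return a list of adjustment factors implied by the current conditions."""
    factors = []
    if conditions["signal_strength_dbm"] < THRESHOLDS["signal_strength_dbm"]:
        factors.append("asset_about_to_go_offline")
    if (conditions["latency_ms"] > THRESHOLDS["latency_ms"]
            or conditions["bandwidth_kbps"] < THRESHOLDS["bandwidth_kbps"]):
        factors.append("network_lagging")
    if conditions["cpu_headroom_pct"] < THRESHOLDS["cpu_headroom_pct"]:
        factors.append("processing_capacity_low")
    return factors

print(detect_adjustment_factors({
    "signal_strength_dbm": -90, "latency_ms": 120,
    "bandwidth_kbps": 500, "cpu_headroom_pct": 40,
}))   # ['asset_about_to_go_offline']
```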
  • At block 1104, based on the detected adjustment factor, the local execution responsibilities may be adjusted, which may occur in a number of manners. For example, the asset 102 may have detected the adjustment factor and then determined to locally execute the model-workflow pair or a portion thereof. In some cases, the asset 102 may then transmit to the analytics system 108 a notification that the asset 102 is locally executing the predictive model and/or workflow.
  • In another example, the analytics system 108 may have detected the adjustment factor and then transmitted an instruction to the asset 102 to cause the asset 102 to locally execute the model-workflow pair or a portion thereof. Based on the instruction, the asset 102 may then locally execute the model-workflow pair.
  • At block 1106, the central execution responsibilities may be adjusted, which may occur in a number of manners. For example, the central execution responsibilities may be adjusted based on the analytics system 108 detecting an indication that the asset 102 is locally executing the predictive model and/or the workflow. The analytics system 108 may detect such an indication in a variety of manners.
  • In some examples, the analytics system 108 may detect the indication by receiving from the asset 102 a notification that the asset 102 is locally executing the predictive model and/or workflow. The notification may take various forms, such as binary or textual, and may identify the particular predictive model and/or workflow that the asset is locally executing.
  • In other examples, the analytics system 108 may detect the indication based on received operating data for the asset 102. Specifically, detecting the indication may involve the analytics system 108 receiving operating data for the asset 102 and then detecting one or more characteristics of the received data. From the one or more detected characteristics of the received data, the analytics system 108 may infer that the asset 102 is locally executing the predictive model and/or workflow.
  • In practice, detecting the one or more characteristics of the received data may be performed in a variety of manners. For instance, the analytics system 108 may detect a type of the received data. In particular, the analytics system 108 may detect a source of the data, such as a particular sensor or actuator that generated sensor or actuator data. Based on the type of the received data, the analytics system 108 may infer that the asset 102 is locally executing the predictive model and/or workflow. For example, based on detecting a sensor-identifier of a particular sensor, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and corresponding workflow that causes the asset 102 to acquire data from the particular sensor and transmit that data to the analytics system 108.
  • In another instance, the analytics system 108 may detect an amount of the received data. The analytics system 108 may compare that amount to a certain threshold amount of data. Based on the amount reaching the threshold amount, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and/or workflow that causes the asset 102 to acquire an amount of data equivalent to or greater than the threshold amount. Other examples are also possible.
  • In example implementations, detecting the one or more characteristics of the received data may involve the analytics system 108 detecting a certain change in one or more characteristics of the received data, such as a change in the type of the received data, a change in the amount of data that is received, or a change in the frequency at which data is received. In a particular example, a change in the type of the received data may involve the analytics system 108 detecting a change in the source of sensor data that it is receiving (e.g., a change in sensors and/or actuators that are generating the data provided to the analytics system 108).
  • In some cases, detecting a change in the received data may involve the analytics system 108 comparing recently received data to data received in the past (e.g., an hour, day, week, etc. before a present time). In any event, based on detecting the change in the one or more characteristics of the received data, the analytics system 108 may infer that the asset 102 is locally executing a predictive model and/or workflow that causes such a change to the data provided by the asset 102 to the analytics system 108.
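  • A minimal Python sketch of this inference is shown below, using an assumed batch structure with a source identifier and a sample list; the amount threshold, the comparison window, and the field names are likewise assumptions rather than part of the disclosure.

```python
# Rough sketch of inferring local model execution from characteristics of
# received data (amount, source, and changes relative to past data). The
# thresholds and field names are illustrative assumptions.

def infer_local_execution(recent_batches, past_batches,
                          expected_sensor=None, amount_threshold=1000):
    recent_amount = sum(len(b["samples"]) for b in recent_batches)
    recent_sources = {b["source"] for b in recent_batches}
    past_sources = {b["source"] for b in past_batches}

    by_amount = recent_amount >= amount_threshold
    by_source = expected_sensor is not None and expected_sensor in recent_sources
    by_change = recent_sources != past_sources   # e.g., new sensors now reporting

    return by_amount or by_source or by_change

recent = [{"source": "sensor_a", "samples": list(range(600))},
          {"source": "sensor_c", "samples": list(range(600))}]
past = [{"source": "sensor_a", "samples": list(range(600))}]
print(infer_local_execution(recent, past, expected_sensor="sensor_c"))  # True
```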
  • Moreover, the analytics system 108 may detect an indication that the asset 102 is locally executing the predictive model and/or the workflow based on detecting the adjustment factor at block 1102. For example, in the event that the analytics system 108 detects the adjustment factor at block 1102, the analytics system 108 may then transmit to the asset 102 instructions that cause the asset 102 to adjust its local execution responsibilities and accordingly, the analytics system 108 may adjust its own central execution responsibilities. Other examples of detecting the indication are also possible.
  • In example implementations, the central execution responsibilities may be adjusted in accordance with the adjustment to the local execution responsibilities. For instance, if the asset 102 is now locally executing the predictive model, then the analytics system 108 may accordingly cease centrally executing the predictive model (and may or may not cease centrally executing the corresponding workflow). Further, if the asset 102 is locally executing the corresponding workflow, then the analytics system 108 may accordingly cease executing the workflow (and may or may not cease centrally executing the predictive model). Other examples are also possible.
  • In practice, the asset 102 and/or the analytics system 108 may continuously perform the operations of blocks 1102-1106. And at times, the local and central execution responsibilities may be adjusted to facilitate optimizing the execution of model-workflow pairs.
  • Moreover, in some implementations, the asset 102 and/or the analytics system 108 may perform other operations based on detecting an adjustment factor. For example, based on a condition of the communication network 106 (e.g., bandwidth, latency, signal strength, or another indication of network quality), the asset 102 may locally execute a particular workflow. The particular workflow may be provided by the analytics system 108 based on the analytics system 108 detecting the condition of the communication network, may be already stored on the asset 102, or may be a modified version of a workflow already stored on the asset 102 (e.g., the asset 102 may locally modify a workflow). In some cases, the particular workflow may include a data-acquisition scheme that increases or decreases a sampling rate and/or a data-transmission scheme that increases or decreases a transmission rate or amount of data transmitted to the analytics system 108, among other possible workflow operations.
  • In a particular example, the asset 102 may determine that one or more detected conditions of the communication network have reached respective thresholds (e.g., indicating poor network quality). Based on such a determination, the asset 102 may locally execute a workflow that includes transmitting data according to a data-transmission scheme that reduces the amount and/or frequency of data the asset 102 transmits to the analytics system 108. Other examples are also possible.
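  • As a final hedged example, the Python sketch below adjusts a data-transmission scheme when detected network conditions cross assumed thresholds; the interval values, thresholds, and field names are illustrative and not prescribed by the disclosure.

```python
# Hedged sketch of a locally executed workflow that throttles the asset's
# data-transmission scheme when network quality is poor. Thresholds, intervals,
# and field names are assumptions.

def choose_transmission_scheme(latency_ms, bandwidth_kbps):
    poor_network = latency_ms > 500 or bandwidth_kbps < 64
    if poor_network:
        # Transmit less data, less often, while the network is degraded.
        return {"transmit_interval_s": 300, "fields": ["health_metric"]}
    return {"transmit_interval_s": 10, "fields": ["health_metric", "raw_sensor_data"]}

print(choose_transmission_scheme(latency_ms=800, bandwidth_kbps=256))
# {'transmit_interval_s': 300, 'fields': ['health_metric']}
```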
  • V. Example Methods
  • Turning now to FIG. 12, a flow diagram is depicted illustrating an example method 1200 for defining and deploying an aggregate, predictive model and corresponding workflow that may be performed by the analytics system 108. For the method 1200 and the other methods discussed below, the operations illustrated by the blocks in the flow diagrams may be performed in line with the above discussion. Moreover, one or more operations discussed above may be added to a given flow diagram.
  • At block 1202, the method 1200 may involve the analytics system 108 receiving respective operating data for a plurality of assets (e.g., the assets 102 and 104). At block 1204, the method 1200 may involve the analytics system 108, based on the received operating data, defining a predictive model and a corresponding workflow (e.g., a failure model and corresponding workflow) that are related to the operation of the plurality of assets. At block 1206, the method 1200 may involve the analytics system 108 transmitting to at least one asset of the plurality of assets (e.g., the asset 102) the predictive model and the corresponding workflow for local execution by the at least one asset.
  • FIG. 13 depicts a flow diagram of an example method 1300 for defining and deploying an individualized, predictive model and/or corresponding workflow that may be performed by the analytics system 108. At block 1302, the method 1300 may involve the analytics system 108 receiving operating data for a plurality of assets, where the plurality of assets includes at least a first asset (e.g., the asset 102). At block 1304, the method 1300 may involve the analytics system 108, based on the received operating data, defining an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets. At block 1306, the method 1300 may involve the analytics system 108 determining one or more characteristics of the first asset. At block 1308, the method 1300 may involve the analytics system 108, based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, defining at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset. At block 1310, the method 1300 may involve the analytics system 108 transmitting to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
  • FIG. 14 depicts a flow diagram of an example method 1400 for dynamically modifying the execution of model-workflow pairs that may be performed by the analytics system 108. At block 1402, the method 1400 may involve the analytics system 108 transmitting to an asset (e.g., the asset 102) a predictive model and corresponding workflow that are related to the operation of the asset for local execution by the asset. At block 1404, the method 1400 may involve the analytics system 108 detecting an indication that the asset is locally executing at least one of the predictive model or the corresponding workflow. At block 1406, the method 1400 may involve the analytics system 108, based on the detected indication, modifying central execution by the computing system of at least one of the predictive model or the corresponding workflow.
  • Similar to method 1400, another method for dynamically modifying the execution of model-workflow pairs may be performed by an asset (e.g., the asset 102). For instance, such a method may involve the asset 102 receiving from a central computing system (e.g., the analytics system 108) a predictive model and corresponding workflow that are related to the operation of the asset 102. The method may also involve the asset 102 detecting an adjustment factor indicating one or more conditions associated with adjusting execution of the predictive model and the corresponding workflow. The method may involve, based on the detected adjustment factor, (i) modifying local execution by the asset 102 of at least one of the predictive model or the corresponding workflow and (ii) transmitting to the central computing system an indication that the asset 102 is locally executing the at least one of the predictive model or the corresponding workflow to facilitate causing the central computing system to modify central execution by the computing system of at least one of the predictive model or the corresponding workflow.
  • To the extent that examples described herein involve operations performed or initiated by actors, such as “humans”, “operators”, “users” or other entities, this is for purposes of example and explanation only. The claims should not be construed as requiring action by such actors unless explicitly recited in the claim language.

Claims (20)

1. A computing system comprising:
at least one processor;
a non-transitory computer-readable medium; and
program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to:
receive operating data for a plurality of assets, wherein the plurality of assets comprises a first asset;
based on the received operating data, define an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets;
determine one or more characteristics of the first asset;
based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, define at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset; and
transmit to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
2. The computing system of claim 1, wherein the one or more characteristics of the first asset comprises at least one of an asset age or an asset health.
3. The computing system of claim 1, wherein determining the one or more characteristics of the first asset comprises determining the one or more characteristics of the first asset based on received operating data for the first asset.
4. The computing system of claim 1, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized predictive model and the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the individualized predictive model and the individualized corresponding workflow.
5. The computing system of claim 1, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the aggregate predictive model and the individualized corresponding workflow.
6. The computing system of claim 5, wherein the aggregate corresponding workflow comprises a first operation, and wherein the individualized corresponding workflow comprises a second operation that differs from the first operation.
7. The computing system of claim 6, wherein the first operation comprises acquiring data according to a first acquisition scheme, and wherein the second operation comprises acquiring data according to a second acquisition scheme.
8. The computing system of claim 6, wherein the first operation comprises acquiring data according to an acquisition scheme, and wherein the second operation comprises executing one or more diagnostic tools.
9. The computing system of claim 1, wherein the plurality of assets further comprises a second asset, and wherein the program instructions further comprise instructions that are executable to cause the computing system to:
after transmitting the at least one individualized predictive model or individualized corresponding workflow, receive operating data for the second asset indicating an occurrence of an event at the second asset;
based on the received operating data for the second asset, modify the at least one individualized predictive model or individualized corresponding workflow; and
transmit to the first asset the modified at least one individualized predictive model or individualized corresponding workflow.
10. A non-transitory computer-readable medium having instructions stored thereon that are executable to cause a computing system to:
receive operating data for a plurality of assets, wherein the plurality of assets comprises a first asset;
based on the received operating data, define an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets;
determine one or more characteristics of the first asset;
based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, define at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset; and
transmit to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
11. The non-transitory computer-readable medium of claim 10, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized predictive model and the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the individualized predictive model and the individualized corresponding workflow.
12. The non-transitory computer-readable medium of claim 10, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the aggregate predictive model and the individualized corresponding workflow.
13. The non-transitory computer-readable medium of claim 12, wherein the aggregate corresponding workflow comprises a first operation, and wherein the individualized corresponding workflow comprises a second operation that differs from the first operation.
14. The non-transitory computer-readable medium of claim 13, wherein the first operation comprises acquiring data according to a first acquisition scheme, and wherein the second operation comprises acquiring data according to a second acquisition scheme.
15. The non-transitory computer-readable medium of claim 13, wherein the first operation comprises acquiring data according to an acquisition scheme, and wherein the second operation comprises executing one or more diagnostic tools.
16. The non-transitory computer-readable medium of claim 10, wherein the plurality of assets further comprises a second asset, and wherein the instructions further comprise instructions that are executable to cause the computing system to:
after transmitting the at least one individualized predictive model or individualized corresponding workflow, receive operating data for the second asset indicating an occurrence of an event at the second asset;
based on the received operating data for the second asset, modify the at least one individualized predictive model or individualized corresponding workflow; and
transmit to the first asset the modified at least one individualized predictive model or individualized corresponding workflow.
17. A computer-implemented method comprising:
receiving operating data for a plurality of assets, wherein the plurality of assets comprises a first asset;
based on the received operating data, defining an aggregate predictive model and an aggregate corresponding workflow that are related to the operation of the plurality of assets;
determining one or more characteristics of the first asset;
based on the one or more characteristics of the first asset and the aggregate predictive model and the aggregate corresponding workflow, defining at least one of an individualized predictive model or an individualized corresponding workflow that is related to the operation of the first asset; and
transmitting to the first asset the defined at least one individualized predictive model or individualized corresponding workflow for local execution by the first asset.
18. The computer-implemented method of claim 17, wherein defining at least one of an individualized predictive model or an individualized corresponding workflow comprises defining the individualized corresponding workflow, and wherein transmitting the at least one individualized predictive model or individualized corresponding workflow comprises transmitting the aggregate predictive model and the individualized corresponding workflow.
19. The computer-implemented method of claim 18, wherein the aggregate corresponding workflow comprises a first operation, and wherein the individualized corresponding workflow comprises a second operation that differs from the first operation.
20. The computer-implemented method of claim 19, wherein one of the first operation or the second operation comprises executing one or more diagnostic tools.
US14/744,369 2014-12-01 2015-06-19 Individualized Predictive Model & Workflow for an Asset Abandoned US20160371616A1 (en)

Priority Applications (13)

Application Number Priority Date Filing Date Title
US14/744,362 US10176279B2 (en) 2015-06-05 2015-06-19 Dynamic execution of predictive models and workflows
US14/963,207 US10254751B2 (en) 2015-06-05 2015-12-08 Local analytics at an asset
JP2017565106A JP2018519594A (en) 2015-06-19 2016-06-13 Local analysis on assets
PCT/US2016/037247 WO2016205132A1 (en) 2015-06-19 2016-06-13 Local analytics at an asset
CA2989806A CA2989806A1 (en) 2015-06-19 2016-06-13 Local analytics at an asset
KR1020187001578A KR20180011333A (en) 2015-06-19 2016-06-13 Local analytics at an asset
AU2016277850A AU2016277850A1 (en) 2015-06-19 2016-06-13 Local analytics at an asset
CN201680043854.5A CN107851233A (en) 2015-06-19 2016-06-13 Local analytics at assets
EP16812206.7A EP3311345A4 (en) 2015-06-19 2016-06-13 Local analytics at an asset
US15/185,524 US10579750B2 (en) 2015-06-05 2016-06-17 Dynamic execution of predictive models
US15/599,360 US10878385B2 (en) 2015-06-19 2017-05-18 Computer system and method for distributing execution of a predictive model
US15/696,137 US20180247239A1 (en) 2015-06-19 2017-09-05 Computing System and Method for Compressing Time-Series Values
HK18111155.8A HK1251701A1 (en) 2015-06-19 2018-08-30 Local analytics at an asset

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201462086155P 2014-12-01 2014-12-01
US201462088651P 2014-12-07 2014-12-07
US14/732,258 US10417076B2 (en) 2014-12-01 2015-06-05 Asset health score

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/744,352 Continuation-In-Part US10261850B2 (en) 2014-12-01 2015-06-19 Aggregate predictive model and workflow for local execution

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/744,362 Continuation-In-Part US10176279B2 (en) 2015-06-05 2015-06-19 Dynamic execution of predictive models and workflows

Publications (1)

Publication Number Publication Date
US20160371616A1 true US20160371616A1 (en) 2016-12-22

Family

ID=56078998

Family Applications (15)

Application Number Title Priority Date Filing Date
US14/732,303 Abandoned US20160155098A1 (en) 2014-12-01 2015-06-05 Historical Health Metrics
US14/732,285 Active 2035-10-03 US10176032B2 (en) 2014-12-01 2015-06-05 Subsystem health score
US14/732,320 Active US9471452B2 (en) 2014-12-01 2015-06-05 Adaptive handling of operating data
US14/732,258 Active 2036-06-03 US10417076B2 (en) 2014-12-01 2015-06-05 Asset health score
US14/744,369 Abandoned US20160371616A1 (en) 2014-12-01 2015-06-19 Individualized Predictive Model & Workflow for an Asset
US14/744,352 Active 2037-09-07 US10261850B2 (en) 2014-12-01 2015-06-19 Aggregate predictive model and workflow for local execution
US14/853,189 Active 2036-01-10 US9842034B2 (en) 2014-12-01 2015-09-14 Mesh network routing based on availability of assets
US14/963,212 Active 2036-01-22 US10025653B2 (en) 2014-12-01 2015-12-08 Computer architecture and method for modifying intake data rate based on a predictive model
US14/963,208 Abandoned US20170161659A1 (en) 2014-12-01 2015-12-08 Computer Architecture and Method for Modifying Data Intake Storage Location Based on a Predictive Model
US14/963,209 Abandoned US20170161621A1 (en) 2014-12-01 2015-12-08 Computer Architecture and Method for Modifying Intake Data Set Based on a Predictive Model
US15/257,276 Active US9864665B2 (en) 2014-12-01 2016-09-06 Adaptive handling of operating data based on assets' external conditions
US15/257,258 Active US9910751B2 (en) 2014-12-01 2016-09-06 Adaptive handling of abnormal-condition indicator criteria
US15/805,124 Active 2035-11-26 US10545845B1 (en) 2014-12-01 2017-11-06 Mesh network routing based on availability of assets
US16/125,335 Active 2035-11-22 US11144378B2 (en) 2014-12-01 2018-09-07 Computer system and method for recommending an operating mode of an asset
US16/194,036 Active US10754721B2 (en) 2014-12-01 2018-11-16 Computer system and method for defining and using a predictive model configured to predict asset failures

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US14/732,303 Abandoned US20160155098A1 (en) 2014-12-01 2015-06-05 Historical Health Metrics
US14/732,285 Active 2035-10-03 US10176032B2 (en) 2014-12-01 2015-06-05 Subsystem health score
US14/732,320 Active US9471452B2 (en) 2014-12-01 2015-06-05 Adaptive handling of operating data
US14/732,258 Active 2036-06-03 US10417076B2 (en) 2014-12-01 2015-06-05 Asset health score

Family Applications After (10)

Application Number Title Priority Date Filing Date
US14/744,352 Active 2037-09-07 US10261850B2 (en) 2014-12-01 2015-06-19 Aggregate predictive model and workflow for local execution
US14/853,189 Active 2036-01-10 US9842034B2 (en) 2014-12-01 2015-09-14 Mesh network routing based on availability of assets
US14/963,212 Active 2036-01-22 US10025653B2 (en) 2014-12-01 2015-12-08 Computer architecture and method for modifying intake data rate based on a predictive model
US14/963,208 Abandoned US20170161659A1 (en) 2014-12-01 2015-12-08 Computer Architecture and Method for Modifying Data Intake Storage Location Based on a Predictive Model
US14/963,209 Abandoned US20170161621A1 (en) 2014-12-01 2015-12-08 Computer Architecture and Method for Modifying Intake Data Set Based on a Predictive Model
US15/257,276 Active US9864665B2 (en) 2014-12-01 2016-09-06 Adaptive handling of operating data based on assets' external conditions
US15/257,258 Active US9910751B2 (en) 2014-12-01 2016-09-06 Adaptive handling of abnormal-condition indicator criteria
US15/805,124 Active 2035-11-26 US10545845B1 (en) 2014-12-01 2017-11-06 Mesh network routing based on availability of assets
US16/125,335 Active 2035-11-22 US11144378B2 (en) 2014-12-01 2018-09-07 Computer system and method for recommending an operating mode of an asset
US16/194,036 Active US10754721B2 (en) 2014-12-01 2018-11-16 Computer system and method for defining and using a predictive model configured to predict asset failures

Country Status (10)

Country Link
US (15) US20160155098A1 (en)
EP (4) EP3227852A4 (en)
JP (4) JP2018500710A (en)
KR (3) KR20170118039A (en)
CN (2) CN107408226A (en)
AU (4) AU2015355156A1 (en)
CA (5) CA2969452A1 (en)
HK (3) HK1244937A1 (en)
SG (3) SG11201708095WA (en)
WO (4) WO2016089792A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161969A1 (en) * 2015-12-07 2017-06-08 The Boeing Company System and method for model-based optimization of subcomponent sensor communications
US20170264566A1 (en) * 2016-03-10 2017-09-14 Ricoh Co., Ltd. Architecture Customization at User Application Layer
WO2018213617A1 (en) * 2017-05-18 2018-11-22 Uptake Technologies, Inc. Computing system and method for approximating predictive models and time-series values
US10878385B2 (en) 2015-06-19 2020-12-29 Uptake Technologies, Inc. Computer system and method for distributing execution of a predictive model
US11036902B2 (en) 2015-06-19 2021-06-15 Uptake Technologies, Inc. Dynamic execution of predictive models and workflows
US20210365449A1 (en) * 2020-05-20 2021-11-25 Caterpillar Inc. Callaborative system and method for validating equipment failure models in an analytics crowdsourcing environment
WO2022072908A1 (en) * 2020-10-02 2022-04-07 Tonkean, Inc. Systems and methods for data objects for asynchronou workflows
US20220358434A1 (en) * 2021-05-06 2022-11-10 Honeywell International Inc. Foundation applications as an accelerator providing well defined extensibility and collection of seeded templates for enhanced user experience and quicker turnaround

Families Citing this family (225)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10518411B2 (en) * 2016-05-13 2019-12-31 General Electric Company Robotic repair or maintenance of an asset
US9772741B2 (en) * 2013-03-15 2017-09-26 Konstantinos (Constantin) F. Aliferis Data analysis computer system and method for organizing, presenting, and optimizing predictive modeling
US10599155B1 (en) 2014-05-20 2020-03-24 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
WO2020051523A1 (en) * 2018-09-07 2020-03-12 Uptake Technologies, Inc. Computer system and method for recommending an operating mode of an asset
US20160155098A1 (en) 2014-12-01 2016-06-02 Uptake, LLC Historical Health Metrics
US9882798B2 (en) * 2015-05-13 2018-01-30 Vmware, Inc. Method and system that analyzes operational characteristics of multi-tier applications
WO2016196775A1 (en) * 2015-06-04 2016-12-08 Fischer Block, Inc. Remaining-life and time-to-failure predictions of power assets
US10848371B2 (en) * 2015-06-11 2020-11-24 Instana, Inc. User interface for an application performance management system
USD796540S1 (en) 2015-06-14 2017-09-05 Google Inc. Display screen with graphical user interface for mobile camera history having event-specific activity notifications
USD809522S1 (en) 2015-06-14 2018-02-06 Google Inc. Display screen with animated graphical user interface for an alert screen
USD807376S1 (en) 2015-06-14 2018-01-09 Google Inc. Display screen with animated graphical user interface for smart home automation system having a multifunction status
US9361011B1 (en) 2015-06-14 2016-06-07 Google Inc. Methods and systems for presenting multiple live video feeds in a user interface
USD797131S1 (en) 2015-06-14 2017-09-12 Google Inc. Display screen with user interface for mode selector icons
US10133443B2 (en) 2015-06-14 2018-11-20 Google Llc Systems and methods for smart home automation using a multifunction status and entry point icon
USD803241S1 (en) 2015-06-14 2017-11-21 Google Inc. Display screen with animated graphical user interface for an alert screen
USD812076S1 (en) * 2015-06-14 2018-03-06 Google Llc Display screen with graphical user interface for monitoring remote video camera
US10031815B2 (en) * 2015-06-29 2018-07-24 Ca, Inc. Tracking health status in software components
JP6458663B2 (en) * 2015-06-29 2019-01-30 株式会社リコー Information processing system and program
US10484257B1 (en) * 2015-07-15 2019-11-19 Amazon Technologies, Inc. Network event automatic remediation service
CA2994770A1 (en) 2015-08-05 2017-02-09 Equifax Inc. Model integration tool
US9870649B1 (en) 2015-08-28 2018-01-16 State Farm Mutual Automobile Insurance Company Shared vehicle usage, monitoring and feedback
GB2542370B (en) * 2015-09-16 2020-05-27 Arm Ip Ltd A system for monitoring a plurality of distributed devices
JP6249003B2 (en) * 2015-09-30 2017-12-20 トヨタ自動車株式会社 Control device for hybrid vehicle
US9916194B2 (en) * 2015-10-01 2018-03-13 International Business Machines Corporation System component failure diagnosis
US10547971B2 (en) 2015-11-04 2020-01-28 xAd, Inc. Systems and methods for creating and using geo-blocks for location-based information service
US10455363B2 (en) 2015-11-04 2019-10-22 xAd, Inc. Systems and methods for using geo-blocks and geo-fences to discover lookalike mobile devices
US10278014B2 (en) * 2015-11-04 2019-04-30 xAd, Inc. System and method for using geo-blocks and geo-fences to predict mobile device locations
US10078571B2 (en) * 2015-12-09 2018-09-18 International Business Machines Corporation Rule-based adaptive monitoring of application performance
US10733514B1 (en) 2015-12-28 2020-08-04 EMC IP Holding Company LLC Methods and apparatus for multi-site time series data analysis
US10860405B1 (en) * 2015-12-28 2020-12-08 EMC IP Holding Company LLC System operational analytics
US20170183016A1 (en) * 2015-12-28 2017-06-29 General Electric Company Early warning system for locomotive bearings failures
US11242051B1 (en) 2016-01-22 2022-02-08 State Farm Mutual Automobile Insurance Company Autonomous vehicle action communications
US11719545B2 (en) 2016-01-22 2023-08-08 Hyundai Motor Company Autonomous vehicle component damage and salvage assessment
US11441916B1 (en) 2016-01-22 2022-09-13 State Farm Mutual Automobile Insurance Company Autonomous vehicle trip routing
US10134278B1 (en) 2016-01-22 2018-11-20 State Farm Mutual Automobile Insurance Company Autonomous vehicle application
US20210295439A1 (en) 2016-01-22 2021-09-23 State Farm Mutual Automobile Insurance Company Component malfunction impact assessment
US10452467B2 (en) * 2016-01-28 2019-10-22 Intel Corporation Automatic model-based computing environment performance monitoring
US9718486B1 (en) * 2016-02-01 2017-08-01 Electro-Motive Diesel, Inc. System for analyzing health of train
WO2017196821A1 (en) 2016-05-09 2017-11-16 Strong Force Iot Portfolio 2016, Llc Methods and systems for the industrial internet of things
US10510006B2 (en) * 2016-03-09 2019-12-17 Uptake Technologies, Inc. Handling of predictive models based on asset location
US10318903B2 (en) 2016-05-06 2019-06-11 General Electric Company Constrained cash computing system to optimally schedule aircraft repair capacity with closed loop dynamic physical state and asset utilization attainment control
US10983507B2 (en) 2016-05-09 2021-04-20 Strong Force Iot Portfolio 2016, Llc Method for data collection and frequency analysis with self-organization functionality
US10712738B2 (en) 2016-05-09 2020-07-14 Strong Force Iot Portfolio 2016, Llc Methods and systems for industrial internet of things data collection for vibration sensitive equipment
US11774944B2 (en) 2016-05-09 2023-10-03 Strong Force Iot Portfolio 2016, Llc Methods and systems for the industrial internet of things
US11327475B2 (en) 2016-05-09 2022-05-10 Strong Force Iot Portfolio 2016, Llc Methods and systems for intelligent collection and analysis of vehicle data
US10102056B1 (en) * 2016-05-23 2018-10-16 Amazon Technologies, Inc. Anomaly detection using machine learning
US10901407B2 (en) * 2016-05-31 2021-01-26 Applied Materials, Inc. Semiconductor device search and classification
WO2017210496A1 (en) * 2016-06-03 2017-12-07 Uptake Technologies, Inc. Provisioning a local analytics device
US20170353353A1 (en) 2016-06-03 2017-12-07 Uptake Technologies, Inc. Provisioning a Local Analytics Device
US10956758B2 (en) 2016-06-13 2021-03-23 Xevo Inc. Method and system for providing auto space management using virtuous cycle
US11237546B2 (en) 2016-06-15 2022-02-01 Strong Force loT Portfolio 2016, LLC Method and system of modifying a data collection trajectory for vehicles
US10346239B1 (en) * 2016-06-27 2019-07-09 Amazon Technologies, Inc. Predictive failure of hardware components
US10416982B1 (en) * 2016-06-30 2019-09-17 EMC IP Holding Company LLC Automated analysis system and method
US10263802B2 (en) 2016-07-12 2019-04-16 Google Llc Methods and devices for establishing connections with remote cameras
USD882583S1 (en) 2016-07-12 2020-04-28 Google Llc Display screen with graphical user interface
US20180048713A1 (en) * 2016-08-09 2018-02-15 Sciemetric Instruments Inc. Modular data acquisition and control system
US10462026B1 (en) * 2016-08-23 2019-10-29 Vce Company, Llc Probabilistic classifying system and method for a distributed computing environment
US10606254B2 (en) * 2016-09-14 2020-03-31 Emerson Process Management Power & Water Solutions, Inc. Method for improving process/equipment fault diagnosis
WO2018052015A1 (en) * 2016-09-14 2018-03-22 日本電気株式会社 Analysis support device for system, analysis support method and program for system
US10152810B2 (en) * 2016-10-21 2018-12-11 Siemens Aktiengesellschaft Techniques for displaying data comprising time and angular values acquired from a technical or industrial process
US10127125B2 (en) * 2016-10-21 2018-11-13 Accenture Global Solutions Limited Application monitoring and failure prediction
US10877465B2 (en) * 2016-10-24 2020-12-29 Fisher-Rosemount Systems, Inc. Process device condition and performance monitoring
US10270745B2 (en) 2016-10-24 2019-04-23 Fisher-Rosemount Systems, Inc. Securely transporting data across a data diode for secured process control communications
US10530748B2 (en) 2016-10-24 2020-01-07 Fisher-Rosemount Systems, Inc. Publishing data across a data diode for secured process control communications
US10257163B2 (en) 2016-10-24 2019-04-09 Fisher-Rosemount Systems, Inc. Secured process control communications
US10619760B2 (en) * 2016-10-24 2020-04-14 Fisher Controls International Llc Time-series analytics for control valve health assessment
US11238290B2 (en) 2016-10-26 2022-02-01 Google Llc Timeline-video relationship processing for alert events
US10386999B2 (en) 2016-10-26 2019-08-20 Google Llc Timeline-video relationship presentation for alert events
USD843398S1 (en) 2016-10-26 2019-03-19 Google Llc Display screen with graphical user interface for a timeline-video relationship presentation for alert events
US11012461B2 (en) * 2016-10-27 2021-05-18 Accenture Global Solutions Limited Network device vulnerability prediction
US20180135456A1 (en) * 2016-11-17 2018-05-17 General Electric Company Modeling to detect gas turbine anomalies
US10943283B2 (en) * 2016-11-18 2021-03-09 Cummins Inc. Service location recommendation tailoring
CA2987670A1 (en) * 2016-12-05 2018-06-05 Aware360 Ltd. Integrated personal safety and equipment monitoring system
US10304263B2 (en) * 2016-12-13 2019-05-28 The Boeing Company Vehicle system prognosis device and method
CN106485036A (en) * 2016-12-21 2017-03-08 杜伯仁 Based on the method graded to asset securitization Assets Pool by Survival Models
US10186155B2 (en) * 2016-12-22 2019-01-22 Xevo Inc. Method and system for providing interactive parking management via artificial intelligence analytic (AIA) services using cloud network
DE112017006715T5 (en) * 2017-01-03 2019-11-07 Intel Corporation SENSOR MANAGEMENT AND RELIABILITY
JP6706693B2 (en) * 2017-01-19 2020-06-10 株式会社日立製作所 Maintenance management system and maintenance management confirmation device used therefor
US10579961B2 (en) * 2017-01-26 2020-03-03 Uptake Technologies, Inc. Method and system of identifying environment features for use in analyzing asset operation
JP6879752B2 (en) * 2017-02-03 2021-06-02 株式会社日立システムズ Medical device monitoring system
US10318364B2 (en) * 2017-02-23 2019-06-11 Visual Process Limited Methods and systems for problem-alert aggregation
US11044329B2 (en) * 2017-02-27 2021-06-22 NCR Corportation Client application user experience tracking
CN106886481B (en) * 2017-02-28 2020-11-27 深圳市华傲数据技术有限公司 Static analysis and prediction method and device for system health degree
CN108595448B (en) * 2017-03-17 2022-03-04 北京京东尚科信息技术有限公司 Information pushing method and device
US10691516B2 (en) * 2017-04-05 2020-06-23 International Business Machines Corporation Measurement and visualization of resiliency in a hybrid IT infrastructure environment
JP7012093B2 (en) * 2017-04-13 2022-01-27 ルネサスエレクトロニクス株式会社 Probabilistic metric of accidental hardware failure
US10671039B2 (en) * 2017-05-03 2020-06-02 Uptake Technologies, Inc. Computer system and method for predicting an abnormal event at a wind turbine in a cluster
US10352496B2 (en) 2017-05-25 2019-07-16 Google Llc Stand assembly for an electronic device providing multiple degrees of freedom and built-in cables
US10972685B2 (en) 2017-05-25 2021-04-06 Google Llc Video camera assembly having an IR reflector
US10819921B2 (en) 2017-05-25 2020-10-27 Google Llc Camera assembly having a single-piece cover element
US20180347843A1 (en) * 2017-05-30 2018-12-06 Mikros Systems Corporation Methods and systems for prognostic analysis in electromechanical and environmental control equipment in building management systems
JP6718415B2 (en) * 2017-06-26 2020-07-08 株式会社日立ビルシステム Parts replacement prediction device, parts replacement prediction system, parts replacement prediction method
US10829344B2 (en) 2017-07-06 2020-11-10 Otis Elevator Company Elevator sensor system calibration
US11014780B2 (en) 2017-07-06 2021-05-25 Otis Elevator Company Elevator sensor calibration
EP3655824A1 (en) * 2017-07-21 2020-05-27 Johnson Controls Technology Company Building management system with dynamic work order generation with adaptive diagnostic task details
US10402192B2 (en) * 2017-07-25 2019-09-03 Aurora Labs Ltd. Constructing software delta updates for vehicle ECU software and abnormality detection based on toolchain
EP3662331A4 (en) 2017-08-02 2021-04-28 Strong Force Iot Portfolio 2016, LLC Methods and systems for detection in an industrial internet of things data collection environment with large data sets
US11131989B2 (en) 2017-08-02 2021-09-28 Strong Force Iot Portfolio 2016, Llc Systems and methods for data collection including pattern recognition
US10313413B2 (en) 2017-08-28 2019-06-04 Banjo, Inc. Detecting events from ingested communication signals
US11025693B2 (en) * 2017-08-28 2021-06-01 Banjo, Inc. Event detection from signal data removing private information
KR102025145B1 (en) 2017-09-01 2019-09-25 두산중공업 주식회사 Apparatus and Method for Predicting Plant Data
US20190080258A1 (en) * 2017-09-13 2019-03-14 Intel Corporation Observation hub device and method
US10817152B2 (en) * 2017-09-17 2020-10-27 Ge Inspection Technologies, Lp Industrial asset intelligence
US11687048B2 (en) * 2017-09-18 2023-06-27 Johnson Controls Tyco IP Holdings LLP Method and apparatus for evaluation of temperature sensors
US10585774B2 (en) * 2017-09-27 2020-03-10 International Business Machines Corporation Detection of misbehaving components for large scale distributed systems
WO2019068196A1 (en) 2017-10-06 2019-04-11 Raven Telemetry Inc. Augmented industrial management
TWI663570B (en) * 2017-10-20 2019-06-21 財團法人資訊工業策進會 Power consumption analyzing server and power consumption analyzing method thereof
US10379982B2 (en) * 2017-10-31 2019-08-13 Uptake Technologies, Inc. Computer system and method for performing a virtual load test
US10956843B2 (en) * 2017-11-07 2021-03-23 International Business Machines Corporation Determining optimal device refresh cycles and device repairs through cognitive analysis of unstructured data and device health scores
US11334854B2 (en) * 2017-11-10 2022-05-17 General Electric Company Systems and methods to generate an asset workscope
WO2019099494A1 (en) * 2017-11-14 2019-05-23 Tagnos, Inc. Use of historic and contemporary tracking data to improve healthcare facility operations
US10672204B2 (en) 2017-11-15 2020-06-02 The Boeing Company Real time streaming analytics for flight data processing
CN107730893B (en) * 2017-11-30 2019-08-09 大连理工大学 Shared bus station passenger flow forecasting method based on multidimensional passenger trip characteristics
US20190180300A1 (en) 2017-12-07 2019-06-13 Fifth Third Bancorp Geospatial market analytics
US11010233B1 (en) * 2018-01-18 2021-05-18 Pure Storage, Inc Hardware-based system monitoring
JP7003159B2 (en) * 2018-01-19 2022-01-20 株式会社日立製作所 Failure prediction system and failure prediction method
WO2019143365A1 (en) * 2018-01-22 2019-07-25 Hitachi High-Tech Solutions Corporation Securing systems from harmful communications
EP3514555B1 (en) * 2018-01-22 2020-07-22 Siemens Aktiengesellschaft Apparatus for monitoring an actuator system, method for providing an apparatus for monitoring an actuator system and method for monitoring an actuator system
US10585724B2 (en) 2018-04-13 2020-03-10 Banjo, Inc. Notifying entities of relevant events
US20190266575A1 (en) * 2018-02-27 2019-08-29 Honeywell International, Inc. Modifying field workflows
US11080127B1 (en) * 2018-02-28 2021-08-03 Arizona Public Service Company Methods and apparatus for detection of process parameter anomalies
DE102018203179A1 (en) * 2018-03-02 2019-09-05 Robert Bosch Gmbh Device, in particular handheld power tool management device, and method for monitoring and / or managing a plurality of objects
US10169135B1 (en) 2018-03-02 2019-01-01 Uptake Technologies, Inc. Computer system and method of detecting manufacturing network anomalies
US10554518B1 (en) * 2018-03-02 2020-02-04 Uptake Technologies, Inc. Computer system and method for evaluating health of nodes in a manufacturing network
US10950345B2 (en) * 2018-03-23 2021-03-16 Siemens Healthcare Diagnostics Inc. Methods, apparatus, and systems for integration of diagnostic laboratory devices
US10354462B1 (en) * 2018-04-06 2019-07-16 Toyota Motor Engineering & Manufacturing North America, Inc. Fault diagnosis in power electronics using adaptive PCA
US11691755B2 (en) * 2018-04-16 2023-07-04 Wing Aviation Llc Multi-UAV management
US10635095B2 (en) 2018-04-24 2020-04-28 Uptake Technologies, Inc. Computer system and method for creating a supervised failure model
US20210217256A1 (en) * 2018-05-16 2021-07-15 Siemens Mobility Austria Gmbh Method and Apparatus for Diagnosing and Monitoring Vehicles, Vehicle Components and Routes
CN108648071B (en) 2018-05-17 2020-05-19 阿里巴巴集团控股有限公司 Resource value evaluation method and device based on block chain
WO2019226715A1 (en) * 2018-05-21 2019-11-28 Promptlink Communications, Inc. Techniques for assessing a customer premises equipment device
US10896114B2 (en) * 2018-05-23 2021-01-19 Seagate Technology Llc Machine learning error prediction in storage arrays
US11823274B2 (en) 2018-06-04 2023-11-21 Machine Cover, Inc. Parametric instruments and methods relating to business interruption
US11842407B2 (en) 2018-06-04 2023-12-12 Machine Cover, Inc. Parametric instruments and methods relating to geographical area business interruption
US20190378349A1 (en) * 2018-06-07 2019-12-12 GM Global Technology Operations LLC Vehicle remaining useful life prediction
DE102018209407A1 (en) * 2018-06-13 2019-12-19 Robert Bosch Gmbh Method and device for handling an anomaly in a communication network
KR102062097B1 (en) * 2018-06-27 2020-06-23 송암시스콤 주식회사 A Bus Information Terminal Having Dual Structure With Automatic Recovery Function
WO2020023410A1 (en) * 2018-07-22 2020-01-30 Scott Amron Distributed inventory system
DE102018118437A1 (en) 2018-07-31 2020-02-06 Airbus Defence and Space GmbH System and method for monitoring the condition of an unmanned aircraft
US10926888B2 (en) * 2018-08-07 2021-02-23 The Boeing Company Methods and systems for identifying associated events in an aircraft
US11146911B2 (en) 2018-08-17 2021-10-12 xAd, Inc. Systems and methods for pacing information campaigns based on predicted and observed location events
US11172324B2 (en) 2018-08-17 2021-11-09 xAd, Inc. Systems and methods for predicting targeted location events
US10349208B1 (en) 2018-08-17 2019-07-09 xAd, Inc. Systems and methods for real-time prediction of mobile device locations
US11134359B2 (en) 2018-08-17 2021-09-28 xAd, Inc. Systems and methods for calibrated location prediction
EP3835898A4 (en) * 2018-08-23 2022-07-27 Siemens Aktiengesellschaft Artificial intelligence computing device, control method and apparatus, engineer station, and industrial automation system
EP3850382A1 (en) * 2018-09-10 2021-07-21 3M Innovative Properties Company Method and system for monitoring a health of a power cable accessory based on machine learning
US20210192390A1 (en) * 2018-09-24 2021-06-24 Hewlett-Packard Development Company, L.P. Device status assessment
US11455223B2 (en) * 2018-10-11 2022-09-27 International Business Machines Corporation Using system errors and manufacturer defects in system components causing the system errors to determine a quality assessment value for the components
US11573879B2 (en) * 2018-10-22 2023-02-07 General Electric Company Active asset monitoring
US11200142B2 (en) 2018-10-26 2021-12-14 International Business Machines Corporation Perform preemptive identification and reduction of risk of failure in computational systems by training a machine learning module
US11200103B2 (en) * 2018-10-26 2021-12-14 International Business Machines Corporation Using a machine learning module to perform preemptive identification and reduction of risk of failure in computational systems
US10635360B1 (en) * 2018-10-29 2020-04-28 International Business Machines Corporation Adjusting data ingest based on compaction rate in a dispersed storage network
US11182596B2 (en) * 2018-11-08 2021-11-23 International Business Machines Corporation Identifying a deficiency of a facility
JP6531213B1 (en) * 2018-12-04 2019-06-12 Psp株式会社 Medical device failure prediction system, medical device failure prediction method, and program
US11188691B2 (en) * 2018-12-21 2021-11-30 Utopus Insights, Inc. Scalable system and method for forecasting wind turbine failure using SCADA alarm and event logs
US11062233B2 (en) * 2018-12-21 2021-07-13 The Nielsen Company (Us), Llc Methods and apparatus to analyze performance of watermark encoding devices
KR102124779B1 (en) * 2018-12-21 2020-06-19 한국수력원자력 주식회사 Early warning device for plant start-up and shutdown using multi-predictive models
KR102152403B1 (en) * 2018-12-24 2020-09-04 아주대학교산학협력단 Apparatus and method for detecting abnormal behavior using data pattern analysis
US10984154B2 (en) * 2018-12-27 2021-04-20 Utopus Insights, Inc. System and method for evaluating models for predictive failure of renewable energy assets
CN109840312B (en) * 2019-01-22 2022-11-29 新奥数能科技有限公司 Abnormal value detection method and device for boiler load rate-energy efficiency curve
US11334057B2 (en) 2019-01-25 2022-05-17 Waygate Technologies Usa, Lp Anomaly detection for predictive maintenance and deriving outcomes and workflows based on data quality
US11030067B2 (en) 2019-01-29 2021-06-08 Uptake Technologies, Inc. Computer system and method for presenting asset insights at a graphical user interface
WO2020159680A1 (en) * 2019-01-31 2020-08-06 Exxonmobil Research And Engineering Company Monitoring and reporting operational performance of machinery
US11232368B2 (en) * 2019-02-20 2022-01-25 Accenture Global Solutions Limited System for predicting equipment failure events and optimizing manufacturing operations
US11084387B2 (en) * 2019-02-25 2021-08-10 Toyota Research Institute, Inc. Systems, methods, and storage media for arranging a plurality of cells in a vehicle battery pack
WO2020179704A1 (en) * 2019-03-01 2020-09-10 日本電気株式会社 Network management method, network system, intensive analysis device, terminal device, and program
SE543982C2 (en) * 2019-03-26 2021-10-12 Stoneridge Electronics Ab Method of processing vehicle data from multiple sources and controller therefor
JP7162560B2 (en) * 2019-03-28 2022-10-28 日立造船株式会社 Information processing device, information processing method, information processing program, and garbage incineration plant
WO2020200412A1 (en) * 2019-04-01 2020-10-08 Abb Schweiz Ag Asset condition monitoring method with automatic anomaly detection
JP7320368B2 (en) * 2019-04-09 2023-08-03 ナブテスコ株式会社 FAILURE PREDICTION DEVICE, FAILURE PREDICTION METHOD AND COMPUTER PROGRAM
CA3042657A1 (en) * 2019-05-06 2020-11-06 Otr Wheel Safety, Inc. Integrated system for assessing integrity of wheels and rims of off the road vehicles
CN111846095B (en) * 2019-05-14 2022-05-17 北京骑胜科技有限公司 Fault detection device, electric power-assisted vehicle and fault detection method
CN110163387A (en) * 2019-05-17 2019-08-23 合肥帧讯软件有限公司 Laboratory management system design
CN114175073A (en) 2019-05-24 2022-03-11 伯克利之光生命科技公司 System and method for optimizing instrument system workflow
US11604934B2 (en) * 2019-05-29 2023-03-14 Nec Corporation Failure prediction using gradient-based sensor identification
US11887177B2 (en) * 2019-06-18 2024-01-30 Hewlett-Packard Development Company, L.P. Part re-order based on part-specific sensor data
US20200401904A1 (en) * 2019-06-24 2020-12-24 GE Precision Healthcare LLC Adaptive medical imaging device configuration using artificial intelligence
US11817212B2 (en) 2019-06-26 2023-11-14 Roche Diagnostics Operations, Inc. Maintenance method for a laboratory system
CN114072825A (en) * 2019-07-02 2022-02-18 科路实有限责任公司 Monitoring, predicting and maintaining condition of railway elements using digital twinning
GB201909762D0 (en) * 2019-07-08 2019-08-21 Edwards Vacuum Llc Vacuum system with diagnostic circuitry and a method and computer program for monitoring the health of such a vacuum system
FR3098938B1 (en) * 2019-07-15 2022-01-07 Bull Sas Method and device for determining an anomaly prediction performance index value in a computer infrastructure from performance indicator values
US20210019651A1 (en) * 2019-07-18 2021-01-21 Hitachi, Ltd. Method for integrating prediction result
US11334410B1 (en) * 2019-07-22 2022-05-17 Intuit Inc. Determining aberrant members of a homogenous cluster of systems using external monitors
US11086749B2 (en) * 2019-08-01 2021-08-10 International Business Machines Corporation Dynamically updating device health scores and weighting factors
US11374814B2 (en) 2019-08-01 2022-06-28 Hewlett Packard Enterprise Development Lp Network device configuration update using rank and health
US11010222B2 (en) * 2019-08-29 2021-05-18 Sap Se Failure mode specific analytics using parametric models
FR3101164B1 (en) * 2019-09-25 2023-08-04 Mouafo Serge Romaric Tembo Method for real-time parsimonious predictive maintenance of a critical system, computer program product and associated device
US11088932B2 (en) * 2019-10-14 2021-08-10 International Business Machines Corporation Managing network system incidents
US11321115B2 (en) * 2019-10-25 2022-05-03 Vmware, Inc. Scalable and dynamic data collection and processing
US11379694B2 (en) 2019-10-25 2022-07-05 Vmware, Inc. Scalable and dynamic data collection and processing
EP3822725A1 (en) * 2019-11-15 2021-05-19 General Electric Company Systems, and methods for diagnosing an additive manufacturing device
JP2021108110A (en) * 2019-11-19 2021-07-29 ディー.エス.レイダー エルティーディーD.S.Raider Ltd System and method for monitoring and predicting vehicle damage
US11941116B2 (en) 2019-11-22 2024-03-26 Pure Storage, Inc. Ransomware-based data protection parameter modification
US20210382992A1 (en) * 2019-11-22 2021-12-09 Pure Storage, Inc. Remote Analysis of Potentially Corrupt Data Written to a Storage System
US11921573B2 (en) * 2019-12-02 2024-03-05 Accenture Global Solutions Limited Systems and methods for predictive system failure monitoring
JP7335154B2 (en) * 2019-12-17 2023-08-29 株式会社東芝 Information processing device, information processing method, and program
US11509136B2 (en) * 2019-12-30 2022-11-22 Utopus Insights, Inc. Scalable systems and methods for assessing healthy condition scores in renewable asset management
GB2591772A (en) * 2020-02-06 2021-08-11 Roads & Transp Authority Asset maintenance management system and method
JP7418256B2 (en) * 2020-03-24 2024-01-19 東芝テック株式会社 Information processing equipment and programs
US11558767B2 (en) 2020-03-26 2023-01-17 Sony Group Corporation Electronic device and related methods for predicting initiation of establishment of a network with one or more other electronic devices
US11482341B2 (en) 2020-05-07 2022-10-25 Carrier Corporation System and a method for uniformly characterizing equipment category
EP3923213A1 (en) * 2020-06-08 2021-12-15 ABB Power Grids Switzerland AG Method and computing system for performing a prognostic health analysis for an asset
EP4165645A1 (en) * 2020-06-12 2023-04-19 Roche Diagnostics GmbH Systems and methods for assessing bays on diagnostic devices
US11210160B1 (en) * 2020-08-13 2021-12-28 Servicenow, Inc. Computer information technology alert remediation selection based on alert similarity
US11188405B1 (en) 2020-08-25 2021-11-30 Servicenow, Inc. Similar alert identification based on application fingerprints
US20220114559A1 (en) * 2020-10-09 2022-04-14 ANI Technologies Private Limited Asset health management for vehicles
US11290343B1 (en) * 2020-12-10 2022-03-29 Hitachi, Ltd. System and method for asset and data management
US11675342B2 (en) * 2020-12-24 2023-06-13 Noodle Analytics, Inc. AI-based smart health surveillance system and method
CA3205716A1 (en) * 2021-01-22 2022-07-28 Xiang Liu Systems for infrastructure degradation modelling and methods of use thereof
US11633168B2 (en) * 2021-04-02 2023-04-25 AIX Scan, Inc. Fast 3D radiography with multiple pulsed X-ray sources by deflecting tube electron beam using electro-magnetic field
US11688059B2 (en) * 2021-05-27 2023-06-27 International Business Machines Corporation Asset maintenance prediction using infrared and regular images
US11500712B1 (en) * 2021-06-14 2022-11-15 EMC IP Holding Company LLC Method and system for intelligent proactive error log activation
TW202311961A (en) * 2021-09-02 2023-03-16 遠傳電信股份有限公司 Method and system for detecting an abnormal occurrence of an application program
US20230080981A1 (en) * 2021-09-13 2023-03-16 International Business Machines Corporation Predictive maintenance explanations based on user profile
US11789842B2 (en) * 2021-10-11 2023-10-17 Dell Products L.P. System and method for advanced detection of potential system impairment
US20230112875A1 (en) * 2021-10-13 2023-04-13 Honeywell International Inc. Alarm performance optimizer
US11797408B2 (en) 2021-12-30 2023-10-24 Juniper Networks, Inc. Dynamic prediction of system resource requirement of network software in a live network using data driven models
US20230221693A1 (en) * 2022-01-10 2023-07-13 General Electric Technology Gmbh Systems and methods for integrated condition monitoring for power system asset health scoring
US20230236922A1 (en) * 2022-01-24 2023-07-27 International Business Machines Corporation Failure Prediction Using Informational Logs and Golden Signals
US20230236890A1 (en) * 2022-01-25 2023-07-27 Poplar Technologies, Inc. Apparatus for generating a resource probability model
US20230259419A1 (en) * 2022-02-14 2023-08-17 Capital One Services, Llc Incident resolution system
US20230394888A1 (en) * 2022-06-01 2023-12-07 The Boeing Company Vehicle Health Management Using a Counterfactual Machine Learning Model
JP7292538B1 (en) 2022-06-17 2023-06-16 三菱電機株式会社 Soundness evaluation device, soundness evaluation method, and soundness evaluation program
US11907230B1 (en) * 2023-01-10 2024-02-20 Dell Products L.P. System and method for distributed management of hardware based on intent
US11929891B1 (en) 2023-01-10 2024-03-12 Dell Products L.P. System and method for distributed management of hardware through relationship management

Family Cites Families (254)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3026510A (en) 1959-10-02 1962-03-20 Bell Telephone Labor Inc Self timed pcm encoder
DE3816520A1 (en) * 1988-05-14 1989-11-23 Bosch Gmbh Robert CONTROL PROCESS AND DEVICE, IN PARTICULAR LAMBDA CONTROL
US5633800A (en) 1992-10-21 1997-05-27 General Electric Company Integrated model-based reasoning/expert system diagnosis for rotating machinery
US5363317A (en) * 1992-10-29 1994-11-08 United Technologies Corporation Engine failure monitor for a multi-engine aircraft having partial engine failure and driveshaft failure detection
JP3287889B2 (en) * 1992-11-24 2002-06-04 トーヨーエイテック株式会社 Quality control equipment
US5566092A (en) 1993-12-30 1996-10-15 Caterpillar Inc. Machine fault diagnostics system and method
US5918222A (en) * 1995-03-17 1999-06-29 Kabushiki Kaisha Toshiba Information disclosing apparatus and multi-modal information input/output system
JP3366837B2 (en) 1997-08-15 2003-01-14 株式会社小松製作所 Machine abnormality monitoring device and method
US6473659B1 (en) 1998-04-10 2002-10-29 General Electric Company System and method for integrating a plurality of diagnostic related information
US6115697A (en) 1999-02-19 2000-09-05 Dynamic Research Group Computerized system and method for optimizing after-tax proceeds
US6625500B1 (en) * 1999-03-16 2003-09-23 Chou H. Li Self-optimizing method and machine
US6336065B1 (en) 1999-10-28 2002-01-01 General Electric Company Method and system for analyzing fault and snapshot operational parameter data for diagnostics of machine malfunctions
US6622264B1 (en) 1999-10-28 2003-09-16 General Electric Company Process and system for analyzing fault log data from a machine so as to identify faults predictive of machine failures
US6947797B2 (en) 1999-04-02 2005-09-20 General Electric Company Method and system for diagnosing machine malfunctions
JP3892614B2 (en) 1999-04-30 2007-03-14 新日本製鐵株式会社 Equipment and product process abnormality diagnosis method and apparatus
US6353902B1 (en) * 1999-06-08 2002-03-05 Nortel Networks Limited Network fault prediction and proactive maintenance system
US20110208567A9 (en) 1999-08-23 2011-08-25 Roddy Nicholas E System and method for managing a fleet of remote assets
US7783507B2 (en) * 1999-08-23 2010-08-24 General Electric Company System and method for managing a fleet of remote assets
US6442542B1 (en) 1999-10-08 2002-08-27 General Electric Company Diagnostic system with learning capabilities
US6615367B1 (en) 1999-10-28 2003-09-02 General Electric Company Method and apparatus for diagnosing difficult to diagnose faults in a complex system
US7020595B1 (en) 1999-11-26 2006-03-28 General Electric Company Methods and apparatus for model based diagnostics
US6650949B1 (en) 1999-12-30 2003-11-18 General Electric Company Method and system for sorting incident log data from a plurality of machines
US6634000B1 (en) 2000-02-01 2003-10-14 General Electric Company Analyzing fault logs and continuous data for diagnostics for a locomotive
US6725398B1 (en) 2000-02-11 2004-04-20 General Electric Company Method, system, and program product for analyzing a fault log of a malfunctioning machine
US20030126258A1 (en) 2000-02-22 2003-07-03 Conkright Gary W. Web based fault detection architecture
EP1279104B1 (en) 2000-03-09 2008-12-24 Smartsignal Corporation Generalized lensing angular similarity operator
US7739096B2 (en) 2000-03-09 2010-06-15 Smartsignal Corporation System for extraction of representative data for training of adaptive process monitoring equipment
US6957172B2 (en) 2000-03-09 2005-10-18 Smartsignal Corporation Complex signal decomposition and modeling
US6952662B2 (en) 2000-03-30 2005-10-04 Smartsignal Corporation Signal differentiation system using improved non-linear operator
US6708156B1 (en) 2000-04-17 2004-03-16 Michael Von Gonten, Inc. System and method for projecting market penetration
US20160078695A1 (en) * 2000-05-01 2016-03-17 General Electric Company Method and system for managing a fleet of remote assets and/or ascertaining a repair for an asset
US20020059075A1 (en) 2000-05-01 2002-05-16 Schick Louis A. Method and system for managing a land-based vehicle
US6799154B1 (en) 2000-05-25 2004-09-28 General Electric Company System and method for predicting the timing of future service events of a product
US6983207B2 (en) 2000-06-16 2006-01-03 Ntn Corporation Machine component monitoring, diagnosing and selling system
US6760631B1 (en) 2000-10-04 2004-07-06 General Electric Company Multivariable control method and system without detailed prediction model
US20020091972A1 (en) 2001-01-05 2002-07-11 Harris David P. Method for predicting machine or process faults and automated system for implementing same
US6859739B2 (en) 2001-01-19 2005-02-22 Smartsignal Corporation Global state change indicator for empirical modeling in condition based monitoring
US7233886B2 (en) 2001-01-19 2007-06-19 Smartsignal Corporation Adaptive modeling of changed states in predictive condition monitoring
US7373283B2 (en) 2001-02-22 2008-05-13 Smartsignal Corporation Monitoring and fault detection system and method using improved empirical model for range extrema
US20020183971A1 (en) 2001-04-10 2002-12-05 Wegerich Stephan W. Diagnostic systems and methods for predictive condition monitoring
US7539597B2 (en) 2001-04-10 2009-05-26 Smartsignal Corporation Diagnostic systems and methods for predictive condition monitoring
US6643600B2 (en) 2001-04-26 2003-11-04 General Electric Company Method and system for assessing adjustment factors in testing or monitoring process
US7079982B2 (en) 2001-05-08 2006-07-18 Hitachi Construction Machinery Co., Ltd. Working machine, trouble diagnosis system of working machine, and maintenance system of working machine
US7107491B2 (en) 2001-05-16 2006-09-12 General Electric Company System, method and computer product for performing automated predictive reliability
US6975962B2 (en) 2001-06-11 2005-12-13 Smartsignal Corporation Residual signal alert generation for condition monitoring using approximated SPRT distribution
US7120685B2 (en) * 2001-06-26 2006-10-10 International Business Machines Corporation Method and apparatus for dynamic configurable logging of activities in a distributed computing system
US7457732B2 (en) 2001-08-17 2008-11-25 General Electric Company System and method for measuring quality of baseline modeling techniques
US7428478B2 (en) 2001-08-17 2008-09-23 General Electric Company System and method for improving accuracy of baseline models
US8108249B2 (en) 2001-12-04 2012-01-31 Kimberly-Clark Worldwide, Inc. Business planner
JP2003256367A (en) 2002-03-06 2003-09-12 Seiko Epson Corp Information providing system concerning electronic equipment error and server for managing past error results of electric equipment
US6892163B1 (en) 2002-03-08 2005-05-10 Intellectual Assets Llc Surveillance system and method having an adaptive sequential probability fault detection test
US7660705B1 (en) 2002-03-19 2010-02-09 Microsoft Corporation Bayesian approach for learning regression decision graph models and regression models for time series analysis
US8176186B2 (en) 2002-10-30 2012-05-08 Riverbed Technology, Inc. Transaction accelerator for client-server communications systems
HUE033477T2 (en) 2002-11-04 2017-12-28 Ge Intelligent Platforms Inc System state monitoring using recurrent local learning machine
US6823253B2 (en) 2002-11-27 2004-11-23 General Electric Company Methods and apparatus for model predictive control of aircraft gas turbine engines
US8017411B2 (en) * 2002-12-18 2011-09-13 GlobalFoundries, Inc. Dynamic adaptive sampling rate for model prediction
JP4333331B2 (en) 2002-12-20 2009-09-16 セイコーエプソン株式会社 Failure prediction system, failure prediction program, and failure prediction method
US7634384B2 (en) 2003-03-18 2009-12-15 Fisher-Rosemount Systems, Inc. Asset optimization reporting in a process plant
US20040243636A1 (en) * 2003-03-18 2004-12-02 Smartsignal Corporation Equipment health monitoring architecture for fleets of assets
GB0307406D0 (en) * 2003-03-31 2003-05-07 British Telecomm Data analysis system and method
US7054706B2 (en) 2003-06-30 2006-05-30 Intel Corporation Managing supply chains with model predictive control
US8645276B2 (en) 2003-07-11 2014-02-04 Ca, Inc. Modeling of applications and business process services through auto discovery analysis
US7701858B2 (en) 2003-07-17 2010-04-20 Sensicast Systems Method and apparatus for wireless communication in a mesh network
US7181370B2 (en) 2003-08-26 2007-02-20 Siemens Energy & Automation, Inc. System and method for remotely obtaining and managing machine data
DE10345440A1 (en) 2003-09-30 2005-05-12 Siemens Ag Method, computer program with program code means and computer program product for analyzing influencing variables on a burning process in a combustion chamber using a trainable, statistical model
US7127371B2 (en) 2003-10-08 2006-10-24 Ge Medical Systems Information Customized medical equipment preventative maintenance method and system
US7451210B2 (en) * 2003-11-24 2008-11-11 International Business Machines Corporation Hybrid method for event prediction and system control
AU2003295301A1 (en) 2003-12-23 2005-07-14 Telefonaktiebolaget Lm Ericsson (Publ) Method and system for efficient routing in ad hoc networks
EP1706849B1 (en) 2004-01-09 2018-08-29 United Parcel Service Of America, Inc. System, method and apparatus for collecting telematics and sensor information in a delivery vehicle
KR101271876B1 (en) 2004-03-23 2013-06-10 더 리젠트스 오브 더 유니이버시티 오브 캘리포니아 Apparatus and method for improving reliability of collected sensor data over a network
US7062370B2 (en) 2004-03-30 2006-06-13 Honeywell International Inc. Model-based detection, diagnosis of turbine engine faults
US7447666B2 (en) 2004-04-09 2008-11-04 The Boeing Company System and method for analyzing a pattern in a time-stamped event sequence
US7729789B2 (en) 2004-05-04 2010-06-01 Fisher-Rosemount Systems, Inc. Process plant monitoring based on multivariate statistical analysis and on-line process simulation
US7412626B2 (en) 2004-05-21 2008-08-12 Sap Ag Method and system for intelligent and adaptive exception handling
JP2006072717A (en) 2004-09-02 2006-03-16 Hitachi Ltd Disk subsystem
US7570581B2 (en) * 2004-09-23 2009-08-04 Motorola, Inc. Dynamic reduction of route reconvergence time
US7280941B2 (en) 2004-12-29 2007-10-09 General Electric Company Method and apparatus for in-situ detection and isolation of aircraft engine faults
US7640145B2 (en) 2005-04-25 2009-12-29 Smartsignal Corporation Automated model configuration and deployment system for equipment health monitoring
US7536364B2 (en) 2005-04-28 2009-05-19 General Electric Company Method and system for performing model-based multi-objective asset optimization and decision-making
US9846479B1 (en) 2005-05-30 2017-12-19 Invent.Ly, Llc Smart security device with monitoring mode and communication mode
US20060293777A1 (en) * 2005-06-07 2006-12-28 International Business Machines Corporation Automated and adaptive threshold setting
US20070174449A1 (en) * 2005-07-22 2007-07-26 Ankur Gupta Method and system for identifying potential adverse network conditions
US7333917B2 (en) * 2005-08-11 2008-02-19 The University Of North Carolina At Chapel Hill Novelty detection systems, methods and computer program products for real-time diagnostics/prognostics in complex physical systems
US7599762B2 (en) * 2005-08-24 2009-10-06 Rockwell Automation Technologies, Inc. Model-based control for crane control and underway replenishment
US7174233B1 (en) * 2005-08-29 2007-02-06 International Business Machines Corporation Quality/reliability system and method in multilevel manufacturing environment
US7509235B2 (en) 2005-08-31 2009-03-24 General Electric Company Method and system for forecasting reliability of assets
US7714735B2 (en) * 2005-09-13 2010-05-11 Daniel Rockwell Monitoring electrical assets for fault and efficiency correction
JP4717579B2 (en) 2005-09-30 2011-07-06 株式会社小松製作所 Maintenance work management system for work machines
US20070088570A1 (en) 2005-10-18 2007-04-19 Honeywell International, Inc. System and method for predicting device deterioration
US7484132B2 (en) * 2005-10-28 2009-01-27 International Business Machines Corporation Clustering process for software server failure prediction
US7869908B2 (en) 2006-01-20 2011-01-11 General Electric Company Method and system for data collection and analysis
US7680041B2 (en) 2006-01-31 2010-03-16 Zensys A/S Node repair in a mesh network
US7509537B1 (en) 2006-02-02 2009-03-24 Rockwell Collins, Inc. Prognostic processor system for real-time failure analysis of line replaceable units
US7496798B2 (en) 2006-02-14 2009-02-24 Jaw Link Data-centric monitoring method
US7647131B1 (en) * 2006-03-09 2010-01-12 Rockwell Automation Technologies, Inc. Dynamic determination of sampling rates
US20090146839A1 (en) 2006-05-17 2009-06-11 Tanla Solutions Limited Automated meter reading system and method thereof
US7949436B2 (en) 2006-05-19 2011-05-24 Oracle America, Inc. Method and apparatus for automatically detecting and correcting misalignment of a semiconductor chip
US7558771B2 (en) 2006-06-07 2009-07-07 Gm Global Technology Operations, Inc. System and method for selection of prediction tools
US20080040244A1 (en) * 2006-08-08 2008-02-14 Logcon Spec Ops, Inc. Tracking and Managing Assets
US20080097945A1 (en) 2006-08-09 2008-04-24 The University Of North Carolina At Chapel Hill Novelty detection systems, methods and computer program products for real-time diagnostics/prognostics in complex physical systems
US20080059120A1 (en) 2006-08-30 2008-03-06 Fei Xiao Using fault history to predict replacement parts
US20080059080A1 (en) 2006-08-31 2008-03-06 Caterpillar Inc. Method and system for selective, event-based communications
US8275577B2 (en) 2006-09-19 2012-09-25 Smartsignal Corporation Kernel-based method for detecting boiler tube leaks
JP2008117129A (en) 2006-11-02 2008-05-22 Nsk Ltd Data collection device for abnormality analysis and on-vehicle controller using the same
US7725293B2 (en) 2006-12-07 2010-05-25 General Electric Company System and method for equipment remaining life estimation
US8311774B2 (en) 2006-12-15 2012-11-13 Smartsignal Corporation Robust distance measures for on-line monitoring
US7661032B2 (en) * 2007-01-06 2010-02-09 International Business Machines Corporation Adjusting sliding window parameters in intelligent event archiving and failure analysis
JP4892367B2 (en) * 2007-02-02 2012-03-07 株式会社日立システムズ Abnormal sign detection system
US7548830B2 (en) 2007-02-23 2009-06-16 General Electric Company System and method for equipment remaining life estimation
US20080221834A1 (en) 2007-03-09 2008-09-11 General Electric Company Method and system for enhanced fault detection workflow
US7730364B2 (en) * 2007-04-05 2010-06-01 International Business Machines Corporation Systems and methods for predictive failure management
US20080255760A1 (en) * 2007-04-16 2008-10-16 Honeywell International, Inc. Forecasting system
US8145578B2 (en) 2007-04-17 2012-03-27 Eagle View Technologies, Inc. Aerial roof estimation system and method
US8229769B1 (en) 2007-06-13 2012-07-24 United Services Automobile Association Systems and methods for processing overhead imagery
US7949659B2 (en) * 2007-06-29 2011-05-24 Amazon Technologies, Inc. Recommendation system with multiple integrated recommenders
CA2695450C (en) 2007-08-03 2016-10-18 Smartsignal Corporation Fuzzy classification approach to fault pattern matching
US7919940B2 (en) 2007-10-21 2011-04-05 Ge Intelligent Platforms, Inc. System and method for jerk limited trajectory planning for a path planner
US8050800B2 (en) 2007-10-21 2011-11-01 Ge Intelligent Platforms, Inc. Method and system for meeting end conditions in a motion control system
US8700550B1 (en) 2007-11-30 2014-04-15 Intellectual Assets Llc Adaptive model training system and method
US7962240B2 (en) 2007-12-20 2011-06-14 Ge Intelligent Platforms, Inc. Methods and systems for synchronizing a control signal of a slave follower with a master source
JP2009206850A (en) 2008-02-28 2009-09-10 Fuji Xerox Co Ltd Failure diagnosis device and program
US7756678B2 (en) 2008-05-29 2010-07-13 General Electric Company System and method for advanced condition monitoring of an asset system
US8352216B2 (en) 2008-05-29 2013-01-08 General Electric Company System and method for advanced condition monitoring of an asset system
US7822578B2 (en) 2008-06-17 2010-10-26 General Electric Company Systems and methods for predicting maintenance of intelligent electronic devices
US20090326890A1 (en) 2008-06-30 2009-12-31 Honeywell International Inc. System and method for predicting system events and deterioration
US8285402B2 (en) 2008-07-14 2012-10-09 Ge Intelligent Platforms, Inc. Method and system for safety monitored terminal block
GB0813561D0 (en) * 2008-07-24 2008-09-03 Rolls Royce Plc Developments in or relating to power demand management
US8060274B2 (en) * 2008-10-30 2011-11-15 International Business Machines Corporation Location-based vehicle maintenance scheduling
JP5058947B2 (en) * 2008-11-10 2012-10-24 株式会社日立製作所 Terminal, program, and inventory management method
US8369281B2 (en) * 2008-11-24 2013-02-05 At&T Intellectual Property I, L.P. Cell-to-WiFi switcher
US8289150B2 (en) 2008-12-05 2012-10-16 Industrial Technology Research Institute Wireless sensor network and data sensing method thereof
KR101044074B1 (en) 2008-12-24 2011-06-27 동국대학교기술지주 주식회사 System and method for CAFM based on public concept of space ownership
KR20100076708A (en) 2008-12-26 2010-07-06 한국건설기술연구원 Asset management information system for social infrastructures
JP5108116B2 (en) 2009-01-14 2012-12-26 株式会社日立製作所 Device abnormality monitoring method and system
US8024069B2 (en) 2009-01-28 2011-09-20 Ge Intelligent Platforms, Inc. System and method for path planning
US8224765B2 (en) * 2009-02-05 2012-07-17 Honeywell International Inc. Method for computing the relative likelihood of failures
US8989887B2 (en) 2009-02-11 2015-03-24 Applied Materials, Inc. Use of prediction data in monitoring actual production targets
JP5370832B2 (en) * 2009-07-01 2013-12-18 株式会社リコー State determination device and failure prediction system using the same
DE102009043091A1 (en) 2009-09-25 2011-03-31 Wincor Nixdorf International Gmbh Device for handling notes of value
US9007896B2 (en) * 2009-10-07 2015-04-14 Verizon Patent And Licensing Inc. Congestion control based on call responses
CN102045181B (en) * 2009-10-10 2013-08-07 中国移动通信集团公司 Method and device for handling terminal offline fault
CN102844721B (en) * 2010-02-26 2015-11-25 株式会社日立制作所 Failure cause diagnostic system and method thereof
EP2375637A1 (en) 2010-03-22 2011-10-12 British Telecommunications Public Limited Company Network routing adaptation based on failure prediction
JP5416630B2 (en) * 2010-03-24 2014-02-12 株式会社日立製作所 Moving object abnormality judgment support system
CN102859457B (en) * 2010-04-26 2015-11-25 株式会社日立制作所 Time series data diagnosis compression method
CN103003801B (en) 2010-05-14 2016-08-03 哈尼施费格尔技术公司 The forecast analysis monitored for remote machine
US8234420B2 (en) 2010-07-14 2012-07-31 Ge Intelligent Platforms, Inc. Method, system, and apparatus for communicating using multiple controllers
US8634314B2 (en) * 2010-07-30 2014-01-21 Cisco Technology, Inc. Reporting statistics on the health of a sensor node in a sensor network
JP5025776B2 (en) * 2010-09-28 2012-09-12 株式会社東芝 Abnormality diagnosis filter generator
US8532795B2 (en) 2010-10-04 2013-09-10 General Electric Company Method and system for offline code validation
JP2012133672A (en) 2010-12-22 2012-07-12 Mitsubishi Heavy Ind Ltd Design optimization device and design optimization method for welded structure
CN103502899B (en) 2011-01-26 2016-09-28 谷歌公司 Dynamic prediction Modeling Platform
US8825840B2 (en) * 2011-02-22 2014-09-02 Intuit Inc. Systems and methods for self-adjusting logging of log messages
US8682454B2 (en) * 2011-02-28 2014-03-25 United Technologies Corporation Method and system for controlling a multivariable system with limits
US8862938B2 (en) * 2011-04-18 2014-10-14 General Electric Company System, method, and apparatus for resolving errors in a system
WO2012145616A2 (en) 2011-04-20 2012-10-26 The Cleveland Clinic Foundation Predictive modeling
US8594982B2 (en) 2011-06-09 2013-11-26 Pulsar Informatics, Inc. Systems and methods for distributed calculation of fatigue-risk prediction and optimization
DE102011108019A1 (en) * 2011-07-19 2013-01-24 Daimler Ag Method for determining a quality of a reducing agent solution containing ammonia used for the reduction of nitrogen oxide
US8660980B2 (en) 2011-07-19 2014-02-25 Smartsignal Corporation Monitoring system using kernel regression modeling with pattern sequences
US9256224B2 (en) 2011-07-19 2016-02-09 GE Intelligent Platforms, Inc Method of sequential kernel regression modeling for forecasting and prognostics
US8620853B2 (en) 2011-07-19 2013-12-31 Smartsignal Corporation Monitoring method using kernel regression modeling with pattern sequences
GB2494416A (en) 2011-09-07 2013-03-13 Rolls Royce Plc Asset Condition Monitoring Using Internal Signals Of The Controller
US9477223B2 (en) 2011-09-14 2016-10-25 General Electric Company Condition monitoring system and method
US9176819B2 (en) 2011-09-23 2015-11-03 Fujitsu Limited Detecting sensor malfunctions using compression analysis of binary decision diagrams
US8560494B1 (en) 2011-09-30 2013-10-15 Palantir Technologies, Inc. Visual data importer
JP2013092954A (en) * 2011-10-27 2013-05-16 Hitachi Ltd Management task support device, management task support method, and management task support system
EP2726987A4 (en) * 2011-11-04 2016-05-18 Hewlett Packard Development Co Fault processing in a system
US8560165B2 (en) * 2012-01-17 2013-10-15 GM Global Technology Operations LLC Co-operative on-board and off-board component and system diagnosis and prognosis
US8825567B2 (en) * 2012-02-08 2014-09-02 General Electric Company Fault prediction of monitored assets
US20140310379A1 (en) 2013-04-15 2014-10-16 Flextronics Ap, Llc Vehicle initiated communications with third parties via virtual personality
US8626385B2 (en) 2012-03-15 2014-01-07 Caterpillar Inc. Systems and methods for analyzing machine performance
CN104246482B (en) * 2012-04-19 2017-04-12 霍夫曼-拉罗奇有限公司 Method and device for determining an analyte concentration in blood
US9051945B2 (en) 2012-04-30 2015-06-09 Caterpillar Inc. System and method for identifying impending hydraulic pump failure
US8850000B2 (en) 2012-05-08 2014-09-30 Electro-Motive Diesel, Inc. Trigger-based data collection system
US9063856B2 (en) 2012-05-09 2015-06-23 Infosys Limited Method and system for detecting symptoms and determining an optimal remedy pattern for a faulty device
JP5905771B2 (en) 2012-05-14 2016-04-20 セコム株式会社 Communication failure support system
WO2013171620A1 (en) * 2012-05-18 2013-11-21 Koninklijke Philips N.V. Method of rendering hemodynamic instability index indicator information
US20130325502A1 (en) 2012-06-05 2013-12-05 Ari Robicsek System and method for providing syndrome-specific, weighted-incidence treatment regimen recommendations
US9234750B2 (en) 2012-07-30 2016-01-12 Caterpillar Inc. System and method for operating a machine
US20140060030A1 (en) 2012-08-31 2014-03-06 Caterpillar Inc. Hydraulic accumulator health monitor
US9960929B2 (en) 2012-09-21 2018-05-01 Google Llc Environmental sensing with a doorbell at a smart-home
AU2013317688A1 (en) 2012-09-24 2015-03-05 Caterpillar Inc. Mining operation control and review
EP2901284A4 (en) * 2012-09-28 2016-06-01 Longsand Ltd Predicting failure of a storage device
CN104272269A (en) * 2012-10-02 2015-01-07 松下知识产权经营株式会社 Monitoring device and monitoring method
WO2014054051A1 (en) 2012-10-03 2014-04-10 Forbes Marshall Pvt. Ltd. Health monitoring system for a process plant and a method thereof
US9176183B2 (en) 2012-10-15 2015-11-03 GlobalFoundries, Inc. Method and system for wafer quality predictive modeling based on multi-source information with heterogeneous relatedness
US9613413B2 (en) 2012-10-17 2017-04-04 Caterpillar Inc. Methods and systems for determining part wear based on digital image of part
US9139188B2 (en) 2012-11-01 2015-09-22 Caterpillar Inc. Prediction control strategy for hybrid machinery
US9647906B2 (en) * 2012-11-02 2017-05-09 Rockwell Automation Technologies, Inc. Cloud based drive monitoring solution
US10146611B2 (en) * 2012-11-19 2018-12-04 Siemens Corporation Resilient optimization and control for distributed systems
US20140170617A1 (en) 2012-12-19 2014-06-19 Caterpillar Inc. Monitoring System for a Machine
US9151681B2 (en) 2012-12-19 2015-10-06 Progress Rail Services Corporation Temperature detector having different types of independent sensors
US8918246B2 (en) 2012-12-27 2014-12-23 Caterpillar Inc. Augmented reality implement control
US20140184643A1 (en) 2012-12-27 2014-07-03 Caterpillar Inc. Augmented Reality Worksite
US20140188778A1 (en) 2012-12-27 2014-07-03 General Electric Company Computer-Implemented System for Detecting Anomaly Conditions in a Fleet of Assets and Method of Using the Same
US9217999B2 (en) 2013-01-22 2015-12-22 General Electric Company Systems and methods for analyzing data in a non-destructive testing system
US10001518B2 (en) 2013-02-04 2018-06-19 Abb Schweiz Ag System and method for power transmission and distribution asset condition prediction and diagnosis
CN104021264B (en) * 2013-02-28 2017-06-20 华为技术有限公司 Failure prediction method and device
US10909137B2 (en) * 2014-10-06 2021-02-02 Fisher-Rosemount Systems, Inc. Streaming data for analytics in process control systems
US9593591B2 (en) 2013-03-13 2017-03-14 Rolls-Royce Corporation Engine health monitoring and power allocation control for a turbine engine using electric generators
US9262255B2 (en) * 2013-03-14 2016-02-16 International Business Machines Corporation Multi-stage failure analysis and prediction
US8937619B2 (en) 2013-03-15 2015-01-20 Palantir Technologies Inc. Generating an object time series from data objects
US8909656B2 (en) 2013-03-15 2014-12-09 Palantir Technologies Inc. Filter chains with associated multipath views for exploring large data sets
WO2014145977A1 (en) 2013-03-15 2014-09-18 Bates Alexander B System and methods for automated plant asset failure detection
US8917274B2 (en) 2013-03-15 2014-12-23 Palantir Technologies Inc. Event matrix based on integrated data
CA2908825C (en) 2013-04-08 2021-06-08 Reciprocating Network Solutions, Llc Reciprocating machinery monitoring system and method
JP5768834B2 (en) 2013-04-12 2015-08-26 横河電機株式会社 Plant model management apparatus and method
US20140330747A1 (en) 2013-05-01 2014-11-06 International Business Machines Corporation Asset lifecycle management
US20140330609A1 (en) 2013-05-01 2014-11-06 International Business Machines Corporation Performance Driven Municipal Asset Needs and Sustainability Analysis
US8799799B1 (en) 2013-05-07 2014-08-05 Palantir Technologies Inc. Interactive geospatial map
US9438648B2 (en) * 2013-05-09 2016-09-06 Rockwell Automation Technologies, Inc. Industrial data analytics in a cloud platform
US9456302B2 (en) 2013-06-03 2016-09-27 Temeda Llc Geospatial asset tracking systems, methods and apparatus for acquiring, manipulating and presenting telematic metadata
US9665843B2 (en) 2013-06-03 2017-05-30 Abb Schweiz Ag Industrial asset health profile
US11055450B2 (en) * 2013-06-10 2021-07-06 Abb Power Grids Switzerland Ag Industrial asset health model update
US10534361B2 (en) 2013-06-10 2020-01-14 Abb Schweiz Ag Industrial asset health model update
US8886601B1 (en) 2013-06-20 2014-11-11 Palantir Technologies, Inc. System and method for incrementally replicating investigative analysis data
WO2014205497A1 (en) 2013-06-26 2014-12-31 Climate Risk Pty Ltd Computer implemented frameworks and methodologies for enabling climate change related risk analysis
US10018997B2 (en) 2013-06-28 2018-07-10 Fisher-Rosemount Systems, Inc. Non-intrusive data analytics in a process control system
US8713467B1 (en) 2013-08-09 2014-04-29 Palantir Technologies, Inc. Context-sensitive views
US9319905B2 (en) * 2013-08-30 2016-04-19 Google Inc. Re-tasking balloons in a balloon network based on expected failure modes of balloons
US9535774B2 (en) * 2013-09-09 2017-01-03 International Business Machines Corporation Methods, apparatus and system for notification of predictable memory failure
US8689108B1 (en) 2013-09-24 2014-04-01 Palantir Technologies, Inc. Presentation and analysis of user interaction data
US8938686B1 (en) 2013-10-03 2015-01-20 Palantir Technologies Inc. Systems and methods for analyzing performance of an entity
US8812960B1 (en) 2013-10-07 2014-08-19 Palantir Technologies Inc. Cohort-based presentation of user interaction data
US8786605B1 (en) 2013-10-24 2014-07-22 Palantir Technologies Inc. Systems and methods for distance and congestion-aware resource deployment
US9355010B2 (en) * 2013-10-29 2016-05-31 Seagate Technology Llc Deriving an operational state of a data center using a predictive computer analysis model
JP5530020B1 (en) * 2013-11-01 2014-06-25 株式会社日立パワーソリューションズ Abnormality diagnosis system and abnormality diagnosis method
US8832594B1 (en) 2013-11-04 2014-09-09 Palantir Technologies Inc. Space-optimized display of multi-column tables with selective text truncation based on a combined text width
US8868537B1 (en) 2013-11-11 2014-10-21 Palantir Technologies, Inc. Simple web search
US9753796B2 (en) * 2013-12-06 2017-09-05 Lookout, Inc. Distributed monitoring, evaluation, and response for multiple devices
US9774522B2 (en) * 2014-01-06 2017-09-26 Cisco Technology, Inc. Triggering reroutes using early learning machine-based prediction of failures
WO2015112892A1 (en) 2014-01-24 2015-07-30 Telvent Usa Llc Utility resource asset management system
JP6459180B2 (en) 2014-02-10 2019-01-30 富士ゼロックス株式会社 Failure prediction system, failure prediction device, job execution device, and program
EP3111592B1 (en) 2014-02-27 2021-04-28 Intel Corporation Workload optimization, scheduling, and placement for rack-scale architecture computing systems
US10410116B2 (en) 2014-03-11 2019-09-10 SparkCognition, Inc. System and method for calculating remaining useful time of objects
US8924429B1 (en) 2014-03-18 2014-12-30 Palantir Technologies Inc. Determining and extracting changed data from a data source
US10521747B2 (en) 2014-04-08 2019-12-31 Northrop Grumman Systems Corporation System and method for providing a scalable semantic mechanism for policy-driven assessment and effective action taking on dynamically changing data
US9857238B2 (en) 2014-04-18 2018-01-02 Google Inc. Thermodynamic model generation and implementation using observed HVAC and/or enclosure characteristics
US20160028605A1 (en) 2014-05-30 2016-01-28 Reylabs Inc. Systems and methods involving mobile linear asset efficiency, exploration, monitoring and/or display aspects
US9734693B2 (en) 2014-07-09 2017-08-15 Mckinley Equipment Corporation Remote equipment monitoring and notification using a server system
US9733629B2 (en) 2014-07-21 2017-08-15 Honeywell International Inc. Cascaded model predictive control (MPC) approach for plantwide control and optimization
US20160028648A1 (en) 2014-07-25 2016-01-28 At&T Intellectual Property I, L.P. Resource Management Service
US9348710B2 (en) * 2014-07-29 2016-05-24 Saudi Arabian Oil Company Proactive failure recovery model for distributed computing using a checkpoint frequency determined by a MTBF threshold
CN104398431B (en) 2014-11-21 2017-03-29 贵州神奇药物研究院 Traditional Chinese medicine extract for treating chloasma and preparation method thereof
EP3026510B1 (en) 2014-11-26 2022-08-17 General Electric Company Methods and systems for enhancing control of power plant generating units
US20160155098A1 (en) * 2014-12-01 2016-06-02 Uptake, LLC Historical Health Metrics
US9960598B2 (en) 2015-03-03 2018-05-01 General Electric Company Methods and systems for enhancing control of power plant generating units
JP6339951B2 (en) * 2015-03-04 2018-06-06 株式会社日立製作所 Data collection system, data collection method, server, and gateway
AU2016265273A1 (en) 2015-05-15 2017-10-19 Parker-Hannifin Corporation Integrated asset integrity management system
US10197631B2 (en) 2015-06-01 2019-02-05 Verizon Patent And Licensing Inc. Systems and methods for determining vehicle battery health
US10254751B2 (en) * 2015-06-05 2019-04-09 Uptake Technologies, Inc. Local analytics at an asset
US10579750B2 (en) * 2015-06-05 2020-03-03 Uptake Technologies, Inc. Dynamic execution of predictive models
US10007710B2 (en) * 2015-09-21 2018-06-26 Splunk Inc. Adaptive control of data collection requests sent to external data sources
US10489752B2 (en) 2016-08-26 2019-11-26 General Electric Company Failure mode ranking in an asset management system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030139905A1 (en) * 2001-12-19 2003-07-24 David Helsper Method and system for analyzing and predicting the behavior of systems
US20070067678A1 (en) * 2005-07-11 2007-03-22 Martin Hosek Intelligent condition-monitoring and fault diagnostic system for predictive maintenance
US7693608B2 (en) * 2006-04-12 2010-04-06 Edsa Micro Corporation Systems and methods for alarm filtering and management within a real-time data acquisition and monitoring environment
US20150184549A1 (en) * 2013-12-31 2015-07-02 General Electric Company Methods and systems for enhancing control of power plant generating units

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10878385B2 (en) 2015-06-19 2020-12-29 Uptake Technologies, Inc. Computer system and method for distributing execution of a predictive model
US11036902B2 (en) 2015-06-19 2021-06-15 Uptake Technologies, Inc. Dynamic execution of predictive models and workflows
US20170161969A1 (en) * 2015-12-07 2017-06-08 The Boeing Company System and method for model-based optimization of subcomponent sensor communications
US20170264566A1 (en) * 2016-03-10 2017-09-14 Ricoh Co., Ltd. Architecture Customization at User Application Layer
US10530705B2 (en) * 2016-03-10 2020-01-07 Ricoh Co., Ltd. Architecture customization at user application layer
WO2018213617A1 (en) * 2017-05-18 2018-11-22 Uptake Technologies, Inc. Computing system and method for approximating predictive models and time-series values
US20210365449A1 (en) * 2020-05-20 2021-11-25 Caterpillar Inc. Collaborative system and method for validating equipment failure models in an analytics crowdsourcing environment
WO2022072908A1 (en) * 2020-10-02 2022-04-07 Tonkean, Inc. Systems and methods for data objects for asynchronous workflows
US20220358434A1 (en) * 2021-05-06 2022-11-10 Honeywell International Inc. Foundation applications as an accelerator providing well defined extensibility and collection of seeded templates for enhanced user experience and quicker turnaround

Also Published As

Publication number Publication date
WO2016089794A1 (en) 2016-06-09
EP3350978A4 (en) 2019-02-20
US20160155315A1 (en) 2016-06-02
EP3227784A4 (en) 2018-04-25
AU2016368298A1 (en) 2018-07-19
KR20180082606A (en) 2018-07-18
EP3387531A4 (en) 2019-04-24
CA3007466A1 (en) 2017-06-15
SG11201708095WA (en) 2017-11-29
US20160371599A1 (en) 2016-12-22
WO2017100245A1 (en) 2017-06-15
JP2018500710A (en) 2018-01-11
KR20170117377A (en) 2017-10-23
US10176032B2 (en) 2019-01-08
CA2998345A1 (en) 2017-03-23
EP3227852A1 (en) 2017-10-11
US10025653B2 (en) 2018-07-17
US20160154690A1 (en) 2016-06-02
JP2018536941A (en) 2018-12-13
AU2016322513A1 (en) 2018-04-12
CA2969455C (en) 2023-06-27
JP2018500709A (en) 2018-01-11
AU2015355154A1 (en) 2017-07-20
KR20170118039A (en) 2017-10-24
HK1244937A1 (en) 2018-08-17
US10417076B2 (en) 2019-09-17
WO2016089792A1 (en) 2016-06-09
SG11201708094XA (en) 2017-11-29
EP3227852A4 (en) 2018-05-23
US10545845B1 (en) 2020-01-28
AU2015355156A1 (en) 2017-07-20
US20160379465A1 (en) 2016-12-29
US10261850B2 (en) 2019-04-16
JP6652247B2 (en) 2020-02-19
US20190003929A1 (en) 2019-01-03
CA2969455A1 (en) 2016-06-09
US9842034B2 (en) 2017-12-12
US20160378585A1 (en) 2016-12-29
US9864665B2 (en) 2018-01-09
US20170161659A1 (en) 2017-06-08
US20170075778A1 (en) 2017-03-16
EP3350978A1 (en) 2018-07-25
US11144378B2 (en) 2021-10-12
CN107408225A (en) 2017-11-28
WO2017048640A1 (en) 2017-03-23
CA2969452A1 (en) 2016-06-09
EP3387531A1 (en) 2018-10-17
US9471452B2 (en) 2016-10-18
EP3227784A1 (en) 2017-10-11
US9910751B2 (en) 2018-03-06
HK1244938A1 (en) 2018-08-17
US20190087256A1 (en) 2019-03-21
JP2018528693A (en) 2018-09-27
US20160155098A1 (en) 2016-06-02
HK1255721A1 (en) 2019-08-23
US20170161130A1 (en) 2017-06-08
CA3111567A1 (en) 2020-03-12
SG11201804807VA (en) 2018-07-30
US10754721B2 (en) 2020-08-25
US20200278273A9 (en) 2020-09-03
CN107408225B (en) 2020-01-07
US20160153806A1 (en) 2016-06-02
US20170161621A1 (en) 2017-06-08
CN107408226A (en) 2017-11-28

Similar Documents

Publication Publication Date Title
US10261850B2 (en) Aggregate predictive model and workflow for local execution
US11036902B2 (en) Dynamic execution of predictive models and workflows
US10579750B2 (en) Dynamic execution of predictive models
US10878385B2 (en) Computer system and method for distributing execution of a predictive model
US10254751B2 (en) Local analytics at an asset
EP3427200B1 (en) Handling of predictive models based on asset location
US20180247239A1 (en) Computing System and Method for Compressing Time-Series Values
AU2017311107A1 (en) Computer architecture and method for recommending asset repairs
CA2989806A1 (en) Local analytics at an asset
WO2018213617A1 (en) Computing system and method for approximating predictive models and time-series values

Legal Events

Date Code Title Description
AS Assignment

Owner name: UPTAKE TECHNOLOGIES, INC., ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NICHOLAS, BRAD;REEL/FRAME:035894/0776

Effective date: 20150619

AS Assignment

Owner name: UPTAKE TECHNOLOGIES, INC., ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KOLB, JASON;REEL/FRAME:038653/0177

Effective date: 20160505

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION