US20150039280A1 - Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation - Google Patents

Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation Download PDF

Info

Publication number
US20150039280A1
Authority
US
United States
Prior art keywords
dynamic system
lpv
parameter varying
nonlinear
linear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/520,791
Inventor
Wallace E. Larimore
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Adaptics Inc
Original Assignee
Adaptics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Adaptics Inc filed Critical Adaptics Inc
Priority to US14/520,791 priority Critical patent/US20150039280A1/en
Assigned to Adaptics, Inc. reassignment Adaptics, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LARIMORE, WALLACE E.
Publication of US20150039280A1 publication Critical patent/US20150039280A1/en
Abandoned legal-status Critical Current

Classifications

    • G06F 17/5018
    • G06F 30/23 — Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
    • G06F 17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F 17/18 — Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F 9/48 — Program initiating; program switching, e.g. by interrupt
    • G06F 2111/10 — Numerical modelling
    • G06F 2218/00 — Aspects of pattern recognition specially adapted for signal processing

Definitions

  • the modeling of nonlinear and time-varying dynamic processes or systems from measured output data and possibly input data is an emerging area of technology. Depending on the area of theory or application, it may be called time series analysis in statistics, system identification in engineering, longitudinal analysis in psychology, and forecasting in financial analysis.
  • Subspace methods can avoid iterative nonlinear parameter optimization that may not converge, and use numerically stable methods of considerable value for high order large scale systems.
  • a solution to the general problem of identification of nonlinear systems is known as the general nonlinear canonical variate analysis (CVA) procedure.
  • CVA: canonical variate analysis.
  • the Lorenz attractor is a chaotic nonlinear system described by a simple nonlinear difference equation.
  • nonlinear functions of the past and future are determined to describe the state of the process, which is, in turn, used to express the nonlinear state equations for the system.
  • One major difficulty in this approach is finding a feasible computational implementation, since the number of required nonlinear functions of past and future expands exponentially, as is well known. This difficulty has often been encountered in finding a solution to the system identification problem that applies to general nonlinear systems.
  • One exemplary embodiment describes a method for utilizing nonlinear, time-varying and parameter-varying dynamic processes.
  • the method may be used for generating reduced models of systems having time varying elements.
  • the method can include steps for expanding state space difference equations; expressing difference equations as a linear, time-invariant system in terms of outputs and augmented inputs; and estimating coefficients of the state equations.
  • Another exemplary embodiment may describe a system for estimating a set of equations governing nonlinear, time-varying and parameter-varying processes.
  • the system can have a first input, a second input, a feedback box and a time delay box. Additionally, in the system, the first input and the second input may be passed through the feedback box to the time delay box to produce an output.
  • the word “exemplary” means “serving as an example, instance or illustration.”
  • the embodiments described herein are not limiting, but rather are exemplary only. It should be understood that the described embodiments are not necessarily to be construed as preferred or advantageous over other embodiments.
  • the terms “embodiments of the invention”, “embodiments” or “invention” do not require that all embodiments of the invention include the discussed feature, advantage or mode of operation.
  • in FIGS. 1-6, methods and systems for empirical modeling of time-varying, parameter-varying and nonlinear difference equations may be described. The methods and systems can be implemented and utilized to provide a variety of results and may be implemented efficiently.
  • a flow chart of a methodology for empirical modeling of time-varying, parameter varying and nonlinear difference equations may be shown.
  • a set of time-varying, parameter varying and, if desired, nonlinear state space difference equations may be utilized.
  • the equations may then be expanded with respect to a chosen set of basis functions; for example, nonlinear input-output equations may be expanded in polynomials in x_t and u_t.
  • the difference equations may then be expressed as a linear time-invariant system, for example in terms of the outputs y_t and augmented inputs, which can include the inputs u_t and basis functions, for example polynomials, in the inputs u_t, scheduling functions ρ_t and states x_t.
  • FIG. 2 may show an exemplary flow chart where a linear, parameter varying system of difference equations may be utilized.
  • a set of linear parameter varying state space equations such as those shown below as equation 1 and equation 2, may be used.
  • x_{t+1} = A_0 x_t + B_0 u_t + [A_1 ρ_t(1) + … + A_s ρ_t(s)] x_t + [B_1 ρ_t(1) + … + B_s ρ_t(s)] u_t  (1)
  • y_t = C_0 x_t + D_0 u_t + [C_1 ρ_t(1) + … + C_s ρ_t(s)] x_t + [D_1 ρ_t(1) + … + D_s ρ_t(s)] u_t  (2)
  • the state space difference equations may be expanded with respect to polynomials in the scheduling function ρ_t, states x_t and inputs u_t, for example as shown in equations 3 and 4 below.
  • x_{t+1} = A_0 x_t + B_0 u_t + [A_1 … A_s](ρ_t ⊗ x_t) + [B_1 … B_s](ρ_t ⊗ u_t)  (3)
  • y_t = C_0 x_t + D_0 u_t + [C_1 … C_s](ρ_t ⊗ x_t) + [D_1 … D_s](ρ_t ⊗ u_t)  (4)
  • the difference equations can be expressed in terms of the original outputs y_t and augmented inputs [u_t, (ρ_t ⊗ x_t), (ρ_t ⊗ u_t)] that are functions of u_t, x_t and ρ_t.
  • the difference equations can have linear time-invariant unknown coefficients (A_0, [B_0 A_⊗ B_⊗], C_0, [D_0 C_⊗ D_⊗]) that can be estimated, as shown in equations 5 and 6 below.
  • x_{t+1} = A_0 x_t + [B_0 A_⊗ B_⊗] [u_t; (ρ_t ⊗ x_t); (ρ_t ⊗ u_t)]  (5)
  • y_t = C_0 x_t + [D_0 C_⊗ D_⊗] [u_t; (ρ_t ⊗ x_t); (ρ_t ⊗ u_t)]  (6)
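As a sketch of how the coefficients in equations 5 and 6 can be estimated once a state sequence is available, the following fragment builds the augmented input [u_t, ρ_t ⊗ x_t, ρ_t ⊗ u_t] and solves the resulting linear regression for the state equation coefficients. The function name and the noiseless, known-state setting are illustrative assumptions, not part of the patent text:

```python
import numpy as np

def estimate_lpv_coefficients(x, u, rho):
    """Least-squares estimate of the LTI coefficients in the state equation.

    x:   (N+1, nx) state sequence (assumed known here for illustration)
    u:   (N, nu) inputs
    rho: (N, s) scheduling parameters

    Returns a matrix whose column blocks are [A0 | B0 | A1..As | B1..Bs],
    mapping [x_t; u_t; rho_t (x) x_t; rho_t (x) u_t] to x_{t+1}.
    """
    N = u.shape[0]
    # Kronecker products rho_t (x) x_t and rho_t (x) u_t, one row per t
    kron_x = np.einsum('ts,tn->tsn', rho, x[:N]).reshape(N, -1)
    kron_u = np.einsum('ts,tn->tsn', rho, u).reshape(N, -1)
    Z = np.hstack([x[:N], u, kron_x, kron_u])           # regressors
    Theta, *_ = np.linalg.lstsq(Z, x[1:N + 1], rcond=None)
    return Theta.T
```

On noiseless simulated data with persistently exciting u_t and ρ_t, the regression recovers the (A_i, B_i) blocks exactly, which is the sense in which the difference equations are linear time-invariant in the augmented inputs.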
  • the augmented inputs [u_t, (ρ_t ⊗ x_t), (ρ_t ⊗ u_t)], and, in some exemplary embodiments, specifically ρ_t ⊗ x_t, can involve the unknown state vector x_t, so iteration may be utilized or desired.
  • iteration using an iterated algorithm as described in further detail below, may be utilized.
  • FIG. 3 can show a flow chart of an iterated algorithm that may be implemented for iterated subspace identification.
  • nonlinear difference equations can be expanded in additive basis functions and expressed in linear time-invariant form with augmented inputs. This can include, in some examples, nonlinear basis functions involving outputs, states and scheduling functions.
  • the state estimate x̂_t^[0] is unknown, which is equivalent to the (A_⊗ C_⊗) terms not being present in the LPV model, so the corresponding terms may be deleted from the set of augmented inputs.
  • the iterated algorithm may then be implemented using the augmented inputs as the inputs and can compute estimates θ̂^[1] of the model parameters.
  • the state estimates x̂_t^[1] can be computed along with the one-step prediction innovations. Then the likelihood function can be evaluated.
  • an iteration k, for k ≥ 2, may be made.
  • the state estimate x̂_t^[k−1] may be initialized for all t.
  • the iterative algorithm may then be implemented using the augmented inputs as the inputs and can compute estimates θ̂^[k] of the parameters.
  • the state estimates x̂_t^[k] may be computed.
  • the one-step prediction innovations may also be computed and the likelihood function evaluated.
  • the convergence can be checked.
  • the change in the values of the log likelihood function and the state orders between iteration k−1 and iteration k can be compared. If, in some examples, the state order is the same and the change in the log likelihood function is less than a chosen threshold ε (in many examples less than one, for example 0.01), then the iterations may end or be stopped. Otherwise, where the change is above the chosen threshold ε, step 306 above may be returned to and iteration k+1 may be performed. Following the performance of iteration k+1, the convergence may then be checked again.
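The convergence test above can be sketched as a small predicate; the function name and the default value of ε are illustrative assumptions:

```python
def lpv_iteration_converged(loglik_prev, loglik_curr,
                            order_prev, order_curr, eps=0.01):
    """Stop when the state order is unchanged between iterations k-1 and k
    and the log likelihood changed by less than the chosen threshold eps
    (typically below one, e.g. 0.01, as in the text)."""
    return order_prev == order_curr and abs(loglik_curr - loglik_prev) < eps
```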
  • a different approach may be taken to directly and simply obtain optimal or desired estimates of the unknown parameters for the case of autocorrelated errors and feedback in the system using, for example, subspace methods developed for linear time-invariant systems. This may be done by expressing the problem in a different form that can lead to a desire to iterate on the state estimate; however, the number of iterations may be very low, and, to further simplify the system and its development, stochastic noise may be removed.
  • the state space matrices can have the form of the following equations 9 through 12.
  • system identification methods for the class of LPV systems can have a number of potential applications and economic value.
  • Such systems can include, but are not limited to, aerodynamic and fluid dynamic vehicles, for example aircraft and ships, automotive engine dynamics, turbine engine dynamics, chemical processes, for example stirred tank reactors and distillation columns, amongst others.
  • One feature can be that, at any given operating point ρ_t, the system dynamics can be described as a linear system.
  • the scheduling parameters ρ_t may be complex nonlinear functions of operating point variables, for example, but not limited to, speed, pressures, temperatures, fuel flows and the like, that may be known or accurately measured variables that characterize the system dynamics with possibly unknown constant matrices A, B, C and D.
  • ρ_t may be computable or determinable from knowledge of any such operating point variables.
  • LPV models of automotive engines can involve LPV state space equations that explicitly express the elements of the vector ρ_t as very complex nonlinear functions of various operating point variables.
  • the scheduling parameter ρ_t may be available when the system identification computations are performed. This can be a relaxation of the real-time requirement of such applications as real-time control or filtering.
  • the LPV equations can be written in time-invariant form by associating the scheduling parameter ρ_t with the inputs u_t and states x_t, as
  • x_{t+1} = A_0 x_t + B_0 u_t + [A_1 … A_s](ρ_t ⊗ x_t) + [B_1 … B_s](ρ_t ⊗ u_t)  (13)
  • y_t = C_0 x_t + D_0 u_t + [C_1 … C_s](ρ_t ⊗ x_t) + [D_1 … D_s](ρ_t ⊗ u_t)  (14)
  • x_{t+1} = A_0 x_t + [B_0 A_⊗ B_⊗] [u_t; (ρ_t ⊗ x_t); (ρ_t ⊗ u_t)]  (15)
  • y_t = C_0 x_t + [D_0 C_⊗ D_⊗] [u_t; (ρ_t ⊗ x_t); (ρ_t ⊗ u_t)]  (16)
  • the feedback inputs f_t can now be considered as actual inputs to the LTI system.
  • the matrices [A_⊗ B_⊗; C_⊗ D_⊗] of the LPV system description can be the appropriate quantities to describe the LTI feedback representation of the LPV system.
  • x_t in ρ_t ⊗ x_t may not be a known or measured quantity
  • a prior estimate of x t may be available or utilized and iterations may be used to obtain a more accurate or desired estimate of x t .
  • an LPV system can be expressed as a linear time-invariant system with nonlinear internal feedback that can involve the known parameter-varying functions ρ_t.
  • the system matrices P_i of rank r_i may be factored for each i with 1 ≤ i ≤ s using a singular value decomposition, such as that shown in equation 19.
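A rank-r factorization via the SVD can be sketched as follows; this is a generic low-rank factorization, and the specific factor naming of equation 19 is not reproduced here:

```python
import numpy as np

def factor_rank(P, r):
    """Factor P (approximately) as L @ R with inner dimension r, using the
    singular value decomposition and keeping the r dominant singular values."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])          # left factor, shape (m, r)
    R = np.sqrt(s[:r])[:, None] * Vt[:r]   # right factor, shape (r, n)
    return L, R
```

When P actually has rank r, the product L @ R reconstructs P to floating-point accuracy.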
  • Exemplary FIG. 4 may be a schematic diagram of Equations 15 and 16.
  • the state equation 15 involves the upper boxes in 402, while the measurement equation 16 involves the lower boxes in 402.
  • ΔT 422 is a time delay of one sample duration, with the right-hand side of equation 15 at 444 entering 422 and the left-hand side, equal to the state x_{t+1}, leaving. This is a recursion similar to equations 14 and 15, so the time index can be changed from “t” to “t+1” for the figure before the start of the next iteration, continuing until entering boxes 420, 430 and 410.
  • the scheduling parameters ρ_t 406, inputs u_t 408 and outputs y_t 446 are variables
  • the upper four boxes are multiplication, from left to right, by B_0 418, A_0 420, B_⊗ 414 and A_⊗ 416, respectively.
  • the lower boxes are the same multiplications with A replaced by C and B replaced by D, depicted as D_⊗ 424, C_⊗ 426, D_0 428 and C_0 430.
  • the Kronecker products involving ρ_t and, successively, x_t and u_t are formed in 410 and 412, respectively.
  • ΔT 422 can represent a time delay block of duration ΔT that can act similar to a date line for this exemplary embodiment. The arrows in exemplary FIG. 4 therefore indicate a time flow or an actual sequence of operations; the flow may start at 406, 408 and the output of 422 and proceed through the diagram.
  • sample time t+1 may then begin.
  • the same quantity is maintained, but all of the time labels can be changed to t+1 throughout the process shown in exemplary FIG. 4.
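The flow through the FIG. 4 diagram for a single sample time can be sketched as one state/measurement update; the matrix names follow equations 15 and 16, while the function itself is an illustrative assumption:

```python
import numpy as np

def lpv_step(x, u, rho, A0, B0, C0, D0, Akr, Bkr, Ckr, Dkr):
    """One pass through the FIG. 4 diagram: form the Kronecker products
    (boxes 410/412), apply the state and measurement maps, and return
    (x_{t+1}, y_t), with x_{t+1} ready for the unit-delay block."""
    zx = np.kron(rho, x)   # rho_t (x) x_t
    zu = np.kron(rho, u)   # rho_t (x) u_t
    x_next = A0 @ x + B0 @ u + Akr @ zx + Bkr @ zu   # equation 15
    y = C0 @ x + D0 @ u + Ckr @ zx + Dkr @ zu        # equation 16
    return x_next, y
```

Here Akr = [A_1 … A_s], Bkr = [B_1 … B_s], and likewise for Ckr, Dkr, so Akr @ zx equals the sum of ρ_t(i) A_i x_t over i.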
  • this can be equivalent to the LPV form shown in equations 15 and 16, where the state equations can be linear in the scheduling parameter vector ρ.
  • the state equations for x_{t+1} and y_t may be as shown in equations 28 and 29 below.
  • equations 28 and 29 may be the same as equations 15 and 16.
  • the LPV coefficient matrix P_i = [A_i B_i; C_i D_i] can be the regression matrix of the left-hand-side state equation variables (x_{t+1}; y_t) on the vector of nonlinear terms [ρ_{i,t} x_t; ρ_{i,t} u_t].
  • the LTI nonlinear feedback representation can remove a major barrier to applying existing subspace identification algorithms to the identification of LPV systems, and overcomes previous problems with exponentially growing numbers of nonlinear terms used in other methods.
  • the above LTI nonlinear feedback representation can make it clear that the nonlinear terms (ρ_t ⊗ x_t; ρ_t ⊗ u_t) can be interpreted as inputs to an LTI nonlinear feedback system. Therefore it may be possible to directly estimate the matrices of the LTI system state space equations using linear subspace methods that can be accurate for processes with inputs and feedback.
  • LTI system matrices and state vectors may be determined following the reduction of an LTI subsystem of a nonlinear feedback system involving known scheduling functions and the state of the LTI subsystem. This embodiment can involve taking the iterative determination of both the LTI system state as well as the LTI state space matrices describing the LTI system.
  • One example may be to consider the polynomial system as a linear system in x and u with additional additive input terms in the higher order product terms, so the additional inputs are ρ_t ⊗ x_t and ρ_t ⊗ u_t.
  • the scheduling variables ρ_t are assumed to be available in real time as operating points or measured variables. If accurate estimates of the state x_t were also available, then the problem would reduce to a direct application of the iterative algorithm for system identification. Since the variables x_t are not available until after the solution of the system identification, a different approach may be utilized.
  • an initial estimate of the state vector may be made.
  • system identification may be performed on the terms in the state equations involving the variables x_t, u_t and ρ_t ⊗ u_t, but not the variables ρ_t ⊗ x_t.
  • LTI: linear time-invariant
  • an iterated estimate of the state vectors may be made.
  • the state vector X_{1,N}^[1] can be used as an initial estimate for x_t in the terms ρ_t ⊗ x_t in equations 15 and 16.
  • the iterative algorithm can be applied to obtain an estimate of the system matrices A, B, C, and D and a refined estimate of X 1,N [2] . Further, this step may then be iterated until a desired convergence is achieved.
  • Exemplary steps one through three above may therefore work with only a few iterations.
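Exemplary steps one through three can be sketched as an outer loop in which the first pass omits the ρ_t ⊗ x_t terms and later passes refine the state estimate. Here `fit_lti` and `estimate_states` are hypothetical stand-ins for an LTI subspace identification routine and a state filter, not names from the patent:

```python
import numpy as np

def identify_lpv(y, u, rho, fit_lti, estimate_states, max_iter=10, eps=0.01):
    """Iterate LTI subspace identification with a refined state estimate.

    On the first pass x_hat is None, signalling fit_lti to drop the
    rho (x) x_t augmented inputs; subsequent passes include them using
    the previous iteration's state estimate."""
    x_hat = None
    loglik_prev = -np.inf
    model = None
    for k in range(max_iter):
        model, loglik = fit_lti(y, u, rho, x_hat)   # subspace step
        x_hat = estimate_states(model, y, u, rho)   # refined state estimate
        if abs(loglik - loglik_prev) < eps:         # convergence check
            break
        loglik_prev = loglik
    return model, x_hat
```

With a log likelihood that levels off quickly, the loop exits after a few passes, consistent with the text's observation that only a few iterations are typically needed.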
  • the iterative algorithm can be used to address the previously known problem of LPV system identification.
  • the following is an exemplary discussion of using the iterative algorithm in directly identifying the coefficients F_ij and H_ij of the additive polynomial expansions of the nonlinear difference equation functions f(x_t, u_t, ν_t) and h(x_t, u_t, ν_t), respectively.
  • This may be a very compact and parsimonious parameterization for such a nonlinear system.
  • the iterative algorithm described herein for linear time-invariant systems can therefore be used with only a very modest increase in computational requirements.
  • a linear parameter varying system that can be affine in the scheduling variables ρ_t can be expressed in time-invariant form involving the additional input variables ρ_t ⊗ x_t and ρ_t ⊗ u_t. Note that this involves nonlinear functions of ρ_t with x_t and u_t.
  • the dynamic system can be linear time-invariant in these nonlinear functions of the variables.
  • the effect of additional inputs can be traced through the iterative algorithm.
  • the exemplary steps outlined below may further be reflected in the table shown in exemplary FIG. 5 and the flow chart of exemplary FIG. 6 .
  • the ARX model fitting can be a linear regression problem that makes no prior assumptions on the ARX order other than the maximum ARX order considered. If the identified ARX order is near the maximum considered, the maximum ARX order considered can be doubled and the regression recomputed.
  • a corrected future can be computed.
  • the effect of future inputs on future outputs can be determined using the ARX model and subtracted from the outputs. The effect of this can be to compute the future outputs that would be obtained from the state at time t if there were no inputs after time t.
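A toy sketch of the corrected-future idea: subtract the ARX-predicted response to the future inputs from the observed future outputs, leaving the part attributable to the state at time t. Here `arx_response` is a hypothetical stand-in for the identified ARX model's response to the inputs alone:

```python
import numpy as np

def corrected_future(y, u, t, horizon, arx_response):
    """Future outputs over [t, t+horizon) with the ARX-predicted effect of
    the inputs in that window subtracted, leaving the part explained by
    the state at time t."""
    return y[t:t + horizon] - arx_response(u[t:t + horizon])
```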
  • a canonical variate analysis can be made or computed.
  • the CVA between the past and the corrected future can be computed.
  • the covariance matrices among and between the past and corrected future may also be computed. This may be similar to an SVD on the joint past-future covariance matrix, which is of the order of the covariance of the past used to obtain the ARX model.
  • a result of this step is to obtain estimates of the states of the system, called ‘memory’.
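A compact sketch of a canonical variate analysis between the past and the corrected future, returning the leading canonical variates of the past as the ‘memory’ estimate. This is a textbook CVA via a whitened cross-covariance SVD, not the patent's exact computation:

```python
import numpy as np

def inv_sqrt(C):
    """Symmetric inverse square root of a positive-definite covariance."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def cva_memory(past, future, order):
    """CVA of past vs. (corrected) future: whiten both blocks, take the SVD
    of the cross-covariance, and keep the first `order` variates of the past."""
    p = past - past.mean(axis=0)
    f = future - future.mean(axis=0)
    n = len(p)
    Wp = inv_sqrt(p.T @ p / n)
    Wf = inv_sqrt(f.T @ f / n)
    U, s, _ = np.linalg.svd(Wp @ (p.T @ f / n) @ Wf)
    J = U[:, :order].T @ Wp          # canonical loadings for the past
    return p @ J.T, s                # memory m_t and canonical correlations
```

On data where the future depends on one linear function of the past, the first canonical correlation is near one and the rest are near zero, so the memory recovers that function.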
  • a regression using the state equation may be performed.
  • the ‘memory’ from step 606 can be used in the state equations as if it were data and resulting estimates of the state space matrices and covariance matrices of the noise processes can be obtained. These estimates can be asymptotically ML estimates of the state space model with no prior assumptions on the parameter values of the ARX or SS model.
  • the ML solution in the iterative algorithm can be obtained in one computation based on the assumed outputs and inputs in iteration k, as shown in 504 of exemplary FIG. 5, with no iteration on assumed parameter values.
  • the iteration is the result of refinement of the state estimate in the nonlinear term (ρ_t ⊗ x̂_t^[k−1])^T that can be part of the assumed data in iteration k (504).
  • the ARX order lagp identified can be substantially higher due to the nonlinear input terms and depending on the statistically significant dynamics present among the output and augmented input variables.
  • the computation can involve an SVD on the data covariance matrix, which is of dimension dimua*lagp, where lagp is the maximum ARX order considered.
  • the computation that may be utilized for the SVD is of the order of 60*(dimua*lagp)^3, so the computation increases proportionally to (dimp*(dimx+dimu)/dimu)^3.
  • one consequence of augmenting the system inputs by the nonlinear terms ρ_t ⊗ x_t and ρ_t ⊗ u_t may be to increase the past by a factor of dimp*(dimx+dimu)/dimu, and to increase the computation by this factor cubed.
  • this can be very significant; however, there is no exponential explosion in the number of terms or the computation time.
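The bookkeeping above can be checked numerically; for instance, with dimx=4, dimu=2 and dimp=2 the past grows by a factor of 6 and the SVD work by 6³ = 216. The function name is illustrative:

```python
def augmentation_cost(dimx, dimu, dimp):
    """Growth factor of the past vector, dimp*(dimx+dimu)/dimu, when the
    inputs are augmented by rho (x) x and rho (x) u, and the corresponding
    cubic growth of the SVD computation."""
    factor = dimp * (dimx + dimu) / dimu
    return factor, factor ** 3
```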
  • the LPV subspace algorithm of this invention still corresponds to subspace system identification for a linear time-invariant system and, in addition, because of the nonlinearity of the terms [u_t^T (ρ_t ⊗ x_t)^T (ρ_t ⊗ u_t)^T]^T involving the state estimates x̂_t, iteration on the estimate of the system states until convergence can be desired or, in some alternatives, required.
  • the result of the CVA in exemplary step 606 is the computation of an estimate, denoted as m t , of the state sequence x t .
  • the symbol ‘m’ is used, as in the word ‘memory’, to distinguish it from the actual state vector and the various estimates of the state vector used in the EM algorithm discussed below.
  • the estimate m̂_t, in combination with the maximization step of the EM algorithm, can be shown to be a maximum likelihood solution of the system identification problem for the case of a linear time-invariant system. In that case, the global ML solution can be obtained in only one iteration of the algorithm. This differs from the EM and gradient search methods, which almost always utilize many iterations.
  • the CVA estimate m t may actually be an estimate of the state sequence for the system with parameters equal to the maximum likelihood estimates in the case of LTI systems and large sample size. This is different from the usual concept of first obtaining the ML parameter estimates and then estimating the states using a Kalman filter. Not only is the optimal state sequence for the ML parameters obtained in a single iteration, the optimal state order may also be determined in the process. In the usual ML approach, it can be desired or, in some alternatives, required to obtain the ML estimates for each choice of state order and then proceed to hypothesis testing to determine the optimal state order.
  • the convergence of the iterative algorithm for the case of LPV may be described.
  • a substantially similar approach may be taken for other forms of nonlinear difference equations.
  • the iterative algorithm can be closely related to the class of EM algorithms, which can be shown to always converge under an assumption on the LPV system stability.
  • the rate of convergence can be computed to be geometric. This latter result is notable since the EM algorithm typically makes rapid early progress but becomes quite slow at the end. The reason for the rapid terminal convergence of the LPV algorithm, along with issues of initialization, stability and convergence, will be elaborated below.
  • the replacements that can be made in the GN discussion to obtain the LPV algorithm may be as follows: replace the LTI state equations with the LPV state equations and, for the missing data, replace the state estimate from the Kalman smoother with the ‘memory’ vector m t in the iterative algorithm.
  • the consequence of this can be significant because for linear systems as in GN the iterative algorithm described herein can obtain the global ML parameter estimates in one step in large samples. On the other hand, for linear systems it may take the EM algorithm many iterations to obtain the ML solution.
  • the subspace approach can specify the CVA state estimate m t or ‘memory’, as the missing data.
  • the memory m t is the estimate of the state vector obtained by a canonical variate analysis between the corrected future and the past obtained in exemplary step 606 of the iterative algorithm using the input and output vectors specified in FIG. 5 .
  • This may be similar to a Kalman filter state estimate at the global ML parameter estimates associated with the output and input data at iteration k rather than a Kalman smoother state estimate at the last estimated parameter value.
  • a difference is that in the exemplary step 608 of the CVA algorithm the expectation can be with respect to the true global ML estimates associated with the output and input data at iteration k whereas the GN estimate is an expectation with respect to the parameter value obtained in the previous iteration.
  • Lemma 3.1 of GN holds but also can achieve the global ML estimate associated with the input-output vectors of exemplary FIG. 5 in one step.
  • Lemma 3.2 of GN can be replaced by the iterative algorithm to obtain the memory estimates m̂_t^[k].
  • Lemma 3.3 of GN is the same result as obtained in the iterative algorithm.
  • An additional step may be used to compute x̂_t^[k] from the estimates θ̂^[k] and the linear time-varying state equations given by the LPV state equations. This step may be desired to obtain the state estimate x̂_t^[k] for starting the next iteration k+1.
  • the memory m̂_t^[k] could be used in place of the state estimate x̂_t^[k].
  • m̂_t^[k] projected on the recursive structure of the state equations in equations (43GN) and (44GN) can produce the ML state estimates asymptotically and the corresponding optimal filtered state estimates x̂_t^[k].
  • the LPV system identification algorithm may converge at a geometric rate near the maximum likelihood solution.
  • the result for a linear system can be developed.
  • the same approach may work for an LPV system, but the expressions below may be time dependent and can be of greater complexity.
  • the time-invariant feedback form of equations 17 and 18 can be considered, with the substitution of notation (Ã, B̃, C̃, D̃, ũ_t, u_t) by (A, B, C, D, u_t, ū_t).
  • equations 17 and 18, with noise ν_t, may be written in innovation form as equations 32 and 33 below.
  • recursively substituting the right-hand side of equation 34 for x_t can provide equation 35 below.
  • ū_t can denote the original inputs, such that u_t can include the nonlinear Kronecker product terms.
  • J can be arbitrarily close to constant for a sufficiently large iteration k.
  • L_t can be time-varying and can have the time-varying scheduling parameters ρ_t combined with the terms of L. Also, if ρ_t is bounded for all t, then L_t will be similarly bounded.
  • the notation can mean that L^{N−k} extends to the right.
  • using M to denote the upper triangular matrix can give the fundamental expression for the difference δX_{lag:N} between state sequences X_{lag:N} at successive iterations k and k−1, as equation 36 below.
  • the subscript ⊗x means to select the submatrix of B−KD with columns corresponding to the rows of ⊗x in ū_t.
  • the convergence of the iterative linear subspace computation may be affected by the stability of the LPV difference equations and, more specifically, the stability of the LPV linear subspace system identification described herein.
  • a set of time-invariant linear state space difference equations may be stable if and only if all of the eigenvalues of the state transition matrix are stable, for example if the eigenvalue magnitudes are less than 1.
  • the LPV case is more complex, but for the purposes of this exemplary embodiment the rate of growth or contraction per sample time can be given, for each eigenvector component of the state vector x_t, by the respective eigenvalues of the LPV state transition matrix from equation 9, now shown as equation 37 below.
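The per-sample growth or contraction rates can be examined by evaluating the eigenvalues of the LPV state transition matrix at each sample; the affine form A(ρ_t) = A_0 + Σ_i ρ_t(i) A_i used here follows equation 13, while the function itself is an illustrative assumption:

```python
import numpy as np

def lpv_eigen_magnitudes(A0, A_list, rho_t):
    """Eigenvalue magnitudes of the LPV state transition matrix
    A(rho_t) = A0 + sum_i rho_t[i] * A_i at one sample time; magnitudes
    above 1 indicate growth of the corresponding eigenvector component."""
    A = A0 + sum(r * Ai for r, Ai in zip(rho_t, A_list))
    return np.abs(np.linalg.eigvals(A))
```

Scanning these magnitudes over a recorded ρ_t history is one way to see where the model is locally expanding, which connects to the instability editing discussed below.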
  • the difficulty can lie in the algorithm initialization and sample size, since at the optimal solution with a large sample the algorithm is stable and convergent. If it were possible to compute with infinite precision, then problems with ill-conditioning could be avoided; however, with 15 or 30 decimal places of accuracy, for example, some real data sets, such as those for aircraft wing flutter, can benefit from further consideration.
  • some exemplary embodiments may deal with manners of correcting for or otherwise lessening any undesired effects that may result from algorithm instability. For example, if the state sequence x̂_t^[k] is sufficiently close to the optimum, based upon the terminal convergence results described previously, the iterative algorithm may be stable provided the LPV system is stable. Conversely, in some examples, large initial errors in the estimate x̂_t^[1] can lead to an unstable computation.
  • outlier editing of unstable regions may be performed.
  • the time intervals with significant instabilities can be determined and removed from the computation.
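One way to implement the editing step above is to flag samples whose state-estimate norm exceeds a bound and drop them from the next iteration's regression. The norm test and threshold are illustrative assumptions, since the text does not specify the instability detector:

```python
import numpy as np

def stable_sample_mask(x_hat, bound):
    """True for samples whose state estimate norm stays below `bound`;
    flagged (False) samples mark the unstable intervals to be removed
    from the computation."""
    return np.linalg.norm(x_hat, axis=1) < bound
```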
  • the scheduling parameters can be chosen to enhance the system identification in several ways.
  • An initial region that can avoid computational instabilities can be chosen to obtain sufficiently accurate initial estimates of the LPV parameters (A, B, C, D). This can then be used in other regions to initialize the state estimate of the algorithm with sufficient accuracy that computational instabilities will not be encountered.
  • the removal of unstable outliers at each iteration can be the most general and robust procedure.
  • the number of outliers can be expected to decrease until there is rapid terminal convergence.
  • a counterexample to this expectation is when the beginning of the scheduling parameter ρt time history has little variation, so that the LPV model fitted to this part of the data is good for that portion but is a poor global model. Then, in the later part of the time history, there can be considerable variation in ρt such that unstable behavior may result.
  • the proposed algorithm can perform much better than existing methods, which presently are not feasible on industrial problems. Further, in many situations, it can be desired to design the experiment to obtain results of a desired fidelity for a specified global region of the operating space at as little cost in time and resources as possible. Because the iterative algorithm's linear subspace method is a maximum likelihood based procedure, experiment designs can be developed much as for LTI system identification. Also, as it identifies a stochastic model with estimated disturbance models, including confidence bands on quantities such as dynamic frequency response functions, the required sample size and system input excitation can be developed with little prior information on the disturbance processes.
  • the LPV methods and systems described herein may be extended to nonlinear systems. For example, it may be shown that a number of complex and nonlinear systems can be expressed in an approximate LPV form that can be sufficient for application of the LPV subspace system identification methods described herein.
  • a general nonlinear, time varying, parameter varying dynamic system can be described by a system of nonlinear state equations, such as those shown in equations 38 and 39.
  • x t can be the state vector
  • u t can be the input vector
  • y t can be the output vector
  • v t can be a white noise measurement vector.
  • the ‘scheduling’ variables ρt, which can be time-varying parameters, can describe the present operating point of the system.
  • Very general classes of functions f(·) and h(·) can be represented by additive Borel functions that need not be continuous.
  • x t (i) =x t {circle around (x)}x t (i−1) and similarly for u t (j).
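The recursive Kronecker-power construction above can be sketched with NumPy's `kron` (the helper name `kron_power` is ours, introduced only for this illustration):

```python
import numpy as np

def kron_power(x, i):
    """x^(i) = x kron x^(i-1), with x^(1) = x, per the recursion in the text."""
    out = x
    for _ in range(i - 1):
        out = np.kron(x, out)
    return out

x = np.array([2.0, 3.0])
x2 = kron_power(x, 2)  # all second-order products of the entries of x
```

Note that the dimension of `kron_power(x, i)` is n**i for an n-vector, which is the source of the exponential growth discussed below.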
  • equations 40 and 41 may be polynomial expansions of the nonlinear functions f( ⁇ ) and h( ⁇ ).
  • the nonlinear equations may involve nonlinear functions of relatively simple form such as the approximating polynomial equations that involve only sums of products that are readily computed for various purposes.
  • the problem can become difficult even for low dimensions of y, u, and x, and even using subspace methods.
  • the matrix dimensions can grow exponentially with the dimension of the ‘past’ that can be used. This can occur in expanding equation 40 by repeated substitution into x t [k] on the right hand side of the state equation 40 with x t on the left hand side of equation 40.
  • equation 40, with t replaced by t−1, can be raised to the power lag p, where lag p, the order of the past, is typically selected as the estimated ARX model order.
  • the number of additive terms can increase exponentially.
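The exponential growth of additive terms can be illustrated with a simple count; the counting function and the example dimensions below are hypothetical, meant only to show the order of magnitude:

```python
def expanded_past_dim(n, lag):
    """Total number of monomial terms when an n-vector is expanded through
    Kronecker powers of order 1 through `lag` (illustrative count)."""
    return sum(n**i for i in range(1, lag + 1))

small = expanded_past_dim(3, 4)    # 3 + 9 + 27 + 81 = 120 terms
large = expanded_past_dim(6, 10)   # already tens of millions of terms
```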
  • equations 39 and 40 can be converted through Carleman bilinearization to bilinear vector differential equations in the state variable as shown in equation 42:
  • equation 40 which expresses the state-affine form, can then be rewritten as equations 44 and 45, below.
  • the coefficient matrices (A, B, C, D) may be linear functions of scheduling parameters ⁇ t denoted as (A( ⁇ t ),B( ⁇ t ),C( ⁇ t )D( ⁇ t )) and in state-affine form as in equations 9 through 12.
  • the scheduling parameters ρt may be nonlinear functions of the operating point or other known or accurately measured variables. For example, since the inputs u t appearing in the Kronecker products are multiplicative and can be assumed to be known in real time or accurately measured, they can be absorbed into the scheduling parameters ρt, thereby possibly decreasing their dimension.
  • the bilinear equations can then become those shown in equations 46 and 47 below.
  • Equations 46 and 47 are in explicitly LPV form. Therefore, for functions f(x t ,u t ,ρ t ,v t ) and h(x t ,u t ,ρ t ,v t ) that may be sufficiently smooth and for which there may exist a well defined power series expansion in a neighborhood of the variables, there can exist an LPV approximation of the process.
  • the equations may be linear time varying and, in some instances, may have a higher dimension ⁇ t .
  • This can show the dual roles of inputs and scheduling parameters and how they may be interchangeable in a variety of manners and, for example, their difference may be their rate of variation with time.
  • the knowledge of scheduling can therefore often be derived from the underlying physics, chemistry or other fundamental information.
  • LPV models can therefore be considered, on occasion, as graybox models insofar as they may be able to incorporate considerable global information about the behavior of the process that can incorporate into the model how the process dynamics can change with operating point.

Abstract

Methods and systems for estimating differential or difference equations that can govern a nonlinear, time-varying and parameter-varying dynamic process or system. The methods and systems for estimating the equations may be based upon estimations of observed outputs and, when desired, input data for the equations. The methods and systems can be utilized with any system or process that may be capable of being described with nonlinear, time-varying and parameter-varying difference equations and can be used for automated extraction of the difference equations in describing detailed system or method behavior for use in system control, fault detection, state estimation and prediction and adaptation of the same to changes in a system or method.

Description

    CROSS-REFERENCE APPLICATIONS
  • The present invention claims priority under 35 U.S.C. §120 to U.S. Provisional Patent Application No. 61/239,745, filed on Sep. 3, 2009, the disclosure of which is incorporated by reference herein in its entirety.
  • BACKGROUND
  • The modeling of nonlinear and time-varying dynamic processes or systems from measured output data and possibly input data is an emerging area of technology. Depending on the area of theory or application, it may be called time series analysis in statistics, system identification in engineering, longitudinal analysis in psychology, and forecasting in financial analysis.
  • In the past there has been the innovation of subspace system identification methods and considerable development and refinement including optimal methods for systems involving feedback, exploration of methods for nonlinear systems including bilinear systems and linear parameter varying (LPV) systems. Subspace methods can avoid iterative nonlinear parameter optimization that may not converge, and use numerically stable methods of considerable value for high order large scale systems.
  • In the area of time-varying and nonlinear systems there has been work undertaken, albeit without the desired results. This work is typical of the present state of the art in that rather direct extensions of linear subspace methods are used for modeling nonlinear systems. This approach expresses the past and future as linear combinations of nonlinear functions of past inputs and outputs. One consequence of this approach is that the dimension of the past and future expands exponentially in the number of measured inputs, outputs, states, and lags of the past that are used. When using only a few of each of these variables, the dimension of the past can number over 10^4 or even more than 10^6. For typical industrial processes, the dimension of the past can easily exceed 10^9 or even 10^12. Such extreme dimensions make these methods inefficient at best.
  • Other techniques use an iterative subspace approach to estimating the nonlinear terms in the model and as a result require very modest computation. This approach involves a heuristic algorithm, and has been used for high accuracy model identification in the case of LPV systems with a random scheduling function, i.e. one with white noise characteristics. One of the problems, however, is that in most LPV systems the scheduling function is determined by the particular application, and is often very non-random in character. Several modifications have been implemented to attempt to improve the accuracy for the case of nonrandom scheduling functions, but they did not succeed in substantially improving the modeling accuracy.
  • In a more general context, the general problem of identification of nonlinear systems is known as a general nonlinear canonical variate analysis (CVA) procedure. The problem was illustrated with the Lorenz attractor, a chaotic nonlinear system described by a simple nonlinear difference equation. Thus nonlinear functions of the past and future are determined to describe the state of the process that is, in turn, used to express the nonlinear state equations for the system. One major difficulty in this approach is finding a feasible computational implementation, since the number of required nonlinear functions of past and future expands exponentially, as is well known. This difficulty has often been encountered in finding a solution to the system identification problem that applies to general nonlinear systems.
  • Thus, in some exemplary embodiments described below, methods and systems may be described that can achieve considerable improvement and also produce optimal results in the case where a ‘large sample’ of observations is available. In addition, the method is not ‘ad hoc’ but can involve optimal statistical methods.
  • SUMMARY
  • One exemplary embodiment describes a method for utilizing nonlinear, time-varying and parameter-varying dynamic processes. The method may be used for generating reduced models of systems having time varying elements. The method can include steps for expanding state space difference equations; expressing difference equations as a linear, time-invariant system in terms of outputs and augmented inputs; and estimating coefficients of the state equations.
  • Another exemplary embodiment may describe a system for estimating a set of equations governing nonlinear, time-varying and parameter-varying processes. The system can have a first input, a second input, a feedback box and a time delay box. Additionally, in the system, the first input and the second input may be passed through the feedback box to the time delay box to produce an output.
  • DETAILED DESCRIPTION
  • Aspects of the present invention are disclosed in the following description and related figures directed to specific embodiments of the invention. Those skilled in the art will recognize that alternate embodiments may be devised without departing from the spirit or the scope of the claims. Additionally, well-known elements of exemplary embodiments of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.
  • As used herein, the word “exemplary” means “serving as an example, instance or illustration.” The embodiments described herein are not limiting, but rather are exemplary only. It should be understood that the described embodiments are not necessarily to be construed as preferred or advantageous over other embodiments. Moreover, the terms “embodiments of the invention”, “embodiments” or “invention” do not require that all embodiments of the invention include the discussed feature, advantage or mode of operation.
  • Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, “logic configured to” perform the described action.
  • Generally referring to exemplary FIGS. 1-6, methods and systems for empirical modeling of time-varying, parameter-varying and nonlinear difference equations may be described. The methods and systems can be implemented and utilized to provide for a variety of results and which may be implemented efficiently.
  • As shown in exemplary FIG. 1, a flow chart of a methodology for empirical modeling of time-varying, parameter varying and nonlinear difference equations according to an exemplary embodiment may be shown. Here, in 102, a set of time-varying, parameter varying and, if desired, nonlinear state space difference equations may be utilized. In 104, the equations may then be expanded with respect to a chosen set of basis functions; for example, nonlinear input-output equations may be expanded in polynomials in x t and u t. Then, in 106, the difference equations may be expressed as a linear time-invariant system, for example in terms of outputs y t and augmented inputs ũ t, which can include inputs u t and basis functions, for example polynomials, in the inputs u t, scheduling functions ρt and states x t.
  • Exemplary FIG. 2 may show an exemplary flow chart where a linear, parameter varying system of difference equations may be utilized. In this embodiment in 202 a set of linear parameter varying state space equations, such as those shown below as equation 1 and equation 2, may be used.

  • x t+1 =A 0 x t +B 0 u t +[A 1ρt(1)+ . . . +A sρt(s)]x t +[B 1ρt(1)+ . . . +B sρt(s)]u t   (1)

  • y t =C 0 x t +D 0 u t +[C 1ρt(1)+ . . . +C sρt(s)]x t +[D 1ρt(1)+ . . . +D sρt(s)]u t   (2)
  • Then, in 204, the state space difference equations may be expanded with respect to polynomials in the scheduling function ρt, states x t and inputs u t, for example as shown in equations 3 and 4 below.

  • x t+1 =A 0 x t +B 0 u t +[A 1 . . . A s](ρt {circle around (x)}x t)+[B 1 . . . B s](ρt {circle around (x)}u t)   (3)

  • y t =C 0 x t +D 0 u t +[C 1 . . . C s](ρt {circle around (x)}x t)+[D 1 . . . D s](ρt {circle around (x)}u t)   (4)
  • Next, in 206, the difference equations can be expressed in terms of the original outputs y t and augmented inputs [u t ,(ρt {circle around (x)}x t), (ρt {circle around (x)}u t)] that are functions of u t, x t and ρt. The difference equations can have linear time-invariant unknown (A 0, [B 0 A ρ B ρ], C 0, [D 0 C ρ D ρ]) coefficients that can be estimated, as shown in equations 5 and 6 below.
  • x t+1 =A 0 x t +[B 0 A ρ B ρ][u t ;ρt {circle around (x)}x t ;ρt {circle around (x)}u t]   (Equation 5)
    y t =C 0 x t +[D 0 C ρ D ρ][u t ;ρt {circle around (x)}x t ;ρt {circle around (x)}u t]   (Equation 6)
  • Then, as shown in 208, the augmented inputs [u t ,(ρt {circle around (x)}x t), (ρt {circle around (x)}u t)], and, in some exemplary embodiments, specifically ρt {circle around (x)}x t, can involve the unknown state vector x t, so iteration may be utilized or desired. Thus, in such exemplary embodiments, iteration using an iterated algorithm, as described in further detail below, may be utilized.
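The augmented inputs of equations 5 and 6 can be formed directly with Kronecker products, as sketched below (the values and the helper name are illustrative only):

```python
import numpy as np

def augmented_input(u_t, x_t, rho_t):
    """Stack the augmented input [u_t; rho_t kron x_t; rho_t kron u_t]
    appearing in equations 5 and 6."""
    return np.concatenate([u_t, np.kron(rho_t, x_t), np.kron(rho_t, u_t)])

u_t = np.array([1.0])
x_t = np.array([0.5, -0.5])
rho_t = np.array([2.0, 3.0])
u_aug = augmented_input(u_t, x_t, rho_t)
```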
  • Exemplary FIG. 3 can show a flow chart of an iterated algorithm that may be implemented for iterated subspace identification. In this embodiment, and as seen in 302, nonlinear difference equations can be expanded in additive basis functions and expressed in linear time-invariant form with augmented inputs ut. This can include, in some examples, nonlinear basis functions involving outputs, state and scheduling functions.
  • Then, in 304, the state estimate {circumflex over (x)}t [0] is unknown, which is equivalent to the (A ρ, C ρ) terms not being in the LPV model, so the corresponding terms may be deleted from the set of augmented inputs ũ t. The iterated algorithm may then be implemented using the augmented inputs as the inputs and can compute estimates {circumflex over (Θ)}[1] of the model parameters. Then, using, for example, a Kalman filter at the estimated parameter values {circumflex over (Θ)}[1], the state estimates {circumflex over (x)}t [1] can be computed along with the one-step prediction innovations. Then the likelihood function can be evaluated.
  • Next, in 306, an iteration k for k≧2 may be made. Here the state estimate {circumflex over (x)}t [k−1] may be initialized for all t. The iterative algorithm may then be implemented using the augmented inputs as the inputs and can compute estimates {circumflex over (Θ)}[k] of the parameters. Again, for example, using a Kalman filter at the estimated parameter values {circumflex over (Θ)}[k], the state estimates {circumflex over (x)}t [k] may be computed. The one-step prediction innovations may also be made and the likelihood p(Y 1:N |U 1:N ;{circumflex over (Θ)}[k]) may also be evaluated.
  • In 308 the convergence can be checked. Here the change in the values of the log likelihood function and the state orders between iteration k−1 and iteration k can be compared. If, in some examples, the state order is the same and the change in the log likelihood function is less than a chosen threshold ε (which in many examples may be less than one, for example 0.01), then the iterations may end or be stopped. Otherwise, where the change is above the chosen threshold ε, step 306 above may be returned to and iteration k+1 may be performed. Following the performance of iteration k+1, the convergence may then be checked again.
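The convergence test in 308 can be sketched as a small helper; the function name and the default threshold are illustrative, following the ε example above:

```python
def converged(loglik_prev, loglik_curr, order_prev, order_curr, eps=0.01):
    """Terminate the iteration when the state order is unchanged and the
    change in the log likelihood falls below the chosen threshold eps."""
    return order_prev == order_curr and abs(loglik_curr - loglik_prev) < eps
```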
  • In another exemplary embodiment, a different approach may be taken to directly and simply obtain optimal or desired estimates of the unknown parameters for the case of autocorrelated errors and feedback in the system using, for example, subspace methods developed for linear time-invariant systems. This may be done by expressing the problem in a different form that can lead to a desire to iterate on the state estimate; however, the number of iterations may be very low, and, to further simplify the system and its development, stochastic noise may be removed.
  • For example, consider a linear system where the system matrices are time varying functions of a vector of scheduling parameters ρt=(ρt(1) ρt(2) . . . ρt(s))T of the forms as shown in the equations below:

  • x t+1 =A(ρt)x t +B(ρt)u t   (7)

  • y t =C(ρt)x t +D(ρt)u t   (8)
  • For affine dependence on the scheduling parameters, the state space matrices can have the form of the following equations 9 through 12.
  • A(ρt)=A 0+Σi=1 s ρt(i)A i   (Equation 9)
    B(ρt)=B 0+Σi=1 s ρt(i)B i   (Equation 10)
    C(ρt)=C 0+Σi=1 s ρt(i)C i   (Equation 11)
    D(ρt)=D 0+Σi=1 s ρt(i)D i   (Equation 12)
  • In the above equations, it may be noted that the matrices on the left hand side are expressed as linear combinations, with coefficients (1 ρt(1) ρt(2) . . . ρt(s))T specified by the scheduling parameters ρt, of the constant matrices on the right hand side, which may be called an affine form. In further discussion, the notation A=[A 0 A 1 . . . A s]=[A 0 A ρ] will be used for the system matrix, and similarly for B, C, and D.
  • In some further exemplary embodiments, system identification methods for the class of LPV systems can have a number of potential applications and economic value. Such systems can include, but are not limited to, aerodynamic and fluid dynamic vehicles, for example aircraft and ships, automotive engine dynamics, turbine engine dynamics, chemical processes, for example stirred tank reactors and distillation columns, amongst others. One feature can be that at any given operating point ρt the system dynamics can be described as a linear system. The scheduling parameters ρt may be complex nonlinear functions of operating point variables, for example, but not limited to, speed, pressures, temperatures, fuel flows and the like, that may be known or accurately measured variables that characterize the system dynamics with possibly unknown constant matrices A, B, C and D. It may also be assumed that ρt may be computable or determinable from the knowledge of any such operating point variables. For example, LPV models of automotive engines can involve the LPV state space equations that explicitly express the elements of the vector ρt as very complex nonlinear functions of various operating point variables. In some exemplary embodiments described herein, it may only be desired that the scheduling parameter ρt be available when the system identification computations are performed. This can be a relaxation of the real-time use or requirement for such applications as real-time control or filtering.
  • To simplify the discussion, the LPV equations can be written in time-invariant form by associating the scheduling parameter ρt with the inputs ut and states xt as

  • x t+1 =A 0 x t +B 0 u t +[A 1 . . . A s](ρt {circle around (x)}x t)+[B 1 . . . B s](ρt {circle around (x)}u t)   (13)

  • y t =C 0 x t +D 0 u t +[C 1 . . . C s](ρt {circle around (x)}x t)+[D 1 . . . D s](ρt {circle around (x)}u t)   (14)
  • Here, {circle around (x)} can denote the Kronecker product M{circle around (x)}N, defined for any matrices M and N as the partitioned matrix formed from blocks of i,j as (M{circle around (x)}N)i,j=Mi,jN with the i,j element of M denoted as mi,j. Also, the notation [M;N]=[MT NT]T can be used for stacking the vectors or matrices M and N. Equations 13 and 14 above can then also be written as shown below in the formats of equations 15 and 16.
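The block property of the Kronecker product defined above, (M{circle around (x)}N)i,j=mi,jN, can be checked numerically with a small NumPy illustration (the matrices are invented for the sketch):

```python
import numpy as np

M = np.array([[1.0, 2.0], [3.0, 4.0]])
N = np.array([[0.0, 1.0], [1.0, 0.0]])
K = np.kron(M, N)
# Block (i, j) of M kron N equals m_ij * N; e.g. the (0, 1) block is 2 * N.
block_01 = K[0:2, 2:4]
```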
  • x t+1 =A 0 x t +[B 0 A ρ B ρ][u t ;(ρt {circle around (x)}x t);(ρt {circle around (x)}u t)]   (15)
    y t =C 0 x t +[D 0 C ρ D ρ][u t ;(ρt {circle around (x)}x t);(ρt {circle around (x)}u t)]   (16)
  • As discussed in more detail below, the above equations can be interpreted as a linear time-invariant (LTI) system with nonlinear feedback of ft=[(ρt{circle around (x)}xt);(ρt{circle around (x)}ut)] where the states xt and inputs ut can be multiplied by the time varying scheduling parameters ρt. The feedback ft inputs can now be considered as actual inputs to the LTI system. As shown in further detail below, the matrices [Aρ Bρ; Cρ Dρ] of the LPV system description can be the appropriate quantities to describe the LTI feedback representation of the LPV system.
  • Further, the above equations may now be described as shown below in equations 17 and 18.

  • x t+1 =Ãx t +{tilde over (B)}ũ t   (17)

  • y t ={tilde over (C)}x t +{tilde over (D)}ũ t   (18)
  • Thus for measurements of outputs and inputs ũ t=[u t T (ρt {circle around (x)}x t)T (ρt {circle around (x)}u t)T]T the time-invariant matrices can be (Ã, {tilde over (B)}, {tilde over (C)}, {tilde over (D)})=(A 0, [B 0 A ρ B ρ], C 0, [D 0 C ρ D ρ]) respectively. Also, in situations where x t in ρt {circle around (x)}x t may not be a known or measured quantity, a prior estimate of x t may be available or utilized and iterations may be used to obtain a more accurate or desired estimate of x t.
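As a numerical sanity check of this time-invariant rewriting, one can simulate a small randomly generated LPV system both in its original affine form (equations 7 through 12) and in the LTI form with augmented inputs, and compare the outputs. All dimensions and matrices here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s, N = 2, 1, 2, 20   # states, inputs, scheduling parameters, samples

# Randomly generated affine matrix lists [M_0, M_1, ..., M_s].
A = [0.3 * rng.standard_normal((n, n)) for _ in range(s + 1)]
B = [rng.standard_normal((n, m)) for _ in range(s + 1)]
C = [rng.standard_normal((1, n)) for _ in range(s + 1)]
D = [rng.standard_normal((1, m)) for _ in range(s + 1)]
u = rng.standard_normal((N, m))
rho = rng.standard_normal((N, s))

def affine(M, r):
    """M(rho_t) = M_0 + sum_i rho_t(i) M_i, as in equations 9 through 12."""
    return M[0] + sum(ri * Mi for ri, Mi in zip(r, M[1:]))

# Direct LPV simulation.
x, y_lpv = np.zeros(n), []
for t in range(N):
    y_lpv.append(affine(C, rho[t]) @ x + affine(D, rho[t]) @ u[t])
    x = affine(A, rho[t]) @ x + affine(B, rho[t]) @ u[t]

# LTI form with augmented inputs [u; rho kron x; rho kron u].
B_tilde = np.hstack([B[0]] + A[1:] + B[1:])   # [B0 A_rho B_rho]
D_tilde = np.hstack([D[0]] + C[1:] + D[1:])   # [D0 C_rho D_rho]
x, y_lti = np.zeros(n), []
for t in range(N):
    u_aug = np.concatenate([u[t], np.kron(rho[t], x), np.kron(rho[t], u[t])])
    y_lti.append(C[0] @ x + D_tilde @ u_aug)
    x = A[0] @ x + B_tilde @ u_aug
```

The two simulations agree term by term because [A 1 . . . A s](ρt{circle around (x)}x t)=Σi ρt(i)A i x t.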
  • In still further exemplary embodiments, an LPV system can be expressed as a linear time-invariant system with nonlinear internal feedback that can involve the known parameter varying functions ρt. (see, Section 2.1 of Nonlinear System Identification: A State-Space Approach, Vincent Verdult, 2002, Ph.D. thesis, University of Twente, The Netherlands, the contents of which are hereby incorporated by reference in their entirety). In this exemplary embodiment the system matrices Pi of rank ri may be factored for each i with 1≦i≦s using a singular value decomposition, such as that shown in equation 19.
  • P i =[A i B i ;C i D i]=[B f,i ;D f,i][C z,i D z,i]   (Equation 19)
  • The quantities may then be defined as the following, shown in equations 20 through 23.

  • B f =[B f,1 B f,2 . . . B f,s]  (20)

  • D f =[D f,1 D f,2 . . . D f,s]  (21)

  • C z T =[C z,1 T C z,2 T . . . C z,s T]  (22)

  • D z T =[D z,1 T D z,2 T . . . D z,s T]  (23)
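The rank-r i factorization of equation 19 can be computed with a singular value decomposition, as sketched below; the rank threshold and the rank-one example are illustrative choices, not prescribed by the text:

```python
import numpy as np

def factor_Pi(Ai, Bi, Ci, Di):
    """Factor P_i = [A_i B_i; C_i D_i] as [B_fi; D_fi][C_zi D_zi] using a
    rank-revealing SVD, as in equation 19."""
    n = Ai.shape[0]
    Pi = np.block([[Ai, Bi], [Ci, Di]])
    U, svals, Vt = np.linalg.svd(Pi)
    r = int(np.sum(svals > 1e-10 * svals[0]))   # numerical rank r_i
    left = U[:, :r] * svals[:r]                 # [B_fi; D_fi]
    right = Vt[:r, :]                           # [C_zi D_zi]
    return left[:n, :], left[n:, :], right[:, :n], right[:, n:]

# A rank-one P_i built for illustration (n = 2 states, 1 input, 1 output).
Pi = np.array([[1.0], [2.0], [3.0]]) @ np.array([[4.0, 5.0, 6.0]])
Ai, Bi = Pi[:2, :2], Pi[:2, 2:]
Ci, Di = Pi[2:, :2], Pi[2:, 2:]
B_fi, D_fi, C_zi, D_zi = factor_Pi(Ai, Bi, Ci, Di)
reconstructed = np.vstack([B_fi, D_fi]) @ np.hstack([C_zi, D_zi])
```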
  • Next, internal feedback in the LTI system P 0=[A 0 B 0 ;C 0 D 0] may be considered with outputs z t =C z x t +D z u t and with nonlinear feedback from z t to f t =ρt {circle around (x)}z t entering the LTI system P 0 state equations through the input matrices B f and D f. The state equations for x t+1 and y t of the feedback system may be shown below in equations 24 through 27.

  • x t+1 =A 0 x t +B 0 u t +B f f t   (24)

  • y t =C 0 x t +D 0 u t +D f f t   (25)

  • zt =C z x t +D z u t   (26)

  • f t =[ρt {circle around (x)}z t]  (27)
  • With respect to the above and referring now to exemplary FIG. 4, if it can be assumed that for any time t there is no effect of the feedback ft on the output zt as in the feedback structure shown in FIG. 4 and in equation 26, then there is no parameter dependence in the linear fractional transformation (LFT) description (see K. Zhou, J. Doyle, and K. Glover (1996), Robust and Optimal Control, Prentice-Hall, Inc., Section 10.2, the contents of which are hereby incorporated by reference in their entirety). As shown in FIG. 4, there may be a system 400 having two boxes, box 404 which may be a memory-less nonlinear feedback system and box 402 that can be a linear time-invariant system. Therefore, in some exemplary embodiments, the parameter dependence can become affine.
  • Exemplary FIG. 4 may be a schematic diagram of Equations 15 and 16. The state Equation 15 involves the upper boxes in 402 while the measurement Equation 16 involves the lower boxes in 402. ΔT 422 is a time delay of one sample duration, with the right hand side of Equation 15 at 444 entering 422 and the left hand side, equal to the state xt+1, leaving. This is a recursion, so the time index can be changed from “t” to “t+1” for the figure before the start of the next iteration, continuing until entering boxes 420, 430 and 410. Scheduling parameters ρt 406, inputs u t 408 and outputs y t 446 are variables. The upper four boxes are multiplication from left to right by B 0 418, A0 420, B ρ 414 and Aρ 416, respectively. Similarly, the lower boxes are multiplication with A replaced by C and B replaced by D, depicted as D ρ 424, C ρ 426, D 0 428 and C 0 430. In the feedback box 404, the Kronecker products involving ρt and successively xt and ut are formed in 410 and 412 respectively. The pairs of boxes aligned vertically multiply, respectively from left to right, the variables xt, ut, ρt{circle around (x)}xt and ρt{circle around (x)}ut. Additionally, arrows shown as touching a wire can symbolize addition. Further, ΔT 422 can represent a time delay block of duration ΔT that can act similar to a date line for this exemplary embodiment. Therefore the arrows in exemplary FIG. 4 indicate a time flow or an actual sequence of operations; the flow may start at 406, 408 and the output of 422 and proceed through the diagram. Upon reaching ΔT 422, all operations for sample t have been performed and, upon crossing through ΔT 422, sample time t+1 may begin. Thus, for example, upon leaving ΔT 422, the same quantity is maintained, but all of the time labels can be changed to t+1 throughout the process shown in exemplary FIG. 4.
  • Further, as shown below, this can be equivalent to the LPV form shown in equations 15 and 16 where the state equations can be linear in the scheduling parameter vector ρ.
  • In further exemplary embodiments, and now using the definition of the feedback f t =ρt {circle around (x)}z t as given by equations 26 and 27, the state equations for x t+1 and y t may be as shown in equations 28 and 29 below.
  • x t+1 =A 0 x t +B 0 u t +Σi=1 s B f,i ρt(i)C z,i x t +Σi=1 s B f,i ρt(i)D z,i u t   (Equation 28)
    y t =C 0 x t +D 0 u t +Σi=1 s D f,i ρt(i)C z,i x t +Σi=1 s D f,i ρt(i)D z,i u t   (Equation 29)
  • Then, if equation 19 is used to define the above factors, equations 28 and 29 may be the same as equations 15 and 16.
  • Next, in a further exemplary embodiment, as the rank of P i may not be known, the outputs z t may be set as the stacked states and inputs z t =[x t T ;u t T]T that may subsequently be fed back through the static nonlinearity so that f t =[ρt {circle around (x)}x t ;ρt {circle around (x)}u t]. Then [C z,i D z,i]=[I dim x 0;0 I dim u] and equations 30 and 31 may be written as below.
  • P i =[A i B i ;C i D i]=[B f,i ;D f,i][C z,i D z,i]=[B f,i ;D f,i][I dim x 0;0 I dim u]   (Equation 30)
    =[B f,i ;D f,i]   (Equation 31)
  • Thus, from the above, the LPV coefficient matrix Pi=[Ai Bi; Ci Di] can be the regression matrix of the left hand side state equation variables (xt+1; yt) on the vector of nonlinear terms [ρi,t xt; ρi,t ut]. More generally, the LPV coefficient matrix Pρ=[Aρ Bρ; Cρ Dρ]=[Bf, Df] can be the regression matrix of the left hand side state equation variables (xt+1; yt) on the vector of nonlinear feedback terms (ρt{circle around (x)}xt; ρt{circle around (x)}ut).
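The regression interpretation above can be checked numerically: with states available (here drawn at random to stand in for the estimated states of the algorithm) and noise-free targets, an ordinary least squares regression recovers the stacked LPV coefficient matrix exactly. All values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s, N = 2, 1, 2, 400

# Hypothetical true coefficients, stacked as [A0 B0 A_rho B_rho; C0 D0 C_rho D_rho].
Theta_true = rng.standard_normal((n + 1, n + m + s * (n + m)))

Z, W = [], []
for t in range(N):
    x = rng.standard_normal(n)     # stands in for the estimated state x_hat_t
    u = rng.standard_normal(m)
    rho = rng.standard_normal(s)
    z = np.concatenate([x, u, np.kron(rho, x), np.kron(rho, u)])
    Z.append(z)
    W.append(Theta_true @ z)       # noise-free [x_{t+1}; y_t] for the sketch

# The LPV coefficient matrix is the regression of (x_{t+1}; y_t) on the regressors.
Theta_hat = np.linalg.lstsq(np.vstack(Z), np.vstack(W), rcond=None)[0].T
```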
  • The LTI nonlinear feedback representation can solve a major barrier to applying existing subspace identification algorithms to the identification of LPV systems and overcomes previous problems with exponentially growing numbers of nonlinear terms used in other methods. For example, the above LTI nonlinear feedback representation can make it clear that nonlinear terms (ρt{circle around (x)}xt; ρt{circle around (x)}ut) can be interpreted as inputs to an LTI nonlinear feedback system. Therefore it may be possible to directly estimate the matrices of the LTI system state space equations using linear subspace methods that can be accurate for processes with inputs and feedback. This can directly involve the use of the outputs yt as well as augmented inputs [ut;(ρt{circle around (x)}xtt{circle around (x)}ut)] of the LTI nonlinear feedback system.
  • In another exemplary embodiment, LTI system matrices and state vectors may be determined following the reduction of an LTI subsystem of a nonlinear feedback system involving known scheduling functions and the state of the LTI subsystem. This embodiment can involve taking the iterative determination of both the LTI system state as well as the LTI state space matrices describing the LTI system.
  • One example may be to consider the polynomial system as a linear system in x and u with additional additive input terms in the higher order product terms, so the additional inputs are ρt{circle around (x)}xt and ρt{circle around (x)}ut. The scheduling variables ρt are assumed to be available in real time as operating points or measured variables. If accurate estimates of the state xt were also available, then the problem would be only a direct application of the iterative algorithm for system identification. Since the variables xt are not available until after the solution of the system identification, a different approach may be utilized.
  • Thus, in an exemplary first step, an initial estimate of the state vector may be made. Here, system identification may be performed on the terms in the state equations involving the variables xt, ut and ρt{circle around (x)}ut but not the variables ρt{circle around (x)}xt. From this an approximation of the linear time invariant (LTI) part of the system, giving estimates of A0, B0, C0, D0, Bρ, and Dρ as well as estimates for the state vectors X1,N [1]=[x1 T x2 T . . . xN T]T, may be obtained.
  • Then, in an exemplary second step, an iterated estimate of the state vectors may be made. Here the state vector X1,N [1] can be used as an initial estimate for xt in the terms ρt{circle around (x)}xt in equations 15 and 16. Then the iterative algorithm can be applied to obtain an estimate of the system matrices A, B, C, and D and a refined estimate X1,N [2]. Further, this step may then be iterated until a desired convergence is achieved.
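The two-step structure above — an initial fit ignoring the ρ{circle around (x)}x term, followed by refits that include it using the current state estimate — can be sketched on a toy scalar LPV system. This is an illustrative sketch only: the state is taken as directly measured so the "state estimate" step is trivial, whereas in the actual algorithm the state sequence is re-estimated by the subspace computation at each iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
a0, arho, b = 0.5, 0.2, 1.0            # true parameters of the toy system
rho = np.sin(0.05 * np.arange(N))      # known scheduling variable
u = rng.standard_normal(N)
x = np.zeros(N + 1)
for t in range(N):                      # x_{t+1} = (a0 + arho*rho_t)*x_t + b*u_t
    x[t + 1] = (a0 + arho * rho[t]) * x[t] + b * u[t]
y = x[:N]                               # toy simplification: state directly measured

# Step 1: initial fit of the LTI part only (rho*x term omitted).
Phi = np.column_stack([y[:-1], u[:-1]])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]      # [a0_hat, b_hat]

# Step 2: refit including rho_t * x_hat_t as an additional input; iterate.
for k in range(5):
    xhat = y                            # current state estimate (trivial here)
    Phi = np.column_stack([y[:-1], rho[:-1] * xhat[:-1], u[:-1]])
    theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]  # [a0, arho, b]
```

With noise-free data the second-step regression recovers (a0, arho, b) essentially exactly, illustrating why only a few iterations may be needed once the state estimate is accurate.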
  • The exemplary steps above, with the second step iterated, may therefore work with only a few iterations. This is an exemplary manner in which the iterative algorithm can be used to address the previously known problem of LPV system identification. The following is thus an exemplary discussion of using the iterative algorithm in directly identifying the coefficients Fij and Hij of the additive polynomial expansions of the nonlinear difference equation functions f(xt,ut,vt) and h(xt,ut,vt), respectively. This may be a very compact and parsimonious parameterization for such a nonlinear system. The iterative algorithm described herein for linear time-invariant systems can therefore be used with only a very modest increase in computational requirements. Further, this exemplary use of the iterative algorithm directly treats additive nonlinear terms in the state equations involving the state vector, such as ρt{circle around (x)}xt, as additional inputs. Since the state xt may not be initially known, this may involve iteration to estimate the state sequence starting with only the linear time invariant (LTI) part of the system, A0, B0, C0 and D0.
  • As seen in the above exemplary discussion following equations 13 and 14, a linear parameter varying system that is affine in the scheduling variables ρt can be expressed in time invariant form involving the additional input variables ρt{circle around (x)}xt and ρt{circle around (x)}ut. Note that this involves nonlinear functions of ρt with xt and ut. The dynamic system can be linear time-invariant in these nonlinear functions of the variables.
  • In a further exemplary embodiment, the effect of additional inputs can be traced through the iterative algorithm. The exemplary steps outlined below may further be reflected in the table shown in exemplary FIG. 5 and the flow chart of exemplary FIG. 6.
  • Using elements 502 of exemplary FIG. 5 and in step 602 of exemplary FIG. 6, an ARX model may be fitted to the observations on iteration k with outputs yt and inputs [ut T(ρt{circle around (x)}{circumflex over (x)}t [k−1])Tt{circle around (x)}ut)T]T, where for k=1 the term ρt{circle around (x)}{circumflex over (x)}t [k−1] may be absent from the inputs, which is equivalent to removing the terms involving (Aρ, Cρ) from the LPV model structure. The ARX model fitting is a linear regression problem that makes no prior assumptions on the ARX order other than the maximum ARX order considered. If the identified ARX order is near the maximum considered, the maximum ARX order considered can be doubled and the regression recomputed.
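The ARX fitting step is an ordinary least-squares regression on lagged outputs and inputs, which can be sketched as follows. This is a simplified single-output illustration, not the disclosed implementation; in the LPV algorithm the input sequence would be the augmented input described above.

```python
import numpy as np

def fit_arx(y, u, order):
    """Least-squares ARX fit  y_t ~ sum_i a_i*y_{t-i} + sum_i b_i*u_{t-i}.
    Returns the coefficient vector and the residual sum of squares."""
    N = len(y)
    rows = [np.concatenate([y[t - order:t][::-1], u[t - order:t][::-1]])
            for t in range(order, N)]
    Phi = np.asarray(rows)
    theta = np.linalg.lstsq(Phi, y[order:], rcond=None)[0]
    resid = y[order:] - Phi @ theta
    return theta, float(resid @ resid)

# Noise-free data from y_t = 0.6*y_{t-1} - 0.2*y_{t-2} + u_{t-1}.
rng = np.random.default_rng(2)
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + u[t - 1]
theta, sse = fit_arx(y, u, 2)   # regressors: y_{t-1}, y_{t-2}, u_{t-1}, u_{t-2}
```

Doubling the maximum order when the identified order is near it, as described above, would simply repeat this regression with a larger `order`.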
  • In exemplary step 604 a corrected future can be computed. The effect of future inputs on future outputs can be determined using the ARX model and subtracted from the outputs. The effect of this can be to compute the future outputs that could be obtained from the state at time t if there were no inputs in the future of time t.
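The corrected-future computation — subtracting the model-predicted effect of future inputs from future outputs, leaving the free response due to the state at time t — can be sketched for a scalar system. This sketch is illustrative only; the names and the scalar simplification are assumptions.

```python
import numpy as np

def corrected_future(y_fut, u_fut, a, b, c, d):
    """Remove the effect of the future inputs u_fut from the future outputs
    y_fut using the model's Markov parameters (scalar system), leaving the
    free response c*a^i*x_t due to the state alone."""
    h = len(y_fut)
    markov = [d] + [c * a ** i * b for i in range(h - 1)]   # D, CB, CAB, ...
    H = np.zeros((h, h))                                     # lower-triangular
    for i in range(h):                                       # Toeplitz operator
        for j in range(i + 1):
            H[i, j] = markov[i - j]
    return y_fut - H @ u_fut

# Simulate x_{t+1} = a*x_t + b*u_t, y_t = c*x_t + d*u_t from state x_t = 2.
a, b, c, d = 0.8, 1.0, 1.0, 0.1
x_t = 2.0
u_fut = np.array([1.0, -0.5, 0.3, 0.2])
x, y_list = x_t, []
for u_now in u_fut:
    y_list.append(c * x + d * u_now)
    x = a * x + b * u_now
free = corrected_future(np.array(y_list), u_fut, a, b, c, d)
```

The result `free` equals [c·aⁱ·x_t], i.e., the future outputs that would be obtained from the state at time t with no future inputs, as the step describes.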
  • In exemplary step 606, a canonical variate analysis (CVA) can be made or computed. Here, the CVA between the past and the corrected future can be computed. Again, the covariance matrices among and between the past and corrected future may also be computed. This may be similar to an SVD on the joint past-future covariance matrix, which is of the order of the covariance of the past used to obtain the ARX model. A result of this step is to obtain estimates of the states of the system, called ‘memory’.
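A CVA between past and corrected future can be computed from the covariance matrices via an SVD of the whitened cross-covariance. The sketch below is a generic CVA computation under that standard formulation, not the specific disclosed implementation; the function name is hypothetical.

```python
import numpy as np

def cva_memory(past, fut, order):
    """Canonical variate analysis between past and (corrected) future.
    past: (N, p), fut: (N, f).  Returns the 'memory' sequence m_t (the
    leading canonical variates of the past) and the canonical correlations."""
    n = len(past)
    Spp = past.T @ past / n            # covariance of the past
    Sff = fut.T @ fut / n              # covariance of the corrected future
    Spf = past.T @ fut / n             # cross-covariance
    Lp = np.linalg.cholesky(Spp)
    Lf = np.linalg.cholesky(Sff)
    # Singular values of the whitened cross-covariance = canonical correlations.
    U, s, _ = np.linalg.svd(np.linalg.solve(Lp, Spf) @ np.linalg.inv(Lf).T)
    J = U[:, :order].T @ np.linalg.inv(Lp)    # memory map: m_t = J @ past_t
    return past @ J.T, s

# Toy data: the first 'future' column is an exact linear function of the past.
rng = np.random.default_rng(5)
N = 2000
past = rng.standard_normal((N, 3))
w = np.array([1.0, -2.0, 0.5])
fut = np.column_stack([past @ w, rng.standard_normal(N)])
m, s = cva_memory(past, fut, 1)
```

The first canonical correlation is essentially 1 (the predictable direction) and the second is near 0, and the one-dimensional memory `m` recovers the predictive combination of the past.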
  • In exemplary step 608, a regression using the state equation may be performed. The ‘memory’ from step 606 can be used in the state equations as if it were data and resulting estimates of the state space matrices and covariance matrices of the noise processes can be obtained. These estimates can be asymptotically ML estimates of the state space model with no prior assumptions on the parameter values of the ARX or SS model.
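Treating the memory sequence as state data, the state space matrices follow from a linear regression of [x_{t+1}; y_t] on [x_t; u_t]. The sketch below illustrates this step on noise-free toy data; the function name and dimensions are hypothetical.

```python
import numpy as np

def state_space_regression(m, u, y):
    """Use the memory sequence m_t as if it were state data: regress
    [m_{t+1}; y_t] on [m_t; u_t] to estimate (A, B, C, D) and the
    residual (noise) covariance."""
    Z = np.column_stack([m[:-1], u[:-1]])      # regressors [x_t; u_t]
    W = np.column_stack([m[1:], y[:-1]])       # targets [x_{t+1}; y_t]
    Theta = np.linalg.lstsq(Z, W, rcond=None)[0].T
    n = m.shape[1]
    A, B = Theta[:n, :n], Theta[:n, n:]
    C, D = Theta[n:, :n], Theta[n:, n:]
    resid = W - Z @ Theta.T
    Q = resid.T @ resid / len(resid)           # noise covariance estimate
    return A, B, C, D, Q

# Noise-free toy data from known matrices; the regression recovers them.
A_true = np.array([[0.7, 0.1], [0.0, 0.5]])
B_true = np.array([[1.0], [0.5]])
C_true = np.array([[1.0, 0.0]])
rng = np.random.default_rng(6)
N = 200
u = rng.standard_normal((N, 1))
x = np.zeros((N + 1, 2))
for t in range(N):
    x[t + 1] = A_true @ x[t] + B_true @ u[t]
y = x[:N] @ C_true.T
A_hat, B_hat, C_hat, D_hat, Q = state_space_regression(x[:N], u, y)
```

With the true state passed in as the "memory", the estimates match the true matrices and the noise covariance is essentially zero, consistent with the step's description of the memory being used as if it were data.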
  • Thus, using the above-described methodology, the ML solution in the iterative algorithm can be obtained in one computation based on the assumed outputs and inputs in iteration k, as shown in 504 of exemplary FIG. 5, with no iteration on assumed parameter values. The iteration is the result of refinement of the state estimate in the nonlinear term (ρt{circle around (x)}{circumflex over (x)}t [k−1])T that is part of the assumed data in iteration k (504).
  • Referring back to step 602, the ARX model fitting, the dimension of the augmented inputs is increased from dimu to dimua=dimu+dimp(dimx+dimu). To fit the ARX model, the identified ARX order lagp can be substantially higher due to the nonlinear input terms and depending on the statistically significant dynamics present among the output and augmented input variables. The computation can involve an SVD on the data covariance matrix, which is of dimension dimua*lagp, where lagp is the maximum ARX order considered. The computation that may be utilized for the SVD is of the order of 60*(dimua*lagp)3, so the computation increases proportional to (dimp(dimx+dimu)/dimu)3.
  • Therefore, one consequence of augmenting the system inputs by the nonlinear terms ρt{circle around (x)}xt and ρt{circle around (x)}ut may be to increase the past by a factor dimp(dimx+dimu)/dimu, and to increase the computation by this factor cubed. Depending on the particular dimensions of ut, xt, and ρt, this can be very significant; however, there is no exponential explosion in the number of terms or the computation time. The LPV subspace algorithm of this invention still corresponds to subspace system identification for a linear time-invariant system and, in addition, because of the nonlinearity of the terms [ut Tt{circle around (x)}xt)Tt{circle around (x)}ut)T]T involving the state estimates {circumflex over (x)}t, iteration on the estimate of the system states until convergence can be desired or, in some alternatives, required.
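The growth factor and its cubic effect on the SVD cost can be made concrete with a small calculation. The dimensions below are hypothetical, chosen only to illustrate the scaling stated above.

```python
# Hypothetical dimensions illustrating the growth of the past and the
# cubic growth of the SVD computation after augmenting the inputs.
dimu, dimx, dimp = 2, 4, 3
dimua = dimu + dimp * (dimx + dimu)      # augmented input dimension: 20
growth = dimp * (dimx + dimu) / dimu     # factor by which the past grows: 9
cost_ratio = growth ** 3                 # SVD computation grows as the cube: 729
```

So for these dimensions the past grows by a factor of 9 and the SVD cost by a factor of 729 — large, but polynomial rather than exponential, as the text notes.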
  • Another factor to be considered is that the result of the CVA in exemplary step 606 is the computation of an estimate, denoted as mt, of the state sequence xt. The symbol ‘m’ is used, as in the word ‘memory’, to distinguish it from the actual state vector and the various estimates of the state vector used in the EM algorithm discussed below. The estimate {circumflex over (m)}t in combination with the maximization step of the EM algorithm can be shown to be a maximum likelihood solution of the system identification problem for the case of a linear time-invariant system. In that case, the global ML solution can be obtained in only one iteration of the algorithm. This differs from the EM and gradient search methods, which almost always utilize many iterations. The CVA estimate mt may actually be an estimate of the state sequence for the system with parameters equal to the maximum likelihood estimates, in the case of LTI systems and large sample size. This is different from the usual concept of first obtaining the ML parameter estimates and then estimating the states using a Kalman filter. Not only is the optimal state sequence for the ML parameters obtained in a single iteration, the optimal state order may also be determined in the process. In the usual ML approach, it can be desired or, in some alternatives, required to obtain the ML estimates for each choice of state order and then proceed to hypothesis testing to determine the optimal state order.
  • In another exemplary embodiment, the convergence of the iterative algorithm for the case of LPV may be described. In this embodiment, a substantially similar approach may be taken for other forms of nonlinear difference equations.
  • For any iterative algorithm, two issues may typically arise: (1) does the algorithm always converge, and (2) at what rate does it converge. In the computational examples considered, the iteration k on the estimated states {circumflex over (x)}t [k] can be very rapid, for example half a dozen steps. Thus it may be shown herein that the iterative algorithm can be closely related to the class of EM algorithms, which can be shown to always converge under an assumption on the LPV system stability. Also, the rate of convergence can be computed to be geometric. This latter result is notable since the EM algorithm typically makes rapid early progress but becomes quite slow near the end. The reason for the rapid terminal convergence of the LPV algorithm will be discussed in further detail. Issues of initialization, stability and convergence will be elaborated below.
  • Additionally, the methods and systems described herein may be discussed in the context of the EM algorithm as there can be some parallelism between the two. To show the convergence of the LPV algorithm, the development in Gibson and Ninness (2005), denoted as GN below and incorporated by reference herein in its entirety (S. Gibson and B. Ninness, “Robust maximum-likelihood estimation of multivariable dynamic systems,” Automatica, vol. 41, no. 5, pp. 1667-1682, 2005) will be discussed with various modifications made that are appropriate for the LPV algorithm. All equation numbers from Gibson and Ninness (2005) will include GN following the number in the paper GN, for example (23GN).
  • In this exemplary embodiment, the replacements that can be made in the GN discussion to obtain the LPV algorithm may be as follows: replace the LTI state equations with the LPV state equations and, for the missing data, replace the state estimate from the Kalman smoother with the ‘memory’ vector mt in the iterative algorithm. The consequence of this can be significant because for linear systems as in GN the iterative algorithm described herein can obtain the global ML parameter estimates in one step in large samples. On the other hand, for linear systems it may take the EM algorithm many iterations to obtain the ML solution.
  • Replacing the LTI model with the LPV model replaces the state space model equation 11GN by the LPV equations (15) and (16). This can produce a number of modifications in the equations of Lemma 3.1 since ut and (A, B, C, D) in GN are replaced by [ut Tt{circle around (x)}xt)Tt{circle around (x)}ut)T]T and (A0, [B0 Aρ Bρ], C0, [D0 Cρ Dρ]), respectively. So the ‘data’ includes the vector ρt{circle around (x)}xt where the state vector xt is not available.
  • To execute iteration k as shown in 504 of exemplary FIG. 5, a straightforward approach involving the EM algorithm could be used, with the state vector xt included as the ‘missing data’. (See S. Gibson, A. Wills, and B. Ninness (2005), “Maximum likelihood parameter estimation of bilinear systems”, IEEE Trans. Automatic Control, Vol. 50, No. 10, pp. 1581-1596, the contents of which are hereby incorporated by reference in their entirety). Then one could proceed as for the case of a bilinear system, which involves a term ut{circle around (x)}xt that may be completely analogous to ρt{circle around (x)}xt for the LPV algorithm. But such an EM approach can, on occasion, result in the typical slow convergence behavior of EM algorithms near the maximum.
  • Therefore, instead the LPV algorithm may be used, resulting in rapid convergence. Thus, in this exemplary embodiment, instead of specifying the ‘missing data’ as the estimate of the Kalman smoother, the subspace approach can specify the CVA state estimate mt, or ‘memory’, as the missing data. The memory mt is the estimate of the state vector obtained by a canonical variate analysis between the corrected future and the past, obtained in exemplary step 606 of the iterative algorithm using the input and output vectors specified in FIG. 5. This may be similar to a Kalman filter state estimate at the global ML parameter estimates associated with the output and input data at iteration k, rather than a Kalman smoother state estimate at the last estimated parameter value. One difference can occur because the CVA method expresses the likelihood function in terms of the corrected future conditioned on the past, so that estimates of the memory mt may depend only on the output and input data, and their distribution depends on the ML estimates associated with the output and input data used in iteration k rather than on smoothed estimates of the state. The actual conditional likelihood function pθt|zt) of (26GN), with ξt T=[xt+1 Tyt T] and zt T=[xt Tut T], is what can be involved in all of the EM computations. This can be the same likelihood function involved in the exemplary step 608 of the CVA algorithm of estimating the parameters of the state space equation, as in Lemma 3.3 of GN. A difference is that in the exemplary step 608 of the CVA algorithm the expectation can be with respect to the true global ML estimates associated with the output and input data at iteration k, whereas the GN estimate is an expectation with respect to the parameter value obtained in the previous iteration.
  • Further, the use of the LPV model and the choice of memory mt as the missing data in the expectation step of the algorithm can have the following consequence in GN. The basic theory in section 2, The Expectation-Maximization (EM) Algorithm of GN, needs no modification. In Section 3 of GN, the missing data is taken to be the CVA state estimates {circumflex over (m)}t [k] based on the input and output quantities for iteration k in 504 of FIG. 5. So the xt in equations (22GN) through (28GN) can be replaced by {circumflex over (m)}t [k].
  • Thus, Lemma 3.1 of GN holds but also can achieve the global ML estimate associated with the input-output vectors of exemplary FIG. 5 in one step. Lemma 3.2 of GN can be replaced by the iterative algorithm to obtain the memory estimates {circumflex over (m)}t [k]. Lemma 3.3 of GN is the same result as obtained in the iterative algorithm. An additional step may be used to compute {circumflex over (x)}t [k] from the estimates Θ[k] and the linear time-varying state equations given by the LPV state equations. This step may be desired to obtain the state estimate {circumflex over (x)}t [k] for starting the next iteration k+1. This step can further allow the iterative algorithm to produce Q(θk+1k)≧Q(θkk), with equality if and only if {circumflex over (x)}t [k]={circumflex over (x)}t [k+1].
  • Then, in principle, the memory {circumflex over (m)}t [k] could be used in place of the state estimate {circumflex over (x)}t [k]. In fact, {circumflex over (m)}t [k] projected on the recursive structure of the state equations in equations (43GN) and (44GN) can produce the ML state estimates asymptotically and the corresponding optimal filtered state estimates {circumflex over (x)}t [k]. Thus the use of {circumflex over (m)}t [k] instead of {circumflex over (x)}t [k] in the computation of ρt{circle around (x)}{circumflex over (x)}t [k] as part of the ‘input data’ can lead to essentially the same result except for some ‘noise’ in the results. But the computational noise can be avoided by the small additional computation of {circumflex over (x)}t [k].
  • Also, in some exemplary embodiments, there may not be a need for a ‘robust’ improvement to the iterative algorithm described herein since it has been developed using primarily singular value decomposition computations to be robust and demonstrated as such for more than a decade. An exception is possibly the computation of the filtered state estimate {circumflex over (x)}t [k] that could be implemented using the square root methods of Bierman (G. J. Bierman, Factorization Methods for Discrete Sequential Estimation, Academic Press (1977); republished by Dover, New York (2006), the contents of which are hereby incorporated by reference in their entirety) if ill-conditioned LPV dynamic systems are to be solved to high precision.
  • In still another exemplary embodiment, it may be demonstrated that the LPV system identification algorithm may converge at a geometric rate near the maximum likelihood solution. Here the result for a linear system can be developed. The same approach may work for an LPV system, but the expressions below may be time dependent and can be of greater complexity.
  • Further, in order to simplify the derivation herein, the time invariant feedback form of equations 17 and 18 can be considered, with the substitution of the notation (Ã,{tilde over (B)},{tilde over (C)},{tilde over (D)},ũt) by (A,B,C,D,ut). Thus, equations 17 and 18, with noise vt in innovation form, may be written as equations 32 and 33 below.

  • x t+1 =Ax t +Bu t +Kv t   (32)

  • y t =Cx t +Du t +v t   (33)
  • Thus, from the above, it may be seen that for measurements of outputs yt and inputs ut=[ũt Tt{circle around (x)}xt)Tt{circle around (x)}ũt)T]T, the time-invariant matrices are (A, B, C, D)=(A0,[B0 Aρ Bρ],C0,[D0 Cρ Dρ]), respectively. Then, solving for vt in equation 33 and substituting into equation 32 produces equation 34 below.

  • x t+1 =Ax t +Bu t +K(y t −Cx t −Du t)=(A−KC)x t+(B−KD)u t +Ky t   (34)
  • Next, recursively substituting the right hand side of equation 34 for xt provides equation 35 below.
  • \( x_t = \sum_{i=1}^{\infty} (A-KC)^{i-1}\left[(B-KD)u_{t-i} + Ky_{t-i}\right] = J\,p_t\!\left(y_t,\,[\tilde{u}_t,\,(\rho_t \otimes x_t),\,(\rho_t \otimes \tilde{u}_t)]\right) \)   (Equation 35)
  • In equation 35, J can contain the ARX coefficients and pt can be the past outputs yt and inputs ut=[ũt,(ρt{circle around (x)}xt),(ρt{circle around (x)}ũt)]. Here ũt can be the original inputs, such that ut includes the nonlinear Kronecker product terms.
  • In a further exemplary embodiment, asymptotically for a large sample, J can be arbitrarily close to constant for a sufficiently large iteration k such that
  • \( \Delta x_t^{[k]} = x_t^{[k+1]} - x_t^{[k]} = J\,p_t\begin{bmatrix}0\\0\\\rho_t \otimes (x_t^{[k]} - x_t^{[k-1]})\\0\end{bmatrix} = L\begin{bmatrix}\rho_{t-1} \otimes \Delta x_{t-1}^{[k-1]}\\\vdots\\\rho_{t-lag} \otimes \Delta x_{t-lag}^{[k-1]}\end{bmatrix} = L_t\begin{bmatrix}\Delta x_{t-1}^{[k-1]}\\\vdots\\\Delta x_{t-lag}^{[k-1]}\end{bmatrix}, \)
  • where Lt can be time varying, combining the time varying scheduling parameters ρt with the terms of L. Also, if ρt is bounded for all t, Lt will be similarly bounded.
  • Now, when writing the state sequence in block vector form as X1:N [k]=vec[xN [k] . . . x1 [k]], where the vec operation can stack the columns of a matrix starting with the left hand columns on top, the above result can imply that
  • \( \begin{bmatrix}\Delta x_N^{[k]}\\\vdots\\\Delta x_{t-k}^{[k]}\end{bmatrix} = \begin{bmatrix}0 & L_{N-1} & \rightarrow & \\0 & 0 & L_{N-k} & \rightarrow\\0 & 0 & 0 & \end{bmatrix}\begin{bmatrix}\Delta x_{N-1}^{[k-1]}\\\vdots\\\Delta x_{lag}^{[k-1]}\end{bmatrix}, \)
  • where → can mean that LN−k extends to the right. Using M to denote the upper triangular matrix gives the fundamental expression for the difference between the state sequences at successive iterations k and k−1, as equation 36 below.

  • ΔX lag+2:N [k] =MΔX lag+1:N−1 [k−1]   (36)
  • From equation 36 it may then be seen that the terminal convergence rate of the iteration in the LPV case is governed by M, in particular by the largest singular value of M. The various blocks of M can thus be computed by
  • \( (A-KC)^{i-1}(B-KD)_{\rho \otimes x}\begin{bmatrix}\rho_{N-1}(1)\,I_{\dim x}\\\vdots\\\rho_{N-i}(\dim p)\,I_{\dim x}\end{bmatrix}, \)
  • where the subscript ρ{circle around (x)}x means to select the submatrix of B−KD with columns corresponding to the corresponding rows of ρ{circle around (x)}x in ũt.
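The claim that the largest singular value of M governs the geometric convergence rate can be illustrated numerically: iterating Δx ← MΔx contracts the error norm by at most the spectral norm of M at each step. The matrix below is a random stand-in, not a matrix from the disclosed algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((6, 6))
M *= 0.8 / np.linalg.norm(M, 2)     # rescale so the largest singular value is 0.8
sigma_max = np.linalg.norm(M, 2)    # spectral norm = largest singular value

d = rng.standard_normal(6)          # stand-in for the state-sequence difference
norms = [np.linalg.norm(d)]
for _ in range(10):                 # Delta X^[k] = M Delta X^[k-1]
    d = M @ d
    norms.append(np.linalg.norm(d))
ratios = [norms[i + 1] / norms[i] for i in range(10)]
```

Every per-step ratio is bounded by `sigma_max`, so the difference sequence decays at least geometrically at that rate.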
  • In some other exemplary embodiments, the convergence of the iterative linear subspace computation may be affected by the stability of the LPV difference equations and, more specifically, the stability of the LPV linear subspace system identification described herein. A set of time-invariant linear state space difference equations is stable if and only if all of the eigenvalues of the state transition matrix are stable, for example with magnitudes less than 1. The LPV case is more complex, but for the purposes of this exemplary embodiment, the rate of growth or contraction per sample time can be given for each eigenvector component of the state vector xt by the respective eigenvalues of the LPV state transition matrix from equation 9, now shown as equation 37 below.
  • \( A(\rho_t) = A_0 + \sum_{i=1}^{s} \rho_t(i) A_i \)   (Equation 37)
  • In equation 37, the matrices A=(A0 Aρ)=(A0 A1 . . . As) can be assumed to be unknown constant matrices. Therefore, it is apparent that the transition matrix A(ρt) is a linear combination [1; ρt] of the matrices Ai for 0≦i≦s. Therefore, for any choice of the matrices Ai for 0≦i≦s, there can be possible values of ρt that produce unstable eigenvalues at particular sample times t. If this occurs only sporadically and/or for only a limited number of consecutive observation times, no problems or errors may arise.
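The dependence of stability on the scheduling parameters in equation 37 can be checked numerically by evaluating the spectral radius of A(ρt) along a schedule. The matrices and trajectory below are hypothetical, chosen so that the system is stable at some operating points and unstable at others.

```python
import numpy as np

def spectral_radii(A0, A_list, rho_traj):
    """Spectral radius of A(rho_t) = A0 + sum_i rho_t[i]*A_i for each
    scheduling vector rho_t in rho_traj (equation 37)."""
    radii = []
    for rho in rho_traj:
        A = A0 + sum(r * Ai for r, Ai in zip(rho, A_list))
        radii.append(max(abs(np.linalg.eigvals(A))))
    return np.asarray(radii)

# Hypothetical 2-state system with a single scheduling variable.
A0 = np.array([[0.5, 0.1], [0.0, 0.6]])
A1 = np.array([[0.4, 0.0], [0.0, 0.0]])
radii = spectral_radii(A0, [A1], [[0.0], [1.0], [2.0]])
```

Here the operating points ρ=0 and ρ=1 give stable transition matrices (radii 0.6 and 0.9) while ρ=2 gives an unstable one (radius 1.3), illustrating how particular scheduling values can produce unstable eigenvalues at particular times.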
  • In some other exemplary embodiments, for example in the k-th iterative computation of the subspace algorithm, only estimated values of the state sequence {circumflex over (x)}t [k] and the matrices Âi [k] for 0≦i≦s may be used. Therefore, if large errors in the estimates of these quantities are possible, then there can be a greater potential for unstable behavior. There may also be areas of application of significant importance, such as identification of aircraft wing flutter dynamics, where the flutter dynamics may be marginally stable or even unstable, with the vibration being stabilized by active control feedback from sensors to wing control surfaces. In some other applications, it may be possible to guarantee that no combination of scheduling parameters ρt, values of matrices Ai and uncertainty could produce instabilities. Otherwise, the stability of the transition matrix can present potential issues, and further consideration could be desired.
  • In some further exemplary embodiments, instabilities can produce periods where the predicted system response can rapidly grow very large. For example, when the eigenvalues are all bounded less than 1, the predicted system response is bounded, whereas if estimation errors are large, for example if one of the eigenvalues is equal to 2 for a period of time, then the predicted response may double approximately every sample during that time. Since 10^30=(10^3)^10≅(2^10)^10=2^100, in as few as 100 samples, 30 digits of precision could be lost in the computation, which could then produce meaningless results.
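The precision-loss arithmetic above can be verified directly: an eigenvalue of 2 doubles the response each sample, so 100 samples give a growth of 2^100, which is on the order of 10^30, i.e., about 30 decimal digits.

```python
import math

# One unstable eigenvalue of 2 doubles the response every sample; after
# 100 samples the growth is 2**100, comparable to 10**30, putting roughly
# 30 decimal digits of precision at stake.
growth = 2.0 ** 100
digits_at_stake = math.log10(growth)   # 100 * log10(2) ~= 30.1
```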
  • Therefore, per the above, there may be conditions under which there may be considerable loss of numerical accuracy associated with periods, for example extended intervals of time, where the LPV transition matrix is unstable. Further, the difficulty can lie in the algorithm initialization and sample size, since at the optimal solution with a large sample the algorithm is stable and convergent. If it were possible to compute with infinite precision, then problems with ill-conditioning could be avoided; however, with 15 or 30 decimal place accuracy, for example, some real data sets, such as for the aircraft wing flutter, can benefit from further consideration.
  • Therefore, some exemplary embodiments may deal with manners of correcting for or otherwise lessening any undesired effects that may result from algorithm instability. For example, if the state sequence {circumflex over (x)}t [k] is sufficiently close to the optimum, then based upon the terminal convergence results described previously, the iterative algorithm may be stable, provided it is assumed that the LPV system is stable. Conversely, in some examples, large initial errors in the estimate {circumflex over (x)}t [1] can lead to an unstable computation.
  • In other examples, such as during time intervals when the scheduling parameter values cause significant instabilities in A(ρt), some components of ft|qt, the future outputs ft corrected for the effects of future inputs qt, may become quite large. Therefore, in the computation of the covariance matrix of ft|qt, such large values can cause considerable ill-conditioning and loss in numerical precision.
  • In examples where the iterative algorithm and some other subspace methods permit the arbitrary editing of the times where the instabilities occur, outlier editing of unstable regions may be performed. For example, in the ARX covariance computation where the corrected future can be computed, the time intervals with significant instabilities can be determined and removed from the computation.
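The outlier editing described above — excluding the time intervals where the transition matrix is unstable before the covariance computation — can be sketched as a simple masking step. The data, radii, and threshold below are hypothetical.

```python
import numpy as np

def edit_unstable(data, radii, threshold=1.0):
    """Exclude observations whose LPV transition matrix spectral radius
    exceeds the threshold, before the covariance computation."""
    keep = radii <= threshold
    return data[keep], keep

rng = np.random.default_rng(4)
data = rng.standard_normal((8, 2))   # stand-in observations
radii = np.array([0.5, 0.7, 1.3, 1.1, 0.6, 0.9, 2.0, 0.4])
edited, keep = edit_unstable(data, radii)   # drops the three unstable times
```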
  • In examples dealing with experimental design, if the trajectory of the operating point variables can be specified, then the scheduling parameters can be scheduled to enhance the system identification in several ways. An initial region that can avoid computational instabilities can be chosen to obtain sufficiently accurate initial estimates of the LPV parameters (A, B, C, D). This can then be used in other regions to initialize the state estimate of the algorithm with sufficient accuracy that computational instabilities will not be encountered.
  • In the above examples, the removal of unstable outliers at each iteration can be the most general and robust procedure. As the estimated values of the state sequence {circumflex over (x)}t [k] and the LPV parameters improve with more iterations, the number of outliers can be expected to decrease until there is rapid terminal convergence. A counter example to this expectation is when the beginning of the scheduling parameter ρt time history has little variation, so that the LPV model fitted to this part of the data is good for that portion but is a poor global model. Then, in the later part of the time history, there can be considerable variation in ρt such that unstable behavior may result.
  • Additionally, in many potential exemplary applications of the methods and systems described herein, it can be expected that the proposed algorithm can perform much better than existing methods that presently are not feasible on industrial problems. Further, in many situations, it can be desired to design the experiment to obtain results of a desired fidelity for a specified global region of the operating space at as little cost in time and resources as possible. Because the iterative algorithm's linear subspace method is a maximum likelihood based procedure, experiment designs can be developed as for LTI system identification. Also, since it identifies a stochastic model with estimated disturbance models, including confidence bands on quantities such as dynamic frequency response functions, the required sample size and system input excitation can be determined with little prior information on the disturbance processes.
  • In some further exemplary embodiments, the LPV methods and systems described herein may be extended to nonlinear systems. For example, it may be shown that a number of complex and nonlinear systems can be expressed in an approximate LPV form that can be sufficient for application of the LPV subspace system identification methods described herein.
  • In one example, a general nonlinear, time varying, parameter varying dynamic system can be described by a system of nonlinear state equations, such as those shown in equations 38 and 39.

  • x t+1 =f(x t ,u t ,ρ t ,v t)   (38)

  • y t =h(x t ,u t ,ρ t ,v t)   (39)
  • In equations 38 and 39, xt can be the state vector, ut can be the input vector, yt can be the output vector and vt can be a white noise measurement vector. In some exemplary embodiments, to deal with ‘parameter varying’ systems, the ‘scheduling’ variables ρt can be time-varying parameters that describe the present operating point of the system. Very general classes of functions f(·) and h(·) can be represented by additive Borel functions that need not be continuous.
  • In a simplified manner, the case of functions admitting Taylor expansion as in Rugh Section 6.3 (W. J. Rugh, Nonlinear System Theory: The Volterra/Wiener Approach. Baltimore, Md.: Johns Hopkins Univ. Press, 1981, the contents of which are hereby incorporated by reference in their entirety), where ρt and vt may be absent can provide the following, as shown in equations 40 and 41.
  • \( x_{t+1} = \sum_{i=0}^{I} \sum_{j=0}^{J} F_{ij}\, x_t^{(i)} u_t^{(j)} \)   (Equation 40)
  \( y_t = \sum_{i=0}^{I} \sum_{j=0}^{J} H_{ij}\, x_t^{(i)} u_t^{(j)} \)   (Equation 41)
  • In equations 40 and 41, the notation xt (i) can be defined recursively as xt (i)=xt{circle around (x)}xt (i−1), and similarly for ut (j).
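The recursive Kronecker power x^(i) = x{circle around (x)}x^(i−1) can be computed directly as below; note that the dimension of x^(i) is (dim x)^i, which is the source of the exponential growth in terms discussed next. The function name is hypothetical.

```python
import numpy as np

def kron_power(x, i):
    """Kronecker power x^(i) = x (x) x^(i-1), with x^(1) = x."""
    out = x
    for _ in range(i - 1):
        out = np.kron(x, out)
    return out

x = np.array([1.0, 2.0])
x2 = kron_power(x, 2)   # dimension 2**2 = 4
x3 = kron_power(x, 3)   # dimension 2**3 = 8
```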
  • Thus, equations 40 and 41 may be polynomial expansions of the nonlinear functions f(·) and h(·). Note that the nonlinear equations may involve nonlinear functions of relatively simple form, such as the approximating polynomial equations that involve only sums of products that are readily computed for various purposes. However, for empirical estimation of the coefficients in the presence of autocorrelated errors, the problem can become difficult even for low dimensions of y, u, and x, even using subspace methods. For subspace methods, the matrix dimensions can grow exponentially with the dimension of the ‘past’ that can be used. This can occur in expanding equation 40 by repeatedly substituting the right hand side of the state equation 40, with t replaced by t−1, for xt on the right hand side. Thus the entire right hand side of equation 40 can effectively be raised to the power lagp, the order of the past, typically selected as the estimated ARX model order. For a relatively low order past and low dimensions of xt and ut, the number of additive terms can increase exponentially.
  • However, in a further exemplary embodiment and following Rugh, equations 40 and 41 can be converted through Carleman bilinearization to bilinear vector difference equations in the state variable as shown in equation 42:

  • x t {circle around (x)} =[x t (1) ;x t (2) ; . . . x t (I)];   (42)
  • and the input power and products variables as shown in equation 43.

  • u t {circle around (x)} =[u t (1) ;u t (2) ; . . . u t (J)];   (43)
  • In a further exemplary embodiment, equations 40 and 41 can then be rewritten in state-affine form as equations 44 and 45, below.

  • x t+1 {circle around (x)} =A(x t {circle around (x)} {circle around (x)}u t {circle around (x)})+Bu t {circle around (x)}  (44)

  • y t =C(x t {circle around (x)} {circle around (x)}u t {circle around (x)})+Du t {circle around (x)}  (45)
  • Next, it may be assumed that the coefficient matrices (A, B, C, D) may be linear functions of the scheduling parameters ρt, denoted as (A(ρt),B(ρt),C(ρt),D(ρt)), and in state-affine form as in equations 9 through 12. The scheduling parameters ρt may be nonlinear functions of the operating point or other known or accurately measured variables. For example, since the inputs ut {circle around (x)} are multiplicative and can be assumed to be known in real time or accurately measured, they can be absorbed into the scheduling parameters ρt, thereby possibly decreasing their dimension. The bilinear equations can then become those shown in equations 46 and 47 below.

  • x t+1 {circle around (x)} =A(ρ t)x t {circle around (x)} +B(ρ t)u t {circle around (x)}  (46)

  • y t =C(ρ t)x t {circle around (x)} +D(ρ t)u t {circle around (x)}  (47)
  • Equations 46 and 47 are in explicitly LPV form. Therefore, for functions f(xt,utt,vt) and h(xt,utt,vt) that may be sufficiently smooth and for which there may exist a well defined power series expansion in a neighborhood of the variables, there can exist an LPV approximation of the process.
  • Thus, with the absorption of the inputs ut {circle around (x)} into the scheduling parameter ρt, the equations may be linear time varying and, in some instances, may have a higher-dimensional ρt. This shows the dual roles of inputs and scheduling parameters and how they may be interchangeable in a variety of manners; for example, their difference may be their rate of variation with time. Knowledge of the scheduling can therefore often be derived from the underlying physics, chemistry or other fundamental information. LPV models can therefore be considered, on occasion, as gray-box models insofar as they may be able to incorporate considerable global information about the behavior of the process, including how the process dynamics can change with operating point.
  • The foregoing description and accompanying figures illustrate the principles, preferred embodiments and modes of operation of the invention. However, the invention should not be construed as being limited to the particular embodiments discussed above. Additional variations of the embodiments discussed above will be appreciated by those skilled in the art.
  • Therefore, the above-described embodiments should be regarded as illustrative rather than restrictive. Accordingly, it should be appreciated that variations to those embodiments can be made by those skilled in the art without departing from the scope of the invention as defined by the following claims.

Claims (12)

What is claimed is:
1. A method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, comprising:
sensing, with a sensor, dynamic system data;
storing sensed dynamic system data on a memory;
performing at least one of a nonlinear and linear autoregressive with inputs ((N)ARX) model fitting with one of a predetermined autoregressive with inputs (ARX) model and a predetermined nonlinear ARX (NARX) model to the stored dynamic system data, the fitting comprising:
performing a parameter estimation using stored dynamic system input and output data determined from a predetermined iteration of an algorithm for subspace identification with a processor, at least one of a set of ARX models of increasing order with a specified maximum order or a set of linear regression problems in terms of NARX models of increasing order and monomial degrees with a specified maximum order and degree, comprising: performing a model comparison, with a processor, to compute an Akaike's Information Criterion (AIC) of model fits for at least each ARX order and each NARX order and degree;
selecting a model that minimizes the AIC for at least one of a set of predetermined ARX models with a minimum AIC and a set of predetermined NARX models with a minimum AIC, wherein if more than one model achieves the desired minimum AIC, then selecting the ARX model or NARX model that further minimizes the number of estimated parameters that is also computed in the AIC computation;
performing a state space model fitting of a state space dynamic model of dynamic system operation that is parametric in its scheduling parameters, with a processor, using the ARX or NARX model selected as minimizing AIC, the state space model fitting comprising:
performing a corrected future calculation, by a processor, the corrected future calculation determining one or more corrected future outputs of dynamic system data through prediction and subtraction of an effect of one or more future inputs of dynamic system data on future outputs of the algorithm;
determining estimates of states with values whose elements are ordered as their predictive correlation for the future by performing a canonical variate analysis (CVA), with a processor, between corrected future outputs and past augmented inputs;
selecting one of a state order that minimizes the AIC or the lowest order of state orders that minimize the AIC;
inputting the estimates of states into one or more state equations;
performing a linear regression calculation on the one or more state equations to determine matrix coefficients of the state equations, and
providing a dynamic model of dynamic system data in the form of state equations with linear parameter varying matrix coefficients as functions of the scheduling parameters to extend subspace identification methods to LPV and nonlinear parameter varying systems with general scheduling functions.
2. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 1, wherein the ARX model fitting has a linear regression problem that assumes a maximum ARX order to be considered.
3. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions of claim 1, wherein the fitting of the ARX models further comprises:
a specified ARX order lagp wherein, for the specified ARX order lagp, and for every time t greater than lagp, the prediction of outputs yt using an autoregression of past outputs yt−i for 0<i<lagp+1, and an exogenous moving average comprising past inputs ut−i, and further augmented past inputs
[text missing or illegible when filed]
and augmented state estimates
[text missing or illegible when filed]
augmented respectively by Kronecker products that are similarly time shifted past scheduling functions ρt−i for −1<i<lagp+1.
4. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 1, wherein the dynamic system is an engine.
5. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 4, wherein the engine is an automotive engine.
6. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 4, wherein the engine is a turbine engine.
7. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 1, wherein the dynamic system is a chemical process.
8. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 7, wherein the chemical process occurs in a stirred tank reactor.
9. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 7, wherein the chemical process occurs in one or more distillation columns.
10. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 1, wherein the dynamic system is an aircraft.
11. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 1, wherein the dynamic system is an automobile.
12. The method of modelling a dynamic system by extending subspace identification methods to linear parameter varying (LPV) and nonlinear parameter varying systems with general scheduling functions, of claim 1, wherein the dynamic system is a ship.
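The order-selection step recited in claim 1 (fitting ARX models of increasing order and choosing the order that minimizes the AIC, with ties broken toward fewer estimated parameters) can be sketched as follows. The data, maximum order, AIC variant, and function name are illustrative assumptions; this is not the patented procedure, only a minimal demonstration of AIC-based ARX order selection.

```python
import numpy as np

def select_arx_order_by_aic(y, u, max_order):
    """Fit ARX(p) models y_t = sum_i a_i y_{t-i} + sum_i b_i u_{t-i} by least
    squares for p = 1..max_order on a common sample, compute
    AIC = N log(RSS/N) + 2k, and return the order that minimizes the AIC.
    The strict '<' comparison breaks ties toward the smaller (fewer-parameter)
    model, mirroring the tie-break in claim 1."""
    T = len(y)
    N = T - max_order                    # common sample so AIC values are comparable
    best_aic, best_p = np.inf, None
    for p in range(1, max_order + 1):
        # Regression matrix of lagged outputs and inputs for this order.
        X = np.array([np.concatenate([y[t - p:t][::-1], u[t - p:t][::-1]])
                      for t in range(max_order, T)])
        z = y[max_order:]
        theta, *_ = np.linalg.lstsq(X, z, rcond=None)
        rss = float(np.sum((z - X @ theta) ** 2))
        k = 2 * p                        # number of estimated coefficients
        aic = N * np.log(rss / N + 1e-12) + 2 * k   # small floor guards log(0)
        if aic < best_aic:
            best_aic, best_p = aic, p
    return best_p

# Noise-free data generated by a first-order ARX model:
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1]
```

On this noise-free first-order data, higher orders cannot reduce the residual further, so the 2k penalty makes the first-order model the minimizer.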
US14/520,791 2009-09-03 2014-10-22 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation Abandoned US20150039280A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/520,791 US20150039280A1 (en) 2009-09-03 2014-10-22 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US23974509P 2009-09-03 2009-09-03
US12/875,456 US8898040B2 (en) 2009-09-03 2010-09-03 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation
US14/520,791 US20150039280A1 (en) 2009-09-03 2014-10-22 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/875,456 Continuation US8898040B2 (en) 2009-09-03 2010-09-03 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation

Publications (1)

Publication Number Publication Date
US20150039280A1 true US20150039280A1 (en) 2015-02-05

Family

ID=43626135

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/875,456 Active 2033-01-16 US8898040B2 (en) 2009-09-03 2010-09-03 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation
US14/520,791 Abandoned US20150039280A1 (en) 2009-09-03 2014-10-22 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/875,456 Active 2033-01-16 US8898040B2 (en) 2009-09-03 2010-09-03 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation

Country Status (7)

Country Link
US (2) US8898040B2 (en)
EP (1) EP2465050A4 (en)
JP (2) JP2013504133A (en)
KR (1) KR101894288B1 (en)
CN (1) CN102667755B (en)
CA (1) CA2771583C (en)
WO (1) WO2011029015A2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845016A (en) * 2017-02-24 2017-06-13 西北工业大学 An event-driven measurement scheduling method
CN106934124A (en) * 2017-02-24 2017-07-07 西北工业大学 An adaptive variable sliding-window method based on measurement change detection
CN107122611A (en) * 2017-04-28 2017-09-01 中国石油大学(华东) Penicillin fermentation process quality dependent failure detection method
CN107194110A (en) * 2017-06-13 2017-09-22 哈尔滨工业大学 The Global robust parameter identification and output estimation method of a kind of double rate systems of linear variation parameter
US20180121488A1 (en) * 2016-11-02 2018-05-03 Oracle International Corporation Automatic linearizability checking
US10344615B2 (en) 2017-06-22 2019-07-09 General Electric Company Method and system for schedule predictive lead compensation
CN110220637A (en) * 2018-03-01 2019-09-10 通用汽车环球科技运作有限责任公司 Method for estimating the compressor inlet pressure of turbocharger
CN111324852A (en) * 2020-03-06 2020-06-23 常熟理工学院 Method of CSTR reactor time delay system based on state filtering and parameter estimation
CN111538237A (en) * 2020-03-20 2020-08-14 北京航空航天大学 Method for identifying and correcting non-linear light gray model of tilt rotor unmanned aerial vehicle

Families Citing this family (27)

Publication number Priority date Publication date Assignee Title
CN102667755B (en) * 2009-09-03 2016-08-03 华莱士·E.·拉里莫尔 Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation
US20120330717A1 (en) * 2011-06-24 2012-12-27 Oracle International Corporation Retail forecasting using parameter estimation
US20140278303A1 (en) * 2013-03-15 2014-09-18 Wallace LARIMORE Method and system of dynamic model identification for monitoring and control of dynamic machines with variable structure or variable operation conditions
AT512977B1 (en) * 2013-05-22 2014-12-15 Avl List Gmbh Method for determining a model of an output of a technical system
CA2913322C (en) 2013-06-14 2021-09-21 Wallace E. Larimore A method and system of dynamic model identification for monitoring and control of dynamic machines with variable structure or variable operation conditions
JP6239294B2 (en) * 2013-07-18 2017-11-29 株式会社日立ハイテクノロジーズ Plasma processing apparatus and method of operating plasma processing apparatus
WO2016194025A1 (en) * 2015-06-02 2016-12-08 日本電気株式会社 Linear parameter variation model estimation system, method, and program
US10634580B2 (en) * 2015-06-04 2020-04-28 The Boeing Company Systems and methods for analyzing flutter test data using damped sine curve fitting with the closed form shape fit
CN107168053B (en) * 2017-05-04 2020-10-30 南京理工大学 Finite field filter design method with random filter gain variation
CN107065557B (en) * 2017-05-04 2020-04-28 南京理工大学 Finite field filter design method with random filter gain variation
CN108764680B (en) * 2018-05-18 2022-04-19 秦皇岛港股份有限公司 Ship scheduling method for harbor forward parking area
CN109033021B (en) * 2018-07-20 2021-07-20 华南理工大学 Design method of linear equation solver based on variable parameter convergence neural network
WO2020118512A1 (en) * 2018-12-11 2020-06-18 大连理工大学 Lft-based aeroengine sensor and actuator fault diagnosis method
CN109840069B (en) * 2019-03-12 2021-04-09 烟台职业学院 Improved self-adaptive fast iterative convergence solution method and system
CN110298060B (en) * 2019-04-30 2023-04-07 哈尔滨工程大学 Indirect cooling gas turbine state space model identification method based on improved adaptive genetic algorithm
CN110513198B (en) * 2019-08-13 2021-07-06 大连理工大学 Active fault-tolerant control method for turbofan engine control system
CN110597230B (en) * 2019-09-24 2020-08-18 清华大学深圳国际研究生院 Active fault diagnosis method, computer readable storage medium and control method
JP7107339B2 (en) * 2019-09-30 2022-07-27 株式会社村田製作所 Nonlinear characteristic calculation method, nonlinear characteristic calculation program and its usage, and recording medium
KR102429079B1 (en) 2019-12-23 2022-08-03 주식회사 히타치하이테크 Plasma treatment method and wavelength selection method used for plasma treatment
CN112199856B (en) * 2020-10-23 2023-03-17 中国核动力研究设计院 Modelica-based nuclear reactor pipeline system model construction and strong coupling method and device
CN112487618A (en) * 2020-11-19 2021-03-12 华北电力大学 Distributed robust state estimation method based on equivalent information exchange
CN112610330B (en) * 2020-12-08 2023-05-09 孚创动力控制技术(启东)有限公司 Monitoring and analyzing system and method for running state of internal combustion engine
CN112613252B (en) * 2020-12-29 2024-04-05 大唐环境产业集团股份有限公司 Energy-saving operation method of absorption tower stirrer
CN113626983B (en) * 2021-07-06 2022-09-13 南京理工大学 Method for recursively predicting miss distance of antiaircraft projectile based on state equation
CN113641100B (en) * 2021-07-14 2023-11-28 苏州国科医工科技发展(集团)有限公司 Universal identification method for unknown nonlinear system
CN113704981B (en) * 2021-08-11 2024-04-09 武汉理工大学 Analysis method for time-varying dynamic behavior of high-speed bearing in temperature rising process
CN113742936A (en) * 2021-09-14 2021-12-03 贵州大学 Complex manufacturing process modeling and predicting method based on functional state space model

Citations (22)

Publication number Priority date Publication date Assignee Title
US5465321A (en) * 1993-04-07 1995-11-07 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Hidden markov models for fault detection in dynamic systems
US6173240B1 (en) * 1998-11-02 2001-01-09 Ise Integrated Systems Engineering Ag Multidimensional uncertainty analysis
US6349272B1 (en) * 1999-04-07 2002-02-19 Cadence Design Systems, Inc. Method and system for modeling time-varying systems and non-linear systems
US20020129038A1 (en) * 2000-12-18 2002-09-12 Cunningham Scott Woodroofe Gaussian mixture models in a data mining system
US20030125644A1 (en) * 2001-10-12 2003-07-03 Sharp Kabushiki Kaisha Method of preparing fluid-structure interactive numerical model and method of manufacturing fluttering robot using the same
US20050114098A1 (en) * 2002-03-28 2005-05-26 Yuichi Nagahara Random number generation method based on multivariate non-normal distribution, parameter estimation method thereof, and application to simulation of financial field and semiconductor ion implantation
US20060001673A1 (en) * 2004-06-30 2006-01-05 Mitsubishi Electric Research Laboratories, Inc. Variable multilinear models for facial synthesis
US7035782B2 (en) * 2000-06-02 2006-04-25 Cadence Design Systems, Inc. Method and device for multi-interval collocation for efficient high accuracy circuit simulation
US7082388B2 (en) * 2000-12-05 2006-07-25 Honda Giken Kogyo Kabushiki Kaisha Flutter test model
US20070028220A1 (en) * 2004-10-15 2007-02-01 Xerox Corporation Fault detection and root cause identification in complex systems
US20080126028A1 (en) * 2006-09-26 2008-05-29 Chang Gung University Method of reducing a multiple-inputs multiple-outputs (MIMO) interconnect circuit system in a global lanczos algorithm
US7487078B1 (en) * 2002-12-20 2009-02-03 Cadence Design Systems, Inc. Method and system for modeling distributed time invariant systems
US7493240B1 (en) * 2000-05-15 2009-02-17 Cadence Design Systems, Inc. Method and apparatus for simulating quasi-periodic circuit operating conditions using a mixed frequency/time algorithm
US7536282B1 (en) * 2004-09-01 2009-05-19 Alereon, Inc. Method and system for statistical filters and design of statistical filters
US20100225390A1 (en) * 2008-11-11 2010-09-09 Axis Network Technology Ltd. Resource Efficient Adaptive Digital Pre-Distortion System
US7840389B2 (en) * 2007-01-05 2010-11-23 Airbus France Method of optimizing stiffened panels under stress
US7881815B2 (en) * 2007-07-12 2011-02-01 Honeywell International Inc. Method and system for process control
US20110054863A1 (en) * 2009-09-03 2011-03-03 Adaptics, Inc. Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation
US20120109600A1 (en) * 2010-10-27 2012-05-03 King Fahd University Of Petroleum And Minerals Variable step-size least mean square method for estimation in adaptive networks
US8346693B2 (en) * 2009-11-24 2013-01-01 King Fahd University Of Petroleum And Minerals Method for hammerstein modeling of steam generator plant
US20130013086A1 (en) * 2011-07-06 2013-01-10 Honeywell International Inc. Dynamic model generation for implementing hybrid linear/non-linear controller
US20130030554A1 (en) * 2011-07-27 2013-01-31 Honeywell International Inc. Integrated linear/non-linear hybrid process controller

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JPH0250701A (en) * 1988-08-12 1990-02-20 Fuji Electric Co Ltd Adaptive control method
JPH0695707A (en) * 1992-09-11 1994-04-08 Toshiba Corp Model forecast controller
JPH0962652A (en) * 1995-08-29 1997-03-07 Nippon Telegr & Teleph Corp <Ntt> Maximum likelihood estimation method
US5991525A (en) * 1997-08-22 1999-11-23 Voyan Technology Method for real-time nonlinear system state estimation and control
US20050021319A1 (en) * 2003-06-03 2005-01-27 Peng Li Methods, systems, and computer program products for modeling nonlinear systems
JP2005242581A (en) * 2004-02-25 2005-09-08 Osaka Prefecture Parameter estimation method, state monitoring method, parameter estimation apparatus, state monitoring apparatus and computer program
JP2007245768A (en) * 2006-03-13 2007-09-27 Nissan Motor Co Ltd Steering device, automobile, and steering control method
US7603185B2 (en) * 2006-09-14 2009-10-13 Honeywell International Inc. System for gain scheduling control

Patent Citations (23)

Publication number Priority date Publication date Assignee Title
US5465321A (en) * 1993-04-07 1995-11-07 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Hidden markov models for fault detection in dynamic systems
US6173240B1 (en) * 1998-11-02 2001-01-09 Ise Integrated Systems Engineering Ag Multidimensional uncertainty analysis
US6349272B1 (en) * 1999-04-07 2002-02-19 Cadence Design Systems, Inc. Method and system for modeling time-varying systems and non-linear systems
US20090210202A1 (en) * 2000-05-15 2009-08-20 Cadence Design Systems, Inc. Method and apparatus for simulating quasi-periodic circuit operating conditions using a mixed frequency/time algorithm
US7493240B1 (en) * 2000-05-15 2009-02-17 Cadence Design Systems, Inc. Method and apparatus for simulating quasi-periodic circuit operating conditions using a mixed frequency/time algorithm
US7035782B2 (en) * 2000-06-02 2006-04-25 Cadence Design Systems, Inc. Method and device for multi-interval collocation for efficient high accuracy circuit simulation
US7082388B2 (en) * 2000-12-05 2006-07-25 Honda Giken Kogyo Kabushiki Kaisha Flutter test model
US20020129038A1 (en) * 2000-12-18 2002-09-12 Cunningham Scott Woodroofe Gaussian mixture models in a data mining system
US20030125644A1 (en) * 2001-10-12 2003-07-03 Sharp Kabushiki Kaisha Method of preparing fluid-structure interactive numerical model and method of manufacturing fluttering robot using the same
US20050114098A1 (en) * 2002-03-28 2005-05-26 Yuichi Nagahara Random number generation method based on multivariate non-normal distribution, parameter estimation method thereof, and application to simulation of financial field and semiconductor ion implantation
US7487078B1 (en) * 2002-12-20 2009-02-03 Cadence Design Systems, Inc. Method and system for modeling distributed time invariant systems
US20060001673A1 (en) * 2004-06-30 2006-01-05 Mitsubishi Electric Research Laboratories, Inc. Variable multilinear models for facial synthesis
US7536282B1 (en) * 2004-09-01 2009-05-19 Alereon, Inc. Method and system for statistical filters and design of statistical filters
US20070028220A1 (en) * 2004-10-15 2007-02-01 Xerox Corporation Fault detection and root cause identification in complex systems
US20080126028A1 (en) * 2006-09-26 2008-05-29 Chang Gung University Method of reducing a multiple-inputs multiple-outputs (MIMO) interconnect circuit system in a global lanczos algorithm
US7840389B2 (en) * 2007-01-05 2010-11-23 Airbus France Method of optimizing stiffened panels under stress
US7881815B2 (en) * 2007-07-12 2011-02-01 Honeywell International Inc. Method and system for process control
US20100225390A1 (en) * 2008-11-11 2010-09-09 Axis Network Technology Ltd. Resource Efficient Adaptive Digital Pre-Distortion System
US20110054863A1 (en) * 2009-09-03 2011-03-03 Adaptics, Inc. Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation
US8346693B2 (en) * 2009-11-24 2013-01-01 King Fahd University Of Petroleum And Minerals Method for hammerstein modeling of steam generator plant
US20120109600A1 (en) * 2010-10-27 2012-05-03 King Fahd University Of Petroleum And Minerals Variable step-size least mean square method for estimation in adaptive networks
US20130013086A1 (en) * 2011-07-06 2013-01-10 Honeywell International Inc. Dynamic model generation for implementing hybrid linear/non-linear controller
US20130030554A1 (en) * 2011-07-27 2013-01-31 Honeywell International Inc. Integrated linear/non-linear hybrid process controller

Cited By (10)

Publication number Priority date Publication date Assignee Title
US20180121488A1 (en) * 2016-11-02 2018-05-03 Oracle International Corporation Automatic linearizability checking
US10552408B2 (en) * 2016-11-02 2020-02-04 Oracle International Corporation Automatic linearizability checking of operations on concurrent data structures
CN106845016A (en) * 2017-02-24 2017-06-13 西北工业大学 An event-driven measurement scheduling method
CN106934124A (en) * 2017-02-24 2017-07-07 西北工业大学 An adaptive variable sliding-window method based on measurement change detection
CN107122611A (en) * 2017-04-28 2017-09-01 中国石油大学(华东) Penicillin fermentation process quality dependent failure detection method
CN107194110A (en) * 2017-06-13 2017-09-22 哈尔滨工业大学 The Global robust parameter identification and output estimation method of a kind of double rate systems of linear variation parameter
US10344615B2 (en) 2017-06-22 2019-07-09 General Electric Company Method and system for schedule predictive lead compensation
CN110220637A (en) * 2018-03-01 2019-09-10 通用汽车环球科技运作有限责任公司 Method for estimating the compressor inlet pressure of turbocharger
CN111324852A (en) * 2020-03-06 2020-06-23 常熟理工学院 Method of CSTR reactor time delay system based on state filtering and parameter estimation
CN111538237A (en) * 2020-03-20 2020-08-14 北京航空航天大学 Method for identifying and correcting non-linear light gray model of tilt rotor unmanned aerial vehicle

Also Published As

Publication number Publication date
JP2013504133A (en) 2013-02-04
CN102667755A (en) 2012-09-12
CN102667755B (en) 2016-08-03
KR20120092588A (en) 2012-08-21
US20110054863A1 (en) 2011-03-03
WO2011029015A3 (en) 2011-07-21
CA2771583A1 (en) 2011-03-10
JP2015181041A (en) 2015-10-15
CA2771583C (en) 2018-10-30
WO2011029015A2 (en) 2011-03-10
KR101894288B1 (en) 2018-09-03
EP2465050A4 (en) 2014-01-29
US8898040B2 (en) 2014-11-25
EP2465050A2 (en) 2012-06-20

Similar Documents

Publication Publication Date Title
US8898040B2 (en) Method and system for empirical modeling of time-varying, parameter-varying, and nonlinear systems via iterative linear subspace computation
US10996643B2 (en) Method and system of dynamic model identification for monitoring and control of dynamic machines with variable structure or variable operation conditions
Gan et al. A variable projection approach for efficient estimation of RBF-ARX model
Peng et al. A parameter optimization method for radial basis function type models
US20140278303A1 (en) Method and system of dynamic model identification for monitoring and control of dynamic machines with variable structure or variable operation conditions
Martinez et al. H-infinity set-membership observer design for discrete-time LPV systems
Ding et al. Convergence analysis of estimation algorithms for dual-rate stochastic systems
Gaggero et al. Approximate dynamic programming for stochastic N-stage optimization with application to optimal consumption under uncertainty
Otto et al. Learning Bilinear Models of Actuated Koopman Generators from Partially Observed Trajectories
Larimore Identification of nonlinear parameter-varying systems via canonical variate analysis
Li et al. Extended explicit pseudo two-step RKN methods for oscillatory systems y ″+ M y= f (y)
Bergboer et al. An efficient implementation of Maximum Likelihood identification of LTI state-space models by local gradient search
Goethals et al. Subspace intersection identification of Hammerstein-Wiener systems
Liu et al. Iterative state and parameter estimation algorithms for bilinear state-space systems by using the block matrix inversion and the hierarchical principle
Larimore et al. ADAPT-LPV software for identification of nonlinear parameter-varying systems
Martens Learning the linear dynamical system with asos
Larimore et al. CVA identification of nonlinear systems with LPV state-space models of affine dependence
Robles et al. N4SID-VAR Method for Multivariable Discrete Linear Time-variant System Identification.
Weber et al. ABS: A formally correct software tool for space-efficient symbolic synthesis
Yu et al. Identification of lti systems
Scardapane et al. A preliminary study on transductive extreme learning machines
Gubarev et al. Interval state estimator for linear systems with known structure
Bidar et al. parameter models, Chemometrics and Intelligent Laboratory Systems
Gubarev et al. Identification of Regularized Models in the Linear Regression Class
Holden Tractable estimation and smoothing of highly non-linear dynamic state-space models

Legal Events

Date Code Title Description
AS Assignment

Owner name: ADAPTICS, INC., VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LARIMORE, WALLACE E.;REEL/FRAME:034006/0903

Effective date: 20140814

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION