CN101807046A - Online modeling method based on extreme learning machine with adjustable structure - Google Patents

Online modeling method based on extreme learning machine with adjustable structure Download PDF

Info

Publication number
CN101807046A
CN101807046A; application CN201010119408A; granted publication CN101807046B
Authority
CN
China
Prior art keywords
ball
new
training
matrix
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010119408
Other languages
Chinese (zh)
Other versions
CN101807046B (en
Inventor
刘民
李国虎
董明宇
吴澄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2010101194082A priority Critical patent/CN101807046B/en
Publication of CN101807046A publication Critical patent/CN101807046A/en
Application granted granted Critical
Publication of CN101807046B publication Critical patent/CN101807046B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention discloses an online modeling method based on an extreme learning machine (ELM) with an adjustable structure. It belongs to the fields of automatic control, information technology, and advanced manufacturing, and in particular concerns adjusting the structure and parameters of an extreme learning machine during its online learning process so as to accommodate newly acquired data. The method comprises the following steps: define the concept of a class ball; in each learning round, judge whether the newly acquired data falls outside the class ball and reduces modeling accuracy; if so, add a new hidden node; if not, only adjust the center and radius of the class ball; finally, update the output-layer weights of the extreme learning machine. The method introduces the class ball to enclose the data already used in previous training rounds. When determining the parameters of a newly added hidden node, the node's output at the point of the class ball closest to the new sample is made small enough that its output on the previously used data can be treated as zero, and an update formula for the output-layer weights is derived. Adding hidden nodes in this way improves online modeling accuracy.

Description

An online modeling method based on a structure-adjustable extreme learning machine
Technical field
The invention belongs to the fields of automatic control, information technology and advanced manufacturing, and specifically relates to adjusting the structure and parameters of an extreme learning machine during its online learning process so as to accommodate newly acquired data.
Background technology
In many modeling settings oriented to the detection, control and optimization of real industrial processes, the modeling data typically arrive sequentially. For this situation, academia and industry have proposed online modeling (online learning) methods such as RAN, RANEKF, MRAN, GAP-RBF and GGAP-RBF. These methods adjust the model structure and parameters according to newly arriving data, accommodating new information without remodeling on all previously acquired data. However, they mostly suffer from drawbacks such as many tunable parameters and slow training, which seriously limit their practical effectiveness. Although the recently proposed OS-ELM method reduces the number of tunable parameters to one, it lacks structural adjustability, so its ability to accommodate new information is relatively limited and model accuracy cannot be improved further.
Summary of the invention
To address the online modeling difficulties above, the invention proposes an online modeling method based on a structure-adjustable extreme learning machine, abbreviated SAO-ELM. In SAO-ELM, the basic network structure is identical to that of an ELM (Extreme Learning Machine), but the number of hidden nodes can be adjusted during online modeling. The main difficulty in adding hidden nodes during modeling is that the training objective of SAO-ELM is to minimize the error of the adjusted model over all training data, yet in each online learning round the previously used training data must be discarded, so the output of a newly added hidden node on those discarded data is unknown. The invention therefore defines the concept of a class ball, which encloses all previously used training data; its center and radius are recorded and updated as new data arrive. When a hidden node is added, its activation function is chosen as a Gaussian; a suitable center and width are then selected so that the node's output at the point of the class ball closest to the new sample is small enough that its output on the discarded data can be treated as 0. Under these conditions, an iterative update formula for the output-layer weights when adding hidden nodes can be derived, realizing online modeling based on a structure-adjustable extreme learning machine.
An online modeling method based on a structure-adjustable extreme learning machine, realized by the following steps:
Step (1): Model Selection and parameter initialization
Set the number of hidden nodes M of the single-hidden-layer extreme learning machine; the number of input nodes equals the training sample dimension n, and the number of output nodes equals the dimension m of the training target;
The activation function G(a_i, b_i, x) of each hidden node is a Gaussian; the center a_i and width b_i of each hidden node, i = 1, 2, …, M, are determined at random;
Train the extreme learning machine on the initial N samples X_0 = {(x_i, t_i) | i = 1, …, N} to obtain the initial hidden-layer output matrix H_0 and the output-layer connection matrix β_0, where

β_0 = (H_0^T H_0)^{-1} H_0^T T_0

H_0 = [G(a_1, b_1, x_1) … G(a_M, b_M, x_1); … ; G(a_1, b_1, x_N) … G(a_M, b_M, x_N)]  (N × M)

T_0 = [t_1^T; … ; t_N^T]  (N × m)

Initialize the matrix K_0 = H_0^T H_0 and calculate P_0 = K_0^{-1}; save β_0, K_0 and P_0;
Enclose the initial training sample set X_0 with a class ball O so that the ball just encloses all sample points of X_0, and determine the ball's center C_0 and radius R_0;
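As an illustration (not part of the patent), the step (1) initialization can be sketched in Python with NumPy. The Gaussian activation form exp(-‖x - a‖/b) matches the width bound used later in the patent; the centroid-based ball is a simple stand-in, since the patent does not spell out how C_0 and R_0 are computed:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian(A, B, X):
    """Hidden-layer outputs G(a_i, b_i, x) = exp(-||x - a_i|| / b_i).
    A: (M, n) centres, B: (M,) widths, X: (N, n) inputs -> (N, M)."""
    D = np.linalg.norm(X[:, None, :] - A[None, :, :], axis=2)
    return np.exp(-D / B[None, :])

# toy initial data: N samples of dimension n, targets of dimension m
N, n, m, M = 40, 3, 1, 10
X0 = rng.normal(size=(N, n))
T0 = np.sin(X0.sum(axis=1, keepdims=True))

A = rng.normal(size=(M, n))        # random centres a_i
B = rng.uniform(0.5, 2.0, size=M)  # random widths b_i

H0 = gaussian(A, B, X0)            # initial hidden-layer output matrix (N x M)
K0 = H0.T @ H0                     # intermediate matrix K_0 = H_0^T H_0
P0 = np.linalg.inv(K0)             # P_0 = K_0^{-1} (assumes full column rank)
beta0 = P0 @ H0.T @ T0             # beta_0 = (H_0^T H_0)^{-1} H_0^T T_0

# class ball around X0: centroid centre, radius = farthest sample
# (an approximation, not a true minimum enclosing ball)
C0 = X0.mean(axis=0)
R0 = np.linalg.norm(X0 - C0, axis=1).max()
```

Here `C0`/`R0` only approximate the patent's requirement that the ball just enclose the initial samples.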
Step (2): on-line study process
When a new training datum x_1 = (x_{N+1}, t_{N+1}) arrives, train the ELM according to the following steps so that it retains the knowledge of X_0 while accommodating the new knowledge contained in x_1:
Step (2.1): keep the network structure unchanged and adjust only the output-layer connection matrix β_0 according to x_1; the updated output-layer connection matrix is β_1, and the matrices K_0 and P_0 are updated to K_1 and P_1:

β_1 = β_0 + P_1 H_1^T (T_1 - H_1 β_0)
P_1 = K_1^{-1} = P_0 - P_0 H_1^T (I + H_1 P_0 H_1^T)^{-1} H_1 P_0
K_1 = K_0 + H_1^T H_1

where H_1 and T_1 are, respectively, the hidden-layer output matrix and the training-target matrix of the ELM for the new sample x_1:

H_1 = [G(a_1, b_1, x_{N+1}) … G(a_M, b_M, x_{N+1})]  (1 × M)
T_1 = [t_{(N+1)1} … t_{(N+1)m}] = [t_{N+1}^T]  (1 × m)
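The step (2.1) update above can be sketched numerically; this is a minimal illustration (not the patent's code) that also checks the recursive result against refitting on all data:

```python
import numpy as np

def rls_update(beta0, P0, K0, H1, T1):
    """Step (2.1) sketch: fold one new sample into beta, P, K without
    revisiting old data. H1: (1, M) hidden outputs on the new sample,
    T1: (1, m) its target."""
    I = np.eye(H1.shape[0])
    P1 = P0 - P0 @ H1.T @ np.linalg.inv(I + H1 @ P0 @ H1.T) @ H1 @ P0
    K1 = K0 + H1.T @ H1
    beta1 = beta0 + P1 @ H1.T @ (T1 - H1 @ beta0)
    return beta1, P1, K1

# toy check that the recursive update matches refitting on all data
rng = np.random.default_rng(1)
M, m = 5, 1
H0 = rng.normal(size=(20, M)); T0 = rng.normal(size=(20, m))
K0 = H0.T @ H0; P0 = np.linalg.inv(K0); beta0 = P0 @ H0.T @ T0
H1 = rng.normal(size=(1, M)); T1 = rng.normal(size=(1, m))

beta1, P1, K1 = rls_update(beta0, P0, K0, H1, T1)
H = np.vstack([H0, H1]); T = np.vstack([T0, T1])
beta_batch = np.linalg.inv(H.T @ H) @ H.T @ T   # batch least squares
```

The agreement of `beta1` with `beta_batch` is exactly what makes discarding old data possible: only β, P and K need to be stored.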
Step (2.2): compute the training error e of the parameter-adjusted ELM on the new sample x_1 and judge whether x_1 lies outside the ball O. If x_1 is outside O and e exceeds a set threshold, discard all the above adjustments and go to step (2.3); otherwise, go to step (3);
Step (2.3): add one hidden node, setting its center a to x_1 and determining its width b by

b ≤ -‖x_c - a‖ / ln ε

where ε is a preset threshold and x_c is the point on the class ball O closest to the new sample point x_1, determined by

x_c = x_o + λ_1 (x_a - x_o)

where x_o is the center of ball O and λ_1 is determined by

λ_1 = ‖x_c - x_o‖ / ‖x_a - x_o‖ = R / ‖x_a - x_o‖

with x_a the coordinate of the new sample point x_1. Then readjust the output-layer connection matrix β_0 to β_1 and correspondingly update K_0 and P_0 to K_1 and P_1:

β_1 = P_1 [K_0 β_0 + H_1^T T_1; H_11^T T_1]
K_1 = [K_0 + H_1^T H_1, H_1^T H_11; H_11^T H_1, H_11^T H_11]
P_1 = K_1^{-1} = [A_11, A_12; A_21, A_22]

A_11 = P_1′ + P_1′ (H_1^T H_11) R^{-1} (H_11^T H_1) P_1′
A_12 = -P_1′ (H_1^T H_11) R^{-1},  A_21 = A_12^T,  A_22 = R^{-1}
R = H_11^T H_11 - (H_11^T H_1) P_1′ (H_1^T H_11)
P_1′ = (K_0 + H_1^T H_1)^{-1} = P_0 - P_0 H_1^T (I + H_1 P_0 H_1^T)^{-1} H_1 P_0,  P_0 = K_0^{-1}

where H_01 and H_11 are the hidden-layer output matrices of the newly added hidden node on the former sample set X_0 and on the new sample point x_1, respectively; N_1 and L are the number of sample points in x_1 and the number of newly added hidden nodes. This method only considers new sample points arriving one at a time, so N_1 = L = 1;
Step (3): update the parameters of the class ball O
Update the center and radius of the class ball O so that the new ball O_1 just encloses all sample points of X_0 and x_1. The update formulas are:

R_new = ‖x_a - x_b‖ / 2
x_o_new = (x_a + x_b) / 2

where x_a and x_b are, respectively, the coordinate of the new sample point x_1 and the point on ball O farthest from x_1; x_b is computed by

x_b = x_o + λ_2 (x_o - x_a)

where x_o is the center of ball O and λ_2 is computed by

λ_2 = ‖x_b - x_o‖ / ‖x_o - x_a‖ = R / ‖x_o - x_a‖
Based on the above online modeling method, extensive simulation tests were carried out. The results show that the proposed online modeling method achieves higher learning accuracy than other online modeling methods, and models built with it also generalize better.
Description of drawings
Fig. 1: flow chart of the steps implementing the proposed online modeling method based on a structure-adjustable extreme learning machine.
Fig. 2: schematic of the class ball enclosing all data used in training, where the small ball O is the class ball enclosing all previously used data except the newly added training datum, and the large class ball O_1 encloses all previously used data together with the new datum.
Fig. 3: schematic of the Gaussian function, where the central peak is its output at the Gaussian center and the small edge values are its output far from the center.
Fig. 4: training accuracy and validation accuracy versus the number of hidden nodes in the simulation experiment, where the red curve shows training accuracy and the green curve validation accuracy as functions of the number of hidden nodes.
Fig. 5: schematic of the online modeling process in the simulation experiment, where Fig. 5.1 shows how validation accuracy changes as training data increase and Fig. 5.2 shows how the number of hidden nodes changes as training data increase.
Embodiment
The main advantage of the proposed online modeling method based on a structure-adjustable extreme learning machine is that the network structure can be adjusted as needed during online modeling. In line with the characteristics of online modeling, the model learns only when new training data arrive and otherwise uses the existing model for prediction; as training data accumulate, prediction accuracy improves progressively.
The steps of the proposed online modeling method based on a structure-adjustable extreme learning machine are elaborated below.
First step: model selection
For the proposed method, model selection only involves determining the initial number of hidden nodes M of the ELM. The invention uses cross-validation to determine it: the initial training data are split into two parts, one for training and one for validation. Starting from a small number of hidden nodes, the ELM is first trained on the training part, then the validation error is computed on the validation part; the number of hidden nodes is increased step by step, repeating these training and validation steps. Finally, the number of hidden nodes that minimizes the validation error is selected as the initial number of hidden nodes.
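The cross-validation loop for choosing M can be sketched as below; the `gaussian` and `fit_elm` helpers are assumptions of this example, not part of the patent:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian(A, B, X):
    # hidden-layer outputs exp(-||x - a_i|| / b_i)
    D = np.linalg.norm(X[:, None, :] - A[None, :, :], axis=2)
    return np.exp(-D / B[None, :])

def fit_elm(X, T, M, rng):
    # random centres/widths, least-squares output weights
    A = rng.normal(size=(M, X.shape[1]))
    B = rng.uniform(0.5, 2.0, size=M)
    H = gaussian(A, B, X)
    beta = np.linalg.pinv(H) @ T
    return A, B, beta

# toy data, split into training and validation parts
X = rng.normal(size=(120, 2))
T = np.sin(X[:, :1]) + 0.05 * rng.normal(size=(120, 1))
Xtr, Ttr, Xva, Tva = X[:80], T[:80], X[80:], T[80:]

# increase M step by step and keep the smallest validation error
best_M, best_err = None, np.inf
for M in range(2, 21, 2):
    A, B, beta = fit_elm(Xtr, Ttr, M, rng)
    err = np.mean((gaussian(A, B, Xva) @ beta - Tva) ** 2)
    if err < best_err:
        best_M, best_err = M, err
```

The selected `best_M` then serves as the initial number of hidden nodes.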
Second step: model initialization
Model initialization is the initialization of the model parameters. The proposed method adopts the ELM network structure with Gaussian activation functions for the hidden nodes, so the first parameters to initialize are the Gaussian centers a_i and widths b_i, i = 1, 2, …, M; a_i and b_i are each drawn from random numbers following a specified distribution. Next, the number of samples participating in initial training is determined: for models used for classification problems, the initial number of training samples is chosen as M + 100; for models used for regression problems, as M + 50. The principle for determining this number is to make H_0 have full column rank. Finally, a class ball S_0 of minimum radius is constructed to enclose the initial training data; its center and radius are denoted C_0 and R_0.
Third step: determine the initial values of the training-process data
The training-process initial data comprise the hidden-layer output matrix H, the output-layer connection matrix β, the intermediate matrix K needed to compute the output-layer connection matrix, and its inverse K^{-1}.
Let the initial training data be X_0 = {(x_i, t_i) | i = 1, …, N}; the corresponding hidden-layer output matrix is

H_0 = [G(a_1, b_1, x_1) … G(a_M, b_M, x_1); … ; G(a_1, b_1, x_N) … G(a_M, b_M, x_N)]  (N × M)

Following the empirical risk minimization (ERM) principle, the objective for the output-layer connection matrix β is min ‖F - T_0‖ = min ‖H_0 β - T_0‖, with H_0 of size N × M, β of size M × m and T_0 of size N × m, where T_0 is the training target:

T_0 = [t_1^T; … ; t_N^T]  (N × m)

From matrix theory, the solution of this optimization problem is β_0 = H_0^† T_0, where H_0^† is the Moore-Penrose pseudoinverse of H_0. When H_0 has full column rank, i.e. rank(H_0) = M,

H_0^† = (H_0^T H_0)^{-1} H_0^T

For convenience of derivation, introduce the intermediate matrix K = H^T H; then

K_0 = H_0^T H_0,  β_0 = K_0^{-1} H_0^T T_0

Writing P = K^{-1}, we have β_0 = P_0 H_0^T T_0.
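The identity relied on here (for full-column-rank H_0, the pseudoinverse reduces to (H_0^T H_0)^{-1} H_0^T) can be checked numerically with a small random instance; this is only an illustrative check, not the patent's code:

```python
import numpy as np

rng = np.random.default_rng(4)
H0 = rng.normal(size=(25, 6))   # tall random matrix, full column rank a.s.
T0 = rng.normal(size=(25, 2))

K = H0.T @ H0                   # intermediate matrix K = H^T H
P = np.linalg.inv(K)            # P = K^{-1}
beta_via_P = P @ H0.T @ T0      # beta_0 = P H0^T T0
beta_via_pinv = np.linalg.pinv(H0) @ T0  # beta_0 = H0^+ T0
```

Both routes give the same β_0, which is why the method can carry only K (and P) forward instead of the data.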
Fourth step: fit the new data only by adjusting the ELM output-layer weights, without adjusting the ELM structure
Let the new data be X_1 = {(x_i, t_i) | i = N+1, …, N+N_1} (in this method N_1 is in fact chosen as 1; the case N_1 > 1 can always be reduced to the case N_1 = 1). The corresponding hidden-layer output matrix and training target are:

H_1 = [G(a_1, b_1, x_{N+1}) … G(a_M, b_M, x_{N+1}); … ; G(a_1, b_1, x_{N+N_1}) … G(a_M, b_M, x_{N+N_1})]  (N_1 × M)
T_1 = [t_{N+1}^T; … ; t_{N+N_1}^T]  (N_1 × m)

The objective for the new output-layer connection matrix β is now

min ‖ [H_0; H_1] β - [T_0; T_1] ‖

whose solution is

β_1 = K_1^{-1} [H_0; H_1]^T [T_0; T_1],  K_1 = [H_0; H_1]^T [H_0; H_1]

To realize online modeling, β_1 must be independent of H_0 and T_0, and depend only on P_0, K_0 and β_0. Simple algebra gives

β_1 = β_0 + K_1^{-1} H_1^T (T_1 - H_1 β_0)
K_1 = K_0 + H_1^T H_1

These two formulas constitute the ELM training algorithm when only the parameters, not the structure, are adjusted.
Fifth step: judge whether the training error on the new data X_1 of the ELM obtained with the fourth-step training method is satisfactory, and also judge whether X_1 lies outside the class ball S_0. If the training error is unsatisfactory and X_1 is outside S_0, go to the sixth step; otherwise, go to the seventh step.
Sixth step: add a hidden node, then adjust the output-layer connection weights
When a hidden node is added, the hidden-layer output matrix becomes

[H_0, H_01; H_1, H_11]

where H_01 is the output of the newly added hidden node on the data set X_0 already used in training, and H_11 is its output on the newly added, not yet used data set X_1. Correspondingly, the objective for the new output-layer connection matrix β becomes

min ‖ [H_0, H_01; H_1, H_11] β - [T_0; T_1] ‖

However, because the already used data set X_0 was discarded before the hidden node was added, H_01 is unknown; this unknown-H_01 problem must be handled before β can be obtained.
As the Gaussian schematic (Fig. 3) shows, a hidden node's output far from its center can be treated as 0. Suppose the data sets X_0 and X_1 are as shown in Fig. 2, where S_0 encloses the data already used in training and point A is the position of the new datum. If point A is chosen as the center of the new hidden node, it suffices to make its output at point C small enough, i.e.

e^{-‖x_c - a‖ / b} ≤ ε  ⇒  b ≤ -‖x_c - a‖ / ln ε

where ε is a preselected threshold and x_c is the coordinate of point C, determined by the two formulas

x_c = x_o + λ_1 (x_a - x_o)
λ_1 = ‖x_c - x_o‖ / ‖x_a - x_o‖ = R / ‖x_a - x_o‖

where x_o is the center of the class ball S_0, x_a is the coordinate of point A, and R is the radius of S_0.
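The choice of the new node's center and width can be sketched as follows (an illustration under the patent's activation form exp(-‖x - a‖/b); the helper name is an assumption):

```python
import numpy as np

def new_node_params(x_o, R, x_a, eps=1e-3):
    """Centre the new node at the new sample x_a; bound the width b so
    that the node's output at C, the closest point of the class ball
    (centre x_o, radius R) to x_a, is at most eps."""
    lam1 = R / np.linalg.norm(x_a - x_o)          # lambda_1 = R / ||x_a - x_o||
    x_c = x_o + lam1 * (x_a - x_o)                # closest point C on the sphere
    b = -np.linalg.norm(x_c - x_a) / np.log(eps)  # width bound (log eps < 0)
    return x_a, b, x_c

# ball centred at the origin with R = 1, new sample at distance 4
a, b, x_c = new_node_params(np.zeros(2), 1.0, np.array([4.0, 0.0]))
# with this b, the node's output at C equals eps, and it is smaller
# everywhere inside the ball
out_at_c = np.exp(-np.linalg.norm(x_c - a) / b)
```

With equality in the bound, the output at C is exactly ε, so the node's output on the discarded data inside the ball can be treated as 0.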
With the center a and width b of the newly added hidden node selected as above, its output at point C is smaller than the very small real number ε, and its output within the class ball S_0 is smaller still, so it can be treated as 0; hence H_01 can be treated as a zero matrix. The optimization objective after adding the hidden node then becomes

min ‖ [H_0, 0; H_1, H_11] β - [T_0; T_1] ‖

Solving this optimization problem gives

β_1 = ( [H_0, 0; H_1, H_11]^T [H_0, 0; H_1, H_11] )^{-1} [H_0, 0; H_1, H_11]^T [T_0; T_1]

Writing

H = [H_0; H_1],  δH = [0; H_11]

this becomes

β_1 = ( [H, δH]^T [H, δH] )^{-1} [H, δH]^T [T_0; T_1]

with

K_1 = [H, δH]^T [H, δH] = [H^T H, H^T δH; δH^T H, δH^T δH]

Let

K_1^{-1} = A = [A_11, A_12; A_21, A_22]

whose blocks are

A_11 = (H^T H)^{-1} + (H^T H)^{-1} (H^T δH) R^{-1} (δH^T H) (H^T H)^{-1}
A_12 = -(H^T H)^{-1} (H^T δH) R^{-1}
A_21 = A_12^T
A_22 = R^{-1}

where R = δH^T δH - (δH^T H)(H^T H)^{-1}(H^T δH). Substituting the expressions for H and δH gives:

A_11 = (K_0 + H_1^T H_1)^{-1} + (K_0 + H_1^T H_1)^{-1} (H_1^T H_11) R^{-1} (H_11^T H_1) (K_0 + H_1^T H_1)^{-1}
A_12 = -(K_0 + H_1^T H_1)^{-1} (H_1^T H_11) R^{-1},  A_21 = A_12^T,  A_22 = R^{-1}
R = H_11^T H_11 - (H_11^T H_1)(K_0 + H_1^T H_1)^{-1}(H_1^T H_11)

Combining the formulas above, the update formulas for K, P and β when a hidden node is added are:

β_1 = P_1 [K_0 β_0 + H_1^T T_1; H_11^T T_1],  K_1 = [K_0 + H_1^T H_1, H_1^T H_11; H_11^T H_1, H_11^T H_11],  P_1 = K_1^{-1} = [A_11, A_12; A_21, A_22]
A_11 = P_1′ + P_1′ (H_1^T H_11) R^{-1} (H_11^T H_1) P_1′,  A_12 = -P_1′ (H_1^T H_11) R^{-1},  A_21 = A_12^T,  A_22 = R^{-1}
R = H_11^T H_11 - (H_11^T H_1) P_1′ (H_1^T H_11)
P_1′ = (K_0 + H_1^T H_1)^{-1} = P_0 - P_0 H_1^T (I + H_1 P_0 H_1^T)^{-1} H_1 P_0,  P_0 = K_0^{-1}
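The sixth-step update formulas can be checked numerically. The sketch below (an illustration, not the patent's code) builds random H_0, H_1, H_11, applies the stored-quantity update, and compares with refitting the stacked system with H_01 set to 0:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, m = 30, 6, 1
H0 = rng.normal(size=(N, M)); T0 = rng.normal(size=(N, m))
K0 = H0.T @ H0; P0 = np.linalg.inv(K0); beta0 = P0 @ H0.T @ T0

H1 = rng.normal(size=(1, M))    # old nodes on the new sample (1 x M)
H11 = rng.normal(size=(1, 1))   # new node on the new sample (1 x 1)
T1 = rng.normal(size=(1, m))

# P1' = (K0 + H1^T H1)^{-1} via the Sherman-Morrison-Woodbury form
P1p = P0 - P0 @ H1.T @ np.linalg.inv(np.eye(1) + H1 @ P0 @ H1.T) @ H1 @ P0
Rm = H11.T @ H11 - (H11.T @ H1) @ P1p @ (H1.T @ H11)   # Schur complement R
Rinv = np.linalg.inv(Rm)
A11 = P1p + P1p @ (H1.T @ H11) @ Rinv @ (H11.T @ H1) @ P1p
A12 = -P1p @ (H1.T @ H11) @ Rinv
P1 = np.block([[A11, A12], [A12.T, Rinv]])
beta1 = P1 @ np.vstack([K0 @ beta0 + H1.T @ T1, H11.T @ T1])

# batch check: solve the stacked system with H01 = 0
Hfull = np.block([[H0, np.zeros((N, 1))], [H1, H11]])
beta_batch = np.linalg.pinv(Hfull) @ np.vstack([T0, T1])
```

`beta1` agrees with `beta_batch`, confirming the update uses only stored quantities (β_0, K_0, P_0) plus the new sample.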
Seventh step: update the class ball S_0 to S_1, so that S_1 contains the data X_0 in S_0 as well as the new data X_1
As Fig. 2 shows, the center of S_1 should be at the midpoint of A and B, and its radius should be half of the segment AB, where point A is the position of the new datum and point B is the point on ball S_0 farthest from A. The new class ball's radius and center are therefore

R_new = ‖x_a - x_b‖ / 2
x_o_new = (x_a + x_b) / 2

where x_b, the coordinate of point B, is determined by the two formulas

x_b = x_o + λ_2 (x_o - x_a)
λ_2 = ‖x_b - x_o‖ / ‖x_o - x_a‖ = R / ‖x_o - x_a‖
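The seventh-step ball update can be sketched as below. The guard for a new point already inside the ball is an assumption of this sketch (the patent's formulas are stated for the outside case):

```python
import numpy as np

def update_ball(x_o, R, x_a):
    """Enlarge the class ball (centre x_o, radius R) so that it also
    encloses the new sample x_a; x_b is the point on the old sphere
    farthest from x_a."""
    d = np.linalg.norm(x_o - x_a)
    if d <= R:                      # new point already inside: keep the ball
        return x_o, R
    lam2 = R / d                    # lambda_2 = R / ||x_o - x_a||
    x_b = x_o + lam2 * (x_o - x_a)  # farthest point B on the old sphere
    x_o_new = (x_a + x_b) / 2       # midpoint of A and B
    R_new = np.linalg.norm(x_a - x_b) / 2
    return x_o_new, R_new

# old ball centred at the origin with R = 1, new point at distance 3:
# B = (-1, 0, 0), so the new ball has centre (1, 0, 0) and radius 2
x_o_new, R_new = update_ball(np.zeros(3), 1.0, np.array([3.0, 0.0, 0.0]))
```

The new ball is tangent to the old one from inside (its radius is (d + R)/2 with d the distance from the old center to the new point), so all previously enclosed data remain enclosed.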
The flow chart of the proposed method is shown in Fig. 1.
Based on the above online modeling method based on a structure-adjustable extreme learning machine, extensive simulation experiments were performed; for reasons of space, only the results on real steelmaking continuous-casting quality prediction data are given here. This data set comes from an industrial site; the input dimension is 84 and the output dimension is 1, with 1056 training samples and 508 test samples.
The SAO-ELM method was compared with a batch algorithm, the BP neural network algorithm, and with the OS-ELM method. The BP neural network algorithm is a classical neural network training method but not an online learning algorithm; OS-ELM is an online learning algorithm, and it differs from the proposed algorithm in lacking structural adjustability, i.e. it omits the fifth, sixth and seventh steps above. The comparison results are given in Table 1:
Table 1: performance comparison of SAO-ELM and other algorithms
As Table 1 shows, SAO-ELM achieves the best training accuracy and test accuracy among the compared modeling methods, and its training time is nearly an order of magnitude shorter than that of the BP algorithm. Fig. 5 shows the online modeling process: as modeling proceeds, the number of hidden nodes keeps increasing and the learning accuracy keeps improving, with consistent trends, which also demonstrates the validity of the proposed SAO-ELM.

Claims (1)

1. An online modeling method based on a structure-adjustable extreme learning machine (ELM), characterized in that the method is realized on a computer by the following steps:
Step (1): Model Selection and parameter initialization
Set the number of hidden nodes M of the single-hidden-layer extreme learning machine; the number of input nodes equals the training sample dimension n, and the number of output nodes equals the dimension m of the training target;
The activation function G(a_i, b_i, x) of each hidden node is a Gaussian; the center a_i and width b_i of each hidden node, i = 1, 2, …, M, are determined at random;
Train the extreme learning machine on the initial N samples X_0 = {(x_i, t_i) | i = 1, …, N} to obtain the initial hidden-layer output matrix H_0 and the output-layer connection matrix β_0, where

β_0 = (H_0^T H_0)^{-1} H_0^T T_0
H_0 = [G(a_1, b_1, x_1) … G(a_M, b_M, x_1); … ; G(a_1, b_1, x_N) … G(a_M, b_M, x_N)]  (N × M)
T_0 = [t_1^T; … ; t_N^T]  (N × m)

Initialize the matrix K_0 = H_0^T H_0 and calculate P_0 = K_0^{-1}; save β_0, K_0 and P_0;
Enclose the initial training sample set X_0 with a class ball O so that the ball just encloses all sample points of X_0, and determine the ball's center C_0 and radius R_0;
Step (2): on-line study process
When a new training datum x_1 = (x_{N+1}, t_{N+1}) arrives, train the ELM according to the following steps so that it retains the knowledge of X_0 while accommodating the new knowledge contained in x_1:
Step (2.1): keep the network structure unchanged and adjust only the output-layer connection matrix β_0 according to x_1; the updated output-layer connection matrix is β_1, and the matrices K_0 and P_0 are updated to K_1 and P_1:

β_1 = β_0 + P_1 H_1^T (T_1 - H_1 β_0)
P_1 = K_1^{-1} = P_0 - P_0 H_1^T (I + H_1 P_0 H_1^T)^{-1} H_1 P_0
K_1 = K_0 + H_1^T H_1

where H_1 and T_1 are, respectively, the hidden-layer output matrix and the training-target matrix of the ELM for the new sample x_1:

H_1 = [G(a_1, b_1, x_{N+1}) … G(a_M, b_M, x_{N+1})]  (1 × M)
T_1 = [t_{(N+1)1} … t_{(N+1)m}] = [t_{N+1}^T]  (1 × m)
Step (2.2): compute the training error e of the parameter-adjusted ELM on the new sample x_1 and judge whether x_1 lies outside the ball O. If x_1 is outside O and e exceeds a set threshold, discard all the above adjustments and go to step (2.3); otherwise, go to step (3);
Step (2.3): add one hidden node, setting its center a to x_1 and determining its width b by

b ≤ -‖x_c - a‖ / ln ε

where ε is a preset threshold and x_c is the point on the class ball O closest to the new sample point x_1, determined by

x_c = x_o + λ_1 (x_a - x_o)

where x_o is the center of ball O and λ_1 is determined by

λ_1 = ‖x_c - x_o‖ / ‖x_a - x_o‖ = R / ‖x_a - x_o‖

with x_a the coordinate of the new sample point x_1. Then readjust the output-layer connection matrix β_0 to β_1 and correspondingly update K_0 and P_0 to K_1 and P_1:

β_1 = P_1 [K_0 β_0 + H_1^T T_1; H_11^T T_1]
K_1 = [K_0 + H_1^T H_1, H_1^T H_11; H_11^T H_1, H_11^T H_11]
P_1 = K_1^{-1} = [A_11, A_12; A_21, A_22]

A_11 = P_1′ + P_1′ (H_1^T H_11) R^{-1} (H_11^T H_1) P_1′
A_12 = -P_1′ (H_1^T H_11) R^{-1},  A_21 = A_12^T,  A_22 = R^{-1}
R = H_11^T H_11 - (H_11^T H_1) P_1′ (H_1^T H_11)
P_1′ = (K_0 + H_1^T H_1)^{-1} = P_0 - P_0 H_1^T (I + H_1 P_0 H_1^T)^{-1} H_1 P_0,  P_0 = K_0^{-1}

where H_01 and H_11 are the hidden-layer output matrices of the newly added hidden node on the former sample set X_0 and on the new sample point x_1, respectively; N_1 and L are the number of sample points in x_1 and the number of newly added hidden nodes. This method only considers new sample points arriving one at a time, so N_1 = L = 1;
Step (3): update the parameters of the class ball O
Update the center and radius of the class ball O so that the new ball O_1 just encloses all sample points of X_0 and x_1. The update formulas are:

R_new = ‖x_a - x_b‖ / 2
x_o_new = (x_a + x_b) / 2

where x_a and x_b are, respectively, the coordinate of the new sample point x_1 and the point on ball O farthest from x_1; x_b is computed by

x_b = x_o + λ_2 (x_o - x_a)

where x_o is the center of ball O and λ_2 is computed by

λ_2 = ‖x_b - x_o‖ / ‖x_o - x_a‖ = R / ‖x_o - x_a‖.
CN2010101194082A 2010-03-08 2010-03-08 Online modeling method based on extreme learning machine with adjustable structure Expired - Fee Related CN101807046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101194082A CN101807046B (en) 2010-03-08 2010-03-08 Online modeling method based on extreme learning machine with adjustable structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101194082A CN101807046B (en) 2010-03-08 2010-03-08 Online modeling method based on extreme learning machine with adjustable structure

Publications (2)

Publication Number Publication Date
CN101807046A true CN101807046A (en) 2010-08-18
CN101807046B CN101807046B (en) 2011-08-17

Family

ID=42608870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101194082A Expired - Fee Related CN101807046B (en) 2010-03-08 2010-03-08 Online modeling method based on extreme learning machine with adjustable structure

Country Status (1)

Country Link
CN (1) CN101807046B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708381A (en) * 2012-05-09 2012-10-03 江南大学 Improved extreme learning machine combining learning thought of least square vector machine
CN103106331A (en) * 2012-12-17 2013-05-15 清华大学 Photo-etching line width intelligence forecasting method based on dimension-reduction and quantity-increment-type extreme learning machine
WO2013182176A1 (en) * 2012-06-06 2013-12-12 Kisters Ag Method for training an artificial neural network, and computer program products
CN104537167A (en) * 2014-12-23 2015-04-22 清华大学 Interval type index forecasting method based on robust interval extreme learning machine
CN108229026A (en) * 2018-01-04 2018-06-29 电子科技大学 A kind of electromagnetic field modeling and simulating method based on dynamic core extreme learning machine
CN111125760A (en) * 2019-12-20 2020-05-08 支付宝(杭州)信息技术有限公司 Model training and predicting method and system for protecting data privacy

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5621648A (en) * 1994-08-02 1997-04-15 Crump; Craig D. Apparatus and method for creating three-dimensional modeling data from an object
CN101504736A (en) * 2009-02-27 2009-08-12 江汉大学 Method for implementing neural network algorithm based on Delphi software
CN101576734A (en) * 2009-06-12 2009-11-11 北京工业大学 Dissolved oxygen control method based on dynamic radial basis function neural network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Li Bin et al., "Intelligent optimization strategy for ELM-RBF neural networks", Journal of Shandong University (Natural Science), vol. 45, no. 5, May 2010, pp. 48-51 *
Ding Jianyong et al., "Online identification of dynamic parameters of synchronous machines based on the Elman neural network", Power System Technology, vol. 26, no. 4, April 2002, pp. 22-25 *
Chang Yuqing et al., "Soft-sensor modeling of biochemical processes based on extreme learning machines", Journal of System Simulation, vol. 19, no. 23, December 2007, pp. 5587-5590 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708381A (en) * 2012-05-09 2012-10-03 江南大学 Improved extreme learning machine combining learning thought of least square vector machine
CN102708381B (en) * 2012-05-09 2014-02-19 江南大学 Improved extreme learning machine combining learning thought of least square vector machine
WO2013182176A1 (en) * 2012-06-06 2013-12-12 Kisters Ag Method for training an artificial neural network, and computer program products
CN103106331A (en) * 2012-12-17 2013-05-15 清华大学 Intelligent lithography line width prediction method based on dimensionality reduction and incremental extreme learning machine
CN103106331B (en) * 2012-12-17 2015-08-05 清华大学 Intelligent lithography line width prediction method based on dimensionality reduction and incremental extreme learning machine
CN104537167A (en) * 2014-12-23 2015-04-22 清华大学 Interval-type index prediction method based on robust interval extreme learning machine
CN104537167B (en) * 2014-12-23 2017-12-15 清华大学 Interval-type index prediction method based on robust interval extreme learning machine
CN108229026A (en) * 2018-01-04 2018-06-29 电子科技大学 Electromagnetic field modeling and simulation method based on dynamic kernel extreme learning machine
CN108229026B (en) * 2018-01-04 2021-07-06 电子科技大学 Electromagnetic field modeling simulation method based on dynamic kernel extreme learning machine
CN111125760A (en) * 2019-12-20 2020-05-08 支付宝(杭州)信息技术有限公司 Model training and prediction method and system for protecting data privacy
CN111125760B (en) * 2019-12-20 2022-02-15 支付宝(杭州)信息技术有限公司 Model training and prediction method and system for protecting data privacy

Also Published As

Publication number Publication date
CN101807046B (en) 2011-08-17

Similar Documents

Publication Publication Date Title
CN101807046B (en) Online modeling method based on extreme learning machine with adjustable structure
CN104537033B (en) Interval-type index prediction method based on Bayesian network and extreme learning machine
CN107561503B (en) Adaptive target tracking filtering method based on multiple fading factors
CN102096373B (en) Microwave drying PID (proportional-integral-derivative) control method based on incrementally improved BP (back-propagation) neural network
CN108682023A (en) Tightly coupled unscented Kalman tracking filter algorithm based on Elman neural networks
CN108304679A (en) Adaptive reliability analysis method
CN104239659A (en) Carbon steel corrosion rate prediction method based on back-propagation (BP) neural network
Maher et al. Signal optimisation using the cross entropy method
CN106446424A (en) Unsteady aerodynamic model parameter prediction method
CN112926152B (en) Digital twin-driven thin-wall part clamping force precise control and optimization method
CN102033991A (en) Microwave drying prediction method based on incrementally improved BP (back-propagation) neural network
CN108022004A (en) Adaptive weight training method for multi-model weighted combination power system load forecasting
CN108563895A (en) Interval model modification method considering correlation
CN109508498A (en) Rubber shock absorber formula design system and method based on BP artificial neural network
CN103106331A (en) Intelligent lithography line width prediction method based on dimensionality reduction and incremental extreme learning machine
CN102393645A (en) Control method of high-speed electro-hydraulic proportional governing system
CN106568647A (en) Neural network-based concrete strength prediction method
CN114880806A (en) New energy automobile sales prediction model parameter optimization method based on particle swarm optimization
Dorosti et al. Finite element model reduction and model updating of structures for control
CN106651090B (en) Normalized man-machine system flight quality prediction method
CN109146055A (en) Modified particle swarm optimization method based on orthogonal experiments and artificial neural network
CN115688588B (en) Sea surface temperature daily variation amplitude prediction method based on improved XGB method
CN107436957A (en) Chaos polynomial construction method
CN106202694A (en) Combination Kriging model building method based on combination forecasting method
CN105921522B (en) Section cooling temperature self-adaptive control method based on RBF neural network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110817

Termination date: 20140308