New Approaches in Automation and Robotics by Harald Aschemann - HTML preview



Table 2. Relations between moments and EB(k,k) parameters

Fig. 2. EB(5,5) sequence and its characteristics

Bilinear Time Series in Signal Analysis


A diagonal EB(k,k) time series {y_i} has a non-zero mean value, equal to M_y^(1). The deviation from the mean, z_i = y_i − M_y^(1), is a non-Gaussian time series. A Gaussian equivalent of z_i is an MA(k) series:

$$ z_i = w_i + c_k\, w_{i-k} \qquad (18) $$

where w_i is a Gaussian white noise series. The values of c_k and m_w^(2) can be calculated from the set of equations (19):

$$ \frac{m_e^{(2)}\left(1 + \beta_{kk}^{2}\, m_e^{(2)} + \beta_{kk}^{4} \left(m_e^{(2)}\right)^{2}\right)}{1 - \beta_{kk}^{2}\, m_e^{(2)}} = m_w^{(2)}\left(1 + c_k^{2}\right) \qquad (19) $$

$$ \beta_{kk}^{2} \left(m_e^{(2)}\right)^{2} = c_k\, m_w^{(2)} $$

3. Identification of EB(k,l) models

Under the assumption that the model EB(k,l) is identifiable and that the model structure is known, the methods of estimating the model parameters are similar to the methods of estimating linear model parameters. The similarity stems from the fact that the bilinear model structure, though nonlinear in e_i and y_i, is linear in the parameter β_kl. A number of estimation methods originate from minimization of a squared prediction error (20). Three of them, frequently applied in the estimation of bilinear model parameters, are discussed in section 3.1.

$$ \varepsilon_i = y_i - \hat{y}_{i|i-1} \qquad (20) $$

Moment methods are an alternative way of parameter estimation: model parameters are calculated on the basis of estimated stochastic moments (Tang & Mohler, 1988). Moment methods are seldom applied, because analytical formulae connecting the moments and the model parameters are hardly ever known. For elementary bilinear time series models the formulae were derived (see Table 1 and Table 2), and therefore the method of moments and the generalized method of moments, discussed in section 3.2, may be applied to estimate elementary bilinear model parameters.

3.1 Methods originating from minimization of the squared prediction error

Methods that originate from minimization of the squared prediction error (20) calculate model parameters by optimization of a criterion J(ε_i²), a function of the squared prediction error. In this section the following methods are discussed:
− minimization of the sum of squares of the prediction error,
− maximum likelihood,
− repeated residuum.

a) Minimization of the sum of squares of prediction error
Minimization of the sum of squares of the prediction error is one of the simplest and most frequently used methods of time series model identification. Unfortunately, the method is sensitive to any anomaly in the data set used for model identification (Dai & Sinha, 1989). Generally, filtration of large deviations from the normal or common course of the time series (removal of outliers) precedes model identification. However, such filtration cannot be applied to bilinear time series, for which sudden and unexpected peaks of data follow from the nature of the bilinear process and should not be removed from the data set used for identification. Therefore, the basic LS algorithm cannot be applied to elementary bilinear model identification and should be replaced by a modified LS algorithm, resistant to anomalies. Dai and Sinha proposed a robust recursive version (RLS) of the LS algorithm, in which the β_kl parameter of the model EB(k,l) is calculated in the following way:

$$ b_{kl,i} = b_{kl,i-1} + k_i \left( y_i - \Phi_i\, b_{kl,i-1} \right) $$

$$ k_i = \frac{P_{i-1}\, \Phi_i}{\alpha_i + \Phi_i^{2} P_{i-1}} \qquad (21) $$

$$ P_i = \frac{1}{\alpha_i} \left( P_{i-1} - \frac{P_{i-1}^{2}\, \Phi_i^{2}}{\alpha_i + \Phi_i^{2} P_{i-1}} \right) $$

where:
b_{kl,i} — estimate of the model parameter β_kl calculated in the i-th iteration,
Φ_i = ŵ_{i−k} y_{i−l} — generalized input,
ŵ_i = y_i − Φ_i b_{kl,i−1} — one-step-ahead prediction error,
α_i — coefficient that depends on the prediction error as follows:

$$ \alpha_i = \begin{cases} \dfrac{\operatorname{sign}(\hat{w}_i)\, y_{tresh}}{\hat{w}_i} & \text{for } |\hat{w}_i| > y_{tresh} \\[2mm] 1 & \text{for } |\hat{w}_i| \le y_{tresh} \end{cases} $$

y_tresh — a threshold value.
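The recursion (21) is straightforward to implement. The sketch below assumes the EB(k,l) convention y_i = w_i + β_kl w_{i−k} y_{i−l} used throughout this section; the function and variable names are illustrative. It simulates a diagonal EB series and runs the recursion, with the residual ŵ_i fed back as the estimate of the unobservable w_i:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_eb(beta, k, l, n):
    """Simulate y_i = w_i + beta * w_{i-k} * y_{i-l} with Gaussian white noise w."""
    w = rng.standard_normal(n)
    y = np.zeros(n)
    for i in range(n):
        y[i] = w[i]
        if i >= k and i >= l:
            y[i] += beta * w[i - k] * y[i - l]
    return y

def robust_rls_eb(y, k, l, y_tresh=5.0, p0=1000.0):
    """Robust RLS recursion (21) for the EB(k,l) parameter beta_kl."""
    b, p = 0.0, p0
    w_hat = np.zeros(len(y))        # estimated innovations, w_hat[i] = 0 for i < max(k, l)
    for i in range(max(k, l), len(y)):
        phi = w_hat[i - k] * y[i - l]          # generalized input Phi_i
        e = y[i] - phi * b                     # one-step-ahead prediction error w_hat_i
        alpha = 1.0 if abs(e) <= y_tresh else np.sign(e) * y_tresh / e
        denom = alpha + phi**2 * p
        b = b + (p * phi / denom) * e          # parameter update
        p = (p - p**2 * phi**2 / denom) / alpha
        w_hat[i] = e                           # residual with b_{kl,i-1}, per definition
    return b

y = simulate_eb(0.3, 1, 1, 10_000)
b_hat = robust_rls_eb(y, 1, 1)
```

With the threshold chosen a few standard deviations wide, α_i = 1 for almost all samples and the recursion behaves like ordinary RLS; only anomalous residuals trigger the weighting.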

b) Maximum likelihood
The maximum likelihood method was first applied to bilinear model identification by Priestley (Priestley, 1980), then Subba (Subba, 1981), and others, e.g. (Brunner & Hess, 1995). In this method, the elementary bilinear model EB(k,l) is represented by the model output y_model, a function of the parameter b_kl and the lagged data:

$$ y_{model,i} = b_{kl}\, w_{i-k}\, y_{i-l} \qquad (22) $$

where w_i is an innovation series, equivalent to the model errors:

$$ w_i = y_i - y_{model,i} \qquad (23) $$

The likelihood is defined as:

$$ L = L\left(b_{kl}, m_w^{(2)}\right) = \prod_{i=1}^{N} f\left(b_{kl}, m_w^{(2)};\, w_i\right) \qquad (24) $$

Maximization of L is equivalent to minimization of -l=-ln(L):

$$ -l\left(b_{kl}, m_w^{(2)}\right) = -\sum_{i=1}^{N} \ln f\left(b_{kl}, m_w^{(2)};\, w_i\right) \qquad (25) $$


Assuming that w_i is a Gaussian series with mean value equal to zero and variance equal to m_w^(2), the negative log-likelihood −ln(L) is:

$$ -\ln(L) = -l\left(w_1, w_2, \ldots, w_N \,\middle|\, b_{kl}, m_w^{(2)}\right) = \frac{N}{2}\ln\left(2\pi m_w^{(2)}\right) + \sum_{i=1}^{N} \frac{w_i^{2}}{2\, m_w^{(2)}} \qquad (26) $$

Having assumed initial values b_{kl,0} and m_{w,0}^(2), the parameters b_kl and m_w^(2) are calculated by minimization of (26). The solution is obtained iteratively, using e.g. the Newton–Raphson method. The essential difficulty lies in the fact that w_i is immeasurable and, in each iteration, has to be calculated as:

$$ w_i = y_i - b_{kl,i-1}\, w_{i-k}\, y_{i-l} \qquad (27) $$

The obtained estimates of the EB(k,l) parameters are asymptotically unbiased if w_i is Gaussian (Kramer & Rosenblatt, 1993). For other distributions, the Gaussian approximation of the probability density function f(y_i − y_{model,i}) causes the estimated parameters to be biased.
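A minimal numerical sketch of this procedure follows, assuming the convention y_i = w_i + β_kl w_{i−k} y_{i−l}. For simplicity, m_w^(2) is concentrated out as the sample variance of the reconstructed innovations, and a coarse grid search stands in for the Newton–Raphson step; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def residuals(y, b, k, l):
    """Reconstruct the unobservable innovations via (27)."""
    w = np.zeros(len(y))
    for i in range(len(y)):
        w[i] = y[i]
        if i >= k and i >= l:
            w[i] -= b * w[i - k] * y[i - l]
    return w

def neg_log_likelihood(b, y, k, l):
    """Gaussian -ln(L) of (26), with m_w^(2) replaced by its ML value mean(w**2)."""
    w = residuals(y, b, k, l)
    m2_w = np.mean(w**2)
    if not np.isfinite(m2_w) or m2_w <= 0.0:
        return 1e12                      # guard against unstable values of b
    n = len(y)
    return 0.5 * n * np.log(2 * np.pi * m2_w) + np.sum(w**2) / (2 * m2_w)

# simulate EB(1,1) data: y_i = w_i + 0.3 * w_{i-1} * y_{i-1}
n = 3000
w = rng.standard_normal(n)
y = np.zeros(n)
y[0] = w[0]
for i in range(1, n):
    y[i] = w[i] + 0.3 * w[i - 1] * y[i - 1]

# minimize the concentrated negative log-likelihood over a grid of b values
grid = np.linspace(-0.9, 0.9, 361)
b_ml = min(grid, key=lambda b: neg_log_likelihood(b, y, 1, 1))
```

A Newton–Raphson or quasi-Newton routine would refine the grid minimum; the grid merely keeps the sketch dependency-free.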

c) Repeated residuum method

Alternative estimation method, named repeated residuum method, is proposed in (Priestley,

1980). Implemented to identification of elementary bilinear models, the method may be

presented as the following sequence of steps:

1. The model EB(k,l) is expressed as:

$$ y_i = \left(1 + b_{kl}\, y_{i-l} D^{k}\right) w_i \qquad (28) $$

where D is the backward shift operator (D^k w_i = w_{i−k}), or equivalently:

$$ w_i = \frac{y_i}{1 + b_{kl}\, y_{i-l} D^{k}} \qquad (29) $$

2. Assuming b_kl small, (29) may be approximated by:

$$ w_i = \left(1 - b_{kl}\, y_{i-l} D^{k}\right) y_i = y_i - b_{kl}\, y_{i-l}\, y_{i-k} \qquad (30) $$

Presuming w_i is an identification error, an initial estimate b_{kl,0} of the parameter b_kl can be evaluated from (30), with the use of e.g. the LS method.

3. Next, starting from b_{kl,0} and w_0 = 0, the succeeding w_i can be calculated iteratively:

$$ w_i = y_i - b_{kl,0}\, w_{i-k}\, y_{i-l} \quad \text{for } i = k,\, k+1,\, \ldots,\, N \qquad (31) $$

4. Having known y_i and w_i for i = k, ..., N, an improved estimate b_kl that minimizes the following sum of squared errors (32) may be calculated:

$$ V(b_{kl}) = \sum_{i=k}^{N} \left( y_i - b_{kl}\, w_{i-k}\, y_{i-l} \right)^{2} \qquad (32) $$

5. Steps 3 and 4 are repeated until the estimate settles at an established value.
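The five steps above can be sketched as follows, again under the convention y_i = w_i + β_kl w_{i−k} y_{i−l} and with illustrative names; a diagonal EB(1,1) series serves as the test case, and the approximation (30) provides the start-up estimate:

```python
import numpy as np

rng = np.random.default_rng(7)

def repeated_residuum(y, k, l, tol=1e-6, max_iter=50):
    """Repeated residuum estimate of beta_kl for an EB(k,l) model."""
    n = len(y)
    i0 = max(k, l)
    # step 2: start-up LS estimate from approximation (30): y_i ~ b * y_{i-l} * y_{i-k}
    x = y[i0 - k : n - k] * y[i0 - l : n - l]
    b = float(x @ y[i0:] / (x @ x))
    for _ in range(max_iter):
        # step 3: rebuild the innovations with the current estimate, as in (31)
        w = np.zeros(n)
        for i in range(n):
            w[i] = y[i]
            if i >= i0:
                w[i] -= b * w[i - k] * y[i - l]
        # step 4: improved LS estimate minimizing V(b_kl) of (32)
        phi = w[i0 - k : n - k] * y[i0 - l : n - l]
        b_new = float(phi @ y[i0:] / (phi @ phi))
        if abs(b_new - b) < tol:      # step 5: stop once the estimate settles
            return b_new
        b = b_new
    return b

# simulate EB(1,1): y_i = w_i + 0.3 * w_{i-1} * y_{i-1}
n = 20_000
w = rng.standard_normal(n)
y = np.zeros(n)
y[0] = w[0]
for i in range(1, n):
    y[i] = w[i] + 0.3 * w[i - 1] * y[i - 1]

b_rr = repeated_residuum(y, 1, 1)
```

Each pass replaces the crude residuals of step 2 with residuals implied by the latest estimate, so the iteration typically settles within a handful of passes for small β_kl.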


3.2 Moments method

For the group of methods that originate from minimization of the squared prediction error, precise forms of the estimation algorithms can be formulated. On the contrary, for the moments method only a general idea may be characterized; the details depend on the model type and the model structure. The moments method (MM) consists of two stages:

Stage 1: Under the assumption that the model structure is the same as the process structure, the moments and central moments M_y^(r) are presented as functions of the process parameters Θ:

$$ M_y^{(r)} = f(\Theta) \qquad (33) $$

If possible, the moments are chosen such that the set of equations (33) has a unique solution.

Stage 2: In (33), the moments M_y^(r) are replaced with their evaluations M̂_y^(r), estimated on the basis of the available data set y_i:

$$ \hat{M}_y^{(r)} = f(\Theta) \qquad (34) $$

The set of equations (34) is then solved for the parameters Θ. Taking into consideration the particular relations between moments and parameters for elementary bilinear models, the MM estimation algorithm can be proposed in a simple and a generalized version.
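Stage 2 requires sample evaluations of the moments. A minimal sketch of such estimators is given below, assuming the usual definitions M̂_y^(2)(l) as the sample mean of z_i z_{i−l} and M̂_y^(3)(l_1, l_2) as the sample mean of z_i z_{i−l_1} z_{i−l_2}, with z_i the deviation from the sample mean:

```python
import numpy as np

def central_moment2(y, lag):
    """Sample central moment M^(2)_y(l) = E[z_i * z_{i-l}], z_i = y_i - mean(y)."""
    z = y - y.mean()
    if lag == 0:
        return float(np.mean(z * z))
    return float(np.mean(z[lag:] * z[:-lag]))

def central_moment3(y, l1, l2):
    """Sample central moment M^(3)_y(l1, l2) = E[z_i * z_{i-l1} * z_{i-l2}]."""
    z = y - y.mean()
    m = max(l1, l2)
    zi = z[m:]                         # z_i
    z1 = z[m - l1 : len(z) - l1]       # z_{i-l1}
    z2 = z[m - l2 : len(z) - l2]       # z_{i-l2}
    return float(np.mean(zi * z1 * z2))
```

For a Gaussian white series all third moments are near zero, while for an elementary bilinear series M̂_y^(3)(l_1, l_2) peaks at the model lags; this is what the structure-identification step below exploits.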

MM – simple version

It is assumed that w is a stochastic series, symmetrical distributed around zero, and that

i

the even moments (2 r)

m

satisfy the following relations:

w

(2 r )

(2)

m

= k ( m ) r

r =

(35)

w

r

w

for

1,2,3...

2

Identification of EB(k,l) consists of identification of the model structure (k,l) and estimation of the parameters b_kl and m_w^(2). The identification algorithm is presented below as a sequence of steps:

1. Data analysis:
a. On the basis of the data set {y_i}, i = 1, ..., N, estimate the following moments:

$$ \hat{M}_y^{(1)}; \quad \hat{M}_y^{(2)}(l) \ \text{for } l = 0, 1, 2, \ldots; \quad \hat{M}_y^{(3)}(l_1, l_2) \ \text{for } l_1, l_2 = 0, 1, 2, \ldots; \quad \hat{M}_y^{(4)}(0,0,0) $$

b. Find the values of l_1 ≠ 0 and l_2 ≠ 0 (l_1 ≤ l_2) for which the absolute value of the third moment M̂_y^(3)(l_1, l_2) is maximal.

2. Structure identification:
a. If l_1 = k, l_2 = l, then the subdiagonal model EB(k,l) should be chosen.
b. If l_1 = k, l_2 =