Robust Adaptive Control by Petros A. Ioannou, Jing Sun

the boundedness of x, u. We consider the estimation error equation

ε̇₁ = −aₘε₁ + ãx    (4.3.10)

where ã = â − a and propose the same function

V = ε₁²/2 + ã²/(2γ)    (4.3.11)

CHAPTER 4. ON-LINE PARAMETER ESTIMATION

as in Section 4.2.2. The time derivative of V along the solutions of (4.3.9) and (4.3.10) is given by

V̇ = −aₘε₁² ≤ 0    (4.3.12)

Because x is not necessarily bounded, it cannot be treated as an independent bounded function of time in (4.3.10) and, therefore, (4.3.10) cannot be decoupled from (4.3.8). Consequently, (4.3.8) to (4.3.10) have to be considered and analyzed together in R³, the space of ε₁, ã, x̂. The chosen function V in (4.3.11) is only positive semidefinite in R³, which implies that V is not a Lyapunov function; therefore, Theorems 3.4.1 to 3.4.4 cannot be applied. V is, therefore, a Lyapunov-like function, and the properties of V, V̇ allow us to draw some conclusions about the behavior of the solution ε₁(t), ã(t) without having to apply the Lyapunov Theorems 3.4.1 to 3.4.4. From V ≥ 0 and V̇ = −aₘε₁² ≤ 0 we conclude that V ∈ L∞, which implies that ε₁, ã ∈ L∞, and ε₁ ∈ L₂. Without assuming x ∈ L∞, however, we cannot establish any bound for ȧ̃ in an Lp sense.

As in Section 4.3.1, let us attempt to use normalization and modify (4.3.9) to achieve bounded speed of adaptation in some sense. The use of normalization is not straightforward in this case because of the dynamics introduced by the transfer function 1/(s + aₘ); i.e., dividing each side of (4.3.7) by m may not help because division by the time-varying signal m does not commute with the filter:

x/m = (1/m)[1/(s + aₘ)]((a + aₘ)x + u) ≠ [1/(s + aₘ)]((a + aₘ)x/m + u/m)

For this case, we propose the error signal

ε = x − x̂ − [1/(s + aₘ)](nₛ²ε) = [1/(s + aₘ)](ãx − nₛ²ε)    (4.3.13)

i.e.,

ε̇ = −aₘε + ãx − nₛ²ε

where nₛ is a normalizing signal to be designed.

Let us now use the error equation (4.3.13) to develop an adaptive law for â. We consider the Lyapunov-like function

V = ε²/2 + ã²/(2γ)    (4.3.14)

4.3. ADAPTIVE LAWS WITH NORMALIZATION

whose time derivative along the solution of (4.3.13) is given by

V̇ = −aₘε² − ε²nₛ² + ãεx + ãȧ̃/γ

Choosing

ȧ̃ = ȧ̂ = −γεx    (4.3.15)

we have

V̇ = −aₘε² − ε²nₛ² ≤ 0

which together with (4.3.14) imply V, ε, ã ∈ L∞ and ε, εnₛ ∈ L₂. If we now write (4.3.15) as

ȧ̃ = −γ(εm)(x/m)

where m² = 1 + nₛ², and choose nₛ so that x/m ∈ L∞, then εm ∈ L₂ (because ε, εnₛ ∈ L₂) implies that ȧ̃ ∈ L₂. A straightforward choice for nₛ is nₛ = x, i.e., m² = 1 + x².

The effect of nₛ can be roughly seen by rewriting (4.3.13) as

ε̇ = −aₘε − nₛ²ε + ãx    (4.3.16)

and solving for the “quasi” steady-state response

ε_s = ãx/(aₘ + nₛ²)    (4.3.17)

obtained by setting ε̇ = 0 in (4.3.16) and solving for ε. Obviously, for nₛ² = x², a large ε_s implies a large ã independent of the boundedness of x, which, in turn, implies that a large ε_s carries information about the parameter error ã even when x ∉ L∞. This indicates that nₛ may be used to normalize the effect of the possibly unbounded signal x and is, therefore, referred to as the normalizing signal. Because of the similarity of ε_s with the normalized estimation error defined in (4.3.5), we refer to ε in (4.3.13), (4.3.16) as the normalized estimation error too.
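The claim that a large ε_s forces a large ã regardless of how large x is follows from aₘ + x² ≥ 2√aₘ|x|, which gives |ε_s| ≤ |ã|/(2√aₘ) when nₛ = x. A quick numerical check of (4.3.17) with nₛ² = x² (a sketch; the values of ã and aₘ are arbitrary illustrative choices):

```python
import numpy as np

# Quasi-steady-state response (4.3.17) with n_s^2 = x^2:
#   eps_s = a_tilde * x / (a_m + x^2)
a_m, a_tilde = 2.0, 3.0             # arbitrary illustrative values
x = np.linspace(-1e4, 1e4, 200001)  # include very large |x|

eps_s = a_tilde * x / (a_m + x**2)

# |eps_s| stays below |a_tilde| / (2*sqrt(a_m)) no matter how large |x| is,
# so a large eps_s can only come from a large parameter error a_tilde.
bound = abs(a_tilde) / (2.0 * np.sqrt(a_m))
print(np.max(np.abs(eps_s)) <= bound + 1e-12)   # True
```

The bound is independent of x, which is exactly why ε_s remains informative about ã even when x grows without bound.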

Remark 4.3.1 The normalizing term nₛ²ε in (4.3.16) is similar to the nonlinear “damping” term used in the control of nonlinear systems [99]. It makes V̇ more negative by introducing the negative term −ε²nₛ² in the expression for V̇ and helps establish that εnₛ ∈ L₂. Because ȧ̂ is


[Figure 4.4 Effect of normalization on the convergence and performance of the adaptive law (4.3.15): parameter error ã versus time (0 to 40 sec) for α = 0.5, α = 2, and α = 10.]

bounded from above by γ|ε||x| ≤ γ|ε|√(nₛ² + 1) = γ|ε|m and εm ∈ L₂, we can conclude that ȧ̂ ∈ L₂, which is a desired property of the adaptive law. Note, however, that ȧ̂ ∈ L₂ does not imply that ȧ̂ ∈ L∞. In contrast to the example in Section 4.3.1, we have not been able to establish that ȧ̂ ∈ L∞. As we will show in Chapters 6 and 7, the L₂ property of the derivative of the estimated parameters is sufficient to establish stability in the adaptive control case.

Simulations

Let us simulate the effect of normalization on the convergence and perfor-

mance of the adaptive law (4.3.15) when a = 0 is unknown, u = sin t, and

aₘ = 2. We use nₛ² = αx² and consider different values of α ≥ 0. The

simulation results are shown in Figure 4.4. It is clear that large values of α

lead to a large normalizing signal that slows down the speed of convergence.
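This experiment can be roughly reproduced by integrating the error equation (4.3.16) and the adaptive law (4.3.15) by forward Euler. To generate x we assume the first-order plant ẋ = ax + u of this example; the adaptive gain γ = 1, the initial estimate â(0) = 4, and the step size are assumptions not specified in the text, so the curves will differ quantitatively from Figure 4.4:

```python
import numpy as np

def simulate(alpha, a=0.0, am=2.0, gamma=1.0, a_hat0=4.0, T=40.0, dt=1e-3):
    """Euler simulation of (4.3.15)/(4.3.16) with n_s^2 = alpha * x^2."""
    x, eps, a_hat = 0.0, 0.0, a_hat0
    for k in range(int(T / dt)):
        u = np.sin(k * dt)
        ns2 = alpha * x * x
        a_tilde = a_hat - a
        deps = -am * eps - ns2 * eps + a_tilde * x   # error equation (4.3.16)
        da_hat = -gamma * eps * x                    # adaptive law (4.3.15)
        dx = a * x + u                               # assumed plant: dx/dt = a x + u
        eps += dt * deps
        a_hat += dt * da_hat
        x += dt * dx
    return abs(a_hat - a)                            # final parameter error |a_tilde|

errs = {alpha: simulate(alpha) for alpha in (0.5, 2.0, 10.0)}
# Larger alpha -> larger normalizing signal -> slower convergence.
print(errs)
```

With these assumed gains the final parameter error for α = 10 comes out noticeably larger than for α = 0.5, consistent with the ordering of the curves in Figure 4.4.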


4.3.3 General Plant

Let us now consider the SISO plant

ẋ = Ax + Bu,  x(0) = x₀    (4.3.18)
y = Cᵀx

where x ∈ Rⁿ and only y, u are available for measurement. Equation (4.3.18) may also be written as

y = Cᵀ(sI − A)⁻¹Bu + Cᵀ(sI − A)⁻¹x₀

or as

y = [Z(s)/R(s)]u + [Cᵀadj(sI − A)/R(s)]x₀    (4.3.19)

where Z(s), R(s) are in the form

Z(s) = bₙ₋₁sⁿ⁻¹ + bₙ₋₂sⁿ⁻² + · · · + b₁s + b₀
R(s) = sⁿ + aₙ₋₁sⁿ⁻¹ + · · · + a₁s + a₀

The constants aᵢ, bᵢ for i = 0, 1, . . . , n − 1 are the plant parameters. A

convenient parameterization of the plant that allows us to extend the results

of the previous sections to this general case is the one where the unknown

parameters are separated from signals and expressed in the form of a linear

equation. Several such parameterizations have already been explored and

presented in Chapter 2. We summarize them here and refer to Chapter 2

for the details of their derivation.

Let

θ∗ = [bₙ₋₁, bₙ₋₂, . . . , b₁, b₀, aₙ₋₁, aₙ₋₂, . . . , a₁, a₀]ᵀ

be the vector with the unknown plant parameters. The vector θ∗ is of dimension 2n. If some of the coefficients of Z(s) are zero and known, i.e., Z(s) is of degree m < n − 1 where m is known, the dimension of θ∗ may be reduced. Following the results of Chapter 2, the plant (4.3.19) may take any one of the following parameterizations:

z = θ∗ᵀφ + η₀    (4.3.20)

y = θ∗λᵀφ + η₀    (4.3.21)


y = W(s)θ∗λᵀψ + η₀    (4.3.22)

where

z = W₁(s)y,  φ = H(s)[u, y]ᵀ,  ψ = H₁(s)[u, y]ᵀ,  η₀ = c₀ᵀe^(Λc t)B₀x₀,  θ∗λ = θ∗ − bλ

W₁(s), H(s), H₁(s) are some known proper transfer function matrices with stable poles, bλ = [0, λᵀ]ᵀ is a known vector, and Λc is a stable matrix, which makes η₀ an exponentially decaying (to zero) term due to the nonzero initial conditions. The transfer function W(s) is a known strictly proper transfer function with relative degree 1, stable poles, and stable zeros.
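To make the parameterization concrete, here is a small numerical check, using an arbitrary second-order example (the specific values of aᵢ, bᵢ are illustrative assumptions), that the transfer function Cᵀ(sI − A)⁻¹B of a state-space realization equals Z(s)/R(s) with the coefficients collected in θ∗:

```python
import numpy as np

# Second-order plant in controllable canonical form:
#   R(s) = s^2 + a1*s + a0,  Z(s) = b1*s + b0
a0, a1, b0, b1 = 2.0, 3.0, 1.0, 0.5       # arbitrary illustrative values
A = np.array([[0.0, 1.0], [-a0, -a1]])
B = np.array([[0.0], [1.0]])
C = np.array([[b0], [b1]])

theta_star = np.array([b1, b0, a1, a0])   # unknown-parameter vector (n = 2)

# Evaluate C^T (sI - A)^{-1} B and Z(s)/R(s) at a test frequency s.
s = 2.0 + 1.5j
tf_ss = (C.T @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]
tf_poly = (b1 * s + b0) / (s**2 + a1 * s + a0)
print(abs(tf_ss - tf_poly) < 1e-12)       # True
```

The same θ∗ would be recovered from any minimal realization of the same transfer function, since Z(s) and R(s) are invariants of the input-output behavior.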

Instead of dealing with each parametric model separately, we consider the general model

z = W(s)θ∗ᵀψ + η₀    (4.3.23)

where W(s) is a proper transfer function with stable poles, z ∈ R¹, ψ ∈ R²ⁿ are signal vectors available for measurement, and η₀ = c₀ᵀe^(Λc t)B₀x₀. Initially we will assume that η₀ = 0, i.e.,

z = W(s)θ∗ᵀψ    (4.3.24)

and use (4.3.24) to develop adaptive laws for estimating θ∗ on-line. The effect of η₀ and, therefore, of the initial conditions will be treated in Section 4.3.7.

Because θ∗ is a constant vector, going from form (4.3.23) to form (4.3.20) is trivial, i.e., rewrite (4.3.23) as z = θ∗ᵀW(s)ψ + η₀ and define φ = W(s)ψ. As illustrated in Chapter 2, the parametric model (4.3.24) may also be a parameterization of plants other than the LTI one given by (4.3.18). What is crucial about (4.3.24) is that the unknown vector θ∗ appears linearly in an equation where all other signals and parameters are known exactly. For this reason we will refer to (4.3.24) as the linear parametric model. In the literature, (4.3.24) has also been referred to as the linear regression model.
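The practical value of the linear-in-the-parameters form is that standard regression machinery applies. A minimal sketch with synthetic data, taking W(s) = 1 so that z = θ∗ᵀφ (the dimensions and the noise-free setting are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                                    # plant order -> theta* has 2n entries
theta_star = rng.standard_normal(2 * n)  # "unknown" parameter vector
Phi = rng.standard_normal((200, 2 * n))  # rows: regressor phi(t_k)^T at samples
z = Phi @ theta_star                     # z(t_k) = theta*^T phi(t_k), noise-free

# Because theta* enters linearly, it is recovered by least squares:
theta_hat, *_ = np.linalg.lstsq(Phi, z, rcond=None)
print(np.allclose(theta_hat, theta_star))   # True
```

The on-line adaptive laws developed next can be viewed as recursive counterparts of this batch least-squares step, driven by the estimation error instead of the full data record.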

In the following section we use different techniques to develop adaptive

laws for estimating θ∗ on-line by assuming that W ( s) is a known, proper

transfer function with stable poles, and z, ψ are available for measurement.


4.3.4 SPR-Lyapunov Design Approach

This approach dominated the literature of continuous-time adaptive schemes [48, 149, 150, 153, 172, 178, 187]. It involves the development of a differential equation that relates the estimation or normalized estimation error with the parameter error through an SPR transfer function. Once in this form, the KYP or the MKY Lemma is used to choose an appropriate Lyapunov function V whose time derivative V̇ is made nonpositive, i.e., V̇ ≤ 0, by properly choosing the differential equation of the adaptive law.

The development of such an error SPR equation had been a challeng-

ing problem in the early days of adaptive control [48, 150, 153, 178]. The

efforts in those days were concentrated on finding the appropriate transfor-

mation or generating the appropriate signals that allow the expression of the

estimation/parameter error equation in the desired form.

In this section we use the SPR-Lyapunov design approach to design adap-

tive laws for estimating θ∗ in the parametric model (4.3.24). The connection

of the parametric model (4.3.24) with the adaptive control problem is dis-

cussed in later chapters. By treating parameter estimation independently

of the control design, we manage to separate the complexity of the estima-

tion part from that of the control part. We believe this approach simplifies

the design and analysis of adaptive control schemes, to be discussed in later

chapters, and helps clarify some of the earlier approaches that appear tricky

and complicated to the nonspecialist.

Let us start with the linear parametric model

z = W(s)θ∗ᵀψ    (4.3.25)

Because θ∗ is a constant vector, we can rewrite (4.3.25) in the form

z = W(s)L(s)θ∗ᵀφ    (4.3.26)

where

φ = L⁻¹(s)ψ

and L(s) is chosen so that L⁻¹(s) is a proper stable transfer function and W(s)L(s) is a proper SPR transfer function.

Remark 4.3.2 For some W(s) it is possible that no L(s) exists such that W(s)L(s) is proper and SPR. In such cases, (4.3.25) could be properly manipulated and put in the form of (4.3.26). For example, when W(s) = (s − 1)/(s + 2), no L(s) can be found to make W(s)L(s) SPR. In this case, we write (4.3.25) as

z̄ = [(s + 1)/((s + 2)(s + 3))]θ∗ᵀφ

where φ = [(s − 1)/(s + 1)]ψ and z̄ = [1/(s + 3)]z. The new W(s) in this case is W(s) = (s + 1)/((s + 2)(s + 3)), and a wide class of L(s) can be found so that WL is SPR.
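The new W(s) = (s + 1)/((s + 2)(s + 3)) is in fact already SPR (so even L(s) = 1 works): it is stable, and one can verify that Re[W(jω)] = (6 + 4ω²)/|(jω + 2)(jω + 3)|² > 0 for all ω. A quick sketch of that frequency-domain test (the frequency grid is an arbitrary choice):

```python
import numpy as np

# Frequency-domain SPR test for W(s) = (s + 1) / ((s + 2)(s + 3)):
# stable poles, and Re[W(j*omega)] > 0 for every omega.
omega = np.logspace(-3, 6, 20000)
s = 1j * omega
W = (s + 1) / ((s + 2) * (s + 3))

re = W.real
print(np.all(re > 0))                       # True

# For relative degree 1, strictness also requires
# lim_{omega -> inf} omega^2 * Re[W(j*omega)] > 0:
print(omega[-1]**2 * re[-1] > 0)            # True
```

By contrast, the original W(s) = (s − 1)/(s + 2) fails this test at low frequencies because of its right-half-plane zero, and no stable L(s) can remove that zero.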

The significance of the SPR property of W ( s) L( s) is explained as we

proceed with the design of the adaptive law.

Let θ(t) be the estimate of θ∗ at time t. Then the estimate ẑ of z at time t is constructed as

ẑ = W(s)L(s)θᵀφ    (4.3.27)

As with the examples in the previous section, the estimation error ε₁ is generated as

ε₁ = z − ẑ

and the normalized estimation error as

ε = z − ẑ − W(s)L(s)εnₛ² = ε₁ − W(s)L(s)εnₛ²    (4.3.28)

where nₛ is the normalizing signal which we design to satisfy

φ/m ∈ L∞,  m² = 1 + nₛ²    (A1)

Typical choices for nₛ that satisfy (A1) are nₛ² = φᵀφ, nₛ² = φᵀPφ for any P = Pᵀ > 0, etc. When φ ∈ L∞, (A1) is satisfied with m = 1, i.e., nₛ = 0, in which case ε = ε₁.

We examine the properties of ε by expressing (4.3.28) in terms of the parameter error θ̃ = θ − θ∗, i.e., substituting for z, ẑ in (4.3.28) we obtain

ε = W(s)L(s)(−θ̃ᵀφ − εnₛ²)    (4.3.29)

For simplicity, let us assume that L(s) is chosen so that WL is strictly proper, and consider the following state space representation of (4.3.29):

ė = Ac e + Bc(−θ̃ᵀφ − εnₛ²)    (4.3.30)
ε = Ccᵀ e

where Ac, Bc, and Cc are the matrices associated with a state space representation that has a transfer function W(s)L(s) = Ccᵀ(sI − Ac)⁻¹Bc.
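As a concrete instance of such a realization, take W(s)L(s) = (s + 1)/((s + 2)(s + 3)) (an assumption here, borrowed from the example in Remark 4.3.2 only to have a specific strictly proper transfer function). A controllable canonical triple (Ac, Bc, Cc) then reproduces Ccᵀ(sI − Ac)⁻¹Bc = W(s)L(s):

```python
import numpy as np

# Controllable canonical realization of WL(s) = (s + 1)/(s^2 + 5s + 6)
Ac = np.array([[0.0, 1.0], [-6.0, -5.0]])
Bc = np.array([[0.0], [1.0]])
Cc = np.array([[1.0], [1.0]])             # numerator coefficients: 1*s + 1

def wl_state_space(s):
    """Evaluate Cc^T (sI - Ac)^{-1} Bc at a complex frequency s."""
    return (Cc.T @ np.linalg.solve(s * np.eye(2) - Ac, Bc))[0, 0]

def wl_rational(s):
    return (s + 1) / ((s + 2) * (s + 3))

for s in (1j, 2.0 + 1j, -0.5 + 3j):
    assert abs(wl_state_space(s) - wl_rational(s)) < 1e-12
print("realization matches WL(s)")
```

Any minimal realization of W(s)L(s) serves equally well in (4.3.30); the SPR property of W(s)L(s) is what the Lyapunov construction below exploits, not the particular coordinates.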


The error equation (4.3.30) relates ε with the parameter error θ̃ and is used to construct an appropriate Lyapunov-type function for designing the adaptive law of θ. Before we proceed with such a design, let us examine (4.3.30) more closely by introducing the following remark.