Robust Adaptive Control by Petros A. Ioannou, Jing Sun

and then using (5.6.17), H(s) can be conveniently expressed as

$$
H(s)=\frac{1}{a(s)}
\begin{bmatrix} L & 0 \\ 0_{1\times n} & 1 \end{bmatrix}
\begin{bmatrix} s^{n-1}\\ s^{n-2}\\ \vdots\\ s\\ 1\\ a(s) \end{bmatrix}
\qquad (5.6.18)
$$

To explore the linear dependency of H(jω_i), we define H̄ = [H(jω₁), H(jω₂), ..., H(jω_{n+1})]. Using the expression (5.6.18) for H(s), we have

$$
\bar H=
\begin{bmatrix} L & 0 \\ 0_{1\times n} & 1 \end{bmatrix}
\begin{bmatrix}
(j\omega_1)^{n-1} & (j\omega_2)^{n-1} & \cdots & (j\omega_{n+1})^{n-1}\\
(j\omega_1)^{n-2} & (j\omega_2)^{n-2} & \cdots & (j\omega_{n+1})^{n-2}\\
\vdots & \vdots & & \vdots\\
j\omega_1 & j\omega_2 & \cdots & j\omega_{n+1}\\
1 & 1 & \cdots & 1\\
a(j\omega_1) & a(j\omega_2) & \cdots & a(j\omega_{n+1})
\end{bmatrix}
\begin{bmatrix}
\frac{1}{a(j\omega_1)} & 0 & \cdots & 0\\
0 & \frac{1}{a(j\omega_2)} & \cdots & 0\\
\vdots & & \ddots & \vdots\\
0 & 0 & \cdots & \frac{1}{a(j\omega_{n+1})}
\end{bmatrix}
$$

From the assumption that (A, B) is controllable, we conclude that the matrix L is of full rank. Thus the matrix H̄ has rank n + 1 if and only if the matrix V_1 defined as

$$
V_1=
\begin{bmatrix}
(j\omega_1)^{n-1} & (j\omega_2)^{n-1} & \cdots & (j\omega_{n+1})^{n-1}\\
(j\omega_1)^{n-2} & (j\omega_2)^{n-2} & \cdots & (j\omega_{n+1})^{n-2}\\
\vdots & \vdots & & \vdots\\
j\omega_1 & j\omega_2 & \cdots & j\omega_{n+1}\\
1 & 1 & \cdots & 1\\
a(j\omega_1) & a(j\omega_2) & \cdots & a(j\omega_{n+1})
\end{bmatrix}
$$

has rank n + 1. Using linear transformations (row operations), we can show that V_1 is equivalent to the following Vandermonde matrix [62]:

$$
V=
\begin{bmatrix}
(j\omega_1)^{n} & (j\omega_2)^{n} & \cdots & (j\omega_{n+1})^{n}\\
(j\omega_1)^{n-1} & (j\omega_2)^{n-1} & \cdots & (j\omega_{n+1})^{n-1}\\
(j\omega_1)^{n-2} & (j\omega_2)^{n-2} & \cdots & (j\omega_{n+1})^{n-2}\\
\vdots & \vdots & & \vdots\\
j\omega_1 & j\omega_2 & \cdots & j\omega_{n+1}\\
1 & 1 & \cdots & 1
\end{bmatrix}
$$

Because

$$
\det(V)=\prod_{1\le i<k\le n+1}(j\omega_i-j\omega_k)
$$

V is of full rank for any ω_i with ω_i ≠ ω_k, i, k = 1, ..., n + 1. This leads to the conclusion that V_1 and, therefore, H̄ have rank n + 1, which implies that H(jω₁), H(jω₂), ..., H(jω_{n+1}) are linearly independent for any distinct ω₁, ω₂, ..., ω_{n+1}. It then follows immediately from Theorem 5.2.1 that z is PE.
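The determinant identity above is easy to confirm numerically. The sketch below (plain NumPy, with an arbitrarily chosen set of distinct frequencies, not from the text) builds the Vandermonde matrix at the points jω₁, ..., jω_{n+1}, compares |det V| against the product of pairwise differences, and checks that the rank is n + 1.

```python
import numpy as np
from itertools import combinations

# Hypothetical example: n = 3, so we need n + 1 = 4 distinct frequencies.
omegas = np.array([0.5, 1.0, 2.0, 3.5])
points = 1j * omegas                      # the points jω_1, ..., jω_{n+1}

# Vandermonde matrix with columns indexed by the points and rows by
# descending powers (jω_i)^n, ..., jω_i, 1 -- the matrix V in the text.
V = np.vander(points, increasing=False).T

# det(V) = ±∏_{i<k} (jω_i − jω_k); compare magnitudes to sidestep the
# sign/ordering convention.
prod = np.prod([points[i] - points[k]
                for i, k in combinations(range(len(points)), 2)])
assert np.isclose(abs(np.linalg.det(V)), abs(prod))

# Distinct frequencies ⇒ full rank ⇒ the vectors H(jω_i) that V multiplies
# remain linearly independent.
assert np.linalg.matrix_rank(V) == len(points)
print("rank of V:", np.linalg.matrix_rank(V))
```

Repeating a frequency (e.g., replacing 3.5 by 2.0) collapses two columns and drops the rank, which is exactly why the distinctness of the ω_i matters in the argument above.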

Because we have shown that all the conditions of Lemma 5.6.3 are satisfied by (5.6.14), we can conclude that the equilibrium x_e = 0 of (5.6.14) is e.s. in the large, i.e., ε₁, θ̃ → 0 exponentially fast as t → ∞. Thus, Â → A, B̂ → B exponentially fast as t → ∞, and the proof of Theorem 5.2.2 for the series-parallel scheme is complete.

For the parallel scheme, we only need to establish that ẑ = [x̂^⊤, u]^⊤ is PE. The rest of the proof follows by using exactly the same arguments and procedure as in the case of the series-parallel scheme.

Because x is the state of the plant and is independent of the identification scheme, it follows from the previous analysis that z is PE under the conditions given in Theorem 5.2.2, i.e., (A, B) controllable and u sufficiently rich of order n + 1. From the definition of ε₁, we have

$$
\hat x = x-\epsilon_1,\qquad
\hat z = z-\begin{bmatrix}\epsilon_1\\ 0\end{bmatrix}
$$

thus the PE property of ẑ follows immediately from Lemma 4.8.3 by using ε₁ ∈ L₂ and z being PE.
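The robustness of PE invoked here can be illustrated numerically. In the sketch below (a toy example of my own choosing, not from the text), z(t) = [sin t, cos t]^⊤ is PE, and subtracting an exponentially decaying, hence L₂, term leaves the windowed Gram matrix ∫_t^{t+T} ẑẑ^⊤ dτ uniformly positive definite for large t, in the spirit of Lemma 4.8.3.

```python
import numpy as np

def gram(sig, t0, T, dt):
    """Approximate the Gram matrix ∫_{t0}^{t0+T} sig(τ) sig(τ)^T dτ."""
    taus = np.arange(t0, t0 + T, dt)
    Z = np.stack([sig(t) for t in taus])          # shape (N, 2)
    return dt * Z.T @ Z

z    = lambda t: np.array([np.sin(t), np.cos(t)])            # a PE signal
zhat = lambda t: z(t) - np.exp(-t) * np.array([1.0, 1.0])    # minus an L2 term

t0, T, dt = 10.0, 2 * np.pi, 1e-3
lam_z    = np.linalg.eigvalsh(gram(z, t0, T, dt)).min()
lam_zhat = np.linalg.eigvalsh(gram(zhat, t0, T, dt)).min()

# Over one period, ∫ z z^T dτ = π I, so the PE level is about π; the
# decayed perturbation barely moves it.
assert lam_z > 3.0 and lam_zhat > 3.0
print(lam_z, lam_zhat)
```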

5.6.4  Proof of Theorem 5.2.3

We consider the proof for the series-parallel scheme. The proof for the parallel scheme follows by using the same arguments used in Section 5.6.3 in the proof of Theorem 5.2.2 for the parallel scheme.

Following the same procedure used in proving Theorem 5.2.2, we can write the differential equations

$$
\dot\epsilon_1 = A_m\epsilon_1 + (A-\hat A)x + (B-\hat B)u,\qquad
\dot{\hat A} = \gamma\,\epsilon_1 x^{\top},\qquad
\dot{\hat B} = \gamma\,\epsilon_1 u^{\top}
$$

in the vector form

$$
\dot\epsilon_1 = A_m\epsilon_1 - F(t)\tilde\theta,\qquad
\dot{\tilde\theta} = \gamma F^{\top}(t)\epsilon_1
\qquad (5.6.19)
$$

where θ̃ = [ã₁^⊤, ã₂^⊤, ..., ã_n^⊤, b̃₁^⊤, b̃₂^⊤, ..., b̃_q^⊤]^⊤, ã_i, b̃_i denote the i-th columns of Ã, B̃, respectively, and F(t) = [x₁I_n, x₂I_n, ..., x_nI_n, u₁I_n, u₂I_n, ..., u_qI_n]. Following exactly the same arguments used in the proof of Theorem 5.2.2, we complete the proof by showing that (i) there exists a matrix P₀ > 0 such that A₀^⊤P₀ + P₀A₀ = −C₀C₀^⊤, where A₀, C₀ are defined the same way as in Section 5.6.3, and (ii) z = [x₁, x₂, ..., x_n, u₁, u₂, ..., u_q]^⊤ is PE.
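Before establishing (i) and (ii), it may help to see the series-parallel update in action. The following sketch is my own toy example, not from the text: a second-order stable plant, an input with two sinusoids (four spectral lines, so sufficiently rich of order n + 1 = 3), and an arbitrarily chosen adaptive gain γ. It integrates the ε₁, Â, B̂ equations with forward Euler and shows the parameter error shrinking.

```python
import numpy as np

# Toy plant (assumed for illustration): n = 2 states, q = 1 input.
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
B  = np.array([[0.0], [1.0]])
Am = -3.0 * np.eye(2)                 # stable design matrix of the observer
gamma = 5.0                           # adaptive gain (arbitrary choice)

u_of_t = lambda t: 2.0 * (np.sin(t) + np.sin(2.0 * t))  # sufficiently rich

dt, T = 1e-3, 150.0
x  = np.zeros((2, 1)); xh = np.zeros((2, 1))
Ah = np.zeros((2, 2)); Bh = np.zeros((2, 1))

err0 = np.linalg.norm(A - Ah) + np.linalg.norm(B - Bh)
for k in range(int(T / dt)):
    t  = k * dt
    u  = np.array([[u_of_t(t)]])
    e1 = x - xh                       # ε1 = x − x̂
    # series-parallel observer: x̂̇ = Am(x̂ − x) + Â x + B̂ u
    xh_new = xh + dt * (Am @ (xh - x) + Ah @ x + Bh @ u)
    # gradient updates: Â̇ = γ ε1 xᵀ,  B̂̇ = γ ε1 uᵀ
    Ah = Ah + dt * gamma * e1 @ x.T
    Bh = Bh + dt * gamma * e1 @ u.T
    x  = x + dt * (A @ x + B @ u)
    xh = xh_new

err = np.linalg.norm(A - Ah) + np.linalg.norm(B - Bh)
assert err < 0.5 * err0               # parameters moved toward (A, B)
print(err0, err)
```

The assertion is deliberately loose: the simulation only shows convergence qualitatively, while the proof below establishes the exponential rate.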

The proof for (i) is the same as that in Section 5.6.3. We prove (ii) by showing that the autocovariance R_z(0) of z is positive definite, as follows. We express z as

$$
z=\begin{bmatrix}(sI-A)^{-1}B\\ I_q\end{bmatrix}u
 =\sum_{i=1}^{q}\begin{bmatrix}(sI-A)^{-1}b_i\\ e_i\end{bmatrix}u_i
$$

where I_q ∈ R^{q×q} is the identity matrix and b_i, e_i denote the i-th columns of B, I_q, respectively. Assuming that u_i, i = 1, ..., q, are stationary and uncorrelated, the autocovariance of z can be calculated as

$$
R_z(0)=\frac{1}{2\pi}\sum_{i=1}^{q}\int_{-\infty}^{\infty}
H_i(-j\omega)\,S_{u_i}(\omega)\,H_i^{\top}(j\omega)\,d\omega
$$

where

$$
H_i(s)=\begin{bmatrix}(sI-A)^{-1}b_i\\ e_i\end{bmatrix}
$$

and S_{u_i}(ω) is the spectral distribution of u_i. Using the assumption that u_i is sufficiently rich of order n + 1, we have

$$
S_{u_i}(\omega)=\sum_{k=1}^{n+1} f_{u_i}(\omega_{ik})\,\delta(\omega-\omega_{ik})
$$

and

$$
\frac{1}{2\pi}\int_{-\infty}^{\infty}H_i(-j\omega)\,S_{u_i}(\omega)\,H_i^{\top}(j\omega)\,d\omega
=\frac{1}{2\pi}\sum_{k=1}^{n+1} f_{u_i}(\omega_{ik})\,H_i(-j\omega_{ik})H_i^{\top}(j\omega_{ik})
$$

where f_{u_i}(ω_{ik}) > 0. Therefore,

$$
R_z(0)=\frac{1}{2\pi}\sum_{i=1}^{q}\sum_{k=1}^{n+1}
f_{u_i}(\omega_{ik})\,H_i(-j\omega_{ik})H_i^{\top}(j\omega_{ik})
\qquad (5.6.20)
$$
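Formula (5.6.20) can be evaluated directly for a small example. The sketch below uses toy data of my own choosing (a controllable second-order pair (A, b), q = 1, three spectral lines with unit weights f_{u}(ω_k) = 1, none of which come from the text) to assemble R_z(0) and confirm that it is positive definite.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # controllable companion pair
b = np.array([[0.0], [1.0]])
n = 2
omegas = [1.0, 2.0, 3.0]                   # n + 1 = 3 spectral lines, f = 1

def H(s):
    """H_i(s) = [(sI − A)^{-1} b ; e_i]; here q = 1 so e_1 = 1."""
    top = np.linalg.solve(s * np.eye(n) - A, b)
    return np.vstack([top, [[1.0]]])

# R_z(0) = (1/2π) Σ_k f(ω_k) H(−jω_k) H^T(jω_k); for real x the quadratic
# form x^T R_z(0) x is carried entirely by the real part of this sum.
R = sum(H(-1j * w) @ H(1j * w).T for w in omegas) / (2 * np.pi)
R = np.real(R)

assert np.allclose(R, R.T)                 # symmetric
assert np.linalg.eigvalsh(R).min() > 0     # positive definite
print(np.linalg.eigvalsh(R))
```

Dropping one of the three frequencies leaves only two spectral lines, fewer than n + 1, and the resulting R_z(0) becomes singular, matching the sufficiency-of-richness condition in the theorem.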

Let us now consider the solution of the quadratic equation

$$
x^{\top}R_z(0)\,x=0,\qquad x\in\mathbb{R}^{n+q}
$$

Because each term under the summation on the right-hand side of (5.6.20) is positive semidefinite, x^⊤R_z(0)x = 0 holds if and only if

$$
f_{u_i}(\omega_{ik})\,x^{\top}H_i(-j\omega_{ik})H_i^{\top}(j\omega_{ik})\,x=0,
\qquad i=1,2,\ldots,q,\quad k=1,2,\ldots,n+1
$$

or, equivalently,

$$
H_i^{\top}(j\omega_{ik})\,x=0,\qquad i=1,2,\ldots,q,\quad k=1,2,\ldots,n+1
\qquad (5.6.21)
$$

Because

$$
H_i(s)=\frac{1}{a(s)}\begin{bmatrix}\operatorname{adj}(sI-A)\,b_i\\ a(s)\,e_i\end{bmatrix}
$$

where a(s) = det(sI − A), (5.6.21) is equivalent to

$$
\bar H_i^{\top}(j\omega_{ik})\,x=0,\qquad \bar H_i(s)=a(s)H_i(s),
\qquad i=1,2,\ldots,q,\quad k=1,2,\ldots,n+1
\qquad (5.6.22)
$$

Noting that each element of H̄_i is a polynomial of degree at most n, we find that g_i(s) = H̄_i^⊤(s)x is a polynomial of degree at most n. Therefore, (5.6.22) implies that the polynomial g_i(s) vanishes at n + 1 points, which, in turn, implies that g_i(s) ≡ 0 for all s ∈ C. Thus, we have

$$
\bar H_i^{\top}(s)\,x\equiv 0,\qquad i=1,2,\ldots,q
\qquad (5.6.23)
$$

for all s. Equation (5.6.23) can be written in the matrix form

$$
\begin{bmatrix}\operatorname{adj}(sI-A)B\\ a(s)\,I_q\end{bmatrix}^{\top} x \equiv 0_q
\qquad (5.6.24)
$$

where 0_q ∈ R^q is a column vector with all elements equal to zero. Let X = [x₁, ..., x_n]^⊤ ∈ R^n, Y = [x_{n+1}, ..., x_{n+q}]^⊤ ∈ R^q, i.e., x = [X^⊤, Y^⊤]^⊤. Then (5.6.24) can be expressed as

$$
(\operatorname{adj}(sI-A)B)^{\top}X + a(s)\,Y = 0_q
\qquad (5.6.25)
$$

Consider the following expressions for adj(sI − A)B and a(s):

$$
\begin{aligned}
\operatorname{adj}(sI-A)B ={}& B s^{n-1} + (AB+a_{n-1}B)s^{n-2} + (A^2B+a_{n-1}AB+a_{n-2}B)s^{n-3}\\
&+\cdots+(A^{n-1}B+a_{n-1}A^{n-2}B+\cdots+a_1B)\\
a(s)={}& s^{n}+a_{n-1}s^{n-1}+\cdots+a_1 s+a_0
\end{aligned}
$$
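The expansion of adj(sI − A)B is straightforward to verify numerically: since (sI − A) adj(sI − A) = a(s)I, the matrix coefficients above must reproduce a(s)(sI − A)^{-1}B at any s that is not an eigenvalue of A. A sketch with arbitrary test data (the sizes, seed, and test point are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, q))

# a(s) = s^n + a_{n-1} s^{n-1} + ... + a_0 from the characteristic polynomial.
coeffs = np.poly(A)              # [1, a_{n-1}, ..., a_0]
a = lambda s: np.polyval(coeffs, s)

# Matrix coefficients of adj(sI − A)B per the expansion in the text:
# M_0 = B, and M_k = A M_{k-1} + a_{n-k} B (a Faddeev-LeVerrier recursion).
M = [B]
for k in range(1, n):
    M.append(A @ M[-1] + coeffs[k] * B)

s = 2.0 + 0.5j                   # arbitrary non-eigenvalue test point
adjB = sum(Mk * s ** (n - 1 - k) for k, Mk in enumerate(M))
expected = a(s) * np.linalg.solve(s * np.eye(n) - A, B)
assert np.allclose(adjB, expected)
print("expansion verified")
```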

Equating the coefficients of s^i on both sides of equation (5.6.25), we find that X, Y must satisfy the following algebraic equations:

$$
\begin{aligned}
Y &= 0_q\\
B^{\top}X + a_{n-1}Y &= 0_q\\
(AB+a_{n-1}B)^{\top}X + a_{n-2}Y &= 0_q\\
&\;\;\vdots\\
(A^{n-1}B+a_{n-1}A^{n-2}B+\cdots+a_1B)^{\top}X + a_0Y &= 0_q
\end{aligned}
$$

or, equivalently,

$$
Y=0_q \quad\text{and}\quad (B,\,AB,\,\ldots,\,A^{n-1}B)^{\top}X = 0_{nq}
\qquad (5.6.26)
$$

where 0_{nq} is a zero column vector of dimension nq. Because (A, B) is controllable, the matrix (B, AB, ..., A^{n−1}B) is of full rank; therefore, (5.6.26) holds if and only if X = 0, Y = 0, i.e., x = 0.

Thus, we have proved that x^⊤R_z(0)x = 0 if and only if x = 0, which implies that R_z(0) is positive definite. It then follows from Lemma 5.6.1 that z is PE.
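The last step — that controllability forces X = 0 in (5.6.26) — reduces to a rank computation on the controllability matrix. A short check with an assumed controllable pair, plus an uncontrollable counterexample where a nonzero X survives:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix (B, AB, ..., A^{n-1}B)."""
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
C = ctrb(A, b)
assert np.linalg.matrix_rank(C) == 2   # full rank: C^T X = 0 ⇒ X = 0

# Uncontrollable counterexample: the second mode is never excited, the
# rank drops, and C^T X = 0 admits nonzero solutions X.
A2 = np.diag([1.0, 2.0])
b2 = np.array([[1.0], [0.0]])
assert np.linalg.matrix_rank(ctrb(A2, b2)) == 1
print("rank checks passed")
```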

Using exactly the same arguments as in the proof of Theorem 5.2.2, we conclude from z being PE that ε₁(t), θ̃(t) converge to zero exponentially fast as t → ∞. From the definition of θ̃, we have that Â(t) → A, B̂(t) → B exponentially fast as t → ∞, and the proof is complete.


5.6.5  Proof of Theorem 5.2.5

Let us define

¯(t) = θ̃^⊤(t)(0)θ̃(t)

where θ̃(t) = θ(t) − θ*