\[
y = \frac{b_1 s + b_0}{s^2 + a_1 s + a_0}\, u \qquad (5.5.14)
\]
where $a_1, a_0 > 0$ and $b_1, b_0$ are the unknown parameters. We first obtain plant Representation 2 by following the results and approach presented in Chapter 2.
We choose $\Lambda(s) = (s + \lambda_0)(s + \lambda)$ for some $\lambda_0, \lambda > 0$. It follows from (5.5.14) that
\[
\frac{s^2}{\Lambda(s)}\, y = [\,b_1, b_0\,] \frac{1}{\Lambda(s)} \begin{bmatrix} s \\ 1 \end{bmatrix} u - [\,a_1, a_0\,] \frac{1}{\Lambda(s)} \begin{bmatrix} s \\ 1 \end{bmatrix} y
\]
Because $\frac{s^2}{\Lambda(s)} = 1 - \frac{(\lambda_0 + \lambda)s + \lambda_0\lambda}{\Lambda(s)}$, we have
\[
y = \theta_1^{*\top} \frac{\alpha_1(s)}{\Lambda(s)}\, u - \theta_2^{*\top} \frac{\alpha_1(s)}{\Lambda(s)}\, y + \bar\lambda^\top \frac{\alpha_1(s)}{\Lambda(s)}\, y \qquad (5.5.15)
\]
where $\theta_1^* = [b_1, b_0]^\top$, $\theta_2^* = [a_1, a_0]^\top$, $\bar\lambda = [\lambda_0 + \lambda, \lambda_0\lambda]^\top$ and $\alpha_1(s) = [s, 1]^\top$. Because $\Lambda(s) = (s + \lambda_0)(s + \lambda)$, equation (5.5.15) implies that
\[
y = \frac{1}{s + \lambda_0} \left[ \theta_1^{*\top} \frac{\alpha_1(s)}{s + \lambda}\, u - \theta_2^{*\top} \frac{\alpha_1(s)}{s + \lambda}\, y + \bar\lambda^\top \frac{\alpha_1(s)}{s + \lambda}\, y \right]
\]
5.5. NONMINIMAL ADAPTIVE OBSERVER
Table 5.5 Adaptive observer (Realization 2)

Plant
\[
\begin{aligned}
\dot{\bar x}_1 &= -\lambda_0 \bar x_1 + \bar\theta^{*\top} \phi, & \bar x_1(0) &= 0 \\
\dot\phi_1 &= \Lambda_c \phi_1 + l u, & \phi_1(0) &= 0 \\
\dot\phi_2 &= \Lambda_c \phi_2 - l y, & \phi_2(0) &= 0 \\
\dot\omega &= \Lambda_c \omega, & \omega(0) &= \omega_0 \\
\eta_0 &= C_0^\top \omega \\
y &= \bar x_1 + \eta_0
\end{aligned}
\]
where $\phi = [u, \phi_1^\top, y, \phi_2^\top]^\top$; $\phi_i \in \mathcal{R}^{n-1}$, $i = 1, 2$; $\bar x_1 \in \mathcal{R}^1$

Observer
\[
\begin{aligned}
\dot{\hat x}_1 &= -\lambda_0 \hat x_1 + \bar\theta^\top \hat\phi, & \hat x_1(0) &= 0 \\
\dot{\hat\phi}_1 &= \Lambda_c \hat\phi_1 + l u, & \hat\phi_1(0) &= 0 \\
\dot{\hat\phi}_2 &= \Lambda_c \hat\phi_2 - l y, & \hat\phi_2(0) &= 0 \\
\hat y &= \hat x_1
\end{aligned}
\]
where $\hat\phi = [u, \hat\phi_1^\top, y, \hat\phi_2^\top]^\top$; $\hat\phi_i \in \mathcal{R}^{n-1}$, $i = 1, 2$; $\hat x_1 \in \mathcal{R}^1$

Adaptive law
\[
\dot{\bar\theta} = \Gamma \tilde y \hat\phi, \qquad \tilde y = y - \hat y
\]

Design variables
$\Gamma = \Gamma^\top > 0$; $\Lambda_c \in \mathcal{R}^{(n-1)\times(n-1)}$ is any stable matrix, and $\lambda_0 > 0$ is any scalar
Substituting for
\[
\frac{\alpha_1(s)}{s + \lambda} = \frac{1}{s + \lambda} \begin{bmatrix} s \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \frac{1}{s + \lambda} \begin{bmatrix} -\lambda \\ 1 \end{bmatrix}
\]
we obtain
\[
y = \frac{1}{s + \lambda_0} \left[ b_1 u + (b_0 - \lambda b_1) \frac{1}{s + \lambda}\, u - a_1 y - (a_0 - \lambda a_1) \frac{1}{s + \lambda}\, y + (\lambda_0 + \lambda) y - \lambda^2 \frac{1}{s + \lambda}\, y \right]
\]
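Because the equation above is an exact algebraic rearrangement of (5.5.14), solving it for $y/u$ must return the original plant transfer function. The following sketch checks this numerically; the parameter values $b_1, b_0, a_1, a_0, \lambda, \lambda_0$ are illustrative assumptions, not values from the text.

```python
# Numerical sanity check with assumed parameter values (not from the text):
# the reparametrization above is exact algebra, so solving it for y/u must
# return the plant transfer function (b1 s + b0)/(s^2 + a1 s + a0).
b1, b0, a1, a0 = 2.0, 3.0, 4.0, 5.0
lam, lam0 = 1.5, 2.5

def plant(s):
    return (b1 * s + b0) / (s**2 + a1 * s + a0)

def reparametrized(s):
    # coefficients multiplying u and y on the right-hand side above
    cu = (b1 + (b0 - lam * b1) / (s + lam)) / (s + lam0)
    cy = (-a1 - (a0 - lam * a1) / (s + lam)
          + (lam0 + lam) - lam**2 / (s + lam)) / (s + lam0)
    return cu / (1 - cy)        # y = cy*y + cu*u  =>  y/u = cu/(1 - cy)

for s in (1j, 0.7 + 2.0j, -0.3 + 5.0j, 10.0 + 0j):
    assert abs(plant(s) - reparametrized(s)) < 1e-9
print("reparametrization matches the plant transfer function")
```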
CHAPTER 5. IDENTIFIERS AND ADAPTIVE OBSERVERS
Figure 5.4 Adaptive observer using nonminimal Realization 2.
which implies that
\[
\begin{aligned}
\dot{\bar x}_1 &= -\lambda_0 \bar x_1 + \bar\theta^{*\top} \phi, & \bar x_1(0) &= 0 \\
\dot\phi_1 &= -\lambda \phi_1 + u, & \phi_1(0) &= 0 \\
\dot\phi_2 &= -\lambda \phi_2 - y, & \phi_2(0) &= 0 \\
y &= \bar x_1
\end{aligned}
\]
where $\phi = [u, \phi_1, y, \phi_2]^\top$, $\bar\theta^* = [\,b_1,\; b_0 - \lambda b_1,\; \lambda_0 + \lambda - a_1,\; a_0 - \lambda a_1 + \lambda^2\,]^\top$. Using Table 5.5, the adaptive observer for estimating $\bar x_1$, $\phi_1$, $\phi_2$ and $\bar\theta^*$ is given by
\[
\begin{aligned}
\dot{\hat x}_1 &= -\lambda_0 \hat x_1 + \bar\theta^\top \hat\phi, & \hat x_1(0) &= 0 \\
\dot{\hat\phi}_1 &= -\lambda \hat\phi_1 + u, & \hat\phi_1(0) &= 0 \\
\dot{\hat\phi}_2 &= -\lambda \hat\phi_2 - y, & \hat\phi_2(0) &= 0 \\
\hat y &= \hat x_1 \\
\dot{\bar\theta} &= \Gamma \hat\phi (y - \hat y)
\end{aligned}
\]
where $\hat\phi = [u, \hat\phi_1, y, \hat\phi_2]^\top$ and $\Gamma = \Gamma^\top > 0$. If, in addition to $\bar\theta^*$, we would like to estimate $\theta^* = [b_1, b_0, a_1, a_0]^\top$, we use the relationships
\[
\begin{aligned}
\hat b_1 &= \bar\theta_1 \\
\hat b_0 &= \bar\theta_2 + \lambda \bar\theta_1 \\
\hat a_1 &= -\bar\theta_3 + \lambda_0 + \lambda \\
\hat a_0 &= \bar\theta_4 - \lambda \bar\theta_3 + \lambda\lambda_0
\end{aligned}
\]
where $\bar\theta_i$, $i = 1, 2, 3, 4$ are the elements of $\bar\theta$, and $\hat b_i, \hat a_i$, $i = 0, 1$ are the estimates of $b_i, a_i$, $i = 0, 1$, respectively.
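The four relationships simply invert the definition of $\bar\theta^*$ given above. A quick consistency check, with made-up parameter values that are not from the text:

```python
# Quick consistency check with assumed numbers (not from the text): form
# theta-bar* from known (b1, b0, a1, a0) as in the realization above, then
# apply the four relationships to recover the plant parameters exactly.
b1, b0, a1, a0 = 2.0, 3.0, 4.0, 5.0
lam, lam0 = 1.5, 2.5

# theta-bar* = [b1, b0 - lam*b1, lam0 + lam - a1, a0 - lam*a1 + lam^2]
th = [b1, b0 - lam * b1, lam0 + lam - a1, a0 - lam * a1 + lam**2]

b1_hat = th[0]
b0_hat = th[1] + lam * th[0]
a1_hat = -th[2] + lam0 + lam
a0_hat = th[3] - lam * th[2] + lam * lam0

assert (b1_hat, b0_hat, a1_hat, a0_hat) == (b1, b0, a1, a0)
print("recovered:", b1_hat, b0_hat, a1_hat, a0_hat)
```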
For parameter convergence we choose
\[
u = 6 \sin 2.6 t + 8 \sin 4.2 t
\]
which is sufficiently rich of order 4.
✷
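The example above can be simulated directly. The following sketch integrates the plant, the observer states, and the adaptive law with forward Euler, driven by the sufficiently rich input above; the plant coefficients, design constants, adaptation gain, step size, and horizon are illustrative assumptions, not values from the text.

```python
import numpy as np

# Forward-Euler simulation sketch of the adaptive observer above. All numeric
# values (plant coefficients, lam, lam0, gamma, dt, T) are assumptions chosen
# for illustration only.
b1, b0, a1, a0 = 2.0, 3.0, 4.0, 5.0     # "true" plant parameters
lam, lam0 = 1.5, 2.5                    # Lambda(s) = (s + lam0)(s + lam)
gamma = 2.0                             # Gamma = gamma * I
dt, T = 5e-4, 100.0

x = np.zeros(2)        # plant in controllable canonical form:
                       # xdot = [[-a1, -a0], [1, 0]] x + [1, 0]' u,  y = [b1, b0] x
phi1_hat, phi2_hat, x1_hat = 0.0, 0.0, 0.0
theta = np.zeros(4)    # estimate of theta-bar*

for k in range(int(T / dt)):
    t = k * dt
    u = 6 * np.sin(2.6 * t) + 8 * np.sin(4.2 * t)   # sufficiently rich of order 4
    y = b1 * x[0] + b0 * x[1]
    phi_hat = np.array([u, phi1_hat, y, phi2_hat])
    e = y - x1_hat                                   # y - y_hat
    # Euler updates of plant, observer states, and adaptive law
    x = x + dt * np.array([-a1 * x[0] - a0 * x[1] + u, x[0]])
    x1_hat += dt * (-lam0 * x1_hat + theta @ phi_hat)
    phi1_hat += dt * (-lam * phi1_hat + u)
    phi2_hat += dt * (-lam * phi2_hat - y)
    theta = theta + dt * gamma * e * phi_hat

theta_star = np.array([b1, b0 - lam * b1, lam0 + lam - a1, a0 - lam * a1 + lam**2])
print("parameter error:", np.linalg.norm(theta - theta_star))
```

With a persistently exciting input of this richness, the parameter error should shrink steadily; the rate depends on the assumed adaptation gain and excitation level.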
5.6 Parameter Convergence Proofs
In this section we present all the lengthy proofs of theorems dealing with convergence of the estimated parameters.
5.6.1 Useful Lemmas
The following lemmas are used in the proofs of several theorems to follow:
Lemma 5.6.1 If the autocovariance of a function $x: \mathcal{R}^+ \to \mathcal{R}^n$ defined as
\[
R_x(t) = \lim_{T \to \infty} \frac{1}{T} \int_{t_0}^{t_0+T} x(\tau) x^\top(t + \tau)\, d\tau \qquad (5.6.1)
\]
exists and is uniform with respect to $t_0$, then $x$ is PE if and only if $R_x(0)$ is positive definite.
Proof If: The definition of the autocovariance $R_x(0)$ implies that there exists a $T_0 > 0$ such that
\[
\frac{1}{2} R_x(0) \leq \frac{1}{T_0} \int_{t_0}^{t_0+T_0} x(\tau) x^\top(\tau)\, d\tau \leq \frac{3}{2} R_x(0), \qquad \forall t_0 \geq 0
\]
If $R_x(0)$ is positive definite, there exist $\alpha_1, \alpha_2 > 0$ such that $\alpha_1 I \leq R_x(0) \leq \alpha_2 I$. Therefore,
\[
\frac{\alpha_1}{2} I \leq \frac{1}{T_0} \int_{t_0}^{t_0+T_0} x(\tau) x^\top(\tau)\, d\tau \leq \frac{3\alpha_2}{2} I
\]
for all $t_0 \geq 0$ and thus $x$ is PE.
Only if: If $x$ is PE, then there exist constants $\alpha_0, T_1 > 0$ such that
\[
\int_t^{t+T_1} x(\tau) x^\top(\tau)\, d\tau \geq \alpha_0 T_1 I
\]
for all $t \geq 0$. For any $T > T_1$, we can write
\[
\int_{t_0}^{t_0+T} x(\tau) x^\top(\tau)\, d\tau = \sum_{i=0}^{k-1} \int_{t_0+iT_1}^{t_0+(i+1)T_1} x(\tau) x^\top(\tau)\, d\tau + \int_{t_0+kT_1}^{t_0+T} x(\tau) x^\top(\tau)\, d\tau \geq k \alpha_0 T_1 I
\]
where $k$ is the largest integer that satisfies $k \leq T/T_1$, i.e., $kT_1 \leq T < (k+1)T_1$.
Therefore, we have
\[
\frac{1}{T} \int_{t_0}^{t_0+T} x(\tau) x^\top(\tau)\, d\tau \geq \frac{kT_1}{T} \alpha_0 I
\]
For $k \geq 2$, we have $\frac{kT_1}{T} = \frac{(k+1)T_1 - T_1}{T} \geq 1 - \frac{T_1}{T} \geq \frac{1}{2}$; thus,
\[
\frac{1}{T} \int_{t_0}^{t_0+T} x(\tau) x^\top(\tau)\, d\tau \geq \frac{\alpha_0}{2} I
\]
and
\[
R_x(0) = \lim_{T \to \infty} \frac{1}{T} \int_{t_0}^{t_0+T} x(\tau) x^\top(\tau)\, d\tau \geq \frac{\alpha_0}{2} I
\]
which implies that $R_x(0)$ is positive definite. ✷
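Lemma 5.6.1 also suggests a practical test: estimate $R_x(0)$ by a finite-time average and examine its smallest eigenvalue. A sketch with two assumed example signals, not taken from the text:

```python
import numpy as np

# Numerical illustration of Lemma 5.6.1 with assumed example signals (not from
# the text): approximate Rx(0) = lim (1/T) * int x(tau) x'(tau) dtau by a
# Riemann sum and read PE off the smallest eigenvalue of Rx(0).
dt, T = 1e-2, 2000.0
ts = np.arange(0.0, T, dt)

X_pe = np.stack([np.sin(ts), np.cos(ts)], axis=1)   # x = [sin t, cos t]': PE
X_not = np.stack([np.sin(ts), np.sin(ts)], axis=1)  # x = [sin t, sin t]': not PE

def Rx0(X):
    return (X.T @ X) / len(X)   # ~ (1/T) * integral of x(tau) x'(tau) dtau

eig_pe = np.linalg.eigvalsh(Rx0(X_pe))
eig_not = np.linalg.eigvalsh(Rx0(X_not))
print(eig_pe)    # both eigenvalues near 0.5: Rx(0) > 0, so x is PE
print(eig_not)   # smallest eigenvalue near 0: Rx(0) singular, x is not PE
assert eig_pe[0] > 0.4 and eig_not[0] < 1e-2
```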
Lemma 5.6.2 Consider the system
\[
y = H(s) u
\]
where $H(s)$ is a strictly proper transfer function matrix of dimension $m \times n$ with stable poles and real impulse response $h(t)$. If $u$ is stationary, with autocovariance $R_u(t)$, then $y$ is stationary, with autocovariance
\[
R_y(t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\tau_1) R_u(t + \tau_1 - \tau_2) h^\top(\tau_2)\, d\tau_1\, d\tau_2
\]
and spectral distribution
\[
S_y(\omega) = H(-j\omega) S_u(\omega) H^\top(j\omega)
\]
Proof See [201].
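For a scalar example of the spectral relation, take the assumed system $H(s) = 1/(s+1)$ with the stationary input $u = \sin 2t$ (neither is from the text): the input power is concentrated at $\omega = 2$, so the lemma predicts $R_y(0) = |H(2j)|^2 R_u(0) = \tfrac{1}{5}\cdot\tfrac{1}{2} = 0.1$, which a simple simulation confirms.

```python
import numpy as np

# Scalar illustration of Lemma 5.6.2 with an assumed example (not from the
# text): for H(s) = 1/(s+1) and stationary input u = sin(2t), the spectral
# relation Sy(w) = H(-jw) Su(w) H'(jw) puts all the power at w = 2, so
# Ry(0) = |H(2j)|^2 * Ru(0) = (1/5) * (1/2) = 0.1.
dt = 1e-3
t = np.arange(0.0, 400.0, dt)
u = np.sin(2.0 * t)
y = np.zeros_like(t)
for k in range(len(t) - 1):
    y[k + 1] = y[k] + dt * (-y[k] + u[k])   # forward Euler for ydot = -y + u

ry0 = np.mean(y[len(t) // 2:] ** 2)         # time average after the transient
print(ry0)                                   # close to |H(2j)|^2 / 2 = 0.1
assert abs(ry0 - 0.1) < 5e-3
```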
Lemma 5.6.3 Consider the system described by
\[
\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix} A & -F^\top(t) \\ P_1 F(t) P_2 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad (5.6.2)
\]
where $x_1 \in \mathcal{R}^{n_1}$, $x_2 \in \mathcal{R}^{rn_1}$ for some integer $r, n_1 \geq 1$, $A, P_1, P_2$ are constant matrices and $F(t)$ is of the form
\[
F(t) = \begin{bmatrix} z_1 I_{n_1} \\ z_2 I_{n_1} \\ \vdots \\ z_r I_{n_1} \end{bmatrix} \in \mathcal{R}^{rn_1 \times n_1}
\]
where $z_i$, $i = 1, 2, \ldots, r$ are the elements of the vector $z \in \mathcal{R}^r$. Suppose that $z$ is PE and there exists a matrix $P_0 > 0$ such that
\[
\dot P_0 + A_0^\top P_0 + P_0 A_0 + C_0 C_0^\top \leq 0 \qquad (5.6.3)
\]
where
\[
A_0 = \begin{bmatrix} A & -F^\top(t) \\ P_1 F(t) P_2 & 0 \end{bmatrix}, \qquad C_0^\top = [\,I_{n_1}, 0\,]
\]
Then the equilibrium $x_{1e} = 0$, $x_{2e} = 0$ of (5.6.2) is e.s. in the large.
Proof Consider the system (5.6.2) that we express as
\[
\begin{aligned}
\dot x &= A_0(t) x \\
y &= C_0^\top x = x_1
\end{aligned} \qquad (5.6.4)
\]
where $x = [x_1^\top, x_2^\top]^\top$. We first show that $(C_0, A_0)$ is UCO by establishing that $(C_0, A_0 + K C_0^\top)$ is UCO for some $K \in \mathcal{L}_\infty$, which according to Lemma 4.8.1 implies that $(C_0, A_0)$ is UCO. We choose
\[
K = \begin{bmatrix} -\gamma I_{n_1} - A \\ -P_1 F(t) P_2 \end{bmatrix}
\]
for some $\gamma > 0$ and consider the following system associated with $(C_0, A_0 + K C_0^\top)$:
\[
\begin{aligned}
\begin{bmatrix} \dot Y_1 \\ \dot Y_2 \end{bmatrix} &= \begin{bmatrix} -\gamma I_{n_1} & -F^\top(t) \\ 0 & 0 \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} \\
y_1 &= [\,I_{n_1}, 0\,] \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix}
\end{aligned} \qquad (5.6.5)
\]
According to Lemma 4.8.4, the system (5.6.5) is UCO if
\[
F_f(t) = \frac{1}{s + \gamma}\, F(t)
\]