Theorem 9.2.1 Assume that the nominal transfer function $G_0(s)$ has no zero-pole cancellation. If $u$ is dominantly rich of order $n+m+1$, then there exists a $\mu^* > 0$ such that for $\mu \in [0, \mu^*)$, the adaptive law (9.2.18), or any stable adaptive law from Tables 4.2, 4.3, and 4.5 with the exception of the pure least-squares algorithm, based on the parametric model (9.2.17) with $\eta = 0$ guarantees that $\epsilon, \theta \in \mathcal{L}_\infty$ and $\theta$ converges exponentially fast to the residual set
\[
R_e = \{\theta \mid |\theta - \theta^*| \le c\mu\}
\]
where $c$ is a constant independent of $\mu$.
To prove Theorem 9.2.1 we need the following lemma, which gives conditions that guarantee the PE property of $\phi$ for $\mu \neq 0$. We express the signal vector $\phi$ as
\[
\phi = H_0(s)u + \mu H_1(\mu s, s)u \tag{9.2.19}
\]
where
\[
H_0(s) = \frac{1}{\Lambda(s)} \left[ \alpha_m^{\top}(s), \; -\alpha_{n-1}^{\top}(s)\, G_0(s) \right]^{\top}, \qquad
H_1(\mu s, s) = \frac{1}{\Lambda(s)} \left[ 0, \ldots, 0, \; -\alpha_{n-1}^{\top}(s)\, G_0(s)\, \Delta_m(\mu s, s) \right]^{\top}
\]
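To make (9.2.19) concrete, the sketch below generates $\phi$ by filtering $u$ and the perturbed output $y$ through the $\alpha_m(s)/\Lambda(s)$ and $\alpha_{n-1}(s)/\Lambda(s)$ filters of the parametric model, and compares it with $\phi_0 = H_0(s)u$ obtained from the unperturbed output. It assumes the toy data $G_0(s) = (s+3)/(s^2+3s+2)$, $\Lambda(s) = (s+2)^2$, and $\Delta_m(\mu s, s) = -s/(\mu s + 1)$ (a fast parasitic pole), which are illustrative choices, not taken from the text.

```python
# Illustrate phi = H0(s)u + mu*H1(mu s, s)u for a toy example:
#   G0 = (s+3)/(s^2+3s+2), Lambda = (s+2)^2, Delta_m(mu s, s) = -s/(mu s + 1),
# so that (1 + mu*Delta_m) = 1/(mu s + 1) is an unmodeled fast pole.
# phi is built as [s u, u, -s y, -y]^T / Lambda, with y the perturbed output;
# phi0 uses the nominal output y0 = G0(s)u.  Then sup_t |phi - phi0| = O(mu).
import numpy as np
from scipy.signal import lsim

Rp, Zp, Lam = [1, 3, 2], [1, 3], [1, 4, 4]
dt = 0.005; t = np.arange(0, 60, dt)
u = sum(np.sin(w * t) for w in (0.5, 1.0, 2.0, 3.5))

filt = lambda num, den, sig: lsim((num, den), sig, t)[1]
def regressor(y):
    return np.column_stack([filt([1, 0], Lam, u), filt([1], Lam, u),
                            -filt([1, 0], Lam, y), -filt([1], Lam, y)])

y0   = filt(Zp, Rp, u)                           # nominal output G0(s)u
phi0 = regressor(y0)                             # = H0(s)u
for mu in (0.01, 0.05, 0.2):
    y   = filt(Zp, np.polymul(Rp, [mu, 1]), u)   # perturbed output
    phi = regressor(y)                           # = H0(s)u + mu*H1(mu s, s)u
    print(f"mu = {mu}: sup_t |phi - phi0| = {np.abs(phi - phi0).max():.4f}")
```

For this example the deviation of $\phi$ from $\phi_0$ shrinks roughly in proportion to $\mu$, which is the content of the decomposition (9.2.19).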
Lemma 9.2.1 Let $u : \mathbb{R}^+ \to \mathbb{R}$ be stationary and let $H_0(s)$, $H_1(\mu s, s)$ satisfy the following assumptions:

(a) The vectors $H_0(j\omega_1), H_0(j\omega_2), \ldots, H_0(j\omega_{\bar n})$ are linearly independent on $\mathcal{C}^{\bar n}$ for all possible $\omega_1, \omega_2, \ldots, \omega_{\bar n} \in \mathbb{R}$, where $\bar n = n + m + 1$ and $\omega_i \neq \omega_k$ for $i \neq k$.

(b) For any set $\{\omega_1, \omega_2, \ldots, \omega_{\bar n}\}$ satisfying $|\omega_i - \omega_k| > O(\mu)$ for $i \neq k$ and $|\omega_i| < O(\frac{1}{\mu})$, we have $|\det(\bar H)| > O(\mu)$, where $\bar H = [H_0(j\omega_1), H_0(j\omega_2), \ldots, H_0(j\omega_{\bar n})]$.
(c) $|H_1(j\mu\omega, j\omega)| \le c$ for some constant $c$ independent of $\mu$ and for all $\omega \in \mathbb{R}$.

Then there exists a $\mu^* > 0$ such that for $\mu \in [0, \mu^*)$, $\phi$ is PE of order $\bar n$ with level of excitation $\alpha_1 > O(\mu)$, provided that the input signal $u$ is dominantly rich of order $\bar n$ for the plant (9.2.16).
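Conditions (a) to (c) can be checked numerically for a given nominal model. The sketch below is a minimal spot-check, assuming the same toy data as above: $G_0(s) = (s+3)/(s^2+3s+2)$ (so $n = 2$, $m = 1$, $\bar n = 4$), $\Lambda(s) = (s+2)^2$, and $\Delta_m(\mu s, s) = -s/(\mu s + 1)$; none of these choices come from the text.

```python
# Numerical spot-check of conditions (a)-(c) of Lemma 9.2.1 for a toy example.
# Assumed data: G0(s) = (s+3)/(s^2+3s+2), Lambda(s) = (s+2)^2,
# Delta_m(mu*s, s) = -s/(mu*s + 1); n = 2, m = 1, nbar = n + m + 1 = 4.
import numpy as np

def G0(s):      return (s + 3) / (s**2 + 3*s + 2)
def Lam(s):     return (s + 2)**2
def Dm(mu, s):  return -s / (mu*s + 1)

def H0(w):
    """H0(jw) = [alpha_m(jw); -alpha_{n-1}(jw) G0(jw)] / Lambda(jw), nbar = 4."""
    s = 1j * w
    return np.array([s, 1, -s*G0(s), -G0(s)]) / Lam(s)

def H1(mu, w):
    """H1(j mu w, jw): only the output-related entries are affected by Delta_m."""
    s = 1j * w
    g = G0(s) * Dm(mu, s)
    return np.array([0, 0, -s*g, -g]) / Lam(s)

# (a), (b): distinct O(1) frequencies give a matrix of H0 columns with |det| bounded
# away from zero.
freqs = [0.5, 1.0, 2.0, 3.5]
Hbar = np.column_stack([H0(w) for w in freqs])
print("|det Hbar| =", abs(np.linalg.det(Hbar)))

# (c): sup_w |H1(j mu w, jw)| stays bounded by a constant independent of mu.
wgrid = np.logspace(-2, 4, 2000)
for mu in (0.01, 0.1):
    sup = max(np.linalg.norm(H1(mu, w)) for w in wgrid)
    print(f"mu = {mu}: sup_w |H1| ~ {sup:.3f}")
```

For this toy model $|\det \bar H|$ stays bounded away from zero for well-separated $O(1)$ frequencies, while $\sup_\omega |H_1|$ changes little as $\mu$ shrinks, which is the behavior conditions (b) and (c) require.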
Proof of Lemma 9.2.1: Let us define
\[
\phi_0 = H_0(s)u, \qquad \phi_1 = H_1(\mu s, s)u
\]
Because $\phi_0$ is the signal vector for the ideal case, i.e., $H_0(s)$ does not depend on $\mu$, $u$ being sufficiently rich of order $\bar n$ together with the assumed properties (a), (b) of $H_0(s)$ imply, according to Theorem 5.2.1, that $\phi_0$ is PE with level $\alpha_0 > 0$, and $\alpha_0$ is independent of $\mu$, i.e.,
\[
\frac{1}{T} \int_t^{t+T} \phi_0(\tau)\,\phi_0^{\top}(\tau)\, d\tau \ge \alpha_0 I \tag{9.2.20}
\]
$\forall t \ge 0$ and some $T > 0$. On the other hand, because $H_1(\mu s, s)$ is stable and $|H_1(j\mu\omega, j\omega)| \le c$ for all $\omega \in \mathbb{R}$, we have $\phi_1 \in \mathcal{L}_\infty$ and
\[
\frac{1}{T} \int_t^{t+T} \phi_1(\tau)\,\phi_1^{\top}(\tau)\, d\tau \le \beta I \tag{9.2.21}
\]
for some constant $\beta$ which is independent of $\mu$. Note that
\[
\begin{aligned}
\frac{1}{T} \int_t^{t+T} \phi(\tau)\,\phi^{\top}(\tau)\, d\tau
&= \frac{1}{T} \int_t^{t+T} \big(\phi_0(\tau) + \mu\phi_1(\tau)\big)\big(\phi_0(\tau) + \mu\phi_1(\tau)\big)^{\top} d\tau \\
&\ge \frac{1}{2T} \int_t^{t+T} \phi_0(\tau)\,\phi_0^{\top}(\tau)\, d\tau
 - \frac{\mu^2}{T} \int_t^{t+T} \phi_1(\tau)\,\phi_1^{\top}(\tau)\, d\tau
\end{aligned}
\]
where the inequality is obtained by using $(x+y)(x+y)^{\top} \ge \frac{1}{2}xx^{\top} - yy^{\top}$. Combining this with (9.2.20) and (9.2.21), we obtain
\[
\frac{1}{T} \int_t^{t+T} \phi(\tau)\,\phi^{\top}(\tau)\, d\tau \ge \frac{\alpha_0}{2} I - \mu^2 \beta I
\]
which implies that $\phi$ has a level of PE $\alpha_1 = \frac{\alpha_0}{4}$, say, for $\mu \in [0, \mu^*)$ where $\mu^* = \sqrt{\frac{\alpha_0}{4\beta}}$. ✷
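The effect of $\mu$ on the PE level can also be seen numerically. For a stationary input $u(t) = \sum_k A_k \sin \omega_k t$ with distinct frequencies, the long-window average $\frac{1}{T}\int_t^{t+T}\phi\phi^{\top}d\tau$ approaches $\sum_k \frac{A_k^2}{2}\,\mathrm{Re}\{H(j\omega_k)H^{*\top}(j\omega_k)\}$ with $H = H_0 + \mu H_1$. The sketch below evaluates the minimum eigenvalue of this matrix as $\mu$ grows, reusing the toy $G_0$, $\Lambda$, and $\Delta_m$ assumed in the previous sketches.

```python
# PE level of phi = (H0 + mu*H1)(s) u versus mu, for a sum-of-sinusoids input.
# For u = sum_k A_k sin(w_k t), the long-window time average of phi*phi^T approaches
#   M(mu) = sum_k (A_k^2 / 2) * Re{ H(j w_k) H(j w_k)^H },   H = H0 + mu*H1.
# Toy data as in the previous sketches (assumed, not from the text).
import numpy as np

G0  = lambda s: (s + 3) / (s**2 + 3*s + 2)
Lam = lambda s: (s + 2)**2
Dm  = lambda mu, s: -s / (mu*s + 1)
H0  = lambda w: np.array([1j*w, 1, -1j*w*G0(1j*w), -G0(1j*w)]) / Lam(1j*w)
H1  = lambda mu, w: np.array([0, 0, -1j*w*G0(1j*w)*Dm(mu, 1j*w),
                              -G0(1j*w)*Dm(mu, 1j*w)]) / Lam(1j*w)

freqs = np.array([0.5, 1.0, 2.0, 3.5])     # dominant (O(1)) frequencies
amps  = np.ones_like(freqs)

def pe_matrix(mu):
    M = np.zeros((4, 4))
    for A, w in zip(amps, freqs):
        H = H0(w) + mu * H1(mu, w)
        M += 0.5 * A**2 * np.real(np.outer(H, H.conj()))
    return M

alpha0 = np.linalg.eigvalsh(pe_matrix(0.0)).min()
print("mu = 0   : PE level alpha_0 =", alpha0)
for mu in (0.05, 0.1, 0.2):
    a1 = np.linalg.eigvalsh(pe_matrix(mu)).min()
    # for small mu the level stays close to alpha_0, consistent with the
    # lower bound alpha_0/2 - mu^2*beta used in the proof
    print(f"mu = {mu}: PE level alpha_1 = {a1:.4f}")
```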
Lemma 9.2.1 indicates that if $u$ is dominantly rich, then the PE level of $\phi_0$, the signal vector associated with the dominant part of the plant, is much
higher than that of $\mu\phi_1$, the signal vector due to the unmodeled part of the plant, provided of course that $\mu$ is relatively small. The smaller the parameter $\mu$ is, the larger the separation between the spectra of the dominant dynamics and the unmodeled high-frequency dynamics.
Let us now use Lemma 9.2.1 to prove Theorem 9.2.1.
Proof of Theorem 9.2.1: The error equation that describes the stability properties of the parameter identifier is given by
\[
\dot{\tilde\theta} = -\Gamma\phi\phi^{\top}\tilde\theta + \mu\Gamma\phi\eta \tag{9.2.22}
\]
where $\tilde\theta = \theta - \theta^*$ is the parameter error. Let us first assume that all the conditions of Lemma 9.2.1 are satisfied, so that for a dominantly rich input and for all $\mu \in [0, \mu^*)$ the signal vector $\phi$ is PE with level $\alpha_1 > O(\mu)$. Using the results of Chapter 4, we can show that the homogeneous part of (9.2.22) is u.a.s., i.e., there exist constants $\alpha > 0$, $\beta > 0$ independent of $\mu$ such that the transition matrix $\Phi(t, t_0)$ of the homogeneous part of (9.2.22) satisfies
\[
\|\Phi(t, t_0)\| \le \beta e^{-\alpha(t - t_0)} \tag{9.2.23}
\]
Therefore, it follows from (9.2.22), (9.2.23) that
\[
|\tilde\theta(t)| \le c e^{-\alpha t} + \mu c, \qquad \forall \mu \in [0, \mu^*)
\]
where $c \ge 0$ is a finite constant independent of $\mu$, which implies that $\tilde\theta, \theta \in \mathcal{L}_\infty$ and $\theta(t)$ converges to the residual set $R_e$ exponentially fast.
Let us now verify that all the conditions of Lemma 9.2.1 assumed above are satisfied. These conditions are

(a) $H_0(j\omega_1), \ldots, H_0(j\omega_{\bar n})$ are linearly independent for all possible $\omega_1, \omega_2, \ldots, \omega_{\bar n}$ with $\omega_i \neq \omega_k$, $i, k = 1, \ldots, \bar n$; $\bar n = n + m + 1$

(b) For any set $\{\omega_1, \omega_2, \ldots, \omega_{\bar n}\}$ where $|\omega_i - \omega_k| > O(\mu)$, $i \neq k$ and $|\omega_i| < O(\frac{1}{\mu})$, we have $|\det\{[H_0(j\omega_1), H_0(j\omega_2), \ldots, H_0(j\omega_{\bar n})]\}| > O(\mu)$

(c) $|H_1(j\mu\omega, j\omega)| \le c$ for all $\omega \in \mathbb{R}$

It has been shown in the proof of Theorem 5.2.4 that the coprimeness of the numerator and denominator polynomials of $G_0(s)$ implies the linear independence of $H_0(j\omega_1), H_0(j\omega_2), \ldots, H_0(j\omega_{\bar n})$ for any $\omega_1, \omega_2, \ldots, \omega_{\bar n}$ with $\omega_i \neq \omega_k$, and thus (a) is verified. From the definition of $H_1(\mu s, s)$ and the assumption that $G_0(s)\Delta_m(\mu s, s)$ is proper, we have $|H_1(j\mu\omega, j\omega)| \le c$ for some constant $c$, which verifies (c).
To establish condition (b), we proceed as follows: From the definition of $H_0(s)$, we can write
\[
H_0(s) = Q_0 \left[ \begin{array}{c} s^{\bar n - 1} \\ s^{\bar n - 2} \\ \vdots \\ s \\ 1 \end{array} \right] \frac{1}{\Lambda(s) R_p(s)}
\]
where $Q_0 \in \mathbb{R}^{\bar n \times \bar n}$ is a constant matrix. Furthermore, $Q_0$ is nonsingular; otherwise, $H_0(j\omega_1), H_0(j\omega_2), \ldots, H_0(j\omega_{\bar n})$ would be linearly dependent, which contradicts (a), which we have already shown to hold. Therefore,
\[
[H_0(j\omega_1), \ldots, H_0(j\omega_{\bar n})]
= Q_0 \left[ \begin{array}{cccc}
(j\omega_1)^{\bar n - 1} & (j\omega_2)^{\bar n - 1} & \cdots & (j\omega_{\bar n})^{\bar n - 1} \\
(j\omega_1)^{\bar n - 2} & (j\omega_2)^{\bar n - 2} & \cdots & (j\omega_{\bar n})^{\bar n - 2} \\
\vdots & \vdots & & \vdots \\
j\omega_1 & j\omega_2 & \cdots & j\omega_{\bar n} \\
1 & 1 & \cdots & 1
\end{array} \right]
\mathrm{diag}\left\{ \frac{1}{\Lambda(j\omega_i) R_p(j\omega_i)} \right\} \tag{9.2.24}
\]
Noting that the middle factor matrix on the right-hand side of (9.2.24) is a Vandermonde matrix [62], we have
\[
\det\{[H_0(j\omega_1), \ldots, H_0(j\omega_{\bar n})]\}
= \det(Q_0) \prod_{i=1}^{\bar n} \frac{1}{\Lambda(j\omega_i) R_p(j\omega_i)} \prod_{1 \le i < k \le \bar n} (j\omega_i - j\omega_k)
\]
and, therefore, (b) follows immediately from the assumptions that $|\omega_i - \omega_k| > O(\mu)$ and $|\omega_i| < O(\frac{1}{\mu})$. ✷
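The factorization (9.2.24) is easy to verify numerically. The sketch below builds $Q_0$ from the coefficients of $\Lambda(s) R_p(s) H_0(s)$ for the same toy plant assumed in the earlier sketches ($G_0(s) = (s+3)/(s^2+3s+2)$, $\Lambda(s) = (s+2)^2$, so $\bar n = 4$) and compares the magnitude of $\det\{[H_0(j\omega_i)]\}$ computed directly with the product formula above.

```python
# Verify det{[H0(jw_1),...,H0(jw_nbar)]} = det(Q0) * prod 1/(Lambda*Rp) * Vandermonde
# for the assumed toy example G0 = Zp/Rp = (s+3)/(s^2+3s+2), Lambda = (s+2)^2, nbar = 4.
# (Magnitudes are compared, so the sign convention of the Vandermonde factor is immaterial.)
import numpy as np
from itertools import combinations

Zp  = np.array([1.0, 3.0])            # s + 3
Rp  = np.array([1.0, 3.0, 2.0])       # s^2 + 3s + 2
Lam = np.array([1.0, 4.0, 4.0])       # (s + 2)^2
nbar = 4

def pad(p):                            # pad polynomial coeffs to length nbar (degree nbar-1)
    return np.concatenate([np.zeros(nbar - len(p)), p])

# Rows of Q0 = coefficients of Lambda*Rp*H0(s) = [s*Rp, Rp, -s*Zp, -Zp]
Q0 = np.vstack([pad(np.polymul([1, 0], Rp)), pad(Rp),
                pad(-np.polymul([1, 0], Zp)), pad(-Zp)])

freqs = [0.5, 1.0, 2.0, 3.5]
H = np.column_stack([Q0 @ np.array([(1j*w)**3, (1j*w)**2, 1j*w, 1])
                     / (np.polyval(Lam, 1j*w) * np.polyval(Rp, 1j*w)) for w in freqs])

direct  = abs(np.linalg.det(H))
product = abs(np.linalg.det(Q0)) \
        * np.prod([1/abs(np.polyval(Lam, 1j*w) * np.polyval(Rp, 1j*w)) for w in freqs]) \
        * np.prod([abs(1j*wi - 1j*wk) for wi, wk in combinations(freqs, 2)])
print(direct, product)                 # the two magnitudes agree
```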
Theorem 9.2.1 indicates that if the input $u$ is dominantly rich of order $n+m+1$ and there is sufficient separation between the spectra of the dominant and the unmodeled high-frequency dynamics, i.e., $\mu$ is small, then the parameter error bound at steady state is small. The condition of dominant richness is also necessary for the parameter error to converge exponentially fast to a small residual set, in the sense that if $u$ is sufficiently rich but not dominantly rich, then we can find an example for which the signal vector $\phi$ loses its PE property no matter how fast the unmodeled dynamics are. In the case of the pure least-squares algorithm, where the matrix $\Gamma$ in (9.2.22) is replaced with $P$ generated from $\dot P = -P\frac{\phi\phi^{\top}}{m^2}P$, we cannot establish the u.a.s. of the homogeneous part of (9.2.22) even when $\phi$ is PE. As a result, we are unable to establish (9.2.23) and, therefore, the convergence of $\theta$ to a residual set even when $u$ is dominantly rich.
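A quick simulation of the error equation (9.2.22) illustrates the theorem. The sketch below is a rough illustration only: it assumes the same toy $G_0$, $\Lambda$, and $\Delta_m$ as the earlier sketches, a constant adaptive gain $\Gamma = \gamma I$, a four-sinusoid dominantly rich input, and steady-state expressions for $\phi$ and $\eta$; the specific gains and frequencies are arbitrary choices, not taken from the text.

```python
# Euler simulation of the parameter-error equation (9.2.22):
#   d(theta_tilde)/dt = -Gamma*phi*phi^T*theta_tilde + mu*Gamma*phi*eta
# phi and eta are evaluated from their steady-state sinusoidal expressions for
# u = sum_k sin(w_k t), using the toy G0, Lambda, Delta_m assumed earlier.
import numpy as np

G0  = lambda s: (s + 3) / (s**2 + 3*s + 2)
Lam = lambda s: (s + 2)**2
Dm  = lambda mu, s: -s / (mu*s + 1)

freqs = np.array([0.5, 1.0, 2.0, 3.5])

def phi_eta(mu, t):
    phi, eta = np.zeros(4), 0.0
    for w in freqs:
        s = 1j * w
        H = (np.array([s, 1, -s*G0(s), -G0(s)])
             + mu * np.array([0, 0, -s*G0(s)*Dm(mu, s), -G0(s)*Dm(mu, s)])) / Lam(s)
        phi += np.imag(H * np.exp(s*t))            # steady-state response to sin(w t)
        eta += np.imag(G0(s) * Dm(mu, s) * np.exp(s*t))
    return phi, eta

gamma, dt, Tf = 20.0, 0.005, 200.0
for mu in (0.01, 0.05, 0.1, 0.2):
    th = np.ones(4)                                # theta_tilde(0)
    for k in range(int(Tf/dt)):
        phi, eta = phi_eta(mu, k*dt)
        th += dt * gamma * (-phi * (phi @ th) + mu * phi * eta)
    # settles to a value that scales roughly with mu
    print(f"mu = {mu}: |theta_tilde(Tf)| = {np.linalg.norm(th):.4f}")
# With the pure least-squares gain P generated from dP/dt = -P*phi*phi^T*P/m^2,
# P(t) decays like 1/t, the effective adaptation gain vanishes, and no such
# exponential bound is obtained.
```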
9.2.3  Robust Adaptive Observers
In this section we examine the stability properties of the adaptive observers developed in Chapter 5 for the plant model
\[
y = G_0(s) u \tag{9.2.25}
\]
when applied to the plant
\[
y = G_0(s)\big(1 + \mu\Delta_m(\mu s, s)\big) u \tag{9.2.26}
\]
with multiplicative perturbations. As in Section 9.2.2, the overall plant transfer function in (9.2.26) is assumed to be strictly proper with stable poles and $\lim_{\mu \to 0} \mu G_0(s)\Delta_m(\mu s, s) = 0$ for any given $s$. A minimal state-space representation of the dominant part of the plant associated with $G_0(s)$ is given by
\[
\begin{aligned}
\dot x_\alpha &= \left[ -a_p \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right] x_\alpha + b_p u, \qquad x_\alpha \in \mathbb{R}^n \\
y &= [1, 0, \ldots, 0]\, x_\alpha + \mu\eta \\
\eta &= G_0(s)\,\Delta_m(\mu s, s)\, u
\end{aligned} \tag{9.2.27}
\]
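For a concrete instance, take the toy nominal model $G_0(s) = (s+3)/(s^2+3s+2)$ used in the earlier sketches, so that $a_p = [3, 2]^{\top}$ and $b_p = [1, 3]^{\top}$. The sketch below builds the realization (9.2.27) and confirms that $C(sI - A)^{-1}b_p = G_0(s)$; the particular plant is an assumed example, not one from the text.

```python
# Observer-canonical realization of the dominant part (9.2.27) for the toy
# example G0(s) = (s+3)/(s^2+3s+2): a_p = [3, 2], b_p = [1, 3], C = [1, 0].
import numpy as np

a_p = np.array([3.0, 2.0])
b_p = np.array([[1.0], [3.0]])
n = len(a_p)

# A = [ -a_p | [I_{n-1}; 0] ]
A = np.hstack([-a_p.reshape(-1, 1),
               np.vstack([np.eye(n - 1), np.zeros((1, n - 1))])])
C = np.zeros((1, n)); C[0, 0] = 1.0

G0 = lambda s: (s + 3) / (s**2 + 3*s + 2)
for w in (0.7, 1.3, 4.0):
    s = 1j * w
    tf = (C @ np.linalg.solve(s*np.eye(n) - A, b_p))[0, 0]
    print(w, np.allclose(tf, G0(s)))     # True: the realization matches G0
```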
Because the plant is stable and $u \in \mathcal{L}_\infty$, the effect of the plant perturbation appears as a bounded output disturbance.
Let us consider the adaptive Luenberger observer designed and analyzed in Section 5.3, i.e.,
\[
\begin{aligned}
\dot{\hat x} &= \hat A(t)\hat x + \hat b_p(t) u + K(t)(y - \hat y), \qquad \hat x \in \mathbb{R}^n \\
\hat y &= [1, 0, \ldots, 0]\, \hat x
\end{aligned} \tag{9.2.28}
\]
where $\hat x$ is the estimate of $x_\alpha$,
\[
\hat A(t) = \left[ -\hat a_p \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right], \qquad K(t) = a^* - \hat a_p
\]
$\hat a_p$, $\hat b_p$ are the estimates of $a_p$, $b_p$, respectively, and $a^* \in \mathbb{R}^n$ is a constant vector chosen such that
\[
A^* = \left[ -a^* \;\middle|\; \begin{matrix} I_{n-1} \\ 0 \end{matrix} \right]
\]
is a stable matrix. It follows that the observation error $\tilde x = x_\alpha - \hat x$ satisfies
\[
\dot{\tilde x} = A^* \tilde x - \tilde b_p u + \tilde a_p y + \mu(a_p - a^*)\eta \tag{9.2.29}
\]
where $A^*$ is a stable matrix and $\tilde a_p = \hat a_p - a_p$, $\tilde b_p = \hat b_p - b_p$ are the parameter errors. The parameter vectors $a_p$, $b_p$ contain the coefficients of the denominator and numerator of $G_0(s)$, respectively, and can be estimated using the adaptive law (9.2.18) presented in Section 9.2.2 or any other adaptive law from Tables 4.2, 4.3. The stability properties of the adaptive Luenberger observer described by (9.2.28) and (9.2.18), or any adaptive law from Tables 4.2 and 4.3 based on the parametric model (9.2.17), are given by the following theorem:
Theorem 9.2.2 Assume that the input $u$ is dominantly rich of order $2n$ for the plant (9.2.26) and $G_0(s)$ has no zero-pole cancellation. The adaptive Luenberger observer consisting of equation (9.2.28) with (9.2.18), or any adaptive law from Tables 4.2 and 4.3 (with the exception of the pure least-squares algorithm), based on the parametric model (9.2.17) with $\mu = 0$, applied to the plant (9.2.26) with $\mu \neq 0$ has the following properties: There exists a $\mu^* > 0$ such that for all $\mu \in [0, \mu^*)$

(i) All signals are u.b.

(ii) The state observation error $\tilde x$ and parameter error $\tilde\theta$ converge exponentially fast to the residual set
\[
R_e = \left\{ \tilde x, \tilde\theta \;\middle|\; |\tilde x| + |\tilde\theta| \le c\mu \right\}
\]
where $c \ge 0$ is a constant independent of $\mu$.
Proof: The proof follows directly from the results of Section 9.2.2, where we have established that a dominantly rich input guarantees signal boundedness and exponential convergence of the parameter estimates to a residual set in which the parameter error $\tilde\theta$ satisfies $|\tilde\theta| \le c\mu$, provided $\mu \in [0, \mu^*)$.
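A simulation of the complete scheme gives a feel for Theorem 9.2.2. The sketch below is only an illustration under stated assumptions: the toy plant $G_0(s) = (s+3)/(s^2+3s+2)$ with parasitic perturbation $1/(\mu s + 1)$ (i.e., $\Delta_m(\mu s, s) = -s/(\mu s+1)$), the filter $\Lambda(s) = (s+2)^2$, a plain normalized gradient law standing in for (9.2.18), which is not reproduced here, an arbitrary gain $\gamma$, a four-sinusoid input, and $a^*$ placing the eigenvalues of $A^*$ at $-4$; none of these choices come from the text.

```python
# Adaptive Luenberger observer (9.2.28) with a normalized-gradient stand-in for (9.2.18),
# applied to the perturbed plant y = G0(s) u / (mu*s + 1).  Assumed toy data:
#   G0 = (s+3)/(s^2+3s+2), Lambda = (s+2)^2, a_p = [3,2], b_p = [1,3], a* = [8,16].
# Parametrization used: y = theta*^T phi, phi = [s u, u, -s y, -y]^T / Lambda,
# theta* = [1, 3, -1, -2]  (numerator coefficients and coefficients of Rp - Lambda).
import numpy as np
from scipy.signal import lsim, StateSpace

Rp, Zp, Lam = [1, 3, 2], [1, 3], [1, 4, 4]
a_star, lam_c = np.array([8.0, 16.0]), np.array([4.0, 4.0])
theta_star = np.array([1.0, 3.0, -1.0, -2.0])
A_nom = np.array([[-3.0, 1.0], [-2.0, 0.0]]); b_nom = np.array([[1.0], [3.0]])

dt = 0.005; t = np.arange(0, 300, dt)
u = sum(np.sin(w * t) for w in (0.5, 1.0, 2.0, 3.5))   # dominantly rich of order 2n = 4

def run(mu):
    # true (perturbed) plant output and nominal dominant state x_alpha
    _, y, _ = lsim((Zp, np.polymul(Rp, [mu, 1])), u, t)
    _, _, x_alpha = lsim(StateSpace(A_nom, b_nom, np.array([[1.0, 0.0]]), [[0.0]]), u, t)
    # regressor phi built from filtered u and y
    f = lambda num, sig: lsim((num, Lam), sig, t)[1]
    phi = np.column_stack([f([1, 0], u), f([1], u), -f([1, 0], y), -f([1], y)])

    theta = np.zeros(4); xh = np.zeros(2); gamma = 50.0
    xe, te = np.zeros(len(t)), np.zeros(len(t))
    for k in range(len(t)):
        eps = (y[k] - theta @ phi[k]) / (1.0 + phi[k] @ phi[k])
        theta = theta + dt * gamma * eps * phi[k]           # gradient adaptive law (stand-in)
        b_hat = theta[:2]; a_hat = theta[2:] + lam_c        # recover (a_p, b_p) estimates
        A_hat = np.array([[-a_hat[0], 1.0], [-a_hat[1], 0.0]])
        K = a_star - a_hat
        xh = xh + dt * (A_hat @ xh + b_hat * u[k] + K * (y[k] - xh[0]))
        xe[k] = np.linalg.norm(x_alpha[k] - xh); te[k] = np.linalg.norm(theta - theta_star)
    tail = slice(int(0.9 * len(t)), None)
    print(f"mu = {mu}: |x_tilde| ~ {xe[tail].mean():.4f}, |theta_tilde| ~ {te[tail].mean():.4f}")

for mu in (0.02, 0.1):
    run(mu)   # both error measures settle to small values that shrink with mu
```

The printed tail averages of $|\tilde x|$ and $|\tilde\theta|$ shrink as $\mu$ decreases, which is the qualitative behavior asserted by Theorem 9.2.2 for this assumed example.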