(8.5.33) that
$$|\tilde\theta| \le \beta_0 e^{-\beta_2 t} + \frac{\beta_1}{\beta_2}\left(\bar\eta + \sigma \sup_t |\theta|\right)$$
where $\bar\eta = \sup_t |\eta_s(t)|$. Hence, (iii) is proved by setting $c = \frac{\beta_1}{\beta_2}\max\{1, \sup_t |\theta|\}$. ✷
Theorem 8.5.2 (Switching σ) Let
$$w(t) = \sigma_s, \qquad \sigma_s = \begin{cases} 0 & \text{if } |\theta| \le M_0 \\ \sigma_0\left(\dfrac{|\theta|}{M_0} - 1\right)^{q_0} & \text{if } M_0 < |\theta| \le 2M_0 \\ \sigma_0 & \text{if } |\theta| > 2M_0 \end{cases}$$
where $q_0 \ge 1$ is any finite integer and $\sigma_0, M_0$ are design constants with $M_0 > |\theta^*|$ and $\sigma_0 > 0$. Then the adaptive law (8.5.26) guarantees that
(i) $\theta, \varepsilon \in \mathcal{L}_\infty$
(ii) $\varepsilon, \varepsilon n_s, \dot\theta \in \mathcal{S}(\eta_s^2/m^2)$
(iii) In the absence of modeling errors, i.e., when $\eta_s = 0$, property (ii) can be replaced with
(ii′) $\varepsilon, \varepsilon n_s, \dot\theta \in \mathcal{L}_2$.
(iv) In addition, if $n_s, \phi, \dot\phi \in \mathcal{L}_\infty$ and $\phi$ is PE with level $\alpha_0 > 0$ that is independent of $\eta_s$, then
(a) $\tilde\theta$ converges exponentially to the residual set
$$D_s = \left\{\tilde\theta \mid |\tilde\theta| \le c(\sigma_0 + \bar\eta)\right\}$$
where $c \in \mathcal{R}^+$ and $\bar\eta = \sup_t |\eta_s|$
(b) There exists a constant $\bar\eta^* > 0$ such that for $\bar\eta < \bar\eta^*$, the parameter error $\tilde\theta$ converges exponentially fast to the residual set
$$\bar D_s = \left\{\tilde\theta \mid |\tilde\theta| \le c\bar\eta\right\}$$
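As a computational illustration, the following Python sketch shows how the switching-σ term could be implemented. It assumes the adaptive law (8.5.26) has the gradient-with-leakage form $\dot\theta = \Gamma\varepsilon\phi - w\Gamma\theta$; the function names and the forward-Euler discretization are illustrative, not from the text.

```python
import numpy as np

def switching_sigma(theta, sigma0, M0, q0=1):
    """Switching-sigma term of Theorem 8.5.2: zero on |theta| <= M0,
    polynomial ramp on M0 < |theta| <= 2*M0, and saturated at sigma0
    for |theta| > 2*M0.  The design constants must satisfy
    M0 > |theta*|, sigma0 > 0, q0 >= 1."""
    r = np.linalg.norm(theta)
    if r <= M0:
        return 0.0
    if r <= 2.0 * M0:
        return sigma0 * (r / M0 - 1.0) ** q0
    return sigma0

def adaptive_step(theta, eps, phi, Gamma, sigma0, M0, dt, q0=1):
    """One forward-Euler step of theta_dot = Gamma*eps*phi - w*Gamma*theta."""
    w = switching_sigma(theta, sigma0, M0, q0)
    return theta + dt * (Gamma @ (eps * phi) - w * (Gamma @ theta))
```

Because $w$ vanishes whenever $|\theta| \le M_0$, the leakage introduces no bias while the estimate stays inside a ball known to contain $\theta^*$, which is what allows property (ii′) to be recovered exactly when $\eta_s = 0$.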
Proof We have
$$\sigma_s \tilde\theta^\top\theta = \sigma_s(|\theta|^2 - \theta^{*\top}\theta) \ge \sigma_s|\theta|(|\theta| - M_0 + M_0 - |\theta^*|)$$
Because $\sigma_s(|\theta| - M_0) \ge 0$ and $M_0 > |\theta^*|$, it follows that
$$\sigma_s \tilde\theta^\top\theta \ge \sigma_s|\theta|(|\theta| - M_0) + \sigma_s|\theta|(M_0 - |\theta^*|) \ge \sigma_s|\theta|(M_0 - |\theta^*|) \ge 0$$
i.e.,
$$\sigma_s|\theta| \le \frac{\sigma_s \tilde\theta^\top\theta}{M_0 - |\theta^*|} \qquad (8.5.34)$$
The inequality (8.5.27) for $\dot V$ with $w = \sigma_s$ can be written as
$$\dot V \le -\lambda_0 |e|^2 - \frac{\varepsilon^2 n_s^2}{2} - \sigma_s\tilde\theta^\top\theta + c_0\frac{\eta_s^2}{m^2} \qquad (8.5.35)$$
Because for $|\theta| = |\tilde\theta + \theta^*| > 2M_0$, the term $-\sigma_s\tilde\theta^\top\theta = -\sigma_0\tilde\theta^\top\theta \le -\frac{\sigma_0}{2}|\tilde\theta|^2 + \frac{\sigma_0}{2}|\theta^*|^2$ behaves as the equivalent fixed-σ term, we can follow the same procedure as in the proof of Theorem 8.5.1 to show the existence of a constant $V_0 > 0$ for which $\dot V \le 0$ whenever $V \ge V_0$ and conclude that $V, e, \varepsilon, \theta, \tilde\theta \in \mathcal{L}_\infty$.
Integrating both sides of (8.5.35) from $t_0$ to $t$, we obtain that $e, \varepsilon, \varepsilon n_s, \varepsilon m, \sigma_s\tilde\theta^\top\theta \in \mathcal{S}(\eta_s^2/m^2)$. From (8.5.34), it follows that
$$\sigma_s^2|\theta|^2 \le c_2\,\sigma_s\tilde\theta^\top\theta$$
for some constant $c_2 > 0$ that depends on the bound for $\sigma_0|\theta|$, and, therefore,
$$|\dot\theta|^2 \le c\left(|\varepsilon m|^2 + \sigma_s\tilde\theta^\top\theta\right), \quad \text{for some } c \in \mathcal{R}^+$$
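This bound follows directly if the adaptive law has the leakage form $\dot\theta = \Gamma\varepsilon\phi - \sigma_s\Gamma\theta$; a brief sketch, using $(a+b)^2 \le 2a^2 + 2b^2$, the normalization property $|\phi|/m \in \mathcal{L}_\infty$, and the bound $\sigma_s^2|\theta|^2 \le c_2\,\sigma_s\tilde\theta^\top\theta$ above:
$$|\dot\theta|^2 \le 2\|\Gamma\|^2|\varepsilon|^2|\phi|^2 + 2\|\Gamma\|^2\sigma_s^2|\theta|^2 \le 2\|\Gamma\|^2\frac{|\phi|^2}{m^2}|\varepsilon m|^2 + 2\|\Gamma\|^2 c_2\,\sigma_s\tilde\theta^\top\theta \le c\left(|\varepsilon m|^2 + \sigma_s\tilde\theta^\top\theta\right)$$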
Because $\varepsilon m, \sigma_s\tilde\theta^\top\theta \in \mathcal{S}(\eta_s^2/m^2)$, it follows that $\dot\theta \in \mathcal{S}(\eta_s^2/m^2)$.
The proof for part (iii) follows from (8.5.35) by setting $\eta_s = 0$, using $-\sigma_s\tilde\theta^\top\theta \le 0$, and repeating the above calculations for $\eta_s = 0$.
The proof of (iv)(a) is almost identical to that of Theorem 8.5.1 (iii) and is
omitted.
To prove (iv)(b), we follow the same arguments used in the proof of Theorem 8.5.1 (iii) to obtain the inequality
$$\begin{aligned} |\tilde\theta| &\le \beta_0 e^{-\beta_2 t} + \beta_1\int_0^t e^{-\beta_2(t-\tau)}\left(|\eta_s| + \sigma_s|\theta|\right)d\tau \\ &\le \beta_0 e^{-\beta_2 t} + \frac{\beta_1}{\beta_2}\bar\eta + \beta_1\int_0^t e^{-\beta_2(t-\tau)}\sigma_s|\theta|\,d\tau \end{aligned} \qquad (8.5.36)$$
From (8.5.34), we have
$$\sigma_s|\theta| \le \frac{1}{M_0 - |\theta^*|}\,\sigma_s\tilde\theta^\top\theta \le \frac{1}{M_0 - |\theta^*|}\,\sigma_s|\theta|\,|\tilde\theta| \qquad (8.5.37)$$
Therefore, using (8.5.37) in (8.5.36), we have
$$|\tilde\theta| \le \beta_0 e^{-\beta_2 t} + \frac{\beta_1}{\beta_2}\bar\eta + \bar\beta_1\int_0^t e^{-\beta_2(t-\tau)}\sigma_s|\theta|\,|\tilde\theta|\,d\tau \qquad (8.5.38)$$
where $\bar\beta_1 = \frac{\beta_1}{M_0 - |\theta^*|}$. Applying B-G Lemma III to (8.5.38), it follows that
$$|\tilde\theta| \le \left(\beta_0 + \frac{\beta_1}{\beta_2}\bar\eta\right) e^{-\beta_2(t-t_0)}\, e^{\bar\beta_1\int_{t_0}^t \sigma_s|\theta|\,ds} + \beta_1\bar\eta\int_{t_0}^t e^{-\beta_2(t-\tau)}\, e^{\bar\beta_1\int_\tau^t \sigma_s|\theta|\,ds}\,d\tau \qquad (8.5.39)$$
Note from (8.5.34), (8.5.35) that $\sigma_s|\theta| \in \mathcal{S}(\eta_s^2/m^2)$, i.e.,
$$\int_{t_0}^t \sigma_s|\theta|\,d\tau \le c_1\bar\eta^2(t - t_0) + c_0$$
$\forall t \ge t_0 \ge 0$ and some constants $c_0, c_1$. Therefore,
$$|\tilde\theta| \le \bar\beta_1 e^{-\bar\alpha(t-t_0)} + \bar\beta_2\bar\eta\int_{t_0}^t e^{-\bar\alpha(t-\tau)}\,d\tau \qquad (8.5.40)$$
where $\bar\alpha = \beta_2 - \bar\beta_1 c_1\bar\eta^2$ and $\bar\beta_1, \bar\beta_2 \ge 0$ are some constants that depend on $c_0$ and the constants in (8.5.39). Hence, for any $\bar\eta \in [0, \bar\eta^*)$, where $\bar\eta^* = \sqrt{\beta_2/(\bar\beta_1 c_1)}$, we have $\bar\alpha > 0$ and (8.5.40) implies that
$$|\tilde\theta| \le \frac{\bar\beta_2}{\bar\alpha}\bar\eta + c e^{-\bar\alpha(t-t_0)}$$
for some constant $c$ and for all $t \ge t_0 \ge 0$. Therefore the proof for (iv) is complete.
✷
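The definitions of $\bar\alpha$ and $\bar\eta^*$ above can be traced by substituting the integral bound on $\sigma_s|\theta|$ into the exponential factors of (8.5.39); a brief sketch:
$$e^{-\beta_2(t-t_0)}\,e^{\bar\beta_1\int_{t_0}^t \sigma_s|\theta|\,ds} \le e^{-\beta_2(t-t_0)}\,e^{\bar\beta_1\left(c_1\bar\eta^2(t-t_0) + c_0\right)} = e^{\bar\beta_1 c_0}\,e^{-\left(\beta_2 - \bar\beta_1 c_1\bar\eta^2\right)(t-t_0)}$$
so that the decay rate $\bar\alpha = \beta_2 - \bar\beta_1 c_1\bar\eta^2$ is positive exactly when $\bar\eta^2 < \beta_2/(\bar\beta_1 c_1)$, i.e., when $\bar\eta < \bar\eta^*$.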
Theorem 8.5.3 (ε-Modification) Let
$$w(t) = |\varepsilon m|\,\nu_0$$
where $\nu_0 > 0$ is a design constant. Then the adaptive law (8.5.26) with $w(t) = |\varepsilon m|\,\nu_0$ guarantees that
(i) $\theta, \varepsilon \in \mathcal{L}_\infty$
(ii) $\varepsilon, \varepsilon n_s, \dot\theta \in \mathcal{S}(\nu_0 + \eta_s^2/m^2)$
(iii) In addition, if $n_s, \phi, \dot\phi \in \mathcal{L}_\infty$ and $\phi$ is PE with level $\alpha_0 > 0$ that is independent of $\eta_s$, then $\tilde\theta$ converges exponentially to the residual set
$$D = \left\{\tilde\theta \mid |\tilde\theta| \le c(\nu_0 + \bar\eta)\right\}$$
where $c \in \mathcal{R}^+$ and $\bar\eta = \sup_t|\eta_s|$.
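In the same illustrative Python style as before (again assuming the leakage form $\dot\theta = \Gamma\varepsilon\phi - w\Gamma\theta$ for (8.5.26); names and discretization are not from the text), the ε-modification replaces the state-dependent switching gain with one proportional to $|\varepsilon m|$:

```python
import numpy as np

def eps_mod_step(theta, eps, phi, m, Gamma, nu0, dt):
    """One forward-Euler step of the adaptive law with epsilon-modification:
    the leakage gain w = |eps*m|*nu0 scales with the normalized estimation
    error, so it fades as eps*m -> 0 instead of switching off exactly."""
    w = abs(eps * m) * nu0
    return theta + dt * (Gamma @ (eps * phi) - w * (Gamma @ theta))
```

Note that, unlike the switching-σ term, $w$ never shuts off exactly; this is reflected in the $\nu_0$ term entering the mean-square bound in (ii).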
Proof Letting $w(t) = |\varepsilon m|\,\nu_0$ and using (8.5.26) in the equation for $\dot V$ given by (8.5.24), we obtain
$$\dot V \le -\nu_c\frac{e^\top L_c e}{2} - \varepsilon^2 n_s^2 + |\varepsilon m|\frac{|\eta_s|}{m} - |\varepsilon m|\,\nu_0\tilde\theta^\top\theta$$
Because $-\tilde\theta^\top\theta \le -\frac{|\tilde\theta|^2}{2} + \frac{|\theta^*|^2}{2}$, it follows that
$$\dot V \le -2\lambda_0|e|^2 - \varepsilon^2 n_s^2 - |\varepsilon m|\left(\nu_0\frac{|\tilde\theta|^2}{2} - \frac{|\eta_s|}{m} - \nu_0\frac{|\theta^*|^2}{2}\right) \qquad (8.5.41)$$
where $\lambda_0 = \frac{\nu_c\lambda_{\min}(L_c)}{4}$, which implies that for large $|e|$ or large $|\tilde\theta|$, $\dot V \le 0$. Hence, by following a similar approach as in the proof of Theorem 8.5.1, we can show the existence of a constant $V_0 > 0$ such that for $V > V_0$, $\dot V \le 0$; therefore, $V, e, \varepsilon, \theta, \tilde\theta \in \mathcal{L}_\infty$.
Because $|e| \ge \frac{|\varepsilon|}{|C_c|}$, we can write (8.5.41) as
$$\dot V \le -\lambda_0|e|^2 - \beta_0\varepsilon^2 - \varepsilon^2 n_s^2 + \alpha_0^2\varepsilon^2(1 + n_s^2) - \alpha_0^2\varepsilon^2 m^2 - |\varepsilon m|\,\nu_0\frac{|\tilde\theta|^2}{2} + |\varepsilon m|\frac{|\eta_s|}{m} + |\varepsilon m|\,\nu_0\frac{|\theta^*|^2}{2}$$
by adding and subtracting the term $\alpha_0^2\varepsilon^2 m^2 = \alpha_0^2\varepsilon^2(1 + n_s^2)$, where $\beta_0 = \frac{\lambda_0}{|C_c|^2}$ and $\alpha_0 > 0$ is an arbitrary constant. Setting $\alpha_0^2 = \min(1, \beta_0)$, we have
$$\dot V \le -\lambda_0|e|^2 - \alpha_0^2\varepsilon^2 m^2 + |\varepsilon m|\frac{|\eta_s|}{m} + |\varepsilon m|\,\nu_0\frac{|\theta^*|^2}{2}$$
By completing the squares and using the same approach as in the proof of Theorem 8.5.1, we can establish that $\varepsilon, \varepsilon n_s, \varepsilon m \in \mathcal{S}(\nu_0 + \eta_s^2/m^2)$, which, together with $|\dot\theta| \le \Gamma|\varepsilon m|\frac{|\phi|}{m} + \nu_0\Gamma|\varepsilon m|\,|\theta| \le c|\varepsilon m|$ for some $c \in \mathcal{R}^+$, implies that $\dot\theta \in \mathcal{S}(\nu_0 + \eta_s^2/m^2)$.
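One way to carry out the completion of squares mentioned here (a sketch; the grouping of constants is illustrative): with $x = |\varepsilon m|$ and $b = \frac{|\eta_s|}{m} + \nu_0\frac{|\theta^*|^2}{2}$, Young's inequality $bx \le \frac{\alpha_0^2}{2}x^2 + \frac{b^2}{2\alpha_0^2}$ and $(a+b)^2 \le 2a^2 + 2b^2$ give
$$-\alpha_0^2 x^2 + bx \le -\frac{\alpha_0^2}{2}\varepsilon^2 m^2 + \frac{1}{\alpha_0^2}\frac{\eta_s^2}{m^2} + \frac{\nu_0^2|\theta^*|^4}{4\alpha_0^2}$$
so that, after integration, the $\varepsilon^2 m^2$ term is traded against $\eta_s^2/m^2$ plus a constant of order $\nu_0^2$, which yields the $\mathcal{S}(\nu_0 + \eta_s^2/m^2)$ membership (for bounded $\nu_0$, $\nu_0^2 \le c\nu_0$).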
The proof of (iii) is very similar to the proof of Theorem 8.5.1 (iii), which can be completed by treating the terms due to $\eta_s$ and the ε-modification as bounded inputs to an e.s. linear time-varying system.
✷
Remark 8.5.1 The normalizing signal m given by (8.5.25) involves the dy-
namic term ms and the signals φ, u, y. Under some conditions, the
signals φ and/or u, y do not need to be included in the normalizing
signal. These conditions are explained as follows:
(i) If $\phi = H(s)[u, y]^\top$ where $H(s)$ has strictly proper elements that are analytic in ${\rm Re}[s] \ge -\delta_0/2$, then $\frac{\phi}{1+m_s} \in \mathcal{L}_\infty$ and therefore the term $\phi^\top\phi$ in the expression for $n_s^2$ can be dropped.
(ii) If $W(s)L(s)$ is chosen to be biproper, then $W^{-1}L^{-1}\Delta_u$, $W^{-1}L^{-1}\Delta_y$ are strictly proper and the terms $u^2, y^2$ in the expression for $n_s^2$ can be dropped.
(iii) The parametric model equation (8.5.20) can be filtered on both sides by a first order filter $\frac{f_0}{s+f_0}$, where $f_0 > \frac{\delta_0}{2}$, to obtain
$$z_f = W(s)L(s)\left(\theta^{*\top}\phi_f + \eta_f\right)$$
where $x_f = \frac{f_0}{s+f_0}x$ denotes the filtered output of the signal $x$. In this case
$$\eta_f = L^{-1}(s)W^{-1}(s)\frac{f_0}{s+f_0}\left[\Delta_u(s)u + \Delta_y(s)y + d_1\right]$$
is bounded from above by $m^2$ given by
$$m^2 = 1 + n_s^2, \quad n_s^2 = m_s, \quad \dot m_s = -\delta_0 m_s + u^2 + y^2, \quad m_s(0) = 0$$
The choice of m is therefore dependent on the expression for the mod-
eling error term η in the parametric model and the properties of the
signal vector φ.
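A minimal simulation sketch of the dynamic normalizing signal discussed in this remark (forward-Euler; the step size and signal arrays are illustrative, not from the text):

```python
import numpy as np

def normalizing_signal(u, y, delta0, dt):
    """Generate m(t) with m^2 = 1 + n_s^2, n_s^2 = m_s, where
    m_s_dot = -delta0*m_s + u^2 + y^2 and m_s(0) = 0."""
    ms = 0.0
    m = np.empty(len(u))
    for k in range(len(u)):
        ms += dt * (-delta0 * ms + u[k] ** 2 + y[k] ** 2)
        m[k] = np.sqrt(1.0 + ms)
    return m
```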
Remark 8.5.2 The assumption that the level $\alpha_0 > 0$ of PE of $\phi$ is independent of $\eta_s$ is used to guarantee that the modeling error term $\eta_s$ does not destroy or weaken the PE property of $\phi$. Therefore, the constant $\beta_2$ in the bound for the transition matrix given by (8.5.32), which is guaranteed to be greater than zero for $\eta_s \equiv 0$, is not affected by $\eta_s \neq 0$.
8.5.3 Gradient Algorithms with Leakage

As in the ideal case presented in Chapter 4, the linear parametric model with modeling error can be rewritten in the form
$$z = \theta^{*\top}\phi + \eta, \qquad \eta = \Delta_u(s)u + \Delta_y(s)y + d_1 \qquad (8.5.42)$$
where $\phi = W(s)\psi$. The estimate $\hat z$ of $z$ and the normalized estimation error $\varepsilon$ are constructed as