and the parameter estimates may be driven in a direction dictated mostly
by η. The principal idea behind the dead zone is to monitor the size of the
estimation error and adapt only when the estimation error is large relative
to the modeling error η, as shown below:
8.5. ROBUST ADAPTIVE LAWS
607
[Figure 8.6: plots of the dead zone function f(εm) versus εm, with breakpoints at ±g₀.]
Figure 8.6 Normalized dead zone functions: (a) discontinuous; (b) continuous.
We first consider the gradient algorithm for the linear parametric model
(8.5.42). We consider the same cost function as in the ideal case, i.e.,
\[
J(\theta, t) = \frac{\epsilon^2 m^2}{2}
\]
and write
\[
\dot{\theta} =
\begin{cases}
-\Gamma \nabla J(\theta) & \text{if } |\epsilon m| > g_0 > \dfrac{|\eta|}{m} \\[4pt]
0 & \text{otherwise}
\end{cases}
\tag{8.5.77}
\]
In other words, we move in the direction of steepest descent only when the
estimation error is large relative to the modeling error, i.e., when |εm| > g₀,
and switch adaptation off when εm is small, i.e., when |εm| ≤ g₀. In view of
(8.5.77) we have
\[
\dot{\theta} = \Gamma \phi (\epsilon + g), \qquad
g =
\begin{cases}
0 & \text{if } |\epsilon m| > g_0 \\
-\epsilon & \text{if } |\epsilon m| \le g_0
\end{cases}
\tag{8.5.78}
\]
To avoid any implementation problems which may arise due to the discontinuity
in (8.5.78), the dead zone function is made continuous as follows:
\[
\dot{\theta} = \Gamma \phi (\epsilon + g), \qquad
g =
\begin{cases}
\dfrac{g_0}{m} & \text{if } \epsilon m < -g_0 \\[6pt]
-\dfrac{g_0}{m} & \text{if } \epsilon m > g_0 \\[6pt]
-\epsilon & \text{if } |\epsilon m| \le g_0
\end{cases}
\tag{8.5.79}
\]
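As an illustration, the continuous dead zone law (8.5.79) can be sketched in discrete time. This is a minimal sketch, not from the text: the forward-Euler step, the function names, and the numerical values are illustrative assumptions.

```python
import numpy as np

def dead_zone_gain(eps, m, g0):
    """Continuous normalized dead zone g of (8.5.79).

    Returns g such that (eps + g) vanishes inside the dead zone
    |eps*m| <= g0 and saturates to +/- g0/m outside it.
    """
    em = eps * m
    if em < -g0:
        return g0 / m
    elif em > g0:
        return -g0 / m
    else:
        return -eps

def gradient_dead_zone_step(theta, Gamma, phi, z, m, g0, dt):
    """One forward-Euler step of theta_dot = Gamma * phi * (eps + g)."""
    eps = (z - theta @ phi) / m**2      # normalized estimation error
    g = dead_zone_gain(eps, m, g0)
    return theta + dt * (Gamma @ phi) * (eps + g)
```

Note that inside the dead zone the factor (eps + g) is exactly zero, so the parameter estimate is frozen rather than merely slowed down.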
The discontinuous and continuous dead zone functions are shown in Figure
8.6(a) and (b), respectively. Because the size of the dead zone depends on m, this dead
zone function is often referred to as the variable or relative dead zone.
Similarly, the least-squares algorithm with the dead zone becomes
\[
\dot{\theta} = P \phi (\epsilon + g)
\tag{8.5.80}
\]
where ε, g are as defined in (8.5.79) and P is given by either (8.5.71) or
(8.5.72).
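A discrete-time sketch of the least-squares version (8.5.80) follows. The updates (8.5.71) and (8.5.72) for P are outside this excerpt, so a generic forgetting-factor covariance propagation is assumed here purely for illustration; the function name and all values are hypothetical.

```python
import numpy as np

def ls_dead_zone_step(theta, P, phi, z, m, g0, beta, dt):
    """One forward-Euler step of theta_dot = P * phi * (eps + g).

    P is propagated with an assumed forgetting-factor covariance update;
    the exact (8.5.71)/(8.5.72) definitions are not in this excerpt.
    """
    eps = (z - theta @ phi) / m**2
    em = eps * m
    if em < -g0:
        g = g0 / m
    elif em > g0:
        g = -g0 / m
    else:
        g = -eps                        # inside the dead zone: eps + g = 0
    theta = theta + dt * (P @ phi) * (eps + g)
    P = P + dt * (beta * P - P @ np.outer(phi, phi) @ P / m**2)
    return theta, P
```

Inside the dead zone the estimate is frozen while the covariance continues to evolve, which is one design choice; implementations may also freeze P.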
The dead zone modification can also be incorporated in the integral adaptive
law (8.5.47). The principal idea behind the dead zone remains the same
as before, i.e., shut off adaptation when the normalized estimation error is
small relative to the modeling error. However, for the integral adaptive law,
the shut-off process is no longer based on a pointwise-in-time comparison of
the normalized estimation error and the a priori bound on the normalized
modeling error. Instead, the decision to shut off adaptation is based on the
comparison of the L₂δ norms of certain signals as shown below:
We consider the same integral cost
\[
J(\theta, t) = \frac{1}{2} \int_0^t e^{-\beta(t-\tau)}
\frac{\left[ z(\tau) - \theta^\top(t)\phi(\tau) \right]^2}{m^2(\tau)} \, d\tau
\]
as in the ideal case and write
\[
\dot{\theta} =
\begin{cases}
-\Gamma \nabla J(\theta) & \text{if } \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} > g_0 \ge \sup_t \left\| \left( \tfrac{\eta}{m} \right)_t \right\|_{2\beta} + \nu \\[4pt]
0 & \text{otherwise}
\end{cases}
\tag{8.5.81}
\]
where β > 0 is the forgetting factor, ν > 0 is a small design constant,
\[
\epsilon(t, \tau) = \frac{z(\tau) - \theta^\top(t)\phi(\tau)}{m^2(\tau)}
\]
and
\[
\|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} = \left[ \int_0^t e^{-\beta(t-\tau)} \epsilon^2(t,\tau)\, m^2(\tau)\, d\tau \right]^{1/2}
\]
is implemented as
\[
\|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} = \left( r_0 + 2\theta^\top Q + \theta^\top R \theta \right)^{1/2}
\tag{8.5.82}
\]
\[
\dot{r}_0 = -\beta r_0 + \frac{z^2}{m^2}, \qquad r_0(0) = 0
\]
where Q, R are defined in the integral adaptive law given by (8.5.47).
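The identity behind the implementation (8.5.82) is that the weighted norm is quadratic in the current θ(t), so it can be assembled from the scalar r₀, the vector Q, and the matrix R, all generated by filters that do not depend on θ. A discrete-time sketch of this identity, with a forward-Euler discretization and hypothetical sampled signals:

```python
import numpy as np

def norm_via_r0_Q_R(zs, phis, ms, theta, beta, dt):
    """Assemble ||eps(t,.)m(.)||_{2 beta} from r0, Q, R as in (8.5.82).

    r0, Q, R obey the filters r0' = -beta*r0 + z^2/m^2,
    Q' = -beta*Q - z*phi/m^2, R' = -beta*R + phi*phi'/m^2,
    here discretized with forward-Euler steps of size dt.
    """
    n = len(phis[0])
    r0, Q, R = 0.0, np.zeros(n), np.zeros((n, n))
    for z, phi, m in zip(zs, phis, ms):
        r0 = r0 + dt * (-beta * r0 + z**2 / m**2)
        Q = Q + dt * (-beta * Q - z * phi / m**2)
        R = R + dt * (-beta * R + np.outer(phi, phi) / m**2)
    return np.sqrt(max(r0 + 2 * theta @ Q + theta @ R @ theta, 0.0))
```

Because each filter is linear in its input, r₀ + 2θᵀQ + θᵀRθ coincides exactly with the same filter applied directly to (z − θᵀφ)²/m² for the frozen current θ.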
In view of (8.5.81) we have
\[
\dot{\theta} = -\Gamma (R\theta + Q - g)
\]
\[
\dot{R} = -\beta R + \frac{\phi \phi^\top}{m^2}, \quad R(0) = 0, \qquad
\dot{Q} = -\beta Q - \frac{z \phi}{m^2}, \quad Q(0) = 0
\tag{8.5.83}
\]
where
\[
g =
\begin{cases}
0 & \text{if } \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} > g_0 \\
R\theta + Q & \text{otherwise}
\end{cases}
\]
To avoid any implementation problems which may arise due to the discontinuity
in g, the dead zone function is made continuous as follows:
\[
g =
\begin{cases}
0 & \text{if } \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} > 2g_0 \\[4pt]
(R\theta + Q)\left( 2 - \dfrac{\|\epsilon(t,\cdot)m(\cdot)\|_{2\beta}}{g_0} \right) & \text{if } g_0 < \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} \le 2g_0 \\[8pt]
R\theta + Q & \text{if } \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} \le g_0
\end{cases}
\tag{8.5.84}
\]
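For illustration, the continuous gate (8.5.84) can be coded directly; the standalone-function form and the argument names are assumptions, not the text's notation. Inside the dead zone g equals Rθ + Q, so the update θ̇ = −Γ(Rθ + Q − g) of (8.5.83) vanishes.

```python
import numpy as np

def integral_dead_zone_g(R, Q, theta, norm_eps_m, g0):
    """Continuous dead zone term g of (8.5.84).

    norm_eps_m stands for ||eps(t,.)m(.)||_{2 beta}, e.g. computed
    via (8.5.82). For norm_eps_m <= g0 the returned g cancels
    R theta + Q exactly, freezing the estimate.
    """
    v = R @ theta + Q
    if norm_eps_m > 2 * g0:
        return np.zeros_like(v)            # full adaptation
    elif norm_eps_m > g0:
        return v * (2 - norm_eps_m / g0)   # linear blend on (g0, 2*g0]
    else:
        return v                           # dead zone: update vanishes
```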
The following theorem summarizes the stability properties of the adaptive
laws developed above.
Theorem 8.5.7 The adaptive laws (8.5.79) and (8.5.80) with P given by
(8.5.71) or (8.5.72) and the integral adaptive law (8.5.83) with g given by
(8.5.84) guarantee the following properties:
(i) ε, εnₛ, θ, θ̇ ∈ L∞.
(ii) ε, εnₛ, θ̇ ∈ S(g₀ + η²/m²).
(iii) θ̇ ∈ L₂ ∩ L₁.
(iv) lim_{t→∞} θ(t) = θ̄, where θ̄ is a constant vector.
(v) If nₛ, φ ∈ L∞ and φ is PE with level α₀ > 0 independent of η, then θ̃(t)
converges exponentially to the residual set
\[
D_d = \left\{ \tilde{\theta} \in \mathbb{R}^n \;\middle|\; |\tilde{\theta}| \le c\,(g_0 + \bar{\eta}) \right\}
\]
where η̄ = sup_t |η(t)|/m(t) and c ≥ 0 is a constant.
Proof Adaptive Law (8.5.79) We consider the function
\[
V(\tilde{\theta}) = \frac{\tilde{\theta}^\top \Gamma^{-1} \tilde{\theta}}{2}
\]
whose time derivative V̇ along the solution of (8.5.79), where εm² = −θ̃ᵀφ + η, is
given by
\[
\dot{V} = \tilde{\theta}^\top \phi (\epsilon + g) = -(\epsilon m^2 - \eta)(\epsilon + g)
\tag{8.5.85}
\]
Now
\[
(\epsilon m^2 - \eta)(\epsilon + g) =
\begin{cases}
(\epsilon m + g_0)^2 - \left( g_0 + \dfrac{\eta}{m} \right)(\epsilon m + g_0) > 0 & \text{if } \epsilon m < -g_0 \\[6pt]
(\epsilon m - g_0)^2 + \left( g_0 - \dfrac{\eta}{m} \right)(\epsilon m - g_0) > 0 & \text{if } \epsilon m > g_0 \\[6pt]
0 & \text{if } |\epsilon m| \le g_0
\end{cases}
\tag{8.5.86}
\]
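As a sanity check, and not part of the proof, the nonnegativity claimed in (8.5.86) can be spot-checked numerically with the continuous g of (8.5.79); the sampled values below are arbitrary illustrative choices.

```python
import numpy as np

def product_term(eps, eta, m, g0):
    """(eps*m^2 - eta)*(eps + g) with the continuous dead zone g of (8.5.79)."""
    em = eps * m
    if em < -g0:
        g = g0 / m
    elif em > g0:
        g = -g0 / m
    else:
        g = -eps                       # inside the dead zone: eps + g = 0
    return (eps * m**2 - eta) * (eps + g)
```

Sampling ε and any modeling error satisfying |η/m| < g₀ never produces a negative value, matching the case analysis of (8.5.86).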
Hence, (εm² − η)(ε + g) ≥ 0, ∀t ≥ 0 and V̇ ≤ 0, which implies that V, θ ∈ L∞ and
(εm² − η)(ε + g) ∈ L₁. Furthermore, θ ∈ L∞ implies that θ̃, ε, εnₛ ∈ L∞. From
(8.5.79) we have
\[
\dot{\theta}^\top \dot{\theta} = \frac{\phi^\top \Gamma \Gamma \phi}{m^2}\, (\epsilon + g)^2 m^2
\tag{8.5.87}
\]
However,
\[
(\epsilon + g)^2 m^2 = (\epsilon m + g m)^2 =
\begin{cases}
(\epsilon m + g_0)^2 & \text{if } \epsilon m < -g_0 \\
(\epsilon m - g_0)^2 & \text{if } \epsilon m > g_0 \\
0 & \text{if } |\epsilon m| \le g_0
\end{cases}
\]
which, together with (8.5.86), implies that
\[
0 \le (\epsilon + g)^2 m^2 \le (\epsilon m^2 - \eta)(\epsilon + g), \qquad \text{i.e., } (\epsilon + g)m \in \mathcal{L}_2
\]
Hence, from (8.5.87) and φ/m ∈ L∞ we have that θ̇ ∈ L₂. Equation (8.5.85) can also
be written as
\[
\dot{V} \le -\epsilon^2 m^2 + |\epsilon m| \frac{|\eta|}{m} + |\epsilon m|\, g_0 + \frac{|\eta|}{m}\, g_0
\]
by using |g| ≤ g₀/m. Then by completing the squares we have
\[
\dot{V} \le -\frac{\epsilon^2 m^2}{2} + \frac{|\eta|^2}{m^2} + g_0^2 + \frac{|\eta|}{m}\, g_0
\le -\frac{\epsilon^2 m^2}{2} + \frac{3}{2}\frac{|\eta|^2}{m^2} + \frac{3}{2} g_0^2
\]
which, together with V ∈ L∞, implies that εm ∈ S(g₀² + η²/m²). Because m² = 1 + nₛ²,
we have ε, εnₛ ∈ S(g₀² + η²/m²). Because |g| ≤ g₀/m ≤ c₁g₀, we can show that
|θ̇| ≤ c₁(|εm| + g₀), which implies that θ̇ ∈ S(g₀² + η²/m²) due to εm ∈ S(g₀² + η²/m²).
Because g₀ is a constant, we can absorb it in one of the constants in the definition of m.s.s. and
write εm, θ̇ ∈ S(g₀ + η²/m²) to preserve compatibility with the other modifications.
To show that lim_{t→∞} θ = θ̄, we use (8.5.85) and the fact that (εm² − η)(ε + g) ≥ 0
to obtain
\[
\begin{aligned}
\dot{V} &= -(\epsilon m^2 - \eta)(\epsilon + g) = -\left| \epsilon m - \frac{\eta}{m} \right| |\epsilon + g|\, m \\
&\le -\left( |\epsilon m| - \frac{|\eta|}{m} \right) |\epsilon + g|\, m \\
&\le -\left( g_0 - \frac{|\eta|}{m} \right) |\epsilon + g|\, m \qquad \left( \text{because } |\epsilon + g| = 0 \text{ if } |\epsilon m| \le g_0 \right)
\end{aligned}
\]
Because η/m ∈ L∞ and g₀ > |η|/m, we integrate both sides of the above inequality and
use the fact that V ∈ L∞ to obtain that (ε + g)m ∈ L₁. Then from the adaptive
law (8.5.79) and the fact that φ/m ∈ L∞, it follows that |θ̃̇| ∈ L₁, which, in turn,
implies that lim_{t→∞} ∫₀ᵗ θ̇ dτ exists and, therefore, θ converges to θ̄.
To show that θ̃ converges exponentially to the residual set D_d for persistently
exciting φ, we follow the same steps as in the case of the fixed σ-modification or
the ε₁-modification. We express (8.5.79) in terms of the parameter error
\[
\dot{\tilde{\theta}} = -\Gamma \frac{\phi \phi^\top}{m^2}\, \tilde{\theta} + \Gamma \frac{\phi \eta}{m^2} + \Gamma \phi g
\tag{8.5.88}
\]
where g satisfies |g| ≤ g₀/m ≤ c₁g₀ for some constant c₁ ≥ 0. Since the homogeneous
part of (8.5.88) is e.s. when φ is PE, a property that is established in Section 4.8.3,
the exponential convergence of θ̃ to the residual set D_d can be established by
repeating the same steps as in the proof of Theorem 8.5.4 (iii).
Adaptive Law (8.5.80) The proof for the adaptive law (8.5.80) is very
similar to that of (8.5.79) presented above and is omitted.
Adaptive Law (8.5.83) with g Given by (8.5.84) This adaptive law may
be rewritten as
\[
\dot{\theta} = \sigma_e \Gamma \int_0^t e^{-\beta(t-\tau)} \epsilon(t, \tau)\, \phi(\tau)\, d\tau
\tag{8.5.89}
\]
where
\[
\sigma_e =
\begin{cases}
1 & \text{if } \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} > 2g_0 \\[4pt]
\dfrac{\|\epsilon(t,\cdot)m(\cdot)\|_{2\beta}}{g_0} - 1 & \text{if } g_0 < \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} \le 2g_0 \\[8pt]
0 & \text{if } \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} \le g_0
\end{cases}
\tag{8.5.90}
\]
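The scalar switching factor σₑ of (8.5.90) is straightforward to implement; a minimal sketch, with the argument names as assumptions:

```python
def sigma_e(norm_eps_m, g0):
    """Continuous switching factor sigma_e of (8.5.90).

    Scales the integral update of (8.5.89): 0 inside the dead zone
    (norm <= g0), ramping linearly up to 1 on (g0, 2*g0], and 1 beyond.
    """
    if norm_eps_m > 2 * g0:
        return 1.0
    elif norm_eps_m > g0:
        return norm_eps_m / g0 - 1.0
    else:
        return 0.0
```

The linear ramp makes σₑ continuous at both breakpoints g₀ and 2g₀, which is exactly what distinguishes (8.5.84) from the discontinuous gate.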
Once again we consider the positive definite function V(θ̃) = θ̃ᵀΓ⁻¹θ̃/2 whose time
derivative along the solution of (8.5.83) is given by
\[
\dot{V} = \sigma_e\, \tilde{\theta}^\top(t) \int_0^t e^{-\beta(t-\tau)} \epsilon(t, \tau)\, \phi(\tau)\, d\tau
\]
Using θ̃ᵀ(t)φ(τ) = −ε(t, τ)m²(τ) + η(τ), we obtain
\[
\dot{V} = -\sigma_e \int_0^t e^{-\beta(t-\tau)} \epsilon(t,\tau)\, m(\tau) \left[ \epsilon(t,\tau)\, m(\tau) - \frac{\eta(\tau)}{m(\tau)} \right] d\tau
\tag{8.5.91}
\]
Therefore, by using the Schwartz inequality, we have
\[
\dot{V} \le -\sigma_e \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} \left[ \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta} - \left\| \left( \frac{\eta}{m} \right)_t \right\|_{2\beta} \right]
\]
From the definition of σₑ and the fact that g₀ ≥ sup_t ‖(η/m)_t‖_{2β} + ν, it follows
that V̇ ≤ 0, which implies that V, θ, ε ∈ L∞ and that lim_{t→∞} V = V∞ exists. Also θ ∈ L∞
implies that εnₛ ∈ L∞. Using g₀ ≥ sup_t ‖(η/m)_t‖_{2β} + ν in (8.5.91), we obtain
\[
\dot{V} \le -\nu \sigma_e \|\epsilon(t,\cdot)m(\cdot)\|_{2\beta}
\]
Now integrating on both sides of the above inequality and using V ∈ L∞, we have