the adaptive law (6.5.5) guarantees that $\epsilon, \tilde{k} \in \mathcal{L}_\infty$ and $\epsilon, \epsilon n_s, \dot{\tilde{k}} \in \mathcal{L}_2$ independent of the boundedness of $x$. The normalized estimation error $\epsilon$ is related to $\tilde{k}x$ through the equation
$$\epsilon = z - \hat{z} - \frac{1}{s+a_m}\epsilon n_s^2 = \frac{1}{s+a_m}\left(-\tilde{k}x - \epsilon n_s^2\right) \qquad (6.5.9)$$
where $n_s^2 = x^2$. Using (6.5.9) and $\epsilon n_s^2 = \epsilon n_s x$ in (6.5.8), we obtain
$$x = \epsilon + \frac{1}{s+a_m}\epsilon n_s^2 = \epsilon + \frac{1}{s+a_m}\epsilon n_s x \qquad (6.5.10)$$
Because $\epsilon \in \mathcal{L}_\infty \cap \mathcal{L}_2$ and $\epsilon n_s \in \mathcal{L}_2$, the boundedness of $x$ is established by taking absolute values on each side of (6.5.10) and applying the B-G Lemma. We leave this approach as an exercise for the reader.
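One possible route for this exercise (a sketch only, using the properties just listed): squaring (6.5.10), bounding $|\epsilon(t)| \le c$, and applying the Schwarz inequality to the convolution term gives
$$|x(t)|^2 \le c + \frac{2}{a_m}\int_0^t e^{-a_m(t-\tau)}|\epsilon n_s(\tau)|^2\,|x(\tau)|^2\,d\tau$$
Because $\epsilon n_s \in \mathcal{L}_2$, the gain $\frac{2}{a_m}|\epsilon n_s|^2$ has finite integral, and the B-G Lemma (in the form used in Step 3 below) implies that $|x|^2$, and hence $x$, is bounded.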
A more elaborate, yet more systematic, method, which we will also follow in the higher-order case, involves the use of the properties of the $\mathcal{L}_{2\delta}$ norm and the B-G Lemma. We present this method below and use it to illustrate the approach taken in the higher-order case considered in the sections that follow.
Step 1. Express the plant output $y$ (or state $x$) and plant input $u$ in terms of the parameter error $\tilde{k}$. We have
$$x = \frac{1}{s+a_m}(-\tilde{k}x), \qquad u = (s-a)x = \frac{(s-a)}{s+a_m}(-\tilde{k}x) \qquad (6.5.11)$$
The above integral equations may be expressed in the form of algebraic inequalities by using the properties of the $\mathcal{L}_{2\delta}$ norm $\|(\cdot)_t\|_{2\delta}$, which for simplicity we denote by $\|\cdot\|$. We have
$$\|x\| \le c\|\tilde{k}x\|, \qquad \|u\| \le c\|\tilde{k}x\| \qquad (6.5.12)$$
where $c \ge 0$ is a generic symbol used to denote any finite constant. Let us now define
$$m_f^2 = 1 + \|x\|^2 + \|u\|^2 \qquad (6.5.13)$$
The significance of the signal $m_f$ is that it bounds $|x|$, $|\dot{x}|$, and $|u|$ from above provided $k \in \mathcal{L}_\infty$. Therefore, if we establish that $m_f \in \mathcal{L}_\infty$, then the boundedness of all signals follows. The boundedness of $|x|/m_f$, $|\dot{x}|/m_f$, $|u|/m_f$ follows
from $\tilde{k} \in \mathcal{L}_\infty$ and the properties of the $\mathcal{L}_{2\delta}$ norm given by Lemma 3.3.2, i.e., from (6.5.11) we have
$$\frac{|x(t)|}{m_f} \le \left\|\frac{1}{s+a_m}\right\|_{2\delta}|\tilde{k}|\,\frac{\|x\|}{m_f} \le c$$
and
$$\frac{|\dot{x}(t)|}{m_f} \le a_m\frac{|x(t)|}{m_f} + |\tilde{k}|\,\frac{|x(t)|}{m_f} \le c$$
Similarly,
$$\frac{|u(t)|}{m_f} \le |k|\,\frac{|x|}{m_f} \le c$$
Because of the normalizing properties of mf , we refer to it as the fictitious
normalizing signal.
It follows from (6.5.12) and (6.5.13) that
$$m_f^2 \le 1 + c\|\tilde{k}x\|^2 \qquad (6.5.14)$$
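As a purely numerical illustration of the quantities used in this step (not part of the proof), the exponentially weighted norm $\|x_t\|_{2\delta} = \big(\int_0^t e^{-\delta(t-\tau)}x^2(\tau)\,d\tau\big)^{1/2}$ and the fictitious normalizing signal $m_f$ of (6.5.13) can be approximated from sampled data as sketched below; the signals, the value of $\delta$, and the time grid are hypothetical choices.

```python
import numpy as np

def l2delta_norm(x, t, delta):
    """Rectangle-rule approximation of the exponentially weighted norm
    ||x_t||_{2 delta} = ( int_0^t exp(-delta*(t - tau)) x(tau)^2 d tau )^(1/2)."""
    dt = t[1] - t[0]
    weighted = np.exp(-delta * (t[-1] - t)) * x**2
    return np.sqrt(np.sum(weighted) * dt)

# Hypothetical sampled signals (illustration only)
t = np.linspace(0.0, 10.0, 2001)
x = np.exp(-0.5 * t) * np.sin(3.0 * t)   # stand-in for the plant state
u = -1.5 * x                             # stand-in for u = -k*x
delta = 0.1

# Fictitious normalizing signal of (6.5.13): m_f^2 = 1 + ||x||^2 + ||u||^2
m_f = np.sqrt(1.0 + l2delta_norm(x, t, delta)**2 + l2delta_norm(u, t, delta)**2)
print(m_f)
```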
Step 2. Use the Swapping Lemma and properties of the $\mathcal{L}_{2\delta}$ norm to upper bound $\|\tilde{k}x\|$ with terms that are guaranteed by the adaptive law to have finite $\mathcal{L}_2$ gains. We use the Swapping Lemma A.2 given in Appendix A to write the identity
$$\tilde{k}x = \left(1 - \frac{\alpha_0}{s+\alpha_0}\right)\tilde{k}x + \frac{\alpha_0}{s+\alpha_0}\tilde{k}x = \frac{1}{s+\alpha_0}\left(\dot{\tilde{k}}x + \tilde{k}\dot{x}\right) + \frac{\alpha_0}{s+\alpha_0}\tilde{k}x$$
where $\alpha_0 > 0$ is an arbitrary constant. Since, from (6.5.11), $\tilde{k}x = -(s+a_m)x$, we have
$$\tilde{k}x = \frac{1}{s+\alpha_0}\left(\dot{\tilde{k}}x + \tilde{k}\dot{x}\right) - \alpha_0\frac{(s+a_m)}{(s+\alpha_0)}x \qquad (6.5.15)$$
which implies that
$$\|\tilde{k}x\| \le \left\|\frac{1}{s+\alpha_0}\right\|_{\infty\delta}\left(\|\dot{\tilde{k}}x\| + \|\tilde{k}\dot{x}\|\right) + \alpha_0\left\|\frac{s+a_m}{s+\alpha_0}\right\|_{\infty\delta}\|x\|$$
For $\alpha_0 > 2a_m > \delta$, we have $\left\|\frac{1}{s+\alpha_0}\right\|_{\infty\delta} = \frac{2}{2\alpha_0-\delta} < \frac{2}{\alpha_0}$; therefore,
$$\|\tilde{k}x\| \le \frac{2}{\alpha_0}\left(\|\dot{\tilde{k}}x\| + \|\tilde{k}\dot{x}\|\right) + \alpha_0 c\|x\|$$
where $c = \left\|\frac{s+a_m}{s+\alpha_0}\right\|_{\infty\delta}$. Since $\frac{x}{m_f}, \frac{\dot{x}}{m_f} \in \mathcal{L}_\infty$, it follows that
$$\|\tilde{k}x\| \le \frac{c}{\alpha_0}\left(\|\dot{\tilde{k}}m_f\| + \|\tilde{k}m_f\|\right) + \alpha_0 c\|x\| \qquad (6.5.16)$$
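For completeness, the key step in the swapping identity used at the beginning of this step is elementary:
$$\left(1 - \frac{\alpha_0}{s+\alpha_0}\right)\tilde{k}x = \frac{s}{s+\alpha_0}\,\tilde{k}x = \frac{1}{s+\alpha_0}\frac{d}{dt}\big(\tilde{k}x\big) = \frac{1}{s+\alpha_0}\big(\dot{\tilde{k}}x + \tilde{k}\dot{x}\big)$$
(modulo exponentially decaying initial-condition terms), which is how the derivative terms $\dot{\tilde{k}}x$ and $\tilde{k}\dot{x}$ enter (6.5.15) and (6.5.16).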
Equation (6.5.16) is independent of the adaptive law used to update $k(t)$. The term $\frac{c}{\alpha_0}\|\dot{\tilde{k}}m_f\|$ in (6.5.16) is "small" because $\dot{k} \in \mathcal{L}_2$ (guaranteed by any one of the adaptive laws (6.5.5) to (6.5.7)), whereas the term $\frac{c}{\alpha_0}\|\tilde{k}m_f\|$ can be made small by choosing $\alpha_0$ large but finite. A large $\alpha_0$, however, may make $\alpha_0 c\|x\|$ large unless $\|x\|$ is also small in some sense. We establish the smallness of the regulation error $x$ by exploiting its relationship with the normalized estimation error $\epsilon$. This relationship depends on the specific adaptive law used. For example, for the adaptive law (6.5.5), which is based on the SPR-Lyapunov design approach, we have established that
$$x = \epsilon + \frac{1}{s+a_m}\epsilon n_s^2$$
which together with $|\epsilon n_s^2| \le |\epsilon n_s|\,\frac{|x|}{m_f}\,m_f \le c\,|\epsilon n_s|\,m_f$ imply that
$$\|x\| \le \|\epsilon\| + c\|\epsilon n_s m_f\|$$
hence,
$$\|\tilde{k}x\| \le \frac{c}{\alpha_0}\left(\|\dot{\tilde{k}}m_f\| + \|\tilde{k}m_f\|\right) + \alpha_0 c\|\epsilon\| + \alpha_0 c\|\epsilon n_s m_f\| \qquad (6.5.17)$$
Similarly, for the gradient or least-squares algorithms, we have
$$x = \epsilon m^2 + \frac{1}{s+a_m}\dot{k}\phi \qquad (6.5.18)$$
obtained by using the equation
$$\frac{1}{s+a_m}(kx) = k\phi - \frac{1}{s+a_m}(\dot{k}\phi)$$
that follows from Swapping Lemma A.1 together with the equation for $m^2$ in (6.5.6). Equation (6.5.18) implies that
$$\|x\| \le \|\epsilon\| + \|\epsilon n_s^2\| + c\|\dot{k}\phi\|$$
Because $n_s^2 = \phi^2$ and $\phi = \frac{1}{s+a_m}x$, we have $|\phi(t)| \le c\|x\|$, which implies that $\frac{\phi}{m_f} \in \mathcal{L}_\infty$ and, therefore,
$$\|x\| \le \|\epsilon\| + \|\epsilon n_s m_f\| + c\|\dot{k}m_f\|$$
Substituting for $\|x\|$ in (6.5.16), we obtain the same expression for $\|\tilde{k}x\|$ as in (6.5.17).
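For completeness, the identity attributed to Swapping Lemma A.1 can also be checked directly: with $\phi = \frac{1}{s+a_m}x$, i.e., $\dot{\phi} = -a_m\phi + x$, we have
$$\frac{d}{dt}(k\phi) + a_m(k\phi) = \dot{k}\phi + k\dot{\phi} + a_m k\phi = \dot{k}\phi + kx$$
so that $k\phi = \frac{1}{s+a_m}(kx) + \frac{1}{s+a_m}(\dot{k}\phi)$, modulo exponentially decaying initial-condition terms.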
Step 3. Use the B-G Lemma to establish boundedness. From (6.5.14) and (6.5.17), we obtain
$$m_f^2 \le 1 + \alpha_0^2 c\|\epsilon\|^2 + \frac{c}{\alpha_0^2}\left(\|\dot{\tilde{k}}m_f\|^2 + \|\tilde{k}m_f\|^2\right) + c\alpha_0^2\|\epsilon n_s m_f\|^2 \qquad (6.5.19)$$
by using the fact that $\epsilon \in \mathcal{L}_\infty \cap \mathcal{L}_2$. We can express (6.5.19) as
$$m_f^2 \le 1 + \alpha_0^2 c + \frac{c}{\alpha_0^2}\|m_f\|^2 + c\alpha_0^2\|\tilde{g}m_f\|^2 \qquad (6.5.20)$$
where $\tilde{g}^2 = |\epsilon n_s|^2 + \frac{|\dot{\tilde{k}}|^2}{\alpha_0^4}$. Because the adaptive laws guarantee that $\epsilon n_s, \dot{\tilde{k}} \in \mathcal{L}_2$, it follows that $\tilde{g} \in \mathcal{L}_2$. Using the definition of the $\mathcal{L}_{2\delta}$ norm, inequality (6.5.20) may be rewritten as
$$m_f^2 \le 1 + c\alpha_0^2 + c\int_0^t e^{-\delta(t-\tau)}\left[\alpha_0^2\tilde{g}^2(\tau) + \frac{1}{\alpha_0^2}\right]m_f^2(\tau)\,d\tau$$
Applying the B-G Lemma III, we obtain
$$m_f^2 \le (1 + c\alpha_0^2)\,e^{-\delta(t-t_0)}\Phi(t,t_0) + (1 + c\alpha_0^2)\,\delta\int_{t_0}^t e^{-\delta(t-\tau)}\Phi(t,\tau)\,d\tau$$
where
$$\Phi(t,\tau) = e^{\frac{c}{\alpha_0^2}(t-\tau)}\,e^{c\int_\tau^t\alpha_0^2\tilde{g}^2(\sigma)\,d\sigma}$$
Choosing $\alpha_0$ so that $\frac{c}{\alpha_0^2} \le \frac{\delta}{2}$, $\alpha_0 > 2a_m$, and using $\tilde{g} \in \mathcal{L}_2$, it follows that $m_f \in \mathcal{L}_\infty$. Because $m_f$ bounds $x, \dot{x}, u$ from above, it follows that all signals in the closed-loop adaptive system are bounded.
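To see why this choice of $\alpha_0$ yields $m_f \in \mathcal{L}_\infty$, note that $\frac{c}{\alpha_0^2} \le \frac{\delta}{2}$ and $\tilde{g} \in \mathcal{L}_2$ imply
$$\Phi(t,\tau) \le e^{\frac{\delta}{2}(t-\tau)}\,e^{c\alpha_0^2\int_0^\infty\tilde{g}^2(\sigma)\,d\sigma} \le c_1\,e^{\frac{\delta}{2}(t-\tau)}$$
for some finite constant $c_1$, so the first term in the B-G bound is bounded by $c_1(1 + c\alpha_0^2)$ and the integral term by $2c_1(1 + c\alpha_0^2)$, so that $m_f^2$ is bounded.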
Step 4. Establish convergence of the regulation error to zero. For the adaptive law (6.5.5), it follows from (6.5.9), (6.5.10) that $x \in \mathcal{L}_2$ and from (6.5.8) that $\dot{x} \in \mathcal{L}_\infty$. Hence, using Lemma 3.2.5, we have $x(t) \to 0$ as $t \to \infty$. For the adaptive law (6.5.6) or (6.5.7), we have from (6.5.18) that $x \in \mathcal{L}_2$ and from (6.5.8) that $\dot{x} \in \mathcal{L}_\infty$; hence, $x(t) \to 0$ as $t \to \infty$.
6.5.2  Example: Adaptive Tracking
Let us consider the tracking problem defined in Section 6.2.2 for the first-order plant
$$\dot{x} = ax + bu \qquad (6.5.21)$$
where $a, b$ are unknown (with $b \neq 0$). The control law
$$u = -k^*x + l^*r \qquad (6.5.22)$$
where
$$k^* = \frac{a_m + a}{b}, \qquad l^* = \frac{b_m}{b} \qquad (6.5.23)$$
guarantees that all signals in the closed-loop plant are bounded and the plant state $x$ converges exponentially to the state $x_m$ of the reference model
$$x_m = \frac{b_m}{s+a_m}\,r \qquad (6.5.24)$$
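As a simple numerical illustration (with hypothetical values), take $a = 1$, $b = 2$, $a_m = 2$, $b_m = 2$; then (6.5.23) gives $k^* = \frac{a_m + a}{b} = 1.5$ and $l^* = \frac{b_m}{b} = 1$, and the closed-loop plant becomes $\dot{x} = -2x + 2r$, which matches the reference model (6.5.24).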
Because $a, b$ are unknown, we replace (6.5.22) with
$$u = -k(t)x + l(t)r \qquad (6.5.25)$$
where $k(t), l(t)$ are the on-line estimates of $k^*, l^*$, respectively. We design the adaptive laws for updating $k(t), l(t)$ by first developing appropriate parametric models for $k^*, l^*$ of the form studied in Chapter 4. We then choose the adaptive laws from Tables 4.1 to 4.5 of Chapter 4 based on the parametric model satisfied by $k^*, l^*$.
As in Section 6.2.2, if we add and subtract the desired input $-bk^*x + bl^*r$ in the plant equation (6.5.21) and use (6.5.23) to eliminate the unknown $a$, we obtain
$$\dot{x} = -a_m x + b_m r + b(u + k^*x - l^*r)$$
which together with (6.5.24) and the definition of $e_1 = x - x_m$ gives
$$e_1 = \frac{b}{s+a_m}(u + k^*x - l^*r) \qquad (6.5.26)$$
Equation (6.5.26) can also be rewritten as
$$e_1 = b\left(\theta^{*\top}\phi + u_f\right) \qquad (6.5.27)$$
where $\theta^* = [k^*, l^*]^\top$, $\phi = \frac{1}{s+a_m}[x, -r]^\top$, $u_f = \frac{1}{s+a_m}u$. Both equations are in the form of the parametric models given in Table 4.4 of Chapter 4. We can use them to choose any adaptive law from Table 4.4. As an example, let us choose the gradient algorithm listed in Table 4.4(D) that does not require the knowledge of the sign of $b$. We have
$$\dot{k} = N(w)\gamma_1\epsilon\phi_1, \qquad \dot{l} = N(w)\gamma_2\epsilon\phi_2, \qquad \dot{\hat{b}} = N(w)\gamma\epsilon\xi$$
$$N(w) = w^2\cos w, \qquad w = w_0 + \frac{\hat{b}^2}{2\gamma}, \qquad \dot{w}_0 = \epsilon^2 m^2, \quad w_0(0) = 0$$
$$\epsilon = \frac{e_1 - \hat{e}_1}{m^2}, \qquad \hat{e}_1 = N(w)\hat{b}\xi \qquad\qquad (6.5.28)$$
$$\xi = k\phi_1 + l\phi_2 + u_f, \qquad u_f = \frac{1}{s+a_m}u$$
$$\phi_1 = \frac{1}{s+a_m}x, \qquad \phi_2 = -\frac{1}{s+a_m}r$$
$$m^2 = 1 + n_s^2, \qquad n_s^2 = \phi_1^2 + \phi_2^2 + u_f^2$$
$$\gamma_1,\ \gamma_2,\ \gamma > 0$$
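A minimal discrete-time (Euler) simulation sketch of the control law (6.5.25) with the adaptive law (6.5.28) is given below; the plant parameters, reference model, adaptive gains, reference input, step size, and simulation horizon are hypothetical choices made only for illustration, and the transient behavior depends strongly on them.

```python
import numpy as np

# Hypothetical plant and design parameters (illustration only)
a, b = 1.0, 2.0            # unknown plant parameters in x_dot = a*x + b*u
am, bm = 2.0, 2.0          # reference model x_m_dot = -am*x_m + bm*r
g1, g2, g = 1.0, 1.0, 1.0  # adaptive gains gamma_1, gamma_2, gamma > 0
dt, T = 1e-3, 30.0

# States: plant, reference model, filters, and on-line estimates
x = xm = 0.0
phi1 = phi2 = uf = 0.0     # phi_1 = x/(s+am), phi_2 = -r/(s+am), u_f = u/(s+am)
k = l = bhat = w0 = 0.0

for i in range(int(T / dt)):
    r = 1.0 + 0.5 * np.sin(0.7 * i * dt)   # hypothetical reference input
    u = -k * x + l * r                      # control law (6.5.25)

    # Signals of the adaptive law (6.5.28)
    xi = k * phi1 + l * phi2 + uf
    m2 = 1.0 + phi1**2 + phi2**2 + uf**2    # m^2 = 1 + n_s^2
    e1 = x - xm
    w = w0 + bhat**2 / (2.0 * g)
    Nw = w**2 * np.cos(w)
    eps = (e1 - Nw * bhat * xi) / m2        # eps = (e_1 - e_1_hat) / m^2

    # Derivatives, then one explicit Euler step for all states
    dx, dxm = a * x + b * u, -am * xm + bm * r
    dphi1, dphi2, duf = -am * phi1 + x, -am * phi2 - r, -am * uf + u
    dk, dl = Nw * g1 * eps * phi1, Nw * g2 * eps * phi2
    dbhat, dw0 = Nw * g * eps * xi, eps**2 * m2
    x, xm = x + dt * dx, xm + dt * dxm
    phi1, phi2, uf = phi1 + dt * dphi1, phi2 + dt * dphi2, uf + dt * duf
    k, l, bhat, w0 = k + dt * dk, l + dt * dl, bhat + dt * dbhat, w0 + dt * dw0

print("final tracking error e1 =", x - xm)
```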
As shown in Chapter 4, the above adaptive law guarantees that k, l, w,