where ρ∗ is an unknown constant; z, ψ, z₀ are signals that can be measured; and W(s) is a known proper transfer function with stable poles. Because the unknown parameters ρ∗, θ∗ appear in a special bilinear form, we refer to (4.5.1) as the bilinear parametric model.
The procedure of Section 4.3 for estimating θ∗ in a linear model extends to (4.5.1) with minor modifications when sgn(ρ∗) is known or when sgn(ρ∗) and a lower bound ρ₀ for |ρ∗| are known. When sgn(ρ∗) is unknown, the design and analysis of the adaptive laws require some additional modifications and stability arguments. We treat each case of known and unknown sgn(ρ∗), ρ₀ separately.
4.5.1  Known Sign of ρ∗
The SPR-Lyapunov design approach and the gradient method with an in-
stantaneous cost function discussed in the linear parametric case extend to
the bilinear one in a rather straightforward manner.
4.5. BILINEAR PARAMETRIC MODEL
Let us start with the SPR-Lyapunov design approach. We rewrite (4.5.1) in the form

    z = W(s)L(s)ρ∗(θ∗ᵀφ + z₁)                                  (4.5.2)

where z₁ = L⁻¹(s)z₀, φ = L⁻¹(s)ψ, and L(s) is chosen so that L⁻¹(s) is proper and stable and W(s)L(s) is proper and SPR. The estimate ẑ of z and the normalized estimation error ε are generated as

    ẑ = W(s)L(s)ρ(θᵀφ + z₁)                                    (4.5.3)
    ε = z − ẑ − W(s)L(s)εnₛ²                                   (4.5.4)
where nₛ is designed to satisfy

    φ/m, z₁/m ∈ L∞,   m² = 1 + nₛ²                             (A2)

and ρ(t), θ(t) are the estimates of ρ∗, θ∗ at time t, respectively. Letting ρ̃ = ρ − ρ∗, θ̃ = θ − θ∗, it follows from (4.5.2) to (4.5.4) that

    ε = W(s)L(s)[ρ∗θ∗ᵀφ − ρ̃z₁ − ρθᵀφ − εnₛ²]
Now

    ρ∗θ∗ᵀφ − ρθᵀφ = ρ∗θ∗ᵀφ − ρ∗θᵀφ + ρ∗θᵀφ − ρθᵀφ = −ρ∗θ̃ᵀφ − ρ̃θᵀφ

and, therefore,

    ε = W(s)L(s)[−ρ∗θ̃ᵀφ − ρ̃ξ − εnₛ²],   ξ = θᵀφ + z₁          (4.5.5)
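The algebraic step above can be spot-checked numerically. The sketch below uses arbitrary made-up values for ρ∗, ρ, θ∗, θ, and φ (none come from the text) to verify the identity ρ∗θ∗ᵀφ − ρθᵀφ = −ρ∗θ̃ᵀφ − ρ̃θᵀφ:

```python
import numpy as np

# Arbitrary test values (hypothetical, for illustration only).
rng = np.random.default_rng(0)
rho_star, rho = 2.0, -0.7
theta_star = rng.normal(size=3)
theta = rng.normal(size=3)
phi = rng.normal(size=3)

theta_tilde = theta - theta_star   # parameter error theta - theta*
rho_tilde = rho - rho_star         # parameter error rho - rho*

lhs = rho_star * theta_star @ phi - rho * theta @ phi
rhs = -rho_star * theta_tilde @ phi - rho_tilde * theta @ phi
assert np.isclose(lhs, rhs)
```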
A minimal state representation of (4.5.5) is given by

    ė = Ac e + Bc(−ρ∗θ̃ᵀφ − ρ̃ξ − εnₛ²)                         (4.5.6)
    ε = Ccᵀe

where Ccᵀ(sI − Ac)⁻¹Bc = W(s)L(s) is SPR. The adaptive law is now developed by considering the Lyapunov-like function
    V(θ̃, ρ̃) = eᵀPc e/2 + |ρ∗| θ̃ᵀΓ⁻¹θ̃/2 + ρ̃²/(2γ)

where Pc = Pcᵀ > 0 satisfies the algebraic equations given by (4.3.32) that are implied by the KYL Lemma, and Γ = Γᵀ > 0, γ > 0. Along the solution of (4.5.6), we have
    V̇ = −eᵀqqᵀe/2 − (ν/2) eᵀLc e − ρ∗θ̃ᵀφε − ρ̃ξε − ε²nₛ² + |ρ∗| θ̃ᵀΓ⁻¹θ̃̇ + ρ̃ρ̃̇/γ
CHAPTER 4. ON-LINE PARAMETER ESTIMATION
where ν > 0, Lc = Lcᵀ > 0. Because ρ∗ = |ρ∗| sgn(ρ∗), it follows that by choosing

    θ̃̇ = θ̇ = Γεφ sgn(ρ∗)                                       (4.5.7)
    ρ̃̇ = ρ̇ = γεξ

we have
    V̇ = −eᵀqqᵀe/2 − (ν/2) eᵀLc e − ε²nₛ² ≤ 0
The rest of the analysis continues as in the case of the linear model. We summarize the properties of the bilinear adaptive law in the following theorem.
Theorem 4.5.1 The adaptive law (4.5.7) guarantees that

(i) ε, θ, ρ ∈ L∞.
(ii) ε, εnₛ, θ̇, ρ̇ ∈ L₂.
(iii) If φ, φ̇ ∈ L∞, φ is PE, and ξ ∈ L₂, then θ(t) converges to θ∗ as t → ∞.
(iv) If ξ ∈ L₂, the estimate ρ converges to a constant ρ̄ independent of the properties of φ.
Proof The proof of (i) and (ii) follows directly from the properties of V, V̇ by following the same procedure as in the linear parametric model case and is left as an exercise for the reader. The proof of (iii) is established by using the results of Corollary 4.3.1 to show that the homogeneous part of (4.5.6), with ρ̃ξ treated as an external input, together with the equation for θ̃ in (4.5.7), forms an e.s. system. Because ρ̃ξ ∈ L₂ and Ac is stable, it follows that e, θ̃ → 0 as t → ∞. The details of the proof are given in Section 4.8. The proof of (iv) follows from ε, ξ ∈ L₂ and the inequality

    ∫₀ᵗ |ρ̇| dτ ≤ γ ∫₀ᵗ |ε||ξ| dτ ≤ γ (∫₀^∞ ε² dτ)^(1/2) (∫₀^∞ ξ² dτ)^(1/2) < ∞

which implies that ρ̇ ∈ L₁. Therefore, we conclude that ρ(t) has a limit ρ̄, i.e., lim_{t→∞} ρ(t) = ρ̄.                                                    ✷
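The Cauchy-Schwarz step used in the proof of (iv) has an exact discrete analogue. The sketch below checks Σ|ρ̇|Δτ ≤ γ(Σε²Δτ)^(1/2)(Σξ²Δτ)^(1/2) for ρ̇ = γεξ on hypothetical sampled signals ε, ξ and gain γ invented for illustration (none come from the text):

```python
import numpy as np

# Hypothetical sampled signals on a uniform grid of step dt (illustration only).
rng = np.random.default_rng(1)
dt, gamma = 0.01, 0.5
t = np.arange(0.0, 10.0, dt)
eps = np.exp(-t) * rng.normal(size=t.size)   # a square-integrable signal
xi = np.exp(-0.5 * t) * np.sin(t)            # another square-integrable signal

rho_dot = gamma * eps * xi                   # adaptive law rho_dot = gamma * eps * xi
l1 = np.sum(np.abs(rho_dot)) * dt            # discrete L1 norm of rho_dot
bound = gamma * np.sqrt(np.sum(eps**2) * dt) * np.sqrt(np.sum(xi**2) * dt)
assert l1 <= bound + 1e-12                   # Cauchy-Schwarz holds exactly for sums
```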
The lack of convergence of ρ to ρ∗ is due to ξ ∈ L₂. If, however, φ, ξ are such that φₐ = [φᵀ, ξ]ᵀ is PE, then we can establish, by following the same approach as in the proof of Corollary 4.3.1, that θ̃, ρ̃ converge to zero
exponentially fast. For ξ ∈ L₂, the vector φₐ cannot be PE even when φ is PE.
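As a sanity check on the Lyapunov argument above, the error system (4.5.6) with the adaptive law (4.5.7) can be simulated for the simplest SPR choice W(s)L(s) = 1/(s+1), for which Ac = −1, Bc = Cc = 1 and Pc = 1 solves the KYL equations. Everything in the sketch below — the scalar signals φ, z₁, the gains, and the initial conditions — is a made-up example, not from the text; forward-Euler integration is used, and the test checks that V has decreased:

```python
import math

# Scalar example: W(s)L(s) = 1/(s+1), so e' = -e + input and eps = e.
rho_star, sgn = 2.0, 1.0          # unknown parameter and its (known) sign
theta_star = 1.5                  # made-up true theta*
Gamma, gamma = 1.0, 1.0           # adaptive gains
dt, T = 0.002, 40.0

e, theta_t, rho_t = 1.0, 0.0, 0.0
V_hist = []
t = 0.0
while t < T:
    phi = math.sin(t)                 # regressor signal (made up)
    z1 = 0.3                          # filtered z0 signal (made up)
    ns2 = phi * phi + z1 * z1         # normalizing signal, m^2 = 1 + ns^2
    xi = theta_t * phi + z1
    eps = e                           # eps = Cc e with Cc = 1
    theta_tilde = theta_t - theta_star
    rho_tilde = rho_t - rho_star
    # V = e^2/2 + |rho*| theta_tilde^2/(2 Gamma) + rho_tilde^2/(2 gamma)
    V_hist.append(0.5 * e * e
                  + abs(rho_star) * theta_tilde**2 / (2 * Gamma)
                  + rho_tilde**2 / (2 * gamma))
    # error dynamics (4.5.6) and adaptive laws (4.5.7), forward Euler
    e += dt * (-e - rho_star * theta_tilde * phi - rho_tilde * xi - eps * ns2)
    theta_t += dt * Gamma * eps * phi * sgn
    rho_t += dt * gamma * eps * xi
    t += dt

assert V_hist[-1] < V_hist[0]   # V is nonincreasing along the trajectory
```

The assertion mirrors V̇ = −eᵀqqᵀe/2 − (ν/2)eᵀLc e − ε²nₛ² ≤ 0: up to discretization error, V can only decrease.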
For the gradient method we rewrite (4.5.1) as

    z = ρ∗(θ∗ᵀφ + z₁)                                          (4.5.8)

where z₁ = W(s)z₀, φ = W(s)ψ. Then the estimate ẑ of z and the normalized estimation error ε are given by

    ẑ = ρ(θᵀφ + z₁)
    ε = (z − ẑ)/m² = (z − ρ(θᵀφ + z₁))/m²                      (4.5.9)
where nₛ² is chosen so that

    φ/m, z₁/m ∈ L∞,   m² = 1 + nₛ²                             (A2)
As in the case of the linear model, we consider the cost function

    J(ρ, θ) = ε²m²/2 = (z − ρ∗θᵀφ − ρξ + ρ∗ξ − ρ∗z₁)²/(2m²)

where ξ = θᵀφ + z₁ and the second equality is obtained by using the identity −ρ(θᵀφ + z₁) = −ρξ − ρ∗θᵀφ + ρ∗ξ − ρ∗z₁. Strictly speaking, J(ρ, θ) is not a convex function of ρ, θ over ℝⁿ⁺¹ because of the dependence of ξ on θ. Let us, however, ignore this dependence and treat ξ as an independent function of time. Using the gradient method and treating ξ as an arbitrary function of time, we obtain
    θ̇ = Γ₁ρ∗εφ,   ρ̇ = γεξ                                     (4.5.10)

where Γ₁ = Γ₁ᵀ > 0, γ > 0 are the adaptive gains. The adaptive law (4.5.10) cannot be implemented due to the unknown ρ∗. We go around this difficulty as follows: Because Γ₁ is arbitrary, we take Γ₁ = Γ/|ρ∗| for some other arbitrary matrix Γ = Γᵀ > 0 and use it together with ρ∗ = |ρ∗| sgn(ρ∗) to get rid of the unknown parameter ρ∗, i.e., Γ₁ρ∗ = Γρ∗/|ρ∗| = Γ sgn(ρ∗), leading to

    θ̇ = Γεφ sgn(ρ∗),   ρ̇ = γεξ                                (4.5.11)
which is implementable. The properties of (4.5.11) are given by the following
theorem.
Theorem 4.5.2 The adaptive law (4.5.11) guarantees that

(i) ε, εnₛ, θ, θ̇, ρ, ρ̇ ∈ L∞.
(ii) ε, εnₛ, θ̇, ρ̇ ∈ L₂.
(iii) If nₛ, φ ∈ L∞, φ is PE, and ξ ∈ L₂, then θ(t) converges to θ∗ as t → ∞.
(iv) If ξ ∈ L₂, then ρ converges to a constant ρ̄ as t → ∞, independent of the properties of φ.
The proof follows from that of the linear parametric model and of Theorem 4.5.1, and is left as an exercise for the reader.
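To see the law (4.5.11) in action, one can simulate it on a made-up instance of the static model (4.5.8). In the sketch below, the true parameters ρ∗, θ∗, the regressor φ, the signal z₁, and all gains are invented for illustration; the check uses the Lyapunov-like function V = |ρ∗|θ̃ᵀΓ⁻¹θ̃/2 + ρ̃²/(2γ), whose derivative along (4.5.11) is −ε²m² ≤ 0:

```python
import math

# Made-up problem data (illustration only).
rho_star = 2.0
theta_star = [1.0, -0.5]
sgn = 1.0                          # sgn(rho*) assumed known
Gamma, gamma = 1.0, 1.0            # adaptive gains (Gamma = Gamma * I)
dt, T = 0.01, 100.0

theta = [0.0, 0.0]
rho = 0.5
V_hist = []
t = 0.0
while t < T:
    phi = [math.sin(t), math.cos(t)]
    z1 = 1.0
    z = rho_star * (sum(ts * p for ts, p in zip(theta_star, phi)) + z1)
    ns2 = sum(p * p for p in phi) + z1 * z1     # normalizing signal
    m2 = 1.0 + ns2
    xi = sum(th * p for th, p in zip(theta, phi)) + z1
    eps = (z - rho * xi) / m2                   # normalized error (4.5.9)
    # V = |rho*| theta_tilde^T Gamma^-1 theta_tilde / 2 + rho_tilde^2 / (2 gamma)
    tt = [th - ts for th, ts in zip(theta, theta_star)]
    V_hist.append(abs(rho_star) * sum(x * x for x in tt) / (2 * Gamma)
                  + (rho - rho_star) ** 2 / (2 * gamma))
    # adaptive law (4.5.11), forward Euler
    theta = [th + dt * Gamma * eps * p * sgn for th, p in zip(theta, phi)]
    rho += dt * gamma * eps * xi
    t += dt

assert V_hist[-1] < V_hist[0]   # the parameter-error "energy" decreases
```

Note that, consistent with the discussion after Theorem 4.5.1, only the decrease of V is guaranteed; convergence of θ, ρ to θ∗, ρ∗ needs the extra PE and ξ ∈ L₂ conditions.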
The extension of the integral adaptive law and the least-squares algorithm to the bilinear parametric model is more complicated, and the resulting schemes are difficult to implement, due to the appearance of the unknown ρ∗ in the adaptive laws. This problem is avoided by assuming knowledge of a lower bound for |ρ∗| in addition to sgn(ρ∗), as discussed in the next section.
4.5.2  Sign of ρ∗ and Lower Bound ρ₀ Are Known
The complications with the bilinearity in (4.5.1) are avoided if we rewrite (4.5.1) in the form of the linear parametric model

    z = θ̄∗ᵀφ̄                                                  (4.5.12)

where θ̄∗ = [θ̄₁∗, θ̄₂∗ᵀ]ᵀ, φ̄ = [z₁, φᵀ]ᵀ, and θ̄₁∗ = ρ∗, θ̄₂∗ = ρ∗θ∗. We can now use the methods of Section 4.3 to generate the estimate θ̄(t) of θ̄∗ at each time t. From the estimate θ̄ = [θ̄₁, θ̄₂ᵀ]ᵀ of θ̄∗, we calculate the estimates ρ, θ of ρ∗, θ∗ as follows:

    ρ(t) = θ̄₁(t),   θ(t) = θ̄₂(t)/θ̄₁(t)                        (4.5.13)
The possibility of division by zero or by a small number in (4.5.13) is avoided by constraining the estimate of θ̄₁ to satisfy |θ̄₁(t)| ≥ ρ₀ > 0 for some ρ₀ ≤ |ρ∗|. This is achieved by using the gradient projection method and assuming that ρ₀ and sgn(ρ∗) are known. We illustrate the design of such a gradient algorithm as follows:
By considering (4.5.12) and following the procedure of Section 4.3, we generate

    ẑ = θ̄ᵀφ̄,   ε = (z − ẑ)/m²                                 (4.5.14)
where m² = 1 + nₛ² and nₛ is chosen so that φ̄/m ∈ L∞, e.g., nₛ² = φ̄ᵀφ̄. The adaptive law is developed by using the gradient projection method to solve the constrained minimization problem
    min_{θ̄} J(θ̄) = min_{θ̄} (z − θ̄ᵀφ̄)²/(2m²)   subject to   ρ₀ − θ̄₁ sgn(ρ∗) ≤ 0

i.e.,

    θ̄̇ = Γεφ̄                            if ρ₀ − θ̄₁sgn(ρ∗) < 0,
                                          or if ρ₀ − θ̄₁sgn(ρ∗) = 0 and (Γεφ̄)ᵀ∇g ≤ 0
    θ̄̇ = Γεφ̄ − Γ(∇g∇gᵀ/∇gᵀΓ∇g)Γεφ̄       otherwise
                                                               (4.5.15)
where g(θ̄) = ρ₀ − θ̄₁ sgn(ρ∗). For simplicity, let us assume that Γ = diag{γ₁, Γ₂}, where γ₁ > 0 is a scalar and Γ₂ = Γ₂ᵀ > 0, and simplify the expressions in (4.5.15). Because

    ∇g = [−sgn(ρ∗), 0, . . . , 0]ᵀ
it follows from (4.5.15) that

    θ̄̇₁ = γ₁εφ̄₁    if θ̄₁sgn(ρ∗) > ρ₀,
                    or if θ̄₁sgn(ρ∗) = ρ₀ and −γ₁εφ̄₁ sgn(ρ∗) ≤ 0    (4.5.16)
    θ̄̇₁ = 0         otherwise

where θ̄₁(0) satisfies θ̄₁(0) sgn(ρ∗) ≥ ρ₀, and

    θ̄̇₂ = Γ₂εφ̄₂                                                (4.5.17)

where φ̄₁ = z₁ and φ̄₂ = φ.
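A discrete-time sketch of the projected gradient law (4.5.16), (4.5.17) with the recovery step (4.5.13) follows. All problem data (ρ∗ = 2, θ∗, the signals z₁, φ, the gains, and ρ₀) are invented for illustration, and the projection is implemented in the standard discrete fashion: take the unconstrained Euler step for θ̄₁, then clip it back to the set {θ̄₁ : θ̄₁ sgn(ρ∗) ≥ ρ₀}, which mimics the continuous-time law (4.5.16):

```python
import math

# Made-up true parameters and signals (illustration only).
rho_star = 2.0
theta_star = [1.0, -0.5]
sgn = 1.0                      # sgn(rho*) known
rho0 = 0.5                     # known lower bound, rho0 <= |rho*|
g1, G2 = 2.0, 2.0              # gains gamma_1 and Gamma_2 = G2 * I
dt, T = 0.01, 200.0

th1 = rho0                     # bar-theta_1(0), starts on the constraint boundary
th2 = [0.0, 0.0]               # bar-theta_2(0)
t = 0.0
while t < T:
    z1 = 1.0
    phi = [math.sin(t), math.cos(t)]
    # linear model (4.5.12): z = rho* z1 + (rho* theta*)^T phi
    z = rho_star * z1 + rho_star * sum(ts * p for ts, p in zip(theta_star, phi))
    m2 = 1.0 + z1 * z1 + sum(p * p for p in phi)   # m^2 = 1 + ns^2, ns^2 = phibar^T phibar
    zhat = th1 * z1 + sum(a * p for a, p in zip(th2, phi))
    eps = (z - zhat) / m2                          # normalized error (4.5.14)
    # Euler step for th1, then project onto { th1 : th1*sgn >= rho0 } (discrete (4.5.16))
    th1 += dt * g1 * eps * z1
    if th1 * sgn < rho0:
        th1 = rho0 * sgn
    th2 = [a + dt * G2 * eps * p for a, p in zip(th2, phi)]   # (4.5.17)
    t += dt

rho = th1                                          # recovery step (4.5.13)
theta = [a / th1 for a in th2]
assert abs(th1) >= rho0                            # projection keeps the divisor safe
assert abs(rho - rho_star) < 0.05
assert all(abs(a - b) < 0.05 for a, b in zip(theta, theta_star))
```

Here φ̄ = [1, sin t, cos t]ᵀ is PE, so (consistent with Theorem 4.5.3 below) the estimates converge; the first assertion checks the property that makes the division in (4.5.13) safe.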
Because θ̄₁(t) is guaranteed by the projection to satisfy |θ̄₁(t)| ≥ ρ₀ > 0, the estimates ρ(t), θ(t) can be calculated from (4.5.13) without the possibility of division by zero. The properties of the adaptive law (4.5.16), (4.5.17) with (4.5.13) are summarized by the following theorem.
Theorem 4.5.3 The adaptive law described by (4.5.13), (4.5.16), and (4.5.17) guarantees that
(i) ε, εnₛ, ρ, θ, ρ̇, θ̇ ∈ L∞.
(ii) ε, εnₛ, ρ̇, θ̇ ∈ L₂.
(iii) If nₛ, φ̄ ∈ L∞ and φ̄ is PE, then θ̄, θ, ρ converge to θ̄∗, θ∗, ρ∗, respectively, exponentially fast.
Proof Consider the Lyapunov-like function

    V = θ̄̃₁²/(2γ₁) + θ̄̃₂ᵀΓ₂⁻¹θ̄̃₂/2

where θ̄̃₁ = θ̄₁ − θ̄₁∗ and θ̄̃₂ = θ̄₂ − θ̄₂∗. Then along the solution of (4.5.16), (4.5.17), we have

    V̇ = −ε²m²       if θ̄₁sgn(ρ∗) > ρ₀,
                     or if θ̄₁sgn(ρ∗) = ρ₀ and −γ₁εφ̄₁ sgn(ρ∗) ≤ 0    (4.5.18)
    V̇ = θ̄̃₂ᵀεφ̄₂     otherwise