    θₖ₊₁ = θ̄ₖ₊₁                   if θ̄ₖ₊₁ ∈ S
    θₖ₊₁ = θ̄ₖ₊₁ M₀/|θ̄ₖ₊₁|        if θ̄ₖ₊₁ ∉ S

and θ₀ ∈ S. As in the continuous-time case, it can be shown that the hybrid
adaptive law with projection has the same properties as those of (4.6.4). In
addition it guarantees that θk ∈ S, ∀k ≥ 0. The details of this analysis are
left as an exercise for the reader.
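As a concrete illustration, the following minimal Python sketch (ours, not the text's) performs one hybrid update followed by the projection step, assuming S is the ball {θ : |θ| ≤ M₀} suggested by the radial scaling above; the function name and the externally supplied integral are illustrative:

    import numpy as np

    def hybrid_step_with_projection(theta_k, Gamma, int_eps_phi, M0):
        # theta_k     : current estimate theta_k (n-vector)
        # Gamma       : adaptive gain matrix, Gamma = Gamma^T > 0
        # int_eps_phi : integral of eps(tau)*phi(tau) over [t_k, t_{k+1})
        # M0          : radius of the assumed constraint set S = {theta : |theta| <= M0}
        theta_bar = theta_k + Gamma @ int_eps_phi      # unconstrained hybrid update
        norm = np.linalg.norm(theta_bar)
        if norm <= M0:                                 # theta_bar in S: accept it
            return theta_bar
        return theta_bar * (M0 / norm)                 # else scale radially back onto S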
4.7 Summary of Adaptive Laws
In this section, we present tables with the adaptive laws developed in the
previous sections together with their properties.
4.8 Parameter Convergence Proofs
In this section, we present the proofs of the theorems and corollaries of the previous
sections that deal with parameter convergence. These proofs are useful for the
reader who is interested in studying the behavior and convergence properties of the
parameter estimates. They can be omitted by the reader whose interest is mainly
in adaptive control, where parameter convergence is not part of the control objective.
4.8.1 Useful Lemmas
The following two lemmas are used in the proofs of the corollaries and theorems
presented in this section.
Table 4.1 Adaptive law based on SPR-Lyapunov design approach

  Parametric model             z = W(s) θ∗ᵀψ
  Parametric model rewritten   z = W(s)L(s) θ∗ᵀφ,  φ = L⁻¹(s)ψ
  Estimation model             ẑ = W(s)L(s) θᵀφ
  Normalized estimation error  ε = z − ẑ − W(s)L(s) ε nₛ²
  Adaptive law                 θ̇ = Γ ε φ
  Design variables             L⁻¹(s) proper and stable; W(s)L(s) proper and SPR;
                               m² = 1 + nₛ² and nₛ chosen so that φ/m ∈ L∞
                               (e.g., nₛ² = α φᵀφ for some α > 0)
  Properties                   (i) ε, θ ∈ L∞; (ii) ε, ε nₛ, θ̇ ∈ L₂
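To make Table 4.1 concrete, here is a minimal scalar simulation sketch (ours, not the book's), assuming W(s)L(s) = 1/(s + 1), which is SPR, and forward-Euler integration; the regressor, gains, and step size are illustrative:

    import numpy as np

    # Scalar example of Table 4.1 with W(s)L(s) = 1/(s+1); states are filter outputs.
    dt, n_steps = 1e-3, 200_000
    theta_star, Gamma, alpha = 2.0, 10.0, 1.0
    theta = 0.0
    x_z = x_zhat = x_n = 0.0   # outputs of 1/(s+1) driven by theta*.phi, theta.phi, eps*ns^2

    for k in range(n_steps):
        t = k * dt
        phi = np.sin(0.5 * t) + np.cos(2.0 * t)   # persistently exciting regressor
        ns2 = alpha * phi * phi                   # normalizing signal ns^2 = alpha*phi^2
        eps = x_z - x_zhat - x_n                  # eps = z - z_hat - W L [eps ns^2]
        # forward-Euler integration of the filters and the adaptive law
        x_z    += dt * (-x_z    + theta_star * phi)
        x_zhat += dt * (-x_zhat + theta      * phi)
        x_n    += dt * (-x_n    + eps * ns2)
        theta  += dt * (Gamma * eps * phi)        # adaptive law theta_dot = Gamma*eps*phi

    print(f"theta after {n_steps*dt:.0f} s: {theta:.4f} (true value {theta_star})")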
Lemma 4.8.1 (Uniform Complete Observability (UCO) with Output Injection) Assume that there exist constants ν > 0, kν ≥ 0 such that for all t₀ ≥ 0, K(t) ∈ R^{n×l} satisfies the inequality

    ∫_{t₀}^{t₀+ν} |K(τ)|² dτ ≤ kν                                        (4.8.1)

Then (C, A), where C ∈ R^{n×l}, A ∈ R^{n×n}, is a UCO pair if and only if (C, A + KCᵀ) is a UCO pair.
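As a quick numerical illustration of the lemma (ours, with illustrative matrices), one can compare the observability grammians of (C, A) and of the output-injected pair (C, A + KCᵀ) over an interval [0, ν]; both should be positive definite:

    import numpy as np
    from scipy.linalg import expm

    # Illustrative data: a 2-state system, y = C^T x, and a constant (hence L2-bounded) K
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    C = np.array([[1.0], [0.0]])            # C in R^{n x l} with l = 1
    K = np.array([[0.5], [-1.0]])
    nu, steps = 2.0, 2000
    dt = nu / steps

    def grammian(F):
        # Observability grammian N(0, nu) of (C, F), via a Riemann sum over Phi(t, 0)
        N = np.zeros((2, 2))
        for k in range(steps):
            Phi = expm(F * (k * dt))
            N += Phi.T @ C @ C.T @ Phi * dt
        return N

    N  = grammian(A)                        # grammian of (C, A)
    N1 = grammian(A + K @ C.T)              # grammian of (C, A + K C^T)
    print("eigenvalues of N :", np.linalg.eigvalsh(N))    # all positive => UCO
    print("eigenvalues of N1:", np.linalg.eigvalsh(N1))   # all positive => UCO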
Proof. We show that if there exist positive constants β₁, β₂ > 0 such that the
observability grammian N(t₀, t₀ + ν) of the system (C, A) satisfies

    β₁I ≤ N(t₀, t₀ + ν) ≤ β₂I                                            (4.8.2)

then the observability grammian N₁(t₀, t₀ + ν) of (C, A + KCᵀ) satisfies

    β₁′I ≤ N₁(t₀, t₀ + ν) ≤ β₂′I                                          (4.8.3)

for some constants β₁′, β₂′ > 0.
Table 4.2 Gradient algorithms

  Parametric model             z = θ∗ᵀφ
  Estimation model             ẑ = θᵀφ
  Normalized estimation error  ε = (z − ẑ)/m²

  A. Based on the instantaneous cost
  Adaptive law                 θ̇ = Γ ε φ
  Design variables             m² = 1 + nₛ², nₛ² = α φᵀφ, α > 0; Γ = Γᵀ > 0
  Properties                   (i) ε, ε nₛ, θ, θ̇ ∈ L∞; (ii) ε, ε nₛ, θ̇ ∈ L₂

  B. Based on the integral cost
  Adaptive law                 θ̇ = −Γ(Rθ + Q)
                               Ṙ = −βR + φφᵀ/m²,  R(0) = 0
                               Q̇ = −βQ − zφ/m²,   Q(0) = 0
  Design variables             m² = 1 + nₛ², nₛ chosen so that φ/m ∈ L∞
                               (e.g., nₛ² = α φᵀφ, α > 0); β > 0, Γ = Γᵀ > 0
  Properties                   (i) ε, ε nₛ, θ, θ̇, R, Q ∈ L∞; (ii) ε, ε nₛ, θ̇ ∈ L₂;
                               (iii) lim_{t→∞} θ̇(t) = 0
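A minimal sketch of the instantaneous-cost gradient law of Table 4.2, part A (our illustration; the regressor, gains, and Euler step are assumptions):

    import numpy as np

    dt, n_steps = 1e-3, 100_000
    theta_star = np.array([1.5, -0.7])      # unknown parameters (illustrative)
    theta = np.zeros(2)
    Gamma, alpha = 5.0 * np.eye(2), 1.0

    for k in range(n_steps):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])   # persistently exciting regressor
        z = theta_star @ phi                           # measured signal z = theta*^T phi
        m2 = 1.0 + alpha * phi @ phi                   # normalization m^2 = 1 + ns^2
        eps = (z - theta @ phi) / m2                   # normalized estimation error
        theta += dt * (Gamma @ (eps * phi))            # gradient adaptive law
    print("theta estimate:", theta, "true:", theta_star)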
From the definition of the observability grammian matrix, (4.8.3) is equivalent to

    β₁′|x₁(t₀)|² ≤ ∫_{t₀}^{t₀+ν} |Cᵀ(t)x₁(t)|² dt ≤ β₂′|x₁(t₀)|²         (4.8.4)

where x₁ is the state of the system

    ẋ₁ = (A + KCᵀ)x₁
    y₁ = Cᵀx₁                                                            (4.8.5)
Table 4.3 Least-squares algorithms

  Parametric model             z = θ∗ᵀφ
  Estimation model             ẑ = θᵀφ
  Normalized estimation error  ε = (z − ẑ)/m²

  A. Pure least-squares
  Adaptive law                 θ̇ = P ε φ
                               Ṗ = −P φφᵀP/m²,  P(0) = P₀
  Design variables             P₀ = P₀ᵀ > 0; m² = 1 + nₛ², nₛ chosen so that
                               φ/m ∈ L∞ (e.g., nₛ² = α φᵀφ, α > 0, or nₛ² = φᵀPφ)
  Properties                   (i) ε, ε nₛ, θ, θ̇, P ∈ L∞; (ii) ε, ε nₛ, θ̇ ∈ L₂;
                               (iii) lim_{t→∞} θ(t) = θ̄

  B. Least-squares with covariance resetting
  Adaptive law                 θ̇ = P ε φ
                               Ṗ = −P φφᵀP/m²,  P(tᵣ⁺) = P₀ = ρ₀I,
                               where tᵣ is the time for which λmin(P(tᵣ)) ≤ ρ₁
  Design variables             ρ₀ > ρ₁ > 0; m² = 1 + nₛ², nₛ chosen so that
                               φ/m ∈ L∞ (e.g., nₛ² = α φᵀφ, α > 0)
  Properties                   (i) ε, ε nₛ, θ, θ̇, P ∈ L∞; (ii) ε, ε nₛ, θ̇ ∈ L₂

  C. Least-squares with forgetting factor
  Adaptive law                 θ̇ = P ε φ
                               Ṗ = βP − P φφᵀP/m²   if ‖P(t)‖ ≤ R₀
                               Ṗ = 0                otherwise;  P(0) = P₀
  Design variables             m² = 1 + nₛ², nₛ² = α φᵀφ or φᵀPφ; β > 0, R₀ > 0
                               scalars; P₀ = P₀ᵀ > 0, ‖P₀‖ ≤ R₀
  Properties                   (i) ε, ε nₛ, θ, θ̇, P ∈ L∞; (ii) ε, ε nₛ, θ̇ ∈ L₂
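A sketch of the least-squares law with forgetting factor (Table 4.3, part C); the covariance update is frozen once ‖P‖ would exceed R₀. All numerical choices are our illustrative assumptions:

    import numpy as np

    dt, n_steps = 1e-3, 100_000
    theta_star = np.array([2.0, -1.0])
    theta = np.zeros(2)
    P = 10.0 * np.eye(2)                  # P(0) = P0 = P0^T > 0
    beta, R0, alpha = 0.5, 100.0, 1.0     # forgetting factor and norm bound

    for k in range(n_steps):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(3.0 * t)])
        z = theta_star @ phi
        m2 = 1.0 + alpha * phi @ phi
        eps = (z - theta @ phi) / m2
        theta += dt * (P @ phi) * eps     # theta_dot = P eps phi
        if np.linalg.norm(P, 2) <= R0:    # update P only while ||P|| <= R0
            P += dt * (beta * P - P @ np.outer(phi, phi) @ P / m2)
    print("theta estimate:", theta, "true:", theta_star)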
Table 4.4 Adaptive laws for the bilinear model

  Parametric model             z = W(s) ρ∗(θ∗ᵀψ + z₀)

  A. SPR-Lyapunov design: sign of ρ∗ known
  Parametric model rewritten   z = W(s)L(s) ρ∗(θ∗ᵀφ + z₁),
                               φ = L⁻¹(s)ψ,  z₁ = L⁻¹(s)z₀
  Estimation model             ẑ = W(s)L(s) ρ(θᵀφ + z₁)
  Normalized estimation error  ε = z − ẑ − W(s)L(s) ε nₛ²
  Adaptive law                 θ̇ = Γ ε φ sgn(ρ∗)
                               ρ̇ = γ ε ξ,  ξ = θᵀφ + z₁
  Design variables             L⁻¹(s) proper and stable; W(s)L(s) proper and SPR;
                               m² = 1 + nₛ²; nₛ chosen so that φ/m, z₁/m ∈ L∞
                               (e.g., nₛ² = α(φᵀφ + z₁²), α > 0); Γ = Γᵀ > 0, γ > 0
  Properties                   (i) ε, θ, ρ ∈ L∞; (ii) ε, ε nₛ, θ̇, ρ̇ ∈ L₂

  B. Gradient algorithm: sign(ρ∗) known
  Parametric model rewritten   z = ρ∗(θ∗ᵀφ + z₁),  φ = W(s)ψ,  z₁ = W(s)z₀
  Estimation model             ẑ = ρ(θᵀφ + z₁)
  Normalized estimation error  ε = (z − ẑ)/m²
  Adaptive law                 θ̇ = Γ ε φ sgn(ρ∗)
                               ρ̇ = γ ε ξ,  ξ = θᵀφ + z₁
  Design variables             m² = 1 + nₛ²; nₛ chosen so that φ/m, z₁/m ∈ L∞
                               (e.g., nₛ² = φᵀφ + z₁²); Γ = Γᵀ > 0, γ > 0
  Properties                   (i) ε, ε nₛ, θ, ρ, θ̇, ρ̇ ∈ L∞; (ii) ε, ε nₛ, θ̇, ρ̇ ∈ L₂
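A scalar sketch of the gradient law for the bilinear model (Table 4.4, part B), assuming sgn(ρ∗) = +1 and that φ and z₁ are already available as filtered signals; all numbers are illustrative:

    import numpy as np

    dt, n_steps = 1e-3, 100_000
    rho_star, theta_star = 1.8, -0.6        # unknown bilinear parameters
    rho, theta = 0.5, 0.0                   # estimates
    Gamma, gamma, sgn_rho = 4.0, 4.0, 1.0   # gains; sgn(rho*) assumed known, +1

    for k in range(n_steps):
        t = k * dt
        phi = np.sin(0.7 * t)               # phi = W(s) psi (assumed given)
        z1 = np.cos(1.3 * t)                # z1 = W(s) z0 (assumed given)
        z = rho_star * (theta_star * phi + z1)
        m2 = 1.0 + phi * phi + z1 * z1      # m^2 = 1 + ns^2, ns^2 = phi^2 + z1^2
        xi = theta * phi + z1
        eps = (z - rho * xi) / m2           # z_hat = rho (theta phi + z1)
        theta += dt * Gamma * eps * phi * sgn_rho
        rho   += dt * gamma * eps * xi
    print(f"estimates: rho={rho:.3f}, theta={theta:.3f}")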
The system (4.8.5) is obtained, using output injection, from the system

    ẋ = Ax
    y = Cᵀx                                                              (4.8.6)

From (4.8.5) and (4.8.6), it follows that e = x₁ − x satisfies

    ė = Ae + KCᵀx₁
Table 4.4 (Continued)
  C. Gradient algorithm with projection:
     sign(ρ∗) and lower bound 0 < ρ₀ ≤ |ρ∗| known
  Parametric model rewritten   z = θ̄∗ᵀφ̄
                               θ̄∗ = [θ̄₁∗, θ̄₂∗ᵀ]ᵀ,  θ̄₁∗ = ρ∗,  θ̄₂∗ = ρ∗θ∗
                               φ̄ = [z₁, φᵀ]ᵀ
  Estimation model             ẑ = θ̄ᵀφ̄
  Normalized estimation error  ε = (z − ẑ)/m²
  Adaptive law                 θ̄̇₁ = γ₁ ε z₁  if θ̄₁ sgn(ρ∗) > ρ₀, or if
                                  θ̄₁ sgn(ρ∗) = ρ₀ and −γ₁ ε z₁ sgn(ρ∗) ≤ 0
                               θ̄̇₁ = 0        otherwise
                               θ̄̇₂ = Γ₂ ε φ
                               ρ = θ̄₁,  θ = θ̄₂/θ̄₁
  Design variables             m² = 1 + nₛ²; nₛ chosen so that φ̄/m ∈ L∞
                               (e.g., nₛ² = α φ̄ᵀφ̄, α > 0); γ₁ > 0; θ̄₁(0)
                               satisfies |θ̄₁(0)| ≥ ρ₀; Γ₂ = Γ₂ᵀ > 0, γ > 0
  Properties                   (i) ε, ε nₛ, θ, ρ, θ̇, ρ̇ ∈ L∞; (ii) ε, ε nₛ, θ̇, ρ̇ ∈ L₂
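A sketch of the projection logic in part C (our illustration); the update keeps θ̄₁ sgn(ρ∗) from dropping below the known bound ρ₀, so the division θ = θ̄₂/θ̄₁ is always well defined:

    import numpy as np

    def theta1_bar_dot(theta1_bar, eps, z1, gamma1, rho0, sgn_rho):
        # Projected derivative of theta1_bar (the estimate of rho*), Table 4.4, part C
        unconstrained = gamma1 * eps * z1
        if theta1_bar * sgn_rho > rho0:
            return unconstrained            # strictly inside the allowed set
        if theta1_bar * sgn_rho == rho0 and -unconstrained * sgn_rho <= 0:
            return unconstrained            # on the boundary, moving inward
        return 0.0                          # otherwise: freeze the update

Since θ̄₂∗ = ρ∗θ∗, the estimates are then recovered as ρ = θ̄₁ and θ = θ̄₂/θ̄₁.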
  D. Gradient algorithm without projection: sign(ρ∗) unknown
  Parametric model             z = ρ∗(θ∗ᵀφ + z₁)
  Estimation model             ẑ = N(x) ρ(θᵀφ + z₁)
                               N(x) = x² cos x
                               x = w + ρ²/(2γ),  ẇ = ε²m²,  w(0) = 0
  Normalized estimation error  ε = (z − ẑ)/m²
  Adaptive law                 θ̇ = N(x) Γ ε φ
                               ρ̇ = N(x) γ ε ξ,  ξ = θᵀφ + z₁
  Design variables             m² = 1 + nₛ²; nₛ chosen so that φ/m, z₁/m ∈ L∞
                               (e.g., nₛ² = φᵀφ + z₁²); γ > 0, Γ = Γᵀ > 0
  Properties                   (i) ε, ε nₛ, θ, ρ, θ̇, ρ̇, x, w ∈ L∞;
                               (ii) ε, ε nₛ, θ̇, ρ̇ ∈ L₂
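A sketch of part D, where the Nussbaum-type gain N(x) = x² cos x compensates for the unknown sign of ρ∗ (our illustration; a fixed-step Euler simulation of this law can be numerically delicate, so the step size and gains below are assumptions):

    import numpy as np

    dt, n_steps = 1e-3, 100_000
    rho_star, theta_star = -1.2, 0.8       # note: sign of rho* is NOT used by the law
    rho, theta, w = 0.1, 0.0, 0.0
    Gamma, gamma = 2.0, 2.0

    for k in range(n_steps):
        t = k * dt
        phi, z1 = np.sin(0.9 * t), np.cos(1.7 * t)
        z = rho_star * (theta_star * phi + z1)
        m2 = 1.0 + phi * phi + z1 * z1
        x = w + rho * rho / (2.0 * gamma)  # argument of the Nussbaum gain
        N = x * x * np.cos(x)              # N(x) = x^2 cos x
        xi = theta * phi + z1
        eps = (z - N * rho * xi) / m2      # z_hat = N(x) rho (theta phi + z1)
        theta += dt * N * Gamma * eps * phi
        rho   += dt * N * gamma * eps * xi
        w     += dt * eps * eps * m2       # w_dot = eps^2 m^2
    print(f"estimates: rho={rho:.3f}, theta={theta:.3f}, x={w + rho*rho/(2*gamma):.3f}")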
Table 4.5 Hybrid adaptive law

  Parametric model             z = θ∗ᵀφ
  Estimation model             ẑ = θₖᵀφ,  t ∈ [tₖ, tₖ₊₁)
  Normalized estimation error  ε = (z − ẑ)/m²
  Adaptive law                 θₖ₊₁ = θₖ + Γ ∫_{tₖ}^{tₖ₊₁} ε(τ)φ(τ) dτ,
                               k = 0, 1, 2, . . .
  Design variables             Sampling period Tₛ = tₖ₊₁ − tₖ > 0, tₖ = kTₛ;
                               m² = 1 + nₛ² and nₛ chosen so that |φ|/m ≤ 1
                               (e.g., nₛ² = α φᵀφ, α ≥ 1); Γ = Γᵀ > 0;
                               2 − Tₛ λmax(Γ) > γ for some constant γ > 0
  Properties                   (i) θₖ ∈ l∞; ε, ε nₛ ∈ L∞
                               (ii) |θₖ₊₁ − θₖ| ∈ l₂; ε, ε nₛ ∈ L₂
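A sketch of the hybrid law of Table 4.5: ε is integrated over each interval [tₖ, tₖ₊₁) while θₖ is held constant, and the update is applied only at the sampling instants; the Riemann-sum integration and all numbers are our illustrative assumptions:

    import numpy as np

    dt, Ts = 1e-3, 0.1                       # integration step and sampling period
    n_intervals = 1000
    theta_star = np.array([1.0, -2.0])
    theta_k = np.zeros(2)
    Gamma, alpha = np.eye(2), 1.0
    gamma_margin = 2.0 - Ts * np.linalg.eigvalsh(Gamma).max()
    assert gamma_margin > 0                  # design condition: 2 - Ts*lambda_max(Gamma) > gamma > 0

    t = 0.0
    for k in range(n_intervals):
        integral = np.zeros(2)
        for _ in range(int(Ts / dt)):        # integrate eps*phi over [t_k, t_{k+1})
            phi = np.array([np.sin(t), np.cos(2.0 * t)])
            z = theta_star @ phi
            m2 = 1.0 + alpha * phi @ phi     # with alpha >= 1, |phi|/m <= 1
            eps = (z - theta_k @ phi) / m2   # theta_k held constant within the interval
            integral += eps * phi * dt
            t += dt
        theta_k = theta_k + Gamma @ integral # update only at t_{k+1}
    print("theta_k:", theta_k, "true:", theta_star)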
Consider the trajectories x(t) and x₁(t) with the same initial conditions. We
have

    e(t) = ∫_{t₀}^{t} Φ(t, τ)K(τ)Cᵀ(τ)x₁(τ) dτ                           (4.8.7)

where Φ is the state transition matrix of (4.8.6). Defining

    x̄₁ = KCᵀx₁/|KCᵀx₁|   if |Cᵀx₁| ≠ 0
    x̄₁ = K/|K|            if |Cᵀx₁| = 0

we obtain, using the Schwarz inequality, that

    |Cᵀ(t)e(t)|² ≤ [ ∫_{t₀}^{t} Cᵀ(t)Φ(