Note that P_{k+1|k} given by (66) satisfies (41) for any Φ_k and K_k. In this sense, we can choose them to minimize the covariance of the estimation error given by P_{k+1|k}. We compute the first-order partial derivatives of (66) with respect to Φ_k and K_k and set them equal to zero, i.e.,
\[
\frac{\partial}{\partial \Phi_k} P_{k+1|k} = 0, \qquad (67)
\]
\[
\frac{\partial}{\partial K_k} P_{k+1|k} = 0. \qquad (68)
\]
Then the optimal values Φ_k = Φ*_k and K_k = K*_k are given by
\[
K_k^{*} = \left( A_k S_k C_k^{T} + \Psi_{1,k} \right)\left( C_k S_k C_k^{T} + \Psi_{2,k} \right)^{\dagger}, \qquad (69)
\]
\[
\Phi_k^{*} = A_k + \left( A_k - K_k^{*} C_k \right)\left( P_{12c,k} P_{22c,k}^{\dagger} - I \right), \qquad (70)
\]
where
\[
S_k := P_{11c,k} - P_{12c,k} P_{22c,k}^{\dagger} P_{12c,k}^{T}, \qquad (71)
\]
\[
\Psi_{1,k} := B_{w,k} W_{c,k} D_{w,k}^{T} + B_{v,k} V_{c,k} D_{v,k}^{T} + \Delta_{1,k}, \qquad (72)
\]
\[
\Psi_{2,k} := D_{w,k} W_{c,k} D_{w,k}^{T} + D_{v,k} V_{c,k} D_{v,k}^{T} + \Delta_{2,k}. \qquad (73)
\]
In fact, Φ*_k and K*_k provide the global minimum of P_{k+1|k}. This can be proved through the convexity of P_{k+1|k} in (66). We first have that P_{k+1|k} > 0, W_k > 0 and V_k > 0, ∀k. Then we compute the Hessian matrix to conclude that we have the global minimum:
\[
\mathrm{He}_{P_{k+1|k}} :=
\begin{bmatrix}
\dfrac{\partial^{2}}{\partial \Phi_k^{2}} P_{k+1|k} & \dfrac{\partial^{2}}{\partial [\Phi_k, K_k]} P_{k+1|k} \\[2mm]
\dfrac{\partial^{2}}{\partial [K_k, \Phi_k]} P_{k+1|k} & \dfrac{\partial^{2}}{\partial K_k^{2}} P_{k+1|k}
\end{bmatrix}
=
\begin{bmatrix}
2 P_{22,k|k-1} & 2 C_k S_{2,k} \\
2 S_{2,k}^{T} C_k^{T} & 2 C_k S_k C_k^{T} + \Psi_{3,k}
\end{bmatrix}
> 0.
\]
In the previous equations we used the pseudo-inverse instead of the ordinary matrix inverse. Looking at the initial conditions P_{12,0|−1} = P^T_{12,0|−1} = P_{22,0|−1} = 0, one can note that P_{22,0} = 0 and, as a consequence, the inverse does not exist at every instant k. However, it can be proved that the pseudo-inverse does exist.
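As a quick numerical illustration (a minimal NumPy sketch, not part of the original derivation), the pseudo-inverse remains well defined even when the matrix is singular, as happens with P_{22,0} = 0:

```python
import numpy as np

# P_{22,0} = 0: the ordinary inverse does not exist, but the pseudo-inverse does.
P22_0 = np.zeros((2, 2))

try:
    np.linalg.inv(P22_0)              # raises LinAlgError: singular matrix
except np.linalg.LinAlgError as err:
    print("inverse failed:", err)

print(np.linalg.pinv(P22_0))          # Moore-Penrose pseudo-inverse: the zero matrix
```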
Replacing (70) and (69) in (52) and (53), we obtain
\[
P_{12,k+1|k} = P_{12,k+1|k}^{T} = P_{22,k+1|k}
= A_k P_{12c,k} P_{22c,k}^{\dagger} P_{12c,k}^{T} A_k^{T}
+ \left( A_k S_k C_k^{T} + \Psi_{1,k} \right)\left( C_k S_k C_k^{T} + \Psi_{2,k} \right)^{\dagger}\left( A_k S_k C_k^{T} + \Psi_{1,k} \right)^{T}. \qquad (74)
\]
Since (74) holds for any symmetric P_{k+1|k}, if we start with a matrix P_{n+1|n} satisfying P_{12,n+1|n} = P^T_{12,n+1|n} = P_{22,n+1|n} for some n ≥ 0, then we can conclude that (74) is valid for any k ≥ n.
This equality allows some simplifications. The first one is
\[
S_k = P_{c,k|k-1} := P_{k|k-1} + P_{k|k-1} G_{x,k}^{T}\left( \alpha_{x,k}^{-1} I - G_{x,k} P_{k|k-1} G_{x,k}^{T} \right)^{-1} G_{x,k} P_{k|k-1}. \qquad (75)
\]
In fact, the covariance matrix of the estimation error appears in a modified form to account for the uncertainties in the system. At this point, we can conclude that α_{x,k} must now satisfy
\[
\alpha_{x,k}^{-1} I - G_{x,k} P_{k|k-1} G_{x,k}^{T} > 0. \qquad (76)
\]
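To make the constraint concrete, a minimal NumPy sketch is given below; the function names check_alpha_x and corrected_covariance, and the sample matrices, are illustrative assumptions, not from the original text. It verifies (76) for a candidate α_{x,k} and, when the condition holds, evaluates the correction (75):

```python
import numpy as np

def check_alpha_x(alpha_x: float, G_x: np.ndarray, P: np.ndarray) -> bool:
    """Check condition (76): alpha_x^{-1} I - G_x P G_x^T must be positive definite."""
    M = np.eye(G_x.shape[0]) / alpha_x - G_x @ P @ G_x.T
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

def corrected_covariance(alpha_x: float, G_x: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Correction (75): P_c = P + P G_x^T (alpha_x^{-1} I - G_x P G_x^T)^{-1} G_x P."""
    M = np.eye(G_x.shape[0]) / alpha_x - G_x @ P @ G_x.T
    return P + P @ G_x.T @ np.linalg.solve(M, G_x @ P)

# Hypothetical values, only to exercise the check.
P = np.diag([0.5, 0.2])
G_x = np.array([[0.1, 0.0]])
alpha_x = 1.0
if check_alpha_x(alpha_x, G_x, P):
    P_c = corrected_covariance(alpha_x, G_x, P)
```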
Using (74), we can simplify the expressions for Φ*_k, K*_k and P_{k+1|k}. We can define Φ_k given in Step 4 of Table 2 as Φ_k = Φ*_k. The simplified expression for the predictor gain is given by
\[
K_k^{*} = \left( A_k P_{c,k|k-1} C_k^{T} + \Psi_{1,k} \right)\left( C_k P_{c,k|k-1} C_k^{T} + \Psi_{2,k} \right)^{\dagger},
\]
which can be rewritten as presented in Step 4 of Table 2. The expression for the Riccati
equation can be written as
\[
\begin{aligned}
P_{k+1|k} ={}& \left( A_k - K_k^{*} C_k \right) P_{c,k|k-1} \left( A_k - K_k^{*} C_k \right)^{T} \\
&+ \left( B_{w,k} - K_k^{*} D_{w,k} \right) W_{c,k} \left( B_{w,k} - K_k^{*} D_{w,k} \right)^{T} \\
&+ \left( B_{v,k} - K_k^{*} D_{v,k} \right) V_{c,k} \left( B_{v,k} - K_k^{*} D_{v,k} \right)^{T} \\
&+ \alpha_{x,k}^{-1} \left( H_{A,k} - K_k^{*} H_{C,k} \right)\left( H_{A,k} - K_k^{*} H_{C,k} \right)^{T} \\
&+ \alpha_{w,k}^{-1} \left( H_{Bw,k} - K_k^{*} H_{Dw,k} \right)\left( H_{Bw,k} - K_k^{*} H_{Dw,k} \right)^{T} \\
&+ \alpha_{v,k}^{-1} \left( H_{Bv,k} - K_k^{*} H_{Dv,k} \right)\left( H_{Bv,k} - K_k^{*} H_{Dv,k} \right)^{T}.
\end{aligned}
\]
Replacing the expression for K*_k in P_{k+1|k}, we obtain the Riccati equation given in Step 5 of Table 2.
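To see how the substitution collapses the expression, the following worked step may help (a sketch; the shorthand M_{1,k} and M_{2,k} is introduced here for illustration and does not appear in the original text). Writing
\[
M_{1,k} := A_k P_{c,k|k-1} C_k^{T} + \Psi_{1,k}, \qquad
M_{2,k} := C_k P_{c,k|k-1} C_k^{T} + \Psi_{2,k}, \qquad
K_k^{*} = M_{1,k} M_{2,k}^{\dagger},
\]
and expanding the quadratic forms above gives
\[
P_{k+1|k} = A_k P_{c,k|k-1} A_k^{T} + B_{w,k} W_{c,k} B_{w,k}^{T} + B_{v,k} V_{c,k} B_{v,k}^{T} + \Delta_{3,k}
- K_k^{*} M_{1,k}^{T} - M_{1,k} K_k^{*T} + K_k^{*} M_{2,k} K_k^{*T}.
\]
Using K*_k = M_{1,k} M_{2,k}^† together with the pseudo-inverse identity M_{2,k}^† M_{2,k} M_{2,k}^† = M_{2,k}^†, the last three terms collapse to −M_{1,k} M_{2,k}^† M_{1,k}^T, which is exactly the form in Step 5 of Table 2 (note that B_k U_{c,k} B_k^T = B_{w,k} W_{c,k} B_{w,k}^T + B_{v,k} V_{c,k} B_{v,k}^T).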
To obtain an alternative representation, recall the predictor structure:
\[
x_{k+1|k} = \Phi_k x_{k|k-1} + B_{w,k} w_k + B_{v,k} v_k + K_k \left( y_k - C_k x_{k|k-1} - D_{w,k} w_k - D_{v,k} v_k \right). \qquad (77)
\]
Substituting Φ*_k into (77), we obtain
\[
x_{k+1|k} = A_{c,k} x_{k|k-1} + B_{w,k} w_k + B_{v,k} v_k + K_k \left( y_k - C_{c,k} x_{k|k-1} - D_{w,k} w_k - D_{v,k} v_k \right), \qquad (78)
\]
where
\[
A_{c,k} := A_k + A_k P_{k|k-1} G_{x,k}^{T}\left( \alpha_{x,k}^{-1} I - G_{x,k} P_{k|k-1} G_{x,k}^{T} \right)^{-1} G_{x,k}, \qquad (79)
\]
\[
C_{c,k} := C_k + C_k P_{k|k-1} G_{x,k}^{T}\left( \alpha_{x,k}^{-1} I - G_{x,k} P_{k|k-1} G_{x,k}^{T} \right)^{-1} G_{x,k}. \qquad (80)
\]
Once again, it is possible to recover the classical estimator from the structure (79)-(80) when the system has no uncertainties.
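As a reference for the reader, the NumPy sketch below implements one recursion of the predictor of Table 2, following Steps 1-5. It is a sketch under assumptions, not the authors' implementation: the dictionary keys, function names and the requirement that the caller supplies scalars α_{x,k}, α_{w,k}, α_{v,k} satisfying (76), (48) and (49) are all illustrative choices.

```python
import numpy as np

def correction(alpha, G, M):
    """Generic Step 2 correction: M_c = M + M G^T (alpha^{-1} I - G M G^T)^{-1} G M."""
    S = np.eye(G.shape[0]) / alpha - G @ M @ G.T
    return M + M @ G.T @ np.linalg.solve(S, G @ M)

def robust_predictor_step(x, P, y, w, v, sys, alphas):
    """One step of the enhanced robust predictor (Steps 1-5 of Table 2).

    `sys` holds the nominal matrices (A, C, Bw, Bv, Dw, Dv, W, V), the uncertainty
    matrices (HA, HC, HBw, HDw, HBv, HDv, Gx, Gw, Gv); `alphas` = (alpha_x, alpha_w,
    alpha_v) are scalars assumed to satisfy (76), (48) and (49).
    """
    A, C = sys["A"], sys["C"]
    Bw, Bv, Dw, Dv = sys["Bw"], sys["Bv"], sys["Dw"], sys["Dv"]
    W, V = sys["W"], sys["V"]
    HA, HC = sys["HA"], sys["HC"]
    HBw, HDw, HBv, HDv = sys["HBw"], sys["HDw"], sys["HBv"], sys["HDv"]
    Gx, Gw, Gv = sys["Gx"], sys["Gw"], sys["Gv"]
    ax, aw, av = alphas

    # Step 1: uncertainty terms Delta_{1,k}, Delta_{2,k}, Delta_{3,k}.
    D1 = HA @ HC.T / ax + HBw @ HDw.T / aw + HBv @ HDv.T / av
    D2 = HC @ HC.T / ax + HDw @ HDw.T / aw + HDv @ HDv.T / av
    D3 = HA @ HA.T / ax + HBw @ HBw.T / aw + HBv @ HBv.T / av

    # Step 2: corrections due to the uncertainties.
    Pc = correction(ax, Gx, P)
    Wc = correction(aw, Gw, W)
    Vc = correction(av, Gv, V)

    # Step 3: augmented matrices.
    B = np.hstack((Bw, Bv))
    D = np.hstack((Dw, Dv))
    Uc = np.block([[Wc, np.zeros((Wc.shape[0], Vc.shape[1]))],
                   [np.zeros((Vc.shape[0], Wc.shape[1])), Vc]])

    # Step 4: predictor gain and state-transition parameter.
    M1 = A @ Pc @ C.T + B @ Uc @ D.T + D1
    M2 = C @ Pc @ C.T + D @ Uc @ D.T + D2
    K = M1 @ np.linalg.pinv(M2)
    S = np.eye(Gx.shape[0]) / ax - Gx @ P @ Gx.T
    Phi = A + (A - K @ C) @ P @ Gx.T @ np.linalg.solve(S, Gx)

    # Step 5: state prediction (77) and Riccati update.
    x_next = Phi @ x + Bw @ w + Bv @ v + K @ (y - C @ x - Dw @ w - Dv @ v)
    P_next = A @ Pc @ A.T + B @ Uc @ B.T + D3 - M1 @ np.linalg.pinv(M2) @ M1.T
    return x_next, P_next, K, Phi
```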
5. Numerical example
In this section we perform a simulation to illustrate the importance of taking the uncertainties into account in the predictor design. A good way to quantify the performance of the estimator would be to use the true variance of the estimation error. However, this is difficult to obtain from the response of the model. For this reason, we approximate the true variance of the estimation error using the ensemble average (see Ishihara et al. (2006) and Sayed (2001)) given by:
\[
\mathrm{var}\!\left( e_{i,k} \right) \approx \frac{1}{N} \sum_{j=1}^{N} \left( e_{i,k}^{(j)} - \mathrm{E}\!\left( e_{i,k}^{(j)} \right) \right)^{2}, \qquad (81)
\]
\[
\mathrm{E}\!\left( e_{i,k}^{(j)} \right) \approx \frac{1}{N} \sum_{j=1}^{N} e_{i,k}^{(j)}, \qquad (82)
\]
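A minimal NumPy sketch of this ensemble average is shown below; the array layout, with the N realizations along the first axis, is an assumption for illustration only.

```python
import numpy as np

def ensemble_error_variance(errors: np.ndarray) -> np.ndarray:
    """Approximate var(e_{i,k}) over N Monte Carlo runs as in (81)-(82).

    `errors` has shape (N, T, n): N realizations, T time steps, n error components.
    Returns an array of shape (T, n) with the ensemble variance of each component.
    """
    mean = errors.mean(axis=0)                  # (82): ensemble mean E(e_{i,k})
    return ((errors - mean) ** 2).mean(axis=0)  # (81): ensemble variance

# Example with hypothetical data: 500 runs, 100 steps, 2 error components.
rng = np.random.default_rng(0)
errors = rng.normal(size=(500, 100, 2))
var_e = ensemble_error_variance(errors)         # shape (100, 2)
```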
Step 0 (Initial conditions): x_{0|−1} = x_0 and P_{0|−1} = X_0.

Step 1: Obtain scalar parameters α_{x,k}, α_{w,k} and α_{v,k} that satisfy (76), (48) and (49), respectively. Then define
\[
\Delta_{1,k} := \alpha_{x,k}^{-1} H_{A,k} H_{C,k}^{T} + \alpha_{w,k}^{-1} H_{Bw,k} H_{Dw,k}^{T} + \alpha_{v,k}^{-1} H_{Bv,k} H_{Dv,k}^{T},
\]
\[
\Delta_{2,k} := \alpha_{x,k}^{-1} H_{C,k} H_{C,k}^{T} + \alpha_{w,k}^{-1} H_{Dw,k} H_{Dw,k}^{T} + \alpha_{v,k}^{-1} H_{Dv,k} H_{Dv,k}^{T},
\]
\[
\Delta_{3,k} := \alpha_{x,k}^{-1} H_{A,k} H_{A,k}^{T} + \alpha_{w,k}^{-1} H_{Bw,k} H_{Bw,k}^{T} + \alpha_{v,k}^{-1} H_{Bv,k} H_{Bv,k}^{T}.
\]

Step 2: Calculate the corrections due to the presence of uncertainties
\[
P_{c,k|k-1} := P_{k|k-1} + P_{k|k-1} G_{x,k}^{T}\left( \alpha_{x,k}^{-1} I - G_{x,k} P_{k|k-1} G_{x,k}^{T} \right)^{-1} G_{x,k} P_{k|k-1},
\]
\[
W_{c,k} := W_k + W_k G_{w,k}^{T}\left( \alpha_{w,k}^{-1} I - G_{w,k} W_k G_{w,k}^{T} \right)^{-1} G_{w,k} W_k,
\]
\[
V_{c,k} := V_k + V_k G_{v,k}^{T}\left( \alpha_{v,k}^{-1} I - G_{v,k} V_k G_{v,k}^{T} \right)^{-1} G_{v,k} V_k.
\]

Step 3: Define the augmented matrices
\[
B_k := \begin{bmatrix} B_{w,k} & B_{v,k} \end{bmatrix}, \quad
D_k := \begin{bmatrix} D_{w,k} & D_{v,k} \end{bmatrix}, \quad
U_{c,k} := \mathrm{diag}\left( W_{c,k}, V_{c,k} \right).
\]

Step 4: Calculate the parameters of the predictor as
\[
K_k = \left( A_k P_{c,k|k-1} C_k^{T} + B_k U_{c,k} D_k^{T} + \Delta_{1,k} \right)\left( C_k P_{c,k|k-1} C_k^{T} + D_k U_{c,k} D_k^{T} + \Delta_{2,k} \right)^{\dagger},
\]
\[
\Phi_k = A_k + \left( A_k - K_k C_k \right) P_{k|k-1} G_{x,k}^{T}\left( \alpha_{x,k}^{-1} I - G_{x,k} P_{k|k-1} G_{x,k}^{T} \right)^{-1} G_{x,k}.
\]

Step 5: Update x_{k+1|k} and P_{k+1|k} as
\[
x_{k+1|k} = \Phi_k x_{k|k-1} + B_{w,k} w_k + B_{v,k} v_k + K_k \left( y_k - C_k x_{k|k-1} - D_{w,k} w_k - D_{v,k} v_k \right),
\]
\[
\begin{aligned}
P_{k+1|k} ={}& A_k P_{c,k|k-1} A_k^{T} + B_k U_{c,k} B_k^{T} + \Delta_{3,k} \\
&- \left( A_k P_{c,k|k-1} C_k^{T} + B_k U_{c,k} D_k^{T} + \Delta_{1,k} \right)\left( C_k P_{c,k|k-1} C_k^{T} + D_k U_{c,k} D_k^{T} + \Delta_{2,k} \right)^{\dagger}\left( A_k P_{c,k|k-1} C_k^{T} + B_k U_{c,k} D_k^{T} + \Delta_{1,k} \right)^{T}.
\end{aligned}
\]
Table 2. The Enhanced Robust Predictor.
where e_{i,k}^{(j)} is the i-th component of the estimation error vector