
On the Error Covariance Distribution for Kalman Filters with Packet Dropouts


Notice that for any $P_0$ we can always find a $k$ such that $kI_n \geq P_0$, where $I_n$ is the identity matrix of order $n$. From the monotonicity of $\phi(\cdot, S^T_m)$ (Lemma 3.1), it follows that

$$P_T \leq \lim_{k\to\infty} \phi(kI_n, S^T_m). \qquad (47)$$

We then have that

$$P_T \leq P_{T,1} + P_{T,2} + P_{T,3} + P_{T,3}^{\top} + P_{T,4}, \qquad (48)$$

with

$$P_{T,1} = \lim_{k\to\infty} A^T \left( kI_n - k^2 O^{\top} \left( kOO^{\top} + \Sigma_V \right)^{-1} O \right) (A^T)^{\top} \qquad (49)$$

$$P_{T,2} = \sum_{j=0}^{T-1} A^j Q (A^j)^{\top}$$

$$P_{T,3} = -\lim_{k\to\infty} k\, A^T O^{\top} \left( kOO^{\top} + \Sigma_V \right)^{-1} \Sigma_Q$$

$$P_{T,4} = -\lim_{k\to\infty} \Sigma_Q^{\top} \left( kOO^{\top} + \Sigma_V \right)^{-1} \Sigma_Q.$$

Using the matrix inversion lemma, we have that

$$P_{T,1} = A^T \lim_{k\to\infty} \left( k^{-1} I_n + O^{\top} \Sigma_V^{-1} O \right)^{-1} (A^T)^{\top} \qquad (50)$$

$$= A^T \left( O^{\top} \Sigma_V^{-1} O \right)^{-1} (A^T)^{\top}. \qquad (51)$$
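The step from (49) to (50) is the matrix inversion lemma applied with $kI_n$ playing the role of the large prior term. The following is a small, hypothetical NumPy check of that identity with randomly generated stand-ins for $O$ and $\Sigma_V$ (not the matrices of the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 3, 4, 1e4                       # state dimension, stacked output dimension, large k

O = rng.standard_normal((m, n))           # stand-in for the matrix O of Lemma 3.2
Sv = rng.standard_normal((m, m))
Sv = Sv @ Sv.T + np.eye(m)                # positive-definite stand-in for Sigma_V

# Bracketed expression in (49) ...
lhs = k * np.eye(n) - k**2 * O.T @ np.linalg.solve(k * O @ O.T + Sv, O)

# ... equals the inverse appearing in (50), by the matrix inversion lemma
rhs = np.linalg.inv(np.eye(n) / k + O.T @ np.linalg.solve(Sv, O))

print(np.allclose(lhs, rhs))              # True (the identity holds for any k > 0)
```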

It is straightforward to see that $P_{T,3}$ can be written as

$$P_{T,3} = -\lim_{k\to\infty} A^T O^{\top} \left( OO^{\top} + \Sigma_V k^{-1} \right)^{-1} \Sigma_Q \qquad (52)$$

$$= -A^T \lim_{k\to\infty} O^{\top} \Sigma_V^{-1/2} \left( \Sigma_V^{-1/2} O O^{\top} \Sigma_V^{-1/2} + k^{-1} I \right)^{-1} \Sigma_V^{-1/2} \Sigma_Q. \qquad (53)$$

From (Ben-Israel & Greville, 2003, p. 115), it follows that $\lim_{k\to\infty} X^{\top} \left( XX^{\top} + k^{-1} I \right)^{-1} = X^{\dagger}$, for any matrix $X$. Setting $X = \Sigma_V^{-1/2} O$, we have that

$$P_{T,3} = -A^T \left( \Sigma_V^{-1/2} O \right)^{\dagger} \Sigma_V^{-1/2} \Sigma_Q. \qquad (54)$$
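A quick numerical illustration of this pseudo-inverse limit (a hypothetical sketch; the matrix $X$ is randomly generated and not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))                       # any matrix X

k = 1e8                                               # large k approximates the limit
approx = X.T @ np.linalg.inv(X @ X.T + np.eye(5) / k)

print(np.allclose(approx, np.linalg.pinv(X), atol=1e-5))   # True
```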

Using the matrix inversion lemma, we have

$$P_{T,4} = -\lim_{k\to\infty} \Sigma_Q^{\top} \left( \Sigma_V^{-1} - \Sigma_V^{-1} O \left( O^{\top} \Sigma_V^{-1} O + k^{-1} I_n \right)^{-1} O^{\top} \Sigma_V^{-1} \right) \Sigma_Q \qquad (55)$$

$$= -\Sigma_Q^{\top} \left( \Sigma_V^{-1} - \Sigma_V^{-1} O \left( O^{\top} \Sigma_V^{-1} O \right)^{-1} O^{\top} \Sigma_V^{-1} \right) \Sigma_Q$$

and the result follows by substituting (51), (54) and (55) in (48).
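Putting (48), (51), (54) and (55) together, the bound can be evaluated directly from $A$, $Q$, $T$ and the matrices $O$, $\Sigma_V$, $\Sigma_Q$ of Lemma 3.2 (not reproduced in this excerpt). The sketch below is a hypothetical NumPy implementation of that algebra, assuming those matrices are supplied and that $O$ has full column rank so that the inverse in (51) exists:

```python
import numpy as np

def covariance_limit_bound(A, Q, T, O, Sigma_V, Sigma_Q):
    """Evaluate the right-hand side of (48) from the closed forms (51), (54), (55).
    O, Sigma_V, Sigma_Q are the matrices of Lemma 3.2 (assumed given)."""
    AT = np.linalg.matrix_power(A, T)                  # A^T, the T-step transition matrix
    Sv_inv = np.linalg.inv(Sigma_V)

    # Symmetric square root of Sigma_V^{-1}, used in (54)
    w, U = np.linalg.eigh(Sigma_V)
    Sv_inv_half = U @ np.diag(1.0 / np.sqrt(w)) @ U.T

    # (51): P_{T,1} = A^T (O' Sv^{-1} O)^{-1} (A^T)'
    P1 = AT @ np.linalg.inv(O.T @ Sv_inv @ O) @ AT.T

    # P_{T,2} = sum_{j=0}^{T-1} A^j Q (A^j)'
    P2 = sum(np.linalg.matrix_power(A, j) @ Q @ np.linalg.matrix_power(A, j).T
             for j in range(T))

    # (54): P_{T,3} = -A^T (Sv^{-1/2} O)^+ Sv^{-1/2} Sigma_Q
    P3 = -AT @ np.linalg.pinv(Sv_inv_half @ O) @ Sv_inv_half @ Sigma_Q

    # (55): P_{T,4} = -Sigma_Q' [Sv^{-1} - Sv^{-1} O (O' Sv^{-1} O)^{-1} O' Sv^{-1}] Sigma_Q
    M = Sv_inv - Sv_inv @ O @ np.linalg.inv(O.T @ Sv_inv @ O) @ O.T @ Sv_inv
    P4 = -Sigma_Q.T @ M @ Sigma_Q

    return P1 + P2 + P3 + P3.T + P4                    # the matrix bound in (48)
```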


In order to keep the notation consistent with that of Section 3.1, with some abuse of notation we introduce the following definition:

$$\phi(\infty, S^T_m) \triangleq \begin{cases} \bar{P}(S^T_m), & \text{if } O \text{ has full column rank} \\ \infty I_n, & \text{otherwise} \end{cases} \qquad (56)$$

where $\infty I_n$ is the $n \times n$ diagonal matrix with $\infty$ on every entry of the main diagonal. Then, we obtain a lower bound for $F(x)$ as follows:

$$F_T(x) = \sum_{m=0}^{2^T-1} P\left(S^T_m\right) H\!\left(x - \mathrm{Tr}\{\phi(\infty, S^T_m)\}\right). \qquad (57)$$
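For illustration, (57) can be evaluated directly by enumerating the $2^T$ dropout sequences. The sketch below is hypothetical: it assumes i.i.d. packet arrivals with probability $\lambda$, so that $P(S^T_m) = \lambda^{\#\mathrm{ones}}(1-\lambda)^{\#\mathrm{zeros}}$, and it takes a user-supplied function `trace_phi_inf(seq)` returning $\mathrm{Tr}\{\phi(\infty, S^T_m)\}$ (returning `inf` when the corresponding $O$ is rank deficient):

```python
import itertools

def lower_bound_cdf(x, T, lam, trace_phi_inf):
    """Evaluate the lower bound F_T(x) of (57) by enumerating all 2^T sequences."""
    total = 0.0
    for seq in itertools.product((0, 1), repeat=T):     # all arrival sequences S^T_m
        ones = sum(seq)
        prob = lam**ones * (1.0 - lam)**(T - ones)       # P(S^T_m) under i.i.d. arrivals
        if trace_phi_inf(seq) <= x:                      # step function H(x - Tr{phi(inf, S)})
            total += prob
    return total
```

Since the enumeration grows as $2^T$, this is practical only for small horizons, consistent with the modest values of $T$ used in the example of Section 3.4.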

3.3 Monotonic approximation of the bounds to F(x)

In this section we show that the bounds $F_T(x)$ and $F^T(x)$ in (24) approach $F(x)$ monotonically as $T$ tends to infinity. This is stated in the following theorem.

Theorem 3.1. We have that

$$F_{T+1}(x) \geq F_T(x) \qquad (58)$$

$$F^{T+1}(x) \leq F^T(x). \qquad (59)$$

Moreover, the bounds $F_T(x)$ and $F^T(x)$ approach the true CDF $F(x)$ monotonically as $T$ tends to infinity.

Proof: Let $S^T_m$ be a sequence of length $T$. From (17) and Lemma 3.1, for any $P_0 > 0$ we have

$$\phi(P_0, \{S^T_m, 0\}) = \phi(\Phi_0(P_0), S^T_m) \leq \phi(\infty, S^T_m). \qquad (60)$$

From the monotonicity of $\phi(\cdot, S^T_m)$ and $\Phi_0(\cdot)$, stated in Lemma 3.1, we have

$$\phi(P_0, \{S^T_m, 0\}) = \phi(\Phi_0(P_0), S^T_m) \geq \phi(P_0, S^T_m) \qquad (61)$$

which implies that

$$\phi(\infty, \{S^T_m, 0\}) \geq \phi(\infty, S^T_m). \qquad (62)$$

From (60) and (62), we have

$$\phi(\infty, \{S^T_m, 0\}) = \phi(\infty, S^T_m). \qquad (63)$$

Also, if the matrix $O$ (defined in Lemma 3.2) resulting from the sequence $S^T_m$ has full column rank, then so does the same matrix resulting from the sequence $\{S^T_m, 1\}$. This implies that

$$\phi(\infty, \{S^T_m, 1\}) \leq \phi(\infty, S^T_m). \qquad (64)$$

Now, from Lemma 3.1, $\Phi_0(P) \geq P$, and therefore,

$$\phi(P, \{S^T_m, 0\}) = \phi(\Phi_0(P), S^T_m) \qquad (65)$$

$$\geq \phi(P, S^T_m). \qquad (66)$$


[Figure: $P(\mathrm{trace}(P) \leq x)$ versus $x$, for $x$ from 0 to 10000, showing the upper and lower bounds for several values of $T$ together with the experimental (Monte Carlo) estimate.]

Fig. 1. Upper and lower bounds for the Error Covariance.

Also, since $\Phi_1(P) = P$, we have that

$$\phi(P, \{S^T_m, 1\}) = \phi(\Phi_1(P), S^T_m) \qquad (67)$$

$$= \phi(P, S^T_m). \qquad (68)$$

Hence, for any binary variable $\gamma$, we have that

$$\phi(\infty, \{S^T_m, \gamma\}) \leq \phi(\infty, S^T_m) \qquad (69)$$

$$\phi(P, \{S^T_m, \gamma\}) \geq \phi(P, S^T_m). \qquad (70)$$

Now notice that the bounds (29) and (57) only differ in the position of the step functions $H(\cdot)$. Hence, the result follows from (69) and (70).

3.4 Example

Consider the system below, which is taken from Sinopoli et al. (2004):

$$A = \begin{bmatrix} 1.25 & 0 \\ 1 & 1.1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 1 \end{bmatrix}, \quad Q = \begin{bmatrix} 20 & 0 \\ 0 & 20 \end{bmatrix}, \quad R = 2.5, \qquad (71)$$

with $\lambda = 0.5$. In Figure 1 we show the upper bound $F^T(x)$ and the lower bound $F_T(x)$, for $T = 3$, $T = 5$ and $T = 8$. We also show an estimate of the true CDF $F(x)$ obtained from a Monte Carlo simulation using 10,000 runs. Notice that, as $T$ increases, the bounds become tighter, and for $T = 8$ it is hard to distinguish between the lower and the upper bounds.
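An estimate of the kind labelled "Experimental" can be obtained by Monte Carlo simulation of the covariance recursion under i.i.d. dropouts. The following hypothetical NumPy sketch does this for the system in (71); the number of time steps and the use of the posterior covariance are illustrative choices, not taken from the chapter:

```python
import numpy as np

A = np.array([[1.25, 0.0], [1.0, 1.1]])
C = np.array([[1.0, 1.0]])
Q = 20.0 * np.eye(2)
R = np.array([[2.5]])
lam = 0.5                                    # packet arrival probability

def run_trace(P0, steps, rng):
    """One realization of trace(P_t): Riccati update when the packet arrives,
    prediction-only update when it is dropped."""
    P = P0.copy()
    for _ in range(steps):
        P = A @ P @ A.T + Q                              # time update
        if rng.random() < lam:                           # measurement received
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            P = P - K @ C @ P                            # measurement update
    return np.trace(P)

rng = np.random.default_rng(0)
samples = np.array([run_trace(np.eye(2), steps=100, rng=rng) for _ in range(10_000)])

# Empirical CDF of trace(P_t), comparable in spirit to the experimental curve in Fig. 1
xs = np.linspace(0.0, 10_000.0, 200)
cdf = (samples[:, None] <= xs[None, :]).mean(axis=0)
```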


4. Bounds for the expected error covariance

In this section we derive upper and lower bounds for the trace $G$ of the asymptotic EEC, i.e.,

$$G = \lim_{t\to\infty} \mathrm{Tr}\{E\{P_t\}\}. \qquad (72)$$

Since $P_t$ is positive-semidefinite, we have that

$$\mathrm{Tr}\{E\{P_t\}\} = \int_0^{\infty} (1 - F_t(x))\, dx. \qquad (73)$$

Hence,²

$$G = \int_0^{\infty} \left(1 - \lim_{t\to\infty} F_t(x)\right) dx \qquad (75)$$

$$= \int_0^{\infty} (1 - F(x))\, dx. \qquad (76)$$
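Equation (73) is the standard identity expressing the mean of a nonnegative random variable through its survival function; a small, self-contained numerical check with arbitrary (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.exponential(scale=3.0, size=100_000)    # any nonnegative random variable

xs = np.linspace(0.0, z.max(), 5_000)
survival = 1.0 - np.searchsorted(np.sort(z), xs, side="right") / z.size  # 1 - empirical CDF

dx = xs[1] - xs[0]
print(z.mean(), survival.sum() * dx)            # both values are close to 3.0
```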

4.1 Lower bounds for the EEC

In view of (76), a lower bound for $G$ can be obtained from an upper bound of $F(x)$. One such bound is $F^T(x)$, derived in Section 3.1. A limitation of $F^T(x)$ is that $F^T(x) = 1$ for all $x > \mathrm{Tr}\{\phi(P, S^T_0)\}$, hence it is too conservative for large values of $x$. To get around this, we introduce an alternative upper bound for $F(x)$, denoted by $\hat{F}(x)$.

Our strategy for doing so is to group the sequences $S^T_m$, $m = 0, 1, \cdots, 2^T - 1$, according to the number of consecutive lost measurements at their end. Then, from each group, we only consider the worst sequence, i.e., the one producing the smallest EEC trace.

Notice that the sequences $S^T_m$ with $m < 2^{T-z}$, $0 \leq z \leq T$, are those having the last $z$ elements equal to zero. Then, from (25) and (26), it follows that

$$\arg\min_{0 \leq m < 2^{T-z}} \mathrm{Tr}\{\phi(X, S^T_m)\} = 2^{T-z} - 1, \qquad (77)$$

i.e., among all sequences with $z$ zeroes at their end, the one that produces the smallest EEC trace has its first $T - z$ elements equal to one. Using this, an upper bound for $F(x)$ is given by

$$F(x) \leq \hat{F}(x) \triangleq 1 - (1 - \lambda)^{k(x)} \qquad (78)$$

where

$$k(x) = \begin{cases} 0, & x \leq \mathrm{Tr}\{P\} \\ \min\left\{ j : \mathrm{Tr}\{\phi(P, S^j_0)\} > x \right\}, & x > \mathrm{Tr}\{P\}. \end{cases} \qquad (79)$$
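In (79), $S^j_0$ is the length-$j$ sequence consisting entirely of dropped packets, so $\phi(P, S^j_0)$ amounts to $j$ consecutive prediction-only updates of $P$. Under that reading, $k(x)$ and the bound (78) can be evaluated as in the following hypothetical sketch ($A$, $Q$, the matrix $P$ and the arrival probability $\lambda$ are assumed given):

```python
import numpy as np

def k_of_x(x, A, Q, P):
    """k(x) as in (79): number of consecutive dropped packets needed for the
    predicted covariance trace to exceed x (assumes it eventually does,
    e.g. when A is unstable)."""
    if x <= np.trace(P):
        return 0
    Pj, j = P.copy(), 0
    while np.trace(Pj) <= x:
        Pj = A @ Pj @ A.T + Q          # one more lost packet: prediction-only step
        j += 1
    return j

def F_hat(x, A, Q, P, lam):
    """Alternative upper bound (78): 1 - (1 - lambda)^{k(x)}."""
    return 1.0 - (1.0 - lam) ** k_of_x(x, A, Q, P)
```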

² Following the argument in Theorem 3.1, it can be verified that $(1 - F_t(x)) \leq \tilde{F}(x)$, with

$$\tilde{F}(x) = \begin{cases} 1, & x \leq \mathrm{Tr}\{P_0\} \\ F(x), & x > \mathrm{Tr}\{P_0\}. \end{cases} \qquad (74)$$

Hence, using Lebesgue's dominated convergence theorem, the limit can be exchanged with the integral whenever $\int_0^{\infty} (1 - F(x))\, dx < \infty$, i.e., whenever the asymptotic EEC is finite.


We can now use both $F^T(x)$ and $\hat{F}(x)$ to obtain a lower bound $G_T$ for $G$ as follows:

$$G_T = \int_0^{\infty} \left(1 - \min\{F^T(x), \hat{F}(x)\}\right) dx. \qquad (80)$$
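Given implementations of the two upper bounds (for instance the sketches above), the integral in (80) can be approximated numerically on a truncated grid. A hypothetical sketch, where `F_T_upper` and `F_hat` are callables for $F^T(x)$ and $\hat{F}(x)$ and `x_max` truncates the improper integral:

```python
import numpy as np

def lower_bound_eec(F_T_upper, F_hat, x_max, num=10_000):
    """Approximate G_T in (80) by a rectangle-rule integral on [0, x_max]."""
    xs = np.linspace(0.0, x_max, num)
    integrand = 1.0 - np.minimum([F_T_upper(x) for x in xs],
                                 [F_hat(x) for x in xs])
    return float(np.sum(integrand) * (xs[1] - xs[0]))
```

Because both bounds tend to 1, the integrand decays, and the truncation error can be kept small by choosing `x_max` large enough that $\hat{F}(x)$ is close to 1.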

The next lemma states the regions in which each bound is less conservative.

Lemma 4.1. The following properties hold true:

$$F^T(x) \leq \hat{F}(x), \quad \forall\, x \leq \mathrm{Tr}\left\{\phi(P, S^T_0)\right\} \qquad (81)$$

$$F^T(x) > \hat{F}(x), \quad \forall\, x > \mathrm{Tr}\left\{\phi(P, S^T_0)\right\}. \qquad (82)$$

Proof: Define

$$Z(i, j) \triangleq \mathrm{Tr}\left\{\phi(P, S^j_i)\right\}. \qquad (83)$$

To prove (81), notice that $F^T(x)$ can be written as

$$F^T(x) = \sum_{\substack{j=0 \\ j:\, Z(j,T) \leq x}}^{2^T-1} P\left(S^T_j\right). \qquad (84)$$

Substituting $x = Z(0, K)$ we have, for all $1 < K \leq T$,

$$F^T(Z(0, K)) = \sum_{\substack{j=0 \\ j:\, Z(j,T) \leq Z(0,K)}}^{2^T-1} P\left(S^T_j\right) \qquad (85)$$

$$= 1 - \sum_{\substack{j=0 \\ j:\, Z(j,T) > Z(0,K)}}^{2^T-1} P\left(S^T_j\right). \qquad (86)$$

Now, notice that the summation in (86) includes, but is not limited to, all the sequences finishing with $K$ zeroes. Hence

$$\sum_{\substack{j=0 \\ j:\, Z(j,T) > Z(0,K)}}^{2^T-1} P\left(S^T_j\right) \geq (1 - \lambda)^K \qquad (87)$$