Discrete Time Systems by Mario A. Jordan and Jorge L. Bustamante

Department of Information Statistics, Gyeongsang National University, South Korea

1. Introduction

The integration of information from a combination of different types of observation instruments (sensors) is often used in the design of high-accuracy control systems. Typical applications that benefit from the use of multiple sensors include industrial tasks, military command, mobile robot navigation, multi-target tracking, and aircraft navigation (see (Hall, 1992; Bar-Shalom, 1990; Bar-Shalom & Li, 1995; Zhu, 2002; Ren & Key, 1989) and references therein). One problem that arises from the use of multiple sensors is that if all local sensors observe the same target, the question becomes how to effectively combine the corresponding local estimates. Several distributed fusion architectures have been discussed in (Alouani, 2005; Bar-Shalom & Campo, 1986; Bar-Shalom, 2006; Li et al., 2003; Berg & Durrant-Whyte, 1994; Hashemipour et al., 1998), and algorithms for distributed estimation fusion have been developed in (Bar-Shalom & Campo, 1986; Chang et al., 1997; Chang et al., 2002; Deng et al., 2005; Sun, 2004; Zhou et al., 2006; Zhu et al., 1999; Zhu et al., 2001; Roecker & McGillem, 1998; Shin et al., 2006). To this end, the Bar-Shalom and Campo fusion formula (Bar-Shalom & Campo, 1986) for two-sensor systems has been generalized to an arbitrary number of sensors in (Deng et al., 2005; Sun, 2004; Shin et al., 2007). The formula represents an optimal mean-square linear combination of the local estimates with matrix weights. The analogous formula for weighting an arbitrary number of local estimates using scalar weights has been proposed in (Shin et al., 2007; Sun & Deng, 2005; Lee & Shin, 2007).

However, because of the lack of prior information, distributed filtering using the fusion formula is in general globally suboptimal compared with optimal centralized filtering (Chang et al., 1997). Nevertheless, it has the advantages of lower computational requirements, efficient communication costs, parallel implementation, and fault tolerance (Chang et al., 1997; Chang et al., 2002; Roecker & McGillem, 1998). Therefore, in spite of its limitations, the fusion formula has been widely used and is often preferable to centralized filtering in real applications.

The aforementioned papers have not focused on the prediction problem; most of them have considered only distributed filtering in multisensor continuous and discrete dynamic models. Direct generalization of the distributed fusion filtering algorithms to the prediction problem is impossible. Distributed prediction requires special algorithms, one of which, for discrete-time systems, was presented in (Song et al., 2009). In this paper, we generalize the results of (Song et al., 2009) to mixed continuous-discrete systems. The continuous-discrete approach allows the system to avoid discretization by propagating the estimate and error covariance between observations in continuous time using an integration routine such as Runge-Kutta. This approach yields the optimal or suboptimal estimate continuously at all times, including times between the data arrival instants. One advantage of the continuous-discrete filter over the alternative approach using system discretization is that in the former it is not necessary for the sample times to be equally spaced. This means that the cases of irregular and intermittent measurements are easy to handle. In the absence of data, the optimal prediction is given by performing only the time update portion of the algorithm.

Thus, the primary aim of this paper is to propose two distributed fusion predictors that use the fusion formula with matrix weights, and to analyze their statistical properties and the relationship between them. Then, through a comparison with an optimal centralized predictor, the performance of the novel predictors is evaluated.

This chapter is organized as follows. In Section 2, we present the statement of the continuous-discrete prediction problem in a multisensor environment and give its optimal solution. In Section 3, we propose two fusion predictors derived by using the fusion formula and establish the equivalence between them. The unbiasedness of the fusion predictors is also proved. The performance of the proposed predictors is studied in examples in Section 4. Finally, concluding remarks are presented in Section 5.

2. Statement of the problem: the centralized predictor

We consider a linear system described by the stochastic differential equation

$$\dot{x}_t = F_t x_t + G_t v_t, \quad t \ge 0, \qquad (1)$$

where $x_t \in \mathbb{R}^n$ is the state, $v_t \in \mathbb{R}^q$ is a zero-mean Gaussian white noise with covariance $\mathrm{E}(v_t v_s^{\mathrm{T}}) = Q_t \delta(t-s)$, and $F_t \in \mathbb{R}^{n \times n}$, $G_t \in \mathbb{R}^{n \times q}$, and $Q_t \in \mathbb{R}^{q \times q}$.

Suppose that the overall discrete observations $Y_{t_k} \in \mathbb{R}^m$ at time instants $t_1, t_2, \ldots$ are composed of $N$ observation subvectors (local sensors) $y_{t_k}^{(1)}, \ldots, y_{t_k}^{(N)}$, i.e.,

$$Y_{t_k} = \bigl[ y_{t_k}^{(1)\mathrm{T}} \cdots \, y_{t_k}^{(N)\mathrm{T}} \bigr]^{\mathrm{T}}, \qquad (2)$$

where $y_{t_k}^{(i)}$, $i = 1, \ldots, N$, are determined by the equations

$$y_{t_k}^{(1)} = H_{t_k}^{(1)} x_{t_k} + w_{t_k}^{(1)}, \quad y_{t_k}^{(1)} \in \mathbb{R}^{m_1},$$
$$\vdots \qquad (3)$$
$$y_{t_k}^{(N)} = H_{t_k}^{(N)} x_{t_k} + w_{t_k}^{(N)}, \quad y_{t_k}^{(N)} \in \mathbb{R}^{m_N},$$
$$k = 1, 2, \ldots; \quad t_{k+1} > t_k, \quad t_0 = 0; \quad m = m_1 + \cdots + m_N,$$

where $y_{t_k}^{(i)} \in \mathbb{R}^{m_i}$ is the local sensor observation, $H_{t_k}^{(i)} \in \mathbb{R}^{m_i \times n}$, and $\{ w_{t_k}^{(i)} \in \mathbb{R}^{m_i},\ k = 1, 2, \ldots \}$ are zero-mean white Gaussian sequences, $w_{t_k}^{(i)} \sim \mathcal{N}\bigl(0, R_{t_k}^{(i)}\bigr)$, $i = 1, \ldots, N$. The distribution of the initial state $x_0$ is Gaussian, $x_0 \sim \mathcal{N}(\bar{x}_0, P_0)$, and $x_0$, $v_t$, and $\{ w_{t_k}^{(i)},\ i = 1, \ldots, N \}$ are assumed mutually uncorrelated.
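As a concrete illustration of the model (1)-(3), the following sketch simulates a hypothetical two-state system observed by two scalar sensors at irregular instants. All matrices, the step size, and the observation times here are invented for the example, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constant system matrices for illustration (n = 2, q = 1).
F = np.array([[0.0, 1.0],
              [0.0, -0.5]])   # F_t in (1), taken time-invariant here
G = np.array([[0.0],
              [1.0]])         # G_t in (1)
Q = np.array([[0.2]])         # intensity Q_t of the white noise v_t

# Two local sensors (N = 2), each observing one state component, as in (3).
H = [np.array([[1.0, 0.0]]),  # H^(1), m_1 = 1
     np.array([[0.0, 1.0]])]  # H^(2), m_2 = 1
R = [np.array([[0.1]]),       # R^(1)
     np.array([[0.3]])]       # R^(2)

def simulate(x0, t_obs, dt=1e-3):
    """Euler-Maruyama integration of dx = F x dt + G Q^{1/2} dW between the
    (possibly irregular) observation instants t_obs; at each t_k the N sensor
    observations y^(i)_{t_k} = H^(i) x_{t_k} + w^(i)_{t_k} are drawn."""
    x, t = x0.copy(), 0.0
    ys = []
    for tk in t_obs:
        while t < tk:
            h = min(dt, tk - t)
            dW = rng.normal(0.0, np.sqrt(h), size=(Q.shape[0], 1))
            x = x + F @ x * h + G @ np.linalg.cholesky(Q) @ dW
            t += h
        ys.append([Hi @ x + np.linalg.cholesky(Ri) @ rng.normal(size=(1, 1))
                   for Hi, Ri in zip(H, R)])
    return x, ys

x_final, observations = simulate(np.zeros((2, 1)), t_obs=[0.1, 0.25, 0.3, 0.7])
```

Note that nothing requires the instants in `t_obs` to be equally spaced, which is exactly the irregular-sampling situation the continuous-discrete formulation is meant to handle.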

Distributed Fusion Prediction for Mixed Continuous-Discrete Linear Systems

A problem associated with such systems is to find the distributed weighted fusion predictor $\hat{x}_{t+\Delta}$, $\Delta \ge 0$, of the state $x_{t+\Delta}$ based on the overall current sensor observations

$$Y^{t_k} = \{ Y_{t_1}, \ldots, Y_{t_k} \}, \quad t_1 < \cdots < t_k \le t \le t + \Delta, \quad \Delta \ge 0. \qquad (4)$$

2.1 The optimal centralized predictor

The optimal centralized predictor is constructed by analogy with the continuous-discrete Kalman filter (Lewis, 1986; Gelb, 1974). In this case the prediction estimate $\hat{x}_{t+\Delta}^{\,opt}$ and its error covariance $P_{t+\Delta}^{\,opt}$ are determined by combining the time update and the observation update:

$$\dot{\hat{x}}_s^{\,opt} = F_s \hat{x}_s^{\,opt}, \quad t_k \le s \le t+\Delta, \quad \hat{x}_{s=t_k}^{\,opt} = \hat{x}_{t_k}^{\,opt},$$
$$\dot{P}_s^{\,opt} = F_s P_s^{\,opt} + P_s^{\,opt} F_s^{\mathrm{T}} + \bar{Q}_s, \quad P_{s=t_k}^{\,opt} = P_{t_k}^{\,opt}, \qquad (5)$$

where the initial conditions are the filtering estimate $\hat{x}_{t_k}^{\,opt}$ of the state and its error covariance $P_{t_k}^{\,opt}$, which are given by the continuous-discrete Kalman filter equations (Lewis, 1986; Gelb, 1974):

Time update between observations:

$$\dot{\hat{x}}_\tau^{\,opt-} = F_\tau \hat{x}_\tau^{\,opt-}, \quad t_{k-1} \le \tau \le t_k, \quad \hat{x}_{\tau=t_{k-1}}^{\,opt-} = \hat{x}_{t_{k-1}}^{\,opt},$$
$$\dot{P}_\tau^{\,opt-} = F_\tau P_\tau^{\,opt-} + P_\tau^{\,opt-} F_\tau^{\mathrm{T}} + \bar{Q}_\tau, \quad P_{\tau=t_{k-1}}^{\,opt-} = P_{t_{k-1}}^{\,opt}, \qquad (6a)$$

Observation update at time $t_k$:

$$\hat{x}_{t_k}^{\,opt} = \hat{x}_{t_k}^{\,opt-} + L_{t_k}^{\,opt} \bigl( Y_{t_k} - H_{t_k} \hat{x}_{t_k}^{\,opt-} \bigr),$$
$$L_{t_k}^{\,opt} = P_{t_k}^{\,opt-} H_{t_k}^{\mathrm{T}} \bigl( H_{t_k} P_{t_k}^{\,opt-} H_{t_k}^{\mathrm{T}} + R_{t_k} \bigr)^{-1}, \qquad (6b)$$
$$P_{t_k}^{\,opt} = \bigl( I_n - L_{t_k}^{\,opt} H_{t_k} \bigr) P_{t_k}^{\,opt-}.$$

Here $I_n$ is the $n \times n$ identity matrix, $\bar{Q}_t = G_t Q_t G_t^{\mathrm{T}}$, $Y_{t_k} = \bigl[ y_{t_k}^{(1)\mathrm{T}} \cdots \, y_{t_k}^{(N)\mathrm{T}} \bigr]^{\mathrm{T}}$, $H_{t_k} = \bigl[ H_{t_k}^{(1)\mathrm{T}} \cdots \, H_{t_k}^{(N)\mathrm{T}} \bigr]^{\mathrm{T}}$, $R_{t_k} = \mathrm{diag}\bigl( R_{t_k}^{(1)} \cdots R_{t_k}^{(N)} \bigr)$, and the matrices $F_t$, $G_t$, $Q_t$, and $R_{t_k}^{(i)}$ are defined in (1)-(3). Note that in the absence of the observation $Y_{t_k}$, the centralized predictor reduces to the two time update equations (5) and (6a); when an observation is present at time $t = t_k$, the initial conditions $\hat{x}_{t_k}^{\,opt}$ and $P_{t_k}^{\,opt}$ for (5) are computed by the observation update equations (6b).
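A minimal numerical sketch of the centralized predictor above, assuming time-invariant matrices and using simple Euler integration in place of a Runge-Kutta routine; the function names and step size are our own, not from the chapter.

```python
import numpy as np

def time_update(x, P, F, Q_bar, t0, t1, dt=1e-3):
    """Integrate the time-update ODEs of (5)/(6a) from t0 to t1:
    x' = F x,  P' = F P + P F^T + Q_bar  (Euler scheme for brevity)."""
    t = t0
    while t < t1:
        h = min(dt, t1 - t)
        x = x + h * (F @ x)
        P = P + h * (F @ P + P @ F.T + Q_bar)
        t += h
    return x, P

def observation_update(x, P, Y, H_list, R_list):
    """Observation update (6b) with the stacked H = [H^(1); ...; H^(N)]
    and block-diagonal R = diag(R^(1), ..., R^(N))."""
    H = np.vstack(H_list)
    m = sum(R_i.shape[0] for R_i in R_list)
    R = np.zeros((m, m))
    r = 0
    for R_i in R_list:              # assemble block-diagonal R
        k = R_i.shape[0]
        R[r:r+k, r:r+k] = R_i
        r += k
    L = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain L_{t_k}
    x = x + L @ (Y - H @ x)
    P = (np.eye(P.shape[0]) - L @ H) @ P
    return x, P
```

Between observation instants, and beyond the last one for prediction, only `time_update` runs, which mirrors the remark above that in the absence of data only the time update portion of the algorithm is performed.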

Many advanced systems now make use of a large number of sensors in practical applications ranging from aerospace and defence, robotics, and automation systems to the monitoring and control of process generation plants. Recent developments in integrated sensor network systems have further motivated the search for decentralized signal processing algorithms. An important practical problem in the above systems is to find a fusion estimate that combines the information from the various local estimates to produce a global (fusion) estimate. Moreover, there are several limitations of centralized estimators in practical implementation, such as computational cost and capacity of data transmission. Also, the numerical errors of the centralized estimator design increase drastically with the dimension of the state $x_t \in \mathbb{R}^n$ and of the overall observations $Y_{t_k} \in \mathbb{R}^m$. In these cases the centralized estimators may be impractical. In the next section, we propose two new fusion predictors for multisensor mixed continuous-discrete linear systems (1), (3).

3. Two distributed fusion predictors

The derivation of the fusion predictors is based on the assumption that the overall observation vector $Y_{t_k}$ combines the local subvectors (individual sensors) $y_{t_k}^{(1)}, \ldots, y_{t_k}^{(N)}$, which can be processed separately. According to (1) and (3), we have $N$ unconnected dynamic subsystems ($i = 1, \ldots, N$) with the common state $x_t$ and local sensor $y_{t_k}^{(i)}$:

$$\dot{x}_t = F_t x_t + G_t v_t, \quad t \ge t_0,$$
$$y_{t_k}^{(i)} = H_{t_k}^{(i)} x_{t_k} + w_{t_k}^{(i)}, \qquad (7)$$
$$k = 1, 2, \ldots; \quad t_{k+1} > t_k, \quad t_0 = 0,$$

where $i$ is the index of the subsystem. Then, by analogy with the centralized prediction equations (5), (6), the optimal local predictor $\hat{x}_{t+\Delta}^{(i)}$ based on the overall local observations $\bigl\{ y_{t_1}^{(i)}, \ldots, y_{t_k}^{(i)},\ t_k \le t \le t+\Delta \bigr\}$ satisfies the following time update and observation update equations:

$$\dot{\hat{x}}_s^{(i)} = F_s \hat{x}_s^{(i)}, \quad t_k \le s \le t+\Delta, \quad \hat{x}_{s=t_k}^{(i)} = \hat{x}_{t_k}^{(i)},$$
$$\dot{P}_s^{(ii)} = F_s P_s^{(ii)} + P_s^{(ii)} F_s^{\mathrm{T}} + \bar{Q}_s, \quad P_{s=t_k}^{(ii)} = P_{t_k}^{(ii)}, \qquad (8)$$

where the initial conditions $\hat{x}_{t_k}^{(i)}$ and its error covariance $P_{t_k}^{(ii)}$ are given by the continuous-discrete Kalman filter equations:

Time update between observations:

$$\dot{\hat{x}}_\tau^{(i)-} = F_\tau \hat{x}_\tau^{(i)-}, \quad t_{k-1} \le \tau \le t_k, \quad \hat{x}_{\tau=t_{k-1}}^{(i)-} = \hat{x}_{t_{k-1}}^{(i)},$$
$$\dot{P}_\tau^{(ii)-} = F_\tau P_\tau^{(ii)-} + P_\tau^{(ii)-} F_\tau^{\mathrm{T}} + \bar{Q}_\tau, \quad P_{\tau=t_{k-1}}^{(ii)-} = P_{t_{k-1}}^{(ii)}, \qquad (9a)$$

Observation update at time $t_k$:

$$\hat{x}_{t_k}^{(i)} = \hat{x}_{t_k}^{(i)-} + L_{t_k}^{(i)} \bigl( y_{t_k}^{(i)} - H_{t_k}^{(i)} \hat{x}_{t_k}^{(i)-} \bigr),$$
$$L_{t_k}^{(i)} = P_{t_k}^{(ii)-} H_{t_k}^{(i)\mathrm{T}} \bigl( H_{t_k}^{(i)} P_{t_k}^{(ii)-} H_{t_k}^{(i)\mathrm{T}} + R_{t_k}^{(i)} \bigr)^{-1}, \qquad (9b)$$
$$P_{t_k}^{(ii)} = \bigl( I_n - L_{t_k}^{(i)} H_{t_k}^{(i)} \bigr) P_{t_k}^{(ii)-}.$$

Thus from (8) we have the $N$ local filtering estimates $\hat{x}_t^{(i)} = \hat{x}_{s=t}^{(i)}$ and prediction estimates $\hat{x}_{t+\Delta}^{(i)} = \hat{x}_{s=t+\Delta}^{(i)}$, and the corresponding error covariances $P_t^{(ii)}$ and $P_{t+\Delta}^{(ii)}$, for $i = 1, \ldots, N$ and $t \ge t_k$. Using these values we propose two fusion prediction algorithms.
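Since each local predictor (8)-(9b) is just the continuous-discrete Kalman filter driven by a single sensor, the $N$ local estimates can be produced by running the same routine independently, sensor by sensor. A minimal sketch of the local observation update (9b), with invented names and data, is:

```python
import numpy as np

def local_observation_update(x_i, P_ii, y_i, H_i, R_i):
    """Observation update (9b) for sensor i: same form as the centralized
    update, but it sees only y^(i), H^(i), R^(i)."""
    S = H_i @ P_ii @ H_i.T + R_i                        # local innovation covariance
    L_i = P_ii @ H_i.T @ np.linalg.inv(S)               # local gain L^(i)_{t_k}
    x_i = x_i + L_i @ (y_i - H_i @ x_i)
    P_ii = (np.eye(P_ii.shape[0]) - L_i @ H_i) @ P_ii
    return x_i, P_ii

# N = 2 independent local filters, updated one sensor at a time:
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
R = [np.array([[0.1]]), np.array([[0.2]])]
y = [np.array([[0.9]]), np.array([[-0.1]])]
locals_ = [local_observation_update(np.zeros((2, 1)), np.eye(2), y[i], H[i], R[i])
           for i in range(2)]
```

Between observations, each local pair $(\hat{x}^{(i)}, P^{(ii)})$ is propagated by the sensor-free ODEs (8)/(9a), so the local filters never need to exchange data before the fusion step.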


3.1 The fusion of local predictors (FLP Algorithm)

The fusion predictor $\hat{x}_{t+\Delta}^{FLP}$ of the state $x_{t+\Delta}$ based on the overall sensors (2), (3) is constructed from the local predictors $\hat{x}_{t+\Delta}^{(i)}$, $i = 1, \ldots, N$, by using the fusion formula (Zhou et al., 2006; Shin et al., 2006):

$$\hat{x}_{t+\Delta}^{FLP} = \sum_{i=1}^{N} a_{t+\Delta}^{(i)} \hat{x}_{t+\Delta}^{(i)}, \qquad \sum_{i=1}^{N} a_{t+\Delta}^{(i)} = I_n, \qquad (10)$$

where $a_{t+\Delta}^{(1)}, \ldots, a_{t+\Delta}^{(N)}$ are $n \times n$ time-varying matrix weights determined from the mean-square criterion

$$J_{t+\Delta} = \mathrm{E} \Bigl\| x_{t+\Delta} - \sum_{i=1}^{N} a_{t+\Delta}^{(i)} \hat{x}_{t+\Delta}^{(i)} \Bigr\|^2. \qquad (11)$$

Theorems 1 and 2 completely define the fusion predictor $\hat{x}_{t+\Delta}^{FLP}$ and its overall error covariance $P_{t+\Delta}^{FLP} = \mathrm{cov}\bigl( \tilde{x}_{t+\Delta}^{FLP}, \tilde{x}_{t+\Delta}^{FLP} \bigr)$, $\tilde{x}_{t+\Delta}^{FLP} = x_{t+\Delta} - \hat{x}_{t+\Delta}^{FLP}$.

Theorem 1: Let $\hat{x}_{t+\Delta}^{(1)}, \ldots, \hat{x}_{t+\Delta}^{(N)}$ be the local predictors of an unknown state $x_{t+\Delta}$. Then

a. The weights $a_{t+\Delta}^{(1)}, \ldots, a_{t+\Delta}^{(N)}$ satisfy the linear algebraic equations

$$\sum_{i=1}^{N} a_{t+\Delta}^{(i)} \bigl[ P_{t+\Delta}^{(ij)} - P_{t+\Delta}^{(iN)} \bigr] = 0, \qquad \sum_{i=1}^{N} a_{t+\Delta}^{(i)} = I_n, \qquad j = 1, \ldots, N-1; \qquad (12)$$

b. The local covariance $P_{t+\Delta}^{(ii)} = \mathrm{cov}\bigl( \tilde{x}_{t+\Delta}^{(i)}, \tilde{x}_{t+\Delta}^{(i)} \bigr)$, $\tilde{x}_{t+\Delta}^{(i)} = x_{t+\Delta} - \hat{x}_{t+\Delta}^{(i)}$, satisfies (8), and the local cross-covariance $P_{t+\Delta}^{(ij)} = \mathrm{cov}\bigl( \tilde{x}_{t+\Delta}^{(i)}, \tilde{x}_{t+\Delta}^{(j)} \bigr)$, $i \neq$
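The linear algebraic equations (12) can be assembled into one block system and solved numerically. In the sketch below (our own helper, not from the chapter), the weights are stacked as $A = [a^{(1)} \cdots a^{(N)}]$, so that (12) reads $A M = [0 \cdots 0\ I_n]$ for a suitable block matrix $M$:

```python
import numpy as np

def fusion_weights(P):
    """Solve (12) for the matrix weights a^(1), ..., a^(N).

    P[i][j] is the n x n (cross-)covariance P^(ij), i, j = 0..N-1.
    Column blocks 0..N-2 of M hold P^(ij) - P^(i,N-1) (the homogeneous
    equations in (12)); the last column block holds I_n, encoding the
    constraint sum_i a^(i) = I_n."""
    N = len(P)
    n = P[0][0].shape[0]
    M = np.zeros((N * n, N * n))
    for i in range(N):
        for j in range(N - 1):
            M[i*n:(i+1)*n, j*n:(j+1)*n] = P[i][j] - P[i][N-1]
        M[i*n:(i+1)*n, (N-1)*n:] = np.eye(n)
    rhs = np.zeros((n, N * n))
    rhs[:, (N-1)*n:] = np.eye(n)       # [0 ... 0 I_n]
    A = rhs @ np.linalg.inv(M)         # A = [a^(1) ... a^(N)]
    return [A[:, i*n:(i+1)*n] for i in range(N)]
```

For two uncorrelated scalar local predictors with error variances 1 and 3, this returns the weights 0.75 and 0.25: the more accurate local predictor receives the larger weight, and the weights sum to the identity as (10) requires.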