
volume of the sphere approaches $\log\sqrt{2\pi e N}$.

In the continuous case it is convenient to work not with the entropy $H$ of an ensemble but with a derived quantity which we will call the entropy power. This is defined as the power in a white noise limited to the same band as the original ensemble and having the same entropy. In other words if $H_1$ is the entropy of an ensemble its entropy power is

$$N_1 = \frac{1}{2\pi e}\exp 2H_1 .$$

In the geometrical picture this amounts to measuring the high probability volume by the squared radius of a sphere having the same volume. Since white noise has the maximum entropy for a given power, the entropy power of any noise is less than or equal to its actual power.
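As a small numerical sketch of this definition (added for illustration; the one-dimensional form $N = \frac{1}{2\pi e}\exp 2H$ with $H$ in natural units is assumed), the entropy power of a Gaussian equals its actual power, while that of a uniform distribution falls below it:

```python
import numpy as np

def entropy_power(h):
    # Entropy power of a distribution with differential entropy h (natural log),
    # per N = exp(2h) / (2*pi*e).
    return np.exp(2.0 * h) / (2.0 * np.pi * np.e)

# Gaussian of variance sigma2: h = (1/2) log(2*pi*e*sigma2),
# so its entropy power equals its actual power sigma2.
sigma2 = 3.0
h_gauss = 0.5 * np.log(2.0 * np.pi * np.e * sigma2)
print(entropy_power(h_gauss), sigma2)            # both 3.0

# Uniform over an interval of width a: h = log a, power a^2/12,
# entropy power a^2/(2*pi*e) < a^2/12, as the statement above requires.
a = 2.0
print(entropy_power(np.log(a)), a**2 / 12.0)     # 0.234... < 0.333...
```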

22. ENTROPY LOSS IN LINEAR FILTERS

Theorem 14: If an ensemble having an entropy $H_1$ per degree of freedom in band $W$ is passed through a filter with characteristic $Y(f)$ the output ensemble has an entropy

$$H_2 = H_1 + \frac{1}{W}\int_W \log |Y(f)|^2 \, df .$$

The operation of the filter is essentially a linear transformation of coordinates. If we think of the different frequency components as the original coordinate system, the new frequency components are merely the old ones multiplied by factors. The coordinate transformation matrix is thus essentially diagonalized in terms of these coordinates. The Jacobian of the transformation is (for $n$ sine and $n$ cosine components)

$$J = \prod_{i=1}^{n} |Y(f_i)|^2$$

where the $f_i$ are equally spaced through the band $W$. This becomes in the limit

$$\exp \frac{1}{W}\int_W \log |Y(f)|^2 \, df .$$

Since $J$ is constant its average value is the same quantity and applying the theorem on the change of entropy with a change of coordinates, the result follows. We may also phrase it in terms of the entropy power. Thus if the entropy power of the first ensemble is $N_1$ that of the second is

$$N_2 = N_1 \exp \frac{1}{W}\int_W \log |Y(f)|^2 \, df .$$
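A brief numerical sketch of this formula (an illustration, not from the paper): for an assumed gain characteristic $Y(f) = 1 - f/W$ the factor $\exp\frac{1}{W}\int_W \log|Y(f)|^2\,df$ can be approximated by a midpoint sum, and it is also the limiting geometric mean of the $|Y(f_i)|^2$ over equally spaced frequencies:

```python
import numpy as np

W = 1.0                            # band width (units immaterial)
N1 = 1.0                           # entropy power of the input ensemble
Y = lambda f: 1.0 - f / W          # an assumed gain characteristic on 0 <= f <= W

# Limit form: N2 = N1 * exp( (1/W) * integral over W of log |Y(f)|^2 df ),
# approximated here by a midpoint rule (the integrand diverges mildly at f = W).
f = (np.arange(200000) + 0.5) * W / 200000
N2 = N1 * np.exp(np.mean(np.log(Y(f) ** 2)))
print(N2)                          # ~ exp(-2) = 0.1353

# Finite form: the Jacobian J is the product of |Y(f_i)|^2 over n equally
# spaced frequencies; its n-th root approaches the same factor as n grows.
n = 50
fi = (np.arange(n) + 0.5) * W / n
J = np.prod(Y(fi) ** 2)
print(J ** (1.0 / n))              # already close to 0.1353 for n = 50
```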



TABLE I

GAIN | ENTROPY POWER FACTOR | ENTROPY POWER GAIN IN DECIBELS | IMPULSE RESPONSE
$1-\omega$ | $1/e^2$ | $-8.69$ | $\sin^2(t/2)\big/(t^2/2)$
$1-\omega^2$ | $(2/e)^4$ | $-5.33$ | $2\left[\sin t/t^3 - \cos t/t^2\right]$
$1-\omega^3$ | $0.411$ | $-3.87$ | $6\left[(\cos t - 1)/t^4 - \cos t/(2t^2) + \sin t/t^3\right]$
$\sqrt{1-\omega^2}$ | $(2/e)^2$ | $-2.67$ | $(\pi/2)\,J_1(t)/t$
(illegible) | $1/e^2$ | $-8.69$ | (illegible)

The final entropy power is the initial entropy power multiplied by the geometric mean gain of the filter. If the gain is measured in db, then the output entropy power will be increased by the arithmetic mean db gain over $W$.

In Table I the entropy power loss has been calculated (and also expressed in db) for a number of ideal gain characteristics. The impulsive responses of these filters are also given for $W = 2\pi$, with phase assumed to be 0.
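The factors in Table I can be checked numerically (a sketch added for illustration, with the gains written as functions of the normalized frequency $\omega$ on $0 \le \omega \le 1$):

```python
import numpy as np

def factor(gain, n=400000):
    # Entropy power factor exp( mean of log gain(w)^2 over the band ),
    # using midpoints to avoid the zero of the gain at the band edge.
    w = (np.arange(n) + 0.5) / n
    return np.exp(np.mean(np.log(gain(w) ** 2)))

def db(x):
    return 10.0 * np.log10(x)

for name, g in [("1 - w",       lambda w: 1 - w),
                ("1 - w^2",     lambda w: 1 - w**2),
                ("1 - w^3",     lambda w: 1 - w**3),
                ("sqrt(1-w^2)", lambda w: np.sqrt(1 - w**2))]:
    print(f"{name:12s} factor {factor(g):.3f}  {db(factor(g)):6.2f} db")

# Expected from Table I: 1/e^2 = 0.135 (-8.69 db), (2/e)^4 = 0.293 (-5.33 db),
# 0.411 (-3.87 db), (2/e)^2 = 0.541 (-2.67 db).
```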

The entropy loss for many other cases can be obtained from these results. For example the entropy power factor $1/e^2$ for the first case also applies to any gain characteristic obtained from $1-\omega$ by a measure preserving transformation of the $\omega$ axis. In particular a linearly increasing gain $G(\omega)=\omega$, or a "saw tooth" characteristic between 0 and 1 have the same entropy loss. The reciprocal gain has the reciprocal factor. Thus $1/(1-\omega)$ has the factor $e^2$. Raising the gain to any power raises the factor to this power.
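These remarks can also be checked numerically (a sketch for illustration; the "saw tooth" is taken here, as one possible reading, to be $2\omega$ reduced mod 1):

```python
import numpy as np

def factor(gain, n=400000):
    # geometric mean of the power gain over the normalized band 0 <= w <= 1
    w = (np.arange(n) + 0.5) / n
    return np.exp(np.mean(np.log(gain(w) ** 2)))

print(factor(lambda w: 1 - w))           # 1/e^2 = 0.1353...
print(factor(lambda w: w))               # the same: a rearrangement of 1 - w
print(factor(lambda w: (2 * w) % 1.0))   # a saw tooth between 0 and 1: the same again
print(factor(lambda w: 1 / (1 - w)))     # reciprocal gain: e^2 = 7.389...
print(factor(lambda w: (1 - w) ** 2))    # gain squared: factor squared, 1/e^4
```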

23. ENTROPY OF A SUM OF TWO ENSEMBLES

If we have two ensembles of functions $f_\alpha(t)$ and $g_\beta(t)$ we can form a new ensemble by "addition." Suppose the first ensemble has the probability density function $p(x_1,\dots,x_n)$ and the second $q(x_1,\dots,x_n)$. Then the



density function for the sum is given by the convolution:

$$r(x_1,\dots,x_n) = \int\!\cdots\!\int p(y_1,\dots,y_n)\, q(x_1 - y_1,\dots,x_n - y_n)\, dy_1\cdots dy_n .$$

Physically this corresponds to adding the noises or signals represented by the original ensembles of functions.

The following result is derived in Appendix 6.

Theorem 15: Let the average power of two ensembles be $N_1$ and $N_2$ and let their entropy powers be $\overline{N}_1$ and $\overline{N}_2$. Then the entropy power of the sum, $\overline{N}_3$, is bounded by

$$\overline{N}_1 + \overline{N}_2 \;\le\; \overline{N}_3 \;\le\; N_1 + N_2 .$$
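As a one-dimensional numerical sketch of this theorem (an illustration with assumed ensembles, not from the paper), take the sum of a Gaussian variable and an independent uniform variable, estimate the entropy of the convolution on a grid, and compare the three entropy powers:

```python
import numpy as np

dx = 0.01
x = np.arange(-15.0, 15.0, dx)

sigma2 = 1.0                                   # Gaussian: average power = entropy power = 1
p = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

a = 4.0                                        # uniform over [-a/2, a/2]
q = np.where(np.abs(x) <= a / 2, 1.0 / a, 0.0)

def entropy(density):
    d = density[density > 0]
    return -np.sum(d * np.log(d)) * dx         # differential entropy, natural log

def entropy_power(h):
    return np.exp(2 * h) / (2 * np.pi * np.e)

r = np.convolve(p, q) * dx                     # density of the sum, by the convolution above

N1_bar, N2_bar, N3_bar = (entropy_power(entropy(d)) for d in (p, q, r))
N1, N2 = sigma2, a**2 / 12                     # actual average powers

# Theorem 15: N1_bar + N2_bar <= N3_bar <= N1 + N2  (roughly 1.94 <= 2.3 <= 2.33 here)
print(N1_bar + N2_bar, N3_bar, N1 + N2)
```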

White Gaussian noise has the peculiar property that it can absorb any other noise or signal ensemble

which may be added to it with a resultant entropy power approximately equal to the sum of the white noise

power and the signal power (measured from the average signal value, which is normally zero), provided the

signal power is small, in a certain sense, compared to noise.

Consider the function space associated with these ensembles having n dimensions. The white noise

corresponds to the spherical Gaussian distribution in this space. The signal ensemble corresponds to another

probability distribution, not necessarily Gaussian or spherical. Let the second moments of this distribution

about its center of gravity be $a_{ij}$. That is, if $p(x_1,\dots,x_n)$ is the density distribution function

$$a_{ij} = \int\!\cdots\!\int p\,(x_i - \alpha_i)(x_j - \alpha_j)\, dx_1\cdots dx_n$$

where the $\alpha_i$ are the coordinates of the center of gravity. Now $a_{ij}$ is a positive definite quadratic form, and we can rotate our coordinate system to align it with the principal directions of this form. $a_{ij}$ is then reduced to diagonal form $b_{ii}$. We require that each $b_{ii}$ be small compared to $N$, the squared radius of the spherical distribution.
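A small sketch of this reduction (for illustration, with an arbitrarily chosen non-spherical ensemble): estimate the second moments $a_{ij}$ about the center of gravity from samples and rotate to the principal directions, where the form becomes diagonal with entries $b_{ii}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary non-spherical, non-Gaussian "signal" ensemble in n = 3 dimensions.
n, samples = 3, 200000
mix = np.array([[1.0, 0.4, 0.0],
                [0.0, 1.0, 0.3],
                [0.2, 0.0, 1.0]])
x = rng.uniform(-1.0, 1.0, (samples, n)) @ mix.T + np.array([0.5, -1.0, 2.0])

alpha = x.mean(axis=0)                          # center of gravity alpha_i
a = (x - alpha).T @ (x - alpha) / samples       # second moments a_ij about alpha

# Rotating to the principal directions of the form reduces a_ij to diagonal
# form; the eigenvalues are the b_ii.
b, rotation = np.linalg.eigh(a)
y = (x - alpha) @ rotation
print(np.round(b, 3))
print(np.round(y.T @ y / samples, 3))           # ~ diag(b_11, b_22, b_33)
```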

In this case the convolution of the noise and signal produces approximately a Gaussian distribution whose corresponding quadratic form is

$$N + b_{ii} .$$

The entropy power of this distribution is

$$\Bigl[\,\prod_{i=1}^{n} \bigl(N + b_{ii}\bigr)\Bigr]^{1/n}$$

or approximately

$$= \Bigl[\, N^n + \sum b_{ii}\, N^{n-1} \Bigr]^{1/n} \doteq N + \frac{1}{n}\sum b_{ii} .$$

The last term is the signal power, while the first is the noise power.
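A quick numerical check of this approximation (illustrative values only): with $b_{ii}$ drawn small compared with $N$, the $n$-th root of the product differs from $N + \frac{1}{n}\sum b_{ii}$ only at second order:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10.0                                    # white noise power (squared radius of the sphere)
n = 1000
b = rng.uniform(0.0, 0.5, n)                # second moments b_ii, small compared with N

exact = np.exp(np.mean(np.log(N + b)))      # [ prod over i of (N + b_ii) ]^(1/n)
approx = N + b.mean()                       # N + (1/n) * sum of b_ii

print(exact, approx)                        # nearly equal; the error is of order b^2/N
```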