Basics of Algebra, Topology, and Differential Calculus by Jean Gallier

$$
\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{pmatrix}
=
\begin{pmatrix}
1/4 & 1/4 & 1/4 & 1/4 \\
1/4 & 1/4 & -1/4 & -1/4 \\
1/2 & -1/2 & 0 & 0 \\
0 & 0 & 1/2 & -1/2
\end{pmatrix}
\begin{pmatrix} 6 \\ 4 \\ 5 \\ 1 \end{pmatrix}
=
\begin{pmatrix} 4 \\ 1 \\ 1 \\ 2 \end{pmatrix}.
$$

Given a signal v = (v1, v2, v3, v4), we first transform v into its coefficients c = (c1, c2, c3, c4)

over the Haar basis by computing c = W^{-1}v. Observe that

$$
c_1 = \frac{v_1 + v_2 + v_3 + v_4}{4}
$$

is the overall average value of the signal v. The coefficient c1 corresponds to the background
of the image (or of the sound). Then, c2 gives the coarse details of v, whereas c3 gives the
details in the first half of v, and c4 gives the details in the second half of v.

Reconstruction of the signal consists in computing v = W c. The trick for good compression is to throw away some of the coefficients of c (set them to zero), obtaining a compressed signal ĉ, and still retain enough crucial information so that the reconstructed signal v̂ = W ĉ looks almost as good as the original signal v. Thus, the steps are:

input v −→ coefficients c = W^{-1}v −→ compressed ĉ −→ compressed v̂ = W ĉ.

This kind of compression scheme makes modern video conferencing possible.

It turns out that there is a faster way to find c = W^{-1}v, without actually using W^{-1}.

This has to do with the multiscale nature of Haar wavelets.

Given the original signal v = (6, 4, 5, 1) shown in Figure 3.1, we compute averages and

half differences obtaining Figure 3.2. We get the coefficients c3 = 1 and c4 = 2. Then, again

we compute averages and half differences obtaining Figure 3.3. We get the coefficients c1 = 4

and c2 = 1.
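These two rounds of averaging and half-differencing can be sketched directly (a Python sketch; the helper `average_and_difference` is our own name):

```python
def average_and_difference(xs):
    # Pairwise averages and half differences of a list of even length.
    avgs = [(xs[2*i] + xs[2*i+1]) / 2 for i in range(len(xs) // 2)]
    diffs = [(xs[2*i] - xs[2*i+1]) / 2 for i in range(len(xs) // 2)]
    return avgs, diffs

v = [6, 4, 5, 1]
avgs1, (c3, c4) = average_and_difference(v)    # avgs1 = [5, 3], c3 = 1, c4 = 2
(c1,), (c2,) = average_and_difference(avgs1)   # c1 = 4, c2 = 1
print([c1, c2, c3, c4])  # [4.0, 1.0, 1.0, 2.0]
```

This agrees with the coefficients read off from Figures 3.2 and 3.3.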

Figure 3.1: The original signal v

3.2. HAAR BASIS VECTORS AND A GLIMPSE AT WAVELETS


Figure 3.2: First averages and first half differences

Figure 3.3: Second averages and second half differences

Note that the original signal v can be reconstructed from the two signals in Figure 3.2,
and the signal on the left of Figure 3.2 can be reconstructed from the two signals in Figure 3.3.

This method can be generalized to signals of any length 2^n. The previous case corresponds
to n = 2. Let us consider the case n = 3. The Haar basis (w_1, w_2, w_3, w_4, w_5, w_6, w_7, w_8) is
given by the matrix

1

1

1

0

1

0

0

0 

1

1

1

0

−1

0

0

0 

1

1

−1

0

0

1

0

0 

1

1

−1

0

0

−1

0

0 

W = 

1

−1

0

1

0

0

1

0 

1

−1

0

1

0

0

−1

0 

1

−1

0

−1

0

0

0

1 

1 −1

0

−1

0

0

0

−1

The columns of this matrix are orthogonal, and it is easy to see that

$$
W^{-1} = \mathrm{diag}(1/8,\ 1/8,\ 1/4,\ 1/4,\ 1/2,\ 1/2,\ 1/2,\ 1/2)\, W^{\top}.
$$
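This identity is easy to verify numerically; here is a small NumPy sketch, where the array W is the 8 × 8 Haar matrix displayed above:

```python
import numpy as np

# The 8x8 Haar matrix W for n = 3, row by row as in the text.
W = np.array([
    [1,  1,  1,  0,  1,  0,  0,  0],
    [1,  1,  1,  0, -1,  0,  0,  0],
    [1,  1, -1,  0,  0,  1,  0,  0],
    [1,  1, -1,  0,  0, -1,  0,  0],
    [1, -1,  0,  1,  0,  0,  1,  0],
    [1, -1,  0,  1,  0,  0, -1,  0],
    [1, -1,  0, -1,  0,  0,  0,  1],
    [1, -1,  0, -1,  0,  0,  0, -1],
], dtype=float)

# W^{-1} = diag(1/8, 1/8, 1/4, 1/4, 1/2, 1/2, 1/2, 1/2) W^T
D = np.diag([1/8, 1/8, 1/4, 1/4, 1/2, 1/2, 1/2, 1/2])
W_inv = D @ W.T
print(np.allclose(W_inv @ W, np.eye(8)))  # True
```

The diagonal entries are the reciprocals of the squared column norms of W, which is why D W^⊤ inverts W.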

A pattern is beginning to emerge. It looks like the second Haar basis vector w2 is the “mother”

of all the other basis vectors, except the first, whose purpose is to perform averaging. Indeed,

in general, given

$$
w_2 = (\underbrace{1, \ldots, 1, -1, \ldots, -1}_{2^n}),
$$

the other Haar basis vectors are obtained by a “scaling and shifting process.” Starting from

w2, the scaling process generates the vectors

w_3,\ w_5,\ w_9,\ \ldots,\ w_{2^j+1},\ \ldots,\ w_{2^{n-1}+1},


CHAPTER 3. MATRICES AND LINEAR MAPS

such that w_{2^{j+1}+1} is obtained from w_{2^j+1} by forming two consecutive blocks of 1 and −1
of half the size of the blocks in w_{2^j+1}, and setting all other entries to zero. Observe that
w_{2^j+1} has 2^j blocks of 2^{n−j} elements. The shifting process consists in shifting the blocks of
1 and −1 in w_{2^j+1} to the right by inserting a block of (k − 1)2^{n−j} zeros from the left, with
0 ≤ j ≤ n − 1 and 1 ≤ k ≤ 2^j. Thus, we obtain the following formula for w_{2^j+k}:

0

1 ≤ i ≤ (k − 1)2n−j

1

(k − 1)2n−j + 1 ≤ i ≤ (k − 1)2n−j + 2n−j−1

w2j+k(i) =

−1 (k − 1)2n−j + 2n−j−1 + 1 ≤ i ≤ k2n−j

0

k2n−j + 1 ≤ i ≤ 2n,

with 0 ≤ j ≤ n − 1 and 1 ≤ k ≤ 2^j. Of course

$$
w_1 = (\underbrace{1, \ldots, 1}_{2^n}).
$$

The above formulae look a little better if we change our indexing slightly by letting k vary
from 0 to 2^j − 1 and using the index j instead of 2^j. In this case, the Haar basis is denoted
by

$$
w_1,\ h^0_0,\ h^1_0,\ h^1_1,\ h^2_0,\ h^2_1,\ h^2_2,\ h^2_3,\ \ldots,\ h^j_k,\ \ldots,\ h^{n-1}_{2^{n-1}-1},
$$

and

0

1 ≤ i ≤ k2n−j

1

k2n−j + 1 ≤ i ≤ k2n−j + 2n−j−1

hj (i) =

k

−1 k2n−j + 2n−j−1 + 1 ≤ i ≤ (k + 1)2n−j

0

(k + 1)2n−j + 1 ≤ i ≤ 2n,

with 0 ≤ j ≤ n − 1 and 0 ≤ k ≤ 2j − 1.
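A direct implementation of this formula (a Python sketch; `haar_vector` is our own name, and it uses 0-based indexing internally) reproduces the columns of the matrix W above:

```python
def haar_vector(n, j, k):
    # The Haar basis vector h^j_k of length 2^n, 0 <= j <= n-1, 0 <= k <= 2^j - 1.
    block = 2 ** (n - j)          # width of the support of h^j_k
    half = block // 2
    v = [0] * (2 ** n)
    for i in range(k * block, k * block + half):
        v[i] = 1                  # the block of 1s
    for i in range(k * block + half, (k + 1) * block):
        v[i] = -1                 # the block of -1s
    return v

# For n = 3, h^1_1 has two 1s followed by two -1s in the second half.
print(haar_vector(3, 1, 1))  # [0, 0, 0, 0, 1, 1, -1, -1]
```

Here `haar_vector(3, 1, 1)` is exactly the fourth column of the 8 × 8 matrix W above.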

It turns out that there is a way to understand these formulae better if we interpret a
vector u = (u_1, ..., u_m) as a piecewise linear function over the interval [0, 1). We define the
function plf(u) such that

$$
\mathrm{plf}(u)(x) = u_i, \qquad \frac{i-1}{m} \le x < \frac{i}{m}, \quad 1 \le i \le m.
$$

In words, the function plf(u) has the value u1 on the interval [0, 1/m), the value u2 on

[1/m, 2/m), etc., and the value um on the interval [(m−1)/m, 1). For example, the piecewise

linear function associated with the vector

u = (2.4, 2.2, 2.15, 2.05, 6.8, 2.8, −1.1, −1.3)

is shown in Figure 3.4.

Then, each basis vector h^j_k corresponds to the function

$$
\psi^j_k = \mathrm{plf}(h^j_k).
$$


Figure 3.4: The piecewise linear function plf(u)

In particular, for all n, the Haar basis vectors

$$
h^0_0 = w_2 = (\underbrace{1, \ldots, 1, -1, \ldots, -1}_{2^n})
$$

yield the same piecewise linear function ψ given by

$$
\psi(x) =
\begin{cases}
1 & \text{if } 0 \le x < 1/2 \\
-1 & \text{if } 1/2 \le x < 1 \\
0 & \text{otherwise,}
\end{cases}
$$

whose graph is shown in Figure 3.5.

Figure 3.5: The Haar wavelet ψ

Then, it is easy to see that ψ^j_k is given by the simple expression

$$
\psi^j_k(x) = \psi(2^j x - k), \qquad 0 \le j \le n - 1,\ 0 \le k \le 2^j - 1.
$$

The above formula makes it clear that ψ^j_k is obtained from ψ by scaling and shifting. The
function φ^0_0 = plf(w_1) is the piecewise linear function with the constant value 1 on [0, 1), and
the functions ψ^j_k together with φ^0_0 are known as the Haar wavelets.
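This relation between h^j_k and ψ can be checked numerically on a grid of sample points (a Python sketch; `psi`, `plf`, and `h` are our own small helpers):

```python
def psi(x):
    # The mother Haar wavelet.
    if 0 <= x < 0.5:
        return 1
    if 0.5 <= x < 1:
        return -1
    return 0

def plf(u, x):
    # The step function plf(u): value u[i] on [i/m, (i+1)/m), 0-based.
    m = len(u)
    return u[min(int(x * m), m - 1)]

def h(n, j, k):
    # Haar basis vector h^j_k of length 2^n (0-based indices).
    block, v = 2 ** (n - j), [0] * 2 ** n
    for i in range(block // 2):
        v[k * block + i], v[k * block + block // 2 + i] = 1, -1
    return v

# Verify plf(h^j_k)(x) = psi(2^j x - k) at dyadic sample points, n = 3.
n = 3
ok = all(
    plf(h(n, j, k), x) == psi(2 ** j * x - k)
    for j in range(n)
    for k in range(2 ** j)
    for x in [i / 64 for i in range(64)]
)
print(ok)  # True
```

Dyadic sample points i/64 are used so the arithmetic is exact in floating point.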


Rather than using W^{-1} to convert a vector u to a vector c of coefficients over the Haar

basis, and the matrix W to reconstruct the vector u from its Haar coefficients c, we can use

faster algorithms that use averaging and differencing.

If c is a vector of Haar coefficients of dimension 2^n, we compute the sequence of vectors
u^0, u^1, ..., u^n as follows:

$$
\begin{aligned}
u^0 &= c \\
u^{j+1} &= u^j \\
u^{j+1}(2i - 1) &= u^j(i) + u^j(2^j + i) \\
u^{j+1}(2i) &= u^j(i) - u^j(2^j + i),
\end{aligned}
$$

for j = 0, ..., n − 1 and i = 1, ..., 2^j. The reconstructed vector (signal) is u = u^n.

If u is a vector of dimension 2^n, we compute the sequence of vectors c^n, c^{n−1}, ..., c^0 as
follows:

$$
\begin{aligned}
c^n &= u \\
c^j &= c^{j+1} \\
c^j(i) &= (c^{j+1}(2i - 1) + c^{j+1}(2i))/2 \\
c^j(2^j + i) &= (c^{j+1}(2i - 1) - c^{j+1}(2i))/2,
\end{aligned}
$$

for j = n − 1, ..., 0 and i = 1, ..., 2^j. The vector over the Haar basis is c = c^0.

We leave it as an exercise to implement the above programs in Matlab using two variables
u and c, and by building 2^j iteratively. Here is an example of the conversion of a vector to
its Haar coefficients for n = 3.

Given the sequence u = (31, 29, 23, 17, −6, −8, −2, −4), we get the sequence

c^3 = (31, 29, 23, 17, −6, −8, −2, −4)
c^2 = (30, 20, −7, −3, 1, 3, 1, 1)
c^1 = (25, −5, 5, −2, 1, 3, 1, 1)
c^0 = (10, 15, 5, −2, 1, 3, 1, 1),

so c = (10, 15, 5, −2, 1, 3, 1, 1). Conversely, given c = (10, 15, 5, −2, 1, 3, 1, 1), we get the
sequence

u^0 = (10, 15, 5, −2, 1, 3, 1, 1)
u^1 = (25, −5, 5, −2, 1, 3, 1, 1)
u^2 = (30, 20, −7, −3, 1, 3, 1, 1)
u^3 = (31, 29, 23, 17, −6, −8, −2, −4),

which gives back u = (31, 29, 23, 17, −6, −8, −2, −4).
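The text suggests implementing these algorithms in Matlab; here is a sketch in Python instead (the names `haar_transform` and `haar_reconstruct` are ours):

```python
def haar_transform(u):
    # Convert a signal of length 2^n to its Haar coefficients
    # by repeated averaging and half-differencing.
    c = list(u)
    m = len(c)
    while m > 1:
        half = m // 2
        head = [(c[2*i] + c[2*i + 1]) / 2 for i in range(half)]  # averages
        tail = [(c[2*i] - c[2*i + 1]) / 2 for i in range(half)]  # half differences
        c[:m] = head + tail
        m = half
    return c

def haar_reconstruct(c):
    # Inverse: rebuild the signal by the add/subtract recurrence.
    u = list(c)
    m = 1
    while m < len(u):
        pairs = [(u[i] + u[m + i], u[i] - u[m + i]) for i in range(m)]
        u[:2 * m] = [x for p in pairs for x in p]
        m *= 2
    return u

u = [31, 29, 23, 17, -6, -8, -2, -4]
c = haar_transform(u)
print(c)                    # [10.0, 15.0, 5.0, -2.0, 1.0, 3.0, 1.0, 1.0]
print(haar_reconstruct(c))  # recovers u
```

Each pass of `haar_transform` produces one of the intermediate vectors c^2, c^1, c^0 of the worked example above.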


There is another recursive method for constructing the Haar matrix W_n of dimension 2^n
that makes it clearer why the above algorithms are indeed correct (which nobody seems to
prove!). If we split W_n into two 2^n × 2^{n−1} matrices, then the second matrix containing the
last 2^{n−1} columns of W_n has a very simple structure: it consists of the vector

$$
(\underbrace{1, -1, 0, \ldots, 0}_{2^n})
$$

and 2^{n−1} − 1 shifted copies of it, as illustrated below for n = 3:

 1

0

0

0 

−1

0

0

0 

 0

1

0

0 

 0

−1

0

0 

 .

 0

0

1

0 

 0

0

−1

0 

 0

0

0

1 

0

0

0

−1

Then, we form the 2^n × 2^{n−2} matrix obtained by “doubling” each column of odd index, which
means replacing each such column by a column in which the block of 1 is doubled and the
block of −1 is doubled. In general, given a current matrix of dimension 2^n × 2^j, we form a
2^n × 2^{j−1} matrix by doubling each column of odd index, which means that we replace each
such column by a column in which the block of 1 is doubled and the block of −1 is doubled.
We repeat this process n − 1 times until we get the vector

$$
(\underbrace{1, \ldots, 1, -1, \ldots, -1}_{2^n}).
$$

The first vector is the averaging vector $(\underbrace{1, \ldots, 1}_{2^n})$. This process is illustrated below for n = 3:

 1 

 1

0 

 1

0

0

0 

 1 

 1

0 

−1

0

0

0 

 1 

−1

0 

 0

1

0

0 

 1 

−1

0 

 0

−1

0

0 

 ⇐= 

 ⇐= 

−1

 0

1 

 0

0

1

0 

−1

 0

1 

 0

0

−1

0 

−1

 0

−1

 0

0

0

1 

−1

0

−1

0

0

0

−1


Adding $(\underbrace{1, \ldots, 1}_{2^n})$ as the first column, we obtain

$$
W_3 = \begin{pmatrix}
1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & -1 & 0 & 0 & 0 \\
1 & 1 & -1 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & -1 & 0 & 0 & -1 & 0 & 0 \\
1 & -1 & 0 & 1 & 0 & 0 & 1 & 0 \\
1 & -1 & 0 & 1 & 0 & 0 & -1 & 0 \\
1 & -1 & 0 & -1 & 0 & 0 & 0 & 1 \\
1 & -1 & 0 & -1 & 0 & 0 & 0 & -1
\end{pmatrix}.
$$

Observe that the right block (of size 2^n × 2^{n−1}) shows clearly how the detail coefficients
in the second half of the vector c are added and subtracted to the entries in the first half of
the partially reconstructed vector after n − 1 steps.
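The shifting-and-doubling construction can be sketched as follows (a Python sketch; `double_column` and `build_haar_matrix` are our own names), and the result agrees with the matrix W_3 above:

```python
def double_column(col):
    # Replace the 1-block and -1-block by blocks of twice the length,
    # keeping the same starting position.
    p = col.index(1)          # start of the 1-block
    length = col.count(1)     # current block length
    out = [0] * len(col)
    for i in range(2 * length):
        out[p + i] = 1
        out[p + 2 * length + i] = -1
    return out

def build_haar_matrix(n):
    # Columns of W_n obtained by the shifting-and-doubling process.
    N = 2 ** n
    # The last 2^{n-1} columns: (1, -1, 0, ..., 0) and its shifted copies.
    level = []
    for k in range(N // 2):
        col = [0] * N
        col[2 * k], col[2 * k + 1] = 1, -1
        level.append(col)
    levels = [level]
    while len(levels[-1]) > 1:
        # Double each column of odd index (the 1st, 3rd, ... columns).
        levels.append([double_column(c) for c in levels[-1][::2]])
    cols = [[1] * N]          # prepend the averaging column w_1
    for lev in reversed(levels):
        cols.extend(lev)
    return cols               # cols[i] is column i+1 of W_n

W3 = build_haar_matrix(3)
print(W3[1])  # [1, 1, 1, 1, -1, -1, -1, -1]
```

The returned list of columns matches W_3 column by column.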

An important and attractive feature of the Haar basis is that it provides a multiresolu-

tion analysis of a signal. Indeed, given a signal u, if c = (c1, . . . , c2n) is the vector of its

Haar coefficients, the coefficients with low index give coarse information about u, and the

coefficients with high index represent fine information. For example, if u is an audio signal

corresponding to a Mozart concerto played by an orchestra, c1 corresponds to the “back-

ground noise,” c2 to the bass, c3 to the first cello, c4 to the second cello, c5, c6, c7, c8 to the

violas, then the violins, etc. This multiresolution feature of wavelets can be exploited to

compress a signal, that is, to use fewer coefficients to represent it. Here is an example.

Consider the signal

u = (2.4, 2.2, 2.15, 2.05, 6.8, 2.8, −1.1, −1.3),

whose Haar transform is

c = (2, 0.2, 0.1, 3, 0.1, 0.05, 2, 0.1).

The piecewise-linear curves corresponding to u and c are shown in Figure 3.6. Since some of

the coefficients in c are small (smaller than or equal to 0.2) we can compress c by replacing

them by 0. We get

c2 = (2, 0, 0, 3, 0, 0, 2, 0),

and the reconstructed signal is

u2 = (2, 2, 2, 2, 7, 3, −1, −1).

The piecewise-linear curves corresponding to u2 and c2 are shown in Figure 3.7.
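This compression step can be reproduced with the reconstruction recurrence above (a Python sketch; `unhaar` is our own name):

```python
def unhaar(c):
    # Inverse Haar transform: rebuild the signal from its coefficients
    # by the add/subtract recurrence given earlier.
    u, m = list(c), 1
    while m < len(u):
        pairs = [(u[i] + u[m + i], u[i] - u[m + i]) for i in range(m)]
        u[:2 * m] = [x for p in pairs for x in p]
        m *= 2
    return u

c = [2, 0.2, 0.1, 3, 0.1, 0.05, 2, 0.1]     # Haar transform of u, as in the text
c2 = [x if abs(x) > 0.2 else 0 for x in c]  # zero out the small coefficients
print(c2)          # [2, 0, 0, 3, 0, 0, 2, 0]
print(unhaar(c2))  # [2, 2, 2, 2, 7, 3, -1, -1]
```

The reconstructed signal u2 is a good approximation of u even though five of the eight coefficients were discarded.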


Figure 3.6: A signal and its Haar transform

Figure 3.7: A compressed signal and its compressed Haar transform

An interesting (and amusing) application of the Haar wavelets is to the compression of
audio signals. It turns out that if you type load handel in Matlab, an audio file will be
loaded in a vector denoted by y, and if you type sound(y), the computer will play this
piece of music. You can convert y to its vector of Haar coefficients c. The length of y is
73113, so first truncate the tail of y to get a vector of length 65536 = 2^16. A plot of the
signals corresponding to y and c is shown in Figure 3.8. Then, run a program that sets all
coefficients of c whose absolute value is less than 0.05 to zero. This sets 37272 coefficients
to 0. The resulting vector c2 is converted to a signal y2. A plot of the signals corresponding
to y2 and c2 is shown in Figure 3.9. When you type sound(y2), you find that the music
doesn’t differ much from the original, although it sounds less crisp. You should play with
other numbers greater than or less than 0.05. You should hear what happens when you type
sound(c). It plays the music corresponding to the Haar transform c of y, and it is quite
funny.

Another neat property of the Haar transform is that it can be instantly generalized to


Figure 3.8: The signal “handel” and its Haar transform

Figure 3.9: The compressed signal “handel” and its Haar transform

matrices (even rectangular) without any extra effort! This allows for the compression of
digital images. But first, we address the issue of normalization of the Haar coefficients. As
we observed earlier, the 2^n × 2^n matrix W_n of Haar basis vectors has orthogonal columns,
but its columns do not have unit length. As a consequence, W_n^⊤ is not the inverse of W_n,
but rather the matrix

$$
W_n^{-1} = D_n W_n^{\top}
$$

with

$$
D_n = \mathrm{diag}\Bigl(2^{-n},\ \underbrace{2^{-n}}_{2^0},\ \underbrace{2^{-(n-1)},\ 2^{-(n-1)}}_{2^1},\ \underbrace{2^{-(n-2)}, \ldots, 2^{-(n-2)}}_{2^2},\ \ldots,\ \underbrace{2^{-1}, \ldots, 2^{-1}}_{2^{n-1}}\Bigr).
$$

Therefore, we define the orthogonal matrix

$$
H_n = W_n D_n^{1/2}
$$


whose columns are the normalized Haar basis vectors, with

$$
D_n^{1/2} = \mathrm{diag}\Bigl(2^{-n/2},\ \underbrace{2^{-n/2}}_{2^0},\ \underbrace{2^{-(n-1)/2},\ 2^{-(n-1)/2}}_{2^1},\ \ldots,\ \underbrace{2^{-1/2}, \ldots, 2^{-1/2}}_{2^{n-1}}\Bigr).
$$
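The orthogonality of H_n can be checked numerically (a NumPy sketch; `haar_matrix`, which builds W_n from the h^j_k formula, is our own helper, and D_n^{1/2} is recovered from the column norms of W_n):

```python
import numpy as np

def haar_matrix(n):
    # W_n column by column: the averaging column, then h^j_k
    # for j = 0..n-1, k = 0..2^j - 1 (0-based indices).
    N = 2 ** n
    cols = [np.ones(N)]
    for j in range(n):
        block = N // 2 ** j
        for k in range(2 ** j):
            col = np.zeros(N)
            col[k * block : k * block + block // 2] = 1
            col[k * block + block // 2 : (k + 1) * block] = -1
            cols.append(col)
    return np.column_stack(cols)

n = 4
W = haar_matrix(n)
# D_n^{1/2}: reciprocal square roots of the squared column norms of W_n.
D_half = np.diag(1 / np.sqrt((W * W).sum(axis=0)))
H = W @ D_half
print(np.allclose(H.T @ H, np.eye(2 ** n)))  # True
```

Since the columns of W_n are orthogonal, rescaling each to unit length makes H_n orthogonal.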