Wavelets and Wavelet Transforms by C. Sidney Burrus


Chapter 6 The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients

This chapter is licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/3.0/


Summary

We will now look more closely at the basic scaling function and wavelet to see when they exist and what their properties are 5, 21, 13, 15, 14, 19, 4. Using the same approach that is used in the theory of differential equations, we will examine the properties of φ(t) by considering the equation of which it is a solution. The basic recursion Equation 3.13 that comes from the multiresolution formulation is

(6.1)
\varphi(t) = \sum_{n} h(n)\,\sqrt{2}\,\varphi(2t - n)

with h(n) being the scaling coefficients and φ(t) being the scaling function that satisfies this equation, which is sometimes called the refinement equation, the dilation equation, or the multiresolution analysis (MRA) equation.
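
As a concrete check of Equation 6.1, the Haar system is the simplest case: h(n) = (1/√2, 1/√2) and φ(t) is the unit-width box function on [0, 1). The short Python/NumPy sketch below is only an illustration of the recursion; the grid of test points and the half-open box convention are choices made here for the example.

import numpy as np

# Haar case: h(n) = [1/sqrt(2), 1/sqrt(2)] and phi(t) is the unit box on [0, 1).
h = np.array([1.0, 1.0]) / np.sqrt(2)
phi = lambda t: ((0.0 <= t) & (t < 1.0)).astype(float)

# Check phi(t) = sum_n h(n) sqrt(2) phi(2t - n) on a grid of test points.
t = np.linspace(-0.5, 1.5, 9)
lhs = phi(t)
rhs = sum(np.sqrt(2) * h[n] * phi(2 * t - n) for n in range(len(h)))
print(np.allclose(lhs, rhs))   # True: the box function satisfies Equation 6.1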

In order to state the properties accurately, some care has to be taken in specifying just what classes of functions are being considered or are allowed. We will attempt to walk a fine line to present enough detail to be correct but not so much as to obscure the main ideas and results. A few of these ideas were presented in Section: Signal Spaces and a few more will be given in the next section. A more complete discussion can be found in 33, in the introductions to 34, 36, 11, or in any book on functional analysis.

6.1 Tools and Definitions

Signal Classes

There are three classes of signals that we will be using. The most basic is called L2(R), which contains all functions which have a finite, well-defined integral of the square: f ∈ L2 implies ∫ |f(t)|² dt = E < ∞. This class is important because it is a generalization of normal Euclidean geometry and because it gives a simple representation of the energy in a signal.

The next most basic class is L1(R), which requires a finite integral of the absolute value of the function: f ∈ L1 implies ∫ |f(t)| dt < ∞. This class is important because one may interchange infinite summations and integrations with these functions, although not necessarily with L2 functions. These classes of function spaces can be generalized to those with ∫ |f(t)|^p dt < ∞ and designated Lp.
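
As a small numerical illustration of these definitions (a Python/NumPy sketch, with f(t) = e^{-|t|} chosen only as an example of a function in both classes), the two integrals can be approximated by simple quadrature:

import numpy as np

# f(t) = exp(-|t|) belongs to both L1(R) and L2(R): both integrals below are finite.
t = np.linspace(-40.0, 40.0, 800_001)
f = np.exp(-np.abs(t))
print(np.trapz(np.abs(f), t))   # approx 2.0  (the L1 integral)
print(np.trapz(f ** 2, t))      # approx 1.0  (the L2 integral, i.e., the energy)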

A more general class of signals than any Lp space contains what are called distributions. These are generalized functions which are not defined by their having “values" but by the value of an “inner product" with a normal function. An example of a distribution is the Dirac delta function δ(t), which is defined by the property ∫ f(t) δ(t) dt = f(0).
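
The inner-product definition can be made concrete numerically by standing in for δ(t) with a narrow Gaussian of width ε; this sketch (the test function and the widths are arbitrary choices for the example) shows the integral approaching f(0) as ε shrinks:

import numpy as np

# Approximate delta(t) by a narrow Gaussian of width eps; the inner-product
# integral of f(t) * delta_eps(t) approaches f(0) as eps shrinks.
f = lambda t: np.cos(t) + t ** 2        # an arbitrary smooth test function, f(0) = 1
t = np.linspace(-1.0, 1.0, 400_001)
for eps in (1e-1, 1e-2, 1e-3):
    delta_eps = np.exp(-t ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
    print(np.trapz(f(t) * delta_eps, t))   # tends to f(0) = 1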

Another detail to keep in mind is that the integrals used in these definitions are Lebesgue integrals which are somewhat more general than the basic Riemann integral. The value of a Lebesgue integral is not affected by values of the function over any countable set of values of its argument (or, more generally, a set of measure zero). A function defined as one on the rationals and zero on the irrationals would have a zero Lebesgue integral. As a result of this, properties derived using measure theory and Lebesgue integrals are sometimes said to be true “almost everywhere," meaning they may not be true over a set of measure zero.

Many of these ideas of function spaces, distributions, Lebesque measure, etc. came out of the early study of Fourier series and transforms. It is interesting that they are also important in the theory of wavelets. As with Fourier theory, one can often ignore the signal space classes and can use distributions as if they were functions, but there are some cases where these ideas are crucial. For an introductory reading of this book or of the literature, one can usually skip over the signal space designation or assume Riemann integrals. However, when a contradiction or paradox seems to arise, its resolution will probably require these details.

Fourier Transforms

We will need the Fourier transform of φ(t) which, if it exists, is defined to be

(6.2)
\Phi(\omega) = \int \varphi(t)\, e^{-i\omega t}\, dt

and the discrete-time Fourier transform (DTFT) 23 of h(n) defined to be

(6.3)
H(\omega) = \sum_{n} h(n)\, e^{-i\omega n}

where i = √(−1) and n is an integer (n ∈ Z). If convolution with h(n) is viewed as a digital filter, as defined in Section: Analysis - From Fine Scale to Coarse Scale, then the DTFT of h(n) is the filter's frequency response 23, 24, which is 2π periodic.
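
To make the definition concrete, the following Python/NumPy sketch evaluates H(ω) for the length-4 Daubechies scaling coefficients, which satisfy the Σ h(n) = √2 normalization used in this chapter; the helper name H here is only for illustration.

import numpy as np

# Length-4 Daubechies scaling coefficients in the sum(h) = sqrt(2) normalization.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

def H(omega):
    """DTFT of h(n): H(omega) = sum_n h(n) exp(-i omega n)."""
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * omega * n))

print(abs(H(0.0)))                       # sqrt(2) = 1.4142..., the sum of the h(n)
print(abs(H(np.pi)))                     # 0 for these coefficients
print(abs(H(1.0) - H(1.0 + 2 * np.pi)))  # 0: the DTFT is 2*pi periodic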

If Φ(ω) exists, the defining recursive equation Equation 6.1 becomes

(6.4)
\Phi(\omega) = \frac{1}{\sqrt{2}}\, H\!\left(\frac{\omega}{2}\right) \Phi\!\left(\frac{\omega}{2}\right)

which after iteration becomes

(6.5)
\Phi(\omega) = \prod_{k=1}^{\infty} \left\{ \frac{H(\omega/2^{k})}{\sqrt{2}} \right\} \Phi(0)

if H(0) = √2 and Φ(0) is well defined. This may be a distribution or it may be a smooth function depending on H(ω) and, therefore, h(n) 33, 4. This makes sense only if Φ(0) is well defined. Although Equation 6.1 and Equation 6.5 are equivalent term-by-term, the requirement of Φ(0) being well defined and the nature of the limits in the appropriate function spaces may make one preferable over the other. Notice how the zeros of H(ω) determine the zeros of Φ(ω).
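
A rough numerical sketch of Equation 6.5 truncates the infinite product at K factors and takes Φ(0) = 1; again the length-4 Daubechies coefficients serve only as an example, and the helper names Phi and H are illustrative.

import numpy as np

# Length-4 Daubechies scaling coefficients, sum(h) = sqrt(2).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

def H(omega):
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * omega * n))

def Phi(omega, K=40):
    """Truncate the infinite product of Equation 6.5 at K factors, taking Phi(0) = 1."""
    val = 1.0 + 0.0j
    for k in range(1, K + 1):
        val *= H(omega / 2.0 ** k) / np.sqrt(2.0)
    return val

print(abs(Phi(0.0)))           # 1 by construction
print(abs(Phi(2 * np.pi)))     # essentially 0: H(pi) = 0 forces a zero of Phi here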

Refinement and Transition Matrices

There are two matrices that are particularly important to determining the properties of wavelet systems. The first is the refinement matrix M, which is obtained from the basic recursion equation Equation 6.1 by evaluating φ(t) at integers 22, 6, 7, 29, 30. This looks like a convolution matrix with the even (or odd) rows removed. Two particular submatrices that are used later in Section 6.10 to evaluate φ(t) on the dyadic rationals are illustrated for N=6 by

(6.6)
\sqrt{2}
\begin{bmatrix}
h(0) & 0 & 0 & 0 & 0 & 0 \\
h(2) & h(1) & h(0) & 0 & 0 & 0 \\
h(4) & h(3) & h(2) & h(1) & h(0) & 0 \\
0 & h(5) & h(4) & h(3) & h(2) & h(1) \\
0 & 0 & 0 & h(5) & h(4) & h(3) \\
0 & 0 & 0 & 0 & 0 & h(5)
\end{bmatrix}
\begin{bmatrix} \varphi(0) \\ \varphi(1) \\ \varphi(2) \\ \varphi(3) \\ \varphi(4) \\ \varphi(5) \end{bmatrix}
=
\begin{bmatrix} \varphi(0) \\ \varphi(1) \\ \varphi(2) \\ \varphi(3) \\ \varphi(4) \\ \varphi(5) \end{bmatrix}

which we write in matrix form as

(6.7)
\mathbf{M}_{0}\,\boldsymbol{\varphi} = \boldsymbol{\varphi}

with M0 being the 6×6 matrix of the h(n) and φ being the 6×1 vector of integer samples of φ(t). In other words, the vector φ with entries φ(k) is the eigenvector of M0 for an eigenvalue of unity.
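
This eigenvector can be computed directly. The sketch below uses the shorter length-4 Daubechies coefficients rather than the N = 6 case written out above; the construction of M0 from entries √2 h(2k − m) is the same.

import numpy as np

# Length-4 Daubechies scaling coefficients, sum(h) = sqrt(2).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
N = len(h)

# Refinement submatrix M0 with entries sqrt(2) h(2k - m), for k, m = 0, ..., N-1.
M0 = np.zeros((N, N))
for k in range(N):
    for m in range(N):
        if 0 <= 2 * k - m < N:
            M0[k, m] = np.sqrt(2) * h[2 * k - m]

# The integer samples phi(0), ..., phi(N-1) form the eigenvector for eigenvalue 1.
vals, vecs = np.linalg.eig(M0)
phi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
phi /= phi.sum()               # scale so the integer samples sum to one
print(phi)                     # approx [0, 1.3660, -0.3660, 0]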

The second submatrix is a shifted version illustrated by

(6.8)
\sqrt{2}
\begin{bmatrix}
h(1) & h(0) & 0 & 0 & 0 & 0 \\
h(3) & h(2) & h(1) & h(0) & 0 & 0 \\
h(5) & h(4) & h(3) & h(2) & h(1) & h(0) \\
0 & 0 & h(5) & h(4) & h(3) & h(2) \\
0 & 0 & 0 & 0 & h(5) & h(4) \\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} \varphi(0) \\ \varphi(1) \\ \varphi(2) \\ \varphi(3) \\ \varphi(4) \\ \varphi(5) \end{bmatrix}
=
\begin{bmatrix} \varphi(1/2) \\ \varphi(3/2) \\ \varphi(5/2) \\ \varphi(7/2) \\ \varphi(9/2) \\ \varphi(11/2) \end{bmatrix}

with the matrix being denoted M1. The general refinement matrix M is the infinite matrix of which M0 and M1 are partitions. If the matrix H is the convolution matrix for h(n), we can denote the M matrix by [↓2]H to indicate the down-sampled convolution matrix H. Clearly, for φ(t) to be defined on the dyadic rationals, M0 must have a unity eigenvalue.
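
Continuing the sketch idea above (again with the length-4 Daubechies coefficients standing in for the N = 6 illustration, and with the helper name refinement_submatrix introduced only for this example), once the integer samples are known, M1 produces φ(t) at the half-integers; repeating the process fills in the dyadic rationals.

import numpy as np

# Length-4 Daubechies scaling coefficients, sum(h) = sqrt(2).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
N = len(h)

def refinement_submatrix(offset):
    """Rows k of [sqrt(2) h(2k + offset - m)]: offset 0 gives M0, offset 1 gives M1."""
    M = np.zeros((N, N))
    for k in range(N):
        for m in range(N):
            if 0 <= 2 * k + offset - m < N:
                M[k, m] = np.sqrt(2) * h[2 * k + offset - m]
    return M

M0, M1 = refinement_submatrix(0), refinement_submatrix(1)

vals, vecs = np.linalg.eig(M0)
phi_int = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
phi_int /= phi_int.sum()        # phi at the integers 0, ..., N-1

phi_half = M1 @ phi_int         # phi at 1/2, 3/2, ..., (2N-1)/2
print(phi_half)                 # approx [0.9330, 0.0, 0.0670, 0.0]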

A third, less obvious but perhaps more important, matrix is called the transition matrix T and it is built up from the autocorrelation matrix of h(n). The transition matrix is constructed by

(6.9)
\mathbf{T} = \left[\downarrow 2\right] \mathbf{H}\mathbf{H}^{T}

This matrix (sometimes called the Lawton matrix) was used by Lawton (who originally called it the Wavelet-Galerkin matrix) 14 to derive necessary and sufficient conditions for an orthogonal wavelet basis. As we will see later in this chapter, its eigenvalues are also important in determining the properties of φ(t) and the associated wavelet system.
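
As a numerical sketch, T can be built entry-wise from the autocorrelation of h(n); the entry formula T[k, l] = Σ_n h(n) h(n + 2k − l) used below follows from applying Equation 6.1 to a(k) = ∫ φ(t) φ(t + k) dt under the Σ h(n) = √2 normalization of this chapter, and the index range and helper name p are choices made for this example.

import numpy as np

# Length-4 Daubechies scaling coefficients, sum(h) = sqrt(2).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
N = len(h)

def p(l):
    """Autocorrelation of the scaling coefficients: p(l) = sum_n h(n) h(n + l)."""
    return sum(h[n] * h[n + l] for n in range(N) if 0 <= n + l < N)

# Transition (Lawton) matrix acting on a(k) = integral of phi(t) phi(t + k) dt,
# indexed by k, l = -(N-2), ..., N-2, with entries T[k, l] = p(2k - l).
idx = range(-(N - 2), N - 1)
T = np.array([[p(2 * k - l) for l in idx] for k in idx])

print(np.round(np.linalg.eigvals(T), 6))
# An eigenvalue of 1 appears and, for these coefficients, it is simple,
# consistent with Lawton's condition for an orthogonal basis.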

6.2 Necessary Conditions

Theorem 1 If φ(t) ∈ L1 is a solution to the basic recursion equation Equation 6.1 and if ∫ φ(t) dt ≠ 0, then

(6.10)
\sum_{n} h(n) = \sqrt{2}

The proof of this theorem requires only an interchange in the order of a summation and integration (allowed in L1) but no assumption of orthogonality of the basis functions or any other properties of φ(t) other than a nonzero integral. The proof of this theorem and several of the others stated here are contained in Appendix A.

This theorem shows that, unlike linear constant coefficient differential equations, not just any set of coefficients will support a solution. The coefficients must satisfy the linear equation Equation 6.10. This is the weakest condition on the h(n).
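
For example, the length-4 Daubechies coefficients used in the sketches above satisfy this linear condition:

import numpy as np

# The length-4 Daubechies coefficients satisfy the linear condition of Equation 6.10.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
print(h.sum(), np.sqrt(2))     # both 1.41421356...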

Theorem 2 If φ(t) is an L1 solution to the basic recursion equation Equation 6.1 with ∫ φ(t) dt = 1, and

(6.11)
\sum_{\ell} \varphi(t - \ell) = 1

with Φ(π+2πk)≠0 for some k, then

(6.12)
\sum_{n} h(2n) = \sum_{n} h(2n+1) = \frac{1}{\sqrt{2}}

where Equation 6.11 may have to be a distributional sum. Conversely, if Equation 6.12 is satisfied, then Equation 6.11 is true.

Equation 6.12 is called the fundamental condition, and it is weaker than requiring orthogonality but stronger than Equation 6.10. It is simply a result of requiring that the equations resulting from evaluating Equation 6.1 on the integers be consistent. Equation 6.11 is called a partitioning of unity (or the Strang condition or the Schoenberg condition).

A similar theorem by Cavaretta, Dahmen and Micchelli 1 and by Jia 10 states that if φ ∈ Lp and the integer translates of φ(t) form a Riesz basis for the space they span, then ∑_n h(2n) = ∑_n h(2n+1).
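
A quick check of the fundamental condition for the example coefficients:

import numpy as np

# Even- and odd-indexed sums of the length-4 Daubechies coefficients are equal
# (each is 1/sqrt(2)), as required by Equation 6.12.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
print(h[0::2].sum(), h[1::2].sum(), 1 / np.sqrt(2))   # all 0.70710678...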

Theorem 3 If φ(t) is an L2 ∩ L1 solution to Equation 6.1 and if integer translates of φ(t) are orthogonal as defined by

(6.13)
\int \varphi(t)\,\varphi(t-k)\, dt = E\,\delta(k) =
\begin{cases} E & \text{if } k = 0 \\ 0 & \text{otherwise} \end{cases}

then

(6.14)
\sum_{n} h(n)\, h(n - 2k) = \delta(k) =
\begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{otherwise} \end{cases}

Notice that this does not depend on a particular normalization of φ(t).

If φ(t) is normalized by dividing by the square root of its energy E = ∫ |φ(t)|² dt, then integer translates of φ(t) are orthonormal, as defined by

(6.15)
\int \varphi(t)\,\varphi(t-k)\, dt = \delta(k)

This theorem shows that in order for the solutions of Equation 6.1 to be orthogonal under integer translation, it is necessary that the coefficients of the recursive equation be orthogonal themselves after decimating or downsampling by two. If φ(t) and/or h(n) are complex functions, complex conjugation must be used in Equation 6.13, Equation 6.14, and Equation 6.15.

Coefficients h(n) that satisfy Equation 6.14 are called a quadrature mirror filter (QMF) or conjugate mirror filter (CMF), and the condition Equation 6.14 is called the quadratic condition for obvious reasons.
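
The quadratic condition is easy to verify numerically for the example coefficients; this sketch checks Equation 6.14 for a few shifts k.

import numpy as np

# Quadratic (orthogonality) condition of Equation 6.14 for the length-4
# Daubechies coefficients: sum_n h(n) h(n - 2k) = 1 for k = 0 and 0 otherwise.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
N = len(h)
for k in (-1, 0, 1):
    val = sum(h[n] * h[n - 2 * k] for n in range(N) if 0 <= n - 2 * k < N)
    print(k, round(val, 12))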

Corollary 1 Under the assumptions of Theorem 3, the norm of h(n) is automatically unity.

(6.16)
\sum_{n} \left| h(n) \right|^{2} = 1

Not only must the sum of h(n) equal √2, but for orthogonality of the solution, the sum of the squares of h(n) must be one, both independent of any normalization of φ(t). This unity normalization of h(n) is the result of the √2 term in Equation 6.1.

Corollary 2 Under the assumptions of Theorem 3,