Digital Signal Processing and Digital Filter Design by C. Sidney Burrus

Chapter 1 Signals and Signal Processing Systems

1.1 Continuous-Time Signals

Signals occur in a wide range of physical phenomena. They might be human speech, blood pressure variations with time, seismic waves, radar and sonar signals, pictures or images, stress and strain signals in a building structure, stock market prices, a city's population, or temperature across a plate. These signals are often modeled or represented by a real or complex valued mathematical function of one or more variables. For example, speech is modeled by a function representing air pressure varying with time. The function acts as a mathematical analogy to the speech signal and, therefore, is called an analog signal. For these signals the independent variable is time, and it changes continuously, so the term continuous-time signal is also used. In our discussion, we talk of the mathematical function as the signal even though it is really a model or representation of the physical signal.

The description of signals in terms of their sinusoidal frequency content has proven to be one of the most powerful tools of continuous and discrete-time signal description, analysis, and processing. For that reason, we will start the discussion of signals with a development of Fourier transform methods. We will first review the continuous-time methods of the Fourier series (FS), the Fourier transform or integral (FT), and the Laplace transform (LT). Next, the discrete-time methods will be developed in more detail: the discrete Fourier transform (DFT) applied to finite-length signals, followed by the discrete-time Fourier transform (DTFT) for infinitely long signals, and ending with the z-transform, which allows the powerful tools of complex variable theory to be applied.

More recently, a new tool has been developed for the analysis of signals. Wavelets and wavelet transforms 9, 1, 5, 16, 15 are another, more flexible expansion system that can also describe continuous-time and discrete-time signals of finite or infinite duration. We will very briefly introduce the ideas behind wavelet-based signal analysis.

The Fourier Series

The problem of expanding a finite length signal in a trigonometric series was posed and studied in the late 1700's by renowned mathematicians such as Bernoulli, d'Alembert, Euler, Lagrange, and Gauss. Indeed, what we now call the Fourier series and the formulas for the coefficients were used by Euler in 1780. However, it was the presentation in 1807 and the paper in 1822 by Fourier stating that an arbitrary function could be represented by a series of sines and cosines that brought the problem to everyone's attention and started serious theoretical investigations and practical applications that continue to this day 8, 3, 11, 10, 7, 12. The theoretical work has been at the center of analysis and the practical applications have been of major significance in virtually every field of quantitative science and technology. For these reasons and others, the Fourier series is worth our serious attention in a study of signal processing.

Definition of the Fourier Series

We assume that the signal x(t) to be analyzed is well described by a real or complex valued function of a real variable t defined over a finite interval {0 ≤ t ≤ T}. The trigonometric series expansion of x(t) is given by

(1.1)
x(t) = \frac{a(0)}{2} + \sum_{k=1}^{\infty}\big[\, a(k)\cos(2\pi k t/T) + b(k)\sin(2\pi k t/T) \,\big]

where x_k(t) = cos(2πkt/T) and y_k(t) = sin(2πkt/T) are the basis functions for the expansion. The energy or power in an electrical, mechanical, etc. system is a function of the square of voltage, current, velocity, pressure, etc. For this reason, the natural setting for a representation of signals is the Hilbert space L²[0,T]. This modern formulation of the problem is developed in 6, 11. The sinusoidal basis functions in the trigonometric expansion form a complete orthogonal set in L²[0,T]. The orthogonality is easily seen from inner products

(1.2)
\int_{0}^{T} \cos(2\pi k t/T)\,\cos(2\pi l t/T)\, dt \;=\; \int_{0}^{T} \sin(2\pi k t/T)\,\sin(2\pi l t/T)\, dt \;=\; \frac{T}{2}\,\delta(k-l), \qquad k, l \ge 1

and

(1.3)
\int_{0}^{T} \cos(2\pi k t/T)\,\sin(2\pi l t/T)\, dt \;=\; 0

where δ(k) is the Kronecker delta function with δ(0) = 1 and δ(k) = 0 for k ≠ 0. Because of this, the kth coefficients in the series can be found by taking the inner product of x(t) with the kth basis functions. This gives for the coefficients

(1.4)
a(k) = \frac{2}{T}\int_{0}^{T} x(t)\,\cos(2\pi k t/T)\, dt

and

(1.5)
b(k) = \frac{2}{T}\int_{0}^{T} x(t)\,\sin(2\pi k t/T)\, dt

where T is the time interval of interest or the period of a periodic signal. Because of the orthogonality of the basis functions, a finite Fourier series formed by truncating the infinite series is an optimal least squared error approximation to x(t). If the finite series is defined by

(1.6)
x_{N}(t) = \frac{a(0)}{2} + \sum_{k=1}^{N}\big[\, a(k)\cos(2\pi k t/T) + b(k)\sin(2\pi k t/T) \,\big]

the squared error is

(1.7)
\varepsilon = \int_{0}^{T} \left|\, x(t) - x_{N}(t) \,\right|^{2} dt

which is minimized over all a(k) and b(k) by Equation 1.4 and Equation 1.5. This is an extraordinarily important property.
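
As a rough numerical illustration of Equation 1.4 through Equation 1.7, the following Python sketch approximates the coefficient integrals with simple Riemann sums and then evaluates the truncated series and its squared error. The ramp test signal, the sampling density, and the truncation lengths are arbitrary choices made only for this illustration.

import numpy as np

T = 2.0                                     # interval of definition (arbitrary choice)
t = np.linspace(0.0, T, 4000, endpoint=False)
dt = t[1] - t[0]
x = t.copy()                                # test signal x(t) = t on [0, T); any finite-energy signal would do

def fourier_coeffs(x, t, T, N):
    """Approximate a(k) and b(k) of Eq. 1.4 and Eq. 1.5 with Riemann sums, for k = 0..N."""
    a = np.array([(2.0 / T) * np.sum(x * np.cos(2 * np.pi * k * t / T)) * dt for k in range(N + 1)])
    b = np.array([(2.0 / T) * np.sum(x * np.sin(2 * np.pi * k * t / T)) * dt for k in range(N + 1)])
    return a, b

def truncated_series(a, b, t, T):
    """Evaluate the finite series x_N(t) of Eq. 1.6."""
    xN = np.full_like(t, a[0] / 2)
    for k in range(1, len(a)):
        xN = xN + a[k] * np.cos(2 * np.pi * k * t / T) + b[k] * np.sin(2 * np.pi * k * t / T)
    return xN

for N in (5, 20, 80):
    a, b = fourier_coeffs(x, t, T, N)
    err = np.sum(np.abs(x - truncated_series(a, b, t, T)) ** 2) * dt   # squared error of Eq. 1.7
    print(f"N = {N:3d}   squared error = {err:.4f}")

The printed error decreases as N grows, as the least squared error property requires.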

It follows that if x(t) ∈ L²[0,T], then the series converges to x(t) in the sense that ε → 0 as N → ∞ 6, 11. The question of point-wise convergence is more difficult. A sufficient condition that is adequate for most applications states: if f(x) is bounded, is piece-wise continuous, and has no more than a finite number of maxima over an interval, the Fourier series converges point-wise to f(x) at all points of continuity and to the arithmetic mean of the left and right limits at points of discontinuity. If, in addition, f(x) is continuous, the series converges uniformly at all points 11, 8, 3.

A useful condition 6, 11 states that if x(t) and its derivatives through the qth derivative are defined and have bounded variation, the Fourier coefficients a(k) and b(k) asymptotically drop off at least as fast as 1/k^{q+1} as k → ∞. This ties global rates of convergence of the coefficients to local smoothness conditions of the function.
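
For example (a standard illustration, with unit-amplitude waveforms assumed here rather than taken from the text): a square wave is bounded and of bounded variation but discontinuous, so q = 0, while a triangle wave is continuous and its first derivative has bounded variation, so q = 1. Their nonzero coefficients behave as

b_{\mathrm{square}}(k) = \frac{4}{\pi k} \sim \frac{1}{k}, \qquad a_{\mathrm{triangle}}(k) = \frac{8}{\pi^{2} k^{2}} \sim \frac{1}{k^{2}} \qquad (k\ \mathrm{odd}),

consistent with the 1/k^{q+1} estimate.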

The form of the Fourier series using both sines and cosines makes determination of the peak value or of the location of a particular frequency term difficult. A different form that explicitly gives the peak value of the sinusoid of that frequency and the location or phase shift of that sinusoid is given by

(1.8)
x(t) = \frac{d(0)}{2} + \sum_{k=1}^{\infty} d(k)\,\cos\big(2\pi k t/T + \theta(k)\big)

and, using Euler's relation and the usual electrical engineering notation of j = √(−1),

(1.9) e^{jx} = cos(x) + j sin(x),

the complex exponential form is obtained as

(1.10)
x(t) = \sum_{k=-\infty}^{\infty} c(k)\, e^{\,j 2\pi k t/T}

where

(1.11)
c(k) = \frac{a(k) - j\, b(k)}{2}, \qquad c(-k) = \frac{a(k) + j\, b(k)}{2} \quad (k \ge 1), \qquad c(0) = \frac{a(0)}{2}

The coefficient equation is

(1.12)
c(k) = \frac{1}{T}\int_{0}^{T} x(t)\, e^{-j 2\pi k t/T}\, dt

The coefficients in these three forms are related by

(1.13) |d(k)|² = 4 |c(k)|² = a(k)² + b(k)²

and

(1.14)
\theta(k) = \arg\{c(k)\} = -\tan^{-1}\!\big( b(k)/a(k) \big)

It is easier to evaluate a signal in terms of c(k) or of d(k) and θ(k) than in terms of a(k) and b(k). The pair d(k) and θ(k) is a polar representation of the complex value c(k), while a(k) and b(k) give the rectangular form. The exponential form is easier to work with mathematically.
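
The relations in Equation 1.11, Equation 1.13, and Equation 1.14 can be checked numerically; the following minimal Python sketch uses arbitrary coefficient values and assumes the c(k) = (a(k) − jb(k))/2 convention written above.

import numpy as np

# arbitrary coefficients for one harmonic k (illustration only)
a_k, b_k = 3.0, -4.0

c_k = (a_k - 1j * b_k) / 2                  # rectangular -> complex exponential form (Eq. 1.11)
d_k = np.sqrt(a_k**2 + b_k**2)              # peak value of the sinusoid (Eq. 1.13)
theta_k = np.angle(c_k)                     # phase, equal to -arctan(b/a) here (Eq. 1.14)

print(4 * abs(c_k)**2, a_k**2 + b_k**2)     # 25.0 25.0 : d^2 = 4|c|^2 = a^2 + b^2

# all three forms give the same kth term of the series
T, k = 1.0, 2
t = np.linspace(0.0, T, 5, endpoint=False)
w = 2 * np.pi * k * t / T
term_ab = a_k * np.cos(w) + b_k * np.sin(w)
term_dtheta = d_k * np.cos(w + theta_k)
term_c = (c_k * np.exp(1j * w) + np.conj(c_k) * np.exp(-1j * w)).real
print(np.allclose(term_ab, term_dtheta), np.allclose(term_ab, term_c))   # True True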

Although the function to be expanded is defined only over a specific finite region, the series converges to a function that is defined over the real line and is periodic. It is equal to the original function over the region of definition and is a periodic extension outside of the region. Indeed, one could artificially extend the given function at the outset and then the expansion would converge everywhere.

A Geometric View

It can be very helpful to develop a geometric view of the Fourier series where x(t) is considered to be a vector and the basis functions are the coordinate or basis vectors. The coefficients become the projections of x(t) on the coordinates. The ideas of a measure of distance, size, and orthogonality are important and the definition of error is easy to picture. This is done in 6, 11, 17 using Hilbert space methods.

Properties of the Fourier Series

The properties of the Fourier series are important in applying it to signal analysis and to interpreting it. The main properties are given here using the notation that the Fourier series of a real valued function x(t) over {0 ≤ t ≤ T} is given by F{x(t)} = c(k), and x̃(t) denotes the periodic extension of x(t).

  1. Linear: F{x+y}=F{x}+F{y}
    Idea of superposition. Also scalability: F{ax}=aF{x}

  2. Extensions of x(t): x̃(t) = x(t) for 0 ≤ t ≤ T and x̃(t + T) = x̃(t)
    x̃(t) is periodic.

  3. Even and Odd Parts: x(t) = u(t) + jv(t) and C(k) = A(k) + jB(k) = |C(k)| e^{jθ(k)}

    Table 1.1.
    u      v      A      B      |C|    θ
    even   0      even   0      even   0
    odd    0      0      odd    even   π/2
    0      even   0      even   even   π/2
    0      odd    odd    0      even   0
  4. Convolution: If continuous cyclic convolution is defined by

    (1.15)
    y(t) = x(t) \circledast h(t) = \int_{0}^{T} \tilde{x}(\tau)\, \tilde{h}(t-\tau)\, d\tau


    then F{x(t) ⊛ h(t)} = T F{x(t)} F{h(t)}

  5. Multiplication: If discrete convolution is defined by

    (1.16)
    e(n) = c_{1}(n) * c_{2}(n) = \sum_{m=-\infty}^{\infty} c_{1}(m)\, c_{2}(n-m)


    then F{x(t) h(t)} = F{x(t)} * F{h(t)}
    This property is the inverse of property 4 and vice versa.

  6. Parseval: (1/T) ∫_0^T |x(t)|² dt = Σ_{k=−∞}^{∞} |c(k)|²
    This property says the energy calculated in the time domain is the same as that calculated in the frequency (or Fourier) domain.

  7. Shift: F{x(t − t_0)} = c(k) e^{−j2πk t_0/T}
    A shift in the time domain results in a linear phase shift in the frequency domain.

  8. Modulate: F{x(t) e^{j2πKt/T}} = c(k − K), for integer K
    Modulation in the time domain results in a shift in the frequency domain. This property is the inverse of property 7.

  9. Orthogonality of basis functions:

    (1.17)
    \int_{0}^{T} e^{\,j 2\pi k t/T}\, e^{-j 2\pi l t/T}\, dt = T\, \delta(k-l)

    Orthogonality allows the calculation of coefficients using inner products in Equation 1.4 and Equation 1.5. It also allows Parseval's Theorem in property 6. A relaxed version of orthogonality is called “tight frames” and is important in over-specified systems, especially in wavelets. (A small numerical check of properties 6 and 9 is sketched after this list.)
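
As noted at the end of property 9, a small numerical check of the orthogonality relation and of Parseval's theorem is sketched below in Python. The test signal, the number of samples, and the number of harmonics retained are arbitrary choices, and the coefficient definition of Equation 1.12 is assumed.

import numpy as np

T = 1.0
t = np.linspace(0.0, T, 8192, endpoint=False)
dt = t[1] - t[0]

# Property 9: two different exponential basis functions are orthogonal over [0, T]
k, l = 3, 5
ip = np.sum(np.exp(1j * 2 * np.pi * k * t / T) * np.exp(-1j * 2 * np.pi * l * t / T)) * dt
print(abs(ip) < 1e-9)                       # True: the inner product is numerically zero for k != l

# Property 6 (Parseval): time-domain energy equals the sum of |c(k)|^2
x = np.exp(-t) * np.cos(6 * np.pi * t)      # arbitrary test signal on [0, T)
K = 200                                     # number of harmonics kept in the sum
ks = np.arange(-K, K + 1)
c = np.array([np.sum(x * np.exp(-1j * 2 * np.pi * kk * t / T)) * dt / T for kk in ks])
lhs = np.sum(np.abs(x) ** 2) * dt / T       # (1/T) times the integral of |x(t)|^2
rhs = np.sum(np.abs(c) ** 2)
print(lhs, rhs)                             # the two values agree up to the truncation at |k| = K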

Examples

  • An example of the Fourier series is the expansion of a square wave signal with period 2π. The expansion is

    (1.18)
    x(t) = \frac{4}{\pi}\Big[ \sin(t) + \frac{1}{3}\sin(3t) + \frac{1}{5}\sin(5t) + \cdots \Big] = \frac{4}{\pi}\sum_{k\ \mathrm{odd}} \frac{1}{k}\,\sin(kt)

    Because x(t) is odd, there are no cosine terms (all a(k) = 0) and, because of its symmetries, there are no even harmonics (the even-k terms are zero). The function is well defined and bounded, but its derivative is not; therefore, the coefficients drop off as 1/k.
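
As a rough numerical companion to Equation 1.18 (assuming a unit-amplitude square wave, which is an illustrative choice), the Python sketch below sums the series term by term. The least squared (rms) error keeps falling as more terms are added, while the slow 1/k decay of the coefficients shows up as the familiar Gibbs overshoot near the discontinuities.

import numpy as np

t = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
square = np.where(np.sin(t) >= 0, 1.0, -1.0)        # unit-amplitude square wave with period 2*pi

def partial_sum(t, N):
    """Sum the odd harmonics (4/(pi k)) sin(kt) of Eq. 1.18 up to k = N."""
    xN = np.zeros_like(t)
    for k in range(1, N + 1, 2):
        xN += 4.0 / (np.pi * k) * np.sin(k * t)
    return xN

for N in (1, 7, 31, 127):
    xN = partial_sum(t, N)
    rms = np.sqrt(np.mean((square - xN) ** 2))
    print(f"N = {N:4d}   rms error = {rms:.3f}   peak of x_N = {np.max(xN):.3f}")
# The rms error keeps decreasing, but the peak of the partial sum settles near 1.18
# rather than 1.0: the Gibbs overshoot caused by the discontinuities.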