Digital Signal Processing and Digital Filter Design by C. Sidney Burrus

Chapter 4 Digital Filter Structures and Implementation

4.1 Block, Multi-rate, Multi-dimensional Processing and Distributed Arithmetic

Introduction

The partitioning of long or infinite strings of data into shorter sections or blocks has been used to allow application of the FFT to realize ongoing or continuous convolution 40, 25. These notes develop the idea of block processing and show that it is a generalization of the overlap-add and overlap-save methods 40, 22. They further generalize the idea to a multidimensional formulation of convolution 1, 10. Moving in the opposite direction, it is shown that, rather than partitioning a string of scalars into blocks and then into blocks of blocks, one can partition a scalar number into blocks of bits and then include the operation of multiplication in the signal processing formulation. This is called distributed arithmetic 9 and, since it describes operations at the bit level, is completely general. These notes try to present a coherent development of these ideas.

Block Signal Processing

In this section the usual convolution and recursion that implement FIR and IIR discrete-time filters are reformulated in terms of vectors and matrices. Because the same data is partitioned and grouped in a variety of ways, it is important to have a consistent notation in order to be clear. The nth element of a data sequence is expressed $h(n)$ or, in some cases to simplify, $h_n$. A block or finite-length column vector is denoted $\underline{x}_n$, with $n$ indicating the nth block or section of a longer vector. A matrix, square or rectangular, is indicated by an upper-case letter such as $H$, with a subscript if appropriate.

Block Convolution

The operation of a finite impulse response (FIR) filter is described by a finite convolution as

(4.1)
$$y(n) \;=\; \sum_{k=0}^{L-1} h(k)\, x(n-k)$$

where x(n) is causal, h(n) is causal and of length L, and the time index n goes from zero to infinity or some large value. With a change of index variables this becomes

(4.2)
$$y(n) \;=\; \sum_{k=0}^{n} h(n-k)\, x(k)$$

which can be expressed as a matrix operation by

(4.3)
$$\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ \vdots \end{bmatrix}
\;=\;
\begin{bmatrix}
h_0 & 0 & 0 & 0 & \cdots \\
h_1 & h_0 & 0 & 0 & \\
h_2 & h_1 & h_0 & 0 & \\
h_3 & h_2 & h_1 & h_0 & \\
\vdots & & & & \ddots
\end{bmatrix}
\begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ \vdots \end{bmatrix}
\quad\text{or}\quad Y = H\,X
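
As a concrete check of Equation 4.3, the short sketch below builds the lower triangular Toeplitz matrix H and verifies that Y = HX reproduces direct convolution. It assumes NumPy and SciPy; the impulse response and input values are arbitrary illustrative choices, not taken from the text.

    import numpy as np
    from scipy.linalg import toeplitz

    h = np.array([1.0, 0.5, 0.25, 0.125])    # a length-L impulse response (illustrative)
    x = np.random.randn(12)                   # a causal input segment

    # H[i, j] = h(i - j): lower triangular Toeplitz matrix of impulse response values
    H = toeplitz(np.concatenate([h, np.zeros(len(x) - len(h))]), np.zeros(len(x)))

    y_matrix = H @ x                           # Y = H X as in Equation 4.3
    y_direct = np.convolve(h, x)[:len(x)]      # first len(x) samples of the direct convolution
    assert np.allclose(y_matrix, y_direct)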

The H matrix of impulse response values is partitioned into N by N square submatrices and the X and Y vectors are partitioned into length-N blocks or sections. This is illustrated for N = 3 by

(4.4)
$$H \;=\;
\begin{bmatrix}
h_0 & 0 & 0 & & & \\
h_1 & h_0 & 0 & & & \\
h_2 & h_1 & h_0 & & & \\
h_3 & h_2 & h_1 & h_0 & 0 & 0 \\
h_4 & h_3 & h_2 & h_1 & h_0 & 0 \\
h_5 & h_4 & h_3 & h_2 & h_1 & h_0 \\
\vdots & & & & & \ddots
\end{bmatrix}
\;=\;
\begin{bmatrix}
H_0 & 0 & 0 & \cdots \\
H_1 & H_0 & 0 & \\
H_2 & H_1 & H_0 & \\
\vdots & & & \ddots
\end{bmatrix}$$
(4.5)
$$\underline{x}_0 = \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix},\quad
\underline{x}_1 = \begin{bmatrix} x_3 \\ x_4 \\ x_5 \end{bmatrix},\ \ldots
\qquad
\underline{y}_0 = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix},\quad
\underline{y}_1 = \begin{bmatrix} y_3 \\ y_4 \\ y_5 \end{bmatrix},\ \ldots$$

Substituting these definitions into Equation 4.3 gives

(4.6)
$$\begin{bmatrix} \underline{y}_0 \\ \underline{y}_1 \\ \underline{y}_2 \\ \vdots \end{bmatrix}
\;=\;
\begin{bmatrix}
H_0 & 0 & 0 & \cdots \\
H_1 & H_0 & 0 & \\
H_2 & H_1 & H_0 & \\
\vdots & & & \ddots
\end{bmatrix}
\begin{bmatrix} \underline{x}_0 \\ \underline{x}_1 \\ \underline{x}_2 \\ \vdots \end{bmatrix}$$

The general expression for the nth output block is

(4.7)
$$\underline{y}_n \;=\; \sum_{k=0}^{n} H_{n-k}\, \underline{x}_k$$

which is a vector or block convolution. Since the matrix-vector multiplication within the block convolution is itself a convolution, Equation 4.7 is a sort of convolution of convolutions, and the finite-length matrix-vector multiplication can be carried out using the FFT or other fast convolution methods.
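
The block convolution of Equation 4.7 can be checked numerically in the same way. The sketch below, with illustrative coefficients and variable names and NumPy assumed, forms the N by N submatrices H_k and the length-N blocks x_k for N = 3 and confirms that the block sums reproduce the scalar convolution.

    import numpy as np

    N = 3
    h = np.array([1.0, -0.5, 0.25, 0.75, 0.3, -0.2])   # illustrative impulse response
    x = np.random.randn(4 * N)                          # four length-N input blocks

    num_blocks = len(x) // N
    hp = np.concatenate([h, np.zeros((num_blocks + 1) * N - len(h))])

    def h_val(n):
        return hp[n] if 0 <= n < len(hp) else 0.0

    # H_k[i, j] = h(kN + i - j), the (k, 0) submatrix of the big Toeplitz matrix H
    H_blocks = [np.array([[h_val(k * N + i - j) for j in range(N)]
                          for i in range(N)]) for k in range(num_blocks)]
    x_blocks = x.reshape(num_blocks, N)

    # Equation 4.7: y_n = sum_k H_{n-k} x_k
    y_blocks = [sum(H_blocks[n - k] @ x_blocks[k] for k in range(n + 1))
                for n in range(num_blocks)]

    y_direct = np.convolve(h, x)[:len(x)]
    assert np.allclose(np.concatenate(y_blocks), y_direct)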

The equation for one output block can be written as the product

(4.8)
$$\underline{y}_n \;=\;
\begin{bmatrix} H_n & H_{n-1} & \cdots & H_1 & H_0 \end{bmatrix}
\begin{bmatrix} \underline{x}_0 \\ \underline{x}_1 \\ \vdots \\ \underline{x}_n \end{bmatrix}$$

and the effects of one input block can be written

(4.9)
$$\begin{bmatrix} H_0 \\ H_1 \\ H_2 \\ \vdots \end{bmatrix} \underline{x}_n$$

which gives the contributions of the single input block $\underline{x}_n$ to the output blocks $\underline{y}_n, \underline{y}_{n+1}, \underline{y}_{n+2}, \ldots$

These are generalized statements of overlap-save and overlap-add 40, 22. The block length can be longer, shorter, or equal to the filter length.
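
The overlap-add case of this idea is sketched below, with the matrix products carried out by FFT-based fast convolution of each input block. The function name, FFT-size choice, and test values are illustrative only and assume NumPy; this is a sketch of the principle rather than the text's exact procedure.

    import numpy as np

    def overlap_add(h, x, N):
        L = len(h)
        nfft = 1
        while nfft < N + L - 1:                 # FFT size large enough for linear convolution
            nfft *= 2
        Hf = np.fft.rfft(h, nfft)
        y = np.zeros(len(x) + L - 1)
        for start in range(0, len(x), N):
            block = x[start:start + N]
            yb = np.fft.irfft(np.fft.rfft(block, nfft) * Hf, nfft)
            y[start:start + len(block) + L - 1] += yb[:len(block) + L - 1]
        return y

    h = np.array([1.0, 0.5, -0.25, 0.125])
    x = np.random.randn(50)
    assert np.allclose(overlap_add(h, x, N=8), np.convolve(h, x))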

Block Recursion

Although less well known, IIR filters can be implemented with block processing 20, 12, 42, 7, 8. The block form of an IIR filter is developed in much the same way as for the block convolution implementation of the FIR filter. The general constant-coefficient difference equation which describes an IIR filter with recursive coefficients $a_l$, convolution coefficients $b_k$, input signal x(n), and output signal y(n) is given by

(4.10)
$$\sum_{l} a_l\, y_{n-l} \;=\; \sum_{k} b_k\, x_{n-k}$$

using both functional notation and subscripts, depending on which is easier and clearer. The impulse response h(n) is

(4.11)
$$\sum_{l} a_l\, h_{n-l} \;=\; b_n$$

which can be written in matrix operator form

(4.12)
$$\begin{bmatrix}
a_0 & 0 & 0 & 0 & \cdots \\
a_1 & a_0 & 0 & 0 & \\
a_2 & a_1 & a_0 & 0 & \\
0 & a_2 & a_1 & a_0 & \\
\vdots & & & & \ddots
\end{bmatrix}
\begin{bmatrix} h_0 \\ h_1 \\ h_2 \\ h_3 \\ \vdots \end{bmatrix}
\;=\;
\begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ 0 \\ \vdots \end{bmatrix}$$

In terms of N by N submatrices and length-N blocks, this becomes

(4.13)
$$\begin{bmatrix}
A_0 & 0 & 0 & \cdots \\
A_1 & A_0 & 0 & \\
0 & A_1 & A_0 & \\
\vdots & & & \ddots
\end{bmatrix}
\begin{bmatrix} \underline{h}_0 \\ \underline{h}_1 \\ \underline{h}_2 \\ \vdots \end{bmatrix}
\;=\;
\begin{bmatrix} \underline{b}_0 \\ \underline{0} \\ \underline{0} \\ \vdots \end{bmatrix}$$

From this formulation, a block recursive equation can be written that will generate the impulse response block by block.

(4.14)
$$A_0\, \underline{h}_{n+1} + A_1\, \underline{h}_n \;=\; \underline{0}\,, \qquad n \ge 0$$
(4.15)
$$\underline{h}_{n+1} \;=\; -\,A_0^{-1} A_1\, \underline{h}_n \;=\; K\, \underline{h}_n$$

with initial conditions given by

(4.16)
$$\underline{h}_0 \;=\; A_0^{-1}\, \underline{b}_0$$

This can also be written to generate the square partitions of the impulse response matrix by

(4.17)
$$H_{n+1} \;=\; K\, H_n\,, \qquad n \ge 1$$

with initial conditions given by

(4.18)
$$H_0 \;=\; A_0^{-1} B_0\,, \qquad H_1 \;=\; K\, H_0 + A_0^{-1} B_1$$

and $K = -A_0^{-1} A_1$, where $B_0$ and $B_1$ are the N by N partitions of the lower triangular matrix of numerator coefficients $b_k$, defined in the same way as $A_0$ and $A_1$. This recursively generates square submatrices of H similar to those defined in Equation 4.4 and Equation 4.6 and shows the basic structure of the dynamic system.
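
A small numerical sketch of this block-by-block generation of the impulse response is given below. It assumes NumPy and SciPy and uses an illustrative second-order filter; it builds A_0, A_1, and K, generates the blocks by the recursion, and checks them against the impulse response computed by scipy.signal.lfilter.

    import numpy as np
    from scipy.signal import lfilter

    a = np.array([1.0, -0.9, 0.5])       # denominator (recursive) coefficients (illustrative)
    b = np.array([1.0, 0.4])             # numerator (convolution) coefficients (illustrative)
    N = 4                                # block length, larger than both orders

    def a_val(n):
        return a[n] if 0 <= n < len(a) else 0.0

    A0 = np.array([[a_val(i - j) for j in range(N)] for i in range(N)])
    A1 = np.array([[a_val(N + i - j) for j in range(N)] for i in range(N)])
    K = -np.linalg.solve(A0, A1)         # K = -A0^{-1} A1

    h_block = np.linalg.solve(A0, np.concatenate([b, np.zeros(N - len(b))]))  # h_0 = A0^{-1} b_0
    blocks = [h_block]
    for _ in range(4):                   # h_{n+1} = K h_n generates the following blocks
        h_block = K @ h_block
        blocks.append(h_block)

    h_blockwise = np.concatenate(blocks)
    impulse = np.zeros(len(h_blockwise))
    impulse[0] = 1.0
    assert np.allclose(h_blockwise, lfilter(b, a, impulse))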

Next, we develop the recursive formulation for a general input as described by the scalar difference equation Equation 4.10 and in matrix operator form by

(4.19)
$$\begin{bmatrix}
a_0 & 0 & 0 & 0 & \cdots \\
a_1 & a_0 & 0 & 0 & \\
a_2 & a_1 & a_0 & 0 & \\
0 & a_2 & a_1 & a_0 & \\
\vdots & & & & \ddots
\end{bmatrix}
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \\ \vdots \end{bmatrix}
\;=\;
\begin{bmatrix}
b_0 & 0 & 0 & 0 & \cdots \\
b_1 & b_0 & 0 & 0 & \\
b_2 & b_1 & b_0 & 0 & \\
0 & b_2 & b_1 & b_0 & \\
\vdots & & & & \ddots
\end{bmatrix}
\begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \\ \vdots \end{bmatrix}
\quad\text{or}\quad A\,Y = B\,X

which, after substituting the definitions of the submatrices and assuming the block length is larger than the order of the numerator or denominator, becomes

(4.20)
$$\begin{bmatrix}
A_0 & 0 & 0 & \cdots \\
A_1 & A_0 & 0 & \\
0 & A_1 & A_0 & \\
\vdots & & & \ddots
\end{bmatrix}
\begin{bmatrix} \underline{y}_0 \\ \underline{y}_1 \\ \underline{y}_2 \\ \vdots \end{bmatrix}
\;=\;
\begin{bmatrix}
B_0 & 0 & 0 & \cdots \\
B_1 & B_0 & 0 & \\
0 & B_1 & B_0 & \\
\vdots & & & \ddots
\end{bmatrix}
\begin{bmatrix} \underline{x}_0 \\ \underline{x}_1 \\ \underline{x}_2 \\ \vdots \end{bmatrix}$$

From the partitioned rows of Equation 4.20, one can write the block recursive relation

(4.21)
$$A_0\, \underline{y}_{n+1} + A_1\, \underline{y}_n \;=\; B_0\, \underline{x}_{n+1} + B_1\, \underline{x}_n$$

Solving for $\underline{y}_{n+1}$ gives

(4.22)
$$A_0\, \underline{y}_{n+1} \;=\; -\,A_1\, \underline{y}_n + B_0\, \underline{x}_{n+1} + B_1\, \underline{x}_n$$
(4.23)
$$\underline{y}_{n+1} \;=\; K\, \underline{y}_n + A_0^{-1} B_0\, \underline{x}_{n+1} + A_0^{-1} B_1\, \underline{x}_n$$

which is a first-order vector difference equation 7, 8. This is the fundamental block recursive algorithm that implements the original scalar difference equation in Equation 4.10. It has several important characteristics, and a short numerical sketch of the recursion follows the list below.

  1. The block recursive formulation is similar to a state variable equation but the states are blocks or sections of the output 8, 28, 46, 47.

  2. The eigenvalues of K are the poles of the original scalar problem raised to the Nth power, plus others that are zero. The longer the block length, the "more stable" the filter is, i.e., the further the poles are from the unit circle 7, 8, 46, 4, 6.

  3. If the block length were shorter than the denominator, the vector difference equation would be higher than first order; there would be a nonzero $A_2$. If the block length were shorter than the numerator, there would be a nonzero $B_2$ and a higher-order block convolution operation. If the block length were one, the order of the vector equation would be the same as that of the scalar equation; they would be the same equation.

  4. The actual arithmetic that goes into the calculation of the output is partly recursive and partly convolution. The longer the block, the more the output is calculated by convolution and the more arithmetic is required.

  5. It is possible to remove the zero eigenvalues in K by making K rectangular or square and N by N. This results in a form even more similar to a state variable formulation 35, 8. This is briefly discussed below in The Z-Transform.

  6. There are several ways of using the FFT in the calculation of the various matrix products in Equation 4.22 and in Equation 4.24 and Equation 4.25. Each has some arithmetic advantage for various forms and orders of the original equation. It is also possible to implement some of the operations using rectangular transforms, number theoretic transforms, distributed arithmetic, or other efficient convolution algorithms 8, 46, 3, 11, 45, 36.

  7. By choosing the block length equal to the period, a periodically time-varying filter can be made block time-invariant. In other words, all of the time-varying characteristics are moved to the finite matrix multiplies, which leaves the time-invariant properties at the block level. This allows z-transform and other time-invariant methods to be used for stability analysis and frequency response analysis 31, 32. It also turns out to be related to filter banks and multi-rate filters 30, 29, 14.
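
The sketch below implements the first-order block recursion of Equation 4.23 for a general input and checks it against scipy.signal.lfilter. The filter coefficients, block length, and helper names are illustrative assumptions; NumPy and SciPy are assumed.

    import numpy as np
    from scipy.signal import lfilter

    a = np.array([1.0, -0.9, 0.5])        # denominator coefficients (illustrative)
    b = np.array([1.0, 0.4, 0.1])         # numerator coefficients (illustrative)
    N = 4                                 # block length, larger than the filter order

    def tval(c, n):
        return c[n] if 0 <= n < len(c) else 0.0

    A0 = np.array([[tval(a, i - j) for j in range(N)] for i in range(N)])
    A1 = np.array([[tval(a, N + i - j) for j in range(N)] for i in range(N)])
    B0 = np.array([[tval(b, i - j) for j in range(N)] for i in range(N)])
    B1 = np.array([[tval(b, N + i - j) for j in range(N)] for i in range(N)])

    K  = -np.linalg.solve(A0, A1)         # K = -A0^{-1} A1
    G0 = np.linalg.solve(A0, B0)          # A0^{-1} B0
    G1 = np.linalg.solve(A0, B1)          # A0^{-1} B1

    x = np.random.randn(6 * N)
    xb = x.reshape(-1, N)

    y_prev = np.linalg.solve(A0, B0 @ xb[0])   # first block: A0 y_0 = B0 x_0
    out = [y_prev]
    for n in range(len(xb) - 1):               # Equation 4.23, block by block
        y_prev = K @ y_prev + G0 @ xb[n + 1] + G1 @ xb[n]
        out.append(y_prev)

    assert np.allclose(np.concatenate(out), lfilter(b, a, x))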

Block State Formulation

It is possible to reduce the size of the matrix operators in the block recursive description Equation 4.23 to give a form even more like a state variable equation 35, 8, 47. If K in Equation 4.23 has several zero eigenvalues, it should be possible to reduce the size of K until it has full rank. That was done in 8 and the result is

(4.24)
$$\underline{q}_{n+1} \;=\; K_1\, \underline{q}_n + K_2\, \underline{x}_n$$
(4.25)
$$\underline{y}_n \;=\; \widetilde{H}_1\, \underline{q}_n + H_0\, \underline{x}_n$$

where $H_0$ is the same N by N convolution matrix, $\widetilde{H}_1$ is a rectangular L by N partition of the convolution matrix H, $K_1$ is a square N by N matrix of full rank, and $K_2$ is a rectangular N by L matrix.

This is now a minimal state equation whose input and output are blocks of the original input and output. Some of the matrix multiplications can be carried out using the FFT or other techniques.
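
The eigenvalue statement behind this reduction can be checked numerically. The sketch below, with an illustrative second-order denominator and NumPy assumed, confirms that the nonzero eigenvalues of K are the scalar poles raised to the Nth power and that the rank of K equals the filter order, which is what permits reducing K to a full-rank matrix.

    import numpy as np

    a = np.array([1.0, -0.9, 0.5])       # second-order denominator (illustrative)
    N = 5                                # block length

    def a_val(n):
        return a[n] if 0 <= n < len(a) else 0.0

    A0 = np.array([[a_val(i - j) for j in range(N)] for i in range(N)])
    A1 = np.array([[a_val(N + i - j) for j in range(N)] for i in range(N)])
    K = -np.linalg.solve(A0, A1)

    poles = np.roots(a)                  # poles of the scalar filter
    eig_K = np.linalg.eigvals(K)
    nonzero = eig_K[np.abs(eig_K) > 1e-8]

    # the nonzero eigenvalues are the poles raised to the N-th power; the rest are zero
    assert np.allclose(np.sort_complex(nonzero), np.sort_complex(poles ** N))
    assert np.linalg.matrix_rank(K) == len(a) - 1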

Block Implementations of Digital Filters

The advantage of the block convolution and block recursion implementations is a possible improvement in arithmetic efficiency from using the FFT or other fast convolution methods for some of the multiplications in Equation 4.7 or Equation 4.22 33, 34. There is also a reduction of quantization effects due to an effective decrease in the magnitude of the eigenvalues, and the possibility of easier parallel implementation for IIR filters. The disadvantages are a delay of at least one block length and an increased memory requirement.

These methods could also be used in the various filtering methods for evaluating the DFT. These include the chirp z-transform, Rader's method, and Goertzel's algorithm.
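
As one example of such a filtering method, the sketch below gives a conventional implementation of Goertzel's algorithm, with illustrative code and test values: a single DFT value X(k) is obtained by running a real second-order recursion over the data and finishing with one complex multiplication.

    import numpy as np

    def goertzel(x, k):
        N = len(x)
        w = 2 * np.pi * k / N
        coeff = 2 * np.cos(w)
        s1 = s2 = 0.0
        for sample in x:
            s0 = sample + coeff * s1 - s2   # real second-order recursion
            s2, s1 = s1, s0
        return s1 * np.exp(1j * w) - s2     # one complex multiply gives X(k)

    x = np.random.randn(16)
    k = 3
    assert np.allclose(goertzel(x, k), np.fft.fft(x)[k])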

Multidimensional Formulation

This process of partitioning the data vectors and the operator matrices can be continued by partitioning Equation 4.7 and Equation 4.21 and creating blocks of blocks to give a higher dimensional structure. One should use index mapping ideas rather than partitioned matrices for this approach 1, 10.

Periodically Time-Varying Discrete-Time Systems

Most time-varying systems are periodically time-varying, and this allows special results to be obtained. If the block length is set equal to the period of the time variations, the resulting block equations are time-invariant, and all of the time-varying characteristics are contained in the matrix multiplications. This allows some of the tools of time-invariant systems to be used on periodically time-varying systems.
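
The sketch below illustrates this blocking for a periodically time-varying FIR filter with made-up period-2 coefficients: when the block length equals the period, the block description uses fixed matrices H_r, i.e. it is block time-invariant. Names and values are illustrative and NumPy is assumed.

    import numpy as np

    P = 2                                  # period of the time variation = block length
    taps = np.array([[1.0, 0.5, 0.25],     # coefficients applied at even time indices
                     [0.8, -0.3, 0.1]])    # coefficients applied at odd time indices
    L = taps.shape[1]

    def tv_filter(x):
        # direct implementation of the periodically time-varying FIR filter
        y = np.zeros(len(x))
        for n in range(len(x)):
            for k in range(L):
                if n - k >= 0:
                    y[n] += taps[n % P, k] * x[n - k]
        return y

    def tap(i, m):
        return taps[i, m] if 0 <= m < L else 0.0

    # fixed block matrices H_r[i, j] = h_i(rP + i - j); the block filter is time invariant
    R = (L + P - 2) // P + 1
    H = [np.array([[tap(i, r * P + i - j) for j in range(P)]
                   for i in range(P)]) for r in range(R)]

    x = np.random.randn(10 * P)
    xb = x.reshape(-1, P)
    yb = np.array([sum(H[r] @ xb[m - r] for r in range(R) if m - r >= 0)
                   for m in range(len(xb))])
    assert np.allclose(yb.ravel(), tv_filter(x))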

The periodically time-varying (PTV) system is analyzed in 44, 14, 13, 31; the filter analysis and design problem, which includes the decimation-interpolation structure, is addressed in 17, 32, 30; and the bandwidth compression problem in 29. These structures can take the form of filter banks 41.

Multirate Filters, Filter Banks, and Wavelets

Another area that is related to periodically time-varying systems and to block processing is filter banks 41, 21. Recently, the area of perfect reconstruction filter banks has been further developed and shown to be closely related to wavelet-based signal analysis 14, 16, 19, 41, 5. The filter bank structure has several forms, with the polyphase and lattice being particularly interesting. Further work on multirate filters can be found in 27, 15, 2, 26, 37, 18.

An idea that has some elements of multirate filters, perfect reconstruction, and distributed arithmetic is given in 23, 24. Parks has noted that the design of multirate filters has some elements in common with complex approximation and with 2-D filter design 38, 39 and is looking at using Tang's method for these designs.

Distributed Arithmetic

Rather than grouping the individual scalar data values in a discrete-time signal into blocks, the scalar values can be partitioned into groups of bits. Because multiplication of integers, multiplication of polynomials, and discrete-time convolution are the same operations, the bit-level description of multiplication can be mixed with the convolution of the signal processing. The resulting structure is called distributed arithmetic 9,