Universal Algorithms in Signal Processing and Communications by Denver Greene


Chapter 1 Introduction

This work is licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/3.0/


1.1 Motivation

This course focuses on algorithms in signal processing and communications. When training data is available, we can first estimate the source statistics from historical data and then apply a Bayesian scheme, which relies on those statistics being known. A similar Bayesian approach can be used when the statistics are only approximately known.

For example, consider lossy image compression. It is well known that the wavelet coefficients of images follow a distribution that resembles a Laplace density,

(1.1) $f_X(x) = c_1 e^{-c_2 |x|}$,

and the coefficients are approximately independent and identically distributed (i.i.d.). A well-known approach to lossy image compression is to first compute the wavelet coefficients and then compress them with a lossy compressor designed for i.i.d. Laplace coefficients [3]. In this approach, the training consists of the body of work establishing that wavelet coefficients are approximately i.i.d. Laplace, and the Bayesian algorithm is a lossy compressor designed for this distribution.
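To make this concrete, here is a minimal sketch (assuming NumPy; the step size and scale are hypothetical choices, not the design from [3]) of compressing i.i.d. Laplace samples, standing in for wavelet coefficients, with a uniform scalar quantizer whose entropy code is matched to a fitted Laplace model:

```python
import numpy as np

# Minimal sketch: model coefficients as i.i.d. Laplace and estimate the
# rate of a uniform scalar quantizer with an ideal entropy code matched
# to the fitted Laplace model. Parameters here are illustrative.
rng = np.random.default_rng(0)
b = 2.0                                  # Laplace scale: f(x) = exp(-|x|/b) / (2b)
coeffs = rng.laplace(loc=0.0, scale=b, size=100_000)  # stand-in for wavelet coefficients

step = 1.0                               # quantizer step size (hypothetical choice)
q = np.round(coeffs / step).astype(int)  # uniform scalar quantization

b_hat = np.mean(np.abs(coeffs))          # ML estimate of the Laplace scale

def bin_prob(k):
    # Probability mass the fitted Laplace density puts on quantizer bin k.
    lo, hi = (k - 0.5) * step, (k + 0.5) * step
    cdf = lambda x: np.where(x < 0, 0.5 * np.exp(x / b_hat),
                             1.0 - 0.5 * np.exp(-x / b_hat))
    return cdf(hi) - cdf(lo)

rate = np.mean(-np.log2(bin_prob(q)))    # ideal codelength, bits per coefficient
mse = np.mean((coeffs - q * step) ** 2)  # quantization distortion
print(f"rate ~ {rate:.2f} bits/coeff, MSE ~ {mse:.3f}")
```

Once the model is trusted, the only free design choice left is the quantizer step, which trades rate against distortion.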

However, sometimes the statistics of the input (often called the source) are completely unknown, there is no training data, or the statistics are highly uncertain. For example, in lossless data compression we may not know whether a file is an executable, source code, a DNA sequence, financial records, or text. And even if we know that the file contains text, different chapters of the same book by the same author may have different statistics.

For this latter class of problems, the Bayesian approach cannot be applied, because there is no training data. A good alternative is to use universal algorithms [5], [4]. These algorithms perform well irrespective of the source statistics; in fact, in some cases they achieve the theoretically optimal performance, with equality, in an appropriate asymptotic framework.

In lossless compression, universal algorithms have had great impact. For example, the Lempel-Ziv family of algorithms [5], [6] asymptotically achieves the entropy rate [2], [1], which is the best compression ratio achievable in lossless compression, despite not knowing the input statistics. Moreover, the Lempel-Ziv algorithms admit efficient implementations.
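As a taste of what is to come in Chapter 7, here is a minimal sketch of LZ78-style incremental parsing [6]: the input is split into the shortest phrases not seen before, and each phrase is coded as a pair (index of its longest previously seen prefix, next symbol).

```python
# Minimal sketch of LZ78-style parsing [6]: split the input into the
# shortest phrases not seen before; each phrase is encoded as
# (index of its longest previously seen prefix, next symbol).
def lz78_parse(s):
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    phrases, cur = [], ""
    for ch in s:
        if cur + ch in dictionary:
            cur += ch             # extend the current phrase
        else:
            phrases.append((dictionary[cur], ch))
            dictionary[cur + ch] = len(dictionary)
            cur = ""
    if cur:                       # flush any trailing (already seen) phrase
        phrases.append((dictionary[cur[:-1]], cur[-1]))
    return phrases

print(lz78_parse("abababab"))
# [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (0, 'b')]
```

Because the phrases grow as the dictionary fills, the number of bits per input symbol shrinks, which is the mechanism behind the asymptotic optimality mentioned above.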

1.2 Overview

The goal of this course is to study universal algorithms, starting from the well-trodden material in lossless compression and later discussing universal algorithms in other areas of communications and signal processing. Let us overview the material studied during the course. We begin in Chapter 2 with a review of information theory material, including typical sequences and source coding, in order to provide sufficient background. Next, Chapter 3 describes statistical models for data. Chapter 4 then presents techniques for universal lossless compression of parametric sources. One approach to universal compression of parametric sources is minimum description length (MDL), which compresses the data so as to minimize the sum of the complexity of the model and the complexity of the data given the parameters of the model. Minimum description length has been used with context tree models to provide universal contextual prediction; context tree approaches are detailed in Chapter 5. We then switch gears in Chapter 6 and move beyond lossless compression; universal lossy compression and signal reconstruction are described in detail. Finally, Chapter 7 describes Lempel-Ziv algorithms for universal lossless compression based on parsing an input sequence. For convenience, notation is summarized in Chapter 8.
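To preview the MDL idea, here is a minimal sketch (not the scheme developed in Chapter 4) of a two-part code for an i.i.d. Bernoulli source: the total codelength is the bits spent describing the estimated parameter, roughly $(1/2)\log_2 n$ for a parameter quantized to $\sqrt{n}$ levels, plus the bits needed to code the data under that parameter.

```python
import math

# Minimal two-part MDL sketch for an i.i.d. Bernoulli source:
# total codelength = bits for the model parameter + bits for the data
# given that parameter. Quantizing the bias to ~sqrt(n) levels costs
# about (1/2) log2(n) bits; the clamping below is illustrative.
def two_part_mdl_bits(bits):
    n = len(bits)
    k = sum(bits)
    theta = min(max(k / n, 1 / (2 * n)), 1 - 1 / (2 * n))  # avoid log(0)
    param_bits = 0.5 * math.log2(n)                        # describe theta
    data_bits = -(k * math.log2(theta) + (n - k) * math.log2(1 - theta))
    return param_bits + data_bits

seq = [1, 1, 0, 1, 1, 1, 0, 1] * 50                        # 400 biased bits
print(f"{two_part_mdl_bits(seq):.1f} bits vs {len(seq)} raw bits")
```

For this biased sequence the two-part codelength is well below the raw length, and the parameter cost becomes negligible as $n$ grows, which is the trade-off MDL formalizes.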

This manuscript is a work in progress, and we expect to expand and improve it in future offerings of the course.

1.3 References

  1. Cover, T. M. and Thomas, J. A. (2006). Elements of Information Theory. Wiley-Interscience.

  2. Gallager, R. G. (1968). Information Theory and Reliable Communication. Wiley.

  3. Mallat, S.G. (1999). A wavelet tour of signal processing. Academic Press.

  4. Rissanen, J. (1983). A universal data compression system. IEEE Trans. Inf. Theory, 29(5), 656–664.

  5. Ziv, J. and Lempel, A. (1977). A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory, 23(3), 337–343.

  6. Ziv, J. and Lempel, A. (1978). Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24(5), 530–536.
