y = sin(x)/x
(2.8)
Consider the following Java statement:
y = Math.sin(x)/x;
or an equivalent Matlab statement
y = sin(x)/x
Superficially, these look very similar to (2.8). There are minor differences in syntax
in the Java statement, but otherwise, it is hard to tell the difference. But there are
differences. For one, the mathematical equation (2.8) has meaning if y is known and
x is not. It declares a relationship between x and y. The Java and Matlab statements
define a procedure for computing y given x. Those statements have no meaning if
y is known and x is not.
The mathematical equation (2.8) can be interpreted as a predicate that defines a
function, for example the function Sinc : R → R, where
graph(Sinc) = {(x, y) | x ∈ R, y = sin(x)/x}.
(2.9)
The Java and Matlab statements can be interpreted as imperative definitions of a
function. Confusingly, many programming languages, including Matlab, use the
term “function” to mean something a bit different from a mathematical function.
They use it to mean a procedure that can compute an element in the range of
a function given an element in its domain. Under certain restrictions (avoiding
global variables for example), Matlab functions do in fact compute mathematical
functions. But in general, they do not.
To interpret the Java and Matlab statements as imperative definitions of a function,
note that given an element in the domain, they specify how to compute an element
in the range. However, these two statements do not define the same function as
in (2.9). To see this, consider the value of y when x = 0. Given the mathematical
equation, it is not entirely trivial to determine the value of y. You can verify that
y = 1 when x = 0 using l’Hôpital’s rule, which states that if f (a) = g(a) = 0, then
lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x),
(2.10)
if the limit exists, where f′(x) is the derivative of f with respect to x, and g′(x) is
the derivative of g with respect to x.
In contrast, the meaning of the Java and Matlab statements is that y = 0/0 when
x = 0, which Java and Matlab (and most modern languages) define to be NaN, not
a number. Thus, given x = 0, the procedures yield different values for y than the
mathematical expression. (An exception is symbolic algebra programs, such as
Mathematica or Maple, which will evaluate sin(x)/x to 1 when x = 0. These pro-
grams use sophisticated, rule-based solution techniques, and, in effect, recognize
the need and apply l’Hôpital’s rule.)
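A quick check of this behavior (our illustrative snippet, not from the text) confirms what
Java does at x = 0:

public class NanCheck {
    public static void main(String[] args) {
        double x = 0.0;
        double y = Math.sin(x) / x;           // 0.0/0.0 under IEEE 754 arithmetic
        System.out.println(y);                // prints NaN
        System.out.println(Double.isNaN(y));  // prints true
    }
}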
We can see from the above example some of the strengths and weaknesses of imperative
and declarative approaches. Given only a declarative definition, it is difficult for a com-
puter to determine the value of y. Symbolic mathematical software, such as Maple and
Mathematica, is designed to deal with such situations, but these are very sophisticated
programs. In general, using declarative definitions in computers requires quite a bit more
sophistication than using imperative definitions.
Imperative definitions are easier for computers to work with. But the Java and Matlab
statements illustrate one weakness of the imperative approach: it is arguable that y = NaN
is the wrong answer, so the Java and Matlab statements have a bug. This bug is unlikely
to be detected unless, in testing, these statements happen to be executed with the value
x = 0. A correct Java program might look like this:
if (x == 0.0) y = 1.0;
else y = Math.sin(x)/x;
Thus, the imperative approach has the weakness that ensuring correctness is more dif-
ficult. Humans have developed a huge arsenal of techniques and skills for thoroughly
understanding declarative definitions (thus lending confidence in their correctness), but
we are only beginning to learn how to ensure correctness in imperative definitions.
2.2 Defining signals
Signals are functions. Thus, both declarative and imperative approaches can be used to
define them.
2.2.1 Declarative definitions
Consider for example an audio signal s, a pure tone at 440 Hz (middle A on the piano
keyboard). Recall that audio signals are functions Sound : Time → Pressure, where the
set Time ⊂ R represents a range of time and the set Pressure represents air pressure.4 To
define this function, we might give the declarative description
∀ t ∈ Time, s(t) = sin(440 × 2πt).
(2.11)
In many texts, you will see the shorthand
s(t) = sin(440 × 2πt)
used as the definition of the function s. Using the shorthand is only acceptable when
the domain of the function is well understood from the context. This shorthand can be
particularly misleading when considering systems, and so we will only use it sparingly.
A portion of the graph of the function (2.11) is shown in Figure 1.3.
2.2.2 Imperative definitions
We can also give an imperative description of such a signal. When thinking of signals
rather than more abstractly of functions, there is a subtle question that arises when we
attempt to construct an imperative definition. Do you give the value of s(t) for a particular
t? Or for all t in the domain? Suppose we want the latter, which seems like a more
4Recall further that we normalize Pressure so that zero represents the ambient air pressure. We also use
arbitrary units, rather than a physical unit such as millibars.
complete definition of the function. Then we have a problem. The domain of this function
may be any time interval, or all time! Suppose we just want one second of sound. Define
t = 0 to be the start of that one second. Then the domain is [0, 1]. But there are an
(uncountably) infinite number of values for t in this range! No Java or Matlab program
could provide the value of s(t) for all these values of t.
Since a signal is a function, we give an imperative description of the signal exactly as we
did for functions. We give a procedure that has the potential of providing values for s(t),
given any t.
Example 2.14: We could define a Java method as follows:
double s(double t) {
return (Math.sin(440*2*Math.PI*t));
}
Calling this method with a value for t as an argument yields a value for s(t). Java
(and most object-oriented languages) use the term “method” for most procedures.
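For instance (an illustrative call, not from the text), one quarter of the way through a
period of the 440 Hz tone the signal is at its peak:

double y = s(1.0 / 1760.0);   // t = 1/1760 s is a quarter period of 440 Hz
System.out.println(y);        // prints approximately 1.0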
Another alternative is to provide a set of samples of the signal.
Example 2.15: In Matlab, we could define a vector t that gives the values of time
that we are interested in:
t = [0:1/8000:1];
In the vector t there are 8001 values evenly spaced between 0 and 1, so our sam-
pling rate is 8000 samples per second. Then we can compute values of s for these
values of t and listen to the resulting sound:
s = cos(2*pi*440*t);
sound(s,8000)
The vector s also has 8001 elements, representing evenly spaced samples of one
second of A-440.
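A Java sketch of the same sampling computation (ours, not from the text; playing the
result as audio would require additional code, for example using the javax.sound.sampled
API):

double[] samples = new double[8001];
for (int n = 0; n <= 8000; n++) {
    double t = n / 8000.0;                        // sample times 0, 1/8000, ..., 1
    samples[n] = Math.cos(2 * Math.PI * 440 * t);
}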
2.2.3 Physical modeling
An alternative way to define a signal is to construct a model for a physical system that
produces that signal.
Example 2.16: A pure tone might be defined as a solution to a differential equation
that describes the physics of a tuning fork.
A tuning fork consists of a metal finger (called a tine) that is displaced by striking
it with a hammer. After being displaced, it vibrates. If the tine has no friction, it
will vibrate forever. We can denote the displacement of the tine after being struck
at time zero as a function y : R+ → R. If we assume that the initial displacement
introduced by the hammer is one unit, then using our knowledge of physics we can
determine that for all t ∈ R+, the displacement satisfies the differential equation
ÿ(t) = −ω0²y(t)
(2.12)
where ω0 is a constant that depends on the mass and stiffness of the tine, and
where ÿ(t) denotes the second derivative with respect to time of y (see box).
It is easy to verify that y given by
∀ t ∈ R+, y(t) = cos(ω0t)
(2.13)
is a solution to this differential equation (just take its second derivative). Thus,
the displacement of the tuning fork is sinusoidal. This displacement will couple
directly with air around the tuning fork, creating vibrations in the air (sound). If
we choose materials for the tuning fork so that ω0 = 2π × 440, then the tuning fork
will produce the tone of A-440 on the musical scale.
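A numerical spot check of this claim (our sketch, not from the text): approximate the
second derivative of y(t) = cos(ω0t) by a centered difference and compare it with
−ω0²y(t).

double w0 = 2 * Math.PI * 440;   // A-440
double t = 0.001;                // an arbitrary test time
double h = 1e-7;                 // small step for the finite difference
double y = Math.cos(w0 * t);
double ydd = (Math.cos(w0 * (t + h)) - 2 * y + Math.cos(w0 * (t - h))) / (h * h);
System.out.println(ydd);            // approximately equal to the line below
System.out.println(-w0 * w0 * y);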
2.3 Defining systems
All of the methods that we have discussed for defining functions can be used, in princi-
ple, to define systems. However, in practice, the situation is much more complicated for
systems than for signals. Recall from Section 1.2.1 that a system is a function where the
domain and range are sets of signals called signal spaces. Elements of these domains and
Figure 2.6: A tuning fork, showing a tine, its displacement, and the restorative force.
ranges are considerably more difficult to specify than, say, an element of R or Z. For this
reason, it is almost never reasonable to use a graph or a table to define a system. Much
of the rest of this book is devoted to giving precise ways to define systems where some
analysis is possible. Here we consider some simple techniques that can be immediately
motivated. Then we show how more complicated systems can be constructed from sim-
pler ones using block diagrams. We give a rigorous meaning to these block diagrams so
that we can use them without resorting to perilous intuition to interpret them.
Consider a system S where
S : [D → R] → [D′ → R′].
(2.14)
Suppose that x ∈ [D → R] and y = S(x). Then we call the pair (x, y) a behavior of the
system. A behavior is an input, output pair. The set of all behaviors is
Behaviors(S) = {(x, y) | x ∈ [D → R] and y = S(x)}.
Giving the set of behaviors is one way to define a system. Explicitly giving the set
Behaviors, however, is usually impractical, because it is a huge set, typically infinite (see
boxes on pages 679 and 681). Thus, we seek other ways of talking about the relationship between a signal x and a signal y when y = S(x).
To describe a system, one must specify its domain (the space of input signals), its range
(the space of output signals), and the rule by which the system assigns an output signal
to each input signal. This assignment rule is more difficult to describe and analyze than
the input and output signals themselves. A table is almost never adequate, for example.
Indeed for most systems we do not have effective mathematical tools for describing or
Probing Further: Physics of a Tuning Fork
A tuning fork consists of two fingers called tines, as shown in Figure 2.6. If you displace
one of these tines by hitting it with a hammer, it will vibrate with a nearly perfect sinu-
soidal characteristic. As it vibrates, it pushes the air, creating a nearly perfect sinusoidal
variation in air pressure that propagates as sound. Why does it vibrate this way?
Suppose the displacement of the tine (relative to its position at rest) at time t is given
by x(t), where x : R → R. There is a force on the tine pushing it towards its at-rest
position. This is the restorative force of the elastic material used to make the tine. The
force is proportional to the displacement (the greater the displacement, the greater the
force), so
F(t) = −kx(t),
where k is the proportionality constant that depends on the material and geometry of the
tine. In addition, Newton’s second law of motion tells us the relationship between force
and acceleration,
F(t) = ma(t),
where m is the mass and a(t) is the acceleration at time t. Of course,
a(t) = d²x(t)/dt² = ẍ(t),
so
m ẍ(t) = −kx(t)
or
ẍ(t) = −(k/m)x(t).
Comparing with (2.12), we see that
ω0² = k/m.
A solution to this equation needs to be some signal that is proportional to its own
second derivative. A sinusoid as in (2.13) has exactly this property. The sinusoidal
behavior of the tine is called simple harmonic motion.
Figure 2.7: A memoryless system F has an associated function f that can be
used to determine its output y(t) given only the current input x(t) at time t. In
particular, it does not depend on values of the function x for other values of time.
understanding their behavior. Thus, it is useful to restrict our system designs to those we
can understand. We first consider some simple examples.
2.3.1 Memoryless systems and systems with memory
Memoryless systems are characterized by the property that previous input values are not
remembered when determining the current output value. More precisely, a system F :
[R → Y ] → [R → Y ] is memoryless if there is a function f : Y → Y such that
∀ t ∈ R and ∀ x ∈ [R → Y], (F(x))(t) = f (x(t)).
This is illustrated in Figure 2.7. In other words, at any time t, the output (F(x))(t) depends
only on the input x(t) at that same time t; in particular, it does not depend on t nor on
previous or future values of x.
Specification of a memoryless system reduces to specification of the function f . If Y is
finite, then a table may be adequate.
Example 2.17: Consider a continuous-time system with input x and output y,
where for all t ∈ R,
y(t) = x²(t).
This example defines a simple system, where the value of the output signal at each
time depends only on the value of the input signal at that time. Such systems are
said to be memoryless because you do not have to remember previous values of the
input in order to determine the current value of the output.
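On sampled data, a memoryless system amounts to applying a single pointwise function to
each sample, with no state carried between samples. A minimal Java sketch (ours, not from
the text), using the squaring function of this example as f:

static double f(double v) {
    return v * v;   // the pointwise function: squaring
}

static double[] memoryless(double[] x) {
    double[] y = new double[x.length];
    for (int n = 0; n < x.length; n++) {
        y[n] = f(x[n]);   // (F(x))(t) = f(x(t)), one sample at a time
    }
    return y;
}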
By contrast, here is an example of a system with memory.
Example 2.18: Consider a continuous-time system with input x and output y =
F(x) such that ∀ t ∈ R,
y(t) = (1/M) ∫_{t−M}^{t} x(τ) dτ.
By a change of variables this can also be written
y(t) = (1/M) ∫_{0}^{M} x(t − τ) dτ.
This system is clearly not memoryless. It has the effect of smoothing the input
signal. We will study it and many related systems in detail in later chapters.
2.3.2 Differential equations
Consider a class of systems given by functions S : ContSignals → ContSignals where
ContSignals is a set of continuous-time signals. Depending on the scenario, we could
have ContSignals = [Time → R] or ContSignals = [Time → C], where Time = R or Time =
R+. These are often called continuous-time systems because they operate on continuous-
time signals. Frequently, such systems can be defined by differential equations that relate
the input signal to the output signal.
Example 2.19: Consider a particle constrained to move forward or backwards
along a straight line with an externally imposed force. We will consider this particle
to be a system where the output is its position and the externally imposed force is
the input.
Denote the position of the particle by x : Time → R, where Time = R+. By con-
sidering only the non-negative reals, we are assuming that the model has a starting
time. We denote the acceleration by a : Time → R. By Newton’s law, which relates
force, mass, and acceleration,
f (t) = ma(t),
where f (t) is the force at time t, and m is the mass. By the definition of acceleration,
∀ t ∈ R+, ¨x(t) = a(t) = f (t)/m,
where ẍ(t) denotes the second derivative with respect to time of x. If we know the
initial position x(0) and initial speed ẋ(0) of the particle at time 0, and if we are
given the input force f, we can evaluate the position at any t by integrating this
differential equation
x(t) = x(0) + ẋ(0)t + ∫_{0}^{t} [ ∫_{0}^{s} ( f(τ)/m) dτ ] ds.
(2.15)
We can regard the initial position and velocity as inputs, together with force, in
which case the system is a function
Particle : R × R × [R+ → R] → [R+ → R],
where for any inputs (x(0), ẋ(0), f), x = Particle(x(0), ẋ(0), f) must satisfy (2.15).
Suppose for example that the input is (1, −1, f) where m = 1 and ∀ t ∈ R+, f(t) = 1.
We can calculate the position by carrying out the integration in (2.15) to find that
∀ t ∈ R+, x(t) = 1 − t + 0.5t².
Suppose instead that x(0) = ẋ(0) = 0 and ∀ t ∈ R+, f(t) = cos(ω0t), where ω0 is
some fixed number. Again, we can carry out the integration to get
∫_{0}^{t} [ ∫_{0}^{s} cos(ω0u) du ] ds = −(cos(ω0t) − 1)/ω0².
Notice that the position of the particle is sinusoidal. Notice further that the ampli-
tude of this sinusoid decreases as ω0 increases. Intuitively, this has to be the case.
If the externally imposed force is varying more rapidly back and forth, the particle
has less time to respond to each direction of force, and hence its excursion is less.
In subsequent chapters, we will study how the response of certain kinds of systems
varies with the frequency of the input.
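As a sanity check on the constant-force case above (our sketch, not from the text), a
crude forward-Euler integration of ẍ(t) = f(t)/m with x(0) = 1, ẋ(0) = −1, m = 1, and
f(t) = 1 should land close to the closed form x(t) = 1 − t + 0.5t² at, say, t = 2:

double m = 1.0;
double x = 1.0, v = -1.0;                          // initial position and velocity
double dt = 1e-5;
for (double t = 0.0; t < 2.0; t += dt) {
    double a = 1.0 / m;                            // acceleration f(t)/m with f(t) = 1
    x += v * dt;                                   // forward-Euler position update
    v += a * dt;                                   // forward-Euler velocity update
}
System.out.println(x);                             // approximately 1.0
System.out.println(1.0 - 2.0 + 0.5 * 2.0 * 2.0);   // closed form at t = 2: 1.0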
2.3.3 Difference equations
Consider a class of systems given by functions S : DiscSignals → DiscSignals where
DiscSignals is a set of discrete-time signals. Depending on the scenario, we could have
DiscSignals = [Z → R] or DiscSignals = [Z → C], or even DiscSignals = [N0 → R], or
DiscSignals = [N0 → C]. These are often called discrete-time systems because they op-
erate on discrete-time signals. Frequently, such systems can be defined by difference
equations that relate the input signal to the output signal.
Example 2.20: Consider a system
S : [N0 → R] → [N0 → R]
where for all x ∈ [N0 → R], S(x) = y is given by
∀ n ∈ Z, y(n) = (x(n) + x(n − 1))/2.
The output at each index is the average of two of the inputs. This is a simple
example of a moving average system, where typically more than two input values
get averaged to produce an output value.
Suppose that x = u, the unit step function, defined by
∀ n ∈ Z, u(n) = 1 if n ≥ 0, and u(n) = 0 otherwise.
(2.16)
We can easily calculate the output y,
∀ n ∈ Z, y(n) = 1 if n ≥ 1, y(n) = 1/2 if n = 0, and y(n) = 0 otherwise.
The system smoothes the transition of the unit step a bit.
A slightly more interesting input is a sinusoidal signal given by
∀ n ∈ Z, x(n) = cos(2π f n).
The output is given by
∀ n ∈ Z, y(n) = (cos(2π f n) + cos(2π f (n − 1)))/2.
Using the trigonometric identities in the box on page 76 this can be written as
y(n) = R cos(2π f n + θ)
where
θ = arctan( sin(−2π f ) / (1 + cos(−2π f )) ),
R = √(2 + 2 cos(2π f )) / 2.
As in the previous example, a sinusoidal input stimulates a sinusoidal output with
the same frequency. In this case, the amplitude of the output varies (in a fairly
complicated way) as a function of the input frequency. We will examine this phe-
nomenon in more detail in subsequent chapters by studying the frequency response
of such systems.
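A numeric spot check of these formulas (ours, not from the text), comparing the two-point
average directly with R cos(2π f n + θ) for f = 0.1:

double f = 0.1;
double theta = Math.atan(Math.sin(-2 * Math.PI * f) / (1 + Math.cos(-2 * Math.PI * f)));
double R = Math.sqrt(2 + 2 * Math.cos(2 * Math.PI * f)) / 2;
for (int n = 0; n < 5; n++) {
    double y = (Math.cos(2 * Math.PI * f * n) + Math.cos(2 * Math.PI * f * (n - 1))) / 2;
    System.out.println(y - R * Math.cos(2 * Math.PI * f * n + theta));   // approximately 0
}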
Example 2.21: The general form for a moving average is given by
∀ n ∈ Integers, y(n) = (1/M) ∑_{k=0}^{M−1} x(n − k),
where x is the input and y is the output. (If this notation is unfamiliar, see box on
page 77.)
This system is called an M-point moving average, since at any n it gives the average
of the M most recent values of the input. It computes an average, just like Example
2.18, but the integral has been replaced by its discrete counterpart, the sum.
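A Java sketch of the M-point moving average on sampled data (ours, not from the text),
treating input values before the start of the signal as zero:

static double[] movingAverage(double[] x, int M) {
    double[] y = new double[x.length];
    for (int n = 0; n < x.length; n++) {
        double sum = 0.0;
        for (int k = 0; k < M; k++) {
            if (n - k >= 0) sum += x[n - k];   // x(n - k); zero before the start
        }
        y[n] = sum / M;                        // average of the M most recent inputs
    }
    return y;
}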
Moving averages are widely used on Wall Street to smooth out momentary fluctuations in
stock prices to try to determine general trends. We will study the smoothing properties of
this system. We will also study more general forms of difference equations of which the
moving average is a special case.
The examples above give declarative definitions of systems. Imperative definitions re-
quire giving a procedure for computing the output signal given the input signal. It is clear
how to do that with the memoryless system, assuming that an imperative definition of the
function f is available, and with the moving average. The integral equation, however, is
harder to define imperatively. An imperative description of such systems that is suitable
for computation on a computer requires approximation via solvers for differential equa-
tions. Simulink, for example, which is part of the Matlab package, provides such solvers.
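To give a rough sense of what such a solver does (a minimal sketch of our own, not from
the text; tools such as Simulink use far more sophisticated, adaptive methods), a
forward-Euler approximation of an equation of the form dy/dt = g(t, y) with initial
condition y(0) = y0 might look like this in Java:

interface Rhs {
    double apply(double t, double y);   // the right-hand side g(t, y)
}

static double[] eulerSolve(Rhs g, double y0, double dt, int steps) {
    double[] y = new double[steps + 1];
    y[0] = y0;
    for (int n = 0; n < steps; n++) {
        double t = n * dt;
        y[n + 1] = y[n] + dt * g.apply(t, y[n]);   // one Euler step
    }
    return y;
}

// Example use: dy/dt = -y with y(0) = 1, whose exact solution is exp(-t):
// double[] y = eulerSolve((t, v) -> -v, 1.0, 0.001, 1000);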