and D ∈ R^{l×r} could be constant or time varying. This class of plant models
is quite general because it can serve as an approximation of nonlinear plants
around operating points. A controller based on the linear model (1.1.1) is
expected to be simpler and easier to understand than a controller based on
a possibly more accurate but nonlinear plant model.
The class of plant models given by (1.1.1) can be generalized further if we
allow the elements of A, B, and C to be completely unknown and changing
with time or operating conditions. The control of plant models (1.1.1) with
A, B, C, and D unknown or partially known is covered under the area of
adaptive systems and is the main topic of this book.
1.2 Adaptive Control
According to Webster’s dictionary, to adapt means “to change (oneself) so
that one’s behavior will conform to new or changed circumstances.” The
words “adaptive systems” and “adaptive control” have been used as early
as 1950 [10, 27].
The design of autopilots for high-performance aircraft was one of the pri-
mary motivations for active research on adaptive control in the early 1950s.
Aircraft operate over a wide range of speeds and altitudes, and their dy-
namics are nonlinear and conceptually time varying. For a given operating
point, specified by the aircraft speed (Mach number) and altitude, the com-
plex aircraft dynamics can be approximated by a linear model of the same
form as (1.1.1). For example, for an operating point i, the linear aircraft
model has the following form [140]:
\dot{x} = A_i x + B_i u, \quad x(0) = x_0
y = C_i x + D_i u \qquad (1.2.1)
where A_i, B_i, C_i, and D_i are functions of the operating point i.
[Figure 1.3 Controller structure with adjustable controller gains: the command input drives a controller in a feedback loop with the plant P, and a separate block adjusts the controller gains by processing the plant input u(t) and output y(t).]
As the aircraft goes through different flight conditions, the operating point changes, leading to different values for A_i, B_i, C_i, and D_i. Because the output response y(t) carries information about the state x as well as the parameters, one may argue that in principle, a sophisticated feedback controller should be able to learn about parameter changes by processing y(t) and use the appropriate gains to accommodate them. This argument led to a feedback control structure on which adaptive control is based. The controller structure consists of a feedback loop and a controller with adjustable gains as shown in Figure 1.3. The way of changing the controller gains in response to changes in the plant and disturbance dynamics distinguishes one scheme from another.
1.2.1 Robust Control
A constant gain feedback controller may be designed to cope with parameter
changes provided that such changes are within certain bounds. A block
diagram of such a controller is shown in Figure 1.4, where G(s) is the transfer function of the plant and C(s) is the transfer function of the controller. The transfer function from y* to y is

\frac{y}{y^*} = \frac{C(s)G(s)}{1 + C(s)G(s)} \qquad (1.2.2)
where C(s) is to be chosen so that the closed-loop plant is stable, despite parameter changes or uncertainties in G(s), and y ≈ y* within the frequency range of interest.
[Figure 1.4 Constant gain feedback controller: the error y* − y drives the controller C(s), whose output u is applied to the plant G(s).]
This latter condition can be achieved if we choose C(s) so that the loop gain |C(jω)G(jω)| is as large as possible in the frequency spectrum of y*, provided, of course, that large loop gain does not violate closed-loop stability requirements. The tracking and stability objectives can be achieved through the design of C(s) provided the changes within G(s) are within certain bounds. More details about robust control will be given in Chapter 8.
Robust control is not considered to be an adaptive system even though
it can handle certain classes of parametric and dynamic uncertainties.
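As a numerical illustration of (1.2.2) and the loop-gain argument above, the following sketch (in Python, with an arbitrarily chosen stable plant G(s) = 1/(s + 1) and controller C(s) = 10(s + 1)/s that are not taken from the text) evaluates |C(jω)G(jω)| and |y/y*| over frequency; wherever the loop gain is large, the closed-loop gain is close to one, i.e., y ≈ y*.

import numpy as np

# Hypothetical example transfer functions, chosen only for illustration:
# plant G(s) = 1/(s + 1), controller C(s) = 10(s + 1)/s.
def G(s):
    return 1.0 / (s + 1.0)

def C(s):
    return 10.0 * (s + 1.0) / s

omega = np.logspace(-2, 3, 6)          # a few frequencies in rad/s
s = 1j * omega

loop_gain = np.abs(C(s) * G(s))                           # |C(jw)G(jw)|
closed_loop = np.abs(C(s) * G(s) / (1.0 + C(s) * G(s)))   # |y/y*| from (1.2.2)

for w, L, T in zip(omega, loop_gain, closed_loop):
    print(f"w = {w:8.2f} rad/s   |CG| = {L:10.3f}   |y/y*| = {T:.4f}")
# Wherever |CG| >> 1 the closed-loop gain is close to 1, i.e., y tracks y* at
# that frequency; the approximation degrades as the loop gain drops.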
1.2.2 Gain Scheduling
Let us consider the aircraft model (1.2.1) where, for each operating point i, i = 1, 2, ..., N, the parameters A_i, B_i, C_i, and D_i are known. For a given operating point i, a feedback controller with constant gains, say θ_i, can be designed to meet the performance requirements for the corresponding linear model. This leads to a controller, say C(θ), with a set of gains {θ_1, θ_2, ..., θ_i, ..., θ_N} covering N operating points. Once the operating point, say i, is detected, the controller gains can be changed to the appropriate value of θ_i obtained from the precomputed gain set. Transitions between different operating points that lead to significant parameter changes may be handled by interpolation or by increasing the number of operating points. The two elements that are essential in implementing this approach are a look-up table to store the values of θ_i and the plant auxiliary measurements that correlate well with changes in the operating points. The approach is called gain scheduling and is illustrated in Figure 1.5.
The gain scheduler consists of a look-up table and the appropriate logic for detecting the operating point and choosing the corresponding value of θ_i from the table. In the case of aircraft, the auxiliary measurements are the Mach number and the dynamic pressure.
[Figure 1.5 Gain scheduling: a gain scheduler uses auxiliary measurements to select the controller gains θ_i of the controller C(θ) acting on the plant.]
With this approach, plant parameter variations can be compensated by changing the controller gains as functions of the auxiliary measurements.
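A minimal sketch of the look-up-table logic described above, assuming a hypothetical schedule in which a single scalar gain is indexed by Mach number alone; an actual flight-control scheduler would use both the Mach number and the dynamic pressure and would store full gain vectors θ_i rather than scalars.

import bisect

# Hypothetical schedule: operating points indexed by Mach number, each storing
# a precomputed controller gain theta_i (a scalar here for simplicity).
mach_points = [0.3, 0.6, 0.9, 1.2]     # operating points i = 1, ..., N
theta_table = [2.5, 1.8, 1.2, 0.9]     # precomputed gains theta_i

def scheduled_gain(mach):
    """Select (or interpolate) the controller gain for the measured Mach number."""
    if mach <= mach_points[0]:
        return theta_table[0]
    if mach >= mach_points[-1]:
        return theta_table[-1]
    # locate the bracketing operating points and interpolate between their gains
    k = bisect.bisect_right(mach_points, mach)
    m0, m1 = mach_points[k - 1], mach_points[k]
    t0, t1 = theta_table[k - 1], theta_table[k]
    return t0 + (t1 - t0) * (mach - m0) / (m1 - m0)

# The control law then uses the scheduled gain, e.g., u = -scheduled_gain(mach) * y.
print(scheduled_gain(0.75))            # interpolated between the gains at Mach 0.6 and 0.9

Note that the table is precomputed off-line and the selection logic never checks how well the chosen gain actually performs, which is exactly the limitation discussed below.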
The advantage of gain scheduling is that the controller gains can be
changed as quickly as the auxiliary measurements respond to parameter
changes. Frequent and rapid changes of the controller gains, however, may
lead to instability [226]; therefore, there is a limit as to how often and how
fast the controller gains can be changed.
One of the disadvantages of gain scheduling is that the adjustment mech-
anism of the controller gains is precomputed off-line and, therefore, provides
no feedback to compensate for incorrect schedules. Unpredictable changes
in the plant dynamics may lead to deterioration of performance or even to
complete failure. Another possible drawback of gain scheduling is the high
design and implementation costs that increase with the number of operating
points.
Despite its limitations, gain scheduling is a popular method for handling
parameter variations in flight control [140, 210] and other systems [8].
1.2.3 Direct and Indirect Adaptive Control
An adaptive controller is formed by combining an on-line parameter estima-
tor, which provides estimates of unknown parameters at each instant, with
a control law that is motivated by the known parameter case. The way
the parameter estimator, also referred to as the adaptive law in this book, is
combined with the control law gives rise to two different approaches. In the
first approach, referred to as indirect adaptive control, the plant parameters
are estimated on-line and used to calculate the controller parameters. This
approach has also been referred to as explicit adaptive control, because the
design is based on an explicit plant model.
In the second approach, referred to as direct adaptive control, the plant
model is parameterized in terms of the controller parameters that are esti-
mated directly without intermediate calculations involving plant parameter
estimates. This approach has also been referred to as implicit adaptive con-
trol because the design is based on the estimation of an implicit plant model.
In indirect adaptive control, the plant model P(θ*) is parameterized with respect to some unknown parameter vector θ*. For example, for a linear time-invariant (LTI) single-input single-output (SISO) plant model, θ* may represent the unknown coefficients of the numerator and denominator of the plant model transfer function. An on-line parameter estimator generates an estimate θ(t) of θ* at each time t by processing the plant input u and output y. The parameter estimate θ(t) specifies an estimated plant model characterized by P̂(θ(t)) that for control design purposes is treated as the “true” plant model and is used to calculate the controller parameter or gain vector θ_c(t) by solving a certain algebraic equation θ_c(t) = F(θ(t)) at each time t. The form of the control law C(θ_c) and algebraic equation θ_c = F(θ) is chosen to be the same as that of the control law C(θ_c*) and equation θ_c* = F(θ*) that could be used to meet the performance requirements for the plant model P(θ*) if θ* were known. It is, therefore, clear that with this approach, C(θ_c(t)) is designed at each time t to satisfy the performance requirements for the estimated plant model P̂(θ(t)), which may be different from the unknown plant model P(θ*). Therefore, the principal problem in indirect adaptive control is to choose the class of control laws C(θ_c) and the class of parameter estimators that generate θ(t), as well as the algebraic equation θ_c(t) = F(θ(t)), so that C(θ_c(t)) meets the performance requirements for the plant model P(θ*) with unknown θ*. We will study this problem in great detail in Chapters 6 and 7, and consider the robustness properties of indirect adaptive control in Chapters 8 and 9. The block diagram of an indirect adaptive control scheme is shown in Figure 1.6.
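To make the certainty equivalence calculation θ_c(t) = F(θ(t)) concrete, here is a sketch of an indirect scheme for a hypothetical scalar plant ẋ = ax + bu with a and b unknown; the gradient adaptive law, the pole-placement rule for the control gain, and all numerical values are illustrative choices, not the general algorithms developed in Chapters 6 and 7.

import numpy as np

# Hypothetical first-order plant x_dot = a*x + b*u with a, b unknown to the controller.
a_true, b_true = 1.0, 2.0
a_m, gamma, dt = 3.0, 5.0, 1e-3        # desired closed-loop pole, adaptation gain, Euler step

x, x_hat = 0.0, 0.0                    # plant state and estimation-model state
a_hat, b_hat = 0.0, 0.5                # on-line estimates theta(t) of the plant parameters

for k in range(int(20.0 / dt)):
    r = 1.0 if k * dt < 10.0 else -1.0             # piecewise-constant reference

    # Certainty equivalence control: treat (a_hat, b_hat) as the true parameters and
    # choose u so that the *estimated* closed loop has its pole at -a_m.  The division
    # by b_hat is where the stabilizability issue mentioned later shows up if the
    # estimate passes through zero.
    b_safe = b_hat if abs(b_hat) > 0.1 else 0.1 * np.sign(b_hat or 1.0)
    u = (-a_hat * x - a_m * (x - r)) / b_safe

    # Series-parallel estimation model and gradient adaptive law (illustrative choice).
    e = x - x_hat
    x_hat += dt * (a_hat * x + b_hat * u + a_m * e)
    a_hat += dt * gamma * e * x
    b_hat += dt * gamma * e * u

    x += dt * (a_true * x + b_true * u)            # plant, Euler-integrated

# x tracks r even though (a_hat, b_hat) need not converge to (a, b) unless the
# reference is sufficiently rich.
print(f"a_hat = {a_hat:.2f}  b_hat = {b_hat:.2f}  x = {x:.2f}  (r = -1)")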
In direct adaptive control, the plant model P(θ*) is parameterized in terms of the unknown controller parameter vector θ_c*, for which C(θ_c*) meets the performance requirements, to obtain the plant model P_c(θ_c*) with exactly the same input/output characteristics as P(θ*).
[Figure 1.6 Indirect adaptive control: an on-line estimator processes the plant input u and output y to generate θ(t), which is mapped through the calculation θ_c(t) = F(θ(t)) into the gains of the controller C(θ_c) acting on the plant P(θ*).]
The on-line parameter estimator is designed based on P_c(θ_c*) instead of P(θ*) to provide direct estimates θ_c(t) of θ_c* at each time t by processing the plant input u and output y. The estimate θ_c(t) is then used to update the controller parameter vector θ_c without intermediate calculations. The choice of the class of control laws C(θ_c) and parameter estimators generating θ_c(t) for which C(θ_c(t)) meets the performance requirements for the plant model P(θ*) is the fundamental problem in direct adaptive control. The properties of the plant model P(θ*) are crucial in obtaining the parameterized plant model P_c(θ_c*) that is convenient for on-line estimation. As a result, direct adaptive control is restricted to a certain class of plant models. As we will show in Chapter 6, a class of plant models that is suitable for direct adaptive control consists of all SISO LTI plant models that are minimum-phase, i.e., their zeros are located in Re[s] < 0. The block diagram of direct adaptive control is shown in Figure 1.7.
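For comparison, here is a sketch of a direct scheme for the same hypothetical scalar plant ẋ = ax + bu, where the performance requirement is taken to be tracking of the reference model ẋ_m = −a_m x_m + a_m r (anticipating Section 1.2.4) and only the sign of b is assumed known; the controller gains k and l are updated directly from the tracking error, and no estimates of a or b are ever formed.

# Same hypothetical scalar plant x_dot = a*x + b*u; only the sign of b is assumed known.
a_true, b_true = 1.0, 2.0
a_m, gamma, dt = 3.0, 2.0, 1e-3        # reference-model pole, adaptation gain, Euler step
sgn_b = 1.0                            # assumed known sign of b

x, x_m = 0.0, 0.0                      # plant and reference-model states
k_gain, l_gain = 0.0, 0.0              # controller parameters theta_c = (k, l), adapted directly

for step in range(int(20.0 / dt)):
    r = 1.0 if step * dt < 10.0 else -1.0          # reference input

    u = -k_gain * x + l_gain * r                   # control law C(theta_c)

    e1 = x - x_m                                   # tracking error
    # Direct adaptive law (illustrative gradient form): the controller parameters
    # themselves are updated from e1; no plant parameter estimates are computed.
    k_gain += dt * gamma * e1 * x * sgn_b
    l_gain += dt * (-gamma * e1 * r * sgn_b)

    x_m += dt * (-a_m * x_m + a_m * r)             # reference model
    x += dt * (a_true * x + b_true * u)            # plant, Euler-integrated

# Ideal gains for this plant would be k* = (a + a_m)/b = 2.0 and l* = a_m/b = 1.5;
# the adapted gains need not reach these values, but the tracking error still converges to zero.
print(f"k = {k_gain:.2f}  l = {l_gain:.2f}  tracking error = {x - x_m:.3f}")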
The principle behind the design of direct and indirect adaptive control shown in Figures 1.6 and 1.7 is conceptually simple. The design of C(θ_c) treats the estimates θ_c(t) (in the case of direct adaptive control) or the estimates θ(t) (in the case of indirect adaptive control) as if they were the true parameters. This design approach is called certainty equivalence and can be used to generate a wide class of adaptive control schemes by combining different on-line parameter estimators with different control laws.
[Figure 1.7 Direct adaptive control: an on-line estimator processes the plant input u and output y to estimate the controller parameter vector θ_c* of the reparameterized plant P_c(θ_c*) directly; the estimate θ_c(t) updates the controller C(θ_c).]
The idea behind the certainty equivalence approach is that as the parameter estimates θ_c(t) and θ(t) converge to the true ones θ_c* and θ*, respectively, the performance of the adaptive controller C(θ_c) tends to that achieved by C(θ_c*) in the case of known parameters.
The distinction between direct and indirect adaptive control may be con-
fusing to most readers for the following reasons: The direct adaptive control
structure shown in Figure 1.7 can be made identical to that of the indi-
rect adaptive control by including a block for calculations with an identity
transformation between updated parameters and controller parameters. In
general, for a given plant model the distinction between the direct and in-
direct approach becomes clear if we go into the details of design and anal-
ysis. For example, direct adaptive control can be shown to meet the per-
formance requirements, which involve stability and asymptotic tracking, for
a minimum-phase plant. It is still not clear how to design direct schemes
for nonminimum-phase plants. The difficulty arises from the fact that, in
general, a convenient (for the purpose of estimation) parameterization of the
plant model in terms of the desired controller parameters is not possible for
nonminimum-phase plant models.
Indirect adaptive control, on the other hand, is applicable to both minimum- and nonminimum-phase plants. In general, however, the mapping between θ(t) and θ_c(t), defined by the algebraic equation θ_c(t) = F(θ(t)), cannot be guaranteed to exist at each time t, giving rise to the so-called stabilizability problem that is discussed in Chapter 7.
[Figure 1.8 Model reference control: the reference input r drives both the reference model W_m(s) and the controller C(θ_c*) in cascade with the plant G(s); the tracking error e_1 = y − y_m measures the deviation of the plant output from the desired response.]
As we will show in Chapter 7, solutions to the stabilizability problem are possible at the expense of additional complexity.
Efforts to relax the minimum-phase assumption in direct adaptive control and to resolve the stabilizability problem in indirect adaptive control led to adaptive control schemes in which both the controller and plant parameters are estimated on-line; these combined direct/indirect schemes are usually more complex [112].
1.2.4 Model Reference Adaptive Control
Model reference adaptive control (MRAC) is derived from the model following problem or model reference control (MRC) problem. In MRC, a good understanding of the plant and of the performance requirements it has to meet allows the designer to come up with a model, referred to as the reference model, that describes the desired I/O properties of the closed-loop plant. The objective of MRC is to find the feedback control law that changes the structure and dynamics of the plant so that its I/O properties are exactly the same as those of the reference model. The structure of an MRC scheme for an LTI, SISO plant is shown in Figure 1.8. The transfer function W_m(s) of the reference model is designed so that for a given reference input signal r(t), the output y_m(t) of the reference model represents the desired response the plant output y(t) should follow. The feedback controller, denoted by C(θ_c*), is designed so that all signals are bounded and the closed-loop plant transfer function from r to y is equal to W_m(s). This transfer function matching guarantees that for any given reference input r(t), the tracking error
[Figure 1.9 Indirect MRAC: the indirect adaptive control structure of Figure 1.6 (on-line estimation of θ* and the calculation θ_c(t) = F(θ(t))) combined with the reference model W_m(s), whose output y_m defines the tracking error e_1 = y − y_m.]
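To make the transfer function matching objective of MRC concrete, consider a worked first-order example with illustrative parameters (a and b with b ≠ 0, not taken from the text). For the plant G(s) = b/(s + a) and the reference model W_m(s) = a_m/(s + a_m) with a_m > 0, the control law u = −k^* y + l^* r gives the closed-loop transfer function

\frac{y}{r} = \frac{l^* G(s)}{1 + k^* G(s)} = \frac{l^* b}{s + a + k^* b}

and matching it to W_m(s) for all s requires

k^* = \frac{a_m - a}{b}, \qquad l^* = \frac{a_m}{b},

so that y = W_m(s) r and, for zero initial conditions, the tracking error e_1 = y − y_m is identically zero. This is the known-parameter design that MRAC recovers by estimating k^* and l^* (or the plant parameters a and b) on-line when they are unknown.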