
$\dot{V} = -(C\bar{e})^2 - \frac{(B^\top P\bar{e})^2}{\lambda} \le 0$

which implies that $\bar{e}_e = 0$ is stable, $\bar{e} \in L_\infty$ and $C\bar{e}, B^\top P\bar{e} \in L_2$. We now rewrite (7.3.34) as

$\dot{\bar{e}} = (A - K_oC)\bar{e} + K_oC\bar{e} - \frac{1}{\lambda}BB^\top P\bar{e}$

by using output injection, i.e., adding and subtracting the term $K_oC\bar{e}$. Because $A - K_oC$ is stable and $C\bar{e}, B^\top P\bar{e} \in L_\infty \cap L_2$, it follows from Corollary 3.3.1 that $\bar{e} \in L_\infty \cap L_2$ and $\bar{e} \to 0$ as $t \to \infty$. Using the results of Section 3.4.5, it follows that the equilibrium $\bar{e}_e = 0$ is e.s. in the large, which implies that

$A - BK_c = A - \lambda^{-1}BB^\top P$

is a stable matrix. The rest of the proof is the same as that of Theorem 7.3.1 and is omitted.

The main equations of the LQ control law are summarized in Table 7.3.

Table 7.3 LQ control law

Plant: $y_p = \frac{Z_p(s)}{R_p(s)}\, u_p$

Reference input: $Q_m(s)\, y_m = 0$

Observer: As in Table 7.2

Calculation: Solve for $P = P^\top > 0$ the equation $A^\top P + PA - PB\lambda^{-1}B^\top P + C^\top C = 0$, where $A$, $B$, $C$ are as defined in Table 7.2

Control law: $\bar{u}_p = -\lambda^{-1}B^\top P\hat{e}$, $\quad u_p = \frac{Q_1(s)}{Q_m(s)}\,\bar{u}_p$

Design variables: $\lambda > 0$ penalizes the control effort; $Q_1(s)$, $Q_m(s)$ as in Table 7.2

Example 7.3.3 Let us consider the scalar plant

$\dot{x} = -ax + bu_p, \qquad y_p = x$

where $a$ and $b$ are known constants and $b \neq 0$. The control objective is to choose $u_p$ to stabilize the plant and regulate $y_p$, $x$ to zero. In this case $Q_m(s) = Q_1(s) = 1$ and no observer is needed because the state $x$ is available for measurement. The control law that minimizes

$J = \int_0^\infty (y_p^2 + \lambda u_p^2)\, dt$

is given by

$u_p = -\frac{1}{\lambda}\, b\, p\, y_p$

where $p > 0$ is a solution of the scalar Riccati equation

$-2ap - \frac{b^2 p^2}{\lambda} + 1 = 0$

The two possible solutions of the above quadratic equation are

$p_1 = \frac{-\lambda a + \sqrt{\lambda^2 a^2 + b^2\lambda}}{b^2}, \qquad p_2 = \frac{-\lambda a - \sqrt{\lambda^2 a^2 + b^2\lambda}}{b^2}$

It is clear that $p_1 > 0$ and $p_2 < 0$; therefore, the solution we are looking for is $p = p_1 > 0$. Hence, the control input is given by

$u_p = -\left(\frac{-a}{b} + \frac{\sqrt{a^2 + \frac{b^2}{\lambda}}}{b}\right) y_p$


that leads to the closed-loop plant

$\dot{x} = -\sqrt{a^2 + \frac{b^2}{\lambda}}\; x$

which implies that for any finite $\lambda > 0$, we have $x \in L_\infty$ and $x(t) \to 0$ as $t \to \infty$ exponentially fast. It is clear that for $\lambda \to 0$ the closed-loop eigenvalue goes to $-\infty$. For $\lambda \to \infty$ the closed-loop eigenvalue goes to $-|a|$, which implies that if the open-loop system is stable the eigenvalue remains unchanged, and if the open-loop system is unstable the eigenvalue of the closed-loop system is flipped to the left half plane, reflecting the minimum effort that is required to stabilize the unstable system. The reader may verify that for the control law chosen above, the cost function $J$ becomes

$J = \lambda\,\frac{\sqrt{a^2 + \frac{b^2}{\lambda}} - a}{b^2}\; x^2(0)$

It is clear that if $a > 0$, i.e., the plant is open-loop stable, the cost $J$ is less than when $a < 0$, i.e., the plant is open-loop unstable. More details about the LQ problem may be found in several books [16, 95, 122].
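As a numerical illustration (not part of the original example), the closed-form expressions above can be checked with a few lines of Python; the values $a = 0.5$, $b = 2$, $\lambda = 0.1$ below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative values: plant x_dot = -a x + b u_p, cost J = integral of (y_p^2 + lam*u_p^2)
a, b, lam = 0.5, 2.0, 0.1

# Positive solution of the scalar Riccati equation  -2*a*p - b^2*p^2/lam + 1 = 0
p1 = (-lam * a + np.sqrt(lam**2 * a**2 + b**2 * lam)) / b**2

# LQ control law u_p = -(1/lam)*b*p*y_p and the resulting closed-loop eigenvalue
k = b * p1 / lam                              # feedback gain: u_p = -k * x
eig_cl = -a - b * k                           # closed loop:  x_dot = (-a - b*k) x

# The closed-loop eigenvalue should equal -sqrt(a^2 + b^2/lam)
print(eig_cl, -np.sqrt(a**2 + b**2 / lam))    # both approximately -6.344

# Minimum cost J = p1 * x(0)^2, compared with the closed-form expression above
x0 = 1.0
print(p1 * x0**2, lam * (np.sqrt(a**2 + b**2 / lam) - a) / b**2 * x0**2)
```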

Example 7.3.4 Let us consider the same plant as in Example 7.3.1, i.e.,

Plant

$y_p = \frac{b}{s + a}\, u_p$

Control Objective Choose $u_p$ so that the closed-loop poles are stable and $y_p$ tracks the reference signal $y_m = 1$.

Tracking Error Equations The problem is converted to a regulation problem by considering the tracking error equation

$e_1 = \frac{b(s+1)}{(s+a)s}\,\bar{u}_p, \qquad \bar{u}_p = \frac{s}{s+1}\, u_p$

where $e_1 = y_p - y_m$ is generated as shown in Example 7.3.2. The state-space representation of the tracking error equation is given by

$\dot{e} = \begin{bmatrix} -a & 1 \\ 0 & 0 \end{bmatrix} e + \begin{bmatrix} 1 \\ 1 \end{bmatrix} b\,\bar{u}_p, \qquad e_1 = [1,\ 0]\, e$

Observer The observer equation is the same as in Example 7.3.2, i.e.,

$\dot{\hat{e}} = \begin{bmatrix} -a & 1 \\ 0 & 0 \end{bmatrix}\hat{e} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} b\,\bar{u}_p - K_o\left([1\ \ 0]\,\hat{e} - e_1\right)$

where $K_o = [10 - a,\ 25]^\top$ is chosen so that the observer poles are equal to the roots of $A_o^*(s) = (s+5)^2$.
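As a quick check (not part of the original example), the eigenvalues of $A - K_oC$ can be computed numerically; the value $a = 0.5$ below is an illustrative choice.

```python
import numpy as np

a = 0.5                                   # illustrative value of the plant parameter
A = np.array([[-a, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
Ko = np.array([[10.0 - a],
               [25.0]])                   # K_o = [10 - a, 25]^T from the example

# Observer error dynamics matrix A - K_o C; its eigenvalues are the observer poles
print(np.linalg.eigvals(A - Ko @ C))      # both eigenvalues equal -5
```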

Control Law The control law, according to (7.3.30) to (7.3.32), is given by

$\bar{u}_p = -\lambda^{-1}[b,\ b]\, P\hat{e}, \qquad u_p = \frac{s+1}{s}\,\bar{u}_p$

where $P$ satisfies the Riccati equation

$\begin{bmatrix} -a & 1 \\ 0 & 0 \end{bmatrix}^{\top} P + P \begin{bmatrix} -a & 1 \\ 0 & 0 \end{bmatrix} - P \begin{bmatrix} b \\ b \end{bmatrix} \lambda^{-1} [b\ \ b]\, P + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = 0 \qquad (7.3.35)$

where $\lambda > 0$ is a design parameter to be chosen. For $\lambda = 0.1$, $a = 0.5$, $b = 2$, the positive definite solution of (7.3.35) is

$P = \begin{bmatrix} 0.1585 & 0.0117 \\ 0.0117 & 0.0125 \end{bmatrix}$

which leads to the control law

$\bar{u}_p = -[3.4037\ \ 0.4829]\,\hat{e}, \qquad u_p = \frac{s+1}{s}\,\bar{u}_p$

This control law shifts the open-loop eigenvalues from $\lambda_1 = -0.5$, $\lambda_2 = 0$ to $\lambda_1 = -1.01$, $\lambda_2 = -6.263$.
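The calculation in (7.3.35) can be carried out with a standard algebraic Riccati equation solver. The sketch below assumes SciPy's solve_continuous_are; it sets up the matrices of the tracking error model, computes $P$, the gain $\lambda^{-1}[b\ \ b]P$, and the closed-loop eigenvalues, and is meant to illustrate the procedure rather than to reproduce the quoted digits exactly.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

a, b, lam = 0.5, 2.0, 0.1                 # parameters used in the example

A = np.array([[-a, 1.0],
              [0.0, 0.0]])                # state matrix of the tracking error model
B = b * np.array([[1.0],
                  [1.0]])                 # input matrix [b, b]^T
Q = np.array([[1.0, 0.0],
              [0.0, 0.0]])                # C^T C with C = [1, 0]
R = np.array([[lam]])                     # control penalty lambda

# Solve A^T P + P A - P B lam^{-1} B^T P + C^T C = 0 for P = P^T > 0
P = solve_continuous_are(A, B, Q, R)
K = (B.T @ P) / lam                       # gain in  u_bar_p = -K e_hat

print(P)
print(K)
print(np.linalg.eigvals(A - B @ K))       # closed-loop eigenvalues of A - B K
```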

For $\lambda = 1$, $a = 0.5$, $b = 2$ we have

$P = \begin{bmatrix} 0.5097 & 0.1046 \\ 0.1046 & 0.1241 \end{bmatrix}$

leading to

$\bar{u}_p = -[1.2287\ \ 0.4574]\,\hat{e}$

and the closed-loop eigenvalues $\lambda_1 = -1.69$, $\lambda_2 = -1.19$.

Let us consider the case where $a = 1$, $b = 1$. For these values of $a$, $b$ the pair

$\left( \begin{bmatrix} -1 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right)$

is uncontrollable but stabilizable, and the open-loop plant has eigenvalues at $\lambda_1 = 0$, $\lambda_2 = -1$. The part of the system that corresponds to $\lambda_2 = -1$ is uncontrollable. In this case, $\lambda = 0.1$ gives

$P = \begin{bmatrix} 0.2114 & 0.0289 \\ 0.0289 & 0.0471 \end{bmatrix}$

and $\bar{u}_p = -[2.402\ \ 0.760]\,\hat{e}$, which leads to a closed-loop plant with eigenvalues at $-1.0$ and $-3.162$. As expected, the uncontrollable dynamics that correspond to $\lambda_2 = -1$ remained unchanged.
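The loss of controllability for $a = 1$, $b = 1$ can be confirmed numerically by checking the rank of the controllability matrix $[B\ \ AB]$; a minimal sketch (assuming NumPy only):

```python
import numpy as np

a, b = 1.0, 1.0
A = np.array([[-a, 1.0],
              [0.0, 0.0]])
B = b * np.array([[1.0],
                  [1.0]])

ctrb = np.hstack([B, A @ B])              # controllability matrix [B, AB]
print(np.linalg.matrix_rank(ctrb))        # rank 1 < 2: the pair (A, B) is uncontrollable
print(np.linalg.eigvals(A))               # open-loop eigenvalues 0 and -1
```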


7.4 Indirect APPC Schemes

Let us consider the plant given by (7.3.1), i.e.,

$y_p = G_p(s)\, u_p, \qquad G_p(s) = \frac{Z_p(s)}{R_p(s)}$

where $R_p(s)$, $Z_p(s)$ satisfy Assumptions P1 and P2. The control objective is to choose $u_p$ so that the closed-loop poles are assigned to the roots of the characteristic equation $A^*(s) = 0$, where $A^*(s)$ is a given monic Hurwitz polynomial, and $y_p$ is forced to follow the reference signal $y_m \in L_\infty$ whose internal model $Q_m(s)$, i.e.,

$Q_m(s)\, y_m = 0$

is known and satisfies Assumption P3.

In Section 7.3, we assume that the plant parameters (i.e., the coefficients of $Z_p(s)$, $R_p(s)$) are known exactly and propose several control laws that meet the control objective. In this section, we assume that $Z_p(s)$, $R_p(s)$ satisfy Assumptions P1 to P3 but their coefficients are unknown constants, and use the certainty equivalence approach to design several indirect APPC schemes to meet the control objective. As mentioned earlier, with this approach we combine the PPC laws developed in Section 7.3 for the known parameter case with adaptive laws that generate on-line estimates for the unknown plant parameters. The adaptive laws are developed by first expressing (7.3.1) in the form of the parametric models considered in Chapter 4, where the coefficients of $Z_p$, $R_p$ appear in a linear form, and then using Tables 4.1 to 4.5 to pick up the adaptive law of our choice. We illustrate the design of adaptive laws for the plant (7.3.1) in the following section.

7.4.1 Parametric Model and Adaptive Laws

We consider the plant equation

$R_p(s)\, y_p = Z_p(s)\, u_p$

where $R_p(s) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0$ and $Z_p(s) = b_{n-1}s^{n-1} + \cdots + b_1 s + b_0$, which may be expressed in the form

$[s^n + \theta_a^{*\top}\alpha_{n-1}(s)]\, y_p = \theta_b^{*\top}\alpha_{n-1}(s)\, u_p$   (7.4.1)

where $\alpha_{n-1}(s) = [s^{n-1}, \ldots, s, 1]^\top$ and $\theta_a^* = [a_{n-1}, \ldots, a_0]^\top$, $\theta_b^* = [b_{n-1}, \ldots, b_0]^\top$ are the unknown parameter vectors. Filtering each side of (7.4.1) with $\frac{1}{\Lambda_p(s)}$, where $\Lambda_p(s) = s^n + \lambda_{n-1}s^{n-1} + \cdots + \lambda_0$ is a Hurwitz polynomial, we obtain

$z = \theta_p^{*\top}\phi$   (7.4.2)

where

$z = \frac{s^n}{\Lambda_p(s)}\, y_p, \qquad \theta_p^* = [\theta_b^{*\top},\ \theta_a^{*\top}]^\top, \qquad \phi = \left[\frac{\alpha_{n-1}^\top(s)}{\Lambda_p(s)}\, u_p,\ -\frac{\alpha_{n-1}^\top(s)}{\Lambda_p(s)}\, y_p\right]^\top$

Equation (7.4.2) is in the form of the linear parametric model studied in Chapter 4, thus leading to a wide class of adaptive laws that can be picked up from Tables 4.1 to 4.5 for estimating $\theta_p^*$.
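As an illustration (not part of the original text), the parametric model (7.4.2) can be verified numerically for a hypothetical second-order plant with known coefficients: simulate the plant, generate $z$ and $\phi$ by filtering $u_p$ and $y_p$ with $1/\Lambda_p(s)$, and check that $z = \theta_p^{*\top}\phi$ up to simulation error. The sketch assumes SciPy's signal module, zero initial conditions for all filters, and arbitrary illustrative numerical values.

```python
import numpy as np
from scipy import signal

# Hypothetical second-order plant (n = 2):  (s^2 + a1 s + a0) y_p = (b1 s + b0) u_p
a1, a0, b1, b0 = 3.0, 2.0, 1.0, 4.0
theta_p = np.array([b1, b0, a1, a0])             # theta_p* = [theta_b*^T, theta_a*^T]^T

Lam = [1.0, 10.0, 25.0]                          # Lambda_p(s) = s^2 + 10 s + 25 (Hurwitz)

t = np.linspace(0.0, 20.0, 8001)
u_p = np.sin(0.7 * t) + 0.5 * np.sin(2.3 * t)    # test input
_, y_p, _ = signal.lsim(([b1, b0], [1.0, a1, a0]), u_p, t)

def filt(num, sig):
    """Filter the signal through num(s)/Lambda_p(s) with zero initial conditions."""
    _, out, _ = signal.lsim((num, Lam), sig, t)
    return out

z = filt([1.0, 0.0, 0.0], y_p)                   # z = s^2/Lambda_p(s) * y_p
phi = np.vstack([filt([1.0, 0.0], u_p),          #  s/Lambda_p * u_p
                 filt([1.0], u_p),               #  1/Lambda_p * u_p
                 -filt([1.0, 0.0], y_p),         # -s/Lambda_p * y_p
                 -filt([1.0], y_p)])             # -1/Lambda_p * y_p

print(np.max(np.abs(z - theta_p @ phi)))         # small residual: z = theta_p*^T phi
```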

Instead of (7.4.1), we can also write

$y_p = (\Lambda_p - R_p)\,\frac{1}{\Lambda_p}\, y_p + Z_p\,\frac{1}{\Lambda_p}\, u_p$

that leads to the linear parametric model

$y_p = \theta_\lambda^{*\top}\phi$   (7.4.3)

where $\theta_\lambda^* = [\theta_b^{*\top},\ (\theta_a^* - \lambda_p)^\top]^\top$ and $\lambda_p = [\lambda_{n-1}, \lambda_{n-2}, \ldots, \lambda_0]^\top$ is the coefficient vector of $\Lambda_p(s) - s^n$. Equation (7.4.3) can also be used to generate a wide class of adaptive laws using the results of Chapter 4.

The plant parameterizations in (7.4.2) and (7.4.3) assume that the plant is strictly proper with known order $n$ but unknown relative degree $n^* \geq 1$. The number of the plant zeros, i.e., the degree of $Z_p(s)$, however, is unknown. In order to allow for the uncertainty in the number of zeros, we parameterize $Z_p(s)$ to have degree $n-1$, where the coefficients of $s^i$ for $i = m+1, m+2, \ldots, n-1$ are equal to zero and $m$ is the degree of $Z_p(s)$. If $m < n-1$ is known, then the dimension of the unknown vector $\theta_p^*$ is reduced to $n + m + 1$.

The adaptive laws for estimating on-line the vector $\theta_p^*$ or $\theta_\lambda^*$ in (7.4.2), (7.4.3) have already been developed in Chapter 4 and are presented in Tables 4.1 to 4.5. In the following sections, we use (7.4.2) or (7.4.3) to pick up adaptive laws from Tables 4.1 to 4.5 and combine them with the PPC laws of Section 7.3 to form APPC schemes.


7.4.2 APPC Scheme: The Polynomial Approach

Let us first illustrate the design and analysis of an APPC scheme based on the PPC scheme of Section 7.3.2 using a first order plant model. Then we consider the general case that is applicable to an nth-order plant.

Example 7.4.1 Consider the same plant as in Example 7.3.1, i.e.,

$y_p = \frac{b}{s + a}\, u_p$   (7.4.4)

where $a$ and $b$ are unknown constants and $u_p$ is to be chosen so that the poles of the closed-loop plant are placed at the roots of $A^*(s) = (s+1)^2 = 0$ and $y_p$ tracks the constant reference signal $y_m = 1\ \forall t \geq 0$.

Let us start by designing each block of the APPC scheme, i.e., the adaptive law for estimating the plant parameters $a$ and $b$; the mapping from the estimates of $a$, $b$ to the controller parameters; and the control law.

Adaptive Law We start with the following parametric model for (7.4.4):

$z = \theta_p^{*\top}\phi$

where

$z = \frac{s}{s+\lambda}\, y_p, \qquad \phi = \frac{1}{s+\lambda}\begin{bmatrix} u_p \\ -y_p \end{bmatrix}, \qquad \theta_p^* = \begin{bmatrix} b \\ a \end{bmatrix}$   (7.4.5)

and $\lambda > 0$ is an arbitrary design constant. Using Tables 4.1 to 4.5 of Chapter 4, we can generate a number of adaptive laws for estimating $\theta_p^*$. For this example, let us choose the gradient algorithm of Table 4.2:

$\dot{\theta}_p = \Gamma\varepsilon\phi$   (7.4.6)

$\varepsilon = \frac{z - \theta_p^\top\phi}{m^2}, \qquad m^2 = 1 + \phi^\top\phi$

where $\Gamma = \Gamma^\top > 0$, $\theta_p = [\hat{b}, \hat{a}]^\top$, and $\hat{a}(t)$, $\hat{b}(t)$ are the estimates of $a$ and $b$, respectively.
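A discretized simulation of (7.4.5) and (7.4.6) is sketched below (it is not part of the original example). It uses forward-Euler integration of the plant, of the filters $1/(s+\lambda)$, and of $\dot{\theta}_p = \Gamma\varepsilon\phi$; the plant values $a = 0.5$, $b = 2$, the input, the gains, and the step size are all illustrative choices.

```python
import numpy as np

a_true, b_true = 0.5, 2.0                 # "unknown" plant parameters (illustrative)
lam, dt, T = 1.0, 1e-3, 50.0
Gamma = np.diag([10.0, 10.0])             # adaptive gain Gamma = Gamma^T > 0

theta = np.zeros(2)                       # theta_p = [b_hat, a_hat]^T
y_p = 0.0                                 # plant state (y_p = x)
phi = np.zeros(2)                         # phi = [1/(s+lam) u_p, -1/(s+lam) y_p]^T
yf = 0.0                                  # 1/(s+lam) y_p, used to form z = s/(s+lam) y_p

for k in range(int(T / dt)):
    t = k * dt
    u_p = np.sin(0.5 * t) + np.cos(2.0 * t)       # sufficiently rich test input

    z = y_p - lam * yf                            # z = s/(s+lam) y_p = y_p - lam/(s+lam) y_p
    eps = (z - theta @ phi) / (1.0 + phi @ phi)   # normalized estimation error
    theta = theta + dt * (Gamma @ (eps * phi))    # gradient update theta_dot = Gamma*eps*phi

    # forward-Euler update of the plant and the filters
    y_p += dt * (-a_true * y_p + b_true * u_p)
    phi += dt * (np.array([u_p, -y_p]) - lam * phi)
    yf  += dt * (y_p - lam * yf)

print(theta)    # should approach [b, a] = [2.0, 0.5] for this persistently exciting input
```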

Calculation of Controller Parameters As shown in Section 7.3.2, the control law

Λ − LQ

P

u