Linear Controller Design: Limits of Performance, by Stephen Boyd and Craig Barratt


CHAPTER 13 ELEMENTS OF CONVEX ANALYSIS

Differentiable functional: If $\phi$ is convex and differentiable at $x$, then its derivative at $x$ is an element of $\partial\phi(x)$. (In fact, it is the only element of $\partial\phi(x)$.)

Scaling: If $w \geq 0$ and $\phi$ is convex, then a subgradient of $w\phi$ at $x$ is given by $wg$, where $g$ is any subgradient of $\phi$ at $x$.

Sum: If $\phi(x) = \phi_1(x) + \cdots + \phi_m(x)$, where $\phi_1, \ldots, \phi_m$ are convex, then any $g$ of the form $g = g_1 + \cdots + g_m$ is in $\partial\phi(x)$, where $g_i \in \partial\phi_i(x)$.

Maximum: Suppose that
$$\phi(x) = \sup\,\{\phi_\alpha(x) \mid \alpha \in A\},$$
where each $\phi_\alpha$ is convex, and $A$ is any index set. Suppose that $\alpha_{\rm ach} \in A$ is such that $\phi_{\alpha_{\rm ach}}(x) = \phi(x)$ (so that $\phi_{\alpha_{\rm ach}}(x)$ achieves the maximum). Then if $g \in \partial\phi_{\alpha_{\rm ach}}(x)$, we have $g \in \partial\phi(x)$. Of course there may be several different indices that achieve the maximum; we need only pick one.

A special case is when $\phi$ is the maximum of the functionals $\phi_1, \ldots, \phi_n$, so that $A = \{1, \ldots, n\}$. If $\phi(x) = \phi_i(x)$, then any subgradient of $\phi_i$ at $x$ is also a subgradient of $\phi$ at $x$.

From these tools we can derive additional tools for determining a subgradient of a weighted sum or weighted maximum of convex functionals. Their use will become clear in the next section.
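As an illustration of how the scaling and maximum tools combine, here is a minimal numerical sketch for a weighted maximum of affine (hence convex) functionals; the data, weights, and function names are our own illustrative choices, not anything from the text:

import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 5
A = rng.standard_normal((m, n))      # phi_i(x) = A[i] @ x + b[i]: affine, hence convex
b = rng.standard_normal(m)
w = rng.uniform(0.5, 2.0, size=m)    # nonnegative weights

def phi(x):
    # weighted maximum of convex functionals
    return np.max(w * (A @ x + b))

def subgradient(x):
    i = np.argmax(w * (A @ x + b))   # an index achieving the maximum
    return w[i] * A[i]               # scaling tool applied to grad phi_i, then maximum tool

x = rng.standard_normal(n)
g = subgradient(x)
# check the subgradient inequality phi(z) >= phi(x) + g^T (z - x):
for _ in range(1000):
    z = rng.standard_normal(n)
    assert phi(z) >= phi(x) + g @ (z - x) - 1e-9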

For quasiconvex functionals, we have the analogous tools:

Differentiable functional: If $\phi$ is quasiconvex and differentiable at $x$, with nonzero derivative, then its derivative at $x$ is a quasigradient of $\phi$ at $x$.

Scaling: If $w > 0$ and $\phi$ is quasiconvex, then any quasigradient of $\phi$ at $x$ is also a quasigradient of $w\phi$ at $x$.

Maximum: Suppose that
$$\phi(x) = \sup\,\{\phi_\alpha(x) \mid \alpha \in A\},$$
where each $\phi_\alpha$ is quasiconvex, and $A$ is any index set. Suppose that $\alpha_{\rm ach} \in A$ is such that $\phi_{\alpha_{\rm ach}}(x) = \phi(x)$. Then if $g$ is a quasigradient of $\phi_{\alpha_{\rm ach}}$ at $x$, $g$ is a quasigradient of $\phi$ at $x$.

Nested family: Suppose that $\phi$ is defined in terms of a nested family of convex sets, i.e., $\phi(x) = \inf\,\{\alpha \mid x \in C_\alpha\}$, where $C_\alpha \subseteq C_\beta$ whenever $\alpha \leq \beta$ (see section 6.2.2). If $g^T(z - x) = 0$ defines a supporting hyperplane to $C_{\phi(x)}$ at $x$, then $g$ is a quasigradient of $\phi$ at $x$.

(The sum tool is not applicable because the sum of quasiconvex functionals need not be quasiconvex: on $\mathbf{R}$, for example, $x^3$ and $-x$ are each quasiconvex, but their sum $x^3 - x$ is not, since its sublevel set $\{x \mid x^3 - x \leq 0\} = (-\infty, -1] \cup [0, 1]$ is not convex.)
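To make the quasigradient tools concrete, here is a minimal numerical sketch using the differentiable-functional rule on a linear-fractional function, which is quasiconvex on the half-space where its denominator is positive; all data and names below are our own illustrative choices:

import numpy as np

rng = np.random.default_rng(1)
n = 3
a, c = rng.standard_normal(n), rng.standard_normal(n)
b, d = 0.5, 5.0                      # chosen so that c @ x + d > 0 near the origin

def phi(x):
    # linear-fractional function: quasiconvex where c @ x + d > 0
    return (a @ x + b) / (c @ x + d)

def quasigradient(x):
    # the derivative of phi at x (assumed nonzero), per the rule above
    return (a * (c @ x + d) - c * (a @ x + b)) / (c @ x + d) ** 2

x = np.zeros(n)
g = quasigradient(x)
# quasigradient property: phi(z) <= phi(x) implies g^T (z - x) <= 0
for _ in range(2000):
    z = 0.5 * rng.standard_normal(n)
    if c @ z + d > 0 and phi(z) <= phi(x):
        assert g @ (z - x) <= 1e-9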



13.4  Computing Subgradients

In this section we show how to compute subgradients of several of the convex functionals we have encountered in chapters 8-10. Since these are convex functionals on $\mathcal{H}$, an infinite-dimensional space, the subgradients we derive will be linear functionals on $\mathcal{H}$. In the next section we show how these can be used to calculate subgradients in $\mathbf{R}^n$ when the finite-dimensional approximation used in chapter 15 is made; the algorithms of the next chapter can then be used.

In general, the convex functionals we consider will be functionals of some particular entry (or block of entries) of the closed-loop transfer matrix $H$. To simplify notation, we will assume in each subsection that $H$ consists of only the relevant entry or entries.

13.4.1  An RMS Response

We consider the weighted $\mathbf{H}_2$ norm,
$$\phi(H) = \left( \frac{1}{2\pi} \int_{-\infty}^{\infty} S_w(\omega)\, |H(j\omega)|^2 \, d\omega \right)^{1/2},$$
with SISO $H$ for simplicity (and of course, $S_w(\omega) \geq 0$). We will determine a subgradient of $\phi$ at the transfer function $H_0$. If $\phi(H_0) = 0$, then the zero functional is a subgradient, so we now assume that $\phi(H_0) \neq 0$. In this case $\phi$ is differentiable at $H_0$, so our first rule above tells us that our only choice for a subgradient is the derivative of $\phi$ at $H_0$, which is the linear functional $\phi_{\rm sg}$ given by
$$\phi_{\rm sg}(H) = \frac{1}{2\pi\,\phi(H_0)} \int_{-\infty}^{\infty} S_w(\omega)\, \Re\!\left( \overline{H_0(j\omega)}\, H(j\omega) \right) d\omega.$$

(The reader can verify that for small $H$, $\phi(H_0 + H) \approx \phi(H_0) + \phi_{\rm sg}(H)$; the Cauchy-Schwarz inequality can be used to directly verify that the subgradient inequality (13.3) holds.)
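One way to carry out this verification: define the inner product
$$\langle F, G\rangle = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_w(\omega)\,\Re\!\left(\overline{F(j\omega)}\,G(j\omega)\right) d\omega,$$
so that $\phi(H) = \langle H, H\rangle^{1/2}$ and $\phi_{\rm sg}(H) = \langle H_0, H\rangle/\phi(H_0)$. Then by Cauchy-Schwarz, $\langle H_0, H\rangle \leq \phi(H_0)\,\phi(H)$, so
$$\phi(H_0) + \phi_{\rm sg}(H - H_0) = \frac{\langle H_0, H\rangle}{\phi(H_0)} \leq \phi(H),$$
which is exactly the subgradient inequality.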

Using the subgradient for $\phi$, we can find a supporting hyperplane to the maximum RMS response specification $\phi(H) \leq \alpha$.

There is an analogous expression for the case when $H$ is a transfer matrix. For $\phi(H) = \|H\|_2$ and $H_0 \neq 0$, a subgradient of $\phi$ at $H_0$ is given by
$$\phi_{\rm sg}(H) = \frac{1}{2\pi\,\phi(H_0)} \int_{-\infty}^{\infty} \Re\, {\rm Tr}\!\left( H_0(j\omega)^{*} H(j\omega) \right) d\omega.$$
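For concreteness, the scalar formula can be checked numerically by truncating and discretizing the integral; in this sketch the frequency grid, the weight $S_w$, and the choice $H_0(s) = 1/(s+1)$ are all our own illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
omega = np.linspace(-50.0, 50.0, 4001)   # frequency grid (truncates the integral)
dw = omega[1] - omega[0]
Sw = 1.0 / (1.0 + omega**2)              # an illustrative weight, S_w(omega) >= 0

def phi(H):
    # discretized weighted H2 norm
    return np.sqrt(np.sum(Sw * np.abs(H)**2) * dw / (2.0 * np.pi))

H0 = 1.0 / (1j * omega + 1.0)            # samples of H_0(j omega) for H_0(s) = 1/(s+1)

def phi_sg(H):
    # discretized derivative of phi at H0, acting on H
    return np.sum(Sw * np.real(np.conj(H0) * H)) * dw / (2.0 * np.pi * phi(H0))

# subgradient inequality: phi(H) >= phi(H0) + phi_sg(H - H0) for any H
for _ in range(100):
    H = rng.standard_normal(omega.size) + 1j * rng.standard_normal(omega.size)
    assert phi(H) >= phi(H0) + phi_sg(H - H0) - 1e-9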

13.4.2  Step Response Overshoot

(H) = sups(t) 1

t 0

;
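Since $\phi$ here is the supremum over $t \geq 0$ of functionals that are affine in $H$ (each $H \mapsto s(t) - 1$), the maximum tool of section 13.3 suggests a subgradient: evaluate the step response at a time achieving the supremum. A minimal discrete-time sketch, in which the grid, the example impulse response, and all names are our own assumptions:

import numpy as np

rng = np.random.default_rng(3)

def step_response(h):
    # h holds impulse-response samples; the step response is their running sum
    return np.cumsum(h)

def overshoot(h):
    return np.max(step_response(h)) - 1.0

def overshoot_subgradient(h):
    # maximum tool: s(t) - 1 is affine in h, so a subgradient is the
    # gradient of s(t_ach), i.e. the indicator of samples up to t_ach
    t_ach = np.argmax(step_response(h))
    g = np.zeros_like(h)
    g[: t_ach + 1] = 1.0
    return g

t = np.arange(100)
h0 = 0.3 * np.exp(-0.1 * t) * np.cos(0.5 * t)   # an example impulse response
g = overshoot_subgradient(h0)
# subgradient inequality: overshoot(h) >= overshoot(h0) + g^T (h - h0)
for _ in range(200):
    h = h0 + 0.1 * rng.standard_normal(h0.size)
    assert overshoot(h) >= overshoot(h0) + g @ (h - h0) - 1e-12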
