Linear Controller Design: Limits of Performance by Stephen Boyd and Craig Barratt

CHAPTER 13 ELEMENTS OF CONVEX ANALYSIS

We can think of the right-hand side of (13.2) as an affine approximation to φ(z), which is exact at z = x. The inequality (13.2) states that the right-hand side is a global lower bound on φ. This is shown in figure 13.1.

Figure 13.1: A convex function φ on R, along with three affine global lower bounds on φ derived from subgradients. At x1, φ is differentiable, and the slope of the tangent line is g1 = φ′(x1). At x2, φ is not differentiable; two different tangent lines, corresponding to subgradients g2 and g̃2, are shown.
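As a small numerical illustration (not from the book), the Python sketch below checks the subgradient inequality (13.2) for the convex function φ(z) = |z| on R: at x = 0, any g with |g| ≤ 1 is a subgradient, and each choice of g gives an affine global lower bound on φ.

    # Numerical check of the subgradient inequality (13.2) for phi(z) = |z|.
    # At x = 0, any g with |g| <= 1 is a subgradient of phi; each one gives
    # the affine global lower bound phi(x) + g*(z - x) = g*z.
    import numpy as np

    def phi(z):
        return np.abs(z)

    x = 0.0
    zs = np.linspace(-5.0, 5.0, 1001)
    for g in (-1.0, -0.3, 0.0, 0.7, 1.0):        # candidate subgradients at x = 0
        lower_bound = phi(x) + g * (zs - x)      # right-hand side of (13.2)
        assert np.all(phi(zs) >= lower_bound - 1e-12)
    print("each affine function phi(x) + g*(z - x) is a global lower bound on phi")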

We mention two important consequences of g ∈ ∂φ(x). For gᵀ(z − x) > 0 we have φ(z) > φ(x); in other words, in the half-space {z | gᵀ(z − x) > 0}, the values of φ exceed the value of φ at x. Thus if we are searching for an x that minimizes φ, and we know a subgradient g of φ at x, then we can rule out the entire half-space gᵀ(z − x) > 0. The hyperplane gᵀ(z − x) = 0 is called a cut because it cuts off from consideration the half-space gᵀ(z − x) > 0 in a search for a minimizer. This is shown in figure 13.2.
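To make the cut concrete, the following Python sketch uses a hypothetical example (not from the book): φ(z) = ||z||_1 on R², with x = (1, −2) and subgradient g = sign(x). It checks numerically that every sampled point in the discarded half-space gᵀ(z − x) > 0 has φ(z) > φ(x).

    # Hypothetical example of a cut: phi(z) = ||z||_1 on R^2, x = (1, -2),
    # and g = sign(x), which is a subgradient of the 1-norm at x.
    # Every point in the half-space g^T (z - x) > 0 satisfies phi(z) > phi(x),
    # so that half-space can be discarded when searching for a minimizer.
    import numpy as np

    def phi(z):
        return np.sum(np.abs(z), axis=-1)

    x = np.array([1.0, -2.0])
    g = np.sign(x)

    rng = np.random.default_rng(0)
    zs = rng.uniform(-5.0, 5.0, size=(10000, 2))   # random candidate points
    cut_off = (zs - x) @ g > 0                     # points ruled out by the cut
    assert np.all(phi(zs[cut_off]) > phi(x))
    print("all cut-off points have phi(z) > phi(x) =", phi(x))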

An extension of this idea will also be useful. From (13.2), every z that satisfies φ(z) ≤ γ, where γ < φ(x), must also satisfy gᵀ(z − x) ≤ γ − φ(x). If we are searching for a z that satisfies φ(z) ≤ γ, we need not consider the half-space gᵀ(z − x) > γ − φ(x). The hyperplane gᵀ(z − x) = γ − φ(x) is called a deep-cut because it rules out a larger set than the simple cut gᵀ(z − x) = 0. This is shown in figure 13.3.
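Continuing the same hypothetical example, the sketch below checks the deep-cut property: with a target level γ < φ(x), every sampled point with φ(z) ≤ γ lies in the half-space gᵀ(z − x) ≤ γ − φ(x), so the other half-space can safely be discarded.

    # Hypothetical example of a deep-cut: same phi, x, and g as above,
    # with a target level gamma < phi(x).  Every z with phi(z) <= gamma must
    # satisfy g^T (z - x) <= gamma - phi(x), so the half-space
    # g^T (z - x) > gamma - phi(x) can be discarded.
    import numpy as np

    def phi(z):
        return np.sum(np.abs(z), axis=-1)

    x = np.array([1.0, -2.0])
    g = np.sign(x)                    # a subgradient of the 1-norm at x
    gamma = 1.0                       # target level; gamma < phi(x) = 3

    rng = np.random.default_rng(1)
    zs = rng.uniform(-5.0, 5.0, size=(10000, 2))
    meets_target = phi(zs) <= gamma                # points with phi(z) <= gamma
    assert np.all((zs[meets_target] - x) @ g <= gamma - phi(x))
    print("every point with phi(z) <= gamma lies on the kept side of the deep-cut")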

13.1.1 Subgradients: Infinite-Dimensional Case

The notion of a subgradient can be generalized to apply to functionals on infinite-dimensional spaces. The books cited in the Notes and References at the end of this chapter contain a detailed and precise treatment of this topic; in this book we will give a simple (but correct) description.


Figure 13.2: A point x and a subgradient g of φ at x. In the half-space gᵀz > gᵀx, φ(z) exceeds φ(x); in particular, any minimizer of φ must lie in the half-space gᵀz ≤ gᵀx.

Figure 13.3: A point x and a subgradient g of φ at x determine a deep-cut gᵀ(z − x) = γ − φ(x) in the search for points z that satisfy φ(z) ≤ γ (assuming x does not satisfy this inequality). The points in the shaded region need not be considered, since they all have φ(z) > γ.
