Nonlinear Observer-Based Control Allocation


Since ∂L/∂λ, ∂L/∂u, and e exponentially converge to zero as t → +∞, the closed-loop system exponentially converges to

˙ˆx = A_d ˆx + B_d r
˙e = [ D_x f(ˆx + m_1, u) − θ(ˆx, u) P^{-1} [Dh(ˆx)]^T Dh(ˆx + m_0) ] e          (43)

Since A_d is an asymptotically stable matrix, we know that ˆx ∈ W is bounded. According to Assumptions 1 and 4, D_x f(ˆx + m_1, u), Dh(ˆx) and Dh(ˆx + m_0) are all bounded for m_0, m_1 ∈ S and u ∈ Ω. From k_0 > 0, we have 0 < γ_1(ˆx, u) < +∞. According to Assumption 4, we have ker R(ˆx, m_0) ⊆ ker Dh(ˆx), which ensures that 0 < ν^T R(ˆx, m_0) ν < +∞ for every ν ∈ ∂S − M, m_0 ∈ S and ˆx ∈ W. Thus, we have 0 < γ_2(ˆx) < +∞. As a result, 0 < θ(ˆx, u) < +∞. From (43), we know that ˙e exponentially converges to zero as e exponentially converges to zero.

Moreover, we have

˙x − ˙e = A_d x − A_d e + B_d r          (44)

Since ˙e and e exponentially converge to zero, the system (1) exponentially converges to ˙x = A_d x + B_d r. This completes the proof.

Consider now the issue of solving (26) with respect to ξ_1 and ξ_2. One method to obtain a well-defined unique solution to this under-determined algebraic equation is to solve a least-squares problem subject to (26). This leads to the Lagrangian

l(ξ_1, ξ_2, ρ) = (1/2)( ξ_1^T ξ_1 + ξ_2^T ξ_2 ) + ρ ( α^T ξ_1 + β^T ξ_2 + δ + ω V_m )          (45)

where ρ ∈ R is a Lagrange multiplier. The first-order optimality conditions

∂l/∂ξ_1 = 0,   ∂l/∂ξ_2 = 0,   ∂l/∂ρ = 0          (46)

lead to the following system of linear equations

⎡ I_m   0     α ⎤ ⎡ ξ_1 ⎤   ⎡ 0          ⎤
⎢ 0     I_m   β ⎥ ⎢ ξ_2 ⎥ = ⎢ 0          ⎥          (47)
⎣ α^T   β^T   0 ⎦ ⎣ ρ   ⎦   ⎣ −δ − ω V_m ⎦

Remark 4. It is noted that Equation (47) always has a unique solution for ξ_1 and ξ_2 if at least one of α and β is nonzero.
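To make this step concrete, the following short Python sketch (an added illustration, not part of the original development; the function name solve_allocation and its argument names are placeholders for the quantities α, β, δ, ω and V_m above) assembles and solves the linear system (47):

import numpy as np

def solve_allocation(alpha, beta, delta, omega, Vm):
    """Assemble and solve the linear system (47) for xi_1, xi_2 and rho."""
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    m = alpha.size
    # KKT matrix of the equality-constrained least-squares problem (45)
    K = np.zeros((2 * m + 1, 2 * m + 1))
    K[:m, :m] = np.eye(m)
    K[m:2 * m, m:2 * m] = np.eye(m)
    K[:m, -1] = alpha
    K[m:2 * m, -1] = beta
    K[-1, :m] = alpha
    K[-1, m:2 * m] = beta
    rhs = np.zeros(2 * m + 1)
    rhs[-1] = -delta - omega * Vm
    # Unique solution whenever alpha or beta is nonzero (Remark 4)
    sol = np.linalg.solve(K, rhs)
    return sol[:m], sol[m:2 * m], sol[-1]

Equivalently, eliminating ξ_1 = −ρα and ξ_2 = −ρβ from the first two block rows of (47) gives the closed form ρ = (δ + ω V_m)/(α^T α + β^T β), which may be used instead of the matrix solve whenever α^T α + β^T β > 0.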

4. Example

Consider the pendulum system

˙x_1 = x_2
˙x_2 = sin x_1 + u_1 cos x_1 + u_2 sin x_1          (48)

y = x_1 + x_2          (49)


with x = [x_1 x_2]^T ∈ R², u = [u_1 u_2]^T ∈ Ω and

Ω = { u = [u_1 u_2]^T : −1 ≤ u_1 ≤ 1, −0.5 ≤ u_2 ≤ 0.5 }          (50)

(50)

As the system is affine in control and its measurement output y is a linear map of its state x,

Assumptions 1, 3 and 4 are satisfied automatically.
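In code, the plant (48)-(49) can be written directly as below (an illustrative sketch only; the function names f and h simply mirror the chapter's notation and are not part of the original text):

import numpy as np

def f(x, u):
    """Pendulum dynamics (48); x = [x1, x2], u = [u1, u2] with u in Omega."""
    x1, x2 = x
    u1, u2 = u
    return np.array([x2,
                     np.sin(x1) + u1 * np.cos(x1) + u2 * np.sin(x1)])

def h(x):
    """Measurement (49): y = x1 + x2."""
    return x[0] + x[1]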

Choose

P = ⎡ 3  0 ⎤
    ⎣ 0  1 ⎦

For e ≠ 0 and e ∈ ker[1 1], we have e_1 = −e_2 and

e^T P D_x f(x, u) e |_{e_1 = −e_2}
  = [ e_1  e_2 ] ⎡ 0                                      3 ⎤ ⎡ e_1 ⎤ |_{e_1 = −e_2}
                 ⎣ cos x_1 − u_1 sin x_1 + u_2 cos x_1    0 ⎦ ⎣ e_2 ⎦
  = ( cos x_1 − u_1 sin x_1 + u_2 cos x_1 + 3 ) e_1 e_2 |_{e_1 = −e_2}
  ≤ [ −1.5 cos(arctan(2/3)) − sin(arctan(2/3)) + 3 ] e_1 e_2 |_{e_1 = −e_2}
  = ( −1.8028 + 3 ) e_1 e_2 |_{e_1 = −e_2}
  = −0.5986 ‖e‖^2 |_{e_1 = −e_2}
  < −k_0 ‖e‖^2 |_{e_1 = −e_2}

with 0 < k_0 < 0.5986. Hence, Assumption 5 is satisfied. Let S be the ball of radius r = 1 centered at zero and let ∂S be the boundary of S. Define M ⊂ ∂S as

M = { ν = [ν_1 ν_2]^T ∈ R² : ‖ν‖ = 1, 3 ν_1 ν_2 + 1.8028 |ν_1 ν_2| < −k_0 }

Obviously,

∂S − M = { ν = [ν_1 ν_2]^T ∈ R² : ‖ν‖ = 1, 3 ν_1 ν_2 + 1.8028 |ν_1 ν_2| ≥ −k_0 }

As γ_1(ˆx, u) = 3 × 1.8028 + k_0 and

γ_2(ˆx) = min{ (ν_1 + ν_2)² : ν ∈ ∂S − M } = 1 − 2 k_0 / (3 − 1.8028),

choosing k_0 = 0.5 we have γ_1(ˆx, u)/γ_2(ˆx) = 35.8699. Let θ(ˆx, u) = 36 > 35.8699; then we have Φ(ˆx, u) = [12 36]^T.
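These constants are easy to check numerically. The short sketch below (an added illustration with variable names of our choosing) reproduces the bound 1.8028, the ratio γ_1/γ_2 ≈ 35.87, and the gain Φ(ˆx, u), here formed as θ(ˆx, u) P^{-1} [Dh(ˆx)]^T, which is consistent with the correction term in (43) and with the value [12 36]^T above:

import numpy as np

k0 = 0.5
a_max = np.hypot(1.5, 1.0)           # worst-case |d f2 / d x1| over u in Omega, approx. 1.8028
gamma1 = 3 * a_max + k0              # gamma_1(xhat, u)
gamma2 = 1 - 2 * k0 / (3 - a_max)    # gamma_2(xhat) = min (nu1 + nu2)^2 over dS - M
theta = 36.0                         # chosen strictly above gamma1 / gamma2
P_inv = np.diag([1.0 / 3.0, 1.0])    # P = diag(3, 1)
Dh = np.array([[1.0, 1.0]])          # y = x1 + x2
Phi = theta * P_inv @ Dh.T           # observer gain, equals [12, 36]^T
print(gamma1 / gamma2, Phi.ravel())  # approx. 35.87 and [12. 36.]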

Now the nonlinear observer becomes

⎡ ˙ˆx_1 ⎤   ⎡ ˆx_2                                    ⎤   ⎡ 12 ⎤
⎣ ˙ˆx_2 ⎦ = ⎣ sin ˆx_1 + u_1 cos ˆx_1 + u_2 sin ˆx_1 ⎦ + ⎣ 36 ⎦ ( y − ˆx_1 − ˆx_2 )
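The observer above can be coded directly; the following sketch (illustrative, with a function name of our choosing) returns its right-hand side for use with any ODE integrator:

import numpy as np

def observer_rhs(xhat, u, y):
    """Right-hand side of the nonlinear observer above.
    xhat = [xhat1, xhat2], u = [u1, u2], y is the measured output x1 + x2."""
    x1h, x2h = xhat
    u1, u2 = u
    f_hat = np.array([x2h,
                      np.sin(x1h) + u1 * np.cos(x1h) + u2 * np.sin(x1h)])
    gain = np.array([12.0, 36.0])          # Phi(xhat, u) = [12, 36]^T
    return f_hat + gain * (y - x1h - x2h)  # output-injection correction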

Choose the reference model (6) where

A_d = ⎡  0    1  ⎤ ,    B_d = ⎡  0 ⎤
      ⎣ −25  −10 ⎦            ⎣ 25 ⎦


and the reference is given by

     ⎧ [ 6 (t/t_1)^5 − 15 (t/t_1)^4 + 10 (t/t_1)^3 ] r_f,                                               0 ≤ t < t_1
r =  ⎨ r_f,                                                                                             t_1 ≤ t < t_2
     ⎪ [ −6 ((t − t_2)/(t_f − t_2))^5 + 15 ((t − t_2)/(t_f − t_2))^4 − 10 ((t − t_2)/(t_f − t_2))^3 ] r_f + r_f,   t_2 ≤ t < t_f
     ⎩ 0,                                                                                               t ≥ t_f

with t_1 = 10 s, t_2 = 20 s, t_f = 30 s and r_f = 0.5. Obviously, Assumption 2 is satisfied.
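A direct Python transcription of this ramp-up/hold/ramp-down reference (illustrative, with a function name of our choosing) is:

def ref_signal(t, t1=10.0, t2=20.0, tf=30.0, rf=0.5):
    """Reference r(t): quintic rise to rf, hold, quintic return to zero."""
    if t < 0.0:
        return 0.0
    if t < t1:
        s = t / t1
        return (6 * s**5 - 15 * s**4 + 10 * s**3) * rf
    if t < t2:
        return rf
    if t < tf:
        s = (t - t2) / (tf - t2)
        return (-6 * s**5 + 15 * s**4 - 10 * s**3) * rf + rf
    return 0.0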

Set H_1 = 0, H_2 = 10^4 I_2, ω = 1, Γ_1 = Γ_2 = 2 I_2, x_1(0) = 0.3 and x_2(0) = 0.5. Using the proposed approach, the simulation results of the pendulum system (48)-(50) are shown in Figures 2-5, where the control u_2 is stuck at 0.5 from t = 12 s onward.
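The full simulation uses the allocation and update laws of the preceding sections, which are not repeated here. Purely to illustrate how the plant and observer are integrated in time, a minimal forward-Euler co-simulation (with the control held constant in Ω and the observer started from the origin, both choices ours) might look as follows:

import numpy as np

dt, T = 1e-3, 30.0
x = np.array([0.3, 0.5])         # x1(0) = 0.3, x2(0) = 0.5
xhat = np.zeros(2)               # observer initial state (our choice)
u = np.array([0.0, 0.5])         # control held constant for illustration only
gain = np.array([12.0, 36.0])    # observer gain [12, 36]^T
for _ in range(int(T / dt)):
    y = x[0] + x[1]              # measurement (49)
    fx = np.array([x[1], np.sin(x[0]) + u[0] * np.cos(x[0]) + u[1] * np.sin(x[0])])
    fxh = np.array([xhat[1], np.sin(xhat[0]) + u[0] * np.cos(xhat[0]) + u[1] * np.sin(xhat[0])])
    x = x + dt * fx                                            # plant (48)
    xhat = xhat + dt * (fxh + gain * (y - xhat[0] - xhat[1]))  # observer
print(x - xhat)                  # estimation error x - xhat after 30 s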

From Figure 2, it is observed that the estimated states ˆx_1 and ˆx_2 converge to the actual states x_1 and x_2 and match the desired states x_1d and x_2d well, respectively, even when u_2 is stuck at 0.5. This observation is further verified by Figure 3, where both the state estimation errors e_1 (= x_1 − ˆx_1) and e_2 (= x_2 − ˆx_2) of the nonlinear observer (4) and the matching errors τ_1 (= 0) and τ_2 (= sin ˆx_1 + u_1 cos ˆx_1 + u_2 sin ˆx_1 + 25 ˆx_1 + 10 ˆx_2 − 25 r) as in (8) exponentially converge to zero. Moreover, Figure 4 shows that the control u_1 roughly satisfies the control constraint u_1 ∈ [−1, 1], while the control u_2 strictly satisfies the control constraint u_2 ∈ [−0.5, 0.5]. This is because, in this example, the Lagrange multiplier λ_1 is first activated by the control u_1 < −1 at t = 0 (see Figure 5, where λ_1 is no longer zero from t = 0), and then the proposed dynamic update law forces the control u_1 to satisfy the constraint u_1 ∈ [−1, 1].

Fig. 2. Responses of the desired, estimated and actual states (top: ˆx_1, x_1, x_1d; bottom: ˆx_2, x_2, x_2d; time in s)


Fig. 3. Responses of estimation error and matching error (top: e_1, e_2; bottom: τ_1, τ_2; time in s)

Fig. 4. Responses of control u (top: u_1; bottom: u_2; time in s)

It is also noted from Figure 5 that the Lagrange multiplier λ_2 is not activated in this example, as the control u_2 never leaves the range [−0.5, 0.5]. In addition, the output y and the Lyapunov-like function V_m are shown in Figure 6. From Figure 6, it is observed that the Lyapunov-like function V_m exponentially converges to zero.


Fig. 5. Responses of Lagrangian multiplier λ (top: λ_1; bottom: λ_2; time in s)

Fig. 6. Responses of output y and Lyapunov-like function V_m (top: y; bottom: V_m on a ×10^{-8} scale; time in s)

5. Conclusions

Sufficient Lyapunov-like conditions have been proposed for the control allocation design via

output feedback. The proposed approach is applicable to a wide class of nonlinear systems.

Since the initial estimation error e(0) needs to be near zero and the predefined closed-loop dynamics are described by a stable linear reference model, the proposed approach is local in nature.
