
Chapter 5: Feedback Systems

Consider
\dot{x} = f(x, u), \quad f(0, 0) = 0,    (1)
and assume that u is obtained using state feedback:
u = \phi(x).    (2)
The stability of the origin can then be studied by substituting (2) into (1):
\dot{x} = f(x, \phi(x)).    (3)

Basic Feedback Stabilization

Consider the system
\dot{x} = a x^2 + u,    (4)
where a is a non-zero constant.

Example 1

We look for a state feedback of the form u = \phi(x) that makes x = 0 asymptotically stable. Consider first setting
u = -a x^2 - x.    (5)
Substituting (5) into (4) we obtain \dot{x} = -x, which is linear and globally asymptotically stable, as desired.

Issues: this first solution has two problems:
(i) It is based on the exact cancellation of the nonlinear term a x^2, thus requiring exact knowledge of the system parameter(s).
(ii) Cancelling all nonlinear terms simplifies the analysis but may not be a good idea.

Example 2

Consider the system given by
\dot{x} = a x^2 - x^3 + u.
Following the approach in Example 1 we can set
u = u_1 = -a x^2 + x^3 - x,
which leads to \dot{x} = -x. However, u_1 cancels the terms a x^2 and -x^3, which are quite different:
The term in x^2 is never desirable: it has a destabilizing effect.
The term in x^3 provides damping and can be beneficial. Its cancellation was achieved by incorporating the term x^3 in the feedback law, which leads to very large input values.

Alternate solution: Given the system
\dot{x} = f(x, u), \quad x \in R^n, \ u \in R, \ f(0, 0) = 0,    (6)
we proceed to find a feedback law u = \phi(x) such that \dot{x} = f(x, \phi(x)) has x = 0 as an asymptotically stable equilibrium. We look for V_1 = V_1(x) : D \to R satisfying
(i) V_1(0) = 0, and V_1(x) is positive definite in D - \{0\}.
(ii) There exists L(x) : D \to R^+ (positive definite) such that
\dot{V}_1(x) = (\partial V_1/\partial x) f(x, \phi(x)) \le -L(x) \quad for all x \in D.

Example 3

Consider again the system of Example 2,
\dot{x} = a x^2 - x^3 + u.
Defining V_1(x) = \frac{1}{2} x^2 and computing \dot{V}_1, we obtain
\dot{V}_1 = a x^3 - x^4 + x u.
In Example 2 we chose u = u_1 = -a x^2 + x^3 - x. Thus
\dot{V}_1 = a x^3 - x^4 + x(-a x^2 + x^3 - x) = -x^2 =: -L(x).
We now modify L(x) as follows:
\dot{V}_1 = a x^3 - x^4 + x u \le -L(x) =: -(x^4 + x^2)
and choose u as
u = -x - a x^2.
With this u, we obtain the feedback system
\dot{x} = a x^2 - x^3 + u = -x - x^3,
which is globally asymptotically stable.
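As a sanity check on the two designs, the closed loop can be simulated numerically. The sketch below is illustrative only and not part of the original notes; the value a = 1, the initial condition x(0) = 5 and the use of scipy are assumptions. It integrates \dot{x} = a x^2 - x^3 + u under the cancelling law u_1 of Example 2 and under the redesigned law u = -x - a x^2, and compares the peak input magnitudes.

    # Minimal sketch: compare the two feedback laws for xdot = a*x^2 - x^3 + u.
    # a = 1 and x(0) = 5 are arbitrary choices for illustration.
    from scipy.integrate import solve_ivp

    a = 1.0

    def u_cancel(x):       # Example 2: cancels both nonlinear terms
        return -a*x**2 + x**3 - x

    def u_redesign(x):     # Example 3: keeps the beneficial -x^3 damping term
        return -x - a*x**2

    for name, u in [("u1 (cancelling)", u_cancel), ("u  (redesigned)", u_redesign)]:
        sol = solve_ivp(lambda t, s: [a*s[0]**2 - s[0]**3 + u(s[0])],
                        (0.0, 10.0), [5.0], max_step=0.01)
        peak = max(abs(u(x)) for x in sol.y[0])
        print(f"{name}: |x(T)| = {abs(sol.y[0, -1]):.2e}, max |u| = {peak:.1f}")

With these (assumed) values both laws drive x to zero, but the cancelling law demands a noticeably larger peak input.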

Integrator Backstepping

We consider a system of the form
\dot{x} = f(x) + g(x)\xi,    (7)
\dot{\xi} = u,    (8)
with x \in R^n, \xi \in R, and f(0) = 0. We will make the following assumptions (see Figure 5.1(a)):
(i) Viewing the state variable \xi as an independent input, there exists a state feedback control law
\xi = \phi(x), \quad \phi(0) = 0,
that stabilizes the origin of the subsystem (7).
(ii) There is a Lyapunov function V_1 : D \to R^+ such that
\dot{V}_1(x) = (\partial V_1/\partial x)[f(x) + g(x)\phi(x)] \le -L(x) \le 0 \quad for all x \in D,
where L(\cdot) : D \to R^+ is a positive definite function in D.

To stabilize the system (7)-(8) we proceed as follows. Adding and subtracting g(x)\phi(x) to (7) (Figure 1(b)) we obtain the equivalent system
\dot{x} = f(x) + g(x)\phi(x) + g(x)[\xi - \phi(x)]    (9)
\dot{\xi} = u.    (10)
Define
z = \xi - \phi(x)    (11)
\dot{z} = \dot{\xi} - \dot{\phi}(x) = u - \dot{\phi}(x),    (12)
where
\dot{\phi} = (\partial\phi/\partial x)\,\dot{x} = (\partial\phi/\partial x)[f(x) + g(x)\xi].    (13)
This change of variables can be seen as "backstepping" -\phi(x) through the integrator (Figure 1(c)). Defining
v = \dot{z},    (14)
the resulting system is (Figure 1(d))
\dot{x} = f(x) + g(x)\phi(x) + g(x)z    (15)
\dot{z} = v,    (16)
where u = v + \dot{\phi} and \dot{\phi}(x) is calculated as in (13). Notice that (15)-(16) is equivalent to (7)-(8).

To stabilize the system (15)-(16) consider
V = V(x, \xi) = V_1(x) + \frac{1}{2} z^2.    (17)
Then
\dot{V} = (\partial V_1/\partial x)[f(x) + g(x)\phi(x) + g(x)z] + z\dot{z}
        = (\partial V_1/\partial x) f(x) + (\partial V_1/\partial x) g(x)\phi(x) + (\partial V_1/\partial x) g(x) z + z v.
We can choose
v = -(\partial V_1/\partial x) g(x) - k z, \quad k > 0.    (18)
Thus
\dot{V} = (\partial V_1/\partial x) f(x) + (\partial V_1/\partial x) g(x)\phi(x) - k z^2
        = (\partial V_1/\partial x)[f(x) + g(x)\phi(x)] - k z^2
        \le -L(x) - k z^2.    (19)

Equation (19) implies that the origin x = 0, z = 0 is asymptotically stable. Since z = \xi - \phi(x) and \phi(0) = 0, the origin of the original system x = 0, \xi = 0 is also asymptotically stable. The stabilizing state feedback law is given by
u = v + \dot{\phi},    (20)
that is,
u = (\partial\phi/\partial x)[f(x) + g(x)\xi] - (\partial V_1/\partial x) g(x) - k[\xi - \phi(x)].    (21)
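The construction in (21) is straightforward to mechanize for scalar x. The helper below is an illustrative sketch only (the function and argument names are not from the notes): it evaluates u given the system functions f and g, the virtual law \phi with its derivative, and the gradient of V_1.

    # Sketch of the integrator-backstepping law (21) for scalar x:
    #   u = dphi/dx * (f(x) + g(x)*xi) - dV1/dx * g(x) - k*(xi - phi(x))
    def backstepping_u(x, xi, f, g, phi, dphi_dx, dV1_dx, k=1.0):
        return dphi_dx(x) * (f(x) + g(x) * xi) - dV1_dx(x) * g(x) - k * (xi - phi(x))

For Example 4 below, passing f(x) = a x^2 - x^3, g(x) = 1, phi(x) = -x - a x^2, dphi_dx(x) = -1 - 2 a x and dV1_dx(x) = x reproduces the control law derived there.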

Example 4

Consider the following system:
\dot{x}_1 = a x_1^2 - x_1^3 + x_2    (22)
\dot{x}_2 = u.    (23)
Clearly this system is of the form (7)-(8) with
x = x_1, \quad \xi = x_2, \quad f(x) = a x_1^2 - x_1^3, \quad g(x) = 1.

Step 1: Find \xi = \phi(x) to stabilize the origin x = 0. Defining V_1(x_1) = \frac{1}{2} x_1^2 we obtain
\dot{V}_1(x_1) = a x_1^3 - x_1^4 + x_1 x_2,
and choosing
x_2 = \phi(x_1) = -x_1 - a x_1^2
we obtain \dot{x}_1 = -x_1 - x_1^3 and \dot{V}_1 \le -(x_1^4 + x_1^2).

Step 2: To stabilize (22)-(23), we use the control law (21):
u = (\partial\phi/\partial x)[f(x) + g(x)\xi] - (\partial V_1/\partial x) g(x) - k[\xi - \phi(x)]
  = -(1 + 2 a x_1)[a x_1^2 - x_1^3 + x_2] - x_1 - k[x_2 + x_1 + a x_1^2].
With this control law the origin is globally asymptotically stable. The composite Lyapunov function is
V = V_1 + \frac{1}{2} z^2 = \frac{1}{2} x_1^2 + \frac{1}{2}[x_2 - \phi(x_1)]^2
  = \frac{1}{2} x_1^2 + \frac{1}{2}[x_2 + x_1 + a x_1^2]^2.
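A short simulation (illustrative only; a = 1, k = 1, the initial state and the use of scipy are assumptions) confirms that the closed loop (22)-(23) under this law converges to the origin:

    # Sketch: simulate Example 4, xdot1 = a*x1^2 - x1^3 + x2, xdot2 = u, with
    # u = -(1 + 2*a*x1)*(a*x1^2 - x1^3 + x2) - x1 - k*(x2 + x1 + a*x1^2).
    from scipy.integrate import solve_ivp

    a, k = 1.0, 1.0

    def closed_loop(t, s):
        x1, x2 = s
        u = -(1 + 2*a*x1)*(a*x1**2 - x1**3 + x2) - x1 - k*(x2 + x1 + a*x1**2)
        return [a*x1**2 - x1**3 + x2, u]

    sol = solve_ivp(closed_loop, (0.0, 10.0), [2.0, -1.0], max_step=0.01)
    print("final state:", sol.y[:, -1])   # both components approach 0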

Chain of Integrators

Consider now a system of the form
\dot{x} = f(x) + g(x)\xi_1
\dot{\xi}_1 = \xi_2
  ...
\dot{\xi}_{k-1} = \xi_k
\dot{\xi}_k = u.
For simplicity we focus on the third-order system
\dot{x} = f(x) + g(x)\xi_1    (24)
\dot{\xi}_1 = \xi_2    (25)
\dot{\xi}_2 = u.    (26)
To stabilize the origin we proceed as follows. We first consider the subsystem (24) and assume that \xi_1 = \phi(x) is a stabilizing control law for it, with Lyapunov function V_1. Consider now the first two subsystems:
\dot{x} = f(x) + g(x)\xi_1    (27)
\dot{\xi}_1 = \xi_2.    (28)
We can stabilize this second-order system using backstepping. Using the control law (21) and associated Lyapunov function V_2:
\xi_2 = \phi_1(x, \xi_1) = (\partial\phi/\partial x)[f(x) + g(x)\xi_1] - (\partial V_1/\partial x) g(x) - k[\xi_1 - \phi(x)], \quad k > 0
V_2 = V_1 + \frac{1}{2}[\xi_1 - \phi(x)]^2.
We now iterate, viewing the third-order system as a more general version of (7)-(8) with
x := [x, \xi_1]^T, \quad \xi := \xi_2, \quad f := [f(x) + g(x)\xi_1, \ 0]^T, \quad g := [0, \ 1]^T.
Applying the backstepping algorithm once again, we obtain
u = [\partial\phi_1/\partial x, \ \partial\phi_1/\partial\xi_1]\,[\dot{x}, \ \dot{\xi}_1]^T - [\partial V_2/\partial x, \ \partial V_2/\partial\xi_1]\,[0, \ 1]^T - k[\xi_2 - \phi_1(x, \xi_1)], \quad k > 0,
or
u = (\partial\phi_1/\partial x)(x, \xi_1)[f(x) + g(x)\xi_1] + (\partial\phi_1/\partial\xi_1)(x, \xi_1)\,\xi_2 - \partial V_2/\partial\xi_1 - k[\xi_2 - \phi_1(x, \xi_1)], \quad k > 0.
The composite Lyapunov function is
V = V_2 + \frac{1}{2}[\xi_2 - \phi_1(x, \xi_1)]^2
  = V_1 + \frac{1}{2}[\xi_1 - \phi(x)]^2 + \frac{1}{2}[\xi_2 - \phi_1(x, \xi_1)]^2.
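The recursion can also be carried out symbolically. The sketch below is illustrative only; it assumes sympy and, for concreteness, uses f(x) = a x^2, g(x) = 1, \phi(x) = -x - a x^2, V_1 = x^2/2 and k = 1, i.e. the choices that appear in Example 6 below. It builds \phi_1 and then u for the third-order chain (24)-(26).

    # Sketch: symbolic backstepping through a chain of two integrators,
    # following the recursion above (f, g, phi, V1 mirror Example 6).
    import sympy as sp

    x, xi1, xi2, a = sp.symbols('x xi1 xi2 a', real=True)
    k = 1

    f, g = a*x**2, sp.Integer(1)   # x-subsystem: xdot = f(x) + g(x)*xi1
    phi = -x - a*x**2              # stabilizes the x-subsystem
    V1 = x**2 / 2

    # First step, eq. (21): virtual law phi1 for xi2 and composite V2
    phi1 = sp.diff(phi, x)*(f + g*xi1) - sp.diff(V1, x)*g - k*(xi1 - phi)
    V2 = V1 + (xi1 - phi)**2 / 2

    # Second step: treat (x, xi1) as the state and xi2 as the virtual input
    u = (sp.diff(phi1, x)*(f + g*xi1) + sp.diff(phi1, xi1)*xi2
         - sp.diff(V2, xi1) - k*(xi2 - phi1))

    print(sp.expand(u))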

Example 5

Consider again the system of Example 4,
\dot{x}_1 = a x_1^2 - x_1^3 + x_2
\dot{x}_2 = u.
Substituting x_2 = x_{2d} + \xi_2 into the system equations we get
\dot{x}_1 = a x_1^2 - x_1^3 + x_{2d} + \xi_2
\dot{\xi}_2 = -\dot{x}_{2d} + u.
Designing x_{2d} as x_{2d} = -a x_1^2 - x_1, the first system equation can be written as
\dot{x}_1 = -x_1 - x_1^3 + \xi_2.
Since x_{2d} = -a x_1^2 - x_1, we have
\dot{x}_{2d} = \frac{d}{dt}(-a x_1^2 - x_1) = -(2 a x_1 + 1)\dot{x}_1 = -(2 a x_1 + 1)(a x_1^2 - x_1^3 + x_2).
Using \phi(x_1, x_2) to represent -\dot{x}_{2d}, the second system equation can be written as
\dot{\xi}_2 = \phi(x_1, x_2) + u.
Design u as
u = -\phi(x_1, x_2) - \xi_2 + u_{aux},
so that the second equation of the system becomes
\dot{\xi}_2 = -\xi_2 + u_{aux}.
Now consider the resulting system
\dot{x}_1 = -x_1 - x_1^3 + \xi_2
\dot{\xi}_2 = -\xi_2 + u_{aux}.
Using the Lyapunov function candidate V = \frac{1}{2} x_1^2 + \frac{1}{2}\xi_2^2, we have
\dot{V} = x_1(-x_1 - x_1^3 + \xi_2) + \xi_2(-\xi_2 + u_{aux})
       \le -x_1^2 + x_1\xi_2 - \xi_2^2 + \xi_2 u_{aux}.
If we let u_{aux} = -x_1, we have
\dot{V} \le -x_1^2 - \xi_2^2.
Therefore the stabilizing control law can be taken as
u = -\phi(x_1, x_2) - \xi_2 + u_{aux}
  = -(2 a x_1 + 1)(a x_1^2 - x_1^3 + x_2) - (x_2 + x_1 + a x_1^2) - x_1.
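Although obtained by a slightly different route, this law is the same as the Example 4 law with k = 1. A quick symbolic check (an illustrative sketch assuming sympy, not part of the original notes):

    # Sketch: verify that the Example 5 law equals the Example 4 law with k = 1.
    import sympy as sp

    x1, x2, a = sp.symbols('x1 x2 a', real=True)

    u_ex4 = -(1 + 2*a*x1)*(a*x1**2 - x1**3 + x2) - x1 - (x2 + x1 + a*x1**2)
    u_ex5 = -(2*a*x1 + 1)*(a*x1**2 - x1**3 + x2) - (x2 + x1 + a*x1**2) - x1

    print(sp.simplify(u_ex4 - u_ex5))   # prints 0
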
Example 6

Consider the following system:
\dot{x}_1 = a x_1^2 + x_2
\dot{x}_2 = x_3
\dot{x}_3 = u.

Step 1: Consider the first equation, \dot{x}_1 = a x_1^2 + \phi(x_1). Using V_1 = \frac{1}{2} x_1^2, it is immediate that \phi(x_1) = -x_1 - a x_1^2 stabilizes the origin.

Step 2: Consider the first two subsystems. We propose the stabilizing law (with k > 0) and associated Lyapunov function:
\phi_1(x_1, x_2) \ (= x_3) = (\partial\phi/\partial x_1)[f(x_1) + g(x_1) x_2] - (\partial V_1/\partial x_1) g(x_1) - k[x_2 - \phi(x_1)]
V_2 = V_1 + \frac{1}{2} z^2 = V_1 + \frac{1}{2}[x_2 - \phi(x_1)]^2 = V_1 + \frac{1}{2}[x_2 + x_1 + a x_1^2]^2.
In our case, setting k = 1 and using
\partial\phi/\partial x_1 = -(1 + 2 a x_1), \quad \partial V_1/\partial x_1 = x_1,
we obtain
\phi_1(x_1, x_2) = -(1 + 2 a x_1)[a x_1^2 + x_2] - x_1 - [x_2 + x_1 + a x_1^2].

Step 3: Consider the third-order system with
x := [x_1, x_2]^T, \quad \xi := x_3, \quad f := [f(x_1) + g(x_1) x_2, \ 0]^T, \quad g := [0, \ 1]^T.
From the results in the previous section we have that
u = (\partial\phi_1/\partial x_1)(x_1, x_2)[f(x_1) + g(x_1) x_2] + (\partial\phi_1/\partial x_2)(x_1, x_2)\, x_3 - \partial V_2/\partial x_2 - k[x_3 - \phi_1(x_1, x_2)], \quad k > 0,
is a stabilizing control law with associated Lyapunov function
V = V_2 + \frac{1}{2}[x_3 - \phi_1(x_1, x_2)]^2.


Strict Feedback Systems

Consider now strict feedback systems of the form
\dot{x} = f(x) + g(x)\xi_1
\dot{\xi}_1 = f_1(x, \xi_1) + g_1(x, \xi_1)\xi_2
\dot{\xi}_2 = f_2(x, \xi_1, \xi_2) + g_2(x, \xi_1, \xi_2)\xi_3
  ...
\dot{\xi}_{k-1} = f_{k-1}(x, \xi_1, ..., \xi_{k-1}) + g_{k-1}(x, \xi_1, ..., \xi_{k-1})\xi_k
\dot{\xi}_k = f_k(x, \xi_1, ..., \xi_k) + g_k(x, \xi_1, ..., \xi_k)\, u,
also called triangular systems. Consider first the special case
\dot{x} = f(x) + g(x)\xi    (29)
\dot{\xi} = f_a(x, \xi) + g_a(x, \xi)\, u.    (30)
If g_a(x, \xi) \neq 0 over the domain of interest, then we can define
u = \phi(x, \xi) = \frac{1}{g_a(x, \xi)}[u_1 - f_a(x, \xi)].    (31)
Substituting (31) into (30) we obtain the modified system
\dot{x} = f(x) + g(x)\xi    (32)
\dot{\xi} = u_1,    (33)
which is of the form (7)-(8). The stabilizing control law and associated Lyapunov function are thus:
u = \phi_1(x, \xi) = \frac{1}{g_a(x, \xi)}\{(\partial\phi/\partial x)[f(x) + g(x)\xi] - (\partial V_1/\partial x) g(x) - k_1[\xi - \phi(x)] - f_a(x, \xi)\}, \quad k_1 > 0    (34)
V_2 = V_2(x, \xi) = V_1(x) + \frac{1}{2}[\xi - \phi(x)]^2.    (35)
Consider now the system
\dot{x} = f(x) + g(x)\xi_1
\dot{\xi}_1 = f_1(x, \xi_1) + g_1(x, \xi_1)\xi_2
\dot{\xi}_2 = f_2(x, \xi_1, \xi_2) + g_2(x, \xi_1, \xi_2)\xi_3,
which can be seen as a special case of (29)-(30) with
x := [x, \xi_1]^T, \quad \xi := \xi_2, \quad u := \xi_3, \quad f := [f + g\xi_1, \ f_1]^T, \quad g := [0, \ g_1]^T, \quad f_a := f_2, \quad g_a := g_2.
The stabilizing control law and associated Lyapunov function for this system are as follows:
\phi_2(x, \xi_1, \xi_2) = \frac{1}{g_2}\{(\partial\phi_1/\partial x)(f + g\xi_1) + (\partial\phi_1/\partial\xi_1)(f_1(x, \xi_1) + g_1(x, \xi_1)\xi_2) - (\partial V_2/\partial\xi_1) g_1 - k_2[\xi_2 - \phi_1] - f_2\}, \quad k_2 > 0    (36)
V_3(x, \xi_1, \xi_2) = V_2(x, \xi_1) + \frac{1}{2}[\xi_2 - \phi_1(x, \xi_1)]^2.    (37)
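Equations (31) and (34) simply wrap an integrator-backstepping design u_1 with the input transformation that divides out g_a after subtracting f_a. A minimal illustrative sketch (the helper name and arguments are assumptions, not from the notes):

    # Sketch of the strict-feedback step (31)/(34): given a design u1 for the
    # modified system (32)-(33), recover the actual input u.
    def strict_feedback_u(x, xi, fa, ga, u1):
        ga_val = ga(x, xi)
        assert ga_val != 0, "ga must be nonzero over the domain of interest"
        return (u1(x, xi) - fa(x, xi)) / ga_val

Here u1(x, xi) would itself be computed by the integrator-backstepping law (21), e.g. via the backstepping_u sketch given earlier.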


Example 7

Consider the following system:
\dot{x}_1 = a x_1^2 - x_1 + x_1^2 x_2
\dot{x}_2 = x_1 + x_2 + (1 + x_2^2)\, u.
We begin by stabilizing the x_1-subsystem. Using V_1 = \frac{1}{2} x_1^2 we have that
\dot{V}_1 = x_1[a x_1^2 - x_1 + x_1^2 x_2] = a x_1^3 - x_1^2 + x_1^3 x_2.
Thus, x_2 = \phi(x_1) = -(x_1 + a) results in
\dot{V}_1 = -(x_1^2 + x_1^4),
which shows that the x_1-subsystem is asymptotically stable. It then follows from (34)-(35) that a stabilizing control law for the second-order system and the corresponding Lyapunov function are given by
u = \phi_1(x_1, x_2) = \frac{1}{1 + x_2^2}\{-[a x_1^2 - x_1 + x_1^2 x_2] - x_1^3 - k_1[x_2 + x_1 + a] - (x_1 + x_2)\}, \quad k_1 > 0
V_2 = \frac{1}{2} x_1^2 + \frac{1}{2}[x_1 + x_2 + a]^2.
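As before, the design can be checked in simulation. The sketch below is illustrative only (a = 1, k_1 = 1, the initial state and the use of scipy are assumptions); note that since \phi(0) = -a, this law drives x_1 to 0 and x_2 to \phi(0) = -a.

    # Sketch: simulate Example 7 under the strict-feedback law above.
    from scipy.integrate import solve_ivp

    a, k1 = 1.0, 1.0

    def closed_loop(t, s):
        x1, x2 = s
        u = (-(a*x1**2 - x1 + x1**2*x2) - x1**3
             - k1*(x2 + x1 + a) - (x1 + x2)) / (1 + x2**2)
        return [a*x1**2 - x1 + x1**2*x2, x1 + x2 + (1 + x2**2)*u]

    sol = solve_ivp(closed_loop, (0.0, 15.0), [1.0, 0.5], max_step=0.01)
    print("final state:", sol.y[:, -1])   # expect approximately [0, -a]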

