Tracking Control of Robot Manipulators Using Second Order Neuro Sliding Mode
39:285-294(2009)
R. GARCIA-RODRIGUEZ and V. PARRA-VEGA
Department of Electrical Engineering, Universidad de Chile, Av. Tupper 2007, Santiago, Chile, [email protected]
Robotics and Advanced Manufacturing Division, CINVESTAV, México, [email protected]
Abstract— Few works on neural network-based robot controllers address the issue of how many neurons, hidden layers and inputs are necessary to approximate a given function up to a bounded approximation error. Thus, most proposals are conservative in the sense that they depend on a high dimensional hidden layer to guarantee a given bounded tracking error, at a computationally expensive cost; besides, an independent input is required to stabilize the system. In this paper, a low dimensional neural network with online adaptation of the weights is proposed, together with a stabilizing input that depends on the same variable that tunes the neural network. The size of the neural network is defined by the degrees of freedom of the robot, without any hidden layer. The neuro-control strategy is driven by a second order sliding surface which produces a chattering-free control output to guarantee tracking error convergence. To speed up the response even more, a time base generator shapes a feedback gain to induce finite time convergence of tracking errors for any initial condition. Experimental results validate the proposed neuro-control scheme.

Keywords— Robot control, Neural networks, Second order sliding mode, Chattering-free, Neuro-sliding controller.

I. INTRODUCTION

In robotics, one of the main objectives is to design simple controllers that compensate nonlinear couplings, parameter variations and disturbances in order to execute complex tasks with high precision in the tracking regime. Although the computed torque controller had been presented as early as 1980,1 it was not until Slotine and Li (1987) showed that a particular structure of the manipulator dynamics exists that it became possible to develop a simple
1 Exact knowledge of the robot parameters and great computational power are required, and it cannot compensate parameter variations.
controller that avoids measurement or estimation of the manipulator joint accelerations. Through this adaptive control scheme it is possible to compensate parameter variations and to guarantee local stability of the system, as well as asymptotic convergence of the tracking errors, without any knowledge of the parameters, though the exact regressor is required. Based on this result, many adaptive control schemes have been developed and applied to a wide class of systems. Although adaptive control solves the problem of parameter variation in robots, its principal drawback is that it is model-based: the computational effort increases with the degrees of freedom of the robot, or when the robotic system is more complex, e.g., in cooperative robots or mechanical hands. Around the same time, in the 80s, the simple PD controller was shown to compensate nonlinearities and uncertainties of the robot dynamics (Arimoto, 1996). In addition, it is recognized that one generic characteristic of robot dynamics is its open-loop passivity from torque input to velocity output, which can be exploited through the physical structure of robot systems to design energetically stable controllers. In this case, for stability purposes, a storage energy function arises which gives rise to stable behavior; the challenge is then to produce passivity in closed loop with a given error velocity as the output. On another front, as a result of the work of many researchers, started by McCulloch and Pitts, neural networks attracted attention for their ability to mimic basic patterns of the human brain, such as learning and responding in consequence, as if the learning capability were employed to produce a control action.
In terms of control design, the main interest in neural networks is their capability to approximate a large class of continuous nonlinear maps through the collective action of simple, autonomous, interconnected processing units, as well as their inherently parallel and highly redundant processing architecture, which makes it possible to develop parallel adaptation update laws and to reduce latency. These neural network
properties have been used in a large number of applications, such as adaptive system identification (Narendra and Parthasarathy, 1990) and control of complex, highly uncertain dynamical systems (Lewis et al., 1996; Kosmatopoulos and Christodoulou, 1994; Ge and Hang, 1998; Lee and Choi, 2004). Adaptive neural network-based controllers have attracted attention because they make it possible to design controllers for a wide class of systems without any knowledge of the dynamic model, the regressor or the parameters. Basically, neural network-based controllers approximate the inverse dynamics of the system in a given error coordinate system. However, well known results show that a large number of nodes is required in each layer of the neural network2 to achieve exact approximation of the unknown functional (Cotter, 1990). The number of nodes can be prohibitively large even for simple practical systems3. In order to design neuro-control schemes with smooth control and low computational effort that compensate the unknown physical parameters of the robot, combinations of different intelligent and control techniques, such as variable structure systems, passivity, model-based control and PID-like controllers, have been proposed (Ertugrul and Kaynak, 2000; Sanchez et al., 2003; Yu, 2003; Choi et al., 2001; Lin et al., 2000; Lin et al., 2001; Barambones and Etxebarria, 2002; Debbache et al., 2006; Hayakawa et al., 2005). In neuro-adaptive or neuro-sliding mode control (Ge and Harris, 1994; Ertugrul and Kaynak, 2000; Lewis et al., 1996; Ge and Hang, 1998), the neural network generally approximates the inverse dynamics of the manipulator based on the gradient descent method or on adaptive control; the main disadvantage of these schemes is that they use a great number of neurons in each layer of the neural network.
Sometimes an additional, independent control term is necessary to guarantee stability and robustness in the presence of approximation error (Yamakita and Satoh, 1999; Ge and Harris, 1994; Yu, 2003; Sanchez et al., 2003; Lin et al., 2000; Lewis et al., 1996; Sun and Sun, 1999). However, its high frequency input represents the principal disadvantage in practical applications; to eliminate the chattering, a saturation function is often included (Barambones and Etxebarria, 2002; Lin et al., 2001; Ertugrul and Kaynak, 2000; Chih-Min and Chun-Fei, 2002). Unfortunately, in the latter case the invariance condition is not satisfied and tracking is not guaranteed.
2 The network topology refers to the number and organization of the computing units, the types of connections between neurons, and the direction of information flow in the network. The node is the basic organizational unit of a neural network, and nodes are arranged in a series of layers to create the Artificial Neural Network (ANN). According to their location and function within the network, nodes are classified as input, output, or hidden layer nodes (Stern, 1991).
3 In feed-forward neural networks with a multilayer structure, an oversized hidden layer does not increase the accuracy of the approximation but increases the risk of overfitting, i.e., it may lead to a bad approximation.
In other approaches, based on passivity-based adaptive control, the convergence of tracking errors is guaranteed under the assumption of a combination of a neural network with regressor knowledge, which reduces the main advantage of the neural network (Jung and Hsia, 1996). Therefore, apparently there is no room for the main role of neural networks as universal approximators of any continuous function if a high frequency input is included or if some part of the regressor is used. Some researchers devised ways to introduce a neural network to substitute the regressor in a classical adaptive controller (Yu, 2003; Sanchez et al., 2003; Lewis et al., 1996; Ge and Harris, 1994); however, they only guarantee stability, without convergence of the tracking errors. This paper presents a combination of second order sliding mode control with a low dimensional neural network based on adaptive linear elements (Adalines) to guarantee tracking error convergence with a smooth controller, without requiring the robot regressor. The closed-loop system renders a sliding mode for all time, whose solution converges in finite time, and hence perfect tracking is obtained. In addition, an alternative solution is presented to the issue of how many neurons are necessary to approximate any continuous function, as well as to the choice of the input set. Experimental results on a robot manipulator verify the closed-loop stability properties. The paper is organized as follows. Section II shows the robot dynamics and its properties, while Section III presents the open-loop error dynamics. Section IV presents the main properties and characteristics of the neural network used in this paper. The proposed control scheme and its stability analysis are given in Section V. Section VI presents the experimental results on a planar robot manipulator, and conclusions are given in Section VII. II.
ROBOT DYNAMICS

The dynamic model of a rigid serial n-link robot manipulator with all revolute joints is described as follows:

H(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau   (1)

where q, \dot{q}, \ddot{q} \in R^n are the generalized joint coordinates, H(q) \in R^{n \times n} denotes the symmetric positive definite inertia matrix, C(q,\dot{q})\dot{q} \in R^n represents the Coriolis and centripetal forces, g(q) \in R^n models the gravity forces, and \tau \in R^n stands for the torque input. Some useful properties of the robot dynamics are:

Property 1 (Arimoto, 1996): The matrix H(q) is symmetric and positive definite.

Property 2 (Arimoto, 1996): The matrix \dot{H}(q) - 2C(q,\dot{q}) is skew-symmetric and hence satisfies

\dot{q}^T [\tfrac{1}{2}\dot{H}(q) - C(q,\dot{q})]\dot{q} = 0, \quad \forall q, \dot{q} \in R^n   (2)
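As an illustration of (1), the following sketch evaluates the standard two-link planar dynamics, using the link parameters reported later in Section VI. The closed-form expressions for H, C and g below are the usual textbook ones for a two-link arm, not printed in the paper, so they are an assumption for illustration only.

```python
import numpy as np

# Illustrative two-link planar arm matching eq. (1); parameter values are
# those reported for the experimental arm in Section VI. The closed forms
# of H, C, g are the standard textbook expressions (an assumption here).
m1, m2 = 7.19, 1.89           # link masses [kg]
l1 = 0.5                      # link-1 length [m]
lc1, lc2 = 0.19, 0.12         # centers of mass [m]
I1, I2 = 0.02, 0.016          # link inertias [kg m^2]
G = 9.81                      # gravity [m/s^2]; set G = 0 for a horizontal plane

def H(q):
    """Inertia matrix H(q), symmetric positive definite (Property 1)."""
    c2 = np.cos(q[1])
    h11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2
    h12 = m2*(lc2**2 + l1*lc2*c2) + I2
    h22 = m2*lc2**2 + I2
    return np.array([[h11, h12], [h12, h22]])

def C(q, dq):
    """Coriolis/centripetal matrix C(q, dq)."""
    h = m2*l1*lc2*np.sin(q[1])
    return np.array([[-h*dq[1], -h*(dq[0] + dq[1])],
                     [ h*dq[0],  0.0]])

def g(q):
    """Gravity vector g(q)."""
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([(m1*lc1 + m2*l1)*G*c1 + m2*lc2*G*c12,
                     m2*lc2*G*c12])

def tau(q, dq, ddq):
    """Inverse dynamics: eq. (1) solved for the torque input."""
    return H(q) @ ddq + C(q, dq) @ dq + g(q)
```

Property 2 can be checked numerically on this model: the quadratic form of \dot{H} - 2C along any velocity vanishes.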
Property 3: There exist positive scalars \beta_i (i = 0, \ldots, 5) such that

\lambda_m(H(q)) \geq \beta_0 > 0, \quad \lambda_M(H(q)) \leq \beta_1, \quad \|C(q,\dot{q})\| \leq \beta_2\|\dot{q}\|, \quad \|g(q)\| \leq \beta_3, \quad \|\dot{q}_r\| \leq \beta_4 + \|\dot{q}\|, \quad \|\ddot{q}_r\| \leq \beta_5 + \|\ddot{q}\|   (3)

where \lambda_m(A), \lambda_M(A) stand for the minimum and maximum eigenvalues of a matrix A \in R^{n \times n}, respectively. The norm of a vector x is defined as \|x\| = \sqrt{x^T x}, and \|A\| = \sqrt{\lambda_M(A^T A)} is the induced norm.

Property 4: The left-hand side of (1) is linear in a suitably selected set of robot and load parameters \Theta \in R^p, i.e.,

Y\Theta = H(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q)   (4)

where Y = Y(q, \dot{q}, \ddot{q}) \in R^{n \times p} is the regressor and \Theta contains the unknown robot manipulator and load parameters.

Property 5: The robot dynamics is passive in open loop, from torque input to velocity output, with the Hamiltonian as its storage function. If viscous friction is considered, energy dissipates and the system is strictly passive.

Due to the linear parametrization property, (1) can be written in terms of a nominal reference \dot{q}_r and its derivative \ddot{q}_r as (Lewis and Abdallah, 1994)

H(q)\ddot{q}_r + C(q,\dot{q})\dot{q}_r + g(q) = Y_r\Theta   (5)

where the regressor is Y_r = Y_r(q, \dot{q}, \dot{q}_r, \ddot{q}_r) \in R^{n \times p}. Using Property 3 in (5) we have that

\|Y_r\Theta\| \leq \beta_1\|\ddot{q}_r\| + \beta_2\|\dot{q}\|(\|\dot{q}\| + \beta_4) + \bar{\beta}_3 \equiv \psi(t)   (6)

where \bar{\beta}_3 = \beta_1\beta_5 + \beta_3 and \psi(t) = f(q, \dot{q}, \beta_i, t) is a state-dependent function. If we add and subtract (5) into (1), we obtain the open-loop error equation

H(q)\dot{S}_r + C(q,\dot{q})S_r = \tau - Y_r\Theta   (7)

where the extended error S_r carries out a change of coordinates through (\dot{q}_r, \ddot{q}_r), defined by

S_r = \dot{q} - \dot{q}_r   (8)

The question is how to design a smooth \tau without knowledge of Y_r\Theta. To that end, it is useful to design a second order nominal reference \ddot{q}_r.

III. ERROR MANIFOLDS AND ERROR DYNAMICS

Let the nominal reference \dot{q}_r be

\dot{q}_r = \dot{q}_d - \alpha\Delta q + S_d - K_i \int_{t_0}^{t} sign(S_q(\zeta))\,d\zeta   (9)

with \Delta q = q - q_d the position tracking error (subscript d denotes the desired reference value), \alpha > 0, K_i = K_i^T > 0, and sign(x) the entrywise signum function of x. Substituting (9) into (8), we obtain the extended error S_r, which depends in turn on a second order sliding surface S_q defined through

S_q = S - S_d   (10)
S = \Delta\dot{q} + \alpha\Delta q   (11)
S_d = S(t_0)\,e^{-k(t - t_0)}, \quad k > 0   (12)

so that

S_r = S_q + K_i \int_{t_0}^{t} sign(S_q(\zeta))\,d\zeta   (13)

Remark 1 (Sliding surface with an integral term): Based on the seminal work of Slotine and Spong (1985), several approaches have reported sliding surfaces with an integral term (Jager, 1996; Stepanenko et al., 1998) to provide some robustness in the controller. In this paper, however, the integral serves an entirely different purpose: the extended error uses the integral of the sliding surface to induce a second order sliding mode at the sliding surface, without any integral term inside S_q itself. That is, it is shown that the integral of sign(S_q) satisfies the sliding condition for S_q. In this way, sign(·) is used without introducing chattering, avoiding the use of a boundary layer. Furthermore, notice that S_d \in C^1 is given as the desired reference of S in the phase plane (\Delta q, \Delta\dot{q}) and is designed to eliminate the reaching phase: with initial condition S_d(t_0) = S(t_0), it converges monotonically to zero at time t = t_d > 0.

Since (5) involves the derivative of (9), we have that

\ddot{q}_r = \ddot{q}_d - \alpha\Delta\dot{q} + \dot{S}_d - K_i\,sign(S_q)   (14)

which is discontinuous. However, since neural networks cannot approximate discontinuous signals, we need to avoid introducing discontinuous signals into the function Y_r\Theta. To solve this, \ddot{q}_r is decomposed into continuous and discontinuous terms, as follows:

\ddot{q}_r = \ddot{q}_{cont} + K_i Z   (15)

where

\ddot{q}_{cont} = \ddot{q}_d - \alpha\Delta\dot{q} + \dot{S}_d - K_i\tanh(\lambda S_q), \quad Z = \tanh(\lambda S_q) - sign(S_q)   (16)

for the vector \tanh(\lambda x) = [\tanh(\lambda x_1), \ldots, \tanh(\lambda x_n)]^T, the entrywise continuous hyperbolic tangent, with \lambda = \lambda^T \in R^{n \times n} > 0. The function (16) is bounded and has the following properties: |Z_i| \leq 1, Z|_{S_q \to 0^-} = +1, Z|_{S_q \to 0^+} = -1, Z|_{S_q = 0} = 0, and Z \to 0 as |S_q| \to \infty. Substituting (15) and (9) into (5), the parametrization now reads

H(q)\ddot{q}_{cont} + C(q,\dot{q})\dot{q}_r + g(q) = Y_{cont}\Theta   (17)

where the regressor Y_{cont} = Y_r(q, \dot{q}, \dot{q}_r, \ddot{q}_{cont}) is continuous, since (\dot{q}_r, \ddot{q}_{cont}) are continuous, and \tau_d = H(q)K_i Z models bounded discontinuous high frequency signals, considered as a bounded disturbance in the controller design. This representation of the robot dynamics in terms of the nominal reference and its derivative will be of great importance in the next section. Adding and subtracting (17) into (1) yields the following open-loop error dynamics:

H(q)\dot{S}_r = -C(q,\dot{q})S_r + \tau - Y_{cont}\Theta - \tau_d   (18)
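The error manifolds (8)-(16) can be sketched numerically as follows. The gains alpha, k, Ki and the slope lam are illustrative, and S_d is taken as the exponential decay matched to S(t_0), one common choice that eliminates the reaching phase:

```python
import numpy as np

# Sketch of the error manifolds (8)-(16) for a 2-DOF arm. Gains are
# illustrative; Sd is an exponential decay with Sd(t0) = S(t0), so that
# Sq(t0) = 0 and there is no reaching phase.
alpha, k, lam = 20.0, 10.0, 5.0
Ki = 0.01 * np.eye(2)

def manifolds(q, dq, qd, dqd, S0, t, int_sign):
    """int_sign is the running integral of sign(Sq), kept by the caller."""
    Dq, Ddq = q - qd, dq - dqd                     # tracking errors
    S = Ddq + alpha * Dq                           # sliding surface (11)
    Sd = S0 * np.exp(-k * t)                       # decaying reference (12)
    Sq = S - Sd                                    # second order surface (10)
    Sr = Sq + Ki @ int_sign                        # extended error (13)
    dqr = dqd - alpha * Dq + Sd - Ki @ int_sign    # nominal reference (9)
    Z = np.tanh(lam * Sq) - np.sign(Sq)            # continuous/discontinuous split (16)
    return S, Sd, Sq, Sr, dqr, Z
```

By construction S_r = \dot{q} - \dot{q}_r holds identically, and each component of Z stays in [-1, 1].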
If the regressor were known, adaptive control would suffice.

Remark 2 (Adaptive-like control with Y_cont): When the regressor is known, it is very well known that it suffices to design an adaptive-like controller

\tau = -K_d S_r + Y_{cont}\hat{\Theta}   (19)
\dot{\hat{\Theta}} = -\Gamma Y_{cont}^T S_r   (20)

where K_d and \Gamma are positive definite gains of appropriate dimensions, which produces an asymptotically stable closed-loop system. However, if the regressor Y_cont is unknown then (19)-(20) cannot be implemented: we know neither the physical structure of Y_cont nor the parameters of the robot manipulator. A great variety of approaches exist in the literature to approximate Y_{cont}\Theta with neural networks; however, their size and topology are fundamental to ensure a given approximation error, and careful analysis is required to pick the right neural network. In this paper a low dimensional neural network is proposed to yield a bounded approximation error, together with a smooth second order sliding mode term that finally ensures convergence, in contrast to algorithms that guarantee only ultimately bounded tracking, i.e., stable behavior but not asymptotic stability.

IV. NEURAL NETWORK APPROXIMATOR

To approximate the continuous regressor Y_cont, a tree network structure satisfying the Stone-Weierstrass theorem (Cotter, 1990) is used, i.e., many neurons in one layer feed a single neuron in the next layer. The input-output relationship for this generic architecture is given as y_i = \phi(\sum_{i=1}^{n} x_i w_i), where x_i is the input to the network, w_i is the connection weight and n is the number of inputs. It is important to notice that the tree structure could have one or more hidden layers, where a linear activation function is used as the last stage of a multilayer neural network. Based on this network structure, in this work we use an ADAptive LINear Element (Adaline), proposed by Widrow and Hoff (1960), which consists of a single neuron of the McCulloch-Pitts type. The Adaline expressed in matrix format is given as

y = \phi(X^T W)

where the input vector X corresponds to the set of input stimuli of the neuron, the weight vector W corresponds to the set of synaptic strengths of the neuron4 and the activation function \phi(·) represents the behavior of the neuron core5. When a neuron is excited it produces the output y, which depends on its input and on the state of the weight vector; the weight vector W may be constantly modified during training.

Definition: Let K be a closed bounded subset of R^n and let f be a real vector valued function defined on K as f: K \to R^n. Based on the Stone-Weierstrass theorem, Cotter (1990) shows that any smooth function f(x) \in C^m(S), where S is a compact, simply connected set of R^n, can be approximated by a sufficiently large dimensional neural network, given as

f(x) = \phi(X^T W_1)   (21)

where X belongs to a compact set K \subset R^{2n}, and the ideal weights required for (21) are bounded by known values, \|W\| \leq W_{max} (Lewis et al., 1996). When the approximation is done with a low dimensional neural network, a bounded functional reconstruction error \epsilon(x) appears:

f(x) = \phi(X^T W_2) + \epsilon(x)   (22)

where W_2 is a subset of W_1 and \|\epsilon(x)\| \leq \epsilon_N with \epsilon_N > 0. In this paper f(x) = Y_{cont}\Theta is estimated using a low dimensional neural network, where \phi is proposed as a linear function and training is performed online. The unknown linear function f(x) is parametrized by a static Adaline neural network as

f(x) = Y_{cont}\Theta = X^T W_2 + \epsilon(x)   (23)

where the input to the neural network X^T \in R^{n \times p} is independent of the dynamic parameters, and the linear parameters are estimated by the neural network weights W_2 \in R^p. See Fig. 1.
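A minimal Adaline in the spirit of (21)-(23): a single linear layer trained with the Widrow-Hoff LMS rule recovers a linear-in-the-parameters target exactly, so the reconstruction error \epsilon(x) of (22) is zero here. The target weights and learning rate are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal Adaline: linear activation y = x . W, trained online with the
# Widrow-Hoff LMS rule. The linear target is illustrative; for a linear
# target the reconstruction error epsilon(x) in (22) vanishes.
rng = np.random.default_rng(0)
W_true = np.array([1.5, -0.7, 0.3, 2.0])   # "unknown" ideal weights

W = np.zeros(4)
eta = 0.05                                  # LMS learning rate
for _ in range(5000):
    x = rng.normal(size=4)                  # input stimulus
    e = x @ W_true - x @ W                  # output error
    W += eta * e * x                        # Widrow-Hoff update
```

Note that in the scheme of this paper the Adaline weights are instead tuned online by the extended error S_r through the adaptive law (28), rather than by a supervised output error as in this sketch.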
4 The number of nodes in the input layer equals the number of independent variables entered into the network; the number of output nodes corresponds to the number of variables to be predicted. 5 The Adaline neuron uses a linear activation function, so the output of the neuron is simply the weighted sum of its inputs.
[Figure 1 (block diagram): the Adaline inputs x1 = q, x2 = \dot{q}, x3 = \dot{q}_r, x4 = \ddot{q}_{cont} enter with weights w1, ..., w4; the weight adjustment (adaptive law) is driven by S_r; the control \tau = -K_d S_r + X^T \hat{W} is applied to the robot, whose outputs (q, \dot{q}) together with the references (q_d, \dot{q}_d) close the loop.]

Figure 1: Proposed neural network structure

Remark 3: It is important to notice that the size of the neural network, 2n, can be obtained roughly by carefully checking the dynamics of a general n-link rigid arm6. Since the regressor Y_cont is formed by independent dynamic parameters, the input to the neural network is defined as

X = [q, \dot{q}, \dot{q}_r, \ddot{q}_{cont}]^T   (24)

Remark 4: The neural network provides an approximation of Y_{cont}\Theta without worrying about its accuracy, and only the linear part of the robot dynamics is approximated; i.e., the neural network can be considered a minimal architecture to approximate the robot dynamics taking into account the regressor elements. This architecture becomes more relevant when the neural network is driven by a second order sliding mode, as will be shown in the next section.

Remark 5: An extension of the Stone-Weierstrass theorem to bounded measurable functions, applying Lusin's theorem, shows that \hat{f}(x) converges to f(x) almost everywhere. The practical consequence of this result is that an infinitely large neural network can model any continuous function, while a finite network might only accurately model such functions over a subset of the domain. By extending the size and the layers of the proposed neural network, a generic architecture is obtained as reported in the literature (Lewis et al., 1996; Ge and Hang, 1998). Now we are ready to design the neuro-adaptive controller.

V. NEURO-CONTROLLER DESIGN

Substituting (23) into (18), we have

H(q)\dot{S}_r = -C(q,\dot{q})S_r + \tau - X^T W - \epsilon(x) - \tau_d   (25)

Now consider the following control law:

\tau = -K_d S_r + X^T \hat{W}   (26)

Substituting (26) into (25) yields

H(q)\dot{S}_r = -C(q,\dot{q})S_r - K_d S_r - X^T\tilde{W} - \epsilon(x) - \tau_d   (27)

where \tilde{W} = W - \hat{W}. Finally, we have the following result.

Theorem 1 (Exponential Stability): Consider the robot dynamics (1) in closed loop with the control law (26) and the neuro-adaptive law

\dot{\hat{W}} = -\Gamma X S_r   (28)
where \Gamma = \Gamma^T \in R^{p \times p} > 0. Then exponential convergence of the tracking errors is assured for any initial condition if K_d is large enough, with the weight vector bounded for t \geq t_q \propto |S_q(t_0)|.

Proof. It is organized in three parts, as follows.

Part 1 (Boundedness of closed-loop trajectories): Consider the following Lyapunov function:

V = \frac{1}{2} S_r^T H S_r + \frac{1}{2}\tilde{W}^T \Gamma^{-1}\tilde{W}   (29)
whose total derivative along the solutions of (27) is

\dot{V} = S_r^T H \dot{S}_r + \frac{1}{2} S_r^T \dot{H} S_r + \tilde{W}^T \Gamma^{-1}\dot{\tilde{W}}
       = -S_r^T K_d S_r - S_r^T X^T\tilde{W} - S_r^T \epsilon(x) - S_r^T \tau_d + \tilde{W}^T X S_r
       = -S_r^T K_d S_r - S_r^T \epsilon(x) - S_r^T \tau_d   (30)

where Property 2 cancels the C(q,\dot{q}) term and the adaptive law (28) cancels the \tilde{W} terms. Note that the disturbance term vanishes only at S_r = 0 and, since |Z_i| \leq 1, satisfies S_r^T \tau_d \leq \bar{\beta}\|S_r\| with \bar{\beta} = \|H(q)K_i\|. Then, Eq. (30) becomes

\dot{V} \leq -S_r^T K_d S_r + \epsilon_N\|S_r\| + \bar{\beta}\|S_r\|   (31)
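The cancellation that produces (30) can be verified numerically. The sketch below uses random placeholder matrices of consistent dimensions, enforces Property 2 by taking \dot{H} = C + C^T, and takes \dot{\tilde{W}} = \Gamma X S_r, consistent with (28) and \tilde{W} = W - \hat{W}:

```python
import numpy as np

# Numeric check of the cancellation yielding (30): with Hdot = C + C^T
# (so Hdot - 2C is skew, Property 2) and the adaptive law (28), the
# W-tilde cross terms in Vdot vanish. All matrices are random placeholders.
rng = np.random.default_rng(0)
n, p = 2, 8
Sr = rng.normal(size=n)
X = rng.normal(size=(p, n))
W_tilde = rng.normal(size=p)
eps_x = rng.normal(size=n)           # reconstruction error epsilon(x)
tau_d = rng.normal(size=n)           # bounded disturbance
Kd = 3.0 * np.eye(n)
Gamma = 0.5 * np.eye(p)
A = rng.normal(size=(n, n))
H = A @ A.T + n * np.eye(n)          # H > 0
C = rng.normal(size=(n, n))
H_dot = C + C.T                      # consistent with Property 2

# closed loop (27) and adaptive law (28): W_tilde_dot = Gamma X Sr
Sr_dot = np.linalg.solve(H, -(C + Kd) @ Sr - X.T @ W_tilde - eps_x - tau_d)
W_tilde_dot = Gamma @ X @ Sr

V_dot = (Sr @ H @ Sr_dot + 0.5 * Sr @ H_dot @ Sr
         + W_tilde @ np.linalg.solve(Gamma, W_tilde_dot))
V_dot_expected = -Sr @ Kd @ Sr - Sr @ eps_x - Sr @ tau_d
```

Both expressions for \dot{V} agree, confirming that only the negative definite and disturbance terms survive in (30).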
6 Without loss of generality, in the rest of the paper we refer to W_2 as W, omitting its subindex.
Since S_r is a function of \Delta q, \Delta\dot{q} and the initial conditions, then for sufficiently small initial errors belonging to
a neighborhood of radius r > 0 centered at the equilibrium S_r = 0, and invoking Lyapunov arguments, there exists a large enough feedback gain K_d (dominating \epsilon_N + \bar{\beta}) such that S_r converges into a bounded set \Omega. The boundedness of the tracking errors can then be concluded as t \to \infty. In this way, the upper bound of S_r is established:

S_r \in L_\infty   (32)
so \|S_r\| < \epsilon_1 with \epsilon_1 > 0. Boundedness of S_r implies boundedness of the state of the closed-loop system and S_q \in L_\infty; since the desired trajectories are C^2 and the feedback gains are bounded, we have (\dot{q}_r, \ddot{q}_{cont}) \in L_\infty, which implies Y_{cont}\Theta \in L_\infty and \dot{\hat{W}} \in L_\infty; in particular, \|X^T\tilde{W}\| \leq \epsilon_2 for some \epsilon_2 > 0. By virtue of H(q) being positive definite and upper bounded, from (27) we have that

\|\dot{S}_r\| = \|H(q)^{-1}\{-(C(q,\dot{q}) + K_d)S_r - X^T\tilde{W} - \epsilon(x) - \tau_d\}\|
             \leq \lambda_M(H(q)^{-1})\{(\beta_2\|\dot{q}\| + \lambda_M(K_d))\epsilon_1 + \epsilon_2 + \epsilon_N + \bar{\beta}\} \equiv \eta(t)   (33)

where the bounded function \eta(t) does not depend on acceleration measurements. So far, we conclude the boundedness of all closed-loop error signals.

Part 2 (Sliding mode): We now show that a sliding mode at S_q = 0 arises for all time. Multiplying the derivative of S_r by S_q^T and rearranging, we obtain the sliding mode condition

S_q^T \dot{S}_q = S_q^T(\dot{S}_r - K_i\,sign(S_q)) \leq |S_q|\,\|\dot{S}_r\| - \mu_m|S_q| \leq -\mu|S_q|   (34)

where \mu = \mu_m - \eta_{sup}, with \mu_m = \lambda_m(K_i) and \eta_{sup} the supremum of \eta(t). Thus, in order to prove that S_q \to 0 in finite time, we can always choose \mu_m > \eta_{sup}, so that \mu > 0 guarantees the existence of a sliding mode at S_q = 0 at time t_q \leq |S_q(t_0)|/\mu. Notice, however, that by construction S_q(t_0) = 0 for any initial condition, hence t_q \to 0: a sliding mode on S_q(t) = 0 is enforced for all time, without reaching phase, and then (10) renders S = S_d \;\forall t.

Part 3 (Exponential convergence): If k in (12) is tuned large enough that S_d \to 0 within some small time 0 < t_d \ll 1, then (10) yields

S = 0 \quad \forall t \geq t_d > 0   (35)

which guarantees exponential stability of the tracking errors, since the solution of S = 0 converges to zero exponentially.

Figure 3: High performance planar manipulator

Remark 6 (Passivity and dissipativity): Given the structure and properties of the proposed neural network, it is possible to show that the neuro-adaptive weight law guarantees passivity of the low dimensional neural network as well as of the closed loop. The passivity analysis of the closed-loop system can be obtained as in Parra-Vega et al. (2003), where (29) qualifies as a Lyapunov candidate function. Now, consider the feedforward block mapping S_r \to X^T\tilde{W} in (28). Then, we have that

\int_0^t S_r^T X^T\tilde{W}\,d\tau = \int_0^t \tilde{W}^T\Gamma^{-1}\dot{\tilde{W}}\,d\tau = \frac{1}{2}\int_0^t \frac{d}{d\tau}(\tilde{W}^T\Gamma^{-1}\tilde{W})\,d\tau
 = \frac{1}{2}\tilde{W}^T(t)\Gamma^{-1}\tilde{W}(t) - \frac{1}{2}\tilde{W}^T(0)\Gamma^{-1}\tilde{W}(0) \geq -\frac{1}{2}\tilde{W}^T(0)\Gamma^{-1}\tilde{W}(0)

and the mapping is passive. On the other hand, for large enough K_i, dissipativity of the block given by the mapping S_q \to \dot{S}_q is established.

Remark 7 (Model-free control structure): Notice that the control synthesis does not depend on any knowledge of the robot dynamics (it is model-free) and keeps a very simple structure. The principal advantage of the second order sliding mode with respect to other schemes (Barambones and Etxebarria, 2002; Jager, 1996; Stepanenko et al., 1998) is that a smooth control input is guaranteed while some high frequency components are compensated, whereas other approaches require chattering attenuation or reduction (Lee and Choi, 2004).

Remark 8 (Adaptive neural network): In the proposed scheme the neural network compensates unknown or time-varying parameters due to payloads, while the second order sliding mode stabilizes against unmodelled dynamics and disturbances. It is fundamental to notice that the neural network is tuned by the extended error S_r; boundedness of the weights can therefore be assured when S_r is bounded. Furthermore, it is not
necessary to include in the control law a component to suppress the neural network reconstruction error for closed-loop stability (Kwan et al., 2001). Ertugrul and Kaynak (2000) present a neuro-sliding mode scheme based on two parallel neural networks, which approximate the equivalent control and the corrective control, respectively. The number of neurons in each neural network is determined through the design of the first order sliding mode. Chattering is eliminated by defining a boundary layer; nevertheless, outside the boundary layer, as in the reaching phase, high frequency transients may arise and the error increases. Although the Adaline was one of the first neural networks reported in the literature, due to its simplicity it has recently been used in several approaches to solve many problems, e.g., as a current compensator to achieve selective compensation of harmonic currents in three-phase electric systems with neutral conductor (Villalva and Filho, 2006) and for data driven function approximation based on generalized Adalines (Wu et al., 2006).

Remark 9 (Comparisons): Some characteristics of the proposed scheme in comparison to other well known approaches are the following: i) the discontinuity associated to the sliding mode at S_r = 0 is relegated to the first order time derivative of S_r; furthermore, the discontinuous dynamics imposed through sign(S_q) satisfies the sliding condition for S_q, not for S_r, which avoids the use of a boundary layer.
The sliding mode is thus guaranteed without chattering and without knowledge of the regressor, in contrast to first order sliding mode control; ii) in contrast to adaptive control, the proposed scheme is faster and more robust, given that the sliding mode is induced without reaching phase, without any knowledge of the regressor and without any overparametrization; and finally iii) in contrast to adaptive (first order) sliding mode control, the proposed scheme induces a sliding mode for all time, and thus is faster and robust without any knowledge of the regressor.

VI. EXPERIMENTAL RESULTS

In this section we present experimental results carried out on a 2 degree of freedom planar robot arm (Fig. 3). The experiments were developed under LabWindows 5.0 on a Pentium 4 at 1.0 GHz with 256 MB RAM under Windows 2000. Each run lasts on average 12 s with a 1 ms sampling time. The planar manipulator control system used to demonstrate the usefulness of our controller is shown in Fig. 4. The parameters of the planar robot are m1 = 7.19 kg, m2 = 1.89 kg, l1 = 0.5 m, l2 = 0.35 m, lc1 = 0.19 m, lc2 = 0.12 m, I1 = 0.02 kg m^2, I2 = 0.016 kg m^2 for the first and second link, respectively. The objective of these experiments is to command a desired task so that the end effector follows it in finite time. The desired task is defined as a circle of radius 0.1 m traversed in 2.5 s, whose center is located at X = (0.55, 0) m in the Cartesian workspace. For each experiment we have
[Figures 4-12: experimental plots (control torques in Nm, joint position and velocity tracking errors in degrees, and Cartesian trajectories in m, versus time in s); only axis labels and tick values survived extraction.]
different initial conditions, the neural network weights initialized at zero, zero initial velocity, and 100% parametric uncertainty; i.e., the neural network approximates the regressor based only on the states that drive it (the extended error). The performance of the proposed controller of Theorem 1 is depicted in Fig. 6. It can be seen from Fig. 5 that the control input is smooth and chattering-free. Figure 7 and Fig. 8 show the position/velocity and the Cartesian tracking errors, respectively.

In order to increase the convergence speed of the tracking errors, a time base generator (TBG) that induces well-posed terminal attractors is proposed in Parra-Vega (2001). The TBG sliding surface yields finite time convergence of the tracking errors and allows obtaining a small error at a given time that is generally defined by the user. Setting the finite convergence time to tg = 1.5 s, the end effector follows the desired trajectory exactly, Fig. 12. The smooth control effort is shown in Fig. 9; its frequency content is normal in direct drive robots. Furthermore, the convergence of the Cartesian and joint tracking errors is shown in Fig. 10 and Fig. 11, respectively. It is important to note that the overshoots present in both experiments, at approximately t = 6.5 s and t = 3 s, are unrelated to the experiment itself and are possibly due to the data acquisition system. The feedback gains used in these experiments are given in Table 1; they were tuned on a trial-and-error basis according to the interplay of each gain in the closed-loop system.

Table 1: Feedback gains

             Kd1   Kd2   alpha1  alpha2  Ki1    Ki2    lambda
Figs. 5-8    30    1.8   5       5       0.01   0.01   10
Figs. 9-12   15    1.5   3       3       0.01   0.01   10

VII. CONCLUSION

A neuro-sliding controller that uses a simple continuous second order change of coordinates is presented to guarantee convergence of tracking errors. The controller uses few nodes to approximate the regressor online, and chattering is eliminated by means of the second order sliding surface. The experimental results demonstrate the stability properties and robustness of the proposed control scheme.

REFERENCES

Arimoto, S., Control Theory of Non-linear Mechanical Systems, Oxford University Press (1996).
Barambones, O. and V. Etxebarria, Robust Neural Network for Robotic Manipulators, Automatica, 38, 235-242 (2002).
Chih-Min, L. and H. Chun-Fei, Neural-Network-Based Adaptive Control for Induction Servomotor Drive System, IEEE Trans. on Industrial Electronics, 49, 115-123 (2002).
Choi, Y., M. Lee, S. Kim and Y. Kay, Design and Implementation of an Adaptive Neural-Network Compensator for Control Systems, IEEE Trans. on Industrial Electronics, 48, 416-423 (2001).
Cotter, N.E., The Stone-Weierstrass Theorem and Its Application to Neural Networks, IEEE Trans. on Neural Networks, 1, 290-295 (1990).
Debbache, A., A. Bennia and N. Goléa, Neural Network-based MRAC Control of Dynamic Nonlinear Systems, Int. J. Appl. Math. Comput. Sci., 16, 219-232 (2006).
Ertugrul, M. and O. Kaynak, Neuro Sliding Mode Control of Robotic Manipulators, Mechatronics, 10, 239-263 (2000).
Ge, S.S. and C.C. Hang, Structural Network Modeling and Control of Rigid Body Robots, IEEE Trans. on Robotics and Automation, 14, 823-827 (1998).
Ge, S.S. and T.H. Harris, Adaptive Neural Network Control of Robotic Manipulators, World Scientific (1994).
Hayakawa, T., W. Haddad, J.W. Bailey and N. Hovakimyan, Passivity-Based Neural Network Adaptive Output Feedback Control for Nonlinear Nonnegative Dynamical Systems, IEEE Trans. on Neural Networks, 16, 387-398 (2005).
Jager, B., Adaptive Robot Control with Second Order Sliding Component, 13th IFAC Triennial World Congress, San Francisco, USA, 271-276 (1996).
Jung, S. and T.C. Hsia, A Study on Neural Network Control of Robot Manipulators, Robotica, 14, 7-15 (1996).
Kosmatopoulos, E.B. and M.A. Christodoulou, Filtering, Prediction, and Learning Properties of ECE Neural Networks, IEEE Trans. Syst. Man Cybern., 24, 971-981 (1994).
Kwan, C., D.M. Dawson and F.L. Lewis, Robust Adaptive Control of Robots using Neural Network: Global Stability, Asian Journal of Control, 3, 111-121 (2001).
Lee, M.-J. and Y.-K. Choi, An Adaptive Neurocontroller Using RBFN for Robot Manipulators, IEEE Trans. on Industrial Electronics, 51, 711-717 (2004).
Lewis, F.L. and C.T. Abdallah, Control of Robot Manipulators, Macmillan (1994).
Lewis, F.L., A. Yesildirek and K. Liu, Multilayer Neural Net Robot Controller with Guaranteed Tracking Performance, IEEE Trans. on Neural Networks, 7, 388-399 (1996).
Lin, C.H., W.D. Chou and F.J. Lin, Stable Adaptive Control with Neural Network, Automatica, 36, 522 (2000).
Lin, C.H., W.D. Chou and F.J. Lin, Adaptive Hybrid Control using a Recurrent Neural Network for a Linear Synchronous Motor Servo-drive System, IEE Proc. Control Theory Appl., 148, 156-168 (2001).
Narendra, K.S. and K. Parthasarathy, Identification and Control of Dynamical Systems Using Neural Networks, IEEE Trans. on Neural Networks, 1, 4-27 (1990).
Parra-Vega, V., Second Order Sliding Mode Control for Robot Arms with Time Base Generators for Finite-Time Tracking, Dynamics and Control, 11, 174-186 (2001).
Parra-Vega, V., S. Arimoto, Y.H. Liu, G. Hirzinger and P. Akella, Dynamic Sliding PID Control for Tracking of Robot Manipulators: Theory and Experiments, IEEE Trans. on Robotics and Automation, 19, 967-976 (2003).
Sanchez, E.N., A.G. Loukianov and R.A. Felix, Recurrent Neural Block Form Control, Automatica, 39, 1275-1282 (2003).
Slotine, J.J.E. and W. Li, On the Adaptive Control of Robot Manipulators, Int. Journal of Robotics Research, 6, 49-59 (1987).
Slotine, J.J.E. and M.W. Spong, Robust Robot Control with Bounded Input Torques, Journal of Robotic Systems, 2, 329-352 (1985).
Stepanenko, Y., Y. Cao and A.C. Su, Variable Structure Control of Robotic Manipulators with PID Sliding Surfaces, Int. Journal of Robust and Nonlinear Control, 8, 79-90 (1998).
Stern, H.S., Neural Networks in Applied Statistics, Proc. of the Statistical Computing Section, American Statistical Association, 150-154 (1991).
Sun, F.C. and Z.Q. Sun, Stable Neuro-Adaptive Control for Robots with the Upper Bound Estimation on the Neural Approximation Errors, J. Intelligent and Robotic Systems, 26, 91-100 (1999).
Villalva, M.G. and E.R. Filho, Control of a Shunt Power Filter with Neural Network - Theory and Practical Results, IEEE Trans. on Industry Applications, 126, 946-953 (2006).
Widrow, B. and M.E. Hoff, Adaptive Switching Circuits, IRE WESCON Convention Record, NY, 4, 96-104 (1960).
Wu, J., Z. Lin and P. Hsu, Function Approximation Using Generalized Adalines, IEEE Trans. on Neural Networks, 17, 541-558 (2006).
Yamakita, M. and T. Satoh, Adaptive ANN Control of Robot Arm Using Structure of Lagrange Equation, Proc. of the American Control Conference, San Diego, 2834-2836 (1999).
Yu, W., Passivity Analysis for Dynamic Neuro Identifier, IEEE Trans. on Circuits and Systems-I: Fundamental Theory and Applications, 50, 173-178 (2003).
Received: March 11, 2008
Accepted: October 10, 2008
Recommended by Subject Editor: José Guivant