
Modeling of Quadruple Tank System Using Support Vector Regression

Kemal UÇAK, Gülay ÖKE
Department of Control Engineering, Istanbul Technical University, Istanbul, Turkey
[email protected], [email protected]

Abstract— In this paper, the ε-Support Vector Regression (SVR) method is employed to model a quadruple tank system. For a successful and reliable analysis and synthesis in control engineering, the correct estimation of the system model is of primary significance. SVR can be used as an important tool in modeling, since it has good generalization ability owing to its basic properties of structural risk minimization and ensuring global minima. This suggests the use of SVR in intelligent modeling of nonlinear systems and in tuning of controller parameters based on this system model. In this work, we employ SVR to model a quadruple tank system as an example of MIMO process modeling. We present and discuss our simulation results.

Fig. 1. Transition of data not represented by a linear regression surface to the feature space using a kernel

Keywords— Support Vector Regression, NARX Model, Quadruple Tank System, MIMO System
I. INTRODUCTION

System modeling methods that learn from sample training data, such as artificial neural networks (ANN) and support vector machines (SVM), have been popular in recent years due to their nonlinear prediction ability [1,2]. However, the performance of ANNs depends on the network structure and the selection of training samples, which may lead to over-fitting and low generalization ability [3]. Another persistent problem observed in classical neural networks (such as multilayer perceptrons) is the possibility of convergence to local minima. To a large extent, these problems are avoided in SVMs [1].

SVM algorithms rely on statistical learning theory and the principle of structural risk minimization. The main strength of SVMs is that they can solve classification and regression problems without getting stuck at local minima [4]. They achieve global minima by transforming the problem into a quadratic programming (QP) problem [4]. Support Vector Machine theory has recently been used in the solution of many classification and regression problems [2,5,7], instead of the Artificial Neural Network approach, since it is based on structural risk minimization (SRM) [4] rather than the empirical risk minimization (ERM) used in ANNs.

Due to their superior generalization ability, SVMs have been successfully used to solve both classification and regression problems. Gene selection for cancer cells [7], face recognition [8], and intrusion detection [21] are some of the areas where SVMs are used as pattern classifiers. Modeling of microwave transitions [2], robot manipulator control [9], model predictive control [10], and PID controller tuning [19,20] are examples of applications where the Support Vector Regression technique has been employed for identification.

The main design component in an SVM is the kernel, which is a nonlinear mapping function from the input to the feature space [11]. Training data that are not separable by a linear plane in the input space can be mapped to a high-dimensional feature space where linear classification or regression can be successfully performed, as illustrated in Figure 1. Linear classification or regression techniques are then applied in the high-dimensional feature space [11].

In this paper, Support Vector Regression has been employed for modeling a quadruple tank system. In Section II, a brief overview of Support Vector Regression is given, and how the regression problem is transformed into a quadratic programming problem is explained. The quadruple tank system is described in detail in Section III. In Section IV, the application of the Support Vector Regression method to the modeling of the quadruple tank is presented. Simulation results, with a detailed search for the best set of kernel parameters and a performance analysis of the system for different initial conditions, are given in Section V. The paper ends with a brief conclusion in Section VI.

978-1-61284-922-5/11/$26.00 ©2011 IEEE
minimization problem presented in equation (4) is called the
II. SUPPORT VECTOR REGRESSION

A. Linear Regression

SVMs can be applied to solve regression problems by the introduction of an alternative loss function [5]. The training data set

(y1, x1), ..., (yk, xk),   x ∈ Rⁿ, y ∈ R,   k = 1, 2, ..., k   (1)

where k is the size of the training data and n is the dimension of the input vectors, can be approximated by a linear function of the following form:

f(x) = ⟨w, x⟩ + b   (2)

where ⟨·, ·⟩ denotes the inner product. The optimum regression function is determined by the minimum of equation (3):

Φ(w) = (1/2)‖w‖²   (3)

The ε-tolerance loss function, shown in Figure 2, sets the constraints of the primal form. The primal form of the optimization problem is defined as:

min over (w, b, ξi, ξi*):   (1/2)‖w‖² + C Σ_{i=1..l} (ξi + ξi*)
subject to
  yi − ⟨w, xi⟩ − b ≤ ε + ξi
  ⟨w, xi⟩ + b − yi ≤ ε + ξi*   (4)
  ξi, ξi* ≥ 0

Fig. 2. ε-tolerance loss function [13]

The term ‖w‖ is the Euclidean norm of the weights, which represents the model complexity, and the second term of the objective function is the empirical risk of the weight vector. The trade-off between model complexity and empirical loss is adjusted by means of the C parameter. ξi and ξi* are slack variables representing the upper and lower constraints on the output of the system [5]. The model complexity and the training error are minimized through (1/2)‖w‖² and C Σ_{i=1..l} (ξi + ξi*), respectively. The minimization problem presented in equation (4) is called the primal objective function [12]. The key idea in SVMs is to construct a Lagrange function from the primal objective function and the corresponding constraints, by introducing a dual set of variables [13]. By utilizing the primal objective function and its constraints, the Lagrangian function can be derived as follows:

L = (1/2)‖w‖² + C Σ_{i=1..l} (ξi + ξi*)
    − Σ_{i=1..l} αi (ε + ξi − yi + ⟨w, xi⟩ + b)
    − Σ_{i=1..l} αi* (ε + ξi* + yi − ⟨w, xi⟩ − b)
    − Σ_{i=1..l} (ηi ξi + ηi* ξi*)   (5)

In (5), L is the Lagrangian and ηi, ηi*, αi, αi* are Lagrange multipliers [13]. Hence, the dual variables in (5) have to satisfy positivity constraints, i.e. αi(*), ηi(*) ≥ 0 (the superscript (*) refers to both the starred and unstarred variables).

Since the Lagrangian function has a saddle point with respect to the primal and dual variables at the solution, the partial derivatives of L with respect to the primal variables (w, b, ξi, ξi*) have to vanish for optimality [13]:

∂L/∂b = Σ_{i=1..l} (αi − αi*) = 0   (6)

∂L/∂w = w − Σ_{i=1..l} (αi − αi*) xi = 0   ⇒   w = Σ_{i=1..l} (αi − αi*) xi   (7)

∂L/∂ξi(*) = C − αi(*) − ηi(*) = 0   (8)

The problem is transformed to its dual form by utilizing equation (7) above. The dual form of the regression problem is defined as follows:

min over (αi, αi*):   (1/2) Σ_{i=1..l} Σ_{j=1..l} (αi − αi*)(αj − αj*) ⟨xi, xj⟩ − Σ_{i=1..l} [αi (yi − ε) − αi* (yi + ε)]
subject to
  0 ≤ αi ≤ C,   i = 1, 2, ..., l
  0 ≤ αi* ≤ C,  i = 1, 2, ..., l
  Σ_{i=1..l} (αi − αi*) = 0

This dual problem can be solved by finding the Lagrange multipliers by means of a quadratic programming technique (in our implementation, the "quadprog" command from the Matlab Optimization Toolbox was used). The support vectors are the training data associated with nonzero Lagrange multipliers [5,6,13]. The solution of the regression problem can be expressed in terms of the support vectors and the related Lagrange multipliers.

B. Non-Linear Regression

Often, the training data are nonlinearly distributed and cannot be fitted with a linear regression surface. In this case, the training data are mapped onto a high-dimensional feature space by means of a kernel function, as depicted in Figure 1. This allows us to use linear regression techniques to solve nonlinear regression problems. In this paper, the Gaussian function has been employed as the kernel function:

K(x, y) = exp( −‖x − y‖² / (2σ²) )   (9)

where σ is the bandwidth of the Gaussian radial basis kernel function.

All linear regression formulas can be transformed to nonlinear regression equations by using the kernel K(xi, xj) in place of the inner product ⟨xi, xj⟩. Thus, the nonlinear regression (approximation) function, the optimal weight vector of the regression hyperplane, and the optimal bias of the regression hyperplane can be expressed in terms of the support vectors as given in (10)-(12) [5,14]:

f(x) = Σ_{i∈SV} λi K(xi, x) + b,   λi = αi − αi*   (10)

⟨w, x⟩ = Σ_{i∈SV} λi K(xi, x)   (11)

b = (1/l) Σ_{i=1..l} ( yi − Σ_{j∈SV} λj K(xj, xi) )   (12)

III. QUADRUPLE TANK SYSTEM

In this paper, support vector regression is used to model a quadruple tank system: a highly nonlinear and coupled system with four tanks with varying liquid levels and two pumps.

In the quadruple tank system, illustrated in Figure 3, the amount of liquid each pump provides is distributed between two tanks by means of a valve. Pump 1 fills Tank 1, and through the valve some amount of liquid is directed towards Tank 4. Similarly, Pump 2 fills Tank 2 and Tank 3. The ratio of the split is controlled by the position of each valve. The differential equations describing the system are as follows [15]:

dh1/dt = −(a1/A1)√(2g·h1) + (a3/A1)√(2g·h3) + (γ1·k1/A1)·v1
dh2/dt = −(a2/A2)√(2g·h2) + (a4/A2)√(2g·h4) + (γ2·k2/A2)·v2
dh3/dt = −(a3/A3)√(2g·h3) + ((1 − γ2)·k2/A3)·v2
dh4/dt = −(a4/A4)√(2g·h4) + ((1 − γ1)·k1/A4)·v1   (13)

where
  Ai : cross section of Tank i
  ai : cross section of outlet hole i
  hi : water level in Tank i
  γi : ratio of the flow from Pump i

The control inputs of the process are v1 and v2 (input voltages to the pumps) and the controlled variables are y1 and y2 (voltages from the level measurement devices) [16]. In the technical literature, y1 and y2 are generally selected as the system outputs, since the two pumps provide liquid directly to Tank 1 and Tank 2 [15],[16],[17].
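The dual problem of Section II can be handed to any general-purpose QP solver (the implementation in this paper uses Matlab's "quadprog"). As an illustrative sketch only, not the authors' code, the ε-SVR dual with the Gaussian kernel (9) can be solved in Python with SciPy's SLSQP optimizer on a toy one-dimensional data set; the data and the C, ε, σ values below are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Y, sigma):
    # K(x, y) = exp(-||x - y||^2 / (2 sigma^2)), as in equation (9)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_eps_svr(X, y, C=100.0, eps=0.05, sigma=1.0):
    l = len(y)
    K = gaussian_kernel(X, X, sigma)

    def dual(z):
        a, a_star = z[:l], z[l:]
        lam = a - a_star
        # (1/2) lam^T K lam + eps * sum(a + a*) - sum(y_i * lam_i),
        # which is the dual objective rewritten with lam_i = a_i - a_i*
        return 0.5 * lam @ K @ lam + eps * np.sum(a + a_star) - y @ lam

    cons = {"type": "eq", "fun": lambda z: np.sum(z[:l] - z[l:])}  # sum(a - a*) = 0
    bounds = [(0.0, C)] * (2 * l)                                  # 0 <= a, a* <= C
    res = minimize(dual, np.zeros(2 * l), bounds=bounds,
                   constraints=cons, method="SLSQP",
                   options={"maxiter": 500})
    lam = res.x[:l] - res.x[l:]
    b = np.mean(y - K @ lam)   # bias averaged over training points, cf. (12)
    return lam, b

def predict(X_train, lam, b, X_new, sigma=1.0):
    # f(x) = sum_i lam_i K(x_i, x) + b, equation (10)
    return gaussian_kernel(X_new, X_train, sigma) @ lam + b

# Toy regression problem: fit sin(x) on 15 samples
X = np.linspace(0.0, 2.0 * np.pi, 15).reshape(-1, 1)
y = np.sin(X).ravel()
lam, b = train_eps_svr(X, y)
y_hat = predict(X, lam, b, X)
```

The support vectors are the training points whose λi are nonzero; prediction only needs those points, their multipliers, and the bias.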
Fig. 3. Quadruple Tank System [16]

γi is a parameter that controls the amount of liquid that flows from Pump i to Tank i (i = 1, 2). If γ1 is the ratio of the flow from Pump 1 to the first tank, then 1 − γ1 denotes the flow ratio to the fourth tank. Similarly, γ2 and 1 − γ2 represent the ratios of flow from Pump 2 to Tank 2 and Tank 3, respectively.

The quadruple tank system has an adjustable zero which can be controlled by means of the two valves. The system can show both minimum and nonminimum phase characteristics depending on the value of the adjustable zero. Since our work is about modeling of the quadruple tank system, we have tried to identify all states of the process. Therefore, in our model, v1 and v2 are used as inputs, and the liquid levels in the four tanks, which we denote as y1, y2, y3 and y4, as outputs.

The physical parameters of the quadruple tank process are listed in Table I.

TABLE I. THE SYSTEM PARAMETERS

  Symbol  | Description                        | Value
  A1, A3  | Area of Tank 1 and Tank 3          | 28 cm²
  A2, A4  | Area of Tank 2 and Tank 4          | 32 cm²
  a1, a3  | Area of outlet pipe 1 and pipe 3   | 0.071 cm²
  a2, a4  | Area of outlet pipe 2 and pipe 4   | 0.057 cm²
  k       | Calibration constant               | 0.5 V/cm
  g       | Gravitational constant             | 981 cm/s²

IV. SYSTEM MODELING

Modeling a system using support vector machines involves finding the mapping functions between the input and the output of the system, as presented in (14):

ŷk(x) = Σ_{i∈SVk} λi K(xi, x) + bk,   k = 1, 2, 3, 4   (14)

In this study, the NARX model of the quadruple tank system has been obtained using ε-SVR. First, a random signal has been applied for 2000 seconds to reveal all dynamics of the system, as depicted in Figure 5. Out of the 20000 instances of data generated, 300 are selected randomly for training and 300 for testing. The selected data have been used to train the ε-SVRs; then the test data are used to evaluate the performance of the obtained NARX model. The testing results are given in Section V.

The dynamics of a nonlinear system can be represented by the Nonlinear AutoRegressive with eXogenous inputs (NARX) model,

y(n) = f( u(n), ..., u(n − nu), y(n − 1), ..., y(n − ny) )   (15)

where u(n) is the control input applied to the plant at time n, y(n) is the output of the plant, and nu and ny stand for the number of past control inputs and the number of past plant outputs involved in the model, respectively [4].

Fig. 4. NARX model for training (the regressor x(n) is formed from the delayed inputs u1, u2 and delayed outputs y1-y4, and feeds the four blocks ε-SVR1 to ε-SVR4)

Since the quadruple tank system has 2 inputs and 4 outputs, the NARX model is defined as follows:

y(n) = f([ u(n), ..., u(n − nu), y(n − 1), ..., y(n − ny) ])   (16)

where
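The tank dynamics (13) can be simulated directly with a fourth-order Runge-Kutta integrator, as done for data generation in this work. The sketch below uses the Table I parameters; note that the pump gains k1, k2 and valve ratios γ1, γ2 are not listed in Table I, so the values used here are assumptions chosen to be representative of the minimum-phase configuration in [16], for illustration only.

```python
import numpy as np

# Table I parameters (cm^2 and cm/s^2)
A = np.array([28.0, 32.0, 28.0, 32.0])      # tank cross sections A1..A4
a = np.array([0.071, 0.057, 0.071, 0.057])  # outlet cross sections a1..a4
g = 981.0

# Assumed values (NOT in Table I): pump gains and valve ratios,
# representative of the minimum-phase setting of [16].
k1, k2 = 3.33, 3.35   # cm^3 / (V s)
g1, g2 = 0.70, 0.60   # gamma_1, gamma_2

def tank_dynamics(h, v):
    # Right-hand side of equation (13); sqrt is guarded against
    # tiny negative levels introduced by numerical error.
    q = np.sqrt(2.0 * g * np.maximum(h, 0.0))
    v1, v2 = v
    return np.array([
        -a[0]/A[0]*q[0] + a[2]/A[0]*q[2] + g1*k1/A[0]*v1,
        -a[1]/A[1]*q[1] + a[3]/A[1]*q[3] + g2*k2/A[1]*v2,
        -a[2]/A[2]*q[2] + (1.0 - g2)*k2/A[2]*v2,
        -a[3]/A[3]*q[3] + (1.0 - g1)*k1/A[3]*v1,
    ])

def rk4_step(h, v, dt=0.1):
    # One fourth-order Runge-Kutta step with the 0.1 s sampling period
    k_1 = tank_dynamics(h, v)
    k_2 = tank_dynamics(h + 0.5*dt*k_1, v)
    k_3 = tank_dynamics(h + 0.5*dt*k_2, v)
    k_4 = tank_dynamics(h + dt*k_3, v)
    return h + dt/6.0 * (k_1 + 2*k_2 + 2*k_3 + k_4)

# Simulate 200 s from empty tanks with a constant 3 V on both pumps
h = np.zeros(4)
for _ in range(2000):
    h = rk4_step(h, (3.0, 3.0))
```

With these assumed gains, the steady-state levels stay below the 20 cm maximum, consistent with the observation in the text that 0-5 V inputs cannot overflow the tanks.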
u(n) = [u1(n), u2(n)]ᵀ,   y(n) = [y1(n), y2(n), y3(n), y4(n)]ᵀ,
nu = [nu1, nu2]ᵀ,   ny = [ny1, ny2, ny3, ny4]ᵀ

Since the amplitude of the input signals fluctuates randomly between 0 and 5 V, the output of the system cannot exceed 20 cm, which is the maximum liquid level in the tanks. An input varying between these values, held for τmin = τmax = 20 seconds, has been applied to the plant so as to reveal the dynamics of the system. A sampling period of 0.1 second has been employed. The fourth-order Runge-Kutta method has been used in the simulation of the system.

Once the data are arranged according to the NARX model, the training process is converted to finding the solution of the regression problem. The NARX model of the system, shown in Figure 4, has been trained in the series-parallel (SP) mode [18]. Four separate SVR MISO structures have been combined to model the MIMO quadruple tank system. During training, the system has also been utilized to search for the best kernel parameters. The results of this search are given in Section V.

The subsets of the training data with nonzero Lagrange multipliers are called support vectors. The Lagrange multipliers and the corresponding training data constitute the model of the system in accordance with equation (14). The number of support vectors varies with the parameter ε. Generally, as the number of support vectors increases, the modeling error decreases.

The testing results for the SVR based modeling of the quadruple tank system, together with a detailed analysis involving a search for the best set of kernel parameters and the response of the model to varying initial conditions, are given in Section V.

Fig. 5. Input signals applied to the process (pump voltages over 2000 s)

V. SIMULATION RESULTS

A. Testing Results for SVR Modeling

After the NARX model is trained with the selected 300 data points as described in Section IV, the performance of the attained model has been tested. The testing results are given in Figures 6-7 below. The liquid level obtained by the model, in comparison with the actual level, and the modeling error are illustrated for Tanks 1 and 2 (h1 and h2). The testing is performed with the kernel parameter set given in Table II, which is the result of the search procedure for the best kernel parameter set, described in detail in Section V-B.

Fig. 6. Testing results and modeling error for h1

Fig. 7. Testing results and modeling error for h2

B. The Best Kernel Parameter Search for the Model

The modeling performance changes with the varying values of the kernel parameters, epsilon and sigma; therefore
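Arranging the data according to the NARX model (16) amounts to stacking delayed inputs and delayed outputs into a regressor vector x(n) for each sample, with y(n) as the target. A minimal sketch of this arrangement (the helper name and the random toy sequences are illustrative only; nu = ny = 6 as used in this work):

```python
import numpy as np

def build_narx_dataset(u, y, nu=6, ny=6):
    """Stack delayed inputs and outputs into regressors x(n) with
    targets y(n), following equation (16).
    u: (N, 2) pump voltages; y: (N, 4) tank levels."""
    N = len(u)
    start = max(nu, ny)
    X, T = [], []
    for n in range(start, N):
        past_u = u[n - nu:n + 1].ravel()   # u(n - nu), ..., u(n)
        past_y = y[n - ny:n].ravel()       # y(n - ny), ..., y(n - 1)
        X.append(np.concatenate([past_u, past_y]))
        T.append(y[n])
    return np.array(X), np.array(T)

# Toy sequences standing in for the 2000 s experiment data
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 5.0, size=(100, 2))    # inputs in 0-5 V
y = rng.uniform(0.0, 20.0, size=(100, 4))   # levels in 0-20 cm
X, T = build_narx_dataset(u, y)
```

Each row of X is then a training input for the four MISO ε-SVRs, and each column of T is the corresponding target for one of them.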
the SVR based model has been searched for the set of values of these parameters that yields the best performance. Epsilon has been varied from 0.001 to 0.01 with a step size of 0.001, while sigma has been varied from 5 to 25 with a step size of 5. The training and testing results, as epsilon and sigma are varied in these ranges, are illustrated for h1 in Figure 8. As a result of this search, the best set of model parameters has been identified, as listed in Table II. In the search, the numbers of past inputs (nu) and outputs (ny) have been set to 6. Table III lists the training, testing and modeling errors obtained for the SVR implementations for each plant output when the best set of kernel parameters is used.

TABLE II. SVR MODEL PARAMETERS

  Symbol          | Description                             | Value
  ε1, ε2, ε3, ε4  | Epsilon parameter for SVR1, SVR2,       | 0.001, 0.001,
                  | SVR3, SVR4                              | 0.001, 0.001
  C               | Regularization parameter                | 1000
  σ1, σ2, σ3, σ4  | Kernel function parameter               | 25, 25, 25, 25

TABLE III. THE BEST SVR MODEL ERRORS

  Model | Training Error (MAE) | Testing Error (MAE) | Model Error (MAE)
  SVR1  | 0.000906             | 0.0017              | 0.070985
  SVR2  | 0.00090523           | 0.0021              | 0.10439
  SVR3  | 0.00087528           | 0.0012              | 0.028135
  SVR4  | 0.00090625           | 0.0010              | 0.022156

C. Performance of the Model for Different Initial Conditions

In this subsection, the performance of the model has been analyzed for different initial conditions. During the first implementations, the initial conditions for all outputs were chosen as zero. Then the attained model was tested for other initial conditions. It is observed that the model responses are successful, not only for the initial conditions the process is exposed to during training but also for various other initial conditions. This indicates that the nonlinear modeling has been performed successfully. If the performance of the model were good only for certain initial conditions and bad for others, we would deduce that the obtained model is valid only in certain regions. Figures 9-10 illustrate h1 obtained when the system is tested starting from different initial conditions.

Fig. 8. Error surface of y1 for training and testing obtained in the best kernel parameter search

Fig. 9. Output response of the system and model for x10 = 5 cm, x20 = 5 cm, x30 = 10 cm, x40 = 10 cm
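The grid described above (epsilon from 0.001 to 0.01 in steps of 0.001, sigma from 5 to 25 in steps of 5, 50 candidates in total) can be enumerated as follows. The `evaluate` function is a hypothetical stand-in for training the four ε-SVRs and returning a test MAE; here it is replaced by a dummy surface whose minimum mirrors the best set reported in Table II (ε = 0.001, σ = 25).

```python
import numpy as np
from itertools import product

# Grid from Section V-B: 10 epsilon values x 5 sigma values = 50 candidates
epsilons = np.round(np.arange(0.001, 0.0101, 0.001), 4)
sigmas = np.arange(5, 26, 5)

def evaluate(eps, sigma):
    # Hypothetical stand-in: in the real search this would train the
    # four SVRs with (eps, sigma) and return the mean absolute test
    # error. The dummy surface below is minimized at the smallest eps
    # and largest sigma, mirroring the reported best parameter set.
    return eps + 1.0 / sigma

best = min(product(epsilons, sigmas), key=lambda p: evaluate(*p))
```

Swapping the dummy surface for the actual train-and-test routine turns this into the grid search used to fill Tables II and III.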
Fig. 10. Output response of the system and model for x10 = 2 cm, x20 = 2 cm, x30 = 5 cm, x40 = 5 cm

VI. CONCLUSION

In this paper, the Support Vector Regression method is used to model the highly nonlinear quadruple tank system. The testing results show that SVR can be successfully employed in MIMO process modeling by using multiple SVR structures. The response of the system is analyzed for changing values of the initial conditions. The kernel parameters play a crucial role in the modeling performance, so a grid search analysis has also been carried out to find the best kernel parameter set. One of the main strengths of ε-SVR with respect to neural networks is that it solves regression problems without getting stuck at local minima. It also demonstrates powerful generalization ability with very little training data. By combining the powerful features of neural and fuzzy approaches with support vector modeling, more sophisticated and successful modeling techniques can be developed in future work.

REFERENCES

[1] J. A. K. Suykens, "Nonlinear Modeling and Support Vector Machines," IEEE Instrumentation and Measurement Technology Conference, Budapest, Hungary, May 2001.
[2] L. Xia, R. Xu, and B. Yan, "LTCC Interconnect Modelling by Support Vector Regression," Progress In Electromagnetics Research, PIER 69, pp. 67-75, 2007.
[3] K. Chen, C. Ho, and H. Shiau, "Application of support vector regression in forecasting international tourism demand," Tourism Management Research, vol. 4, pp. 81-97, 2004.
[4] S. İplikçi, "Controlling the Experimental Three-Tank System via Support Vector Machines," Lecture Notes in Computer Science, vol. 5495, Adaptive and Natural Computing Algorithms, Springer, Berlin/Heidelberg, pp. 391-400, 2009.
[5] S. Gunn, "Support Vector Machines for Classification and Regression," ISIS Technical Report, May 14, 1998.
[6] V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[7] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene Selection for Cancer Classification Using Support Vector Machines," Machine Learning, vol. 46, no. 1-3, pp. 389-422, 2002.
[8] G. D. Guo, S. Z. Li, and K. L. Chan, "Support Vector Machines for face recognition," Image and Vision Computing, vol. 19, no. 9-10, pp. 631-638, Aug. 2001.
[9] F. Abdessemed and Y. Bazi, "Kernel Regression for Robot Manipulator Control."
[10] M. G. Na and B. R. Upadhyaya, "Model predictive control of an SP-100 space reactor using support vector regression and genetic optimization," IEEE Transactions on Nuclear Science, vol. 53, part 2, pp. 2318-2327, Aug. 2006.
[11] W. M. Campbell, D. E. Sturim, and D. A. Reynolds, "Support Vector Machines Using GMM Supervectors for Speaker Verification," IEEE Signal Processing Letters, vol. 13, no. 5, May 2006.
[12] A. Farag and R. M. Mohamed, "Regression Using Support Vector Machines: Basic Foundations," Technical Report, December 2004.
[13] A. J. Smola and B. Schölkopf, "A Tutorial on Support Vector Regression," Statistics and Computing, vol. 14, no. 3, pp. 199-222, Aug. 2004.
[14] B. R. Chang, "A Tunable Epsilon-Tube in Support Vector Regression for Refining Parameters of GM(1,1|τ) Prediction Model - SVRGM(1,1|τ) Approach," 2003 IEEE International Conference on Systems, Man and Cybernetics, vols. 1-5, pp. 4700-4704, 2003.
[15] R. Suja Mani Malar and T. Thyagarajan, "Design of Decentralized Fuzzy Controllers for Quadruple Tank Process," IJCSNS International Journal of Computer Science and Network Security, vol. 8, no. 11, November 2008.
[16] K. H. Johansson, "The Quadruple-Tank Process: A Multivariable Laboratory Process with an Adjustable Zero," IEEE Transactions on Control Systems Technology, vol. 8, no. 3, May 2000.
[17] S. Dormido and F. Esquembre, "The Quadruple-Tank Process: An Interactive Tool for Control Education," Proceedings of the European Control Conference, 2003.
[18] J. M. P. Júnior and G. A. Barreto, "Long-Term Time Series Prediction with the NARX Network: An Empirical Evaluation," 9th Brazilian Symposium on Neural Networks, Ribeirão Preto, Brazil, 2006; Neurocomputing, vol. 71, pp. 3335-3343.
[19] W. Shang, S. Zhao, and Y. Shen, "Adaptive PID Controller Based on Online LSSVM Identification," 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, vols. 1-3, pp. 694-698, 2008.
[20] S. İplikçi, "A comparative study on a novel model-based PID tuning and control mechanism for nonlinear systems," International Journal of Robust and Nonlinear Control, vol. 20, pp. 1483-1501, Sep. 2010.
[21] S. Mukkamala and A. H. Sung, "Detecting denial of service attacks using Support Vector Machines," 12th IEEE International Conference on Fuzzy Systems, St. Louis, MO, May 25-28, 2003.