Training Neural Network Controller

Training procedure of a neural network controller.

Neural networks are good at fitting functions. In fact, there is proof that a fairly simple neural network can fit any practical function. The workflow for the neural network design process has seven primary steps:

1. Collect data
2. Create the network — Create Neural Network Object
3. Configure the network — Configure Neural Network Inputs and Outputs
4. Initialize the weights and biases
5. Train the network — Neural Network Training Concepts
6. Validate the network
7. Use the network

The Deep Learning Toolbox software uses the network object to store all of the information that defines
a neural network. After a neural network has been created, it needs to be configured and then trained.
Configuration involves arranging the network so that it is compatible with the problem you want to
solve, as defined by sample data. After the network has been configured, the adjustable network
parameters (called weights and biases) need to be tuned, so that the network performance is optimized.
This tuning process is referred to as training the network. Configuration and training require that the
network be provided with example data. This section shows how to format the data for presentation to the network. It also explains network configuration and the two forms of network training: incremental training and batch training.
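
As a rough command-line sketch of the same workflow (assuming the Deep Learning Toolbox is installed, and that x and t are example input and target matrices you have already prepared), the steps above map onto toolbox commands as follows:

% Create, configure, and train a simple fitting network (sketch only).
% x: R-by-Q matrix of inputs, t: S-by-Q matrix of targets (assumed to exist).
net = feedforwardnet(10);        % 2. create the network (10 hidden neurons)
net = configure(net, x, t);      % 3. configure inputs and outputs from sample data
net = init(net);                 % 4. (re)initialize weights and biases
[net, tr] = train(net, x, t);    % 5. train the network
perf = perform(net, t, net(x));  % 6. validate: mean squared error on the data
y = net(x);                      % 7. use the network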

To define a fitting problem for the toolbox, arrange a set of Q input vectors as columns in a matrix. Then,
arrange another set of Q target vectors (the correct output vectors for each of the input vectors) into a
second matrix (see “Data Structures” for a detailed description of data formatting for static and time
series data).
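
As an illustration only, a data set with Q = 4 samples, two inputs per sample, and one target per sample could be arranged like this (the values are made up):

% Inputs: one column per sample (2-by-4); targets: one column per sample (1-by-4).
inputs  = [0 1 2 3;
           1 2 3 4];
targets = [0 1 4 9];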

The next section shows how to train a network to fit a data set using the Neural Network Fitting app (nftool).

Using the Neural Network Fitting App

Open the Neural Network Start GUI with this command:

nnstart
Click Fitting app to open the Neural Network Fitting App. (You can also use the command nftool.)

Click Next to proceed.


Use the Inputs and Targets options in the Select Data window when you need to load data from the
MATLAB® workspace.
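
If you do not yet have data in the workspace, one option is to load one of the sample data sets that ship with the toolbox, for example simplefit_dataset (used here only as a stand-in for your own controller data):

[x, t] = simplefit_dataset;   % x: inputs, t: targets, one column per sample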

Click Next to display the Validation and Test Data window, shown in the following figure.

The validation and test data sets are each set to 15% of the original data.

With these settings, the input vectors and target vectors will be randomly divided into three sets as
follows:

70% will be used for training.


15% will be used to validate that the network is generalizing and to stop training before overfitting.

The last 15% will be used as a completely independent test of network generalization.
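
If you prefer to set the same division at the command line rather than in the app, the data-division ratios are network properties; a sketch, assuming the default random division function, is:

net.divideFcn = 'dividerand';      % divide the samples at random
net.divideParam.trainRatio = 0.70; % 70% for training
net.divideParam.valRatio   = 0.15; % 15% for validation (early stopping)
net.divideParam.testRatio  = 0.15; % 15% for independent testing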

Click Next.

The standard network that is used for function fitting is a two-layer feedforward network, with a sigmoid
transfer function in the hidden layer and a linear transfer function in the output layer. The default
number of hidden neurons is set to 10. You might want to increase this number later, if the network training performance is poor. The number of hidden neurons adopted for this controller is 100.
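
At the command line, an equivalent fitting network can be created with fitnet, whose first argument is the hidden-layer size; a sketch of the network used here (100 hidden neurons, with the default sigmoid hidden layer and linear output layer) is:

net = fitnet(100);                     % two-layer fitting network with 100 hidden neurons
net.layers{1}.transferFcn = 'tansig';  % sigmoid hidden layer (the default)
net.layers{2}.transferFcn = 'purelin'; % linear output layer (the default)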

Click Next.

Select a training algorithm, then click Train. Levenberg-Marquardt (trainlm) is recommended for most
problems, but for some noisy and small problems Bayesian Regularization (trainbr) can take longer but
obtain a better solution. For large problems, however, Scaled Conjugate Gradient (trainscg) is
recommended as it uses gradient calculations which are more memory efficient than the Jacobian
calculations the other two algorithms use. This example uses the default Levenberg-Marquardt.
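
The training algorithm is selected by setting the network's training function before calling train; a sketch:

net.trainFcn = 'trainlm';     % Levenberg-Marquardt (the default for fitnet)
% net.trainFcn = 'trainbr';   % Bayesian regularization: slower, often better on small noisy problems
% net.trainFcn = 'trainscg';  % scaled conjugate gradient: more memory efficient for large problems
net.trainParam.max_fail = 6;  % stop after six consecutive validation failures (the default)
[net, tr] = train(net, inputs, targets);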

The training continued until the validation error failed to decrease for six iterations (validation stop).
Under Plots, click Regression. This is used to validate the network performance.

The following regression plots display the network outputs with respect to targets for training,
validation, and test sets. For a perfect fit, the data should fall along a 45 degree line, where the network
outputs are equal to the targets. For this problem, the fit is reasonably good for all data sets, with R values in each case close to 1. If even more accurate results were required, you could retrain the
network by clicking Retrain in nftool. This will change the initial weights and biases of the network, and
may produce an improved network after retraining. Other options are provided on the following pane.
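
The same regression check can be reproduced at the command line with the trained network; a sketch using the full data set:

outputs = net(inputs);                    % simulate the trained network
plotregression(targets, outputs)          % outputs vs. targets; a perfect fit lies on the 45 degree line
[r, m, b] = regression(targets, outputs)  % r close to 1 indicates a good fit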

Click Next in the Neural Network Fitting App to evaluate the network.
At this point, you can test the network against new data.
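
Testing against new data amounts to simulating the trained network on inputs it has not seen; for example (xNew and tNew are hypothetical matrices of new inputs and targets, one column per sample):

yNew = net(xNew);                    % simulate the trained network on new inputs
perfNew = perform(net, tNew, yNew);  % mean squared error on the new data, if targets are available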

If you are dissatisfied with the network's performance on the original or new data, you can do one of the
following:

Train it again.

Increase the number of neurons.

Get a larger training data set.

If the performance on the training set is good but the test set performance is significantly worse, which could indicate overfitting, then reducing the number of neurons can improve your results. If training performance is poor, then you may want to increase the number of neurons.

If you are satisfied with the network performance, click Next.


Use this panel to generate a MATLAB function or Simulink® diagram for simulating your neural network.
You can use the generated code or diagram to better understand how your neural network computes
outputs from inputs, or deploy the network with MATLAB Compiler™ tools and other MATLAB code
generation tools. Click Simulink Diagram to generate the neural network controller block.
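
From the command line, the same artifacts can be generated with genFunction and gensim; a sketch (the file name myNeuralNetworkFunction is just an example):

genFunction(net, 'myNeuralNetworkFunction'); % stand-alone MATLAB function for deployment
gensim(net);                                 % Simulink diagram containing the network block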

Each time a neural network is trained, it can produce a different solution due to different initial weight and bias values and different divisions of data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain several times; a command-line sketch of such a retraining loop follows. The graphical image of the neural network controller trained as in the above steps is provided after the sketch.
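
A minimal retraining loop, assuming the same inputs and targets as before and keeping the network with the lowest validation error (the count of five retrainings is arbitrary):

bestPerf = Inf;
for k = 1:5
    net = init(net);                  % new random initial weights and biases
    [net, tr] = train(net, inputs, targets);
    if tr.best_vperf < bestPerf       % keep the network with the lowest validation error
        bestPerf = tr.best_vperf;
        bestNet  = net;
    end
end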
