
Introduction to NETLAB

• NETLAB is a Matlab toolbox for experimenting with neural networks

• Available from:
http://www.ncrg.aston.ac.uk/netlab/index.php

• Installation: follow the instructions from the site:

1. get three files: netlab.zip, nethelp.zip, foptions.m,
2. unzip them,
3. move foptions.m to the netlab directory,
4. add the netlab directory to the Matlab path (see the sketch below)
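
A minimal sketch of step 4; the path below is hypothetical, use wherever you unzipped netlab:

>> addpath('C:\toolboxes\netlab')  % make the toolbox visible to Matlab
>> savepath                        % optionally keep it across sessions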
Creating Multilayer Perceptrons
>> net=mlp(nin, nhidden, nout, act_function)
creates a structure "net", where

– nin=number of inputs
– nhidden=number of hidden units
– nout=number of outputs
– act_function = name of the activation function for the
output layer (a string):
'linear', 'logistic', or 'softmax'

• Hidden units always use the hyperbolic tangent
(the logistic sigmoid scaled to (-1, 1))
Creating Multilayer Perceptrons: Example

>> net=mlp(2,3,1,'logistic')

creates a structure "net" with the following fields:

type: 'mlp'
nin: 2
nhidden: 3
nout: 1
nwts: 13
outfn: 'logistic'
w1: [2x3 double]
b1: [-0.2661 -0.0117 -0.0266]
w2: [3x1 double]
b2: 0.3873
Creating Multilayer Perceptrons: Example

We may access and modify all the fields using the "."
operator.
E.g.:
>> a=net.w1

a=
0.5107 0.7603 0.8469
1.4655 0.8327 -0.6392

>> net.w1=rand(2,3);

Applying the network to data

>> a=mlpfwd(net, x)
applies the network net to input data x (input
patterns are rows of x; x must have net.nin
columns)

>> error=mlperr(net, x, t)
calculates error of network net, applied to input data
x, and desired outputs (targets) t

type 'help mlpfwd' and 'help mlperr' for more options!
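
A minimal sketch tying the two calls together (the input patterns and targets below are invented for illustration):

>> net = mlp(2, 3, 1, 'logistic');  % 2 inputs, 3 hidden, 1 output
>> x = [0 1; 1 1];                  % 2 patterns as rows, net.nin columns
>> t = [1; 0];                      % one target row per pattern
>> a = mlpfwd(net, x)               % outputs, one row per input pattern
>> error = mlperr(net, x, t)        % scalar error value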

Training MLP

>> [net, options] = netopt(net, options, x, t, alg)

trains the network net on training data x (inputs) and t
(targets), using some options (a vector of 18
numbers) and a learning algorithm alg

Most important training algorithms:

'graddesc': gradient descent (backpropagation)
'quasinew': quasi-Newton optimization
'scg': scaled conjugate gradients (very fast)

>> [net, error] = mlptrain(net, x, t, iterations)

An 'easy' training function using the scg optimization
procedure
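
A minimal end-to-end sketch using 'scg'; the noisy sine data below is made up for illustration:

x = linspace(0, 2*pi, 50)';     % 50 patterns, net.nin = 1 column
t = sin(x) + 0.1*randn(50, 1);  % noisy targets
net = mlp(1, 5, 1, 'linear');   % regression => linear outputs
options = foptions;             % default 18-element options vector
options(1) = 1;                 % display error values
options(14) = 200;              % at most 200 iterations
[net, options] = netopt(net, options, x, t, 'scg');
y = mlpfwd(net, x);             % predictions of the trained net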
Options for 'graddesc'

The "options" argument is a vector of 18 numbers:


options(1) is set to 1 to display error values;
options(2) is the absolute precision required for the solution
(stop criterion)
options(3) is a measure of the precision required of the
objective function (another stop criterion)
options(7) determines search strategy.
If it is set to 1 then a line minimiser is used.
If it is 0 (the default), then each parameter update is a fixed
multiple (the learning rate) of the negative gradient added
to a fixed multiple (the momentum) of the previous
parameter update (backpropagation)
options(14) is the maximum number of iterations; default 100.
options(17) is the momentum (alpha); default 0.5.
options(18) is the learning rate (eta); default 0.01.
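
For example, a plain backpropagation run with hand-picked values (net, x, and t are assumed to exist, e.g. from the previous slide; the values are illustrative, not recommendations):

options = foptions;    % defaults for all 18 entries
options(1) = 1;        % show error at each iteration
options(14) = 500;     % maximum number of iterations
options(17) = 0.9;     % momentum (alpha)
options(18) = 0.001;   % learning rate (eta)
[net, options] = netopt(net, options, x, t, 'graddesc');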
Functions for single layer networks

net = glm(nin, nout, func)

create a generalized linear model; func may be
'linear' (linear regression), 'logistic' (logistic
regression), or 'softmax' (modeling probabilities)

y = glmfwd(net, x)
apply the model to data x

error = glmerr(net, x, t)
calculate the error

net = glmtrain(net, options, x, t)

train the network with the help of the iterative
reweighted least squares (IRLS) algorithm
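
A minimal sketch of logistic regression with a GLM; the two-class data below is invented for illustration:

x = [randn(20,2) - 1; randn(20,2) + 1];  % 40 patterns, 2 inputs
t = [zeros(20,1); ones(20,1)];           % binary class labels
net = glm(2, 1, 'logistic');
options = foptions;
options(14) = 10;                        % IRLS iterations
net = glmtrain(net, options, x, t);
y = glmfwd(net, x);                      % predicted class probabilities
error = glmerr(net, x, t)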
Some links

softmax:
the outputs of the net should sum up to 1, so they
can be interpreted as probabilities (see the sketch
after these links); more details at
http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-12.html

IRLS: http://en.wikipedia.org/wiki/IRLS

Line search and other optimization methods:
Chapter 10 of "Numerical Recipes"
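
A quick illustration of the softmax idea (the raw scores are made up):

>> a = [2.0 1.0 0.1];          % raw output activations
>> y = exp(a) ./ sum(exp(a))   % softmax: each y(k) is in (0, 1)
>> sum(y)                      % equals 1, so y acts as probabilities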

To do:

1. Study the scripts and play with them as much as you like ...
2. Train a network to "memorize" the 10 points from
demo_sin_bad.m (we didn't succeed with it during
the lecture!)
3. Demonstrate the "early stopping" mechanism (see the sketch below):
1. Create a data set (either for a classification or a regression
problem).
2. Split it at random into a train set and a validation set.
3. Write a script that trains a network on the train set and
validates it on the validation set; the script should plot
two learning curves resembling the plot from slide 37 of
the last lecture.
4. What is the optimal number of iterations?
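
A minimal sketch of task 3 with invented data; the split, network size, and iteration budget are arbitrary choices:

x = linspace(-1, 1, 100)';  t = x.^3 + 0.05*randn(100, 1);
idx = randperm(100);                     % random train/validation split
xtr = x(idx(1:50), :);   ttr = t(idx(1:50), :);
xva = x(idx(51:100), :); tva = t(idx(51:100), :);

net = mlp(1, 8, 1, 'linear');
options = foptions;
options(14) = 1;                         % one iteration per netopt call
for it = 1:300
    [net, options] = netopt(net, options, xtr, ttr, 'scg');
    etr(it) = mlperr(net, xtr, ttr);     % training error
    eva(it) = mlperr(net, xva, tva);     % validation error
end
plot(1:300, etr, 'b-', 1:300, eva, 'r-');
legend('train', 'validation');
[emin, best] = min(eva)                  % iteration with lowest validation error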
