
Assignment-1 (MLP From Scratch)

Roll No : EDM18B055

This notebook builds and trains a multi-layer perceptron (MLP) from scratch in Python. It defines functions for initializing the network with fixed, hand-picked weights, forward propagation to make predictions, backward propagation to compute errors, and updating the weights from those errors. The network is then trained on a single sample input for several epochs, printing the loss after each epoch and plotting the loss curve at the end.
In [16]: # Initialize the network with given weights

def initialize_network():
    network = list()
    hidden_layer = [{'weights': [0.5, 1.5, 0.8]}, {'weights': [0.8, 0.2, -1.6]}]
    network.append(hidden_layer)
    output_layer = [{'weights': [0.9, -1.7, 1.6]}, {'weights': [1.2, 2.1, -0.2]}]
    network.append(output_layer)
    return network

#seed(1)
network = initialize_network()
for layer in network:
    print(layer)

[{'weights': [0.5, 1.5, 0.8]}, {'weights': [0.8, 0.2, -1.6]}]

[{'weights': [0.9, -1.7, 1.6]}, {'weights': [1.2, 2.1, -0.2]}]

In [17]: network[0][0]['weights']

Out[17]: [0.5, 1.5, 0.8]
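Each neuron is stored as a dictionary whose 'weights' list holds three values. Because the forward pass below always feeds an input vector that starts with a constant 1, the first weight of each neuron effectively acts as its bias. A minimal sketch to inspect this layout (not part of the original assignment; it assumes the cells above have been run):

# Sketch: walk the hand-initialized network and print its layout
for l, layer in enumerate(network):
    print("layer", l, "has", len(layer), "neurons")
    for n, neuron in enumerate(layer):
        # weights[0] multiplies the constant 1 in the input, i.e. the bias
        print("  neuron", n, "bias =", neuron['weights'][0],
              "weights =", neuron['weights'][1:])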

In [18]: from math import exp

# Calculate neuron activation for an input
# (inputs[0] is the constant 1, so weights[0] acts as the bias term)
def activate(weights, inputs):
    activation = 0
    for i in range(len(weights)):
        activation += weights[i] * inputs[i]
    return activation

# Transfer neuron activation (sigmoid)
def transfer(activation):
    return 1.0 / (1.0 + exp(-activation))

# Forward propagate input to a network output
def forward_propagate(network, row):
    inputs = row
    for layer in network:
        new_inputs = [1]  # leading 1 so the next layer's first weight works as a bias
        for neuron in layer:
            activation = activate(neuron['weights'], inputs)
            neuron['output'] = transfer(activation)
            new_inputs.append(neuron['output'])
        inputs = new_inputs
    return inputs

row = [1, 0.7, 1.2]
output = forward_propagate(network, row)
print("Sample output after first iteration : ", output)

Sample output after first iteration : [1, 0.441370708075126, 0.9563777409741487]

Rough calculations area
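As a rough check of the forward pass above (a minimal sketch, not part of the original notebook), the hidden and output activations for row = [1, 0.7, 1.2] can be recomputed by hand with the same sigmoid transfer:

from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# hidden layer: each weight list is [bias, w1, w2], input is [1, 0.7, 1.2]
h1 = sigmoid(0.5 * 1 + 1.5 * 0.7 + 0.8 * 1.2)     # ~0.9248
h2 = sigmoid(0.8 * 1 + 0.2 * 0.7 + (-1.6) * 1.2)  # ~0.2729

# output layer sees [1, h1, h2]
o1 = sigmoid(0.9 * 1 + (-1.7) * h1 + 1.6 * h2)    # ~0.4414
o2 = sigmoid(1.2 * 1 + 2.1 * h1 + (-0.2) * h2)    # ~0.9564

print(h1, h2)
print(o1, o2)

The two values printed by forward_propagate above (0.4414... and 0.9564...) match o1 and o2.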


In [4]: network

Out[4]: [[{'weights': [0.5, 1.5, 0.8], 'output': 0.9248398905178734},

{'weights': [0.8, 0.2, -1.6], 'output': 0.2728917836588705}],

[{'weights': [0.9, -1.7, 1.6], 'output': 0.441370708075126},

{'weights': [1.2, 2.1, -0.2], 'output': 0.9563777409741487}]]

Backpropagation functions
In [6]: # Calculate the derivative of a neuron output (sigmoid derivative)

def transfer_derivative(output):
    return output * (1.0 - output)

# Backpropagate error and store it as 'delta' in each neuron
def backward_propagate_error(network, expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        errors = list()
        if i != len(network) - 1:
            # hidden layer: accumulate error from the next layer's deltas
            for j in range(len(layer)):
                error = 0.0
                for neuron in network[i + 1]:
                    error += (neuron['weights'][j] * neuron['delta'])
                errors.append(error)
        else:
            # output layer: error is (output - expected)
            for j in range(len(layer)):
                neuron = layer[j]
                errors.append(neuron['output'] - expected[j])
        for j in range(len(layer)):
            neuron = layer[j]
            neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])

# test backpropagation of error
expected = [1, 0]
backward_propagate_error(network, expected)
for layer in network:
    print(layer)

[{'weights': [0.5, 1.5, 0.8], 'output': 0.9248398905178734, 'delta': -0.005288681914226109}, {'weights': [0.8, 0.2, -1.6], 'output': 0.2728917836588705, 'delta': 0.06308662975256321}]

[{'weights': [0.9, -1.7, 1.6], 'output': 0.441370708075126, 'delta': -0.13773709407665294}, {'weights': [1.2, 2.1, -0.2], 'output': 0.9563777409741487, 'delta': 0.039899464922185435}]
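For the output layer these deltas follow delta = (output - expected) * output * (1 - output); hidden-layer deltas reuse the next layer's weights and deltas. A minimal sketch (not part of the original notebook) that reproduces the first output neuron's delta from the values printed above:

# output neuron 0: output ~0.44137, target 1
output_val, target = 0.441370708075126, 1
delta = (output_val - target) * output_val * (1.0 - output_val)
print(delta)  # ~ -0.13774, matching the 'delta' stored on that neuron above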

In [26]: # Update network weights with error

from termcolor import colored
from matplotlib import pyplot as plt

def update_weights(network, row, l_rate):
    for i in range(len(network)):
        inputs = row[1:]
        if i != 0:
            inputs = [neuron['output'] for neuron in network[i - 1]]
        for neuron in network[i]:
            # note: inputs drops the leading 1, so only the first len(inputs)
            # weights of each neuron are adjusted here
            for j in range(len(inputs)):
                neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
            #neuron['weights'][-1] -= l_rate * neuron['delta']
    print(network)

def train_network(network, train, l_rate, n_epoch):
    sum_error = [0] * n_epoch
    for epoch in range(n_epoch):
        # forward_propagate returns [1, o1, o2]; the loss below compares its
        # first two entries against expected
        outputs = forward_propagate(network, train)
        expected = [1, 0]
        sum_error[epoch] += sum([(outputs[i] - expected[i]) ** 2 for i in range(len(expected))])
        backward_propagate_error(network, expected)
        print(colored('>\nepoch=%d, lrate=%.3f, error=%.3f' % (epoch, l_rate, sum_error[epoch]), 'blue', attrs=['bold']))
        print("\n****Network information after epoch {}*****\n".format(epoch))
        update_weights(network, train, l_rate)
    plt.plot(sum_error)
    plt.xlabel('# Epoch')
    plt.ylabel('Loss')
    plt.title("Loss")
    plt.show()

network = initialize_network()
train_network(network, [1, 0.7, 1.2], 0.5, 20)

>

epoch=0, lrate=0.500, error=0.195

****Network information after epoch 0*****

[[{'weights': [0.49814896133002085, 1.4968267908514643, 0.8], 'output': 0.9248398905178734, 'delta': -0.005288681914226109}, {'weights': [0.8220803204133972, 0.23785197785153794, -1.6], 'output': 0.2728917836588705, 'delta': 0.06308662975256321}], [{'weights': [0.8363076204969492, -1.7187936606392837, 1.6], 'output': 0.441370708075126, 'delta': -0.13773709407665294}, {'weights': [1.2184503083851779, 2.105444118074825, -0.2], 'output': 0.9563777409741487, 'delta': 0.039899464922185435}]]

>

epoch=1, lrate=0.500, error=0.181

****Network information after epoch 1*****

[[{'weights': [0.49644681926311324, 1.49390883302248, 0.8], 'output': 0.9245563314505987, 'delta': -0.004863263048307508}, {'weights': [0.8450649849122418, 0.27725425984955737, -1.6], 'output': 0.28263604802534664, 'delta': 0.06567046999669901}], [{'weights': [0.7713804682798238, -1.738641833311896, 1.6], 'output': 0.4254117279560214, 'delta': -0.1404503976848156}, {'weights': [1.2365618259170772, 2.1109807928720503, -0.2], 'output': 0.9572430178291578, 'delta': 0.039178829706315206}]]

>

epoch=2, lrate=0.500, error=0.168

****Network information after epoch 2*****

[[{'weights': [0.4949141487347035, 1.4912813978309205, 0.8], 'output': 0.9242947149321868, 'delta': -0.004379058652599138}, {'weights': [0.8689572370665225, 0.3182124063997529, -1.6], 'output': 0.29300021205771415, 'delta': 0.06826357758365922}], [{'weights': [0.7053789600185942, -1.7595642248437875, 1.6], 'output': 0.40930561899668566, 'delta': -0.14281485590031065}, {'weights': [1.254345428739597, 2.1166181716366044, -0.2], 'output': 0.9580784710552251, 'delta': 0.03848037327320323}]]

>

epoch=3, lrate=0.500, error=0.155

****Network information after epoch 3*****

[[{'weights': [0.49357038743478504, 1.4889778070310602, 0.8], 'output': 0.9240584335402775, 'delta': -0.003839317999767008}, {'weights': [0.8937489857886979, 0.36071254706633926, -1.6], 'output': 0.3040058990945623, 'delta': 0.07083356777764392}], [{'weights': [0.6384837207208796, -1.7815720834701447, 1.6], 'output': 0.3931353659734925, 'delta': -0.14478573403940226}, {'weights': [1.271811421270902, 2.122364306913694, -0.2], 'output': 0.9588859851419804, 'delta': 0.03780278799986449}]]

>

epoch=4, lrate=0.500, error=0.142

****Network information after epoch 4*****

[[{'weights': [0.49243317941826137, 1.4870283075741626, 0.8], 'output': 0.9238507183454857, 'delta': -0.0032491657614961336}, {'weights': [0.9194196174891702, 0.404719344267149, -1.6], 'output': 0.3156679827921286, 'delta': 0.07334466200134958}], [{'weights': [0.570892194399973, -1.804667244131786, 1.6], 'output': 0.3769875931209344, 'delta': -0.14632564542885357}, {'weights': [1.2889695499125475, 2.128227020173793, -0.2], 'output': 0.9596673583173323, 'delta': 0.037144807707405234}]]

>

epoch=5, lrate=0.500, error=0.130

****Network information after epoch 5*****

[[{'weights': [0.49151775936451214, 1.4854590160534498, 0.8], 'output': 0.9236745244917645, 'delta': -0.0026154858678548807}, {'weights': [0.9459351258336008, 0.45017450142903004, -1.6], 'output': 0.3279929226169233, 'delta': 0.07575859526980172}], [{'weights': [0.5028143600581447, -1.828841395587888, 1.6], 'output': 0.3609497614900891, 'delta': -0.1474065431852998}, {'weights': [1.305829014879377, 2.134213744979558, -0.2], 'output': 0.9604243066067717, 'delta': 0.036505207234347255}]]

>

epoch=6, lrate=0.500, error=0.119

****Network information after epoch 6*****

[[{'weights': [0.49083642432678154, 1.4842910131316258, 0.8], 'output': 0.9235324218474482, 'delta': -0.0019466715363731797}, {'weights': [0.9732476406538392, 0.49699595540658137, -1.6], 'output': 0.3409771540993575, 'delta': 0.07803575662925226}], [{'weights': [0.434467809089907, -1.8540756070680193, 1.6], 'output': 0.34510718826780257, 'delta': -0.1480111566229938}, {'weights': [1.3223984807402633, 2.140331352960109, -0.2], 'output': 0.9611584666850882, 'delta': 0.03588280274500934}]]

>

epoch=7, lrate=0.500, error=0.109

****Network information after epoch 7*****

[[{'weights': [0.49039813337299587, 1.4835396572108503, 0.8], 'output': 0.9234264992545241, 'delta': -0.0012522598679591153}, {'weights': [1.001295416487801, 0.5450778568362303, -1.6], 'output': 0.3546056517845985, 'delta': 0.08013650238274826}], [{'weights': [0.36607251467881796, -1.8803401316113624, 1.6], 'output': 0.32954013391329956, 'delta': -0.14813370520838223}, {'weights': [1.3386860863878025, 2.1465859677251378, -0.2], 'output': 0.9618713975682812, 'delta': 0.03527645277818649}]]

>

epoch=8, lrate=0.500, error=0.099

****Network information after epoch 8*****

[[{'weights': [0.49020826330667616, 1.483214165668588, 0.8], 'output': 0.9233582899939123, 'delta': -0.000542485903770531}, {'weights': [1.0300033143848148, 0.5942913960882543, -1.6], 'output': 0.368850794113048, 'delta': 0.0820225654200399}], [{'weights': [0.29784565598433516, -1.907594482692636, 1.6], 'output': 0.31432120866866076, 'delta': -0.14777981512449004}, {'weights': [1.3546994552781608, 2.152982773705699, -0.2], 'output': 0.9625645811206779, 'delta': 0.034685060098315415}]]

>

epoch=9, lrate=0.500, error=0.090

****Network information after epoch 9*****

[[{'weights': [0.49026853267793213, 1.483317484590741, 0.8], 'output': 0.9233287240721105, 'delta': 0.00017219820358852788}, {'weights': [1.0592837813448304, 0.6444864823054236, -1.6], 'output': 0.38367165950752297, 'delta': 0.08365847702861548}], [{'weights': [0.2299968429580326, -1.9357877642517876, 1.6], 'output': 0.2995133181876333, 'delta': -0.14696567161275417}, {'weights': [1.3704457068836315, 2.159525828553868, -0.2], 'output': 0.9632394212873855, 'delta': 0.03410757446389369}]]

>

epoch=10, lrate=0.500, error=0.081

****Network information after epoch 10*****

[[{'weights': [0.4905770893961532, 1.4838464389648343, 0.8], 'output': 0.9233381101467102, 'delta': 0.0008815906234887966}, {'weights': [1.089038303201726, 0.6954942340601016, -1.6], 'output': 0.3990138692408089, 'delta': 0.08501291959112996}], [{'weights': [0.16272402497637883, -1.9648592247521315, 1.6], 'output': 0.2851683032478762, 'delta': -0.14571654141074006}, {'weights': [1.385931470358046, 2.166217888953165, -0.2], 'output': 0.9638972419375281, 'delta': 0.03354299644786475}]]

>

epoch=11, lrate=0.500, error=0.074

****Network information after epoch 11*****

[[{'weights': [0.49112874236796183, 1.484792129773649, 0.8], 'output': 0.9233861468469432, 'delta': 0.0015761513480246406}, {'weights': [1.1191592811044688, 0.7471301961790893, -1.6], 'output': 0.41481006403555076, 'delta': 0.08605993686497951}], [{'weights': [0.09621027116241844, -1.9947390042712703, 1.6], 'output': 0.2713263426110903, 'delta': -0.1440648726236207}, {'weights': [1.4011629014041158, 2.1730602602721927, -0.2], 'output': 0.9645392832489541, 'delta': 0.032990382405193974}]]

>

epoch=12, lrate=0.500, error=0.067

****Network information after epoch 12*****

[[{'weights': [0.49191530696472274, 1.4861405262252392, 0.8], 'output': 0.9234719604525731, 'delta': 0.0022473274193168832}, {'weights': [1.1495322599696844, 0.7991981599480303, -1.6], 'output': 0.43098105844202406, 'delta': 0.08677993961490171}], [{'weights': [0.030621501204270193, -2.025349048440473, 1.6], 'output': 0.25801609704169925, 'delta': -0.14204821102744614}, {'weights': [1.416145703228751, 2.1800526802538926, -0.2], 'output': 0.965166696695673, 'delta': 0.03244885056887356}]]

>

epoch=13, lrate=0.500, error=0.060

****Network information after epoch 13*****

[[{'weights': [0.4929260288670372, 1.4878731923434925, 0.8], 'output': 0.9235941638097753, 'delta': 0.00288777686375561}, {'weights': [1.1800384194347002, 0.8514944333166287, -1.6], 'output': 0.4474376626679614, 'delta': 0.08716045561433053}], [{'weights': [-0.033894857623332914, -2.0566041710135683, 1.6], 'output': 0.24525549217839385, 'delta': -0.1397071600398094}, {'weights': [1.4308851522619128, 2.1871932457563936, -0.2], 'output': 0.9657805389071633, 'delta': 0.031917588072151726}]]

>

epoch=14, lrate=0.500, error=0.054

****Network information after epoch 14*****

[[{'weights': [0.49414805077512663, 1.4899680870430743, 0.8], 'output': 0.9237509302233856, 'delta': 0.003491491165969806}, {'weights': [1.2105572237548572, 0.9038123835797548, -1.6], 'output': 0.46408310086449234, 'delta': 0.08719658377187695}], [{'weights': [-0.09721039287065995, -2.0884132539266096, 1.6], 'output': 0.23305298047512316, 'delta': -0.13708356479168207}, {'weights': [1.4453861289954422, 2.1944783894329825, -0.2], 'output': 0.9663817649194357, 'delta': 0.03139585847025366}]]

>

epoch=15, lrate=0.500, error=0.049

****Network information after epoch 15*****

[[{'weights': [0.49556689046777114, 1.4924003836590363, 0.8], 'output': 0.9239400758783506, 'delta': 0.004053827693269973}, {'weights': [1.2409691170325645, 0.9559470577701104, -1.6], 'output': 0.4808158951538775, 'delta': 0.08689112365059265}], [{'weights': [-0.1592155650479855, -2.1206805763893453, 1.6], 'output': 0.22140909397218259, 'delta': -0.1342190338878414}, {'weights': [1.4596531538886937, 2.201902910269694, -0.2], 'output': 0.9669712215762374, 'delta': 0.030883009116556438}]]

>

epoch=16, lrate=0.500, error=0.044

****Network information after epoch 16*****

[[{'weights': [0.4971669050954878, 1.495143265877979, 0.8], 'output': 0.9241591449290028, 'delta': 0.004571470364904668}, {'weights': [1.271158145175465, 1.007699677443654, -1.6], 'output': 0.4975330343865411, 'delta': 0.08625436612257271}], [{'weights': [-0.21981907633496436, -2.153307260906335, 1.6], 'output': 0.21031810023981606, 'delta': -0.13115384210505138}, {'weights': [1.4736904278261476, 2.209460058337685, -0.2], 'output': 0.9675496420025511, 'delta': 0.030378477591177795}]]

>

epoch=17, lrate=0.500, error=0.040

****Network information after epoch 17*****

[[{'weights': [0.4989317237645249, 1.4981686693106142, 0.8], 'output': 0.924405492469629, 'delta': 0.005042339054391801}, {'weights': [1.3010143876483315, 1.0588818073971396, -1.6], 'output': 0.5141332139376653, 'delta': 0.08530354992247595}], [{'weights': [-0.27894691599368665, -2.186192814361578, 1.6], 'output': 0.19976959851754672, 'delta': -0.12792619719460377}, {'weights': [1.487501876153274, 2.2171416703309115, -0.2], 'output': 0.9681176421089325, 'delta': 0.02988179633210044}]]

>

epoch=18, lrate=0.500, error=0.036

****Network information after epoch 18*****

[[{'weights': [0.500844637130311, 1.5014479493662476, 0.8], 'output': 0.9246763618688525, 'delta': 0.005465466759388881}, {'weights': [1.3304360918945022, 1.1093190146762892, -1.6], 'output': 0.5305199238926518, 'delta': 0.08406201213191619}], [{'weights': [-0.3365412211985323, -2.2192367286151775, 1.6], 'output': 0.18974993389780068, 'delta': -0.12457181253869724}, {'weights': [1.5010911949228327, 2.224938348884269, -0.2], 'output': 0.9686757199674414, 'delta': 0.029392594706527837}]]

>

epoch=19, lrate=0.500, error=0.032

****Network information after epoch 19*****

[[{'weights': [0.5028889381245724, 1.50495246535641, 0.8], 'output': 0.9249689541756864, 'delta': 0.005840859983603783}, {'weights': [1.3593314218370014, 1.1588538660062877, -1.6], 'output': 0.5466041800564019, 'delta': 0.0825580855499975}], [{'weights': [-0.3925590555905702, -2.2523400909476816, 1.6], 'output': 0.18024335540739217, 'delta': -0.12112370721017302}, {'weights': [1.5144618977115136, 2.2328396757359337, -0.2], 'output': 0.9692242586360831, 'delta': 0.02891059797914331}]]
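As a final check (a minimal sketch, not part of the original notebook; it assumes the trained network is still in memory), the updated weights can be pushed through one more forward pass to see how close the predictions have moved towards the target [1, 0]:

# one more forward pass with the trained weights; forward_propagate returns
# [1, o1, o2], where index 0 is the constant-1 entry it prepends
final_output = forward_propagate(network, [1, 0.7, 1.2])
print("Predictions after training:", final_output[1:])
print("Target:", [1, 0])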
