
HARARE INSTITUTE OF TECHNOLOGY

SCHOOL OF ENGINEERING AND TECHNOLOGY


DEPARTMENT OF BIOMEDICAL ENGINEERING
B. TECH (HONOURS) DEGREE IN BIOMEDICAL ENGINEERING
EBE3201: NEURAL SYSTEMS IN MEDICINE
Tutorial II Questions (2024)
1. List the assumptions underlying the McCulloch-Pitts neuron model.
2. Describe the winner-take-all learning rule.
3. Design a perceptron to implement the truth table of the AND gate, using bipolar inputs and
targets.
4. Describe the characteristics of the continuous Hopfield memory and discuss how it can be
used to solve the travelling salesman problem.
5. Explain the different types of activation functions used in artificial neural networks.
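As a reference for this question, the commonly cited activation functions can be sketched in NumPy (an illustrative sketch, not part of the question itself):

```python
import numpy as np

# Common ANN activation functions (illustrative sketch).
def step(x):        # threshold / Heaviside function
    return np.where(x >= 0, 1, 0)

def sigmoid(x):     # binary sigmoid, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def bipolar(x):     # bipolar sigmoid (tanh), output in (-1, 1)
    return np.tanh(x)

def relu(x):        # rectified linear unit
    return np.maximum(0.0, x)

print(int(step(0.5)), sigmoid(0.0), bipolar(0.0), relu(-2.0))
```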
6. Implement a perceptron to solve the simple AND problem with two inputs.
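A minimal solution sketch for the bipolar AND perceptron of Questions 3 and 6; the learning rate of 1 and zero initial weights are assumptions, since the questions do not fix them:

```python
import numpy as np

# Perceptron for bipolar AND: update only on misclassified samples.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])  # bipolar inputs
T = np.array([1, -1, -1, -1])                       # bipolar targets

w = np.zeros(2)  # initial weights (assumed)
b = 0.0          # initial bias (assumed)
eta = 1.0        # learning rate (assumed)

for epoch in range(10):
    errors = 0
    for x, t in zip(X, T):
        y = 1 if w @ x + b >= 0 else -1
        if y != t:                # perceptron rule: w += eta * t * x
            w += eta * t * x
            b += eta * t
            errors += 1
    if errors == 0:               # stop when all samples are correct
        break

print("weights:", w, "bias:", b)
for x in X:
    print(x, "->", 1 if w @ x + b >= 0 else -1)
```

With these choices the net converges to w = [1, 1], b = -1, which realises AND on bipolar inputs.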
7. Obtain the output of neuron Y for the network shown below.

8. Discuss the different learning mechanisms used in ANNs.


9. a) For the network shown in the figure, calculate the net output of the neuron.
b) How does the network work?

10. Why is the threshold function not used as an activation function in multilayer feed-forward
networks?
11. Derive an equation for the weight change in a discrete perceptron network.
12. In the given neural network, compute the total error at the output.

13. Apply the MAXNET algorithm with four neurons and inhibitory weight ε = 0.1
for up to five iterations. The initial activations are a1(0) = 0.2, a2(0) = 0.4, a3(0) =
0.6, a4(0) = 0.8. Find the activations for:
i. First iteration,
ii. Second iteration,
iii. Third iteration.
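The iterations asked for in Question 13 can be checked with a short sketch of the standard MAXNET update, a_j ← max(0, a_j − ε·Σ_{k≠j} a_k(old)):

```python
import numpy as np

# MAXNET competition sketch for Question 13 (epsilon = 0.1).
eps = 0.1
a = np.array([0.2, 0.4, 0.6, 0.8])

for it in range(1, 4):
    total = a.sum()
    a = np.maximum(0.0, a - eps * (total - a))   # subtract sum of the others
    print(f"iteration {it}: {np.round(a, 4)}")
```

After the first iteration the activations are [0.02, 0.24, 0.46, 0.68]; a1 drops to zero at the second iteration, and a4 remains the largest throughout.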
14. Train an auto-associative memory network using the outer-product rule to store the
input row vectors [1 1 1 1] and [-1 1 1 -1]. Find the weight matrix and check it with the
test vectors [1 1 1 1] and [-1 1 1 -1].
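A sketch of the outer-product training and recall asked for in Question 14; recall through the bipolar sign function is assumed:

```python
import numpy as np

# Auto-associative memory via the outer-product (Hebb) rule.
s1 = np.array([1, 1, 1, 1])
s2 = np.array([-1, 1, 1, -1])

W = np.outer(s1, s1) + np.outer(s2, s2)   # weight matrix W = s1^T s1 + s2^T s2
print("W =\n", W)

for s in (s1, s2):
    y = np.sign(W @ s)                    # recall stored pattern
    print(s, "->", y)
```

Both stored vectors are recalled exactly; some texts also zero the diagonal of W, which does not change the recall here.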
15. Distinguish between auto associative and hetero associative memories.
16. Explain the phases involved in pattern recognition processes.
17. Train a bidirectional associative memory network to store the input vectors S = S1, S2, S3, S4
associated with the output vectors T = T1, T2. The training input and target pairs are in binary
form. Obtain the weight matrix in bipolar form.
Pair   S1 S2 S3 S4   T1 T2
 1      1  0  0  0    0  1
 2      1  1  0  0    0  1
 3      0  0  0  1    1  0
 4      0  0  1  1    1  0
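The weight matrix for Question 17 can be obtained by converting each binary pair to bipolar form (0 → -1) and summing the outer products, W = Σ s_iᵀ t_i; a sketch:

```python
import numpy as np

# BAM weight matrix: binary training pairs converted to bipolar form.
S_bin = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 1]])
T_bin = np.array([[0, 1], [0, 1], [1, 0], [1, 0]])

S = 2 * S_bin - 1                 # binary -> bipolar
T = 2 * T_bin - 1

W = sum(np.outer(s, t) for s, t in zip(S, T))   # 4 x 2 weight matrix
print("W =\n", W)

for s, t in zip(S, T):
    print(s, "->", np.sign(s @ W), "target", t)
```

Each stored input then recalls its bipolar target through sign(s·W).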

18. Construct and test the Hamming network to cluster the three bipolar input vectors
x1 = [-1 -1 1 -1], x2 = [-1 -1 1 1], x3 = [-1 -1 -1 -1], using the exemplar vectors
e(1) = [1 -1 -1 -1] and e(2) = [-1 -1 -1 1].
19. Develop the training algorithm for a supervised recurrent network whose output
uses a sigmoid function.
20. Perform hetero-association using BAM for the following set of pairs.
A1 = [1 0 0 0 0 1] B1 = [1 1 0 0 0]
A2 = [0 1 1 0 0 0] B2 = [1 0 1 0 0]
A3 = [0 0 1 0 1 1] B3 = [0 1 1 1 0]
21. Establish the association between the following input and output pairs using BAM.
A1 = [1 1 -1 -1] B1 = [1 1]
A2 = [1 1 1 1] B2 = [1 -1]
A3 = [-1 -1 1 1] B3 = [-1 1]

22. Using the Hebb rule, find the weights required to perform the following classification: the
vectors [-1 -1 1 -1] and [1 1 1 -1] belong to class 1 (target = +1); the vectors [-1 -1 1 1] and
[1 1 1 -1 -1] do not belong to class 1 (target = -1). Also, using each of the training x vectors
as input, test the response of the net.
23. Train the perceptron on the following sequence with η = 1, beginning from w0 = [1 1 1]:
a) Class 1: (3, 1), (4, 2), (5, 3), (6, 4)
b) Class 2: (2, 2), (1, 3), (2, 6)
24. Derive the learning rule of the Adaline network and explain the algorithm.
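The Adaline (LMS) rule that Question 24 asks you to derive, Δw = η(t − net)·x, can be sketched numerically; the bipolar AND data and the learning rate below are illustrative assumptions, not part of the question:

```python
import numpy as np

# Adaline / LMS sketch: weights are corrected toward the *linear*
# output net, not the thresholded output.
X = np.array([[1.0, 1], [1, -1], [-1, 1], [-1, -1]])  # bipolar AND inputs (assumed)
T = np.array([1.0, -1, -1, -1])
w, b, eta = np.zeros(2), 0.0, 0.1                     # eta assumed

for epoch in range(50):
    for x, t in zip(X, T):
        net = w @ x + b            # linear activation
        err = t - net
        w += eta * err * x         # gradient step on the squared error
        b += eta * err

print("w =", np.round(w, 3), "b =", np.round(b, 3))
```

Unlike the perceptron, the update uses the linear output net rather than the thresholded output, so the weights settle near the least-squares solution w ≈ [0.5, 0.5], b ≈ -0.5.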
25. Derive the weight-update rule for RBF neural networks and explain RBF nets.
26. Discuss the methods used to accelerate the learning process of the
backpropagation algorithm.
27. Calculate the new weights for the network shown, using the backpropagation algorithm, for
input pattern [0, 1], target output = 1 and learning rate = 0.25; use the binary sigmoidal
activation function.
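One backpropagation step for Question 27 can be sketched as below. The network figure is not reproduced in this sheet, so the initial weights here are hypothetical placeholders for a 2-2-1 network; substitute the values from the diagram:

```python
import numpy as np

# One backpropagation step, binary sigmoid, eta = 0.25.
def f(x):                     # binary sigmoid
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.0, 1.0])
target, eta = 1.0, 0.25

V  = np.array([[0.6, -0.1], [-0.3, 0.4]])  # input->hidden weights (assumed)
b1 = np.array([0.3, 0.5])                  # hidden biases (assumed)
w  = np.array([0.4, 0.1])                  # hidden->output weights (assumed)
b2 = -0.2                                  # output bias (assumed)

# forward pass
z = f(V @ x + b1)             # hidden activations
y = f(w @ z + b2)             # network output

# backward pass; sigmoid derivative is y * (1 - y)
delta_out = (target - y) * y * (1 - y)
delta_hid = delta_out * w * z * (1 - z)

w  += eta * delta_out * z     # hidden->output update
b2 += eta * delta_out
V  += eta * np.outer(delta_hid, x)
b1 += eta * delta_hid

print("updated w:", np.round(w, 4), "b2:", round(b2, 4))
```

After the update the output moves closer to the target; repeating the step drives the error down further.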

28. Describe the components of a self-organising map (SOM).


29. State important properties of a topographic map.
30. Discuss SOM as a feature map classifier.
31. Explain the Kohonen algorithm.
32. Consider a Kohonen net with two cluster units and five input units. The weight vectors
for the cluster units are w1 = [1.0, 0.8, 0.6, 0.4, 0.2] and w2 = [0.2, 0.4, 0.6, 0.8, 1.0]. Use the
square of the Euclidean distance to find the winning cluster unit for the input pattern
x = [0.5, 1.0, 0.5, 0.0, 0.0]. Using a learning rate of 0.2, find the new weights for the winning unit.
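Question 32 can be checked numerically with the standard Kohonen update w_new = w_old + η(x − w_old); a learning rate of 0.2 is used here:

```python
import numpy as np

# Winner selection and weight update for Question 32.
w1 = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
w2 = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
x  = np.array([0.5, 1.0, 0.5, 0.0, 0.0])
eta = 0.2

d1 = np.sum((x - w1) ** 2)        # squared Euclidean distance to w1
d2 = np.sum((x - w2) ** 2)        # squared Euclidean distance to w2
winner = w1 if d1 < d2 else w2
winner += eta * (x - winner)      # update only the winning unit
print("d1 =", round(d1, 4), "d2 =", round(d2, 4))
print("updated winner:", np.round(winner, 2))
```

Here d1 = 0.5 < d2 = 2.1, so unit 1 wins and its weights move to [0.9, 0.84, 0.58, 0.32, 0.16].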
33. Describe the architecture of LVQ.
34. Explain the algorithm and flow chart of LVQ, step by step.
35. Describe the structure of RBF.
36. State the advantages of SVMs.
37. Discuss the stability-plasticity dilemma in ART.
38. List types of ART currently in use.
39. Describe the basic structure of ART and distinguish between ART 1 and ART 2.
40. Differentiate between active and inhibited F2 units in an ART network.
41. State the importance of resonance in an ART network.
42. Given n = 4, m = 3, ρ = 0.4, L = 2, bij(0) = 1/(1+n), tji = 0. Using ART1 algorithm:
i. Cluster vector x = (1, 1, 0, 0) in at most three clusters,
ii. Cluster vector x = (0, 0, 0, 1) in at most three clusters,
iii. Cluster vector x = (1, 0, 0, 0) in at most three clusters,
iv. Cluster vector x = (0, 0, 1, 1) in at most three clusters.
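A compact ART1 sketch for Question 42. One assumption is made: the top-down weights start at 1 (the standard ART1 initialisation), since with t_ji = 0 no input could ever pass the vigilance test:

```python
import numpy as np

# ART1 sketch: n = 4 inputs, m = 3 cluster units, rho = 0.4, L = 2,
# b_ij(0) = 1/(1+n); top-down weights start at 1 (assumed).
n, m, rho, L = 4, 3, 0.4, 2.0
b = np.full((m, n), 1.0 / (1 + n))   # bottom-up weights
t = np.ones((m, n))                  # top-down weights

def present(x):
    """Present one pattern; return the index of the cluster that resonates."""
    x = np.asarray(x, float)
    inhibited = set()
    while True:
        scores = [b[j] @ x if j not in inhibited else -1.0 for j in range(m)]
        J = int(np.argmax(scores))
        if scores[J] < 0:
            return None                      # every cluster inhibited
        v = np.minimum(x, t[J])              # x AND t_J
        if v.sum() / x.sum() >= rho:         # vigilance test
            t[J] = v                         # fast learning
            b[J] = L * v / (L - 1 + v.sum())
            return J
        inhibited.add(J)                     # reset and try the next unit

patterns = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
assignments = [present(p) for p in patterns]
for p, j in zip(patterns, assignments):
    print(p, "-> cluster", j)
```

Presenting the four vectors of the question in order assigns (1,1,0,0) and (1,0,0,0) to one cluster and (0,0,0,1) and (0,0,1,1) to another (indices 0 and 1 in the code).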

43. Apply the adaptive resonance theory algorithm to the following set of vectors, using the
vigilance parameter ρ = 0.7:
i1 = [1 1 0 0 0 0 1], i2 = [0 0 1 1 1 1 0], i3 = [1 0 1 1 1 0], i4 = [0 0 0 1 1 1 0],
i5 = [1 1 0 1 1 1 0].
44. State major differences between fuzzy logic and neural networks.
45. Explain the properties of fuzzy sets.
46. Distinguish between fuzzy sets and crisp sets.
47. Explain features of membership functions.
48. Explain the operations performed on crisp sets using the given data.
X= [1, 2, 3, 4, 5, 6, 7, 8, 9]
A= [1, 2, 3, 4, 5]
B= [3, 4, 5, 6]
C = [6, 7, 8, 9]
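The usual crisp-set operations on the data of Question 48 map directly onto Python's set type; a sketch:

```python
# Crisp set operations for Question 48.
X = set(range(1, 10))     # universe {1, ..., 9}
A = {1, 2, 3, 4, 5}
B = {3, 4, 5, 6}
C = {6, 7, 8, 9}

print("A union B      :", sorted(A | B))
print("A intersect B  :", sorted(A & B))
print("complement of A:", sorted(X - A))
print("A minus C      :", sorted(A - C))
print("A intersect C  :", sorted(A & C))   # disjoint sets: empty
```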
49. Consider the two fuzzy sets A = {(a1, 0.2), (a2, 0.7), (a3, 0.4)} and
B = {(b1, 0.5), (b2, 0.6)}. Find the relation R = A × B.
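For Question 49, the relation is normally built with the min operator, μ_R(a, b) = min(μ_A(a), μ_B(b)); a NumPy sketch:

```python
import numpy as np

# Fuzzy Cartesian product R = A x B using the min operator.
mu_A = np.array([0.2, 0.7, 0.4])    # memberships of a1, a2, a3
mu_B = np.array([0.5, 0.6])         # memberships of b1, b2

R = np.minimum.outer(mu_A, mu_B)    # R[i, j] = min(mu_A[i], mu_B[j])
print(R)
```

This gives the 3 x 2 relation matrix [[0.2, 0.2], [0.5, 0.6], [0.4, 0.4]].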
50. Explain the centre-of-gravity defuzzification method with an example.
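A discretised centre-of-gravity (centroid) sketch for Question 50, z* = Σ μ(z)·z / Σ μ(z); the output domain and aggregated membership values below are illustrative assumptions:

```python
import numpy as np

# Centre-of-gravity defuzzification over a discretised output domain.
z  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # output values (illustrative)
mu = np.array([0.1, 0.4, 0.8, 0.4, 0.1])   # aggregated memberships (illustrative)

z_star = np.sum(mu * z) / np.sum(mu)       # weighted average
print("crisp output:", z_star)
```

For this symmetric membership profile the crisp output is the centre of the domain, z* = 3.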
51. Describe any five defuzzification methods used in medical neural fuzzy logic systems.
52. Explain the basic fuzzy inference algorithm, step by step.
53. Describe the architecture and operation of neural-fuzzy logic systems.
54. In the McCulloch-Pitts model of the perception of heat and cold, a cold stimulus applied
at times t-2 and t-1 is perceived as cold at time t. Can you modify the net to require the cold
stimulus to be applied for three time steps before cold is felt?
55. Describe any three types of membership functions used in neural fuzzy logic systems.
56. Describe the role of neural networks in the following medical imaging processes:
i. Pre-processing
ii. Segmentation
iii. Object detection and classification
57. Explain the deep learning architecture used in medical image processing.
58. Explain the role of neural networks in diagnosis of breast tumours using sonographic
images.
59. Explain how neural networks are trained using the labelled mammogram datasets.
60. Describe the architecture of probabilistic neural networks (PNN).
61. Explain how probabilistic neural networks (PNN) are applied in ECG and EMG pattern
recognition.
62. Describe the role of multilayer perceptron (MLP) networks in cardiovascular disease
prediction, diagnosis and prevention.
63. Explain the challenges of applying neural networks in patient monitoring systems.
64. Describe the application of artificial neural networks in medical fields below:
i. Clinical Diagnosis,
ii. Precision Medicine,
iii. Medical image analysis and Interpretation,
iv. Biochemical Analysis,
v. Drug Development.
