Deep Learning from first principles –
Second Edition
In vectorized Python, R and Octave
This book is dedicated to the memory of my late Mom and Dad who
continue to be the force behind all my actions
This book is also dedicated to my wife Shanthi for her support and for
giving me the space to work, and finally to my daughter, Shreya, for
bringing joy to my life.
Table of Contents
Preface
Introduction
1. Logistic Regression as a Neural Network
2. Implementing a simple Neural Network
3. Building an L-Layer Deep Learning Network
4. Deep Learning network with the Softmax
5. MNIST classification with Softmax
6. Initialization, regularization in Deep Learning
7. Gradient Descent Optimization techniques
8. Gradient Check in Deep Learning
1. Appendix A
2. Appendix 1 – Logistic Regression as a Neural Network
3. Appendix 2 - Implementing a simple Neural Network
4. Appendix 3 - Building an L-Layer Deep Learning Network
5. Appendix 4 - Deep Learning network with the Softmax
6. Appendix 5 - MNIST classification with Softmax
7. Appendix 6 - Initialization, regularization in Deep Learning
8. Appendix 7 - Gradient Descent Optimization techniques
9. Appendix 8 – Gradient Check
References
Preface
You don’t understand anything until you learn it more than one way.
– Marvin Minsky
The last decade and a half has witnessed some remarkable
advancements in the area of Deep Learning. This area of AI has
proliferated into many branches - Deep Belief Networks, Recurrent
Neural Networks, Convolutional Neural Networks, Adversarial
Networks, Reinforcement Learning, Capsule Networks, and the list
goes on. These years have also seen Deep Learning move from the
research labs closer to home, thanks to progress in hardware, storage
and cloud technology.
One common theme when you listen to Deep Learning pundits is that
in order to get a good grasp of the Deep Learning domain, it is
essential that you learn to build such a network from scratch. It is
towards that end that this book was written.
In this book, I implement Deep Learning Networks from the basics.
Each successive chapter builds upon the implementations of the
previous chapters so that by chapter 7, I have a full-fledged, generic
L-Layer Deep Learning network, with all the bells and whistles. All
the necessary derivations required for implementing a multi-layer
Deep Learning network are included in the chapters. Detailed
derivations for the forward propagation and backward propagation cycles
with relu, tanh and sigmoid hidden layer units, and sigmoid and
softmax output activation units, are included. These may serve to jog
the memory of those whose undergrad calculus is a little rusty.
The first chapter derives and implements logistic regression as a
neural network in Python, R and Octave. The second chapter deals
with the derivation and implementation of the most primitive neural
network, one with just one hidden layer. The third chapter extends
the principles of the 2nd chapter and implements an L-Layer Deep
Learning network with the sigmoid activation in vectorized Python, R
and Octave. This implementation can include an arbitrary number of
hidden units and any number of hidden layers for the sigmoid
activation output layer. The fourth chapter introduces the Softmax
function required for multi-class classification. The Jacobian of the
Softmax and the cross-entropy loss are derived, and this is then
implemented to demonstrate multi-class classification of a simple
spiral data set. The fifth chapter incorporates the softmax
implementation of the fourth chapter into the L-Layer implementation
of the 3rd chapter. With this enhancement, the fifth chapter classifies
MNIST digits using a Softmax output activation unit in a generic
L-Layer implementation.
The sixth chapter addresses different initialization techniques like He
and Xavier initialization. Further, this chapter also discusses and
implements the L2 regularization and random dropout techniques. The
seventh chapter looks at gradient descent optimization techniques like
learning rate decay, momentum, rmsprop, and adam. The eighth
chapter discusses a critical technique required to ensure the
correctness of the backward propagation implementation. Specifically,
this chapter discusses and implements ‘gradient checking’ and also
demonstrates how to find bugs in your implementation.
All the chapters include vectorized implementations in Python, R and
Octave. The implementations are identical. So, if you are conversant
in any one of the languages, you can look at the implementations in
any other language. It should be a good way to learn the other
language.
Note: The functions that are invoked in each of the chapters are
included in Appendix 1-Appendix 8.
Feel free to check out the implementations by playing around with the
hyper-parameters. A good way to learn is to take the code apart. You
could also try to enhance the implementation to include other
activation functions like the leaky relu, parametric relu etc. Maybe,
you could work on other regularization or gradient descent
optimization methods. There may also be opportunities to optimize
my code with further vectorization of functions.
This book is largely based on Prof Andrew Ng’s Deep Learning
Specialization (https://www.coursera.org/specializations/deep-
learning).
I would like to thank Prof Andrew Ng and Prof Geoffrey Hinton for
making the apparent complexity of the Deep Learning subject into
remarkably simple concepts through their courses.
I hope this book sets you off on an exciting and challenging journey
into the Deep Learning domain.
Tinniam V Ganesh
16 May 2018
Introduction
This is the second edition of my book ‘Deep Learning from first
principles: Second Edition – In vectorized Python, R and Octave’.
Since this book has about 70% code, I wanted to make the code more
readable. Hence, in this second edition, I have changed all the code to
use the fixed-width font Lucida Console. This makes the code more
organized and easier to absorb. I have also included line
numbers for all functions and code snippets. Finally, I have corrected
some of the typos in the book.
Other books by the author (available on Amazon in paperback and
kindle versions)
1. Practical Machine Learning with R and Python: Second Edition
– Machine Learning in stereo
2. Beaten by sheer pace: Third Edition – Cricket analytics with
yorkr
3. Cricket analytics with cricketr: Third Edition
1. Logistic Regression as a Neural
Network
“You don’t perceive objects as they are. You perceive them as you
are.”
“Your interpretation of physical objects has everything to do with the
historical trajectory of your brain – and little to do with the objects
themselves.”
“The brain generates its own reality, even before it receives
information coming in from the eyes and the other senses. This is
known as the internal model.”
David Eagleman - The Brain: The Story of You
The cancer data set has 30 input features, and the target variable
‘output’ is either 0 or 1. Hence, the sigmoid activation function will be
used in the output layer for classification.
The Loss, when the sigmoid function is used in the output layer, is
given by
$L = -\{y \log(a) + (1 - y)\log(1 - a)\}$   -(1)
3. Gradient Descent
3.1 Forward propagation
The forward propagation cycle of the Neural Network computes the
output $Z = w^T x + b$ and the activation $a = \sigma(Z)$ with the
sigmoid activation function. Then, using the target output ‘y’ for the
given features, the ‘Loss’ is computed using equation (1) above.
3.4 Derivative of sigmoid
Let $a = \sigma(z) = \frac{1}{1 + e^{-z}}$, then
$\frac{da}{dz} = \frac{e^{-z}}{(1 + e^{-z})^2}$
and $1 - a = \frac{e^{-z}}{1 + e^{-z}}$
Therefore $\frac{da}{dz} = a(1 - a)$   -(2)
The 3 equations for the 2 layer Neural Network representation of
Logistic Regression are
$Z = w^T x + b$   -(a)
$a = \sigma(Z)$   -(b)
$L = -\{y \log(a) + (1 - y)\log(1 - a)\}$   -(c)
where L is the loss for the sigmoid output activation function.
By chain rule
$\frac{\partial L}{\partial Z} = \frac{\partial L}{\partial a} \frac{\partial a}{\partial Z}$
with
$\frac{\partial L}{\partial a} = -\frac{y}{a} + \frac{1 - y}{1 - a}$   -(d)
$\frac{\partial a}{\partial Z} = a(1 - a)$   -(e)
therefore, substituting the results of (d) & (e) into the equation above,
we get
$\frac{\partial L}{\partial Z} = a - y$   -(f)
Finally
$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial Z} \frac{\partial Z}{\partial w}$   -(g)
$\frac{\partial Z}{\partial w} = x$   -(h)
and from (f) we have $\frac{\partial L}{\partial Z} = a - y$.
Therefore (g) reduces to
$\frac{\partial L}{\partial w} = x(a - y)$   -(i)
Also
$\frac{\partial L}{\partial b} = \frac{\partial L}{\partial Z} \frac{\partial Z}{\partial b}$   -(j)
Since $\frac{\partial Z}{\partial b} = 1$, using (f) in (j) we get
$\frac{\partial L}{\partial b} = a - y$
Gradient descent then updates the weights at the input layer and the
corresponding bias by using the values of $\frac{\partial L}{\partial w}$
and $\frac{\partial L}{\partial b}$.
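The gradient computation of this section maps directly to a vectorized implementation. The sketch below is illustrative rather than the book's code: the helper name `propagate` and the shapes (features x samples for X, 1 x samples for Y) are assumptions consistent with the chapter's Python snippets.

```python
import numpy as np

def sigmoid(z):
    # Logistic (sigmoid) activation
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    # X: (features, m), Y: (1, m), w: (features, 1)
    # Returns the gradients dw, db and the cross-entropy cost
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)                           # forward pass: a = sigmoid(w.T x + b)
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))  # loss of equation (1)
    dZ = A - Y                                                # dL/dZ = a - y
    dw = np.dot(X, dZ.T) / m                                  # dL/dw = x(a - y), averaged over m samples
    db = np.sum(dZ) / m                                       # dL/db = a - y, averaged over m samples
    return dw, db, cost
```

A gradient-descent loop would then repeatedly call `propagate` and update `w -= learningRate * dw` and `b -= learningRate * db`.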
return yPredicted
# Create weight vectors of zeros. The size is the number of features in the data set (30)
w=np.zeros((X_train.shape[1],1))
#w=np.zeros((30,1))
b=0
# Run gradient descent for 4000 times and compute the weights
parameters, grads, costs, idx = gradientDescent(w, b, X_train2, y_train2,
numIterations=4000, learningRate=0.75)
w = parameters["w"]
b = parameters["b"]
# Normalize X_test
X_test1=normalize(X_test)
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=X_test1.T
#Reshape y_test
y_test1=y_test.reshape(len(y_test),1)
y_test2=y_test1.T
Note: The accuracy on the training and test sets is 90.37% and
89.51% respectively. This is poorer than the 96% which the logistic
regression of sklearn achieves! But this is mainly because of the
absence of hidden layers, which are the real power of neural networks.
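For comparison, the sklearn figure quoted in the note can be approximated along these lines. The exact accuracy depends on the train/test split; `random_state=42`, `test_size=0.3` and `max_iter=5000` are assumptions here, not the book's settings.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The breast-cancer data set bundled with scikit-learn has the same
# 30 input features described at the start of this chapter.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```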
5. Neural Network for Logistic
Regression -R code
source("RFunctions-1.R")
# Define the sigmoid function
sigmoid <- function(z){
a <- 1/(1+ exp(-z))
a
}
return(yPredicted)
}
#Initialize w and b
w=zeros(size(X_train)(1,2),1);
b=0;
#Normalize training
X_train1=normalize(X_train);
X_train2=X_train1';
y_train1=y_train';
# Normalize X_test
X_test1=normalize(X_test);
#Transpose X_train so that we have a matrix as (features, numSamples)
X_test2=X_test1';
y_test1=y_test';
# Use the values of the weights generated from Gradient Descent
yPredictionTest = predict(w1, b1, X_test2);
yPredictionTrain = predict(w1, b1, X_train2);
#Compute Accuracy
trainAccuracy=100-mean(abs(yPredictionTrain - y_train1))*100
testAccuracy=100- mean(abs(yPredictionTest - y_test1))*100
trainAccuracy = 90.845
testAccuracy = 89.510
graphics_toolkit('gnuplot')
plot(idx,losses);
title ('Gradient descent- Cost vs No of iterations');
xlabel ("No of iterations");
ylabel ("Cost");
Conclusion
This chapter starts with a simple 2-layer Neural Network
implementation of Logistic Regression. Clearly, the performance of
this simple Neural Network is poor compared to the highly optimized
sklearn Logistic Regression. This is because the above neural network
did not have any hidden layers. Deep Learning & Neural Networks
achieve extraordinary performance because of the presence of deep
hidden layers.
2. Implementing a simple Neural
Network
“What does the world outside your head really ‘look’ like? Not only is
there no color, there’s also no sound: the compression and expansion
of air is picked up by the ears and turned into electrical signals. The
brain then presents these signals to us as mellifluous tones and
swishes and clatters and jangles. Reality is also odorless: there’s no
such thing as smell outside our brains. Molecules floating through the
air bind to receptors in our nose and are interpreted as different
smells by our brain. The real world is not full of rich sensory events;
instead, our brains light up the world with their own sensuality.”
– The Brain: The Story of You, by David Eagleman
“The world is Maya, illusory. The ultimate reality, the Brahman, is all-
pervading and all-permeating, which is colourless, odourless,
tasteless, nameless and formless“
Bhagavad Gita
1. Introduction
In the first chapter, I implemented Logistic Regression, in vectorized
Python, R and Octave, with a wannabe Neural Network (a Neural
Network with no hidden layers). In this second chapter, I implement a
regular, but somewhat primitive Neural Network, (a Neural Network
with just 1 hidden layer). This chapter implements classification of
manually created datasets, where the different clusters of the 2 classes
are not linearly separable.
You can clone and fork the vectorized implementations of the 3 layer
Neural Network for Python, R and Octave from Github at
DeepLearningFromFirstPrinciples
(https://github.com/tvganesh/DeepLearningFromFirstPrinciples/tree/master/Chap2
SimpleNeuralNetwork)
2. The 3 layer Neural Network
A simple representation of a 3 layer Neural Network (NN) with 1
hidden layer is shown below.
In the above Neural Network, there are two input features at the input
layer, three hidden units at the hidden layer, and one output unit at
the output layer, as it deals with binary classification. The activation
unit at the hidden layer can be a tanh, sigmoid, relu etc. At the output
layer, the activation is a sigmoid to handle binary classification.
Hence $z_1 = W_1 x + b_1$
and $a_1 = \tanh(z_1)$
Similarly $z_2 = W_2 a_1 + b_2$
and $a_2 = \sigma(z_2)$
Using the values of the derivatives of $\sinh x$ and $\cosh x$ from (b) above
we get
$\frac{d}{dx}\tanh x = \frac{\cosh^2 x - \sinh^2 x}{\cosh^2 x} = 1 - \tanh^2 x$
Since $a_1 = \tanh(z_1)$,
$\frac{\partial a_1}{\partial z_1} = 1 - a_1^2$   -(d)
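The forward pass of this 2-input, 3-hidden-unit network, with a tanh hidden layer and a sigmoid output as described above, can be sketched as follows. The function name `forward_prop` and the shapes are illustrative assumptions.

```python
import numpy as np

def forward_prop(X, W1, b1, W2, b2):
    # X: (2, m), W1: (3, 2), b1: (3, 1), W2: (1, 3), b2: (1, 1)
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)                 # tanh activation at the hidden layer
    Z2 = np.dot(W2, A1) + b2
    A2 = 1.0 / (1.0 + np.exp(-Z2))   # sigmoid at the output layer
    return A1, A2
```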
II) Derivatives
The log loss is given below
$L = -\{y \log(a_2) + (1 - y)\log(1 - a_2)\}$
$\frac{\partial L}{\partial a_2} = -\frac{y}{a_2} + \frac{1 - y}{1 - a_2}$   -(g)
$\frac{\partial a_2}{\partial z_2} = a_2(1 - a_2)$   -(h)
$\frac{\partial L}{\partial z_2} = \frac{\partial L}{\partial a_2}\frac{\partial a_2}{\partial z_2} = a_2 - y$   -(i)
Using the results of (i) and (e) we get
$\frac{\partial L}{\partial W_2} = (a_2 - y)\, a_1$   -(j)
$\frac{\partial L}{\partial b_2} = a_2 - y$   -(k)
And
$\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial z_2}\frac{\partial z_2}{\partial a_1}\frac{\partial a_1}{\partial z_1} = W_2^T (a_2 - y)(1 - a_1^2)$
Simplifying we get
$\frac{\partial L}{\partial W_1} = W_2^T (a_2 - y)(1 - a_1^2)\, x$   -(l) and
$\frac{\partial L}{\partial b_1} = W_2^T (a_2 - y)(1 - a_1^2)$   -(m)
The weights and biases (W1,b1,W2,b2) are updated for each iteration
thus minimizing the loss/cost.
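The backward pass and the per-iteration update just described can be sketched in vectorized Python. The helper name `backward_and_update` and the learning rate are illustrative assumptions; the tanh derivative `1 - A1**2` follows the derivation above.

```python
import numpy as np

def backward_and_update(X, Y, A1, A2, W1, b1, W2, b2, lr=0.1):
    # Shapes: X (n_x, m), Y (1, m), A1 (n_h, m), A2 (1, m)
    m = X.shape[1]
    dZ2 = A2 - Y                               # output-layer error
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = np.dot(W2.T, dZ2) * (1 - A1 ** 2)    # tanh'(z1) = 1 - tanh(z1)^2
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    # One gradient-descent step on all four parameters
    return (W1 - lr * dW1, b1 - lr * db1,
            W2 - lr * dW2, b2 - lr * db2)
```

Calling this once per iteration, with the activations from the forward pass, performs the weight and bias updates that minimize the loss.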
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model
from sklearn.datasets import make_blobs
colors=['black','gold']
cmap = matplotlib.colors.ListedColormap(colors)
X, y = make_blobs(n_samples = 400, n_features = 2, centers = 7,
cluster_std = 1.3, random_state = 4)
#Create 2 classes
y=y.reshape(400,1)
y=y%2
source("DLfunctions2_1.R")
z <- as.matrix(read.csv("data.csv",header=FALSE))
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
[1093] c. 7.
[1094] Majestas est summa in cives ac subditos legibusque soluta potestas.
[1095] Hoc tamen singulare videri possit, quod, quæ leges populi rogatione
ac principis jussu feruntur, non aliter quam populi comitiis abrogari
possunt. Id enim Dellus Anglorum in Gallia legatus mihi confirmavit;
idem tamen confitetur legem probari aut respui consuevisse contra
populi voluntatem utcunque principi placuerit. He is evidently perplexed
by the case of England; and having been in this country before the
publication of his Latin edition, he might have satisfied himself on the
subject.
[1096] c. 9 and 10.
Rise and fall of states. 57. Perhaps the best chapter in the Republic of
Bodin is the first in the fourth book, on the rise, progress, stationary
condition, revolutions, decline, and fall of states. A
commonwealth is said to be changed when its form of polity is
altered; for its identity is not to be determined by the long standing
of the city walls; but when popular government becomes monarchy,
or aristocracy is turned to democracy, the commonwealth is at an
end. He thus uses the word respublica in the sense of polity or
constitution, which is not, I think, correct, though sanctioned by
some degree of usage, and leaves his proposition a tautological
truism. The extinction of states may be natural or violent, but in one
way or the other it must happen, since there is a determinate period
to all things, and a natural season in which it seems desirable that
they should come to an end. The best revolution is that which takes
place by a voluntary cession of power.
Causes of revolutions. 58. As the forms of government are three, it
follows that the possible revolutions from one to another are six. For
anarchy is the extinction of a government, not a revolution in it. He
proceeds to develop the causes of revolutions with great extent of
historical learning and with judgment, if not with so much acuteness
or so much vigour of style as Machiavel. Great misfortunes in war, he
observes, have a tendency to change popular rule to aristocracy, and
success has an opposite effect; the same seems applicable to all
public adversity and prosperity. Democracy, however, more
commonly ends in monarchy, as monarchy does in democracy,
especially when it has become tyrannical; and such changes are
usually accompanied by civil war or tumult. Nor can aristocracy, he
thinks, be changed into democracy without violence, though the
converse revolution sometimes happens quietly, as when the
labouring classes and traders give up public affairs to look after their
own; in this manner Venice, Lucca, Ragusa, and other cities have
become aristocracies. The great danger for an aristocracy is, that
some ambitious person, either of their own body or of the people,
may arm the latter against them: and this is most likely to occur,
when honours and magistracy are conferred on unworthy men,
which affords the best topic to demagogues, especially where the
plebeians are wholly excluded: which, though always grievous to
them, is yet tolerable so long as power is intrusted to deserving
persons; but when bad men are promoted, it becomes easy to excite
the minds of the people against the nobility, above all, if there are
already factions among the latter, a condition dangerous to all
states, but mostly to an aristocracy. Revolutions are more frequent in
small states, because a small number of citizens is easily split into
parties; hence we shall find in one age more revolutions among the
cities of Greece or Italy than have taken place during many in the
kingdoms of France or Spain. He thinks the ostracism of dangerous
citizens itself dangerous, and recommends rather to put them to
death, or to render them friends. Monarchy, he observes, has this
peculiar to it, that if the king be a prisoner, the constitution is not
lost; whereas, if the seat of government in a republic be taken, it is
at an end, the subordinate cities never making resistance. It is
evident that this can only be applicable to the case, hitherto the
more common one, of a republic, in which the capital city entirely
predominates. “There is no kingdom which shall not, in continuance
of time, be changed, and at length also be overthrown. But it is best
for them who least feel their changes by little and little made,
whether from evil to good, or from good to evil.”
Astrological fancies of Bodin. 59. If this is the best, the next is the
worst chapter in Bodin. It professes to inquire, whether the revolutions
of states can be foreseen. Here he considers, whether the
stars have such an influence on human affairs, that political changes
can be foretold by their means, and declares entirely against it, with
such expressions as would seem to indicate his disbelief in astrology.
If it were true, he says, that the conditions of commonwealths
depended on the heavenly bodies, there could be yet no certain
prediction of them; since the astrologers lay down their observations
with such inconsistency, that one will place the same star in direct
course at the moment that another makes it retrograde. It is obvious
that any one who could employ this argument, must have perceived
that it destroys the whole science of astrology. But, after giving
instances of the blunders and contradictions of these pretended
philosophers, he so far gives way as to admit that, if all the events
from the beginning of the world could be duly compared with the
planetary motions, some inferences might be deduced from them;
and thus giving up his better reason to the prejudices of his age, he
acknowledges astrology as a theoretical truth. The hypothesis of
Copernicus he mentions as too absurd to deserve refutation; since,
being contrary to the tenets of all theologians and philosophers and
to common sense, it subverts the foundations of every science. We
now plunge deeper into nonsense; Bodin proceeding to a long
arithmetical disquisition, founded on a passage in Plato, ascribing
the fall of states to want of proportion.[1108]
[1108] c. 2.
Influence of climate on government. 63. The first chapter of the fifth
book, on the adaptation of government to the varieties of race and
climate, has excited more attention than most others, from its being
supposed to have given rise to a theory of Montesquieu. In fact,
however, the general principle is more ancient; but no one had
developed it so fully as Bodin. Of this he seems to be aware. No one,
he says, has hitherto treated on this important subject, which should
always be kept in mind, lest we establish institutions not suitable to
the people, forgetting that the laws of nature will not bend to the
fancy of man. He then investigates the peculiar characteristics of the
northern, middle, and southern nations, as to physical and moral
qualities. Some positions he has laid down erroneously; but, on the
whole, he shows a penetrating judgment and comprehensive
generalisation of views. He concludes that bodily strength prevails
towards the poles, mental power towards the tropics; and that the
nations lying between partake in a mixed ratio of both. This is not
very just; but he argues from the great armies that have come from
the north, while arts and sciences have been derived from the south.
There is certainly a considerable resemblance to Montesquieu in this
chapter; and like him, with better excuse, Bodin accumulates
inaccurate stories. Force prevails most with the northerns, reason
with the inhabitants of a temperate or middle climate, superstition
with the southerns; thus astrology, magic, and all mysterious
sciences have come from the Chaldeans and Egyptians. Mechanical
arts and inventions, on the other hand, flourish best in northern
countries, and the southerns hardly know how to imitate them, their
genius being wholly speculative, nor have they so much industry,
quickness in perceiving what is to be done, or worldly prudence. The
stars appear to exert some influence over national peculiarities; but
even in the same latitudes great variety of character is found, which
arises from a mountainous or level soil, and from other physical
circumstances. We learn by experience, that the inhabitants of hilly
countries and the northern nations generally love freedom, but
having less intellect than strength, submit readily to the wisest
among them. Even winds are not without some effect on national
character. But the barrenness or fertility of the soil is more
important; the latter producing indolence and effeminacy, while one
effect of a barren soil is to drive the people into cities, and to the
exercise of handicrafts for the sake of commerce, as we see at
Athens and Nuremburg, the former of which may be contrasted with
Bœotia.
64. Bodin concludes, after a profusion of evidence drawn from the
whole world, that it is necessary not only to consider the general
character of the climate as affecting an entire region, but even the
peculiarities of single districts, and to inquire what effects may be
wrought on the dispositions of the inhabitants by the air, the water,
the mountains and valleys, or prevalent winds, as well as those
which depend on their religion, their customs, their education, their
form of government; for whoever should conclude alike as to all who
live in the same climate would be frequently deceived; since, in the
same parallel of latitude, we may find remarkable differences even
of countenance and complexion. This chapter abounds with proofs
of the comprehension as well as patient research which distinguishes
Bodin from every political writer who had preceded him.
Means of obviating inequality. 65. In the second chapter, which
inquires how we may avoid the revolutions which an excessive
inequality of possessions tends to produce, he inveighs against a
partition of property, as inconsistent with civil society, and against an
abolition of debts, because there can be no justice where contracts
are not held inviolable; and observes, that it is absurd to expect a
division of all possessions to bring about tranquillity. He objects also
to any endeavour to limit the number of the citizens, except by
colonisation. In deference to the authority of the Mosaic law, he is
friendly to a limited right of primogeniture, but disapproves the
power of testamentary dispositions, as tending to inequality, and the
admission of women to equal shares in the inheritance, lest the
same consequence should come through marriage. Usury he would
absolutely abolish, to save the poorer classes from ruin.
Confiscations—rewards. 66. Whether the property of condemned
persons shall be confiscated is a problem, as to which, having given
the arguments on both sides, he inclines to a middle course, that the
criminal’s own acquisitions should be forfeited, but what has
descended from his ancestors should pass to his posterity. He speaks
with great freedom against unjust prosecutions, and points out the
dangers of the law of forfeiture.[1114] In the next, being the fourth
chapter of this book, he treats of rewards and punishments. All
states depend on the due distribution of these; but, while many
books are full of the latter, few have discussed the former, to which
he here confines himself. Triumphs, statues, public thanks, offices of
trust and command, are the most honourable; exemptions from
service or tribute, privileges, and the like, the most beneficial. In a
popular government, the former are more readily conceded than the
latter; in a monarchy, the reverse. The Roman triumph gave a
splendour to the republic itself. In modern times the sale of nobility,
and of public offices, renders them no longer so honourable as they
should be. He is here again very free-spoken as to the conduct of
the French, and of other governments.[1115]
[1114] c. 3.
[1115] c. 4.
Fortresses. 67. The advantage of warlike habits to a nation, and the
utility of fortresses, are then investigated. Some have objected to
the latter, as injurious to the courage of the people, and of little
service against an invader; and also, as furnishing opportunities to
tyrants and usurpers, or occasionally to rebels. Bodin, however,
inclines in their favour, especially as to those on the frontier, which
may be granted as feudal benefices, but not in inheritance. The
question of cultivating a military spirit in the people depends on the
form of polity: in popular states it is necessary; in an aristocracy,
unsafe. In monarchies, the position of the state with respect to its
neighbours is to be considered. The capital city ought to be strong in
a republic, because its occupation is apt to carry with it an entire
change in the commonwealth. But a citadel is dangerous in such a
state. It is better not to suffer castles, or strongholds of private men,
as is the policy of England; unless when the custom is so
established, that they cannot be dismantled without danger to the
state.[1116]
[1116] c. 5.
Cujacius, an interpreter of law rather than a lawyer. 78. Such was the
renown of Cujacius that, in the public schools of Germany, when his
name was mentioned, every one took off his hat.[1128] The continual bickerings
of his contemporaries, not only of the old Accursian school, among
whom Albericus Gentilis was prominent in disparaging him, but of
those who had been trained in the steps of Alciat like himself, did
not affect this honest admiration of the general student.[1129] But we
must not consider Cujacius exactly in the light of what we now call a
great lawyer. He rejected all modern forensic experience with scorn,
declaring that he had misspent his youth in such studies. We have,
indeed, fifty of his consultations which appear to be actual cases.
But, in general, it is observed by Gravina that both he and the
greatest of his disciples “are but ministers of ancient jurisprudence,
hardly deigning to notice the emergent questions of modern
practice. Hence, while the elder jurists of the school of Bartolus,
deficient as they are in expounding the Roman laws, yet apply them
judiciously to new cases, these excellent interpreters hardly regard
anything modern, and leave to the others the whole honour of
advising and deciding rightly.” Therefore he recommends that the
student who has imbibed the elements of Roman jurisprudence in all
their purity from the school of Cujacius, should not neglect the
interpretations of Accursius in obscure passages; and, above all,
should have recourse to Bartolus and his disciples for the arguments,
authorities, and illustrations which ordinary forensic questions will
require.[1130]
[1128] Gennari, p. 246. Biogr. Univ.
[1129] Heineccius, ibid. Gennari, p. 242.
[1130] Gravina, p. 222, 230.