A REPORT ON
Comparative Analysis of K-Nearest Neighbor and
Bayesian Networks as Learning Mechanism
Submitted by
Himanshu KUMAR
(IT Department)
ACKNOWLEDGEMENT
ABSTRACT
INTRODUCTION
METHODOLOGY
DISCUSSION
CONCLUSION
APPLICATION
BIBLIOGRAPHY
ABSTRACT
There is no fixed value for K; however, one standard choice is K = 5, i.e., the 5 neighbors
closest to the new data point take part in the majority-voting process. To avoid ties
between two classes of a data set, an odd value of K is generally preferred. Another
common, formula-based choice of K is
K = √n,    (1)
where n is the overall count of data points.
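As a quick illustration of these two choices of K, the minimal sketch below (in Python, on a toy data set) picks K as the odd value nearest to √n and then classifies a query point by majority voting over its K closest neighbors. The helper names choose_k and knn_predict are assumptions made for this illustration, not part of the report.

# Minimal KNN sketch: rule-of-thumb K and majority voting (illustrative only).
import math
from collections import Counter


def choose_k(n):
    """Rule of thumb K = sqrt(n), bumped to the next value if needed to keep K odd."""
    k = max(1, round(math.sqrt(n)))
    return k if k % 2 == 1 else k + 1


def knn_predict(train_points, train_labels, query, k):
    """Classify `query` by majority vote among its k closest training points."""
    # Euclidean distance from the query to every training point.
    dists = [(math.dist(query, p), label)
             for p, label in zip(train_points, train_labels)]
    dists.sort(key=lambda t: t[0])
    nearest_labels = [label for _, label in dists[:k]]
    # Majority voting among the k nearest neighbors.
    return Counter(nearest_labels).most_common(1)[0][0]


if __name__ == "__main__":
    points = [(1, 1), (1, 2), (2, 1), (6, 6), (6, 7), (7, 6)]
    labels = ["A", "A", "A", "B", "B", "B"]
    k = choose_k(len(points))   # round(sqrt(6)) = 2, bumped to 3 to stay odd
    print(knn_predict(points, labels, (2, 2), k))   # expected: "A"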
•
Root node: The initial node of the decision tree, from which the entire data set
starts getting divided into various possible subsets that are homogeneous in nature.
•
Parent and child nodes: A node that splits into sub-nodes is called the parent
node, whereas the nodes derived from it are called child nodes [40]. A simple
representation of these nodes is sketched below.
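To make the node terminology concrete, the following minimal Python sketch represents a tree node with an attribute to split on, an optional class label for leaves, and a dictionary of child nodes. The class and field names here are illustrative assumptions, not taken from the report.

# Illustrative node structure for a decision tree (not the report's notation).
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class TreeNode:
    attribute: Optional[str] = None   # attribute this node splits on (None for a leaf)
    label: Optional[str] = None       # predicted class if this node is a leaf
    children: Dict[str, "TreeNode"] = field(default_factory=dict)  # value -> child node


# The root node conceptually holds the entire data set; each split creates
# child nodes, and the node being split is their parent.
root = TreeNode(attribute="Outlook")
root.children["Sunny"] = TreeNode(label="No")    # child (leaf) node
root.children["Rain"] = TreeNode(label="Yes")    # child (leaf) node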
5.3. Attribute selection measures
An attribute selection measure (ASM) is used to select the optimum attribute for the
root node as well as for the sub-nodes. The two major practices for ASM are
Information Gain and the Gini Index.
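As a concrete example of one ASM, the sketch below computes information gain (entropy reduction) and uses it to pick the best attribute. It assumes the data set is a list of Python dictionaries whose class value is stored under a "label" key; that layout and the helper names are assumptions made for this illustration.

# Illustrative ASM: information gain over a list-of-dicts data set.
import math
from collections import Counter


def entropy(rows, target="label"):
    """Shannon entropy of the class distribution in `rows`."""
    counts = Counter(r[target] for r in rows)
    total = len(rows)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def information_gain(rows, attribute, target="label"):
    """Entropy reduction obtained by splitting `rows` on `attribute`."""
    total = len(rows)
    split_entropy = 0.0
    for value in {r[attribute] for r in rows}:
        subset = [r for r in rows if r[attribute] == value]
        split_entropy += (len(subset) / total) * entropy(subset, target)
    return entropy(rows, target) - split_entropy


def best_attribute(rows, attributes, target="label"):
    """The ASM step: pick the attribute with the highest information gain."""
    return max(attributes, key=lambda a: information_gain(rows, a, target))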
•
The root node, say X, which contains the entire data set, is the starting point of the tree.
•
Using an ASM, look for the best-matching attribute in the data set.
•
Develop the decision tree nodes by splitting on that ideal attribute, then repeat the
process on each resulting subset (a recursive sketch of this procedure follows below).
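The following minimal sketch shows this recursive construction. It assumes the TreeNode class and the best_attribute helper from the earlier sketches are in scope; all names are illustrative rather than taken from the report.

# Illustrative recursive tree building: root set -> ASM pick -> split -> recurse.
from collections import Counter


def build_tree(rows, attributes, target="label"):
    """Grow a decision tree from `rows`, selecting each split with the ASM."""
    labels = [r[target] for r in rows]
    # Stop if the subset is pure or no attributes remain: create a leaf node.
    if len(set(labels)) == 1 or not attributes:
        return TreeNode(label=Counter(labels).most_common(1)[0][0])
    # ASM step: choose the attribute that best separates the classes.
    attr = best_attribute(rows, attributes, target)
    node = TreeNode(attribute=attr)
    for value in {r[attr] for r in rows}:
        subset = [r for r in rows if r[attr] == value]
        remaining = [a for a in attributes if a != attr]
        node.children[value] = build_tree(subset, remaining, target)
    return node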
BIBLIOGRAPHY
•
Class notes
•
Website: www.google.com
•
https://www.javatpoint.com/machine-learning
•
https://images.app.goo.gl/eLBR6gBjRGnSyJ7S9
•
Google Scholar