CIT 907

JAVA APPLICATION

ARRAY APPLICATION

• Vectors and lists, which are core components of the C++ STL, are implemented using arrays.

• Stacks and queues can also be implemented using arrays; a stack sketch follows this list.

• Matrices, which are a core component of the mathematics library in every programming language, are implemented using arrays.

• Trees also use array-based implementations wherever possible, because arrays are easier to manage than pointers.
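As a small illustration of the stack case, here is a minimal sketch of an array-backed stack in Java (the class and method names are ours, chosen for illustration):

// A minimal fixed-capacity stack backed by a plain array.
public class ArrayStack {
    private final int[] items;
    private int size = 0; // number of elements currently on the stack

    public ArrayStack(int capacity) {
        items = new int[capacity];
    }

    public void push(int value) {
        if (size == items.length) throw new IllegalStateException("stack is full");
        items[size++] = value; // write at the next free slot, then advance the top
    }

    public int pop() {
        if (size == 0) throw new IllegalStateException("stack is empty");
        return items[--size]; // retreat the top, then read the element there
    }

    public static void main(String[] args) {
        ArrayStack stack = new ArrayStack(4);
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop()); // prints 2: last in, first out
    }
}

A single index into the backing array is all the bookkeeping the structure needs, which is why arrays are a natural fit here.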

A decision tree is a structure with a root, branches, and leaf nodes. Each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node holds a class label. The topmost node in the tree is the root node.

The decision tree for the concept "buy computer", for example, indicates whether or not a customer of a company is likely to buy a computer. Each internal node represents a test on an attribute, and each leaf node represents a class.
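The node structure described above can be sketched in Java as follows (the class and field names are ours, for illustration): an internal node stores the attribute it tests and one child per branch, while a leaf stores only a class label.

import java.util.HashMap;
import java.util.Map;

// One node of a decision tree: either an attribute test or a leaf class label.
public class TreeNode {
    String testAttribute;  // attribute tested at an internal node (null at a leaf)
    String classLabel;     // class label at a leaf node (null at an internal node)
    Map<String, TreeNode> children = new HashMap<>(); // one branch per test outcome

    boolean isLeaf() {
        return classLabel != null;
    }

    // Walk from the root to a leaf, following the branch that matches each
    // test outcome; assumes the tree has a branch for every outcome seen.
    static String classify(TreeNode node, Map<String, String> tuple) {
        while (!node.isLeaf()) {
            node = node.children.get(tuple.get(node.testAttribute));
        }
        return node.classLabel;
    }
}

For the "buy computer" tree, classify would follow one branch per attribute test until it reaches a leaf labeled with a class such as "yes" or "no".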

The advantages of a decision tree are as follows:

• It does not require any domain knowledge.

• It is simple to understand.

• The learning and classification steps of a decision tree are simple and fast.

The ID3 (Iterative Dichotomiser) decision tree algorithm was created in 1980 by the machine learning researcher J. Ross Quinlan. He later presented C4.5, ID3's successor. ID3 and C4.5 adopt a greedy strategy: the algorithm does not allow backtracking, and trees are built top-down in a divide-and-conquer manner.
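ID3 chooses its splitting attribute by information gain, which is computed from entropy: Info(D) = -sum(p_i * log2(p_i)), where p_i is the proportion of tuples in D belonging to class i. A minimal sketch of the entropy computation in Java (method and variable names are ours):

import java.util.HashMap;
import java.util.Map;

public class Entropy {
    // Shannon entropy Info(D) = -sum of p * log2(p) over the class proportions.
    static double entropy(String[] classLabels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : classLabels) {
            counts.merge(label, 1, Integer::sum);
        }
        double info = 0.0;
        for (int count : counts.values()) {
            double p = (double) count / classLabels.length;
            info -= p * (Math.log(p) / Math.log(2)); // log base 2
        }
        return info;
    }

    public static void main(String[] args) {
        // An illustrative split of 9 "yes" and 5 "no" class labels.
        String[] labels = new String[14];
        for (int i = 0; i < labels.length; i++) {
            labels[i] = i < 9 ? "yes" : "no";
        }
        System.out.println(entropy(labels)); // about 0.940
    }
}

The information gain of an attribute is then Info(D) minus the weighted entropy of the partitions the attribute induces; C4.5 refines this measure with the gain ratio.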

Algorithm: Generate_decision_tree. Generate a decision tree from the training tuples of data partition D.

Input:

• Data partition D, a set of training tuples and their associated class labels.

• attribute_list, the list of candidate attributes.

• Attribute_selection_method, a procedure for determining the splitting criterion that best partitions the data tuples into individual classes. This criterion consists of a splitting_attribute and, possibly, either a split point or a splitting subset.

Output:

A decision tree.

Method:

create a node N;

if all of the tuples in D belong to the same class, C, then return N as a leaf node labeled with the class C;

if attribute_list is empty, then return N as a leaf node labeled with the majority class in D; // majority voting

apply Attribute_selection_method(D, attribute_list) to find the best splitting_criterion;

label node N with splitting_criterion;
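The excerpt stops here, but the remaining steps of the method recurse on each partition of D. A minimal Java sketch of the whole recursive method, reusing the TreeNode class sketched earlier (Tuple and the helper methods are our own names; selectAttribute is only a placeholder for the attribute selection method):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TreeBuilder {
    // A training tuple: attribute name -> value, plus a class label.
    static class Tuple {
        Map<String, String> attributes = new HashMap<>();
        String classLabel;
    }

    // Recursive divide-and-conquer skeleton of Generate_decision_tree;
    // assumes d is non-empty.
    static TreeNode generate(List<Tuple> d, List<String> attributeList) {
        TreeNode n = new TreeNode();                     // create a node N
        if (sameClass(d)) {                              // all tuples in D share class C
            n.classLabel = d.get(0).classLabel;          // leaf labeled with C
            return n;
        }
        if (attributeList.isEmpty()) {                   // attribute_list is empty
            n.classLabel = majorityClass(d);             // leaf labeled by majority voting
            return n;
        }
        String best = selectAttribute(d, attributeList); // attribute selection method
        n.testAttribute = best;                          // label node N with the criterion
        List<String> remaining = new ArrayList<>(attributeList);
        remaining.remove(best);
        // Grow one subtree per outcome of the test on the chosen attribute.
        for (Map.Entry<String, List<Tuple>> part : partition(d, best).entrySet()) {
            n.children.put(part.getKey(), generate(part.getValue(), remaining));
        }
        return n;
    }

    static boolean sameClass(List<Tuple> d) {
        for (Tuple t : d) {
            if (!t.classLabel.equals(d.get(0).classLabel)) return false;
        }
        return true;
    }

    static String majorityClass(List<Tuple> d) {
        Map<String, Integer> counts = new HashMap<>();
        for (Tuple t : d) counts.merge(t.classLabel, 1, Integer::sum);
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
    }

    static String selectAttribute(List<Tuple> d, List<String> attributeList) {
        // Placeholder: ID3 would choose the attribute with the highest information gain.
        return attributeList.get(0);
    }

    static Map<String, List<Tuple>> partition(List<Tuple> d, String attribute) {
        Map<String, List<Tuple>> parts = new HashMap<>();
        for (Tuple t : d) {
            parts.computeIfAbsent(t.attributes.get(attribute), k -> new ArrayList<>()).add(t);
        }
        return parts;
    }
}

Calling generate(trainingTuples, attributeList) returns the root of the tree; a real implementation would replace the selectAttribute placeholder with an information-gain computation such as the one sketched above.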
