Data Mining Practical (1)
1. Show the implementation of Naïve Bayes algorithm.
2. Show the implementation of Decision Tree.
3. Show the implementation of Clustering Algorithm.
4. Show the implementation of Apriori Algorithm.
5. Show the implementation of Time Series Algorithm.
Practical No: 1
Aim: Show the implementation of Naïve Bayes algorithm.
Step 1: Open Weka and Load the Dataset
1. Open Weka.
2. Click on the "Explorer" button to launch the Weka Explorer interface.
3. In the "Preprocess" tab, click on "Open file".
4. Select your dataset file (usually in .arff or .csv format) and click "Open".
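The steps above load a dataset through the Explorer GUI. As a rough programmatic equivalent, the sketch below uses Weka's Java API to load an ARFF (or CSV) file; the file name iris.arff is only a placeholder.

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LoadDataset {
    public static void main(String[] args) throws Exception {
        // Load an ARFF (or CSV) file; "iris.arff" is a placeholder path.
        Instances data = DataSource.read("iris.arff");

        // For classification, tell Weka which attribute is the class
        // (here we assume it is the last attribute, as is common in ARFF files).
        data.setClassIndex(data.numAttributes() - 1);

        System.out.println("Loaded " + data.numInstances() + " instances with "
                + data.numAttributes() + " attributes.");
    }
}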
Step 2: Go to the Classify Tab
Click the "Classify" tab, then click the "Choose" button under "Classifier".
Step 3: Apply the Naïve Bayes Algorithm
In the classifier hierarchy, navigate to:
bayes -> NaiveBayes
Select NaiveBayes.
Step 4: Configure Naïve Bayes (Optional)
If you need to tweak settings, click on the NaïveBayes entry next to the "Choose" button (after selecting it) and its options dialog will appear.
Adjust parameters if needed, although Naïve Bayes generally requires little tuning.
Step 5: Train and Evaluate the Model
Under "Test options", choose an evaluation method (for example, 10-fold cross-validation), then click "Start". Weka trains the Naïve Bayes model and reports accuracy, the confusion matrix, and other statistics in the "Classifier output" panel.
Note: For the Decision Tree practical, the corresponding classifier path in the Classify tab is trees -> J48; the remaining steps are the same.
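A minimal Java sketch of the same train-and-evaluate step using Weka's API is shown below; the file name weather.nominal.arff and the assumption that the class is the last attribute are placeholders. Swapping the classifier for J48 gives the Decision Tree practical.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class NaiveBayesDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder dataset; any ARFF file with a nominal class works.
        Instances data = DataSource.read("weather.nominal.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Classifier cls = new NaiveBayes();
        // Classifier cls = new weka.classifiers.trees.J48();  // for the Decision Tree practical

        // 10-fold cross-validation, matching the Explorer's default test option.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(cls, data, 10, new Random(1));

        System.out.println(eval.toSummaryString());
        System.out.println(eval.toMatrixString());
    }
}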
Time Series Forecasting (Forecast panel, requires Weka's "timeseriesForecasting" package):
• Under "Fields to forecast", select the attribute you want to predict (e.g., "Year" or "Pop").
Step 3: Apply the Time Series Algorithm (ARIMA)
In the Forecast panel, choose the forecasting algorithm:
timeSeries -> ARIMA
3. Fine-tune the parameters of the learning algorithm if needed.
4. Visualize Predictions:
   o Weka provides a graph comparing actual and predicted values for better interpretability.
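Programmatic time series forecasting requires the timeseriesForecasting package to be installed. The standard distribution does not ship an ARIMA learner, so the sketch below uses the package's WekaForecaster with a regression base learner instead; the file name population.arff and the field names "Date" and "Pop" are assumptions, and the code is an illustration rather than the exact GUI setup described above.

import java.util.List;
import weka.classifiers.evaluation.NumericPrediction;
import weka.classifiers.functions.LinearRegression;
import weka.classifiers.timeseries.WekaForecaster;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ForecastDemo {
    public static void main(String[] args) throws Exception {
        // Assumed dataset with a date attribute "Date" and a numeric target "Pop".
        Instances data = DataSource.read("population.arff");

        WekaForecaster forecaster = new WekaForecaster();
        forecaster.setFieldsToForecast("Pop");                 // field to predict
        forecaster.setBaseForecaster(new LinearRegression());  // simple base learner
        forecaster.getTSLagMaker().setTimeStampField("Date");  // time stamp attribute
        forecaster.getTSLagMaker().setMinLag(1);
        forecaster.getTSLagMaker().setMaxLag(12);

        forecaster.buildForecaster(data, System.out);
        forecaster.primeForecaster(data);                      // prime with recent history

        // Forecast 12 steps beyond the end of the training data.
        List<List<NumericPrediction>> forecast = forecaster.forecast(12, System.out);
        for (List<NumericPrediction> step : forecast) {
            System.out.println(step.get(0).predicted());
        }
    }
}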
OUTPUT
Practical No: 4
Aim: Show the implementation of Clustering Algorithm.
Note: For clustering, the dataset should not have a class attribute because clustering
algorithms are unsupervised.
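If a loaded dataset still contains a class attribute, it can be removed before clustering. The sketch below uses Weka's Remove filter and assumes the class is the last attribute; the file name iris.arff is a placeholder.

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class DropClassAttribute {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");  // placeholder dataset

        // Remove the last attribute (assumed to be the class) so the data is
        // suitable for unsupervised clustering.
        Remove remove = new Remove();
        remove.setAttributeIndices("last");
        remove.setInputFormat(data);
        Instances noClass = Filter.useFilter(data, remove);

        System.out.println("Attributes after removal: " + noClass.numAttributes());
    }
}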
1) K-MEANS:
Step 3: Apply the Clustering Algorithm
In the "Cluster" tab, click "Choose" and navigate to:
cluster -> SimpleKMeans
Step 4: Configure the Clustering Algorithm (Optional)
Click on the SimpleKMeans entry next to "Choose" to open its options; the most important parameter is numClusters, the number of clusters K. A programmatic sketch follows.
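A minimal sketch of running SimpleKMeans through the Java API, assuming the class attribute has already been removed as above; the file name iris_noclass.arff and the choice of 3 clusters are assumptions.

import weka.clusterers.ClusterEvaluation;
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class KMeansDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder file; the data should contain no class attribute.
        Instances data = DataSource.read("iris_noclass.arff");

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(3);        // the K in K-means (assumed value)
        kmeans.setSeed(10);              // seed for the random initial centroids
        kmeans.buildClusterer(data);

        System.out.println(kmeans);      // prints centroids and within-cluster error

        // Summarise how the instances were assigned to clusters.
        ClusterEvaluation eval = new ClusterEvaluation();
        eval.setClusterer(kmeans);
        eval.evaluateClusterer(data);
        System.out.println(eval.clusterResultsToString());
    }
}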
2) HIERARCHICAL (AGGLOMERATIVE) CLUSTERING:
Step 1 − Treat each data point as a single cluster, so we start with, say, K clusters; the number of data points is also K at the start.
Step 2 − Form a bigger cluster by joining the two closest data points. This results in a total of K-1 clusters.
Step 3 − To form more clusters, join the two closest clusters. This results in a total of K-2 clusters; the merging is repeated until all points fall into one cluster (or the desired number of clusters is reached).
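Weka also ships a HierarchicalClusterer that implements this agglomerative procedure. A minimal sketch, assuming data without a class attribute and an assumed choice of 3 final clusters:

import weka.clusterers.HierarchicalClusterer;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class HierarchicalDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris_noclass.arff");  // placeholder file

        HierarchicalClusterer hc = new HierarchicalClusterer();
        hc.setNumClusters(3);        // stop merging when 3 clusters remain (assumed)
        hc.buildClusterer(data);

        // Print the cluster assigned to each instance.
        for (int i = 0; i < data.numInstances(); i++) {
            System.out.println("Instance " + i + " -> cluster "
                    + hc.clusterInstance(data.instance(i)));
        }
    }
}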
Practical No: 5
Aim: Show the implementation of Apriori Algorithm.
The Apriori algorithm is commonly used for mining frequent itemsets and
association rule learning. Weka provides an easy-to-use interface to apply the
Apriori algorithm. Here’s a step-by-step guide on how to implement the Apriori
algorithm in Weka:
Format: Your data should be in ARFF or CSV format. The dataset must be transactional, where each transaction contains a list of items.
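A minimal sketch of running Apriori through the Java API. Weka's Apriori expects nominal attributes (a market-basket style dataset such as the supermarket.arff file bundled with Weka); the support and confidence values below are only assumptions.

import weka.associations.Apriori;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class AprioriDemo {
    public static void main(String[] args) throws Exception {
        // Transactional dataset with nominal attributes, e.g. Weka's supermarket.arff.
        Instances data = DataSource.read("supermarket.arff");

        Apriori apriori = new Apriori();
        apriori.setLowerBoundMinSupport(0.1);  // minimum support (assumed value)
        apriori.setMinMetric(0.9);             // minimum confidence (assumed value)
        apriori.setNumRules(10);               // report the 10 best rules

        apriori.buildAssociations(data);
        System.out.println(apriori);           // prints the association rules found
    }
}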