Data Analysis

This document summarizes a student's minor project comparing different machine learning techniques for load forecasting. The techniques analyzed include artificial neural networks (ANN), radial basis function networks (RBF), decay RBF neural networks (DRNN), support vector regression (SVR), extreme learning machines (ELM), improved second-order algorithms (ISO), and error correction algorithms (ErrCorr). The student finds that ErrCorr produced the most accurate results with the shortest training time, requiring only 100 hidden nodes to achieve lower error rates than other techniques requiring thousands of nodes.


MINOR PROJECT

PARMAR SAGAR RAJESHBHAI


(19BEE075)

GUIDED BY
PROF. CHINTAN PATEL
COMPARATIVE STUDY ON
FORECASTING TECHNIQUES
FLOW OF PRESENTATION

• Challenges during load forecasting


• Machine Learning
• Artificial Neural Network
• Radial Basis Function
• DRNN
• SVR
• ELM
• ISO
• ErrCorr
• Result and comparison
CHALLENGES DURING LOAD FORECASTING

• A key issue in the planning, dispatching, scheduling and management operations of power
systems is the correctness and accuracy of short-term electric load predictions.
Because of their nonstationary and nonlinear features, which are affected by season,
weather conditions, etc., electrical load signals are highly unpredictable, which makes
load forecasting a difficult operation.
• Another challenge for utilities when developing a load forecasting technique is the great
volume of data provided by smart meters.
MACHINE LEARNING

• Machine learning (ML) is a type of artificial intelligence (AI) that allows software
applications to become more accurate at predicting outcomes without being explicitly
programmed to do so.
• Machine learning algorithms use historical data as input to predict new output values.
ARTIFICIAL NEURAL NETWORK

• A neural network is a method in artificial intelligence that teaches computers to process


data in a way that is inspired by the human brain.
• It is a type of machine learning process, called deep learning, that uses interconnected
nodes or neurons in a layered structure that resembles the human brain.
• Neural Networks are the building blocks of Machine Learning.
• The First Layer:
The 3 yellow perceptrons make 3 simple decisions based on the input evidence.
Each decision is sent to the 4 perceptrons in the next layer.
• The Second Layer:
The blue perceptrons make decisions by weighing the results from the first layer.
This layer makes more complex decisions at a more abstract level than the first layer.
• The Third Layer:
Even more complex decisions are made by the green perceptrons.
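The layered decision flow above can be sketched as a forward pass through stacked perceptron layers. A minimal sketch, assuming 3 first-layer and 4 second-layer units as in the slide's picture; the 2 third-layer units, the 5 input features, and the random weights are illustrative assumptions.

```python
import numpy as np

def step(x):
    # Perceptron activation: output 1 when the weighted evidence exceeds 0.
    return (x > 0).astype(float)

rng = np.random.default_rng(0)
# Layer sizes: 3 "yellow" perceptrons, then 4 "blue", then 2 "green"
# (the third-layer size and input size are made up for illustration).
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=3)
W2, b2 = rng.normal(size=(4, 3)), rng.normal(size=4)
W3, b3 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=5)       # input evidence
h1 = step(W1 @ x + b1)       # first layer: simple decisions
h2 = step(W2 @ h1 + b2)      # second layer: weighs first-layer results
y  = step(W3 @ h2 + b3)      # third layer: most abstract decisions
print(y.shape)
```

Each layer's output is just the previous layer's decisions re-weighed, which is why deeper layers can express more abstract decisions.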
RADIAL BASIS FUNCTION (RBF)

• An RBF network consists only of an input layer, a single hidden layer, and an output layer.
Input Layer
• The input layer simply feeds the data to the hidden layer.
• No computation is performed.
Hidden layer
• The computations in the hidden layer are based on comparisons with prototype vectors,
each of which is a vector from the training set.
• Each neuron computes the similarity between the input vector and its prototype vector.
Output layer
• The output layer uses a linear activation function.
• The resulting prediction can be used for both classification and regression tasks.
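A minimal sketch of this forward pass, assuming the usual Gaussian similarity between the input and each prototype (the slides say only "similarity"; the Gaussian form, widths, and weights below are illustrative):

```python
import numpy as np

def rbf_forward(x, prototypes, widths, w_out):
    # Hidden layer: similarity of input x to each prototype vector,
    # computed here with a Gaussian radial basis function.
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2 * widths ** 2))
    # Output layer: linear combination of the hidden activations.
    return phi @ w_out

prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])  # taken from the training set
widths = np.array([0.5, 0.5])
w_out = np.array([1.0, 2.0])
print(rbf_forward(np.array([1.0, 1.0]), prototypes, widths, w_out))
```

An input equal to a prototype activates that hidden unit fully (similarity 1), so the prediction is dominated by that unit's output weight.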
DECAY RBF NEURAL NETWORKS (DRNN)

• DRNN is the simplest algorithm. In it, the number of output weights ωh is the same as the
number of training patterns, and each output weight is set equal to the output of the
corresponding training pattern.
• DRNN networks have a very fast learning process, because the output weights are simply set to
the outputs of the corresponding training patterns, but the results are seldom satisfactory.
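A sketch of this idea, assuming one Gaussian RBF unit per training pattern with its output weight set directly to that pattern's target (no weight training at all; function names and widths are illustrative):

```python
import numpy as np

def drnn_predict(x, X_train, y_train, width=1.0):
    # One RBF unit is centered on each training pattern; the output
    # weight of unit h is simply the training target y_train[h].
    d2 = np.sum((X_train - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2 * width ** 2))
    return phi @ y_train

X_train = np.array([[0.0], [1.0], [2.0]])
y_train = np.array([0.0, 1.0, 4.0])
print(drnn_predict(np.array([1.0]), X_train, y_train, width=0.3))
```

Since no weights are fitted, "training" is just storing the patterns, which is why learning is fast but accuracy is poor.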
SUPPORT VECTOR REGRESSION

• In simple regression the idea is to minimize the error rate, while in SVR the idea is to fit the
error inside a certain threshold; that is, SVR approximates the best value within a given margin.
• In an SVM (support vector machine), to separate groups of patterns, the number of patterns in the
analysis is significantly reduced. Instead of using all patterns, only the patterns closest to the
separation surface are used; these patterns are known as support vectors.
• Various nonlinear functions can be used to create the separation surface. SVR can be relatively fast
when high accuracy is not required.
No of RBF   γ        C      ε      Training       MAPE (%), mean absolute percent error
                                   time (hours)   Training     Forecast   Forecast
                                                  2004-2009    2010       2011
9040        0.3160   1000   0.02   11.91          1.55         1.79       2.07
16152       0.0316   1000   0.02   4.351          2.14         2.33       2.57
11819       0.1      1000   0.02   12.578         1.75         1.99       2.36

EXTREME LEARNING MACHINES (ELMS)

• Gives better results than those obtained with DRNN and SVR.
• If a large number of RBF units is generated, there is a good chance that a good function
approximation will be found.
• ELM is fast, but for high accuracy sometimes a very large number of RBF units must be
used.
• As a result, many parameters have to be stored, hence the execution time of the trained
network is relatively slow.
No of RBF   Training       MAPE (%), mean absolute percent error
            time (hours)   Training     Forecast   Forecast
                           2004-2009    2010       2011
200         0.027          2.23         2.53       2.75
500         0.079          1.83         2.10       2.28
1000        0.217          1.60         1.93       2.16
2500        0.613          1.34         1.81       2.17
3000        1.445          1.30         1.85       2.27
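All of the result tables use MAPE as the accuracy metric. A minimal sketch of how it is computed (the load values below are made-up examples):

```python
import numpy as np

def mape(actual, forecast):
    # Mean absolute percent error, the metric reported in the tables.
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual = np.array([100.0, 200.0, 400.0])
forecast = np.array([98.0, 205.0, 410.0])
print(round(mape(actual, forecast), 3))  # 2.333
```

Because each error is scaled by the actual load, MAPE lets forecasts be compared across hours with very different load levels.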
IMPROVED SECOND-ORDER ALGORITHM

• The ISO algorithm is capable of solving the same problem with a network about ten times
smaller, with superb generalization abilities.
• The problem is that it is very computationally intensive, and many trials with different starting
points are needed before an optimal solution is found.
No of RBF   Iterations   Training       MAPE (%), mean absolute percent error
                         time (hours)   Training     Forecast   Forecast
                                        2004-2009    2010       2011
20          200          1.29           2.32         2.47       2.73
50          200          6              1.79         2.02       2.22
70          200          25.03          1.69         1.95       2.20
100         200          48.41          1.63         1.95       2.20

ERROR CORRECTION ALGORITHM

• It is a modification of the ISO algorithm.
• The unique feature of ErrCor is that only one run is needed to obtain a close-to-optimal
solution.
• There is no need for a random selection of the starting point, because the starting point is
automatically selected at the point with the largest error.
• Hence there is no need for multiple runs, because the exact same results are obtained each
time.
• ErrCor is capable of reaching the same training and validation errors with 10 to 100 times
smaller RBF networks than SVR or ELM.
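The "start at the point with the largest error" rule can be sketched as a greedy center-selection step; this is an illustration of the idea, not the project's actual implementation:

```python
import numpy as np

def next_center(X_train, y_train, current_pred):
    # ErrCor-style step (sketch): place the next RBF center at the
    # training pattern with the largest current error, so no random
    # restarts are needed and every run gives the same result.
    errors = np.abs(y_train - current_pred)
    return X_train[np.argmax(errors)]

X_train = np.array([[0.0], [1.0], [2.0]])
y_train = np.array([0.0, 1.0, 4.0])
pred = np.zeros(3)                           # network output before adding a unit
print(next_center(X_train, y_train, pred))   # largest error is at x = 2
```

Because the choice is deterministic (argmax of the error), repeating the run reproduces the same network, which is the property the slide highlights.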
Case 1: 8 input parameters
Input parameters
1. Dry bulb temperature in Fahrenheit
2. Dew point temperature in Fahrenheit
3. Hour of day
4. Day of week
5. Holiday/weekend indicator
6. Previous 24-hr average load
7. 24-hr lagged load
8. 168-hr previous lagged load

Output
24 hours ahead load
Case 2: 6 input parameters
1. Dry bulb temperature in Fahrenheit
2. Dew point temperature in Fahrenheit
3. Hour of day
4. Previous 24-hr average load
5. 24-hr lagged load
6. 168-hr (previous week) lagged load

Output
24 hours ahead load
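The feature lists above can be sketched as a vector-assembly step; the array and function names below are illustrative assumptions (not from the original project), and the data is synthetic:

```python
import numpy as np

def features_case1(t, load, dry_bulb_F, dew_point_F, day_of_week, is_hol_or_wknd):
    # Assemble Case 1's 8 input features for hour t from hourly histories.
    return np.array([
        dry_bulb_F[t],              # 1. dry bulb temperature (F)
        dew_point_F[t],             # 2. dew point temperature (F)
        t % 24,                     # 3. hour of day
        day_of_week[t],             # 4. day of week
        is_hol_or_wknd[t],          # 5. holiday/weekend indicator
        load[t - 24:t].mean(),      # 6. previous 24-hr average load
        load[t - 24],               # 7. 24-hr lagged load
        load[t - 168],              # 8. 168-hr (previous week) lagged load
    ])

n = 400
rng = np.random.default_rng(1)
load = rng.uniform(50.0, 150.0, n)           # synthetic hourly load
dry_bulb_F = rng.uniform(30.0, 90.0, n)
dew_point_F = rng.uniform(20.0, 70.0, n)
day_of_week = (np.arange(n) // 24) % 7
is_hol_or_wknd = (day_of_week >= 5).astype(float)

t = 200
x = features_case1(t, load, dry_bulb_F, dew_point_F, day_of_week, is_hol_or_wknd)
target = load[t + 24]                        # output: load 24 hours ahead
print(x.shape)
```

Case 2 simply drops features 4 and 5 (day of week and the holiday/weekend indicator) from the same vector.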
DATA SHEET IMAGE
RESULT AND COMPARISON

• We can see that the DRNN algorithm was able to solve the problem relatively quickly, but its
results had errors over 20 times larger than those obtained with the other algorithms. Hence it
was not used in further studies.
• The ELM algorithm produced reasonable results, and faster than the other algorithms.
• SVR produced the worst results.
• The best results were obtained with the ErrCor algorithm. Moreover, the execution time for the
network obtained with ErrCor is 25 times shorter than that of the network obtained with ELM,
and 90 times shorter than that of the network obtained with SVR.
Algorithm          No of RBFs   Training       MAPE (%), mean absolute percent error
                                time (hours)   Training     Forecast   Forecast
                                               2004-2009    2010       2011
DRNN               52608        0.19           63.87        79.86      74.74
SVR                9040         11.92          1.55         1.79       2.07
ELM                2500         0.24           1.34         1.83       2.19
ISO                70           25.03          1.69         1.95       2.20
ErrCorr original   100          21.51          1.47         1.80       2.02
ErrCorr modified   100          18.77          1.39         1.75       1.98

COMPARISON OF FORECASTING RESULTS OF
DIFFERENT ALGORITHMS
THANK YOU
