
ELEN 867 Neural Network Design Experimental Assignment Two Modified CPN

Charles Winley Jr. Department of Electrical and Computer Engineering North Carolina Agricultural and Technical State University Greensboro, North Carolina, USA [email protected]

Abstract
This paper discusses the use of neural networks to evaluate and predict power system data. The data evaluated are from ISO New England for the years 2000-2002. ISO New England ensures the day-to-day reliable operation of New England's bulk power generation and transmission system, oversees the fair administration of the region's wholesale electricity markets, and manages the comprehensive regional planning process. This paper focuses on developing weekly peak load and weekly valley load models using the hourly data given. [1]

KEY WORDS: Peak Load Model, Valley Load Model, OLAM, Neural Networks, Kohonen SOM, Forecasting

Introduction

Power systems are very dynamic in nature, and it is important that they be simulated and modeled accurately for optimal performance. Accurate models allow power companies to ensure that, at a minimum, the base load is always being provided, and they help to avoid brownouts and blackouts; the need for this became evident through the 2003 Northeastern Interconnect blackout. Power system load models enable power companies to forecast load so they can develop maintenance schedules and plan upgrades that may need to be performed on various stations and substations of the power grid. As of December 31, 2002, ISO New England had enough power generation to serve 145% of the current peak load. Modeling the power system loads into the future will help to ensure that there is enough power available to serve the peak load demands, so these catastrophes can be avoided. These models can also help to identify a potential need for more power stations than are currently available.

Overview

The Counter Propagation Network (CPN), developed by Robert Hecht-Nielsen, is an alternative function approximator that can be built from available input/output data. A modified CPN will be used to forecast the Power System Load (PSL) three years into the future. The modified CPN consists of a Kohonen SOM (KSOM) front end (see Figure 1) and an Optimal Linear Associative Memory (OLAM) back end (see Figure 2). The KSOM is used to cluster the input data, and the OLAM is used to produce the output for the future week. The goal of this exercise is to forecast what the Power System Load will be three years into the future.
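As a minimal sketch of the forward pass through such a modified CPN, assuming the KSOM centers C and OLAM weights W have already been trained (the function and variable names here are illustrative, not those of the code in the appendix):

% Forward pass of a modified CPN: KSOM front end followed by OLAM back end.
% C is a noc-by-d matrix of trained KSOM centers, W is a noc-by-1 OLAM
% weight vector, and x is a 1-by-d normalized input event.
function y = cpn_forward(x, C, W)
  d2 = sum((C - x).^2, 2);   % distance from x to every center
  [dmin, winner] = min(d2);  % winner-takes-all competition
  k = zeros(1, rows(C));     % digitized (one-hot) KSOM output
  k(winner) = 1;
  y = k * W;                 % OLAM recall produces the forecast
end

With the variables used in the appendix code this could be called as, for example, y = cpn_forward(Xtst(1,:), centersP, W_hat).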

Figure 1. KSOM Architecture

Figure 2. OLAM Architecture

Figure 3. Peak Power System Load

Data Preparation

The data used initially consisted of hourly Power System Load (PSL), Dry Bulb Temperature, and Dew Point Temperature readings. To evaluate the data in weeks, it was necessary to first identify the daily peaks and valleys from the hourly data and then use the daily data to find the high and low for each week. Before training the network some preprocessing may be needed, such as randomizing and normalizing the data. Randomizing the data helps prevent the neural network from simply memorizing the output of the training set; if the network memorizes the results, you may be led to believe that the network has been fully trained, yet when tested it will not yield true results, leaving the network useless for anything beyond classifying a small group of information. It is also helpful to normalize the data, which ensures that the statistical distribution of values for each network input and output is roughly uniform. After randomizing and normalizing the data, it must be split into a training set and a testing set.
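A minimal sketch of this preprocessing, assuming the weekly events are collected in a matrix D with one row per event and the target in the last column (min-max normalization, a random shuffle, and a 75/25 train/test split, mirroring the appendix code; the names here are illustrative):

% Min-max normalize every column of D to the range [0, 1].
Norm = (D - min(D)) ./ (max(D) - min(D));

% Shuffle the events so the network cannot simply memorize their order.
idx   = randperm(rows(Norm));
RData = Norm(idx, :);

% Hold out the last 25% of the shuffled events for testing.
tr   = floor(rows(RData) * 0.75);
Xtrn = RData(1:tr, 1:end-1);      Ytrn = RData(1:tr, end);
Xtst = RData(tr+1:end, 1:end-1);  Ytst = RData(tr+1:end, end);

Note that randperm shuffles the events without replacement; the appendix code uses randint from the older communications package, which draws indices with replacement.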

Figure 4. Valley Power System Load

KSOM Training

The training procedure identifies the center (weight) closest to the input x as the process winner. The winner then rotates slightly in the direction of the input. Before starting training, a stopping condition for the KSOM must be chosen; for this exercise I chose to stop training when the difference between the winning weights at epoch n and epoch n-1 was equal to zero.

4.1 KSOM Learning Algorithm

1. Initialize the weights. This can be done either by generating random values or, better, by using perturbed vectors randomly selected from the input events.
2. Get the next input vector x(t).
3. Compute the distance d_j between the input X_i(t) and each center W_ij(t) (equation 1).
4. Find the center with the minimum distance d_j.
5. Update the winning center, moving it closer to the input vector (equation 2).
6. Repeat steps 2-5 until the stopping condition is met or the maximum number of epochs allotted has been reached.

d_j = \sum_i (X_i(t) - W_{ij}(t))^2    (1)

W^{new} = W^{old} + \beta (X_i(t) - W^{old})    (2)

The winning neuron's center output is set to 1 and all others are set to 0, i.e. winner-takes-all. This is the digitized data that will be passed on to the OLAM. There will be as many columns in the output matrix as there are centers, and as many rows as there are events in X.
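A minimal sketch of one training epoch and the digitization step, using the conventions above (X is the matrix of training events, W the matrix of centers, and beta the learning rate; the names are illustrative). In practice the digitization would be done only after training has converged; it is folded into the same loop here just to keep the sketch short.

% One KSOM epoch followed by winner-takes-all digitization.
% X is events-by-d, W is noc-by-d centers, beta is the learning rate.
noc = rows(W);
K   = zeros(rows(X), noc);        % digitized output passed to the OLAM
for i = 1:rows(X)
  d2 = sum((W - X(i,:)).^2, 2);   % equation (1): distance to each center
  [dmin, winner] = min(d2);       % the closest center wins
  W(winner,:) = W(winner,:) + beta*(X(i,:) - W(winner,:));   % equation (2)
  K(i, winner) = 1;               % one row per event, one column per center
end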

OLAM
Figure 5. Histogram of Winning Centers, Training for Valley Load

The OLAM was created by Teuvo Kohonen and Mikko J. Ruohonen. The OLAM consists of an X-field and a Y-field that are connected by weights, which are calculated using a least-squares estimator. The recording recipe makes optimal use of the LAM interconnection weights, guaranteeing perfect recall of the stored memories provided that the columns of both the X and Y fields are linearly independent.

5.1 OLAM Algorithm

If the columns of X and Y are both linearly independent, the forward OLAM weights may be calculated as (equation 3):

w = (X^T X)^{-1} X^T Y    (3)

Figure 6. Histogram of Winning Centers, Testing for Valley Load

The output of the system is calculated as (equation 4):

Y = X w    (4)
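A minimal sketch of equations 3 and 4 under these assumptions, where K is the digitized KSOM output and Ytrn the training targets (pinv is used so the computation also tolerates a poorly conditioned K^T K, for instance when a center never wins):

% OLAM recording (equation 3) and recall (equation 4).
% K is events-by-noc one-hot KSOM output, Ytrn is events-by-1 targets.
W_hat = pinv(K' * K) * K' * Ytrn;   % least-squares OLAM weights
Y_hat = K * W_hat;                  % recall: one forecast value per event

% Mean absolute error on the normalized targets, reported as a percentage.
Percent_Error = sum(abs(Ytrn - Y_hat)) / length(Ytrn) * 100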

Results

When training the CPN, the training percent error was 13.403% and the testing percent error was 13.843% for the Weekly Valley Load. For the Weekly Peak Load, the training percent error was 7.6827% and the testing percent error was 10.7343%.

Forecast

Before using the modified CPN, the PSL formula given was used to make an initial projection of the data, to observe potential future peak and valley loads (Figures 13 and 14). After making the projections for three years into the future, it became apparent that the forecasted PSL was undershooting the known PSL values, so the future projection would be insufficient. Using this forecast would most likely lead to brownouts or blackouts, which are exactly the things we are attempting to avoid by modeling the Power System Load data. The Dry Bulb and Dew Point Temperatures were also forecasted in this manner (Figures 17-20), but for a different purpose: these projections provide the information needed when forecasting the PSL with the modified CPN. Given that the OLAM will only output the PSL for one week into the future, values for the Dry Bulb and Dew Point Temperatures are needed in order to continue forecasting past week n+1, where n is the length of the data set; this allows the future output to be matched with the input data set when sliding the window to calculate the next week into the future. After training and testing, I used the last event in the data set, along with the projected Dry Bulb and Dew Point Temperatures, to forecast three years out. This projection increased more than expected, which is not ideal because such a projection would cause a lot of power to be wasted. Compared to the initial forecast, however, even though power would be wasted, this forecast would ensure that there is always enough power being provided.

Figure 7. Histogram of Winning Centers, Training for Peak Load

Figure 8. Histogram of Winning Centers, Testing for Peak Load

Figure 9. Training Error, Valley Load

Figure 10. Testing Error, Valley Load
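The recursive forecasting step described above can be sketched as follows. This is a minimal illustration, assuming a trained network (the centersP and W_hat variables from the appendix code), a two-column matrix temps of projected dry bulb and dew point values, and the cpn_forward helper sketched in the Overview; x holds the most recent five-week input window.

% Recursive forecast: slide the five-week window forward one week at a time.
% x is the latest 1-by-15 normalized input window (5 weeks x 3 features),
% temps(f,:) holds the projected dry bulb and dew point values for week f.
for f = 1:rows(temps)
  y = cpn_forward(x, centersP, W_hat);    % predicted PSL for the next week
  forecast(f) = y;
  new_week = [y, temps(f,1), temps(f,2)]; % predicted PSL plus projected temps
  x = [x(4:end), new_week];               % drop the oldest week, append the newest
end

In practice the projected temperatures (and the predicted PSL fed back in) must be expressed on the same normalized scale as the training features, and the final forecast values denormalized before plotting.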

Conclusion

In conclusion, the modified CPN was able to project the data into the future, and even though the projection took a jump in magnitude that was not foreseen, I feel that an alternate approach can fix this problem. My first change to the network would be to use the actual winning-center information when passing the output of the KSOM to the OLAM, instead of simply putting a one in the column associated with the winning center. I also think that using the top three winners, rather than a winner-takes-all approach, would help to improve performance.
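As an illustration of the second suggestion only (this was not part of the experiment reported here), the winner-takes-all digitization of a single event x could be replaced by a top-three encoding whose activations reflect the distances to the three nearest centers, for example:

% Hypothetical top-3 soft encoding of one input event x (illustrative only).
d = sqrt(sum((centersP - x).^2, 2));   % distance from x to every center
[ds, order] = sort(d);                 % ascending: closest centers first
k = zeros(1, rows(centersP));
wts = 1 ./ (ds(1:3) + eps);            % inverse-distance weights
k(order(1:3)) = wts / sum(wts);        % the three activations sum to one

The OLAM recording and recall formulas stay the same; only K changes from a hard one-hot matrix to these soft activations.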

Figure 11. Training Error Peak Load

Figure 12. Testing Error Peak Load

Figure 13. Initial Peak 3yr Forecast

Figure 14. Initial Valley 3yr Forecast using the PSL formula

Figure 15. Peak 3yr Forecast using the modified CPN

Figure 16. Valley 3yr Forecast using the modified CPN

Figure 17. Dry Bulb Temperature Peak Forecast

Figure 18. Dew Point Temperature Peak Forecast

Figure 19. Dry Bulb Temperature Valley Forecast

Figure 20. Dew Point Temperature Valley Forecast

Here is the Octave file that produced the analysis for this document:
#############################################
# Charles Winley
# ELEN-867 Neural Network Design
# Experimental Assignment 2
# Modified CPN
#############################################
% Peak: this is the code used with the weekly peak load.
% The code for the weekly valley load calculations is exactly the same, using the valley load data.

clc
clear
WPLDat=load('PeakDat.txt');
WPL=WPLDat(:,3:columns(WPLDat));
PSL=WPL(:,1);    % weekly peak power system load
DBTEM=WPL(:,2);  % weekly peak dry bulb temperature
DEWPT=WPL(:,3);  % weekly peak dew point temperature
figure(1), plot(PSL),title('Peak Power System Load'),xlabel('Time (weeks)'),ylabel('Power');
figure(2), plot(DBTEM),title('Peak Dry Bulb Temperature'),xlabel('Time (Weeks)'),ylabel('Temperature');
figure(3), plot(DEWPT),title('Peak Dew Point Temperature'),xlabel('Time (Weeks)'),ylabel('Temperature');
n=length(PSL);
Base=[]; GR=[]; A1=[]; A2=[]; A3=[]; B1=[]; B2=[]; B3=[];
w0=pi/13;
A11=[]; A22=[]; A33=[]; B11=[]; B22=[]; B33=[];
w02=pi/26; %this w will be used for DBTEM & DEWPT because there are only 2 periods
% build the trend and harmonic terms of the PSL formula
for I=1:n
  Base(I)=1;
  GR(I)=I;
  A1(I)=cos(w0*1*I); B1(I)=sin(w0*1*I);
  A2(I)=cos(w0*2*I); B2(I)=sin(w0*2*I);
  A3(I)=cos(w0*3*I); B3(I)=sin(w0*3*I);
end
for II=1:n
  Base2(II)=1;
  GR2(II)=II;
  A11(II)=cos(w02*1*II); B11(II)=sin(w02*1*II);
  A22(II)=cos(w02*2*II);

  B22(II)=sin(w02*2*II);
  A33(II)=cos(w02*3*II); B33(II)=sin(w02*3*II);
end

DP=[Base',GR',A1',B1',A2',B2',A3',B3'];
DP2=[Base2',GR2',A11',B11',A22',B22',A33',B33'];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Initial Considerations
w1=pinv(DP'*DP)*DP'*PSL;      % Initial PSL Projection Using PSL formula
w2=pinv(DP2'*DP2)*DP2'*DBTEM; % Initial DBTEM Projection Using PSL formula
w3=pinv(DP2'*DP2)*DP2'*DEWPT; % Initial DEWPT Projection Using PSL formula
%Initial Predictions (312 weeks: the known data plus three future years)
for p=1:312
  PSLhat(p)=w1(1)+w1(2)*p+w1(3)*cos(w0*p)+w1(4)*sin(w0*p)+w1(5)*cos(2*w0*p)+w1(6)*sin(2*w0*p)+w1(7)*cos(3*w0*p)+w1(8)*sin(3*w0*p);
  DBTEMhat(p)=w2(1)+w2(2)*p+w2(3)*cos(w02*p)+w2(4)*sin(w02*p)+w2(5)*cos(2*w02*p)+w2(6)*sin(2*w02*p)+w2(7)*cos(3*w02*p)+w2(8)*sin(3*w02*p);
  DEWPThat(p)=w3(1)+w3(2)*p+w3(3)*cos(w02*p)+w3(4)*sin(w02*p)+w3(5)*cos(2*w02*p)+w3(6)*sin(2*w02*p)+w3(7)*cos(3*w02*p)+w3(8)*sin(3*w02*p);
end
figure(4), plot(PSL,'-c',PSLhat),title('Power System Load Prediction and Original Power System Load');
figure(5), plot(DBTEM,'-c',DBTEMhat),title('Peak Dry Bulb Temperature Prediction and Original');
figure(6), plot(DEWPT,'-c',DEWPThat),title('Peak Dew Point Temperature Prediction and Original');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Modified CPN
PeakOriginal=[];
Beta=.5
%organize data using a sliding window: 5 sets of 3 inputs and 1 output
for i=1:length(WPL)-5
  if (i==1)
    PeakOriginal(i,:)=[WPL(1,:),WPL(2,:),WPL(3,:),WPL(4,:),WPL(5,:),WPL(6,1)];

  else
    PeakOriginal(i,:)=[WPL(1+(i-1),:),WPL(2+(i-1),:),WPL(3+(i-1),:),WPL(4+(i-1),:),WPL(5+(i-1),:),WPL(6+(i-1),1)];
  endif
end

D=PeakOriginal;
%Normalization: min-max scale every column to [0,1]
for c=1:columns(D)
  for r=1:rows(D)
    Norm(r,c)=(D(r,c)-min(D(:,c)))/(max(D(:,c))-min(D(:,c)));
  end
end
for b=1:rows(Norm)
  Bias(b,:)=1;
end
Data=[Norm];
% create indx using randint which can later be used to randomize the dataset
indx1=randint(length(Data),1,[1,length(Data)]);

%uses the random numbers generated as indexes for the dataset
for j=1:length(indx1)
  RData(j,:)=Data(indx1(j,:),:);
end
tr=floor(length(RData)*.75);
Xtrn=RData(1:tr,1:columns(Data)-1); %inputs are columns 1-15 and the output is column 16
Ytrn=RData(1:tr,columns(Data));
Xtst=RData(tr+1:length(Data),1:columns(Data)-1);
Ytst=RData(tr+1:length(Data),columns(Data));
%25 centers
noc=25; %desired number of centers
indx2=randint(noc,1,[1,length(Xtrn)]);
%picks 25 random centers using the numbers generated in indx2
for k=1:length(indx2)
  centersP(k,:)=Xtrn(indx2(k,:),:);
end
save('-ascii','initalC5.txt','centersP')
Beta=.025;
%train ksom
Percent_Change=1;
s=1;
ct=0;
while ct<4 && s<=10000  % stop after 4 epochs with no change, or 10000 epochs
  for i=1:rows(Xtrn)
    for j=1:rows(centersP)
      for k=1:columns(centersP)
        dist(k)=(Xtrn(i,k)-centersP(j,k))^2;
      end
      tempC(j)=sqrt(sum(dist));
    end
    C(:,i)=tempC;
    [Win,WinI]=min(C(:,i));
    Winner(i,s)=WinI;
    Distnace(i,s)=Win;
    Wold=centersP(WinI,:);
    Wnew=Wold+Beta*(Xtrn(i,:)-Wold);
    centersP(WinI,:)=Wnew;
  end
  if s>1
    Percent_Change(s,:)=abs(sum(Winner(:,s)-Winner(:,s-1)))/rows(Winner)*100;
    if Percent_Change(s,:)==0
      ct=ct+1;
    else
      ct=0;
    end
  end
  s=s+1;

end
save('-ascii','finalC5.txt','centersP')
figure(7),plot(Percent_Change),title('Percent of Change of Winning Centers Per Epoch'),xlabel('Epoch');
DfrmC=Winner(:,s-1)+Distnace(:,s-1); %distance from winning center
%figure(2), scatter(DfrmC,1:length(Winner))
figure(8), hist(Winner(:,s-1)),title('Histogram of Winning Centers'),xlabel('Center'),ylabel('Count');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% find winning centers for the test set
for ii=1:rows(Xtst)
  for jj=1:rows(centersP)
    for kk=1:columns(centersP)
      tsdist(kk)=(Xtst(ii,kk)-centersP(jj,kk))^2;
    end
    tstempC(jj)=sqrt(sum(tsdist));
  end
  tsC(:,ii)=tstempC;
  [tsWin,tsWinI]=min(tsC(:,ii));
  tsWinner(ii,:)=tsWinI;
  tsDistnace(ii,:)=tsWin;
end
tsDfrmC=tsWinner+tsDistnace; %distance from winning center
%figure(4), scatter(tsDfrmC,1:length(tsWinner))
figure(9), hist(tsWinner)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Digitize the training data
K=zeros(rows(Xtrn),noc);
for c=1:length(Winner)
  K(c,Winner(c,s-1))=1;
end
%%Digitize the testing data
Kts=zeros(rows(Xtst),noc);
for c=1:length(tsWinner)
  Kts(c,tsWinner(c,:))=1;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%OLAM
W_hat=pinv(K'*K)*K'*Ytrn;
Y_hat=K*W_hat;
X=1:length(Y_hat);
figure(10),plot(X,Y_hat,"1",X,Ytrn,"3"),title('Train Error')
Error=abs(Ytrn-Y_hat);
Percent_Error=sum(Error)/length(Error)*100

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Testing

%OLAM
tsY_hat=Kts*W_hat;
X=1:length(tsY_hat);
%figure(27),plot(X,tsY_hat,"1",X,Ytst,"3"),title('Test Error')
tstError=abs(Ytst-tsY_hat);
tstPercent_Error=sum(tstError)/length(tstError)*100

%denormalize the test outputs back to the original load scale
for j=1:rows(tsY_hat);
  Y_actual(j)=tsY_hat(j)*(max(D(:,columns(D)))-min(D(:,columns(D))))+min(D(:,columns(D)));
  Y_desired(j)=Ytst(j)*(max(D(:,columns(D)))-min(D(:,columns(D))))+min(D(:,columns(D)));
end

figure(11),plot(X,Y_actual,"1",X,Y_desired,"3"),title('Test Error after Denormalization')
CPNPercentErrors=[Percent_Error;tstPercent_Error]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%FUTURE PREDICTIONS USING THE MODIFIED CPN
fs=rows(Data);
temps=[DBTEMhat(fs+1:length(DBTEMhat))',DEWPThat(fs+1:length(DBTEMhat))'];
Xf=[Data(fs,4:columns(Data)),DBTEMhat(fs),DEWPThat(fs)];
%Normalization
for cc=1:columns(Xf)
  for rr=1:rows(Xf)
    Xn(rr,cc)=(Xf(rr,cc)-min(D(:,cc)))/(max(D(:,cc))-min(D(:,cc)));
  end
end
for f=1:rows(temps)
  XnB=[1,Xn];
  for z=1:rows(centersP)
    for y=1:columns(centersP)
      ftsdist(y)=(XnB(y)-centersP(z,y))^2;
    end
    ftstempC(z)=sqrt(sum(ftsdist));
  end
  ftsC(:,1)=ftstempC;
  [ftsWin,ftsWinI]=min(ftsC(:,1));
  ftsWinner(1,:)=ftsWinI;
  ftsDistnace(1,:)=ftsWin;

  Kf=zeros(1,noc);
  for c=1:length(ftsWinner)
    Kf(c,ftsWinner(c,:))=1;
  end
  ftsY_hat=Kf*W_hat;
  new_val(f,:)=[ftsY_hat,temps(f,:)];
  Forcasting(f,:)=[XnB(1:columns(XnB)),new_val(f,:)];
  Xn=Forcasting(f,5:columns(Forcasting));

end
FY=Forcasting(:,columns(Forcasting));

%denormalize the forecast back to the original load scale
for fj=1:rows(FY);
  Forcast_Y(fj)=FY(fj)*(max(D(:,16))-min(D(:,16)))+min(D(:,16));
end
figure(12),plot(Forcast_Y),title('Three Year Prediction of Peak Load'),xlabel('Time (weeks)');

References
[1] G. Lebby, "Modified CPN," 2009.
