IEEE TRANSACTIONS ON POWER DELIVERY, VOL. 32, NO. 1, FEBRUARY 2017

Dynamic Line Rating Using Numerical Weather Predictions and Machine Learning: A Case Study

José L. Aznarte and Nils Siebert

Abstract—In this paper, a dynamic line-rating experiment is presented in which four machine-learning algorithms (generalized linear models, multivariate adaptive regression splines, random forests and quantile random forests) are used in conjunction with numerical weather predictions to model and predict the ampacity up to 27 h ahead in two conductor lines located in Northern Ireland. The results are evaluated against reference models and show a significant improvement in performance for point and probabilistic forecasts. The usefulness of probabilistic forecasts in this field is shown through the computation of a safety-margin forecast which can be used to avoid risk situations. With respect to the state of the art, the main contributions of this paper are an in-depth look at explanatory variables and their relation to ampacity, the use of machine learning with numerical weather predictions to model ampacity, the development of a probabilistic forecast from standard point forecasts, and a favorable comparison to standard reference models. These results are directly applicable to protect and monitor transmission and distribution infrastructures, especially if renewable energy sources and/or distributed power generation systems are present.

Index Terms—Dynamic line rating, forecasting, machine learning, time series.

Manuscript received June 6, 2015; revised December 14, 2015; accepted March 12, 2016. Date of publication March 29, 2016; date of current version January 20, 2017. Paper no. TPWRD-00706-2015. (Corresponding author: J. L. Aznarte.)
J. L. Aznarte is with the Department of Artificial Intelligence, Universidad Nacional de Educación a Distancia (UNED), 28040 Madrid, Spain, and also with MINES ParisTech—Centre PERSEE, Sophia Antipolis, France (e-mail: [email protected]).
N. Siebert is with MINES ParisTech—Centre PERSEE, 06904 Sophia Antipolis, France.
Digital Object Identifier 10.1109/TPWRD.2016.2543818
I. INTRODUCTION

THE GROWTH in the penetration of renewable energy sources in the European electrical system (with 90 GW of installed wind farm capacity and more than 10% annual growth [1]) has implied that transmission and distribution (T&D) networks need to adapt to the variable nature of these sources, increasing their capacities. Moreover, it is widely expected that much of the future growth of renewables may be based on distributed power generation systems (DG), which also raise issues concerning the T&D networks: changes in the power flow patterns or in voltage and fault current levels, for example. Although the deployment of DG units in convenient locations can alleviate some of the effects of intermittent power sources on T&D networks, for customers not close enough to the DG location the effect might be an increase of losses and congestions [2].

Replacing T&D line infrastructure is seen as an important bottleneck towards the EU 20/20/20 objectives [3], due to the complexity of the task and its financial and environmental requirements. Dynamic line rating (DLR) is seen as an effective solution for a more efficient exploitation of the existing infrastructure, by estimating a dynamic value for the ampacity instead of using the traditional fixed seasonal limits. The use of DLR is especially useful in the framework of wind power, as the cooling effect of wind on the cables is highly and conveniently correlated in time with wind power production.

Studies indicate that DLR is a key tool to enhance the penetration of distributed generation and smart grids [4], [5], as it helps to ensure optimal operation, increasing visibility through monitoring and allowing for reliable, automated and integrated control systems which imply a safer and more reliable operation of the infrastructure. More importantly, the multiplication of data sources and the complexity of the required SCADA systems suggest the use of big data/computational intelligence techniques in their treatment [6].

This paper presents an application of such techniques to the DLR problem through a case study on ampacity forecasting for two conductor lines in Northern Ireland. The main objective of the study is to investigate the feasibility of providing automatic forecasts of line ampacity up to 27 hours ahead that can be used to protect the power system and to increase its operation flexibility. Given that the described forecasting method can also be readily used to predict actual values of the dynamic ratings, the presented proposal is not only a tool to predict future limits of a line, but one that can also be used for real-time monitoring or nowcasting.

The rest of the paper is organized as follows: Section II reviews the state of the art in dynamic line-rating forecasting, while in Section III the problem and the data on which the study is based are described. Section IV describes the forecasting approaches used in the study, while the performance of these models is analyzed in Section V. Conclusions are finally drawn in Section VI.

II. STATE OF THE ART

Over the last 40 years, research on overhead dynamic line ratings, a relatively narrow field in electrical engineering, has seen an exponential growth. Since the seminal series of papers by Davis in the late 70s [7], where Box-Jenkins stochastic models were first applied to predict future ratings, many different experiments have been put forward and several technical solutions have been tested. Amongst the first attempts, in the late 80s, [8] developed a forecasting system using a probabilistic approach which took into account previous line loading and weather history. Approximately at the same time, a study on the effect of variability in weather conditions on conductor temperature [9] led to another forecasting system [10], [11] which compared a weather-based model with a conductor temperature-based one.
Dynamic line-rating forecasts constitute an important input to network management solutions, as described in [4], [12].

According to [13], [14], the technologies for the exploitation of overhead dynamic line ratings can be categorized into sag-based techniques (which monitor the sag of the conductors through optical means) [1], [15], tension-based techniques (which use physical means to determine the tension of the conductor) [16], [17], temperature-based techniques (which monitor the operating temperature of the conductor through sensors installed in the line) [18], [10], [11] and current rating-based techniques (which calculate the maximum current rating by monitoring or estimating the weather conditions and feeding them into one of the standard models). In this paper, we will center our attention on the last category, which has seen a bloom in the last two decades.

Amongst previous current rating-based works, [19] applies an expert system to predict DLR in what is one of the first applications of knowledge engineering techniques to this problem. In [20], a statistical risk analysis of the ratings of Polish overhead lines is performed and compared successfully against static line rating. In [21], a simple example of DLR modeling (using CIGRE equations) complemented with a protection relay for back-up is presented. In [22], a rather shallow experiment establishes a comparison between the CIGRE model and a proposed partial least squares model. Lange and Focken [23] investigate the gain in ampacity in situations of high wind power production, concluding that there is a strong and useful correlation. Another study of the correlation between wind farm output and line rating of key overhead lines can be found in [24]. In [25], [26], Monte Carlo methods are used to forecast DLR. Artificial neural networks are used in [27] to model and forecast ampacity and temperature, whereas the authors in [28] use weather forecast ensembles to produce probabilistic ampacity forecasts which allow for uncertainty estimation. Finally, [29] develops an alternative model to the CIGRE standard. The model, linear in its formulation, is based on direct measures of meteorological variables and line current, eliminating the need for inclusion of the mechanical and electrical properties of the conductor.

Two comments can be made about this bibliographic review on the state of the art in DLR modeling and forecasting. On the one hand, some of the aforementioned studies are especially relevant in our case as they are based on the same Northern Ireland Electricity transport network considered in this work [22], [30], [24], [27], [29]. On the other hand, it is remarkable that there are only a few studies considering machine-learning models or other artificial intelligence-related approaches. Since machine learning has proven its suitability for modeling and forecasting in complex numerical problems such as DLR, this gap highlights the relevance of the study presented in this paper.

With respect to the state of the art, the main contributions of this paper are: an in-depth look at explanatory variables and their relation to ampacity, the use of machine learning with numerical weather predictions, the development of a probabilistic forecast from standard point forecasts, and a comparison to baseline persistence and a "physical" approach.
III. DESCRIPTION OF THE DATA

This study concerns two 110 kV conductor lines between the Northern Irish cities of Omagh and Dungannon, of 10 km each. These lines were selected because they connect the recently installed wind farm power stations of the west to the highly populated areas to the east. The lines are equipped with meteorological stations on selected poles (5 on each line). These poles are located in an area between latitudes 54.53° N and 54.61° N and longitudes 7.24° W and 6.80° W. The poles equipped with meteorological stations will be referred to as reference poles. The two lines will be referred to as line A and line B. The poles are numbered from 1 to 5 and, for short, they will be referred to with the letter of the line and the number of the pole in a single code, e.g., A1.

Three datasets were considered in this work: the instantaneous meteorological measures of each reference pole on each of the two lines (hereafter called "the measures"), the numerical weather predictions (hereafter referred to as "the NWP") and the computed rating at each pole, which is calculated using the NIE-adapted CIGRE standard [31].

The measures dataset contains, for each pole, the values of the following variables sampled every 5 minutes: ambient temperature (in °C), instantaneous wind speed, average wind speed over 10 minutes (both in m s⁻¹), instantaneous wind direction (in degrees), solar radiation (in J m⁻²), current (in A), conductor temperature and internal temperature (both in °C).

There is a particularity in the measures dataset. At some point of the period of study, some of the Lynx conductors (109 MVA) were replaced by INVAR conductors (200 MVA) in order to increase the capacity of different sections of the lines. This was of course taken into account in the study.

The NWP dataset comes from the deterministic meteorological model of the European Centre for Medium-Range Weather Forecasts (ECMWF), and its value lies in the assumption that NWP are a valuable source of information for computing future ratings of the lines. The model provides four variables: 2 m temperature (in K), surface solar radiation downwards (in W m⁻² s) and the 10 m U and V components of wind (in m s⁻¹). For each variable, we have predictions made at 00:00 and 12:00 UTC spanning 48 hours with a 3 hour time step. The NWP are obtained for a grid of 9 x 13 points covering the latitudes between 53.5° N and 55.5° N and the longitudes between 8.25° W and 5.25° W, with a horizontal resolution of 0.25°. In order to obtain forecast values for each reference pole of the line, we interpolated the values from the 4 closest grid points to each pole.

A. Explorative Analysis of the Data

In order to make this analysis comprehensible, we will confine it to a single pole of the 20 under study whenever the differences with the rest are not significant.

1) Data Quality Assessment: In order to overcome short gaps in the time series, we applied the procedure known as last observation carried forward, i.e., when a missing datum is found, it is considered to have the same value as its immediate predecessor. We applied this process for gaps of up to three consecutive samples, so that outages of up to 15 minutes are filled with past values.
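A minimal sketch of this gap-filling step, assuming the 5-minute measures of one pole are held in a pandas DataFrame with a datetime index; the column and file names are illustrative, not the original database schema:

```python
import pandas as pd

def fill_short_gaps(measures: pd.DataFrame, max_gap: int = 3) -> pd.DataFrame:
    """Last observation carried forward, limited to short gaps.

    Gaps of up to `max_gap` consecutive 5-minute samples (i.e. 15 minutes)
    are filled with the last valid value; longer outages are left as NaN so
    that they are not silently invented.
    """
    regular = measures.asfreq("5min")   # make missing rows explicit as NaN
    return regular.ffill(limit=max_gap)

# Illustrative usage:
# pole = pd.read_csv("pole_A1_measures.csv", parse_dates=["timestamp"],
#                    index_col="timestamp")
# pole = fill_short_gaps(pole[["wind_speed", "ambient_temp", "solar_rad"]])
```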

Fig. 1. Autocorrelation and partial autocorrelation functions for the ratings of pole A1.

Fig. 2. Dispersion of line rating with respect to measured meteorological variables for pole A1.

2) Rating and Measures: We examined how the rating relates to the other meteorological variables. In Fig. 1 we can see the autocorrelation function and the partial autocorrelation function for one pole. The autocorrelation pattern suggests that any statistical modeling of this series can benefit from the use of autoregressive terms. It also shows that differencing the series could lead to more accurate models.

Fig. 2 shows how the rating relates to the measured meteorological variables. Unsurprisingly, wind has a clear effect on the rating. Wind direction shows a two-peak pattern, which indicates that the cooling effect of the wind is the same for orthogonal wind directions with respect to the line.

3) Relations Between Measures and NWP: In order to make the NWP data comparable with the measures, we interpolated them both in space and time. In space, for each pole of the line, we found the four closest points of the NWP grid and applied bi-linear interpolation to estimate the value of the forecast variables at the coordinates of the pole. In time, we needed to have values every 15 minutes, so we resampled by interpolating linearly between the 3 hour horizons found in the NWP data.
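A minimal sketch of this space and time interpolation, assuming each NWP field is available as a NumPy array on the regular latitude-longitude grid described above; variable and argument names are illustrative:

```python
import numpy as np
import pandas as pd

def bilinear_at_pole(field, lats, lons, pole_lat, pole_lon):
    """Bilinear interpolation of one NWP field (shape: len(lats) x len(lons))
    at a pole location; `lats` and `lons` are assumed sorted ascending."""
    i = np.searchsorted(lats, pole_lat) - 1   # indices of the enclosing grid cell
    j = np.searchsorted(lons, pole_lon) - 1
    ty = (pole_lat - lats[i]) / (lats[i + 1] - lats[i])
    tx = (pole_lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - ty) * (1 - tx) * field[i, j] + (1 - ty) * tx * field[i, j + 1]
            + ty * (1 - tx) * field[i + 1, j] + ty * tx * field[i + 1, j + 1])

def resample_to_15min(series_3h: pd.Series) -> pd.Series:
    """Linear interpolation of a 3-hourly NWP series onto a 15-minute grid."""
    return series_3h.resample("15min").interpolate(method="linear")
```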
Fig. 3. Scatterplots of measured variables versus NWP predicted variables at pole A1.

We can now examine how these interpolated NWP data relate to the actual measured variables at the poles. Fig. 3 shows scatterplots for wind speed and direction against the NWP predictions. We can see that the predicted wind direction is correlated with the actual measures. However, the low correlation coefficient is due to the circular nature of this variable, for which the linear correlation coefficient is not well suited. With respect to wind speed, this correlation is much less evident.

These correlation values provide an a priori estimate of the achievable performance of medium-term line-rating forecasts. The fact that wind direction is well predicted is very positive. However, the low correlation observed for wind speed indicates that obtaining accurate forecasts using a straightforward approach is unlikely.

IV. METHODOLOGY

In order to apply automatic learning on a dataset, it is necessary to divide it into, at least, two subsets: a part of the data that will be used strictly for learning (the train set) and another part that will be used to evaluate the performance of the trained model (the test set). In our case, we decided to fix 2010-01-01 as the split point, thus having an approximately 50% split.

Furthermore, to make the comparison fair, the same set of input variables and lags has to be used for all the models (except for persistence and downscaling). The selected set of input variables can be divided into three groups:
1) Lagged values of the computed rating series: lags t − 15, t − 30, t − 45, t − 60 and t − 1440 minutes (= 24 h).
2) Lagged values of the measured meteorological variables (wind speed, wind direction, solar radiation and ambient temperature), using the same set of lags as above.
3) NWP values for wind speed, wind direction, solar radiation and ambient temperature, downscaled as explained in Section IV-A and resampled to a 15 minute frequency.
Therefore, a set of 29 explanatory variables was used as input to the statistical models, as shown in Fig. 4.

The forecasting requirements stated by NIE were to have predictions for the 15, 30, 45 minute and 1, 2, 3, 4 hour horizons. However, to increase the generality of this study, we decided to forecast up to 27 hours ahead with 15 minute time steps, in a rolling window approach. Given that the NWP produced by the ECMWF are updated every 12 hours and that they are provided with at most a 7 hour delay, the width of the rolling window was set to 27 hours to always provide forecasts for the same number of horizons. This is outlined in Fig. 5.

The forecasting procedure used in the experiments was based on the idea of producing a forecast every 15 minutes with all the information available at that point in time. For example, at an instant t, we know the values of the past meteorological measures up to instant t, and hence we know the computed rating up to that instant. We also have the most recently produced NWPs, which, in the worst case, were produced 12 hours ago. We use all this information to produce a forecast for the times t + k · 15 min where k = 1, ..., 108, i.e., for every 15 minutes up to 27 hours ahead. A sketch of this feature construction and rolling forecast loop is given below.

Fig. 4. Summary of the data preparation process depending on h, the forecasting horizon.

Fig. 5. Flowchart for the point forecasting process. Train and test sets come from the data preparation process outlined in Fig. 4. First the models are trained using the train set and then used to predict the test set for different values of the horizon h (in minutes).

A. Basic Models: Persistence and Downscaling NWP

Before going into deeper considerations about the statistical forecasting models that we will use, we consider two basic approaches that will serve as benchmarks for the comparison.

The persistence model (also called the naive predictor) is the simplest trivial predictor: it predicts that future values will be the same as the current value. It is a good ground for the evaluation of other algorithms. Any noteworthy algorithm must perform at least as well as or better than the naive predictor, and we will use it here as the basis for comparison.

Thanks to the NWP, we have a good source of information about what is expected to happen in the future. This information is ignored in the persistence model (a fact that renders the comparison with persistence not entirely fair) but can be used to build simple and robust predictors. On the one hand, we could compute the rating directly out of the meteorological values forecast by the NWP. This, however, has the disadvantage that the NWP come from a global model covering a much wider area than the area of interest, and hence local effects are not accounted for. As well, the values predicted for each pole would not be very different from one another, as the coarse grid of the NWP model means that two poles could share the same set of NWP data.

On the other hand, given that, apart from the NWP (interpolated both temporally and spatially as described in Section III-A3), we have locally measured meteorological values, we can apply a simple statistical downscaling procedure to the NWP data. This procedure is based on the idea that the values predicted by an NWP model for a single point relate to what really happens at that point in an approximately constant manner, since the local conditions and spatial configuration affect the meteorological values in a more or less constant way (see Fig. 3).

Thus, we can use the training set to compute a regression¹ for each meteorological variable to express the locally measured values of that variable as a function of the NWP forecast values. Then, for the testing set, we use these learned relations between NWP and actual values to locally adapt each NWP variable for each pole. If we use these locally adapted NWP to compute the rating, we can expect to obtain better results than persistence, especially for longer-term horizons.

¹ As a first approach, and given the strong linear component of the relation between measured and NWP variables (see Fig. 3), a linear regression was used in this case, but more complex alternatives could also be considered.
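A minimal sketch of both benchmarks, assuming 15-minute, datetime-indexed series per pole and a per-variable linear fit for the downscaling; the column names are illustrative:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def persistence_forecast(rating: pd.Series, horizon_min: int) -> pd.Series:
    """Naive predictor: the value at t + h is forecast to equal the value at t."""
    return rating.shift(horizon_min // 15)        # series sampled every 15 minutes

def downscale_nwp(nwp_train: pd.DataFrame, measured_train: pd.DataFrame,
                  nwp_test: pd.DataFrame) -> pd.DataFrame:
    """Locally adapt NWP forecasts to one pole with per-variable linear fits.

    For every variable, a regression measured ~ a * NWP + b is learned on the
    training period and then applied to the test-period NWP values.
    """
    adapted = {}
    for var in nwp_train.columns:                 # e.g. wind_speed, solar_rad, ...
        reg = LinearRegression()
        reg.fit(nwp_train[[var]], measured_train[var])
        adapted[var] = reg.predict(nwp_test[[var]])
    return pd.DataFrame(adapted, index=nwp_test.index)
```

Wind direction would need a circular treatment (for instance, adapting the U and V components separately) rather than a direct linear fit, for the reason discussed in Section III-A3.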
B. Machine Learning Statistical Models

Amongst the panoply of statistical and machine-learning forecasting models, we chose models coming from three different paradigms: the multivariate adaptive regression splines (MARS), the generalized linear models (GLM) and random forests (RF).

1) Generalized Linear Models (GLM): The generalized linear model (GLM) [32], [33] is a flexible generalization of ordinary least squares regression. The GLM generalizes linear regression by employing a link function that defines the relationship between the systematic component of the data and the dependent variable, and by allowing the magnitude of the variance of each measurement to be a function of its predicted value. Generalized linear models were formulated as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression.

2) Multivariate Adaptive Regression Splines (MARS): Multivariate adaptive regression splines (MARS) [34], [35] is a spline regression model that uses a specific class of basis functions as predictors in place of the original data. The MARS basis function transform makes it possible to selectively blank out certain regions of a variable by making them zero, allowing MARS to focus on specific sub-regions of the data. MARS excels at finding optimal variable transformations and interactions, as well as determining the complex data structure that often hides in high-dimensional data.

3) Random Forests (RF): An RF predictor is an ensemble of individual classification tree predictors [36]. For each observation, each individual tree votes for one class and the forest predicts the class that has the majority of votes. The user has to specify the number of randomly selected variables to be searched through for the best split at each node.
The Gini index [37] is used as the splitting criterion. The largest possible tree is grown and is not pruned. The root node of each tree in the forest contains a bootstrap sample from the original data as the training set.

4) Quantile Regression Forests (QRF): The quantile regression forest (QRF) algorithm [38] is a generalization of the RF model which provides a non-parametric and accurate way of estimating conditional quantiles for high-dimensional predictor variables.

C. Evaluation Criteria

The following evaluation criteria are routinely used in forecasting and are used here to assess the accuracy of the different forecasting approaches. In this section, ŷ_{t+h|t} represents the forecast value for time t + h computed at time t, while y_{t+h} represents the observed value at t + h. N is the number of predictions.

1) Normalized mean absolute error (NMAE), as used in the framework of the Twenties European Project [39] (with acceptable values fixed at below 35%):

    NMAE_h = (1/N) Σ_{i=1}^{N} | (ŷ_{t+h|t} − y_{t+h}) / y_{t+h} |        (1)

2) Normalized forecasting bias:

    NBias_h = (1/N) Σ_{i=1}^{N} (ŷ_{t+h|t} − y_{t+h}) / y_{t+h}        (2)
V. RESULTS AND DISCUSSION

In this section we describe the results of the experiments carried out with the data. These experiments can be divided into three categories: a comparison between the different available models to decide if one is best suited to this problem, a comparison of the results amongst the 10 selected poles to check whether the results are consistent, and an application of a probabilistic model and its evaluation.

A. Forecast Model Comparison

The first stage of the experiments was aimed at comparing the different models described in the previous section, to try to determine if one of them is better suited to our problem.

Fig. 6. Prediction errors for all the considered models over pole A1 (using the testing data set).

Fig. 6 shows the results of the comparison of the different models for pole A1. It is clear from this figure that both RF and MARS produce the best results amongst the five models. The results for GLM are not bad either when compared to persistence and downscaling. Only during the first hour does persistence outperform the other models, which confirms that the series is stable in the short term. It is also remarkable that the acceptable value for the NMAE (1), set to 35% in the Twenties project, is beaten even by persistence, and that RF, MARS and GLM manage to produce results of around 15% at most even for horizons over 20 hours.

Fig. 7. Average pole prediction errors for all the considered models (using the testing data set).

To extend the conclusion from pole A1 to the whole set of reference poles, we computed the average error criteria obtained by each model over all poles. Fig. 7 shows these averaged criteria. We verify that the results for pole A1 are representative of the set of poles, and hence the conclusion remains the same: RF and MARS obtain the best results.

Given these results, and considering that MARS is more computationally efficient than RF, we suggest using MARS to compute the line-rating forecasts.

B. Comparing Results Amongst Poles

Once the best model had been selected, we wanted to compare the results amongst the different poles. Fig. 8 shows the results of MARS for each pole.

It is clear that the model manages to capture the inherent behavior of all the series, showing good results for all of them (around 14% NMAE at the longest horizons).

Fig. 8. Prediction errors for each pole using MARS.

However, pole A3 seems to be an outlier, with much worse forecast performance than the other poles.

We verified that the anomaly with A3 is consistent across the RF and GLM models, and that it does not appear when the NWP (from the three groups of input variables defined in Section IV) are not used as inputs to the model. This of course implies that the anomaly is related to these variables, either to the original data or to the downscaling process applied to them. To shed some light on this issue, we can compare the downscaled NWP for several poles.

Fig. 9. Measures of irradiance against NWP forecasts of irradiance for poles A1, A3 and A4.

Fig. 9 shows evidence that the local measures of solar irradiance differ significantly between pole A3 and two other poles, A1 and A4 (of which A4 is located very close to A3). Indeed, the measured irradiance at this pole is much lower than that observed for the neighboring poles. This difference could be due to a shadowing effect on the irradiance sensor of A3 and would require further investigation. However, we assume that this evidence explains the difference in the forecasting performance of the models when applied to pole A3.
Fig. 10. Example of QRF probabilistic forecasts of the rating series (pole A1) for a period in July 2010.

Fig. 11. Reliability of the probabilistic rating forecasts for 1, 2 and 3 hour ahead horizons of the rating series (for pole A1).

C. Probabilistic Forecasting

Finally, we applied QRF, a probabilistic model which estimates, instead of a single (mean) prediction, the whole predictive distribution for each time t.

Fig. 10 shows an example of the output of QRF for pole A1 and some days of July 2010. As we can see, the model is able to produce prediction intervals for each forecast horizon, which can be useful in the decision-making process inside the control room. Observing the red dotted line, which corresponds to the actual computed rating values, we can see that it falls within the confidence intervals most of the time.

The evaluation of probabilistic forecasts is an open issue in the literature [40]. One of the most common tools to evaluate probabilistic forecasts is the reliability diagram. Reliability refers to the degree of similarity between the forecasts and the observations. For probabilistic forecasts, one might think of reliability as a measure of the bias of a probabilistic forecasting system: we expect the empirical coverage achieved by each quantile forecast to equal the specified proportion.

In Fig. 11, we see that the reliability obtained by the QRF model is satisfactory for the three shown horizons and all probabilities. We can however notice that, although the reliability is centered around the ideal values, the model slightly over-predicts for low probabilities and under-predicts for higher probabilities. The slight over-prediction for low quantiles is not necessarily negative in the frame of line-rating forecasting: the model is slightly conservative, which is positive since the rating should not be exceeded.
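A minimal sketch of both steps, producing quantile forecasts and checking their empirical coverage. True QRF [38] estimates conditional quantiles from the observations stored in the leaves; the rough approximation below simply takes quantiles over the per-tree predictions of a standard random forest, which is a common shortcut rather than the exact algorithm used in the paper. Names are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

QUANTILES = [round(q, 2) for q in np.arange(0.05, 1.0, 0.05)]

def quantile_forecasts(model: RandomForestRegressor, X) -> dict:
    """Approximate conditional quantiles from the spread of per-tree predictions."""
    per_tree = np.stack([tree.predict(X) for tree in model.estimators_])
    return {q: np.quantile(per_tree, q, axis=0) for q in QUANTILES}

def empirical_coverage(q_forecasts: dict, observed: np.ndarray) -> dict:
    """Reliability: fraction of observations falling at or below each quantile forecast."""
    return {q: float(np.mean(observed <= qf)) for q, qf in q_forecasts.items()}

# A reliability diagram plots empirical_coverage(...) against the nominal
# levels in QUANTILES; a perfectly reliable forecast lies on the diagonal.
```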
However, reliability is not sufficient to characterize the quality of a probabilistic forecast, since a forecast based on climatology is perfectly reliable and yet has no skill. A model is said to have no skill when it provides the same forecast distribution for all situations.

A skillful model will provide sharper distributions for more certain situations and wider distributions when the uncertainty about the outcome is higher.

Sharpness refers to the degree of concentration of the distribution of the probabilistic forecast. If the density forecast takes the form of a Dirac delta function, it has maximum sharpness, in that it suggests that the forecaster believes that one particular value will occur with complete certainty. Reliability is related to sharpness in the same way bias is related to variance in deterministic forecast evaluation. That is, there is usually a sharpness-reliability performance trade-off in the same way that there is a bias-variance trade-off for point forecast models.

The sharpness of the forecasts provided by QRF is assessed by determining the inter-quantile ranges of the forecast distributions. The idea behind inter-quantile ranges is to examine the size of representative quantile intervals, i.e., the distance between the rating values provided for given forecast quantiles. The distribution of inter-quantile ranges is presented in box plots with the minimum, the maximum, the Q1, Q2, Q3 quartiles and the mean of the distribution for different coverage rates. We consider δ(β) = q(1 − β/2) − q(β/2) the size of the interval, where 1 − β is the nominal coverage rate of the interval, and q(α) are quantiles such that P(X ≤ q(α)) = α, where X is the variable of interest.

Fig. 12. Sharpness of the probabilistic forecasts for several horizons and three different intervals (for pole A1).

In Fig. 12 the sharpness for three coverage rates, 80%, 60% and 40%, is presented. As can be expected, the median inter-quantile range shrinks as the coverage rate decreases. For all coverage rates, the inter-quantile range increases with the forecast horizon due to the increasing forecast uncertainty. Also, the forecast can be said to be skillful, since the minimum and especially the maximum observed inter-quantile distances are significantly different from the median values.
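A minimal sketch of this sharpness measure, reusing the quantile forecasts of the earlier sketch; β and the coverage rates follow the definition above:

```python
import numpy as np

def interquantile_range(q_forecasts: dict, coverage: float) -> np.ndarray:
    """delta(beta) = q(1 - beta/2) - q(beta/2), with nominal coverage 1 - beta."""
    beta = round(1.0 - coverage, 2)
    return (q_forecasts[round(1 - beta / 2, 2)]
            - q_forecasts[round(beta / 2, 2)])

# Median sharpness per coverage rate (80%, 60% and 40%) for one horizon:
# {c: float(np.median(interquantile_range(q_forecasts, c))) for c in (0.8, 0.6, 0.4)}
```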
Fig. 13. Example of a possible operational forecast using a safety margin: the 1 hour ahead 10% quantile and the same minus the maximum difference with the actual computed data in the last 6 months (for pole A1).

Finally, to illustrate the operational usefulness of this approach, Fig. 13 shows the actual computed rating together with the predicted values of the 10% quantile of a probabilistic forecast for 1 hour ahead (green line). This means that the computed values are expected to fall below the forecast only 10% of the time. We see that, at some points, the forecasts are higher than the actual computed values. To ensure the safety of the lines, we can subtract from the forecasts a safety margin of, for example, the maximum difference between the 10% quantile and the computed rating over the last 6 months. Such a "safe" forecast is shown in blue, and gives an idea of what could be achieved with this approach.

Note that the safety margin taken here is very conservative. A smaller safety margin could be chosen, or the safety margin could be set dynamically by analyzing the NWP data to determine which situations have higher than usual uncertainty and require larger safety margins. Having a probabilistic forecast allows the operator to choose the quantile that provides an acceptable level of risk.
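A minimal sketch of such a safety-margin forecast, built from the 10% quantile forecasts of the previous sketches; the 6-month trailing window and the series names are illustrative:

```python
import pandas as pd

def safe_forecast(q10: pd.Series, computed_rating: pd.Series,
                  window: str = "182D") -> pd.Series:
    """Shift the 10% quantile forecast down by its worst recent exceedance.

    The margin at time t is the largest positive difference between the 10%
    quantile forecast and the actually computed rating over the trailing
    window, so the 'safe' forecast would not have exceeded the rating there.
    Both inputs are assumed to be aligned, datetime-indexed 15-minute series.
    """
    exceedance = (q10 - computed_rating).clip(lower=0.0)
    margin = exceedance.rolling(window).max()
    return q10 - margin

# Illustrative usage for pole A1, 1 hour ahead:
# safe = safe_forecast(q10_1h_ahead, rating_a1)
```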
VI. CONCLUSIONS AND PERSPECTIVES

In this paper we have described a dynamic line rating (DLR) forecasting study and presented an accurate application to predict future values of the rating for two conductor lines. The procedure is based on data transformations, and the results suggest using multivariate adaptive regression splines (MARS) as the core regression method. Our approach obtains good error results, which rank far below the key project indicators defined in the Twenties Project for the DLR problem.

These results prove the feasibility of computing line-rating forecasts that can be used in the daily operation of the power system to lift some constraints while maintaining safe and reliable operating conditions. Moreover, together with its prediction abilities, the presented method can also be used to compute the actual values of the dynamic ratings (or nowcasts). Thus, it is expected that this approach can permit a better use of DLR as a tool to protect and monitor, in real time, T&D infrastructures with a high penetration of distributed generation and microgrids.

However, before line-rating forecasts can enter the control room, further analysis and refinement should be carried out, both from a purely forecasting perspective and from a network operation perspective. From a forecasting perspective, several points should be addressed. In the present study, a fixed set of reasonable explanatory variables was used as input to the forecasting models. Further analysis of the input variables through computational intelligence feature selection methods could lead to the selection of a variable subset that yields even more accurate forecasts. Also, the forecast ratings were provided for a subset of reference poles. The next step would be to compute line-rating forecasts for all reference poles and then derive the rating forecast for non-reference poles using a speed-up ratio approach. In this way a more precise total line-rating forecast could be obtained. Once the complete line rating has been computed, the behavior of the forecast errors should be further investigated in order to better understand the possible situations in which the models over-estimate the line rating.

Such an investigation would be necessary in order to define appropriate safety margins around the rating forecasts. These safety margins should ensure that the lines are not overloaded, while allowing more capacity to be exploited than the fixed seasonal ratings allow. An evident further refinement would be to provide weather-dependent safety margins that dynamically assess the risk of over-estimating the line rating.

From a network operation perspective, the infrastructure protection and capacity gains that can be obtained from using the forecasts provided in this study are clear, and an operational rating forecasting tool can be readily implemented. This tool will provide a capacity forecast that integrates a satisfactory safety margin, as well as nowcasts with risk-level alarms. Both products will be provided to control room operators in a consultative capacity in order to undergo an initial operational assessment and allow for user feedback to be collected, finally allowing for the inclusion of these forecasts in the standard network operating procedures.

ACKNOWLEDGMENT

The authors would like to thank Jeremy Colandairaj (Northern Ireland Electricity) for his assistance during this study.

REFERENCES

[1] H.-M. Nguyen, J.-L. Lilien, and P. Schell, "Dynamic line rating and ampacity forecasting as the keys to optimise power line assets with the integration of RES. The European project Twenties demonstration inside Central Western Europe," in Proc. 22nd Int. Conf. Exhibit. Elect. Distrib., Jun. 2013, pp. 1–4.
[2] A. L'Abbate, G. Fulli, F. Starr, and S. D. Peteves, "Distributed power generation in Europe: Technical issues for further integration," European Commission, Joint Research Centre, Institute for Energy, Tech. Rep. JRC 43063, 2008. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.370.3783&rep=rep1&type=pdf
[3] European Commission, "The 2020 climate and energy package." [Online]. Available: http://ec.europa.eu/clima/policies/strategies/2020/index_en.htm, accessed Sep. 15, 2014.
[4] R. Lopez and J.-L. Coullon, "Enhancing distributed generation penetration in smart grids through dynamic ratings," in Proc. IEEE PowerTech, Jun. 2013, pp. 1–5.
[5] P. Schell et al., "Large penetration of distributed productions: Dynamic line rating and flexible generation, a must regarding investment strategy and network reliability," in Proc. Integr. Renew. Distrib. Grid, CIRED, May 2012, pp. 1–5.
[6] P. D. Diamantoulakis, V. M. Kapinas, and G. K. Karagiannidis, "Big data analytics for dynamic energy management in smart grids," Big Data Res., vol. 2, no. 3, pp. 94–101, Sep. 2015.
[7] M. W. Davis, "A new thermal rating approach: The real time thermal rating system for strategic overhead conductor transmission lines—Part I: General description and justification of the real time thermal rating system," IEEE Trans. Power App. Syst., vol. PAS-96, no. 3, pp. 803–809, May 1977.
[8] J. Hall and A. Deb, "Prediction of overhead transmission line ampacity by stochastic and deterministic models," IEEE Trans. Power Del., vol. 3, no. 2, pp. 789–800, Apr. 1988.
[9] S. D. Foss, S. H. D. S. Lin, R. A. Maraio, and H. N. M. P. C. Schrayshuen, "Effect of variability in weather conditions on conductor temperature and the dynamic rating of transmission lines," IEEE Trans. Power Del., vol. 3, no. 4, pp. 1832–1841, Oct. 1988.
[10] S. D. Foss and R. Maraio, "Dynamic line rating in the operating environment," IEEE Trans. Power Del., vol. 5, no. 2, pp. 1095–1105, Apr. 1990.
[11] S. D. Foss and R. Maraio, "Evaluation of an overhead line forecast rating algorithm," IEEE Trans. Power Del., vol. 7, no. 3, pp. 1618–1627, Jul. 1992.
[12] M. Simms and L. Meegahapola, "Comparative analysis of dynamic line rating models and feasibility to minimise energy losses in wind rich power networks," Energy Convers. Manage., vol. 75, pp. 11–20, 2013.
[13] S. Jupe, M. Bartlett, and K. Jackson, "Dynamic thermal ratings: The state of the art," in Proc. CIRED 21st Int. Conf. Elect. Distrib., Frankfurt, Germany, 2011, pp. 1–4.
[14] C. R. Black and W. A. Chisholm, "Key considerations for the selection of dynamic thermal line rating systems," IEEE Trans. Power Del., vol. 30, no. 5, pp. 2154–2162, Oct. 2015.
[15] L. Ren, J. Xiuchen, and S. Gehao, "Research for dynamic increasing transmission capacity," in Proc. Int. Conf. Condition Monitor. Diagnosis, 2008, pp. 720–722.
[16] J. Raniga and R. K. Rayudu, "Stretching transmission line capabilities—A Transpower investigation," The Institution of Professional Engineers in New Zealand, 1999.
[17] J. Raniga and R. Rayudu, "Dynamic rating of transmission lines—A New Zealand experience," in Proc. IEEE Power Eng. Soc. Winter Meeting, 2000, vol. 4, pp. 2403–2409.
[18] J. Engelhardt and S. P. Basu, "Design, installation, and field experience with an overhead transmission dynamic line rating system," in Proc. IEEE Transm. Distrib. Conf., Sep. 1996, pp. 366–370.
[19] T. L. Le, M. Negnevitsky, and M. Piekutowski, "Expert system application for the loading capability assessment of transmission lines," IEEE Trans. Power Syst., vol. 10, no. 4, pp. 1805–1812, Nov. 1995.
[20] E. Siwy, "Risk analysis in dynamic thermal overhead line rating," in Proc. Int. Conf. Probabilistic Meth. Appl. Power Syst., Jun. 2006, pp. 1–5.
[21] H. Yip, C. An, M. Aten, and R. Ferris, "Dynamic line rating protection for wind farm connections," in Proc. IET 9th Int. Conf. Develop. Power Syst. Protect., 2008, pp. 693–697.
[22] S. Abdelkader et al., "Dynamic monitoring of overhead line ratings in wind intensive areas," in Proc. Eur. Wind Energy Conf., 2009.
[23] M. Lange and U. Focken, "Estimation of the increased ampacity of overhead power lines in weather conditions with high wind power production," presented at the 8th International Workshop on Large-Scale Integration of Wind Power into Power Systems, Bremen, Germany, 2009.
[24] A. McLaughlin, M. Alshamali, J. Colandairaj, and S. Connor, "Application of dynamic line rating to defer transmission network reinforcement due to wind generation," in Proc. 46th Int. Univ. Power Eng. Conf., Sep. 2011, pp. 1–6.
[25] A. Michiorri and P. C. Taylor, "Forecasting real-time ratings for electricity distribution networks using weather forecast data," in Proc. 20th Int. Conf. Exhibit. Elect. Distrib.—Part 1, Jun. 2009, pp. 1–4.
[26] D.-M. Kim, J.-M. Cho, H.-S. Lee, H.-S. Jung, and J.-O. Kim, "Prediction of dynamic line rating based on assessment risk by time series weather model," in Proc. Int. Conf. Probabilistic Meth. Appl. Power Syst., 2006, pp. 1–7.
[27] J. Fu, D. Morrow, and S. Abdelkader, "Modelling and prediction techniques for dynamic overhead line rating," in Proc. IEEE Power Energy Soc. Gen. Meeting, Jul. 2012, pp. 1–7.
[28] T. Ringelband, P. Schafer, and A. Moser, "Probabilistic ampacity forecasting for overhead lines using weather forecast ensembles," Elect. Eng., vol. 95, no. 2, pp. 99–107, Jun. 2013.
[29] D. Morrow, J. Fu, and S. Abdelkader, "Experimentally validated partial least squares model for dynamic line rating," IET Renew. Power Gen., vol. 8, no. 3, pp. 260–268, Apr. 2014.
[30] J. Black, J. Colandairaj, S. Connor, and B. O'Sullivan, "Equipment and methodology for the planning and implementation of dynamic line ratings on overhead transmission circuits," in Proc. IEEE Int. Modern Elect. Power Syst., Sep. 2010, pp. 1–6.
[31] CIGRE Working Group, "Thermal behaviour of overhead conductors," CIGRE, Tech. Brochure 207, 2002.
[32] J. A. Nelder and R. W. M. Wedderburn, "Generalized linear models," J. Roy. Stat. Soc. Ser. A, vol. 135, no. 3, pp. 370–384, 1972.
[33] Wikipedia, "Generalized linear model," 2011. [Online]. Available: http://en.wikipedia.org/wiki/Generalized_linear_models
[34] J. H. Friedman, "Multivariate adaptive regression splines," Ann. Stat., vol. 19, no. 1, pp. 1–67, 1991.
[35] D. Steinberg and P. L. Colla, MARS User Guide. San Diego, CA, USA: Salford Systems, 1999.
[36] L. Breiman, "Random forests," Mach. Learn., vol. 45, no. 1, pp. 5–32, 2001, doi: 10.1023/A:1010933404324.

[37] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees. Belmont, CA, USA: Wadsworth, 1984.
[38] N. Meinshausen, "Quantile regression forests," J. Mach. Learn. Res., vol. 7, pp. 983–999, 2006.
[39] Twenties consortium, "Project objectives & KPI," Tech. Rep. Deliverable 2.1, 2010. [Online]. Available: http://www.twenties-project.eu/documents/D_2_1_Objectives_KPIs_Final.pdf
[40] P. E. McSharry, P. Pinson, and R. Girard, "Methodology for the evaluation of probabilistic forecasts," SafeWind European Project, Tech. Rep. Dp-6.2, 2009. [Online]. Available: http://www.safewind.eu/images/Articles/Deliverables/swind_deliverable_dp-6.2_forecast_verification_v2.1.pdf

José L. Aznarte, photograph and biography not available at the time of publication.

Nils Siebert, photograph and biography not available at the time of publication.
