
risks

Article
Evaluating the Effectiveness of Modern Forecasting Models in
Predicting Commodity Futures Prices in Volatile
Economic Times
László Vancsura 1, Tibor Tatay 2,* and Tibor Bareith 3

1 Doctoral School of Management and Business Administration, Hungarian University of Agriculture and Life
Sciences, Kaposvári Campus, 7400 Kaposvár, Hungary
2 Department of Statistics, Finances and Controlling, Széchenyi István University, 9026 Győr, Hungary
3 Department of Investment, Finance and Accounting, Hungarian University of Agriculture and Life Sciences,
Kaposvári Campus, 7400 Kaposvár, Hungary
* Correspondence: [email protected]

Abstract: The paper seeks to answer the question of which forecasting techniques give the most accurate results in the futures commodity market, and how price forecasting can contribute to risk management. A total of two families of models (decision trees, artificial intelligence) were used to produce estimates for 2018 and 2022 over 21- and 125-day horizons. The main findings of the study are that estimation accuracy is higher in a calm economic environment (1.5% vs. 4% error), and that the AI-based estimation methods provide the most accurate estimates for both time horizons, i.e., over both short and medium time periods. Incorporating these forecasts into enterprise risk management (ERM) can significantly help to hedge purchase prices. Artificial intelligence-based models are becoming increasingly widely available and can achieve significantly better accuracy than other approximations.

Keywords: commodity market; price forecast; risk management; time series; artificial intelligence;
neural network; planning

JEL Classification: Q02; C22; C45; C53; G32

Citation: Vancsura, László, Tibor Tatay, and Tibor Bareith. 2023. Evaluating the Effectiveness of Modern Forecasting Models in Predicting Commodity Futures Prices in Volatile Economic Times. Risks 11: 27. https://doi.org/10.3390/risks11020027

Academic Editors: Andrea Bencsik, Janos Fustos and Maria Jakubik

Received: 28 December 2022; Revised: 9 January 2023; Accepted: 17 January 2023; Published: 22 January 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Risks 2023, 11, 27. https://doi.org/10.3390/risks11020027 https://www.mdpi.com/journal/risks

1. Introduction

Forecasting commodity prices can affect the performance of companies. The turnover of trading firms and firms producing goods is affected by future prices. The costs of users of commodities are also determined by the prices of these products. The magnitude of profits naturally affects the value of firms (Carter et al. 2017). Commodity prices should be taken into account when planning revenues and costs. Hedging strategies should be developed to reduce profit volatility. Hedging the risk of price changes also has costs. Therefore, more accurate forecasting of commodity price movements is a value-adding factor in the above considerations.

Changes in commodity prices also affect the cost of credit (Donders et al. 2018). Of course, the volatility of a firm's profits also affects the perception of credit risk. Higher volatility leads to higher financing costs. Proper forecasting of commodity price movements can support the design of a strategy to reduce profit volatility.

A variety of methods are used to forecast the prices of different commodities and shares. They are basically grouped into three main categories. The first includes traditional statistical methods, the second some kind of artificial intelligence-based methods, and the third so-called hybrid methods (Kim and Won 2018; Vidal and Kristjanpoller 2020; Zolfaghari and Gholami 2021). In our paper, we focus only on the group of predictive models based on artificial intelligence, which includes the following algorithms: ANNs (Artificial Neural Networks), DNNs (Deep Neural Networks), GAs (Genetic Algorithms), SVM (Support Vector Machine), and FNNs (Fuzzy Neural Networks). Artificial intelligence-based models have several advantages over traditional statistical models because of their complexity and much higher predictive accuracy. Because of their learning ability, AI-based models are able to recognise patterns in the data, such as non-linear movements. Exchange rates exhibit non-stationary and non-linear movements that traditional statistical models are unable to detect, and AI-based methodologies have taken the lead in this area over time.
In their study, Gonzalez Miranda and Burgess (1997) modelled the implied volatility
of IBEX35 index options using a multi-layer perceptron neural network over the period
November 1992 to June 1994. Their experience shows that forecasting with nonlinear NNs
generally produces results that dominate over forecasts from traditional linear methods.
This is due to the fact that the NN takes into account potentially complex nonlinear
relationships that traditional linear models cannot handle well.
Hiransha et al. (2018) produced forecasts of stock price movements on the NSE and
NYSE. They based their analysis on the following models: multi-layer perceptual model,
RNN, LSTM (Long-Short-Term Memory) and CNN (Convolutional Neural Network).
Based on empirical analysis, CNN performed the best. The results were also compared
with the outputs of the ARIMA method and in this comparison, CNN was also the optimal
choice.
Ormoneit and Neuneier (1996) used a multilayer perceptron and a density estimator neural network to predict the volatility of the DAX index for the period January 1983 to May 1991. In comparing the two models, they concluded that the density estimator neural network outperformed the perceptron method without a specific target distribution.
Hamid and Iqbal (2004) applied the ANN methodology to predict the volatility of S&P
500 index futures. From their empirical analysis, they concluded that ANNs’ forecasts are
better than implied volatility estimation models.
Ou and Wang (2009) conducted research on trend forecasting of the Hang Seng index
using tree-based classification, K-nearest neighbor (KNN), SVM, Bayesian clustering and
neural network models. The final result of the analysis showed that SVM is able to
outperform the other predictive methods.
Ballings et al. (2015) compared the AdaBoost, Random Forest, Kernel factory, SVM,
KNN, logistic regression and ANN methods using stock price data from European compa-
nies. They tried to predict stock price trajectories one year ahead. The final result showed
that Random Forest was the best performer.
Nabipour et al. (2020) compared the predictive ability of nine different machine
learning and two deep learning algorithms (Recurrent Neural Network, Long-Short-Term
Memory) on stock data of financial, oil, non-metallic mineral and metallic materials compa-
nies on the Tehran Stock Exchange. They concluded that RNN and LSTM outperformed all
other predictive models.
Long et al. (2020) used machine learning (Random Forest, Adaptive Boosting), bi-
directional deep learning (BiLSTM) and other neural network models to investigate the
predictability of Chinese stock prices. BiLSTM was able to achieve the highest performance,
far outperforming the other forecasting methods.
Fischer and Krauss (2018) examined data from the S&P500 index between 1992 and
2015. Random Forest, logistic regression and LSTM were used for the forecasts. Their final
conclusion was that the long short-term memory algorithm gave the best results.
Nelson et al. (2017) applied the multi-layer perceptron, Random Forest, and LSTM
models to Brazilian stock market data to answer the question of which of the three models
is the most accurate predictor. They concluded that the LSTM was the most accurate.
Nikou et al. (2019) analysed the daily price movements of the iShares MSCI UK
exchange-traded fund over the period January 2015 to June 2018. ANN, Support Vector
Machine (SVM), Random Forest and LSTM models were used to generate the predicted
values. LSTM obtained the best score, while SVM was the second-most accurate.

Recent research (van der Lugt and Feelders 2019; Hajiabotorabi et al. 2019) comparing
the predictive ability of ANNs and RNNs concluded that RNNs can outperform traditional
neural networks. Also prominent among these methods is the long-short-term memory
(LSTM) model, which has been applied to a wide range of sequential datasets. This model
variant has the advantage of showing high adaptability in the analysis of time series
(Petersen et al. 2019).
In their research, Thi Kieu Tran et al. (2020) demonstrated that the temporal impact of
past information is not taken into account by ANNs for predicting time series, and therefore
deep learning methods (DNN) have recently been increasingly used. A prominent group
of these are Recurrent Neural Networks (RNNs), which have the advantage of providing
feedback in their architecture.
Kaushik and Giri (2020) compared LSTM, vector autoregression (VAR) and SVM for
predicting exchange rate changes. Their analysis revealed that the LSTM model outper-
formed both SVM and VAR methods in forecasting.
Basak et al. (2019) used XGBoost, logistic regression, SVM, ANN and Random Forest
to predict stock market trends. The results showed that Random Forest outperformed the
others.
Siami-Namini et al. (2018) examined data from the S&P500 and Nikkei 225 indices in
their study. The final conclusion was that the superiority of LSTM over ARIMA prevailed.
Liu (2019) focused on the prediction of the S&P500 index and Apple stock price in
his study. He concluded that over a longer forecasting time horizon, LSTM and SVM
outperform the GARCH model.
Based on the above, the LSTM is considered to be quite good in terms of predictive
performance, but it has a serious shortcoming, namely that it cannot represent the multi-
frequency characteristics of time series, and therefore it does not allow the frequency
domain of the data to be modelled. To overcome this problem, Zhang et al. (2017) proposed
the use of Fourier transform to extract time-frequency information. In their research, they
combined this method with a neural network model; however, these two types are mutually
exclusive since the information content of the time domain is not included in the frequency
domain and the information of the frequency domain does not appear in the time domain.
This ambiguity is addressed by the wavelet transform (WT). This and the ARIMA model
were compared by Skehin et al. (2018) with respect to FAANG (Facebook, Apple, Amazon,
Netflix, Google) stocks listed on the NASDAQ. They concluded that in all cases except
Apple, ARIMA outperformed WT-LSTM for the next-day stock price prediction.
Liang et al. (2019) investigated the predictive performance of the traditional LSTM
and the LSTM model augmented with wavelet transform on S&P500 index data. Their
work demonstrated that WT-LSTM can outperform the traditional long-short-term memory
method.
Liang et al. (2022) studied the evolution of the gold price. They propose the application of a novel decomposition model to predict the price. First, the series is decomposed into sub-layers of different frequencies. Then, a joint model combining long short-term memory, convolutional neural networks and a convolutional block attention module (LSTM-CNN-CBAM) forecasts each sublayer. The last step is to summarise the partial results. Their results
show that the collaboration between LSTM, CNN and CBAM can enhance the modelling
capability and improve prediction accuracy. In addition, the ICEEMDAN decomposition
algorithm can further improve the accuracy of the prediction, and the prediction effect is
better than other decomposition methods.
Nickel is becoming an increasingly important raw material as electromobility develops.
Ozdemir et al. (2022) discuss the medium- and long-term price forecast of nickel in their
study. They employ two advanced deep learning architectures, namely LSTM and GRU.
The MAPE criterion is used to evaluate the forecasting performance. For both models,
their forecasting ability has been demonstrated. In addition to the prediction capability,
the speed of the calculations is tested. When processing high resolution data, speed can
be an important factor. The study found that GRU networks were 33% faster than LSTM
networks.
For copper price prediction, Luo et al. (2022) propose a two-phase prediction architec-
ture. The first phase is the initial forecasting phase. The second phase is error correction. In
the first phase, factors that could affect the price of copper are selected. After selecting the
three most influential factors, a GALSTM model is developed. This is necessary to make
the initial forecasts. A 30-year historical data series is then used to validate the model.
Companies are diverse in terms of financial risk (Ali et al. 2022). A more accurate
price forecasting model can contribute to better enterprise risk management. It can help
reduce earnings volatility. It can also support the development of a more effective hedging
strategy.
The study aims to test modern forecasting techniques. A total of two families of models
(decision trees, artificial intelligence) will be used to produce estimates. The question is
which of the tested techniques gives more accurate forecasting results. The tests are carried out for eight commodity products across the categories of oil, gas, industrial and precious metals.

2. Data and Methods


The most commonly used metrics in the literature for evaluating the predictive models
and assessing their accuracy are root mean square error (RMSE), mean absolute error
(MAE), and mean absolute percentage error (MAPE) (Nti et al. 2020).
(a) Root mean squared error (RMSE): this performance indicator shows an estimation of the residuals between actual and predicted values.

RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 }   (1)

where \hat{y}_i is the estimated value produced by the model, y_i is the actual value, and n is the number of observations.

(b) Mean Absolute Error (MAE): this indicator measures the average magnitude of the error in a set of predictions.

MAE = \frac{1}{n} \sum_{i=1}^{n} | y_i - \hat{y}_i |   (2)

(c) Mean Absolute Percentage Error (MAPE): this indicator measures the average magnitude of the error in a set of predictions and shows the deviations in percentage.

MAPE = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|   (3)

The forecasts are more reliable and accurate if these indicators have lower values. It is important to note that RMSE penalizes larger deviations more due to squaring, so this metric can give more extreme values than MAE. The former should be interpreted as a value in the units of the data, while the MAPE should be interpreted as a percentage (deviations expressed as a percentage of the actual value). For this reason, the MAPE can be used to compare different instruments because it does not depend on the nominal size of the price. As our study examined instruments from around the world and the effects of two different negative economic events, we used the MAPE indicator for comparability in the overall assessment of the models.
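As a minimal illustration, the three metrics of Equations (1)–(3) can be computed in a few lines of Python; the function names and sample values are our own, for illustration only:

```python
import numpy as np

def rmse(y, y_hat):
    # Equation (1): squaring penalizes large deviations more heavily
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    # Equation (2): average absolute deviation in the units of the data
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean(np.abs(y - y_hat))

def mape(y, y_hat):
    # Equation (3): scale-independent, deviations as a fraction of the actual value
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean(np.abs((y - y_hat) / y))

actual = [100.0, 102.0, 98.0, 101.0]
predicted = [99.0, 103.0, 97.0, 103.0]
print(rmse(actual, predicted), mae(actual, predicted), mape(actual, predicted))
```

Because MAPE divides each deviation by the actual value, the same functions can be compared across instruments with very different price levels, which is exactly why it is used for the overall assessment here.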
Support Vector Machine (SVM)
The SVM is used to predict time series where the behaviour of the variables is not constant or where classical methods are not justified due to high complexity. SVRs (Support Vector Regressions) are a subtype of SVM used to predict future price paths. SVM is able to eliminate irrelevant and high-variance data in the predictive process and improve the accuracy of the prediction. SVM is based on structural risk minimization, taken from statistical learning theory. SVM can be used in financial data modelling as long as there are no strong assumptions. SVM is based on a linear classification of the data that seeks to maximize reliability. The optimal fit of the data is achieved using quadratic programming methods, which are well-known methods for solving constrained problems. Prior to linear classification, the data is propagated through a phi (Φ) function into a wider space so that the algorithm can classify highly complex data. This algorithm thus uses a nonlinear mapping to convert the main data to a higher dimension and a linear optimal separating hyperplane (Nikou et al. 2019).
The decision boundary is defined in Equation (4), where SVMs map the input vectors x_i ∈ R^d into a high-dimensional feature space Φ(x_i) ∈ H, and Φ is represented through the kernel function K(x_i, x_j).

f(x) = \operatorname{sgn}\left( \sum_{i=1}^{n} \alpha_i y_i K(x, x_i) + b \right)   (4)

SVMs convert non-separable classes into separable ones with linear, non-linear, sigmoid, radial basis and polynomial kernel functions. The formulas of the kernel functions are shown in Equations (5)–(7), where γ is the constant of the radial basis function and d is the degree of the polynomial function. The two adjustable parameters in the sigmoid function are the slope α and the intercept constant c.

RBF: K(x_i, x_j) = \exp(-\gamma \| x_i - x_j \|^2)   (5)

Polynomial: K(x_i, x_j) = (x_i \cdot x_j + 1)^d   (6)

Sigmoid: K(x_i, x_j) = \tanh(\alpha x_i^T x_j + c)   (7)

SVMs are often very efficient in high-dimensional spaces and in cases where the number of dimensions is larger than the number of samples. However, when the number of features is much larger than the number of samples, the regularisation term and the kernel function must be chosen carefully to avoid overfitting (Nabipour et al. 2020). The hyperparameters of the SVM model can be found in Table 1.

Table 1. Hyperparameters of SVM model.

Model | Parameter | Value
SVR | Kernel | RBF
SVR | C | 100
SVR | Gamma | 0.1

Source: own editing.
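As a sketch of how the Table 1 configuration could be set up with scikit-learn; the windowed-feature framing and the synthetic price path are our illustrative assumptions, not the study's exact pipeline:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 300)) + 100  # synthetic price path stand-in

# Illustrative supervised framing (our assumption): predict the next price
# from the previous 5 closing prices
window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

# Hyperparameters from Table 1: RBF kernel, C = 100, gamma = 0.1
model = SVR(kernel="rbf", C=100, gamma=0.1)
model.fit(X[:-21], y[:-21])        # learn on all but the last 21 days
forecast = model.predict(X[-21:])  # 21-day forecast horizon
print(forecast.shape)
```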

Random Forest (RF)


Random Forest (RF) is a combination of several decision trees, developed to achieve
better prediction performance than when using only a single decision tree. Each decision
tree in RF is built on a bootstrap sample using binary recursive partitioning (BRP). In
the BRP algorithm, a random subset of input vectors is selected and then all possible
combinations of all input vectors are evaluated. The resulting best split is then used
to create a binary classification. These processes are repeated recursively within each
successive partition and are terminated when the partition size becomes equal to 1. Two
important fine-tuning parameters are used in the modelling of RF: one is the number of
branches in the cluster (p), and the other is the number of input vectors to be sampled in
each split (k). Each decision tree in RF is learned from a random sample of the data set
(Ismail et al. 2020).

To build the RF model, three parameters must be defined beforehand: the number
of trees (n), the number of variables (K) and the maximum depth of the decision trees
(J). Learning sets (D_i, i = 1, ..., n) and variable sets (V_i, i = 1, ..., n) of decision trees
are created by random sampling with replacement, which is called bootstrapping. Each
decision tree of depth J generates a weak learner τi from each set of learners and variables.
The hyperparameters of the RF model can be found in Table 2. Then, these weak learners are used to predict the test data, and an ensemble of n trees \{\tau_i\}_{i=1}^{n} is generated. For a new sample, the RF prediction can be defined as follows (Park et al. 2022):

\hat{R}(x) = \frac{1}{n} \sum_{i=1}^{n} \hat{r}_i(x)   (8)

where \hat{r}_i(x) is the predicted value of \tau_i.

Table 2. Hyperparameters of RF model.

Model | Parameter | Value
RFR | Max depth | 10
RFR | Number of trees | 100

Source: own editing.
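The Table 2 configuration can be sketched with scikit-learn, and the averaging of Equation (8) can be checked directly against the individual trees; the data and feature framing are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 300)) + 100  # synthetic price path

window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

# Table 2: 100 trees, max depth 10; each tree is fitted on a bootstrap sample
rf = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=42)
rf.fit(X[:-21], y[:-21])

# Equation (8): the forest prediction is the average of the n tree predictions
pred = rf.predict(X[-21:])
manual = np.mean([tree.predict(X[-21:]) for tree in rf.estimators_], axis=0)
print(np.allclose(pred, manual))
```

The `allclose` check confirms that the regressor's output is exactly the mean over the fitted trees, which is what Equation (8) states.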

Extreme Gradient Boost (XGBoost)


XGBoost is a model based on decision trees. Compared to other tree-based models,
XGBoost can achieve higher estimation accuracy and much faster learning speeds due to
parallelisation and decentralisation. Other advantages of the XGBoost method are that it uses regularisation to prevent overfitting, has built-in cross-validation capability, and handles missing data natively. The XGBoost model incorporates multiple classification
and regression trees (CART). XGBoost performs binary splitting and generates a decision
tree by segmenting a subset of the data set using all predictors. This creates two subnodes.
The XGBoost model with multiple CART can be defined as follows:

\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in F, \quad F = \{ f(x) = w_{q(x)} \} \ (q: R^m \to T, \ w \in R^T)   (9)

where K is the number of trees and F is the space of all possible CARTs; each f_k corresponds to an independent tree and the weights of its leaves. The objective function of the XGBoost model can be defined as follows:

Obj = \sum_i l(y_i, \hat{y}_i) + \sum_k \Omega(f_k), \qquad \Omega(f) = \gamma T + \frac{1}{2} \lambda \| w \|^2   (10)

where l is a loss function measuring the difference between y_i and \hat{y}_i, and \Omega(f_k) is a regularisation term that prevents overfitting by penalising the complexity of the model. Assuming that \hat{y}_i^{(t-1)} is the predicted value after step t−1, the objective function at step t can be written as (Han et al. 2023):

Obj^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t)   (11)

The hyperparameters of the XGBoost model can be found in Table 3.

Table 3. Hyperparameters of XGBoost model.

Model | Parameter | Value
XGBoost | Max depth | 10
XGBoost | Number of trees | 100

Source: own editing.
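The regularised objective of Equation (10) can be illustrated numerically; the loss, leaf weights, γ and λ below are made-up values, not taken from the study:

```python
import numpy as np

def omega(leaf_weights, gamma=1.0, lam=1.0):
    # Complexity penalty from Equation (10): gamma * T + (1/2) * lambda * ||w||^2,
    # where T is the number of leaves and w the vector of leaf weights
    T = len(leaf_weights)
    return gamma * T + 0.5 * lam * np.sum(np.square(leaf_weights))

def objective(y, y_hat, trees_leaf_weights, gamma=1.0, lam=1.0):
    # Equation (10): squared-error loss l plus the penalty of every tree
    loss = np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2)
    return loss + sum(omega(w, gamma, lam) for w in trees_leaf_weights)

y = [1.0, 2.0, 3.0]
y_hat = [1.1, 1.9, 3.2]
trees = [np.array([0.5, -0.5]), np.array([0.2, 0.1, -0.3])]
print(round(float(objective(y, y_hat, trees)), 4))  # 5.38
```

The penalty grows with both the number of leaves (via γT) and the magnitude of the leaf weights (via λ‖w‖²), which is how the model trades accuracy against complexity.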

Gated Recurrent Unit (GRU)


GRU is a type of recurrent neural network (RNN) that can provide outstanding
performance in predicting time series. It is similar to the other neural network model
(LSTM) discussed in more detail in the next subchapter, but GRU has lower computational
power requirements, which can greatly improve learning efficiency.
It has the same input and output structure as a simple RNN. The internal structure
of the GRU unit contains only two gates: the update gate zt , and the reset gate rt . The
update gate zt determines the value of the previous memory saved for the current time,
and the reset gate rt determines how the new input information is to be combined with
the previous memory value. Unlike the LSTM algorithm, the zt update gate can both forget
and select the memory contents, which improves computational performance and reduces
runtime requirements. The GRU context can be defined by the following equations:

z_t = \sigma(W_z h_{t-1} + U_z x_t)   (12)

r_t = \sigma(W_r h_{t-1} + U_r x_t)   (13)

\tilde{h}_t = \tanh(W_0 (h_{t-1} \otimes r_t) + U_0 x_t)   (14)

h_t = z_t \otimes \tilde{h}_t + (1 - z_t) \otimes h_{t-1}   (15)
where \sigma(\cdot) is the logistic sigmoid function, i.e., \sigma(x) = 1/(1 + e^{-x}); h_{t-1} is the hidden state of the neuron at the last moment. W_z and U_z are the weight matrices of the update gate, W_r and U_r are the weight matrices of the reset gate, and W_0 and U_0 are the weight matrices of the temporary output. x_t is the input value at time t, while \tilde{h}_t and h_t are the information vectors holding the temporary unit state and the hidden layer output at time t (Xiao et al. 2022). The hyperparameters of the GRU model can be found in Table 4.

Table 4. Hyperparameters of GRU model.

Model | Parameter | Value
GRU | Hidden layers | 2
GRU | Hidden layer neuron count | 150
GRU | Batch size | 32
GRU | Epochs | 100
GRU | Activation | tanh
GRU | Learning rate | 0.001
GRU | Optimizer | Adam

Source: own editing.
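Equations (12)–(15) describe a single GRU update step; a minimal NumPy sketch with random weights, purely illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W0, U0):
    """One GRU update following Equations (12)-(15)."""
    z = sigmoid(Wz @ h_prev + Uz @ x_t)               # update gate, Eq. (12)
    r = sigmoid(Wr @ h_prev + Ur @ x_t)               # reset gate, Eq. (13)
    h_tilde = np.tanh(W0 @ (h_prev * r) + U0 @ x_t)   # candidate state, Eq. (14)
    return z * h_tilde + (1 - z) * h_prev             # new hidden state, Eq. (15)

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
x_t, h_prev = rng.normal(size=d_in), np.zeros(d_h)
Wz, Wr, W0 = (rng.normal(size=(d_h, d_h)) for _ in range(3))
Uz, Ur, U0 = (rng.normal(size=(d_h, d_in)) for _ in range(3))
h_t = gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W0, U0)
print(h_t.shape)
```

Note how the update gate z interpolates between the previous state and the candidate state in Equation (15), which is the mechanism that lets the GRU both forget and select memory contents with a single gate.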

Long-Short-Term Memory (LSTM)


LSTM is a type of recurrent neural network (RNN) often used in sequential data
research. Long-term memory refers to learning weights and short-term memory refers to
the internal states of cells. LSTM was created to solve the problem of the vanishing gradient
of RNNs, the main change of which is the replacement of the middle layer of the RNN by
a block (LSTM block). The main feature of LSTM is the possibility of learning long-term dependencies, which was impossible for RNNs. To predict the data associated with the next time point, it is necessary to update the weight values of the network, which requires the maintenance of the initial time interval data. An RNN can only learn a limited number of short-term dependencies and cannot learn long-term time series; LSTM can
handle them adequately. The structure of the LSTM model is a set of recurrent subnets,
called memory blocks. Each block contains one or more autoregressive memory cells and
three multiple units (input, output, and forget) that perform continuous write, read, and
cell operation control (Ortu et al. 2022). The LSTM model is defined by the following
equations:
Input gate: I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i)   (16)

Forgetting gate: F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f)   (17)

Gated unit: \tilde{C}_t = \tanh(X_t W_{xc} + H_{t-1} W_{hc} + b_c)   (18)

C_t = F_t * C_{t-1} + I_t * \tilde{C}_t   (19)

Output gate: O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)   (20)

where h is the number of hidden units, X_t is the small-batch input at a given time step t, H_{t-1} is the hidden state from the previous period, and \sigma is the sigmoid function. W_{xi} and W_{hi} are the weight matrices of the input gate, and b_i is its offset term. W_{xf} and W_{hf} are the weight matrices of the forgetting gate, and b_f is its offset term. \tilde{C}_t is the candidate memory cell, W_{xc} and W_{hc} are the weight matrices of the gated unit, and b_c is its offset term. C_t is the new cell state at the current time, and C_{t-1} is the cell state at the previous time. W_{xo} and W_{ho} are the weight matrices of the output gate, and b_o is its offset term (Dai et al. 2022).
The hyperparameters of the LSTM model are specified in Table 5. In order to make an even more accurate comparison, we tried to harmonize the hyperparameters of algorithms belonging to the same main type.

Table 5. Hyperparameters of LSTM model.

Model | Parameter | Value
LSTM | Hidden layers | 2
LSTM | Hidden layer neuron count | 150
LSTM | Batch size | 32
LSTM | Epochs | 100
LSTM | Activation | tanh
LSTM | Learning rate | 0.001
LSTM | Optimizer | Adam

Source: own editing.
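One LSTM update step per Equations (16)–(20) can likewise be sketched in NumPy; the final step h_t = O_t ⊙ tanh(C_t) is the standard completion of the cell that is not written out in the equations above, and the random weights are purely illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update following Equations (16)-(20); W holds the weight
    matrices and b the offset terms, keyed by gate."""
    i = sigmoid(x_t @ W["xi"] + h_prev @ W["hi"] + b["i"])        # input gate, Eq. (16)
    f = sigmoid(x_t @ W["xf"] + h_prev @ W["hf"] + b["f"])        # forgetting gate, Eq. (17)
    c_tilde = np.tanh(x_t @ W["xc"] + h_prev @ W["hc"] + b["c"])  # candidate cell, Eq. (18)
    c = f * c_prev + i * c_tilde                                  # new cell state, Eq. (19)
    o = sigmoid(x_t @ W["xo"] + h_prev @ W["ho"] + b["o"])        # output gate, Eq. (20)
    h = o * np.tanh(c)                                            # hidden state output
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W = {k: rng.normal(size=(d_in if k[0] == "x" else d_h, d_h))
     for k in ("xi", "hi", "xf", "hf", "xc", "hc", "xo", "ho")}
b = {k: np.zeros(d_h) for k in ("i", "f", "c", "o")}
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), W, b)
print(h.shape, c.shape)
```

The separate forget and input gates in Equation (19) are what distinguish the LSTM from the GRU, where a single update gate plays both roles.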

In the study, a total of eight commodities were included: Brent oil, copper, crude oil, gold, natural gas, palladium, platinum and silver. The data was downloaded from Yahoo Finance using Python. An appropriate database size is important in forecasting (Hewamalage et al. 2022). We included the commodity market instruments that are considered the most liquid, with turnover that stands out among other assets. The complete database contains the futures price data for the period 1 January 2010 to 31 August 2022, which was split into two parts: the first focuses on the interval between 1 January 2010 and 31 August 2018, and the second on the interval between 1 January 2014 and 31 August 2022. For both studies, we split the datasets into learning and validation samples in proportions of approximately 94% and 6%. In the first estimation, the learning database covers the period between 1 January 2010 and 28 February 2018 (98 months), while the validation interval spans 1 March 2018 to 31 August 2018 (6 months). In the second estimation (2022), the learning database covers the period between 1 January 2014 and 28 February 2022 (98 months), while the validation interval spans 1 March 2022 to 31 August 2022 (6 months). The descriptive statistics of the commodity market for the full dataset are presented in Table 6.
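The date-based learning/validation split described above can be sketched with pandas; a synthetic series stands in for the Yahoo Finance download so the example is self-contained:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a futures closing-price series from Yahoo Finance
dates = pd.date_range("2014-01-01", "2022-08-31", freq="B")
rng = np.random.default_rng(0)
prices = pd.Series(np.cumsum(rng.normal(0, 1, len(dates))) + 100,
                   index=dates, name="close")

# Second estimation: learn on 1 Jan 2014 - 28 Feb 2022,
# validate on 1 Mar 2022 - 31 Aug 2022 (roughly a 94%/6% split)
learn = prices.loc["2014-01-01":"2022-02-28"]
valid = prices.loc["2022-03-01":"2022-08-31"]
print(round(len(learn) / len(prices), 2))
```

Slicing a DatetimeIndex with date strings keeps the split reproducible and avoids any overlap between the learning and validation windows.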

Table 6. Descriptive statistics of variables.

Commodity N Average Median StD Rsd Min Max


Brent oil 3151 77.39 73.08 26.01 0.34 19.33 127.98
Copper 3187 3.20 3.11 0.67 0.21 1.94 4.93
Crude oil 3188 70.99 68.78 22.95 0.32 −37.63 123.70
Gold 3186 1442.47 1335.55 248.22 0.17 1050.80 2051.50
Natural gas 3188 3.41 3.09 1.21 0.35 1.48 9.68
Palladium 3153 1118.43 790.50 649.64 0.58 407.95 2985.40
Platinum 3169 1194.00 1062.20 317.81 0.27 595.90 1905.70
Silver 3185 21.45 18.96 6.54 0.30 11.73 48.58
Source: own editing.

The two periods (2018 and 2022) reflect a significantly different general economic
situation, which is not reflected in the descriptive statistics for the whole period. To obtain
a more accurate picture and to assess the performance of the forecasting algorithms, it is
important to know the period for which the forecast is made. This is shown in the following
two tables (Tables 7 and 8).

Table 7. Descriptive statistics of variables (1 January 2018–31 August 2018).

Commodity N Average Median StD Rsd Min Max


Brent oil 168 72.08 72.72 4.46 0.06 62.59 79.80
Copper 168 3.02 3.07 0.18 0.06 2.56 3.29
Crude oil 169 66.42 66.36 3.58 0.05 59.19 74.15
Gold 168 1290.65 1311.45 49.77 0.04 1176.20 1362.40
Natural gas 169 2.84 2.82 0.18 0.06 2.55 3.63
Palladium 164 981.96 974.77 56.11 0.06 845.20 1112.35
Platinum 164 907.36 909.65 66.87 0.07 768.70 1029.30
Silver 168 16.24 16.40 0.69 0.04 14.42 17.55
Source: own editing.

Table 8. Descriptive statistics of variables (1 January 2022–31 August 2022).

Commodity N Average Median StD Rsd Min Max


Brent oil 168 104.01 105.12 10.83 0.10 78.98 127.98
Copper 168 4.20 4.36 0.46 0.11 3.21 4.93
Crude oil 168 100.12 100.02 10.88 0.11 76.08 123.70
Gold 167 1841.90 1836.20 76.61 0.04 1699.50 2040.10
Natural gas 168 6.56 6.66 1.78 0.27 3.72 9.68
Palladium 167 2158.80 2129.00 255.40 0.12 1769.20 2979.90
Platinum 167 963.41 958.30 69.72 0.07 826.40 1152.50
Silver 167 22.32 22.36 2.28 0.10 17.76 26.89
Source: own editing.

The two tables above show that not only are the average and median prices in 2022 higher, but the standard deviation is also several times higher than in 2018. The relative standard deviation values are also 2–3 times higher. In such an environment, the accuracy of the forecast is reduced, but its importance for enterprise risk management is increased.
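The Rsd column in Tables 6–8 is the relative standard deviation, i.e., the standard deviation divided by the average; e.g., for the Brent oil rows of Tables 7 and 8:

```python
# Rsd = StD / Average; Brent oil values taken from Tables 7 and 8
rsd_2018 = 4.46 / 72.08    # 2018: roughly 0.06
rsd_2022 = 10.83 / 104.01  # 2022: roughly 0.10
print(round(rsd_2018, 2), round(rsd_2022, 2))
```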

According to the correlation matrix (Table 9), Brent and crude oil (0.8344), gold and
silver (0.8023) move together, while the other commodities show negligible correlation and
no common movement.

Table 9. Correlation between variables.

Brent Oil Copper Crude Oil Gold Natural Gas Pallad. Platin. Silver
Brent oil 1
Copper 0.3061 1
Crude oil 0.8344 0.3162 1
Gold 0.1298 0.2756 0.1404 1
Natural gas 0.0882 0.0681 0.1077 0.0127 1
Palladium 0.2448 0.4210 0.2525 0.3867 0.0547 1
Platinum 0.2461 0.4222 0.2459 0.6026 0.0603 0.5830 1
Silver 0.2116 0.4094 0.2211 0.8023 0.0409 0.4624 0.6486 1
Source: own editing.

3. Results and Discussion


The basis for the evaluation of the results is the MAPE indicator, which is scale-
independent and therefore suitable for comparing both over time and across instruments.
The results are calculated for forecast periods of 21 and 125 days based on five different
forecasting algorithms for robustness. The forecast was made for both 2018 and 2022, the
reason being that the macroeconomic environment had changed significantly by 2022, due
to the Russian-Ukrainian war, among other factors. The different forecasting methods are
described in detail in the methodology chapter.
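MAPE itself is straightforward: the mean of |actual - forecast| / |actual|, expressed here as a fraction (0.04 = 4%):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, returned as a fraction (0.04 = 4%)."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical example: two forecasts, each off by 10%
print(mape([100.0, 200.0], [110.0, 180.0]))  # -> 0.1
```

Because the error is scaled by the actual price, a 4% MAPE means the same thing for gold at 1800 USD as for natural gas at 7 USD.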
The results of the Support Vector Machine (SVM) forecast are presented in Table 10,
which shows the mean absolute percentage error (MAPE) values per commodity for 2018
and 2022, for 21- and 125-day forecast horizons.

Table 10. Support Vector Machine (SVM) forecast results.

SVM                 21 Days                125 Days
                2018        2022        2018        2022
Brent oil 0.0366 0.0815 0.0257 0.0511
Copper 0.0186 0.0227 0.0244 0.0203
Crude oil 0.0292 0.1023 0.0224 0.0836
Gold 0.0180 0.0155 0.0149 0.0100
Natural gas 0.0160 0.0369 0.0150 0.1231
Palladium 0.0234 0.0716 0.0269 0.0423
Platinum 0.0718 0.0283 0.0925 0.0286
Silver 0.1355 0.0223 0.1435 0.0374
Average 0.0436 0.0476 0.0456 0.0496
Source: own editing.

The average error of the SVM-based estimation ranges from 4.36% to 4.96% for the
selected commodities, with no significant difference between the forecast horizons or the
years under study. Accuracy deteriorates slightly, but not significantly, over the longer
horizon. At the level of individual commodities, however, larger differences emerge. For
the two oil series, and especially for crude oil, the 2022 error is a multiple of the 2018 error
on both horizons. For natural gas, the 125-day error for 2022 (12.31%) is roughly eight
times the 2018 value, reflecting a marked increase in pricing uncertainty. No other
commodity deteriorates comparably; silver stands out instead for unusually high errors of
around 14% in 2018 on both horizons.
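All the algorithms compared here learn from past prices. The paper does not detail its feature construction, so the following is an illustrative assumption only: a common setup reframes the series into lagged windows, each paired with the price h trading days ahead (h = 21 or 125 in this study).

```python
def make_windows(series, w, h):
    """Reframe a price series as (window, target) pairs:
    X[i] = w consecutive prices, y[i] = the price h steps after the window."""
    X, y = [], []
    for i in range(len(series) - w - h + 1):
        X.append(series[i:i + w])
        y.append(series[i + w + h - 1])
    return X, y

prices = list(range(10))              # toy series 0..9
X, y = make_windows(prices, w=3, h=2)
print(X[0], y[0])                     # first window [0, 1, 2], target two steps ahead: 4
```

The resulting (X, y) pairs can be fed to any regressor, whether SVM, a tree ensemble, or a recurrent network.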
Random Forest (RF) is an ensemble built from many decision trees, so it takes a very
different methodological approach from the Support Vector Machine (SVM) above. The
estimation results are presented in Table 11, in the same structure as for the SVM.

Table 11. Random Forest (RF) forecast results.

RFR                 21 Days                125 Days
                2018        2022        2018        2022
Brent oil 0.0183 0.0712 0.0183 0.0405
Copper 0.0106 0.0243 0.0140 0.0211
Crude oil 0.0280 0.0608 0.0222 0.0419
Gold 0.0085 0.0146 0.0074 0.0105
Natural gas 0.0168 0.0471 0.0169 0.2354
Palladium 0.0161 0.0664 0.0189 0.0420
Platinum 0.0102 0.0302 0.0159 0.0232
Silver 0.0125 0.0277 0.0127 0.0211
Average 0.0151 0.0428 0.0158 0.0545
Source: own editing.

For this forecasting procedure, there is a clear difference between the overall mean errors
for 2018 and 2022. The 21-day sample MAPEs for 2018 and 2022 differ significantly
(p = 0.004), while for the 125-day forecast the difference is not statistically confirmed.
Random Forest gives markedly more accurate forecasts than SVM over the four-week
horizon. For 2022, however, and especially for the 125-day forecast, the RF error grows
substantially: for natural gas, the six-month error exceeds 20%. On the positive side, the
gold price forecast shows an error of about 1% or less in three out of four cases, which is
outstanding, and copper and silver are also forecast relatively accurately.
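The p-values quoted here can be checked with a t-test on the per-commodity MAPE pairs. The paper does not state whether the test was paired; a paired version on the 21-day columns of Table 11, using only the standard library, gives a result of the same order (for df = 7, a two-sided 1% test requires t above roughly 3.50):

```python
from math import sqrt
from statistics import mean, stdev

# RF 21-day MAPEs from Table 11 (2018 vs 2022), one pair per commodity
mape_2018 = [0.0183, 0.0106, 0.0280, 0.0085, 0.0168, 0.0161, 0.0102, 0.0125]
mape_2022 = [0.0712, 0.0243, 0.0608, 0.0146, 0.0471, 0.0664, 0.0302, 0.0277]

diffs = [b - a for a, b in zip(mape_2018, mape_2022)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic, df = 7
print(round(t, 2))
```

The resulting t of about 4.6 exceeds the 1% critical value, consistent in direction and magnitude with the reported p = 0.004.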
Extreme Gradient Boost (XGBoost) belongs to the same decision-tree ensemble family as
Random Forest (RF). The results of XGBoost are shown in Table 12.

Table 12. Extreme Gradient Boost (XGBoost) forecast results.

XGBoost             21 Days                125 Days
                2018        2022        2018        2022
Brent oil 0.0198 0.0716 0.0183 0.0408
Copper 0.0113 0.0261 0.0142 0.0212
Crude oil 0.0266 0.0607 0.0230 0.0414
Gold 0.0080 0.0152 0.0069 0.0107
Natural gas 0.0157 0.0427 0.0167 0.2105
Palladium 0.0163 0.0653 0.0188 0.0426
Platinum 0.0098 0.0303 0.0158 0.0233
Silver 0.0113 0.0280 0.0118 0.0204
Average 0.0148 0.0425 0.0157 0.0514
Source: own editing.

Comparing XGBoost with RF, the results are very similar, with no significant difference
between them. For the 21-day forecast, the 2018 and 2022 averages differ significantly
(p = 0.003), i.e., on average XGBoost gave a more accurate, less noisy forecast in 2018 than
in 2022. As with RF, the longer-term natural gas forecast contains the largest error, and
precious metals show the lowest MAPE values. Díaz et al. (2020) found that decision trees
(random forest and gradient boosting regression trees) provide a more reliable prediction
of the copper price than linear methods, though a random walk still performs better.
The results of the Gated Recurrent Unit (GRU) forecast are shown in Table 13. The GRU
belongs to the family of neural networks, i.e., the prediction is made by an artificial
intelligence algorithm; this is a very different concept from decision trees.

Table 13. Gated Recurrent Unit (GRU) forecast results.

GRU                 21 Days                125 Days
                2018        2022        2018        2022
Brent oil 0.0180 0.0691 0.0187 0.0387
Copper 0.0107 0.0226 0.0134 0.0202
Crude oil 0.0192 0.0680 0.0181 0.0393
Gold 0.0085 0.0169 0.0065 0.0105
Natural gas 0.0155 0.0435 0.0139 0.1140
Palladium 0.0136 0.0611 0.0176 0.0400
Platinum 0.0086 0.0287 0.0118 0.0231
Silver 0.0096 0.0263 0.0112 0.0250
Average 0.0130 0.0420 0.0139 0.0389
Source: own editing.

The results of the Gated Recurrent Unit (GRU) confirm expectations: its average MAPE
is the lowest in every category compared to the previous models (SVM, RF, XGBoost).
The 21- and 125-day predictions for 2018 are outstandingly good, with an average sample
error of around 1.3–1.4%. The t-tests show that, for the first time, the 2018 error rates are
statistically confirmed (p = 0.002 and p = 0.047) to be lower than the 2022 values on both
time horizons (21 and 125 days). Based on these results, the GRU-based estimator produces
very good four-week forecasts in a relatively smooth economic period free of turbulence.
It is also worth highlighting that the 125-day forecast for 2022 is the most accurate of any
model so far, improving on the previous models by more than 1 percentage point. The
natural gas forecast, which showed an error of over 20% for the decision trees, is reduced
to about 11% for the GRU, although this is still a substantial error. Interestingly, for 2022
the usual pattern reverses: the longer horizon yields a lower average error, though the
two results are not significantly different. As before, the precious metals predictions are
highly accurate. Ozdemir et al. (2022) used GRU and LSTM models for a long-term forecast
of the nickel price from 2022 to 2031. The MAPEs of the two methods are very similar
(about 7%), but the GRU models required on average 33% less computation time.
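For intuition about the GRU's mechanics: each step blends the previous hidden state with a candidate state via an update gate, letting the network decide how much history to keep. A scalar, illustrative sketch with arbitrary weights (one common gating convention, not the paper's trained model):

```python
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def gru_step(x, h, Wz=0.5, Uz=0.3, Wr=0.4, Ur=0.2, Wh=0.8, Uh=0.6):
    """One GRU update for scalar input x and hidden state h.
    Weights are arbitrary illustrative constants."""
    z = sigmoid(Wz * x + Uz * h)          # update gate: how much to renew
    r = sigmoid(Wr * x + Ur * h)          # reset gate: how much past to use
    h_cand = tanh(Wh * x + Uh * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand     # blend old state and candidate

h = 0.0
for price_change in [0.1, -0.2, 0.3]:     # toy normalized inputs
    h = gru_step(price_change, h)
print(round(h, 4))
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden state stays in (-1, 1) and degrades gracefully on noisy inputs.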
Long Short-Term Memory (LSTM) belongs to the same family of neural network models
as the Gated Recurrent Unit (GRU), and is designed to provide reliable estimates primarily
over longer time scales. The results of the LSTM are shown in Table 14.

Table 14. Long-Short-Term Memory (LSTM) forecast results.

LSTM                21 Days                125 Days
                2018        2022        2018        2022
Brent oil 0.0167 0.0732 0.0176 0.0413
Copper 0.0134 0.0212 0.0156 0.0225
Crude oil 0.0198 0.0699 0.0191 0.0405
Gold 0.0085 0.0170 0.0064 0.0108
Natural gas 0.0238 0.0472 0.0228 0.1337
Palladium 0.0135 0.0619 0.0176 0.0424
Platinum 0.0087 0.0288 0.0114 0.0229
Silver 0.0153 0.0244 0.0178 0.0188
Average 0.0150 0.0429 0.0160 0.0416
Source: own editing.

The LSTM results, like those of the GRU, are very convincing, but they do not improve on
the GRU's overall sample averages; the differences amount to only a few tenths of a
percentage point. At the 21-day horizon, the 2018 values are significantly lower (p = 0.005);
the same holds for the 125-day horizon at the 10% significance level (p = 0.087). Busari
and Lim (2021) used LSTM and GRU methodologies, among others, to forecast spot oil
prices, with a 6-day forecast horizon and a 75–25% training-validation split. Their
forecasting accuracy (MAPE) was around 10–11%, compared to our 21-day crude oil
errors of below 2% for 2018 and 6.8–7% for 2022.
For natural gas, which is considered critical, the MAPE of the 125-day estimate for 2022 is
13.37%, higher than the 11.4% for the GRU. In another study of natural gas forecasting,
GRU likewise outperformed LSTM models (Wang et al. 2021). Their MAPE values are
higher than ours, but the authors used weekly data; the most striking improvement was
between the hybrid GRU (PSO-GRU) and LSTM, a difference of more than 1 percentage
point. The advantage of LSTM should show in long-range forecasts, yet over the longer
125-day horizon it outperformed the GRU in only 3 of 8 cases for 2018 and 2 of 8 cases
for 2022. This does not rule out LSTM performing better over still longer horizons, but
no conclusive data were available to test this for 2022.
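The LSTM's structural difference from the GRU is a separate cell state c routed through forget, input and output gates, which is the mechanism behind its expected long-horizon advantage. A scalar sketch with arbitrary weights (a single weight pair is reused across the gates purely for brevity, so this is illustrative only):

```python
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def lstm_step(x, h, c, W=0.5, U=0.3):
    """One LSTM update for scalar input x, hidden state h, cell state c.
    One weight pair (W, U) is reused for all gates for brevity."""
    f = sigmoid(W * x + U * h)     # forget gate: keep part of the old cell state
    i = sigmoid(W * x + U * h)     # input gate: admit new information
    o = sigmoid(W * x + U * h)     # output gate: expose part of the cell
    c_cand = tanh(W * x + U * h)   # candidate cell content
    c_new = f * c + i * c_cand     # cell state carries the long-term memory
    h_new = o * tanh(c_new)        # hidden state is the gated output
    return h_new, c_new

h, c = 0.0, 0.0
for price_change in [0.1, -0.2, 0.3]:  # toy normalized inputs
    h, c = lstm_step(price_change, h, c)
print(round(h, 4), round(c, 4))
```

The cell state c is updated additively rather than overwritten, which is what lets gradients, and hence memory, persist over many more steps than in a plain recurrent network.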

4. Conclusions
The study aimed to answer how accurate the futures price forecasts of eight selected
commodities (spanning oil, gas and precious metals) are under different models (decision
trees, neural networks) in different economic environments, and to what extent these
forecasts can be used for corporate risk management. The test period was six months
of 2022 (March to August), characterized by inflationary pressures, the Russian-Ukrainian
war and global chip shortages, while the control period was six months of 2018, before
the COVID-19 epidemic, which is considered a calm economic environment.
Enterprise risk management has different time horizons depending on the exposure to
risk. For commodities, we assumed production lead times and warehousing, and therefore
did not look at the very short term. Two periods were defined—a short period of one
month (21 days), and a medium period (125 days). These are the time periods that can
be planned for inventory management. Enterprise risk management comes to the fore in
at least two respects in raw material replenishment. First, the necessary stock must be
available at the right place and time, purchased at the best possible price. Second,
forecasting algorithms of varying complexity can serve as a decision support tool. The
most accurate forecast possible helps achieve the so-called perfect timing that every
investor, in this case the purchasing department, desires. Purchasing at the best possible
price means a lower cost price and therefore higher profits and a market advantage
over competitors.
The results show that forecast accuracy is higher in calmer economic environments, since
forecasting is easier when volatility is lower (see the descriptive statistics for 2018 and
2022). More importantly, neural networks also produce better results in commodity
markets than decision trees and the other approaches examined. For the year 2022, the
MAPE (Mean Absolute Percentage Error) indicators show an average value of around 4%,
i.e., the average difference between the model estimate and the real data; in the control
period, this indicator is around 1.5%. This difference is approximately in line with the
increase in standard deviation and relative standard deviation. Due to the Russian-
Ukrainian war, the oil and especially the gas price forecasts displayed the worst accuracy,
followed by palladium; the other precious metals price forecasts showed the highest
accuracy. It can also be concluded that in the calmer economic period the shorter forecast
horizon produced the more accurate estimate (also supported by the average MAPE
values), while the same cannot be said of the more volatile period (2022). Overall, the
GRU model achieved the most accurate estimates, slightly outperforming even the LSTM
algorithm; for the examined instruments and periods, the weakest performance came
from the SVM.
Our study is limited in that the models were only used individually and not combined.
Hybrid models, however, may have several advantages over this approach. Liang et al.
(2022) used different hybrid neural models to predict spot and forward gold prices; their
results show that hybrid models provide more accurate estimates than LSTM models.
The implementation of an error-correction hybrid model for copper price forecasting has
yielded MAPE improvements of up to 1 percentage point or more (Luo et al. 2022). The
study also does not cover the use of independent variables such as technical analysis
tools; their inclusion could aid model training and thus the recognition of past technical
levels.

Author Contributions: Conceptualization, L.V., T.T. and T.B.; methodology, L.V.; formal analysis,
T.B.; data curation, L.V.; writing—original draft preparation, L.V., T.T. and T.B.; writing—review and
editing, L.V., T.T. and T.B.; funding acquisition, T.T. All authors have read and agreed to the published
version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: All data used in the present study are publicly available.
Conflicts of Interest: The authors declare no conflict of interest.

References
Ali, Shoukat, Ramiz ur Rehman, Wang Yuan, Muhammad Ishfaq Ahmad, and Rizwan Ali. 2022. Does Foreign Institutional Ownership
Mediate the Nexus between Board Diversity and the Risk of Financial Distress? A Case of an Emerging Economy of China.
Eurasian Business Review 12: 553–81. [CrossRef]
Ballings, Michel, Dirk Van den Poel, Nathalie Hespeels, and Ruben Gryp. 2015. Evaluating multiple classifiers for stock price direction
prediction. Expert Systems with Applications 42: 7046–56. [CrossRef]
Basak, Suryoday, Saibal Kar, Snehanshu Saha, Luckyson Khaidem, and Sudeepa Roy Dey. 2019. Predicting the direction of stock
market prices using tree-based classifiers. The North American Journal of Economics and Finance 47: 552–67. [CrossRef]
Busari, Ganiyu Adewale, and Dong Hoon Lim. 2021. Crude Oil Price Prediction: A Comparison between AdaBoost-LSTM and
AdaBoost-GRU for Improving Forecasting Performance. Computers & Chemical Engineering 155: 107513. [CrossRef]
Carter, David A., Daniel A. Rogers, Betty J. Simkins, and Stephen D. Treanor. 2017. A review of the literature on commodity risk
management. Journal of Commodity Markets 8: 1–17. [CrossRef]
Dai, Yeming, Qiong Zhou, Mingming Leng, Xinyu Yang, and Yanxin Wang. 2022. Improving the Bi-LSTM model with XGBoost and
attention mechanism: A combined approach for short-term power load prediction. Applied Soft Computing 130: 109632. [CrossRef]

Díaz, Juan D., Erwin Hansen, and Gabriel Cabrera. 2020. A Random Walk through the Trees: Forecasting Copper Prices Using Decision
Learning Methods. Resources Policy 69: 101859. [CrossRef]
Donders, Pablo, Mauricio Jara, and Rodrigo Wagner. 2018. How sensitive is corporate debt to swings in commodity prices? Journal of
Financial Stability 39: 237–58. [CrossRef]
Fischer, Thomas, and Christopher Krauss. 2018. Deep learning with long short-term memory networks for financial market predictions.
European Journal of Operational Research 270: 654–69. [CrossRef]
Gonzalez Miranda, F., and Andrew Neil Burgess. 1997. Modelling market volatilities: The neural network perspective. The European
Journal of Finance 3: 137–57. [CrossRef]
Hajiabotorabi, Zeinab, Aliyeh Kazemi, Faramarz Famil Samavati, and Farid Mohammad Maalek Ghaini. 2019. Improving DWT-RNN
model via B-spline wavelet multiresolution to forecast a high-frequency time series. Expert Systems with Applications 138: 112842.
[CrossRef]
Hamid, Shaikh A., and Zahid Iqbal. 2004. Using neural networks for forecasting volatility of S&P 500 Index futures prices. Journal of
Business Research 57: 1116–25. [CrossRef]
Han, Yechan, Jaeyun Kim, and David Enke. 2023. A machine learning trading system for the stock market based on N-period Min-Max
labeling using XGBoost. Expert Systems with Applications 211: 118581. [CrossRef]
Hewamalage, Hansika, Klaus Ackermann, and Christoph Bergmeir. 2022. Forecast Evaluation for Data Scientists: Common Pitfalls
and Best Practices. Data Mining and Knowledge Discovery. [CrossRef]
Hiransha, M., Gopalakrishnan E. A., Vijay Krishna Menon, and Soman K. P. 2018. NSE stock market prediction using deep-learning
models. Procedia Computer Science 132: 1351–62. [CrossRef]
Ismail, Mohd Sabri, Mohd Salmi Md Noorani, Munira Ismail, Fatimah Abdul Razak, and Mohd Almie Alias. 2020. Predicting next day
direction of stock price movement using machine learning methods with persistent homology: Evidence from Kuala Lumpur
Stock Exchange. Applied Soft Computing 93: 106422. [CrossRef]
Kaushik, Manav, and Arun Kumar Giri. 2020. Forecasting Foreign Exchange Rate: A Multivariate Comparative Analysis between
Traditional Econometric, Contemporary Machine Learning & Deep Learning Techniques. arXiv arXiv:2002.10247. [CrossRef]
Kim, Ha Young, and Chang Hyun Won. 2018. Forecasting the volatility of stock price index: A hybrid model integrating LSTM with
multiple GARCH-type models. Expert Systems with Applications 103: 25–37. [CrossRef]
Liang, Xiaodan, Zhaodi Ge, Liling Sun, Maowei He, and Hanning Chen. 2019. LSTM with Wavelet Transform Based Data Preprocessing
for Stock Price Prediction. Mathematical Problems in Engineering 2019: 1340174. [CrossRef]
Liang, Yanhui, Yu Lin, and Qin Lu. 2022. Forecasting gold price using a novel hybrid model with ICEEMDAN and LSTM-CNN-CBAM.
Expert Systems with Applications 206: 117847. [CrossRef]
Liu, Yang. 2019. Novel volatility forecasting using deep learning–long short term memory recurrent neural networks. Expert Systems
with Applications 132: 99–109. [CrossRef]
Long, Jiawei, Zhaopeng Chen, Weibing He, Taiyu Wu, and Jiangtao Ren. 2020. An integrated framework of deep learning and
knowledge graph for prediction of stock price trend: An application in Chinese stock exchange market. Applied Soft Computing,
106205. [CrossRef]
Luo, Hongyuan, Deyun Wang, Jinhua Cheng, and Qiaosheng Wu. 2022. Multi-step-ahead copper price forecasting using a two-phase
architecture based on an improved LSTM with novel input strategy and error correction. Resources Policy 79: 102962. [CrossRef]
Nabipour, Mojtaba, Pooyan Nayyeri, Hamed Jabani, S. Shahab, and Amir Mosavi. 2020. Predicting stock market trends using machine
learning and deep learning algorithms via continuous and binary data; a comparative analysis on the Tehran stock exchange.
IEEE Access 8: 150199–150212. [CrossRef]
Nelson, David M., Adriano C. Pereira, and Renato A. de Oliveira. 2017. Stock market’s price movement prediction with LSTM neural
networks. Paper presented at 2017 International joint conference on neural networks (IJCNN), Anchorage, AK, USA, May 14–19;
pp. 1419–26. [CrossRef]
Nikou, Mahla, Gholamreza Mansourfar, and Jamshid Bagherzadeh. 2019. Stock price prediction using DEEP learning algorithm and its
comparison with machine learning algorithms. Intelligent Systems in Accounting, Finance and Management 26: 164–74. [CrossRef]
Nti, Isaac Kofi, Adebayo Felix Adekoya, and Benjamin Asubam Weyori. 2020. A systematic review of fundamental and technical
analysis of stock market predictions. Artificial Intelligence Review 53: 3007–57. [CrossRef]
Ormoneit, Dirk, and Ralph Neuneier. 1996. Experiments in predicting the German stock index DAX with density estimating neural
networks. Paper presented at IEEE/IAFE 1996 Conference on Computational Intelligence for Financial Engineering (CIFEr), York,
NY, USA, March 24–26; pp. 66–71. [CrossRef]
Ortu, Marco, Nicola Uras, Claudio Conversano, Silvia Bartolucci, and Giuseppe Destefanis. 2022. On technical trading and social
media indicators for cryptocurrency price classification through deep learning. Expert Systems with Applications 198: 116804.
[CrossRef]
Ou, Phichhang, and Hengshan Wang. 2009. Prediction of stock market index movement by ten data mining techniques. Modern Applied
Science 3: 28–42. [CrossRef]
Ozdemir, Ali Can, Kurtuluş Buluş, and Kasım Zor. 2022. Medium-to long-term nickel price forecasting using LSTM and GRU networks.
Resources Policy 78: 102906. [CrossRef]

Park, Hyun Jun, Youngjun Kim, and Ha Young Kim. 2022. Stock market forecasting using a multi-task approach integrating long
short-term memory and the random forest framework. Applied Soft Computing 114: 108106. [CrossRef]
Petersen, Niklas Christoffer, Filipe Rodrigues, and Francisco Camara Pereira. 2019. Multi-output bus travel time prediction with
convolutional LSTM neural network. Expert Systems with Applications 120: 426–35. [CrossRef]
Siami-Namini, Sima, Neda Tavakoli, and Akbar Siami Namin. 2018. A comparison of ARIMA and LSTM in forecasting time series.
Paper presented at 2018 17th IEEE international conference on machine learning and applications (ICMLA), Orlando, FL, USA,
December 17–20; pp. 1394–401. [CrossRef]
Skehin, Tom, Martin Crane, and Marija Bezbradica. 2018. Day ahead forecasting of FAANG stocks using ARIMA, LSTM networks and
wavelets. Paper presented at the 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2018), Dublin,
Ireland, December 6–7.
Thi Kieu Tran, Trang, Taesam Lee, Ju-Young Shin, Jong-Suk Kim, and Mohamad Kamruzzaman. 2020. Deep learning-based maximum
temperature forecasting assisted with meta-learning for hyperparameter optimization. Atmosphere 11: 487. [CrossRef]
van der Lugt, Bart J., and Ad J. Feelders. 2019. Conditional forecasting of water level time series with RNNs. In International Workshop
on Advanced Analysis and Learning on Temporal Data. Cham: Springer, pp. 55–71. [CrossRef]
Vidal, Andrés, and Werner Kristjanpoller. 2020. Gold volatility prediction using a CNN-LSTM approach. Expert Systems with
Applications 157: 113481. [CrossRef]
Wang, Jun, Junxing Cao, Shan Yuan, and Ming Cheng. 2021. Short-Term Forecasting of Natural Gas Prices by Using a Novel
Hybrid Method Based on a Combination of the CEEMDAN-SE-and the PSO-ALS-Optimized GRU Network. Energy 233: 121082.
[CrossRef]
Xiao, Haohan, Zuyu Chen, Ruilang Cao, Yuxin Cao, Lijun Zhao, and Yunjie Zhao. 2022. Prediction of shield machine posture using
the GRU algorithm with adaptive boosting: A case study of Chengdu Subway project. Transportation Geotechnics 37: 100837.
[CrossRef]
Zhang, Liheng, Charu Aggarwal, and Guo-Jun Qi. 2017. Stock Price Prediction via Discovering Multi-Frequency Trading Patterns.
Paper presented at the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining—KDD ’17,
Halifax, NS, Canada, August 13–17.
Zolfaghari, Mehdi, and Samad Gholami. 2021. A hybrid approach of adaptive wavelet transform, long short-term memory and
ARIMA-GARCH family models for the stock index prediction. Expert Systems with Applications 182: 115149. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
