Forecasting of Economic Recession using Machine Learning

Vedanta Bhattacharya
Department of Electronics and Communication Engineering
Amity University Uttar Pradesh
Noida, Uttar Pradesh, India
[email protected]

Ishaan Srivastava
Department of Electronics and Communication Engineering
Amity University Uttar Pradesh
Noida, Uttar Pradesh, India
[email protected]
Abstract— A significant, highly widespread, and long-lasting drop in economic activity is described as a recession. From the time the economy touches the high point of the previous expansion to the trough, economists around the world try to establish the length of the downturn and the factors at work within it. A recession may last only a few months, but it may take years for the economy to bounce back and reach its previous high point.

In this paper, we investigate various datasets, weigh their pros and cons, and attempt to estimate the probability of a recession using Machine Learning techniques backed by concrete datasets for a better approximation of the result. Within a Machine Learning framework, we use the Chicago Fed National Activity Index (CFNAI) and two of its components, the Monthly Average of 3 (CFNAI-MA3) and the CFNAI Diffusion Index, over the period March 1967 to June 2022, along with the S&P-500 index and various parameters associated with it (such as the 10-year T-bond, the % change in the T-bond, the 2-year spread over the Fed rate, CPI, etc.) over the recorded period. Using unscaled, scaled and tuned machine learning models, the predicted accuracy ranged from 87% for the unscaled models to slightly more than 94% for the scaled models. The two sets of models differ from each other and are compiled together for comparison, to provide additional assurance that the model is correct in these regards.

Keywords— Recession, Machine Learning, Regression, ML algorithms
I. INTRODUCTION

A recession is defined as a period of economic downturn in which the economy starts to contract. It becomes most evident in the country's growth as GDP plunges and the stock market feels the reverberations and starts to fall, a situation more commonly termed a "bear market".

A series of local or international market shocks can trigger a global financial crisis, which can then turn into a worldwide economic crisis because of the interconnection of the financial markets. In other cases, a single economic power that is large enough, one of the "big-enough" economies able to generate turmoil in other countries, can be the source of an economic crisis. This was the case, for instance, with the subprime crisis, commonly called the Great Recession of 2008, which began in the US and spread to other European nations and eventually worldwide as a problem of sovereign debt. Studies argue that firms' risk management and financing policies had a significant impact on the degree to which firms were affected by the financial crisis [1] (Brunnermeier, 2009). Exploring the 2008 crisis, Erkens et al. [2] hypothesise that this is because (a) firms with more independent boards raised more equity financing during the crisis, which caused existing shareholders' wealth to be transferred to debtholders, and (b) firms with higher institutional ownership took on more risk before the crisis, which resulted in greater investor losses during the economic crisis.

Failure to forecast recessions is a recurring theme in economic forecasting. The challenge of predicting output gaps is extremely contemporary and has profound implications for economic policy. Early notice of an impending output decline is crucial for policymakers, who can then quickly alter monetary and fiscal measures to either prevent a recession or lessen its effects on the real economy. The NBER estimates [3] that only five recessions have occurred in the United States since 1980, compared with 34 altogether since 1854. The plummet that occurred due to the double-dip falls of the early 1980s and the global financial crisis of 2008 can be termed far worse than either the Great Depression or the depression of 1937–1938.

With more potential regressors than there are observations, Machine Learning models have now proven able to handle massive volumes of data and provide high assurance about the accuracy of the eventual model. Chen et al. considered healthcare as one of the sectors affected by the Great Recession of 2008 [4] and examined health care expenditures along the health care spending distribution, based on the Medical Expenditure Panel Survey (MEPS) dataset from 2005-06 and 2007-08. To determine the various relationships between the recession and health care spending along the health care expenditure distribution, quantile multivariate regressions were used.
II. LITERATURE REVIEW

Although machine learning algorithms have long been employed in categorization problems, they are now increasingly being used in the social sciences, notably in the financial sector (Dabrowski, 2016) [5]. The hidden Markov model, the switching linear dynamic system, and the Naive Bayes switching linear dynamic system models were all implemented in that work. The hidden Markov model rests on two assumptions: first, the limited-horizon assumption, which states that the probability of being in a state at a given time depends only on the state at the previous time step (t-1); and second, the stationary-process assumption, which states that, given the current state, the conditional probability distribution over the next state remains constant.

Nyman and Ormerod used the Random Forest technique on datasets spanning 1970(Q2)–1990(Q1) and 1990–2007, which comprised a period of GDP growth [6]. This model was able to predict results about six quarters ahead, and the results were stronger for the UK economy than for the US. In another paper [7], they extended their analysis by looking at how each of the explanatory variables affected the Great Recession of the late 2000s. They furthered the investigation by breaking business and non-financial household debt into separate categories and discovered that the Great Recession was significantly influenced by both household and non-financial company debt, though their explanatory models exhibit significant non-linearity.

Using the 3-month to 2-year period, the Treasury term spread was used as a benchmark by Liu and Moench [8], who paid particular attention to whether the leading indicators previously surveyed in the literature go beyond the Treasury term spread in providing insight into potential future recessions. The Italian economy was employed as the database, and machine-learning-guided tools were used as the analysis method, in the paper [9] presented by Paruchuri, who investigated the idea of machine learning in economic forecasting. By examining the financial characteristics that can be used as a recession indicator, Estrella and Mishkin [10] conducted additional analysis of US recessions, drawing conclusions from the results they obtained (about one to eight quarters ahead); stock prices, currency exchange rates, interest rates, and monetary aggregates were assessed separately as well as in relation to other financial and non-financial indicators.
III. METHODOLOGY AND DATASET

A. Dataset

The first thought was to use the quarterly GDP growth rate of the major economies around the world, but this was quickly rejected since there was not enough available data (for example, after the disintegration of the USSR, quarterly data for Russia is available only from 1993, while China has maintained the data only since 1990). Yearly data was not favourable either, being too sparse (maintained and in the open domain only from the 1960s onwards, with major data from countries like France available only after the 1970s), so there was no continuity. The search for a dataset was narrowed to the US economy because, being the largest economy in the world, with major trade governed through the US Dollar, whenever it faces economic stress the rest of the world faces the aftershock.

One of the major indicators of any recession triggered by the US is the CFNAI (Chicago Fed National Activity Index). This index is a weighted average of 85 actual economic activity indicators for the preceding period. The CFNAI offers a single, summary measurement of a common element in the national economic statistics. As a result, changes in the CFNAI over time closely mirror periods of economic expansion and contraction.

The CFNAI Diffusion Index represents the difference between the total absolute value of the weights of the underlying indicators whose contribution to the Chicago Fed National Activity Index is positive in a given month and the total absolute value of the weights of those indicators whose contribution is negative or neutral in that same month, taken over a three-month period. Whenever the CFNAI Diffusion Index has fallen below the threshold of -0.35, it has indicated that the economy is in recession. Fisher et al., in their paper, outlined the failure of other policies and other Federal Reserve banks to capture an accurate representation of recession and, thereupon, gave a statistical model that determines the current index and also calculated the magnitude to which it could go [19].

The other dataset used for this paper was the S&P-500 data. The Standard & Poor's 500 (hereon S&P-500) is an index very similar to the Nifty-100 or the Nifty-50 in India, which has long been used to measure the stock performance of 500 major corporations listed on American stock exchanges, from the NYSE to NASDAQ. It is one of the most common and most sought-after equity indices. More than $5.4 trillion was invested in assets linked to the index's performance by the end of the year 2020, with the index reaching its highest point on 2 January 2022.

One of the key reasons to pick it was the historical evidence and the volume of data it presents, from December 1927 to the present, updated daily (as and when the market trades). The parameters associated with the S&P-500, along with the high and low price and trading volume, include the 10-year T-bond, % Change T-bond, 2-year Spread over the Fed rate, % Change Fed rate, Nonfarm Payrolls, % Change Payrolls, CPI and % Change CPI, by date.

Since the features we needed were not conveniently included in a single downloadable dataset, we had to download each feature separately and combine them into one dataframe. We were able to pull each economic feature separately from FRED (Federal Reserve Economic Data, from the Federal Reserve Bank of St. Louis) using Quandl, which also had the added bonus of automatically calculating selected transformations if we chose to do so; the financial feature was downloadable from Yahoo! Finance, so we downloaded the dataset, created the transformed variables in Excel, and imported the dataset as a CSV. We created recession labels from a list of start and end dates. We then concatenated each feature and the labels using an inner join to create one data-frame. After creating a correlation heatmap, we selected the features we wanted to include in our final dataset (each of the features falls under a certain category: employment, monetary policy, inflation, bond market, or stock market). Finally, we saved this dataset as a CSV and performed some descriptive statistics on the data.
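As a rough illustration of this assembly step, the sketch below merges per-feature CSV exports into one dataframe with an inner join and builds the recession labels from start/end dates. The file names and dates are placeholders for illustration only, not the paper's actual files:

```python
import pandas as pd

# Hypothetical per-feature CSV exports (one per FRED series pulled via Quandl,
# plus the S&P-500 history from Yahoo! Finance); file names are placeholders.
cfnai = pd.read_csv("cfnai.csv", index_col="Date", parse_dates=True)
sp500 = pd.read_csv("sp500.csv", index_col="Date", parse_dates=True)
cpi = pd.read_csv("cpi.csv", index_col="Date", parse_dates=True)

# Recession labels built from a list of start/end dates (illustrative values only).
recession_dates = [("2007-12-01", "2009-06-01"), ("2020-02-01", "2020-04-01")]
labels = pd.Series(0, index=cfnai.index, name="Recession")
for start, end in recession_dates:
    labels.loc[start:end] = 1

# The inner join keeps only the dates for which every feature is available.
df = pd.concat([cfnai, sp500, cpi, labels], axis=1, join="inner")
print(df.describe())   # descriptive statistics on the assembled data-frame
print(df.corr())       # basis for the correlation heatmap used for feature selection
```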


Fig 1: The Ratio Scale of S&P-500 Bull and Bear Markets (Credit: Yardeni)

B. Importance of Data Scaling

It is typical for data to contain scales of values that vary from variable to variable. A variable could be expressed in feet, metres, and so forth.

Some machine learning techniques perform significantly better when all variables are scaled within the same range, for example through normalisation, which scales all variables to values between 0 and 1. This has an influence on algorithms such as support vector machines and k-nearest neighbours, as well as on methods such as linear models and neural networks that use a weighted sum of the input.

As a result, scaling input data is a recommended practice, as is experimenting with different data transforms, such as employing a power transform to make the data more normal (that is, a better fit to a Gaussian probability distribution). This is also true for output variables, known as target variables, such as the numerical values predicted when modelling regression problems. It is frequently useful to scale or transform both the input and target variables in regression situations.

It is simple to scale input variables. You can use the scaler objects manually in scikit-learn, or the more convenient Pipeline, which allows you to chain a series of data transform objects together before using your model. The Pipeline will fit the scaler objects on the training data and apply the transform to new data, such as when using the model to predict something.
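One way this can look in scikit-learn is sketched below. The synthetic X and y are placeholders standing in for the assembled features and recession labels, not the paper's dataset; the point is only that the scaler fitted inside the Pipeline on the training data is reapplied automatically at prediction time:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumed inputs: X holds the economic/financial features, y the recession labels.
X = np.random.rand(200, 5)
y = np.random.randint(0, 2, 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# The Pipeline fits the scaler on the training data only and re-applies the
# same transform to any new data passed to predict() or score().
model = Pipeline([
    ("scaler", MinMaxScaler()),        # rescales every input variable to [0, 1]
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(model.score(X_test, y_test))     # accuracy on the held-out data
```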

a. Scaling of Target Variables

There are two methods for scaling target variables. The first option is to handle the transform manually, while the second is to use an automatic method:

1. Transform the target variable manually.
2. Transform the target variable automatically.

1) Manual transform of the target variable: Managing the scaling of the target variable manually entails constructing the scaling object and applying it to the data yourself. It involves the following steps (a short sketch follows the list):

• Create the object to be used for the transform, e.g. a MinMaxScaler.
• Fit the transform on the training dataset.
• Apply the transform to the training and test datasets.
• Invert the transform on any forecast that is made.
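A minimal sketch of these four steps is given below, assuming a generic regression setup; the synthetic X and y are placeholders rather than the paper's data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Assumed regression data: X features, y a continuous target (e.g. an index level).
X = np.random.rand(100, 3)
y = 50 + 200 * np.random.rand(100)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# 1. Create the transform object and fit it on the training target only.
target_scaler = MinMaxScaler()
target_scaler.fit(y_train.reshape(-1, 1))

# 2. Apply the transform to the training and test targets.
y_train_scaled = target_scaler.transform(y_train.reshape(-1, 1)).ravel()
y_test_scaled = target_scaler.transform(y_test.reshape(-1, 1)).ravel()

# 3. Fit the model on the scaled target, then 4. invert the transform on forecasts.
model = LinearRegression().fit(X_train, y_train_scaled)
preds_scaled = model.predict(X_test)
preds = target_scaler.inverse_transform(preds_scaled.reshape(-1, 1)).ravel()
print(preds[:5])   # forecasts back on the original scale of y
```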
2) Automatic transform of the target variable: An approach that can be followed instead of the manual one is to let the library take charge of the transform and its inverse. This can be achieved with the TransformedTargetRegressor object, which wraps a given model together with a scaling object. The TransformedTargetRegressor prepares the transform of the target variable using the same training data used to fit the model, and applies the inverse transform to any new data passed to the predict() function, so it returns predictions on the correct scale. To use the TransformedTargetRegressor, it is defined by specifying the model and the transform object to use on the target.
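A minimal sketch of this wrapper, again with placeholder data, assuming scikit-learn's TransformedTargetRegressor from sklearn.compose:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler

# Assumed regression data, as in the manual example above.
X = np.random.rand(100, 3)
y = 50 + 200 * np.random.rand(100)

# The wrapper scales y using the same training data used to fit the model and
# automatically inverts the transform whenever predict() is called.
model = TransformedTargetRegressor(
    regressor=LinearRegression(),
    transformer=MinMaxScaler(),
)
model.fit(X, y)
print(model.predict(X[:5]))   # predictions already on the original scale of y
```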

b. Rescaling Options

This was not covered before, but there are essentially just two options for rescaling features. Consider the case where x1 runs from -1 to 1, whereas x2 ranges from 99 to 101: both of these features have (about) the same standard deviation, but x2 has a much bigger mean. Now consider the following scenario: x1 is still between -1 and 1, while x2 is between -100 and 100. They have the same mean this time, but x2 has a significantly greater standard deviation. Gradient descent and similar algorithms can become slower and less reliable in each of these circumstances. As a result, we want to make sure that all features have the same mean and standard deviation.
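A brief sketch of standardisation on the first example above (x1 in [-1, 1], x2 in [99, 101]); StandardScaler is one way to give every feature zero mean and unit standard deviation, and the arrays below are synthetic placeholders:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Reproduce the example above: x1 spans [-1, 1], x2 spans [99, 101], so the two
# features have a similar spread but very different means.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 500), rng.uniform(99, 101, 500)])

scaler = StandardScaler()                     # removes the mean, divides by the std
X_std = scaler.fit_transform(X)

print(X.mean(axis=0), X.std(axis=0))          # very different means before scaling
print(X_std.mean(axis=0), X_std.std(axis=0))  # ~0 mean and unit std afterwards
```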
C. Model used in the project

There were two main models used in the making of this project, namely:

• Linear Regression
• Logistic Regression

Linear Regression:

Linear regression is a linear method that models the relationship between a scalar response and one or more explanatory variables (the response and the explanatory variables are also known as the dependent and independent variables). This phrase is more specific than multivariate linear regression, which predicts multiple correlated dependent variables rather than a single scalar variable.

In linear regression, linear predictive functions are used to model associations, with the unknown parameters of the model being estimated from the data. Such models are called linear models. The conditional mean of the response is typically assumed to be a linear function of the values of the explanatory variables (or predictors); less commonly, the conditional median or some other quantile is used.

The model parameters are quite easy to interpret due to the linear form. Additionally, linear model theory, which is mathematically straightforward, is well understood. Furthermore, a lot of contemporary modelling tools are built on the foundation of linear regression [20]. For example, linear regression more often than not provides a good approximation to the underlying regression function, especially when the sample size is small or the signal is faint.

Linear regression was the first regression analysis method for which researchers undertook in-depth research and that saw plenty of use in actual applications. This is because models that depend linearly on their unknown parameters are easier to fit than their non-linear counterparts, and it is simpler to determine the statistical properties of the resulting estimators.

The equation for linear regression is given as:

y = mx + c + e

where m is the line's slope, c is the intercept, and e represents the error that the model may have.

Fig 2. Line of best fit and linear regression (via upGrad)

The line of best fit is obtained by changing the values of m and c. The discrepancy between the observed and predicted values is known as the predictor error, and the values of m and c are chosen so as to produce the least predictor error. It is crucial to remember that an outlier can affect a simple linear regression model; as a result, it should not be applied to large data sets.
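As a small illustration of the fitted line y = mx + c, the sketch below uses synthetic placeholder data (not the paper's dataset) and ordinary least squares to recover estimates of the slope and intercept:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data following y = m*x + c + e with m = 2.5, c = 10 and Gaussian noise e.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 200).reshape(-1, 1)
y = 2.5 * x.ravel() + 10 + rng.normal(0, 1, 200)

# Ordinary least squares picks m and c so the squared predictor error is smallest.
model = LinearRegression().fit(x, y)
print(model.coef_[0], model.intercept_)   # estimates of the slope m and intercept c
print(model.predict([[4.0]]))             # fitted value on the line of best fit
```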

The least squares method is frequently used to fit linear regression models, although alternative methods exist. For instance, least absolute deviations regression reduces the "lack of fit" in another norm, whereas ridge regression (which involves an L2-norm penalty) and lasso regression (which involves an L1-norm penalty) minimise a penalised version of the least squares cost function. Additionally, the least squares method can be applied to non-linear models. Therefore, even though they are closely related, "least squares" and "linear model" are not interchangeable terms. When the dimensionality of the training set exceeds the number of data points, the conventional Linear Regression Classification (LRC) approach fails to yield accurate results [21].

One major application of linear regression is to analyse and understand a data set rather than simply predict outcomes. After fitting a regression, one can learn from the learnt weights how much each feature influences the result. For instance, if we have two features, A and B, that are used to predict a rate of success, and the weight learnt for A is significantly greater than the weight learnt for B, this indicates that the occurrence of trait A is more strongly correlated with success than the occurrence of trait B, which is interesting in and of itself. Unfortunately, feature scaling undermines this: because we rescaled the training data, the weight for A no longer corresponds to A values in the real world.

In general, comparing predictors' (unstandardised) regression coefficients to determine their relative relevance is not a good idea because:

• the regression coefficients for numerical predictors will be determined by the units of measurement of each predictor. It makes no sense, for example, to equate the influence of years of age to centimetres of height, or the effect of 1 mg/dl of blood glucose to 1 mmHg of blood pressure;
• the regression coefficients for categorical predictors will be determined by how the categories were defined. For example, the coefficient of the variable smoking will be determined by how many categories you construct for this variable and how you handle ex-smokers.

1) Comparing standardised regression coefficients: The standardised regression coefficients are derived by substituting the standardised versions of the model variables. A variable with a mean of 0 and a standard deviation of 1 is said to be standardised: remove the mean and divide by the standard deviation for each value of the variable. When the predictors in a regression model are standardised, the unit of measurement of each predictor becomes its standard deviation. The idea is that, by measuring each variable in the model in the same unit, their coefficients become comparable. Standardised coefficients offer certain advantages, including ease of application and interpretation, as the variable with the highest standardised coefficient is considered the most significant, and so on. They also provide an objective measure of importance, unlike methods that rely on domain knowledge to create an arbitrary common unit for judging the importance of predictors. However, standardised coefficients have limitations, as the standard deviation of each variable is estimated from the study sample, making it dependent on the sample distribution, size, population, and study design. Even a small change in any of these factors can significantly affect the value of the standard deviation, leading to unreliable standardised coefficients. In such cases, a variable with a higher standard deviation may have a larger standardised coefficient, and hence may appear more important in the model, even if it is not.

2) Comparing the impact of each predictor on the model's accuracy: When using linear regression, you can examine the increase in the model's R2 that comes from each additional predictor or, conversely, the decrease in R2 that results from deleting each predictor from the model. In logistic regression, you can evaluate the reduction in deviance that results when each predictor is included in the model.

Another approach compares predictors against a chosen reference predictor, by asking how much one predictor must change to replicate the effect of another predictor on the outcome variable Y. This method is particularly useful when a natural reference predictor is available, such as comparing the effects of different chemicals on lung cancer relative to smoking, which can be considered a reference for all lung carcinogens. However, when a natural reference is not available, it is best to use another method to evaluate variable importance. A further method involves selecting a fixed value or change in the outcome variable Y and comparing the change in each predictor necessary to produce that fixed outcome; for example, in terms of the 10-year risk of death from all causes for a middle-aged man, becoming a smoker is equivalent to losing 10 years of age. This, too, can be useful in assessing the impact of predictors on the outcome variable Y.
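A compact sketch of the first two ideas, standardised coefficients and the drop in R2 when a predictor is removed, using synthetic placeholder data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Assumed data: two predictors on very different scales plus a continuous outcome.
rng = np.random.default_rng(7)
X = np.column_stack([rng.normal(0, 1, 300), rng.normal(0, 50, 300)])
y = 3.0 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 1, 300)

# Standardised coefficients: fit on z-scored predictors (and outcome), so each
# coefficient is expressed per standard deviation rather than per raw unit.
Xz = StandardScaler().fit_transform(X)
yz = (y - y.mean()) / y.std()
print(LinearRegression().fit(Xz, yz).coef_)

# Drop-in-R^2: refit without each predictor and compare the explained variance.
full_r2 = LinearRegression().fit(X, y).score(X, y)
for j in range(X.shape[1]):
    reduced = np.delete(X, j, axis=1)
    r2 = LinearRegression().fit(reduced, y).score(reduced, y)
    print(f"predictor {j}: R^2 drops by {full_r2 - r2:.3f} when removed")
```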
Logistic Regression:

A statistical model called the logistic model, also referred to as the logit model, estimates the likelihood of an event by expressing the event's log-odds as a linear combination of one or more independent variables. Logistic regression, commonly referred to as logit regression, estimates the parameters of a logistic model (the coefficients in the linear combination) in regression analysis. According to its formal definition, binary logistic regression has a single binary dependent variable (two classes, coded by an indicator variable with the values "0" and "1"), whereas the independent variables can be either binary (two classes, coded by an indicator variable) or continuous (any real value) [22] [23].

Only when a decision threshold is included does logistic regression become a classification approach. The classification problem itself determines the threshold value, which is a crucial component of logistic regression. The desired precision and recall levels have a significant impact on the choice of the threshold value. In an ideal situation, precision and recall would both equal 1, but this is very rarely the case.

In the precision-recall tradeoff, we consider the following cases (a short sketch of threshold tuning appears at the end of this subsection):

1) Low Precision/High Recall: We choose a decision value with low precision or high recall in applications where we wish to lower the number of false negatives without necessarily reducing the number of false positives. For instance, in a cancer diagnosis application, we do not want any affected patient to be labelled as unaffected without paying close attention to whether the patient is receiving a false cancer diagnosis. This is because additional medical examinations can establish the absence of cancer, but they cannot detect its presence in a candidate who has once been rejected.

2) High Precision/Low Recall: We select a decision value with high precision or low recall in applications where we wish to cut down on false positives without necessarily cutting down on false negatives. For instance, if we are predicting whether a customer will respond favourably or unfavourably to a customised advertisement, we want to be absolutely certain that the customer will respond favourably to the advertisement, because a negative response could result in the loss of potential sales from the customer.

The main difference between logistic and linear regression is that the output of logistic regression is contained within a value between 0 and 1. In contrast to linear regression, logistic regression does not require or demand a linear relationship between the input and output variables, because the odds ratio is passed through a non-linear log transformation. The logistic function can be defined as:

f(x) = 1 / (1 + e^(-x))

where x is an input variable.

Fig 3: Illustration of a sigmoid function on an x-y graph

Even though a number of algorithms, including SVM, k-nearest neighbours, and logistic regression, demand that features be normalised, Principal Component Analysis (PCA) provides a good illustration of why normalisation is crucial. In PCA, we are most interested in the components that maximise variance. If the attributes are not scaled, PCA may find that the direction of maximal variance corresponds more closely to the 'weight' axis when one feature (for example, human height) fluctuates less than another (for example, weight) purely because of their respective scales (metres vs. kilogrammes). This is obviously misleading, because a change in height of one metre can be considered far more significant than a change in weight of one kilogramme.
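The sketch below ties these pieces together: the classifier's predict_proba output is the sigmoid value between 0 and 1, and moving the decision threshold trades precision against recall. The data and threshold values are placeholders for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Assumed binary data: 1 = recession month, 0 = no recession (synthetic placeholder).
rng = np.random.default_rng(3)
X = rng.normal(0, 1, (400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 400) > 1).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X)[:, 1]        # sigmoid output, a value between 0 and 1

# Raising the decision threshold tends to raise precision and lower recall.
for threshold in (0.3, 0.5, 0.7):
    pred = (proba >= threshold).astype(int)
    print(threshold,
          precision_score(y, pred, zero_division=0),
          recall_score(y, pred, zero_division=0))
```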
IV. RESULTS

From the ML strategies deployed, we were able to create a confusion matrix to assess accuracy. The values were also displayed as percentages for a clearer grasp of the model's reliability in layman's terms.

The accuracy varied from 87% to a little more than 94%, as evidenced by the results. In the charts, a 1 indicates a predicted chance of recession, while a 0 indicates little chance of recession. The CFNAI Diffusion Index was also used for values up to June 2022, after which some discrepancies were noticed in subsequent releases of the CFNAI datasets.

Fig 4. Unscaled Logistic Regression accuracy

Fig 5. Confusion Matrix for Unscaled Logistic Regression

Fig 6. Scaled Logistic Regression accuracy

Fig 7. Confusion Matrix for Scaled Logistic Regression

Fig 8. Tuned Logistic Regression with scaling: accuracy

Fig 9. CFNAI Diffusion Index plotted using the ML technique. Notice that the troughs coincide with the periods of economic recession.

Fig 10. Actual CFNAI Diffusion Index

Fig 11. CFNAI MA_3 plotted using the ML technique.

From Fig 9 and Fig 10, we can see the accuracy of the plotted graph versus the actual CFNAI Diffusion Index: where they differ, the differences are only minute. The monthly-average-of-3 plot (Fig 11) was similarly produced using the ML technique on the dataset extracted from the CFNAI. The dataset was taken from the year 1962 to June 2022.
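A minimal sketch of how the unscaled versus scaled comparison and the confusion matrices can be produced is given below. The synthetic X and y are placeholders, so the printed numbers will not match the paper's 87% and 94%, which were obtained on the real dataset described in Section III:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Assumed feature matrix X and recession labels y (synthetic placeholders).
rng = np.random.default_rng(11)
X = rng.normal(0, 1, (500, 6))
y = (X[:, 0] - X[:, 2] + rng.normal(0, 1, 500) > 0.8).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
cases = {"unscaled": (X_train, X_test),
         "scaled": (scaler.transform(X_train), scaler.transform(X_test))}

for name, (train_feats, test_feats) in cases.items():
    clf = LogisticRegression(max_iter=1000).fit(train_feats, y_train)
    pred = clf.predict(test_feats)
    print(name, accuracy_score(y_test, pred))   # accuracy, comparable to Figs 4, 6, 8
    print(confusion_matrix(y_test, pred))       # rows/columns: 0 = no recession, 1 = recession
```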
V. REFERENCES

[1] Brunnermeier, M. (2009). Deciphering the liquidity and credit crunch 2007-08. Journal of Economic Perspectives.

[2] Erkens, D. H., Hung, M., & Matos, P. (2012). Corporate governance in the 2007–2008 financial crisis: Evidence from financial institutions worldwide. Journal of Corporate Finance, 18(2), 389–411. https://doi.org/10.1016/j.jcorpfin.2012.01.005

[3] Business Cycle Dating. (n.d.). NBER. https://www.nber.org/research/business-cycle-dating

[4] Chen, J., Vargas-Bustamante, A., Mortensen, K., & Thomas, S. B. (2013b). Using Quantile Regression to Examine Health Care Expenditures during the Great Recession. Health Services Research, 49(2), 705–730. https://doi.org/10.1111/1475-6773.12113

[5] Dabrowski, J. J., Beyers, C., & de Villiers, J. P. (2016). Systemic banking crisis early warning systems using dynamic Bayesian networks. Expert Systems with Applications, 62, 225–242. https://doi.org/10.1016/j.eswa.2016.06.024

[6] Nyman, R., & Ormerod, P. (2017). Predicting economic recessions using machine learning algorithms. arXiv preprint arXiv:1701.01428.

[7] Nyman, R., & Ormerod, P. (2020). Understanding the Great Recession using machine learning algorithms. arXiv. https://doi.org/10.48550/ARXIV.2001.02115

[8] Liu, W., & Moench, E. (2016). What predicts US recessions? International Journal of Forecasting, 32(4), 1138–1150. https://doi.org/10.1016/j.ijforecast.2016.02.007

[9] Paruchuri, H. (2021). Conceptualization of Machine Learning in Economic Forecasting. Asian Business Review, 11(2), 51–58. https://doi.org/10.18034/abr.v11i2.532

[10] Estrella, A., & Mishkin, F. S. (1998). Predicting U.S. recessions: Financial variables as leading indicators. The Review of Economics and Statistics, 80(1), 45–61.