Macro Stress Testing LR
1. INTRODUCTION
Globally, stress testing is used by banking and financial regulatory authorities as a robust
risk management tool to quantify and evaluate the financial resilience of banks. In any
country, banks are the main players in the economy's financial system and, as credit
institutions, form the backbone of the financial services sector. Furthermore, banks are also
a potential source of contagion as a consequence of their role in the payments mechanism.
Compared to portfolio stress tests, in which institution-specific tests are conducted individually,
(Blaschke, T. Jones, Majnoni, & Martinez Peria, 2001) point out that the rationale behind
aggregate testing is to assist regulators in detecting structural weaknesses and system-wide risk
vulnerabilities that could cause a disruption in the economy's financial markets. The assessment
provided is of probable externalities and market failures, such as liquidity drying up or a
drop in asset quality, whereas portfolio stress testing is conducted to assess the internal risk
management framework of a particular organization. Moreover, individually oriented stress tests
typically understate effects on market liquidity, a phenomenon that arises out of collective market
reaction and that is captured by aggregate-level testing.
As a result, system-wide stress tests, more formally called Macro Prudential Stress Tests, are
conducted on a representative sample of a country's banks in order to assess and improve the
resilience of the national economy's financial sector to systemic macro-economic shocks with
potential destabilizing effects, such as the financial crisis of 2007-2008, the dot-com bubble of
2000, and the European Sovereign Debt Crisis that began in 2009.
Concerning the methodologies used to conduct Macro-Prudential bank stress tests, several models
have been propounded, mostly by national agencies and financial institutions, and compared to find
the most effective ones. Since this project focuses on the credit risk aspect of banks' risk being
stress tested, only the corresponding methodologies are considered.
2. MACRO PRUDENTIAL STRESS TESTS AROUND THE GLOBE
Macro Prudential stress tests are today widely conducted and encouraged in the United States and
the European Union.
The Dodd-Frank Act of 2010, passed under the Obama administration in response to the financial
crisis of 2007-09, mandates a yearly supervisory stress test of the US financial sector, and the
Federal Reserve's capital plan rule of 2011 requires that all US Bank Holding Companies (BHCs) with
combined assets of $50 billion or more develop and disclose capital plans to the Fed annually
(Acharya, Engle, & Pierret, 2014).
Consequently, the Fed has since 2011 conducted compulsory systemic stress tests as part of its
annual Comprehensive Capital Analysis and Review (CCAR). Further, (Board of Governors of
the Federal Reserve, 2011), as cited in (Acharya, Engle, & Pierret, 2014), also report that under
the Dodd-Frank Act methodology of stress testing, banks are required to meet supervisory limits on
four capital ratios.
(Acharya, Engle, & Pierret, 2014) report that the Committee of European Banking Supervisors
initiated the now well-known EU-wide stress tests in 2009, exercises that gained further prominence
with the European Sovereign Debt crisis of 2011 and are today mandatory, encouraged also by the
Basel Committee on Banking Supervision (BCBS). Unlike in the United States, in the European Union
stress tests are performed in a bottom-up manner. Under this method, banks individually submit
their specific stress reports to the European Banking Authority (EBA) after review by the National
Supervisory Authority (NSA). Hence the EU stress tests are commonly referred to as Micro-Prudential
stress tests. In the process, it is the European Central Bank (ECB) that defines the stressed
macro-economic scenario and thus maintains consistency in the aim of the specific tests. Regarding
regulations on credit risk, the (European Banking Authority, 2017) maintains that no negative
impairments are allowed, except and exclusively for exposures moving from Stage 2 (S2) back to
Stage 1 (S1).
(Drehmann, 2009) provides a well-structured categorization of the various models available
to stress the banking system's credit risk exposure, dividing them broadly into models based on
aggregate data and models based on market data.
a. MODELS BASED ON AGGREGATE DATA
Early methodologies relied solely on time series analysis to evaluate the influence of macro
variables on credit risk movements. Building on the two initial methods they suggest, (Blaschke, T.
Jones, Majnoni, & Martinez Peria, 2001) generate an 'Integrated Approach' that takes into account
both expected and unexpected losses to deliver an assessment of whether the revised capital ratio
is an adequate buffer against external blows. However, the approach has inherent limitations, such
as ambiguity related to loan provisioning, which yields inaccurate estimates, and its reliance on
backward-looking information, which may not fully capture the evolution of the creditworthiness of
the borrower, specifically during times of key structural changes in the finance sector.
(Bunn, Cunningham, & Drehmann, 2005) describe the systemic stress testing models developed by staff
at the UK's Bank of England. Standard macro-economic factors such as Gross Domestic Product (GDP),
unemployment, interest rates, equity market indexes and inflation, together with a distinct group
of variables not normally considered in macro based models, such as the income gearing ratio or the
Loan to Value (LTV) ratio, are taken into consideration and shown to drive the aggregate default
rates of the banking sector. In turn, the effect of the aggregate default rates and of Loss Given
Default (LGD) metrics on aggregate write-offs for corporate, mortgage and unsecured lending in the
UK is estimated and evaluated.
Additionally, (Pesola, 2007) contributes to the concept by empirically showing in his study that it
is not macro-economic shocks alone that place stress on banking variables or the banking sector in
isolation; rather, it is the financial fragility of the particular institution combined with
systemic variables that jolts the financial stability of banks by affecting their loan losses. In
particular, he assumes that the joint impact of a shock scenario and the particular bank's fragility
is multiplicative in the following sense:
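The original equation is not reproduced in the text above; a minimal sketch consistent with the
description, in which the loan loss ratio responds to the product of a macro shock term $S_t$ and a
lagged fragility indicator $F_{t-1}$ (for instance aggregate indebtedness), would be:

$$ \frac{L_t}{\text{Lending}_t} = \alpha + \beta \,\bigl(S_t \times F_{t-1}\bigr) + \varepsilon_t $$

so that the marginal impact of the shock, $\beta F_{t-1}$, is larger the more fragile the banking
sector already is.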
As a result, the effect of a macro-economic shock on loan losses becomes nonlinear, with the
prevailing bank fragility amplifying the impact of the shock.
In short, he argues that loan losses are driven by macro shocks whose impact is aggravated by a
more fragile financial system, where financial fragility is measured by the aggregate indebtedness
of banks. (Pesola, 2007) finds strong evidence for this by conducting a panel data analysis on loan
loss data from ten European countries.
One of the first researchers to address this limitation by using bank-specific data is the Bank of
England's (Pain, 2003). The dataset of his empirical study consisted of organization-specific data
drawn largely from the published annual reports of eleven major UK banks across a given time
series: seven commercial banks (Barclays, Bank of Scotland, Lloyds-TSB, Midland, NatWest, Royal
Bank of Scotland and Standard Chartered) and four mortgage banks (Abbey National, Alliance &
Leicester, Halifax and Northern Rock). Specifically, cross-sectional analysis was conducted over a
time period to form a panel data study, thus taking into account the individual exposures and the
quality of variables specific to each of the banks. The paper confirmed through its analysis that
the composition of a bank's portfolio is also an important explanatory factor in determining its
reaction to macro-economic movements.
Building on this line of research, (Van Den End, Hoeberichts, & Tabbae, 2006) base their study on
the Dutch banking sector using a two-phased approach, in which two basic equations are used to
model credit risk. Equation 1 establishes the estimated relationship between aggregate default
rates and some key macro-economic variables. Equation 2 then uses the default rates, together with
the macro variables, to describe Loan Loss Provisions (LLP) and convert defaults into losses. This
is estimated on a panel of Dutch banks, where bank-specific attributes are not explicitly
identified but are captured using fixed effects. Ultimately, combining the two equations yields the
Loss Given Default (LGD), which is identified using:
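The paper's exact expression is not reproduced in the text above; a plausible reconstruction,
assuming the standard expected loss identity in which loan loss provisions approximate the product
of the probability of default (PD), the loss given default (LGD) and the exposure at default (EAD),
would back LGD out as:

$$ LGD_t \;=\; \frac{LLP_t}{PD_t \times EAD_t} $$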
In this case too, exposures specific to the particular banks are taken into account, using a panel
data set of the five largest banks in the Netherlands.
(Quagliariello, 2007) conducted a nationwide stress test on an unbalanced panel of 207 Italian
banking intermediaries, using their accounting ratios. The sample selected covered over 90% of the
total consolidated assets of the Italian banking industry. (Quagliariello, 2007) showed that bank
financial health indicators, which he took to be loan loss provisions and the flow of new bad
debts, are driven both by macro-economic variables (GDP, the stock market index, spreads and
interest rates) and by bank-specific variables such as the Cost to Income ratio (C2I Ratio), credit
growth, the Return on Assets ratio, the Flow of New Bad Debts ratio, the Capital Asset ratio and
the like. Moreover, the main finding of the paper was the cyclical behaviour of bank riskiness in
tandem with movements of the business cycle affecting the economy as a whole. Given the feedback
mechanism that generally occurs in a rational market, coupled with the behavioural and reactionary
tendencies of banks to shocks in the economy (which typically take one to two years to
materialize), the recessionary impact is amplified and becomes deep and persistent. In this seminal
study, the researcher employs both static and dynamic models, which help improve model fit. The
study also presents empirical evidence for the income smoothing hypothesis, which suggests that
banks use provisions to smooth their earnings and not necessarily in response to expected loan
losses or economic shocks. However, one of the issues to consider in relation to the model
specified in this paper is whether the dynamic models, though comparatively accurate, are ideal for
the purpose of communication. The point is that, if lagged dependent variables prove to be a
considerable influence on the outcomes, such a model may appear ambiguous about the transmission
mechanism from the shock to the impact, which is exactly what is required for storytelling. Thus,
the researcher's results support the income smoothing hypothesis rather than explaining fundamental
credit risk exposures in a more granular fashion.
Here arises the problem with parametric statistical techniques, which often result in large
estimation errors because data on aggregate default series, loan loss provisions or even the flow
of new bad debts is typically classified as confidential information held by banks, and thus
remains limited or inaccessible. This hampers the measurement of the credit risk of the portfolio
at any point in time and across time periods.
Accordingly, (Basurto & Padillo, 2006) came up with a framework able to measure portfolio credit
risk adequately, especially in data-restricted environments where this capability is essential.
Within the framework of a stress testing exercise, the credit risk measurement model proposed in
the paper is a combined structure involving the implementation of the Conditional Probability of
Default (CPoD) methodology and the Consistent Information Multivariate Density Optimizing (CIMDO)
methodology, as given by (Basurto & Padillo, 2006). The CPoD technique integrates the effects of
macro level and financial changes into the credit risk metric and thereby recovers robust
estimators in settings with short time series, with the objective of improving credit risk
measurement through time. On the other hand, compared to typical standard credit risk models that
are data intensive or require large amounts of input data, CIMDO models the bank's Portfolio
Multivariate Distributions (PMDs) without forcing any parametric multivariate distribution upon
them or assuming default correlation structures among the loans in a portfolio. Contrary to
convention, it does this by relaxing potentially unrealistic assumptions while modelling the PMDs
in environments where data availability is difficult. Overall, since the data requirements of the
two methodologies are less taxing than those of the typical models, this method of evaluating
credit risk is far more practicable. In short, what the researchers propose is known as a Minimum
Cross Entropy approach. In practice this approach permits them to recover better estimates of
conditional probabilities of default (PDs) for loan classes in various sectors or risk rating
categories. Since it is non-parametric, non-linearities, which existing models ignore, are also
captured by the prediction model. Additionally, they produce the portfolio loss distribution using
only aggregate time series such as non-performing loans. Again, the minimum cross entropy approach
forms the cornerstone of this method, so it does not depend on assumed correlated default
structures among loans. The authors implemented their methodology empirically for Denmark. In a
nutshell, the portfolio loss distribution is simulated and, after that, expected and unexpected
losses corresponding to three test scenarios of varying severity are computed. (Basurto & Padillo,
2006)
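A minimal sketch of the minimum cross entropy idea described above, with notation that is
illustrative rather than taken from the paper: given a prior portfolio density $q(x)$, the
recovered posterior $p(x)$ minimizes the cross entropy to the prior subject to matching the
observed probabilities of default $PoD_i$ of each loan class $i$, where $\chi_i$ denotes the
default threshold of class $i$:

$$ \min_{p} \int p(x)\,\ln\frac{p(x)}{q(x)}\,dx
\quad \text{s.t.} \quad
\int \mathbf{1}\{x_i \ge \chi_i\}\, p(x)\,dx = PoD_i \;\;\forall i,
\qquad \int p(x)\,dx = 1 $$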
b. MODELS BASED ON MARKET DATA
The original macro stress tests used aggregate national-level time series data on bank variables.
However, the portfolio stress tests conducted individually by banks use firm-specific information
on the variables that drive the credit risk of the banking organization. This characteristic
framework of models built on market data for stress testing is derived from a pivotal idea in the
financial literature to date, namely (Merton, 1973) on the pricing of corporate debt, in which he
develops and uses the risk structure of interest rates. He propounded in this paper his influential
concept that a company's equity can alternatively be viewed as a call option on the underlying
assets of the balance sheet, with the company's outstanding debt as the strike price. It is
generally observed that the movement of option prices in the derivative market is a stochastic,
random process, making its drivers difficult to estimate. However, he also showed that by inverting
or rearranging the option pricing formula, famously the Black Scholes Merton formula, one can
recover the asset value and the parameters governing its stochastic process from observable
information. From this, the probability of default, which is an estimate of the likelihood that
debtors will default on their payment obligations over a given time horizon, can be calculated.
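A minimal sketch of this inversion, not taken from any of the cited papers: observed equity value
and equity volatility are used to back out the unobserved asset value and asset volatility from the
Black-Scholes-Merton call option relations, after which a risk-neutral probability of default over
the horizon follows. All numbers in the example are invented for illustration.

```python
# Illustrative Merton-style sketch: recover asset value and volatility from equity
# data by inverting the Black-Scholes-Merton formula, then compute a risk-neutral PD.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_pd(equity, equity_vol, debt, r, T=1.0):
    """Equity is treated as a call on firm assets with strike = face value of debt."""
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / debt) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        eq1 = V * norm.cdf(d1) - debt * np.exp(-r * T) * norm.cdf(d2) - equity
        eq2 = norm.cdf(d1) * sigma_V * V - equity_vol * equity   # option delta relation
        return [eq1, eq2]

    V, sigma_V = fsolve(equations, x0=[equity + debt, equity_vol])
    d2 = (np.log(V / debt) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return norm.cdf(-d2)   # risk-neutral probability of default over horizon T

# Hypothetical firm: equity value 40, equity volatility 60%, debt 100, risk-free rate 5%
print(round(merton_pd(40.0, 0.60, 100.0, 0.05), 4))
```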
Another critical paper published in the area of market data based stress testing is that of
(Pesaran, Schuermann, Treutler, & Weiner, 2003). The paper develops a structure for modelling
conditional loss distributions with the inclusion of risk factor dynamics. A dynamic global
macro-econometric model is linked to the asset value changes of a credit portfolio, thus permitting
systematic factors to be isolated from idiosyncratic shocks, i.e. shocks unique to the particular
institution. Business cycles, both domestic and foreign, and their transnational linkages are the
primary drivers of default probabilities of banks in an open economy. Consequently, the model has
been developed with firm-specific heterogeneity in mind, as well as the ability to generate
multi-period forecasts of the entire loss distribution subject to certain detailed macro-economic
scenarios. In the end, the paper develops a conditional modelling framework for the analysis of
credit risk that helps ascertain the explicit linkage between a portfolio of credit assets and the
underlying international macro-economic system. Working on a sample of globally active firms, the
authors statistically relate changes in equity prices to changes in ratings, which include a
default category. They then use panel data econometrics to identify the systematic risk drivers for
the given set of stock returns and their movement over time. Their selected set includes interest
rates, inflation, exchange rates and the stock index; interestingly, GDP was not found to be a
particularly effective factor variable. The peculiarity of the paper lies in the unique way that
interactions of systematic risk drivers across international boundaries are modelled. Apart from
the above, this particular feature of the model allows them to assess the impact of changes in
foreign risk factors on domestic firms, which can be essential to the output of the entire stress
testing exercise.
A similar test based on the above study's model was used for a Euro area stress test, the results
of which were released in a paper by (Castrein, Dees, & Zaher, 2008). At the basic level, this
paper follows the model specified in the paper above, in the sense that the data generation method
for the systematic risk factors is of the same type. The differentiating feature is that their
paper links the median sectoral Probability of Default (PD), as predicted by Moody's KMV, to
macro-economic variables. To break it down further, they employ a mean group estimator that allows
for heterogeneous slope coefficients across banking companies. The focus of the paper is primarily
on the linkage between global macro-financial variables and institutions' default probabilities, an
integral part of financial sector stress testing frameworks. The researchers show how to evaluate
the Euro area corporate sector probability of default under a host of shock scenarios considering
both domestic and foreign macro-economic shocks. This is done using the Global Vector
Autoregressive (GVAR) model and then constructing a linking satellite equation for the firm-level
Expected Default Frequencies (EDFs). The concluding results confirm that, at the aggregate Euro
area level, the median EDFs react most to stresses to the exchange rate, oil prices, GDP and equity
prices. They also point out that some intuitive deviations from these results exist when
sector-level EDFs are considered. The GVAR is used as a source to generate shocks to the financial
system. It is founded on country or area specific vector error correction models, in which domestic
and foreign variables interact. The aggregate Satellite-GVAR model allows for a richer
representation of the international transmission of shocks to corporate sector credit quality than
a framework that uses a simple Euro area VAR, as it helps account for various global linkages
(direct, second-round and third-market effects, as well as transmission through financial
variables). Ultimately, the research shows that the Satellite-GVAR model is a very valuable tool
for analyzing extreme but still plausible worldwide macro-financial shock scenarios designed for
financial sector stress testing purposes.
While the model articulated above relies heavily on equity return data available from the market,
(Gupton, Finger, & Bhatia, 1997) develop a modelling framework on another source of publicly
available financial information. In order to arrive at loss distributions of credit portfolios,
matrices that chart the ratings transitions of individual counterparties are used. The authors,
heading risk management research at JP Morgan, developed through this technical document published
in 1997 a trademarked in-house framework for quantifying credit risk in portfolios consisting of
typical credit instruments and products subject to counterparty default probabilities. The
technical study was also co-sponsored by the Bank of America, the Bank of Montreal, BZW, Deutsche
Morgan Grenfell, KMV Corporation, Swiss Bank Corporation and the Union Bank of Switzerland. The
name of the product is 'CreditMetrics'. The CreditMetrics measure was developed with the intention
of capturing credit risk as accurately as possible with as sparse data as is available, by seeking
to construct that which cannot be easily observed, unlike in the case of market risk, where daily
liquid price observations are found. Another advantage of CreditMetrics, as opposed to older
traditional models, is that it is not built on the assumption that market returns are normally
distributed, making the result as realistic as possible, albeit with a certain natural degree of
error. The model aims to be a natural extension of, or complement to, earlier and more popular
metrics of credit risk such as Value at Risk (VaR), which is based on the assumption of normally
distributed returns. As explained earlier, the fundamental basis on which the model is constructed
is what is known as 'migration analysis', the study of changes in the credit quality of names over
time, alternatively captured in transition matrices. It started off by attempting to address the
challenges posed in estimating portfolio credit risk, which is practically quite difficult. Even
equity prices are not a good substitute for measuring credit risk, due to fundamental differences
between the two. One is that, while equity returns are, as per empirical observation and evidence,
relatively symmetric and can be well approximated by a normal or Gaussian distribution, credit
returns are typically skewed and include fat tails (caused by the phenomenon of default). The
second is the problem associated with modelling correlations. The lack of quality credit data makes
it difficult to estimate any type of credit correlation directly from past data, which in turn
requires either assuming that credit correlations are uniform across the portfolio or proposing a
model that captures credit quality with better parameters. Ultimately the model specifies a
methodology with the following steps:
1. Calculating the joint probabilities of the movements of the different constituents of the
portfolio.
2. Extending the credit risk calculation from standalone exposures to multiple-exposure scenarios.
3. Estimating the marginal risk contribution, which identifies concentrations built into a
portfolio and advises on potential risk-dampening strategies.
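A minimal sketch of this migration-based approach follows. It simulates a portfolio loss
distribution from a rating transition matrix by mapping correlated standardized asset returns to
rating thresholds. All numbers (ratings, transition probabilities, revaluation values, correlation)
are invented for illustration and are not taken from the CreditMetrics document.

```python
# Illustrative migration-analysis sketch in the spirit of CreditMetrics.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

ratings = ["A", "B", "C", "Default"]
# One-year transition probabilities by current rating (each row sums to 1).
transition = {
    "A": np.array([0.92, 0.06, 0.015, 0.005]),
    "B": np.array([0.03, 0.90, 0.05, 0.02]),
}
# Value of a 100-notional exposure one year ahead, by end-of-year rating.
value_by_rating = np.array([103.0, 101.0, 97.0, 45.0])

portfolio = ["A", "B", "A", "B", "B"]   # current ratings of the exposures
rho = 0.3                                # correlation with one common systematic factor
n_sims = 50_000

def thresholds(probs):
    """Convert a transition row into asset-return cutoffs (worst ratings in the left tail)."""
    cum = np.cumsum(probs[::-1])[:-1]    # cumulative prob of Default, Default+C, ...
    return norm.ppf(cum)

cuts = {r: thresholds(p) for r, p in transition.items()}

losses = np.zeros(n_sims)
z_common = rng.standard_normal(n_sims)
for rating in portfolio:
    z = np.sqrt(rho) * z_common + np.sqrt(1 - rho) * rng.standard_normal(n_sims)
    bucket = np.searchsorted(cuts[rating], z)      # 0 -> Default, 1 -> C, 2 -> B, 3 -> A
    end_value = value_by_rating[::-1][bucket]
    losses += 100.0 - end_value                    # change in value relative to par

print("Expected loss:", losses.mean())
print("99% loss quantile:", np.quantile(losses, 0.99))
```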
However, (Gupton, Finger, & Bhatia, 1997) also leave the technical document with a limitation: the
question of explicitly identifying the observable factors driving the default correlations of
borrowers.
A vital constraint of the above models, which are based entirely on market-related data, is simply
the fact that they rely solely on market data. In other words, if a stress test had to be conducted
on banks whose customers and portfolios consist largely of non-listed companies and households,
models based on market data would be outright redundant, given that there exists little to no
market information on unlisted companies and households.
Furthermore, the robustness of models working on market data has been questioned due to the several
empirical problems they give rise to. For instance, in order to implement a simple Merton model,
interest rates have to be assumed constant and the liabilities on the company balance sheet are
treated as a single zero coupon bond whose maturity equals the horizon of interest. Despite there
being literature that somewhat addresses this particular issue of the Merton model, (Jarrow,
Deventer, & Wang, 2003) find, while calling for further research, that empirical evidence from
their robust tests of Merton's structural model for valuing credit risk firmly rejects Merton-type
models, even after allowing noise to account for 20% of the inconsistencies observed, an indeed
conservative assumption.
Therefore, alternative approaches that address the above mentioned issues were developed in the
form of reduced form econometric models.
Early papers published in this respect are those of (Beaver, 1996) and (Altman, 1968), which use
only accounting variables and factors such as leverage, loss provisions, new bad debts, interest
coverage ratios and the like as independent variables to explain the movement in defaults of the
banking organization, the dependent variable. (Altman, 1968) specifically aims at assessing the
quality of ratio analysis as an analytical tool, using the prediction of corporate bankruptcy as an
illustrative case. Specifically, a set of economic, monetary and financial ratios is evaluated for
its ability to adequately and accurately predict bankruptcy, employing a multiple discriminant
statistical methodology.
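The best-known output of that multiple discriminant analysis is the Z-score; in its commonly quoted
form (reproduced here for orientation, not taken from the text above) it combines five ratios:

$$ Z = 1.2X_1 + 1.4X_2 + 3.3X_3 + 0.6X_4 + 1.0X_5 $$

where $X_1$ is working capital/total assets, $X_2$ retained earnings/total assets, $X_3$ EBIT/total
assets, $X_4$ market value of equity/book value of total liabilities, and $X_5$ sales/total assets,
with lower Z-scores indicating higher bankruptcy risk.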
Apart from the above, (Wilson, 1997) further recommends using a probit model, as he does in his own
experiment, and recognizes that macro-economic factors form part of the systematic risk drivers
alongside firm-specific variables. In detail, the paper sets out a new and intuitive method for
tabulating the exact loss distribution arising from correlated credit events for any arbitrary
portfolio of counterparty exposures, all the way down to the individual contract level. The losses
are measured on a marked-to-market basis that explicitly recognizes the potential impact of
defaults and credit migrations. The model itself consists of two critical parts. The first is a
'multi-factor model of systematic default risk', in which joint simulations of the conditional,
correlated, average default and credit migration probabilities for every rating segment are
conducted. The second is a methodology for charting the discrete loss distribution of any portfolio
of credit exposures. This is obtained by convolving the conditional, marginal loss distributions of
the individual positions to develop the aggregate loss distribution, with default correlations
between different counterparties determined by the systematic risk driving the correlated average
default rates. Several papers have used this framework for conducting credit risk stress tests
(Boss, 2002) (Sorge & Virolainen, 2006). More recent models, such as those of (Shumway, 2001) and
(Hillegeist, Keating, Cram, & Lundstedt, 2004), make use of time-varying macro variables and
firm-specific covariates to predict Probabilities of Default (PDs) on the basis of hazard rate
models.
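A minimal sketch of the multi-factor, macro-driven default-rate specification described above, with
notation that is illustrative rather than taken from the paper: the conditional default rate
$p_{j,t}$ of sector or rating segment $j$ is linked through a logistic (or probit) transform to a
segment-specific macro index $y_{j,t}$, while the macro factors $X_t$ themselves follow an
autoregressive process:

$$ p_{j,t} = \frac{1}{1 + e^{\,y_{j,t}}}, \qquad
   y_{j,t} = \beta_{j,0} + \beta_j' X_t + v_{j,t}, \qquad
   X_t = \Phi X_{t-1} + \varepsilon_t $$

Stressed paths of $X_t$ then translate into stressed segment-level default rates, which drive the
simulated portfolio loss distribution.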
Usually, the literature on stress testing concentrates much of its effort on developing models able
to forecast PDs over a one-year horizon from the date of the shock. However, it has also been
noticed, and demonstrated through various empirical exercises, that the forecasting horizon
considered affects the empirical findings of the stress test regarding the drivers of credit risk.
A glaring example in this context is the conditional probability of default, which is not the same
for the second year as for the first year of estimation, even if the explanatory variables remain
constant throughout the horizon.
The paper that most aptly represents the above claims is that of (Duffie, Saita, & Wang, 2007). In
order to predict term structures of conditional probabilities of corporate default that also
incorporate the movements of systemic as well as firm-specific covariates, they provide a set of
maximum likelihood estimators. This allows the term structure of hazard rates to be estimated from
mean-reverting time series processes for the covariates. Taking over 390,000 firm-months of data
spanning the period 1984 to 2004, the sample comprised US industrial firms. The data were used to
show that the term structure of conditional future default probabilities depends on the firm's
distance to default (a volatility adjusted measure of leverage), on trailing S&P 500 returns, on US
interest rates, and on the firm's trailing stock returns. A change in a firm's distance to default
has a considerably larger impact on the term structure of future default hazard rates than a
comparably sized change in the other covariates.
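For orientation, distance to default is typically computed from the Merton-type quantities
introduced earlier (this is the standard textbook expression, not a formula quoted from the paper):

$$ DD_t = \frac{\ln(V_t / D_t) + \bigl(\mu - \tfrac{1}{2}\sigma_V^2\bigr)T}{\sigma_V \sqrt{T}} $$

where $V_t$ is the asset value, $D_t$ the default boundary (face value of debt), $\mu$ the asset
drift and $\sigma_V$ the asset volatility over horizon $T$; a lower $DD_t$ implies a higher default
hazard.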
In parallel, (Campbell, Hilscher, & Szilagyi, 2006) directly estimate PDs at various time horizons
in the future, conditional on survival up to that period. Their research article explores the
leading indicators of corporate bankruptcies and the pricing of financially distressed stocks whose
failure probability, estimated from a dynamic logit model using accounting and market variables, is
high. It has been observed that, since 1981, financially distressed stocks have delivered abysmally
low returns. Despite this, they show higher standard deviations, market betas, and loadings on
value and small-cap risk factors than stocks with a lower risk of failure. These characteristics
are especially pronounced for stocks subject to arbitrage-related frictions. One of their main
contributions is to show that the risk of failure cannot be adequately summarized by the
Merton-model-inspired measure of distance to default.
Another study that concentrates more on non-linearities is that of (Drehmann, Patton, & Sorenson,
2005), which uses a large data set of corporate defaults in the UK. Specifically, they explore the
effect of possible non-linearities on credit risk in a Vector Autoregression (VAR) setup. Aggregate
liquidation rates and quarterly firm-specific PDs are used as measures of credit risk. The paper
shows that the results of the non-linear VAR differ critically from those of standard linear
models, especially when large shocks are being dealt with. More importantly, accounting for
non-linearities in the underlying economic environment leads to different conclusions for credit
risk projections. Furthermore, the shape of the impulse response function changes with
non-linearity.
This paper follows that of (Jorda, 2004), which is built on the pivotal idea of estimating local
projections at each horizon of interest rather than extrapolating into increasingly distant
horizons from a single model, as is generally done in VAR analysis. It is argued that the
generalized linear models used as econometric specifications are nothing but first order Taylor
series approximations of the true model. Linearity rarely represents the actual picture, and even
then only when studying the impact of small shocks around the equilibrium of the process. However,
stress tests were developed in the first place to analyze the impact of large systemic shocks,
where the actual impact may be far from linear.
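A minimal sketch of the local projection idea, with illustrative notation: for each horizon $h$,
the response of a credit risk variable $y$ to a shock $x_t$ is estimated directly from a separate
regression rather than iterated forward from a one-step model,

$$ y_{t+h} = \alpha_h + \beta_h x_t + \gamma_h' z_t + u_{t+h}, \qquad h = 0, 1, \dots, H $$

where $z_t$ collects control variables and the sequence $\beta_0, \dots, \beta_H$ traces out the
impulse response without imposing the linear recursion of a VAR.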
Further research in the topic has shown that default correlations are not the result of observable
variables alone. (Das, Duffie, Kapadia, & Saita, 2007) factor unobservable components into their
selected set of inputs, with the result that the loan loss distributions have significantly fatter
tails. Their concluding remarks state that defaults cluster in time both because firms' default
intensity processes are interrelated and because contagion or feedback effects remain even after
these intensities are taken into account.
Additionally, (Jimenez & Mencia, 2007) consider a sectoral structure in which macroeconomic
variables, as well as unobservable credit risk factors that help capture contagion effects across
sectors, drive default frequencies and the total number of loans. In this spirit, the authors
aggregate loan-specific defaults into default frequencies for a sample of ten corporate sectors and
two household segments in Spain. Their work shows that default rates exhibit strong serial
dependence and are influenced by GDP and interest rates with up to four lags. Latent factors cause
fatter tails in the loss distribution because they are important drivers of cross-sectoral
contagion.
The selection of the final methodology to be used in a stress testing exercise is an extremely
complicated decision because a host of factors affect it. Whatever the objective, the first and
foremost hurdle to stress testing is data availability. Unlike market risk testing which, as
mentioned before, can be simulated using comparatively stable and readily observed equity price
movements as the base, the complicated credit risk stress testing procedure has no defined market
variable that is stable or predictive enough to form the base for credit risk related information
or data points of institutions. Thus, despite the vast literature on bank credit risk stress
testing outlined in the above review, a comparatively simple model is used to conduct the exercise
in the Indian context. Keeping in mind the constraints on resources, data availability in
particular (credit risk data being largely proprietary information of banking companies), as well
as the time and scope of the project, a basic methodology is adopted to conduct the stress test of
the Indian banking sector; it is explained in detail in the methodology section. In response to the
financial crisis and the sovereign debt crisis in the US and Europe respectively, stress testing as
a risk management framework has been made mandatory and is carried out extensively, or at least
strongly encouraged, in those regions as well as in a few other countries such as Japan and South
Africa.
As witnessed in the cases presented above, the majority of the exercises, research and reports
emanate from stress testing conducted in the West. Apart from being mandated by law, stress testing
is also taken up enthusiastically at the individual level by specific banking organizations. The
gap this research project aims to fill relates to stress testing in the Indian risk management
context. Apart from a small section in the annual Financial Stability Report (FSR) published by the
Reserve Bank of India (RBI), the term 'stress testing' is found almost nowhere in the financial
vocabulary of India Inc. Given that stress testing addresses the limitations of the popular Value
at Risk (VaR) metric of risk measurement, and that it provides a more accurate and detailed picture
of the financial health and resilience of an organization, this project conducts a basic level
stress test on a select set of Indian banks whose data was readily available. Ultimately, this
project is a basic simulation of a systemic, or rather Macro Prudential, stress test on the Indian
banking sector, represented by a select set of five Indian banks chosen solely on the basis of ease
of data accessibility and availability. Taking the Bank of Italy's stress testing exercise as the
basis, macro-economic variables affected by an exogenous shock (linked to it through the BIQM
model, which is outside the scope of this study) cause movements in a set of indicators that
represent the financial health of the Indian banks considered. The performance of the indicators is
judged by their reaction to stress scenarios of three different degrees of severity. The
indicators' performance is then used to draw conclusions about the financial health of the banks
and the robustness of their risk management.