
Growth Project Empirical Econometrics:

Tutorial Assignments

Block Period 2 2024-2025

Getting Started in R
You will have to analyze your data in R. R is a computer programming language that can be used for
statistical analyses. You can find a manual on http://www.r-project.org/, following “Manuals” to “An
Introduction to R”, or you can consult an interactive introductory course on https://www.datacamp.com.
Throughout the tutorial assignments, initial help will be provided to appropriately analyze your data in
R. !! Importantly, we will provide initial help for a specific function the first time you need to use it. If a
later assignment (in the same tutorial or in a subsequent one) again requires a function that was introduced
earlier, that help will not be repeated !!

Software Access
Start by downloading R from https://cran.r-project.org. A user-friendly interface is available via RStudio,
which can be downloaded from https://www.rstudio.com/products/rstudio/download. Install R and
RStudio preferably before the first lecture, but definitely before handing in the first tutorial assignment
in week 1, as you will need them!
When you open RStudio, the screen is divided into three main windows, as shown in Figure 1. First, the
console (left), where you can enter commands directly and where output is returned. Below the default text
already shown in this window, you can see the “>” sign, followed by the text cursor. This is called the
“prompt”, through which R indicates that it is ready to execute a new command. Second, the environment
window (top right) gives an overview of all objects in memory. Third, in the bottom right window, plots are
returned, help files can be accessed and packages can be downloaded.

First Steps in R
R can be used as a simple calculator. Try to enter
1350 + 6750
in the console. R will give you the following answer:
[1] 8100

If you want to re-execute your last command, you do not have to type it in all over again. Just press
the up arrow key on your keyboard and the last command will reappear. If you want, you can now
make adjustments to this command using the left and right arrow keys.

Creating objects. You can also create objects in R. You could consider an object to be a “box” to which
you give a name and in which you store data such as numbers, vectors, matrices, etc. Suppose that we want
to create the object “x” to which we want to attribute the value “5”. You can do this by executing the
command
x <- 5
where you assign, through the operator <-, the value 5 to the object x.

Figure 1: RStudio and its different windows.

If you now want to ask R what the object “x” contains, you can simply give the command
x
and R will reply with
[1] 5

Similarly, you can attribute a vector to “x” through the function c() where the elements in the vector
are separated by a comma:
x <- c(3,7,10)
If you now ask R what the object “x” contains, R will reply with
[1] 3 7 10
You can also access individual elements in a vector through the brackets [ ]. For instance: if you would
like to find out what the second element of “x” is, you can give the command
x[2]
and R will reply with
[1] 7

When assigning objects, please take into account that you should not use object names that belong to R’s
internal vocabulary. For example, do not create an object with the name “sqrt”, as this name is reserved for
computing the square root of a number. Furthermore, R is case sensitive! “x” and “X” are thus two different
objects!

Logarithm and differences. Throughout the course, we will often make use of logarithmic and/or dif-
ference transformations to make our time series stationary. Let “x” again be defined as the vector described
previously:
x <- c(3,7,10)
You can now create a new object called “log_x” that represents the natural logarithm of “x”:
log_x <- log(x)
To see the value of “log_x”, type in
log_x
in the console and you get
[1] 1.098612 1.945910 2.302585

Similarly, you can define “d_x” as the first differences of “x”:

d_x <- diff(x)

Figure 2: RStudio and R scripts.

d_x
[1] 4 3
which indeed returns the differences between the consecutive elements in the vector (i.e. 7 − 3 = 4 and
10 − 7 = 3). Finally, let us compute “dlog_x” as the first differences of the log-transformed “x”:
dlog_x <- diff(log(x))
dlog_x
[1] 0.8472979 0.3566749
Computing such a log-difference transformation will turn out to be useful to obtain growth rates of our time
series, as you will learn during the tutorials.

If you ever need some further documentation on one of R’s functions, for example on the log function,
you can use the question mark functionality:
?log
after which the function documentation pops up in the bottom right window of RStudio.
If you would like to execute a function, for example taking the logarithm of a number, but you do not
know the exact name of the function, you can try:
??logarithm
and several documentation files will be suggested.

R Scripts
Suppose that you have been working in R for several hours, but it is getting late and you want to continue
your work tomorrow. If you would now close R, all of your work would be gone! To avoid this problem, we
will not give R commands directly through the R console, but save them in an R script. Hence, for your
own records, and with a view to preparing your final paper, it is a good idea to keep a systematic record of
your entire workflow in R scripts.
You can open a new R script by clicking in the menu bar (at the top of RStudio) on “File”, “New File”,
“R Script”. The left panel then gets divided into two windows, see Figure 2: The top one is your R script,
the bottom one is the console we have been using until now. In the R script, you can now enter commands
like we did before and execute them by first selecting the command and then clicking “Run” (keyboard
shortcuts are available but depend on your system). When you are finished working in R, save the script (“File”
and then “Save as”). You can later re-open these R scripts in RStudio to continue working with them.
When you write R scripts, the code can become long and hard to read. To clarify your work, you can
add comment lines in R to provide your code with additional information. Comment lines always
start with the “#” sign. You could even include output of your analysis as comments in your R script.
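For instance, a short commented fragment of an R script could look as follows (the output shown is the one
obtained earlier in this section):

# compute the natural logarithm of the vector x
x <- c(3, 7, 10)
log_x <- log(x)
log_x
# output: [1] 1.098612 1.945910 2.302585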

Data Source
The data sets provided to you come from the Penn World Table (PWT) version 10.0. PWT is a secondary
data source, conveniently and freely accessible through the website of the Groningen Growth and Develop-
ment Centre (GGDC), where you will also find detailed and up-to-date documentation:
www.rug.nl/ggdc/productivity/pwt.
When using these data, please refer to the following paper: Feenstra, Robert C., Robert Inklaar and
Marcel P. Timmer (2015), “The Next Generation of the Penn World Table,” American Economic Review,
105(10), 3150-3182, available for download at www.ggdc.net/pwt.

Introduction to the Data


Recall the basic GDP identity from macroeconomics:

Y = C + G + I + X − M

where GDP (represented by Y) is decomposed into four major categories: private consumption (C), government
consumption (G), investments (I), and international trade (exports X minus imports M). This identity is
the major foundation for national accounts and holds both in current and in constant prices (apart from an
unavoidable statistical discrepancy):

YU = CU + GU + IU + XU − MU,
YO = CO + GO + IO + XO − MO.

The notation is conventional: YU (YO) for GDP in current (constant) prices; CU (CO) for private consumption
expenditures in current (constant) prices; GU (GO) for government expenditures in current (constant)
prices; IU (IO) for investment expenditures in current (constant) prices; XU (XO) for exports in current
(constant) prices; and MU (MO) for imports in current (constant) prices.
You will use annual data on these macroeconomic aggregates for one specific country and for the longest
time period available, both in current prices (actual values) and in constant prices. Constant prices means
that the value of the macroeconomic aggregate has been recalculated using price levels of some fixed base
year. As a rule, national account data are available for the main OECD countries starting sometime between
1950 and 1970. In the case of developing countries, or after regime changes, the data start later. Table 1
provides an overview of the key variables contained in your data set, together with the acronyms we will use
in the tutorial assignments to refer to a particular variable.
Note that successive observations of a time series are distinguished by a time index t, as in CUt, IUt, YUt.
The index t runs from the first to the last time period (in our case years), and this is written as t = 1, . . . , T.
This notation is only needed in algebraic equations. Computer programs like R never lose track of the time
index but keep it implicit. In the tutorial assignments, we may omit the time indexes when convenient.
In addition to System of National Accounts (SNA) data, you might need variables like the population
size. There are of course many more variables that could potentially be useful. Some main candidates that
are included in the data files are listed in Table 2. Interested students are encouraged to go to the website of
the Penn World Table project and download the raw data files. The constructed files on Canvas are created from
the data sets “pwt100” and “pwt100-na-data”. For every variable we use in our data set, make sure to trace
it back to the appropriate variable in the raw data files.1
While most tutorial assignments only require you to use the variables in the provided data files, more
examples of potentially useful variables are representative interest rates (short or long term, accounting for
both the cost of borrowing and wealth effects); an unemployment rate (to reflect job insecurity); a stock
market index (tracking wealth effects and the economic climate); etc. You may gather more potentially
interesting variables, but make sure to take note of (a) the sources, (b) the definitions, and (c) the units of
measurement of any data that you look up.
If you collect additional data, go ahead and prepare a coherent data set. An Excel spreadsheet can be
useful at this stage, but must be used very carefully so as to ensure reproducibility. When ready this data
set will have to be loaded into R.
1 You should be able to directly retrieve all variables except for the variable KO, which you can construct as
KO = rkna ∗ q_gdp/rgdpna; please see www.ggdc.net/pwt for extensive documentation.

Table 1: Overview of key GDP components and identifier variables.

Variable Name   Description

COUNTRY         Country name (country identifier)
COUNTRYCODE     3-letter ISO country code (country identifier)
CUR             Currency unit
YEAR            Year (time identifier with equally spaced, unique and recognisable values)
YU              GDP (or GNP) at actual, cUrrent national prices
                = nominal GDP (or nominal GNP) in macroeconomics texts
YO              GDP (or GNP) at base year (2017), cOnstant national prices
                = real GDP (or real GNP) in macroeconomics texts
CU              Private Consumption valued at actual, cUrrent national prices
                = nominal consumption expenditures in macroeconomics texts
CO              Private Consumption valued at cOnstant, base year national prices
                = real consumption expenditures in macroeconomics texts
GU              Government expenditures valued at actual, cUrrent national prices
                = nominal Government expenditures (or government consumption)
GO              Government expenditures valued at base year, cOnstant national prices
                = real Government expenditures (or real government consumption)
IU              Investments (Gross Capital Formation) valued at actual, cUrrent national prices
                = nominal investment expenditures
IO              Investments (Gross Capital Formation) valued at base year, cOnstant national prices
                = real investment expenditures
XU              eXports of goods and services valued at actual, cUrrent national prices
                = nominal exports in macroeconomics texts
XO              eXports of goods and services valued at cOnstant, base year national prices
                = real exports in macroeconomics texts (also “exports volume”)
MU              iMports of goods and services valued at actual, cUrrent national prices
                = nominal imports in macroeconomics texts
MO              iMports of goods and services valued at cOnstant, base year national prices
                = real imports in macroeconomics texts (also “imports volume”)

Table 2: Additional candidate variables.

Variable Name   Description

POP             Population size (total number of inhabitants, in millions)
KO              Physical capital stock valued at base year, cOnstant prices
                (constructed by accumulating past investments after depreciation)
EMPL            Number of persons engaged/EMPLoyed (in millions)
HCAP            Human CAPital index, based on years of schooling and returns to education
EXR             Exchange Rate (national currency/USD, market or estimated)

Tutorial 1: Exploring R and Reviewing Regression Analysis
In this tutorial, you will learn to work with R, you will inspect your data through plots and you will review
the basics of regression analysis.

1. Getting started in R.
(a) Read the section “Getting Started in R” at the start of this document to get you started with
this tutorial.
(b) Start by creating a directory on your computer, for instance, “tutorialsEBC2090”. This directory
should contain all files we use in this tutorial assignment.
(c) Go to Canvas and download the data file of your country. This is an .RData file. For instance,
the data file for the Netherlands is “NLD_data.RData”. Save the RData file of your country in
the directory on your computer that you have just created (i.e. tutorialsEBC2090 in my case).
(d) Open RStudio and open a new R script. Give it an appropriate name, for instance, “EBC2090-
tutorial1” and save the file. To do this, click (in the menu bar at the top of RStudio) on “File” and
then “Save as”. Enter EBC2090-tutorial1 (or another file name) under “File name” and navigate
to the directory of your choosing (tutorialsEBC2090 in my case) to save the file there. You will
see that the file will be saved as an .R file.
(e) It is good practice to start your R script by clearing your environment in R. This can be done by
typing the following line into your R script
rm(list=ls())
and then pressing “Run” to execute it. To get more information on this function, remember that
you can execute the code ?rm and consult the corresponding documentation in the help-window.
(f) Next, you need to tell R the location of your working directory. This is the directory “tutori-
alsEBC2090” where we will save all the files we use in this tutorial. You can do this by clicking on
“Session”, “Set Working Directory”, and finally “Choose Directory”. Now scroll to the location of
your directory tutorialsEBC2090 and click on Open. You will see that this executes the command
in the form of
setwd("C:/..../tutorialsEBC2090")
in the R console to set the working directory in R. Note that instead of the “...” in the command
above, you will see the specific path to your directory, which depends on its location on your
laptop and is hence different for everyone. Copy this command from the console into your R script
(on a new line) so that you can execute it again later!
2. Importing the data.
(a) We are now ready to import our data into R. To load your data set into R, you can type the
command
load("NLD_data.RData")
into your R script and execute it. Naturally, if your country is not the Netherlands, you need to
write the appropriate name of your RData file here but also in the remainder of the exercises! You
should notice that in the environment window (top right panel of RStudio), the object “NLD_data”
is now listed. If so, then you have successfully imported your data into R!
(b) Let us now inspect all variables that are included in your data file. To do this, type the command
View(NLD_data)
into your R script and execute it. This opens up a new window (new header will appear next
to your R script) with a spreadsheet type of view on your data. Scroll through your data set to
inspect it.
(c) If you want to know the names given to your variables in your data set, you can use
names(NLD_data)
By giving R the command
attach(NLD_data)
you can now address your variables with the names that were given to them!

3. Time series plots. Visually inspect your data. That is the way to get to know your data and to trace
data errors. We start by making time series (line) plots.

(a) R (like any other software package) does not assume your variable is a time series by default;
instead, it treats it as an ordinary numerical variable. We thus need to explicitly declare
that the variable YU is a time series. In R, create a new time series object, “YU_ts”, by using
the function ts:
YU_ts <- ts(NLD_data$YU, start = 1950, frequency = 1)
where we tell R that the data set starts in year 1950 (start = 1950; ! check this, as this may be
different for your country!) and the data are annual (frequency = 1).
(b) Now make a time series plot by using the command
ts.plot(YU_ts)
Discuss the properties of the time series. Note that the plot function has many additional arguments,
for instance to change the axis labels or to make the line thicker; you can explore these on
your own via the documentation provided in R. A small example is sketched below.
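For instance (the argument values here are merely illustrative; see ?ts.plot and ?par for the full
list of options):

ts.plot(YU_ts, xlab = "Year", ylab = "Nominal GDP", lwd = 2, col = "blue")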
(c) Optional Tip: It is convenient to save your plots as separate files, such that you can show them in
class or later include them in your paper. To save a figure in, for instance, .pdf format, you first
tell R to open a pdf file, where you give the file a name (time-plot-YU), and also set the width
and the height of the file. On the next line you then write the command for the figure you want
to plot. Then R will fill in the .pdf file with the figure (possibly even several figures on subsequent
pages!). You need to finally tell R (on the third line below) that it should close the .pdf file, as
you do not want to further add content to it:
pdf(file = "time-plot-YU.pdf", width = 6, height = 6)
ts.plot(YU_ts)
dev.off()
If you successfully created and closed the pdf file, it should appear in your working directory and
you can open the pdf file to inspect your plot! Final note: if you want to overwrite the content of
your pdf-file (for instance, re-run the code after noticing a mistake), make sure that the pdf file
is closed on your laptop, otherwise R cannot (over)write it!
(d) Now let us plot two time series on the same graph: namely YU and YO. You should start by
declaring the latter also as a time series (see instructions above!) and give it the name “YO_ts”.
To plot several time series on the same plot, you may use:
ts.plot(YU_ts, YO_ts, ylab = "YU versus YO", col = c("blue", "black"))
where we now specified what R should use as label on the vertical axis (via ylab), and where
we indicate that the first series should be visualized in blue, the second in black; you can choose
different colors! To add a legend to your graph, you can execute the following command after
your plot command:
legend("topleft", legend = c("YU", "YO"), col = c("blue", "black"), lty = c(1,1))
where you first indicate the position of the legend, the argument legend specifies the text to be
displayed, followed by the colors of the lines and the line type; lty = c(1,1) indicates that both
lines (hence you use a vector) correspond to line type 1, namely a solid line. Note that you can
also add the lty argument in the ts.plot function; try what happens if you use lty = c(2,2)...
Discuss the figure. Where do the series cross, and why? Are the series trending over time or do
they fluctuate around a constant mean?
(e) Now repeat the same exercise, this time plotting YU, YO, IU, IO all on one graph! Discuss the
figure as you did above.

4. Scatter plots. Now inspect scatter plots, relating one variable to another. Make a scatter plot of IO
against YO, with the first variable on the vertical axis and the second on the horizontal axis, via the command
plot(x = YO, y = IO)

(a) Describe what a dot in this scatter plot represents.

(b) Observe whether or not a relationship seems to emerge, and whether or not it could be approxi-
mated by a linear function.

5. Simple Regressions. Run a simple regression of investment (in constant prices) on output (in constant
prices) and a constant term:
IOt = β0 + β1 YOt + ut. (1)

In R, use the function lm to estimate a linear regression model. You can name the object however you
want; here we use the following name:
fit_IO_on_YO <- lm(IO~YO)
Note that an intercept is included by default. Then ask for a summary of your fitted linear regression
model:
summary(fit_IO_on_YO)

(a) What are the values of the estimates β̂0 and β̂1 ?
(b) Interpret the coefficient β̂1 .
(c) Is the variable YOt significant at the 5% significance level? Answer this question in three different
ways: based on the (i) t-statistic, (ii) p-value, (iii) 95% confidence interval around β1. Note that
the former two are displayed in the output, but you need to compute the 95% confidence interval
manually! (A sketch is given below.)
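As initial help, a minimal sketch of the manual computation (assuming your fitted object is named
fit_IO_on_YO as above; qt returns quantiles of the t-distribution):

b1 <- coef(summary(fit_IO_on_YO))["YO", "Estimate"]      # estimated slope
se1 <- coef(summary(fit_IO_on_YO))["YO", "Std. Error"]   # its standard error
tcrit <- qt(0.975, df = df.residual(fit_IO_on_YO))       # t critical value
b1 + c(-1, 1) * tcrit * se1                              # 95% confidence interval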
(d) You may also ask R to compute the 95% confidence interval:
confint(fit_IO_on_YO, parm = "YO", level = 0.95)
Verify your manual computation against the R output!
(e) Inspect the overall goodness-of-fit in terms of the R2 . Give its value and interpret it. Do you
notice something unusual?

6. Residual Inspection. Closely inspect the residuals of your estimated regression of IOt on YOt. In R,
the residuals of the regression are saved in your fitted regression object fit_IO_on_YO (together with a
lot of other useful information). To see what information is saved in the fitted object, ask for:
names(fit_IO_on_YO)
you will notice that there is a slot “residuals” which indeed contains the residuals of the estimated
regression. You can access any output slot in the fitted regression object through the dollar symbol,
hence to plot the residuals, use
plot(fit_IO_on_YO$residuals, type = "l")
where the type argument indicates that you want to make a line graph.

(a) Examine the time pattern of the residuals. Does it seem that the residual series is distributed in
accordance with the assumption of random sampling?
(b) Plot the residual series against YOt in a scatter plot (the former on the vertical axis, the latter
on the horizontal axis). Does a visual inspection of the residual series suggest that they satisfy
the assumption of constant variance (homoskedasticity)?
(c) Plot a histogram summarising the frequency distribution of the residuals using the command
hist(fit_IO_on_YO$residuals)
Does the assumption of normality seem plausible?
(d) Formally test whether the residuals are normally distributed using the Jarque-Bera test, which
tests the null of normality. It measures how much the skewness (asymmetry) and the kurtosis
(curvature and tail thickness) of the residual series differ from those of a Normal distribution.
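For reference: writing S for the sample skewness and K for the sample kurtosis of the residuals, the
test statistic is JB = (n/6) [S² + (K − 3)²/4], which under the null of normality approximately
follows a χ² distribution with 2 degrees of freedom.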
This test is contained in a specific R library, namely the library tseries. So first we need to
install this library. You can do so by going to the bottom right panel in RStudio, clicking in the
menu bar on “Packages”, then “Install”. Type the name of the package, namely
tseries, and click on “Install”. R will install the package for you. Once this is done, you

need to load the library into R, such that you can access the functions in this library. To load the
library in R, use the command
library(tseries)
You can now perform the Jarque-Bera test:
jarque.bera.test(fit_IO_on_YO$residuals)
What is your conclusion?

Important note on R libraries: Installing a library only needs to be done once, but in case you
would like to make use of functions in the library, you need to load the library in every R session!
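Alternatively, instead of using the menus, you can install a package with a single command in the console:

install.packages("tseries")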

7. Log-Log Specification. Consider the simple log-log regression model:

ln IOt = β0 + β1 ln YOt + ut. (2)

(a) Generate the variable ln IOt, the natural logarithm of the variable IOt (first declare IO_ts as a
time series object, as you did for YU_ts):
lnIO_ts <- log(IO_ts)
Do the same for ln YOt.
(b) Make a time series plot of ln IOt and ln YOt. How does this plot compare to the one you made
of IOt and YOt? Can you think of reasons to apply the log-transformation?
(c) Make a scatter plot of ln IOt against ln YOt. Do you see a relationship emerging, and could it be
approximated by a linear function?
(d) Estimate the simple regression model in equation (2).
(e) What are the values of the estimates β̂0 and β̂1 ?
(f) Interpret the coefficient β̂1. Be careful: it has a different interpretation than in regression (1)!
(g) Is the variable ln YOt significant at the 5% significance level? Answer this question in three different
ways: based on the (i) t-statistic, (ii) p-value, (iii) 95% confidence interval around β1.
(h) Give the value of the R2 and interpret it.
(i) Inspect the residuals of the log-log model as you did in part 6. What are your conclusions?

8. Specification in Log-Differences. Consider the regression model in log-differences:

∆ ln IOt = β0 + β1 ∆ ln YOt + ut. (3)

where ∆ ln IOt = ln IOt − ln IOt−1. The term “log-difference” is short for (first) logarithmic difference.
A log-difference should be interpreted as a rate of change (or growth rate). Log-differences are often
preferable to ordinary percentage changes because unlike percentage changes they are additive and
symmetric. It is important to understand the properties of logarithms! (See e.g. Appendix A.4 of
Wooldridge.)
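To see the additivity and symmetry at work, consider a toy series that rises from 100 to 125 and then
falls back (you can paste this directly into the console):

p <- c(100, 125, 100)
diff(p) / p[-length(p)]   # ordinary growth rates: +0.25, then -0.20
diff(log(p))              # log-differences: +0.2231, then -0.2231 (they sum to zero)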

(a) Generate the variable dlnIO which is the first difference of lnIO using the command:
dlnIO_ts <- diff(log(IO_ts))
Do the same for ∆ ln YOt.
(b) How many observations are available for the variable IOt and how many are available for ∆ ln IOt ?
Explain the difference!
(c) Make a time series plot of ∆ ln IOt and ∆ ln YOt. How does this plot compare to the one you
made of ln IOt and ln YOt?
(d) Make a scatter plot of ∆ ln IOt against ∆ ln YOt. Do you see a relationship emerging, and could it be
approximated by a linear function?
(e) Estimate the simple regression model in equation (3).
(f) What are the values of the estimates β̂0 and β̂1 ?

(g) Interpret the coefficient β̂1 .
(h) Is the variable ∆ ln YOt significant at the 5% significance level? Answer this question in three different
ways: based on the (i) t-statistic, (ii) p-value, (iii) 95% confidence interval around β1.
(i) Give the value of the R2 and interpret it. Do you observe a difference compared to the earlier
regressions you ran?
(j) Inspect the residuals of the model in log-differences as you did in part 6. What are your
conclusions?

Tutorial 2: Basic Time Series Regressions
In this tutorial, you will discuss basic time series concepts and time series regressions while revising how to
perform a joint hypothesis test. By default, we work with a significance level of 5% in this tutorial and in
the next ones!
!! Important Reminder: We provide initial help for a specific function the first time that you need to
make use of it. Hence, functions introduced in the previous tutorial assignment(s) will not be repeated
here. You can always look back at the previous tutorial assignments if you no longer remember the
appropriate functions to use !!

1. Setting up R. Set up your R script as you did for the first tutorial, so: set your working directory and
import your data into R.
2. Visual Inspection of Stationarity.
(a) Make a time series plot of ln IOt and ln YOt. Are these time series stationary? Discuss.
(b) Make a correlogram of ln IOt and ln YOt. Discuss the values of the autocorrelations at the first
couple of lags.
In R, use the command
acf(lnIO_ts)
to display the correlogram of a particular time series (assuming you created a time series object
for the log-transformed IO variable).
(c) Do the same for ∆ ln IOt and ∆ ln YOt: discuss stationarity based on the time series plot and
discuss the values of the autocorrelations.

3. Autoregressive Model for ln IOt . Consider the AutoRegression of order 1, denoted AR(1), for ln IOt :

ln IOt = β0 + β1 ln IOt−1 + ut . (4)

(a) To generate the response and predictor variable in regression model (4), you can make use of the
function embed:
lags_lnIO <- embed(lnIO, dimension = 2)
which generates a new matrix where the response ln IOt is contained in the first column and the
predictor ln IOt−1 is contained in the second column. More lags can be obtained by adjusting the
argument dimension.
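For instance, to obtain the variable together with its first two lags:

lags3_lnIO <- embed(lnIO, dimension = 3)   # columns: ln IOt, ln IOt-1, ln IOt-2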
(b) Inspect the newly created matrix lags_lnIO via the function View. How many observations (rows)
does the matrix lags_lnIO have?
(c) You can then access the first column in a matrix via [, 1], and the second column via [, 2] to
generate the response and predictor needed for estimating model (4):
lnIO_0 <- lags_lnIO[, 1]
lnIO_1 <- lags_lnIO[, 2]
where we use the suffix _x to denote the xth lag of a certain variable (so _0 is the variable itself and _1 its first lag).
(d) Estimate the AR(1) model in equation (4) using the function lm.
(e) Interpret the value of the estimate β̂1 .
(f) What does the value of the estimate β̂1 tell you about the stationarity of the series ln IOt ?

4. Autoregressive Model for ∆ ln IOt . Consider the AR(1) model for ∆ ln IOt :

∆ ln IOt = β0 + β1 ∆ ln IOt−1 + ut . (5)

(a) Generate the variable dlnIO (the series ln IO in first differences). Then generate the variables dlnIO_0
and dlnIO_1 (respectively the response and predictor in equation (5)) via the function embed.

(b) Estimate the AR(1) model in equation (5). How many observations are used to estimate this
model?
(c) Interpret the value of the estimate β̂1 .
(d) What does the value of the estimate β̂1 tell you about the stationarity of the series ∆ ln IOt ?
We will return to the topic of stationarity, unit roots and unit root tests in Tutorial 3! In this tutorial,
let us consider static time series regression models, finite distributed lag models and autoregressive
distributed lag models for ln IOt .

5. Static Model. Return to the log-log regression model for ln IOt :

ln IOt = β0 + β1 ln YOt + ut. (6)

(a) Explain what a static time series regression means and why the regression in equation (6) is one.
(b) Estimate model (6). Make a (i) line plot of the residuals, as well as a (ii) correlogram of the
residuals. Are the residuals autocorrelated?
(c) In case the residuals are autocorrelated: does this cause the OLS estimator to be biased?
(d) In case the residuals are autocorrelated: does this cause problems for inference (t-statistics,
p-values, ...)? If so, can you think of solutions to circumvent this problem?

6. Finite Distributed Lag Model. Estimate the Finite Distributed Lag model of order one, denoted as
FDL(1):
ln IOt = β0 + β1 ln YOt + β5 ln YOt−1 + ut. (7)

Note: It will become clear later in the assignment why we use β5 and not β2 in front of ln YOt−1.
(a) Explain what a dynamic time series regression means and why the regression in equation (7) is
one.
(b) Generate the variables lnYO_0 and lnYO_1 by using the function embed.
(c) Estimate the FDL(1) model in equation (7). What would happen if you execute the following
code in R:
fit_FDL <- lm(lnIO ~ lnYO_0 + lnYO_1)
(d) To remove the first observation from a vector, you may use the notation [-1]. Generate the new
variable:
lnIO_0 <- lnIO[-1]
Discuss why the following regression will give you the desired outcome:
fit_FDL <- lm(lnIO_0 ~ lnYO_0 + lnYO_1)
Note: we have now overwritten the variable lnIO_0, since we also generated this variable in
Assignment 3(c) above. In fact, the definition of lnIO_0 here and the one given in Assignment
3(c) give you exactly the same result. Discuss! (You can verify this with the check below.)
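As a quick check of this equivalence (assuming lnIO is your log-transformed series):

all(lnIO[-1] == embed(lnIO, dimension = 2)[, 1])   # should return TRUE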
(e) Based on the regression output, (manually) draw a picture of the lag distribution: summarizing
the effect of ln Y O on ln IO at lag zero, one and two.
(f) What is the value of the estimated impact multiplier?
(g) What is the value of the estimated long-run multiplier?
(h) Test the joint null hypothesis H0 : β1 = β5 = 0 versus the alternative that at least one of the two
betas is different from zero.
In R, this joint hypothesis test, where we test the joint nullity of all regression parameters (apart
from the intercept), is by default reported in the summary output of your lm object, namely on
the last line. What is the value of the F-statistic? What is the corresponding p-value? What do
you conclude?

7. Recap Multiple Hypothesis Testing. Consider the choice between current and constant prices. Extend-
ing the investment function with price indexes allows a formal comparison between nominal and real
specifications, by means of statistical hypothesis tests. We will work with the implicit price deflators
PYt = YUt / YOt    and    PIt = IUt / IOt.
(a) Run the extended regression of real investment on a constant, current and lagged real output,
both price indexes, and one lagged price index:

ln IOt = β0 + β1 ln YOt + β2 ln PIt + β3 ln PYt + β4 ln PIt−1 + β5 ln YOt−1 + ut. (8)

You need to generate all your variables first (a sketch follows below). Assume that you name the
fitted regression model in (8) fit_lnIO_ur. Then present your regression output and test the separate
hypotheses that each price coefficient (β2, β3, β4) separately is in fact zero.
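As initial help, a minimal sketch of the variable construction (assuming your data are attached as in
Tutorial 1 and using the _0/_1 suffix convention from above; skip the lines for variables you have
already created):

PY <- YU / YO                        # implicit GDP deflator
PI <- IU / IO                        # implicit investment deflator
lnIO_0 <- log(IO)[-1]                # ln IOt (first observation dropped)
lnYO_0 <- embed(log(YO), 2)[, 1]     # ln YOt
lnYO_1 <- embed(log(YO), 2)[, 2]     # ln YOt-1
lnPI_0 <- embed(log(PI), 2)[, 1]     # ln PIt
lnPI_1 <- embed(log(PI), 2)[, 2]     # ln PIt-1
lnPY_0 <- log(PY)[-1]                # ln PYt
fit_lnIO_ur <- lm(lnIO_0 ~ lnYO_0 + lnPI_0 + lnPY_0 + lnPI_1 + lnYO_1)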
(b) Now consider the hypothesis that the price coefficients in equation (8) are all three zero:

H0 : β2 = β3 = β4 = 0

(versus the alternative that at least one is different from zero). Note that if the hypothesis is true,
then regression (8) reduces to the regression (7).
Give the formula in Wooldridge to test this joint hypothesis.
(c) We will start by computing the F-statistic manually in R. You will have to compute the sum of
squared residuals (SSR) of two regression models. Which regression models? You can use the
following code to obtain the SSR of, for instance, regression model (8):
SSR_ur <- sum(fit_lnIO_ur$residuals^2)
What is the value of the F-statistic? What are the degrees of freedom? Do you reject the null
hypothesis H0 : β2 = β3 = β4 = 0 or not?
Note that you can compute the critical values of the F-distribution in R via the function qf. Use
the documentation in R to appropriately fill in the arguments in the function to compute the
critical value for your country! A sketch of the full manual computation follows below.
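As initial help, a minimal sketch of the manual computation (assuming fit_lnIO_ur is the fitted
unrestricted model (8) and fit_FDL the fitted restricted model (7), both estimated on the same sample):

SSR_ur <- sum(fit_lnIO_ur$residuals^2)   # unrestricted SSR
SSR_r <- sum(fit_FDL$residuals^2)        # restricted SSR
n <- length(fit_lnIO_ur$residuals)       # number of observations used
q <- 3                                   # number of restrictions under H0
k <- 5                                   # regressors in the unrestricted model
F_stat <- ((SSR_r - SSR_ur) / q) / (SSR_ur / (n - k - 1))
F_stat
qf(0.95, df1 = q, df2 = n - k - 1)       # 5% critical value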
(d) We can also opt to directly perform the F -test in R. To this end, we can use the linearHypothesis
function in the R package car. Start by installing the package car. You can then use the following
command to directly obtain the F -test:
linearHypothesis(fit_lnIO_ur, c("lnPI_0=0", "lnPY_0=0", "lnPI_1=0"), test="F")
where you simply write out the restrictions under the null hypothesis by referring to the variable
names of the corresponding parameters. The argument test="F" ensures that you compute the
F-test. Does the output provided by R match your manual computation? Interpret the
output.

(e) Next, an important hypothesis is that of price homogeneity, i.e., the theory that absolute price
levels are unimportant and instead only relative prices matter. Price homogeneity is a theoretical
property considered as desirable in economic models, at least in the long run. It implies the
absence of money illusion. Define the relative price of investment goods and services, PIRt:

PIRt ≡ PIt / PYt.
Under strict price homogeneity, the three price indexes in regression (8) may be replaced by
the single relative price, PIRt:

ln IOt = γ0 + γ1 ln YOt + γ2 ln PIRt + γ5 ln YOt−1 + ut. (9)

Write down the null hypothesis of strict price homogeneity in terms of the regression coefficients
of (8) (so in terms of the βs!). To this end, start from equation (9) and plug in the definition of PIRt.
Which restrictions on the regression coefficients of (8) arise then?

(f) Test the null hypothesis of strict price homogeneity using the linearHypothesis function in R.
What do you conclude?

(g) A weaker version of price homogeneity would allow for short-run deviations from strict homogeneity.
One way to do this is to introduce an effect of investment price inflation, dlnPIt = ∆ ln PIt, next
to relative prices:

ln IOt = γ0 + γ1 ln YOt + γ2 ln PIRt + γ3 ∆ ln PIt + γ5 ln YOt−1 + ut. (10)

Here absolute price levels still play no role, but the rhythm of price changes does; price homo-
geneity holds only in the longer run.
Write down the hypothesis of weak price homogeneity in terms of the regression coefficients of (8)
(so in terms of the βs!). To this end, start from equation (10) and plug in the definitions of PIRt
and ∆ ln PIt. Which restrictions on the regression coefficients of (8) arise then?
(h) Test the null hypothesis of weak price homogeneity. What do you conclude?

(i) Finally, test the hypothesis that (8) simplifies to a simple relation between nominal investment
and nominal output:
ln IUt = γ0 + γ1 ln YUt + ut. (11)

This hypothesis too implies joint coefficient restrictions, and you need to find out precisely what
these restrictions are. Start from equation (11) and plug in the definitions of IUt and YUt. Which
restrictions on the regression coefficients of (8) arise then (so in terms of the βs!)?
(j) Test the null hypothesis you derived in part (i). What do you conclude?

8. AutoRegressive Distributed Lag Model. Consider the ARDL(1,1) model

ln IOt = β0 + β1 ln YOt + β2 ln YOt−1 + β3 ln IOt−1 + ut. (12)

(a) Explain why the regression in equation (12) is a dynamic time series regression.
(b) Explain in words (so intuitively) the difference between the FDL(1) in equation (7) and the
ARDL(1,1) in equation (12).
(c) Estimate the ARDL(1,1) model in R using the lm function.
(d) What is the value of the impact multiplier?
(e) What is the value of the long-run multiplier? To this end, start from the equilibrium model (see
the model with star (∗) notation on the lecture slides) and solve for the coefficient in front of ln YO.

Tutorial 3: Unit Roots, Trends, Unit Root Tests and Spurious
Regressions
In this tutorial, you will learn how to perform unit root tests, how to determine the order of integration of
a series, how to recognize spurious regressions and you will dive into one of its solutions: ARDL models.
1. Setting up R. Set up your R script as you did for the previous tutorials, so: set your working directory
and import your data into R.
2. Visual Inspection of Stationarity (Recap)
(a) Make a time series plot of ln IOt and ln YOt. Are these time series likely to be stationary?
Discuss.
(b) Make a time series plot of ∆ ln IOt and ∆ ln YOt. Are these time series likely to be stationary?
Discuss.

3. Visual Inspection of Trends. Consider the regression model for ln IOt with a trend:

ln IOt = β0 + β1 t + ut . (13)

For the trend t you can generate a variable trend in R via the commands:
n <- length(lnIO)
trend <- 1:n
where the function length returns the length of the variable (hence it gives you the sample size n),
and the command 1:n simply returns you a sequence of numbers from 1 to n in steps of one.
(a) Estimate regression model (13) using the lm function and carefully interpret the estimated coef-
ficient β̂1 .
(b) Save the residual of model (13). What do the residuals intuitively represent?
(c) Make a time series plot of the residual series. Do you think it is likely that ln IOt has a deter-
ministic or a stochastic trend? Explain the difference between both in your answer!
(d) If a series is trend stationary, what does this mean? Does it then have a deterministic or a
stochastic trend?

(e) Repeat the same exercise for ln Y Ot . What is your conclusion: is ln Y Ot likely to have a deter-
ministic or a stochastic trend?

4. Dickey-Fuller Unit Root Test (with constant and trend). To formally test whether a series has a
stochastic or a deterministic trend, we need to perform a unit root test with constant and trend.

(a) What is the null hypothesis of this unit root test? What is the alternative hypothesis?
(b) We start by running the Dickey-Fuller (DF) test (with constant and trend) for ln IOt .
In R, start by installing the package bootUR, which offers a wide range of unit root tests. After
loading the library, you can then use the commands
df_lnIO <- adf(lnIO, deterministics = "trend", max_lag = 0)
df_lnIO
to perform a Dickey-Fuller unit root test with a constant and trend term included
(deterministics = "trend") and zero lagged first-difference terms (max_lag = 0). Present
the output of the unit root test for ln IOt. How should you interpret it?
(c) What is your conclusion for ln IOt : does it have a stochastic or a deterministic trend?
(d) How to proceed in case of a stochastic trend? How to proceed in case of a deterministic trend?

(e) Repeat the same exercise for ln Y Ot : does it have a stochastic or a deterministic trend?

5. Augmented Dickey-Fuller Unit Root Test (with constant and trend). Now consider the “Augmented”
Dickey-Fuller (ADF) unit root test.
(a) How does the ADF unit root test differ from the DF test? Why is the augmentation needed?
(b) Run the ADF test for ln IOt .
In R, use the commands
adf_lnIO <- adf(lnIO, deterministics = "trend")
adf_lnIO
This function automatically includes lagged difference terms in the test equation, using the
Akaike Information Criterion to determine how many of these terms should be added.
Present the output of the unit root test for ln IOt . What is your conclusion for ln IOt : does it
have a stochastic or a deterministic trend?

(c) Repeat the same exercise for ln Y Ot . What do you conclude?

6. Bootstrap union of rejection test. In the previous exercise, we used the ADF test as a unit root test,
which is by far the most popular unit root test. Still, the ADF test requires us to specify which
deterministic components to include in the test equation (a constant and a trend in case the series
displays a trend; a constant only when the series displays no trend). To relieve the user of making
this choice (in case it is not so clear cut), you may use the union of rejections test instead. The null
hypothesis and alternative hypothesis stay the same as before.
(a) Run the test via the command:
union_lnIO = boot_union(lnIO)
Present the output of the test for ln IOt . What is your conclusion for ln IOt : does it have a
stochastic or a deterministic trend?
(b) Repeat the same exercise for ln Y Ot . What do you conclude?

7. Unit Root Test on the series in log-differences. The series ln IOt or ln YOt will never be stationary (at
most trend-stationary). (Remind yourself why this is the case!) We now test whether the series in
log-differences are stationary.

(a) Perform the union of rejections test on ∆ ln IOt . What is the null hypothesis? What is the
alternative hypothesis? Present your output of the test. How should you interpret it?
(b) After having run the unit root tests on ln IOt and ∆ ln IOt, what do you conclude about the order
of integration of ln IOt?
Explain the difference between a series that is I(1) (“integrated of order one”) and one that is
I(0) (“integrated of order zero”) in your answer!

(c) Repeat the same exercise for ∆ ln YOt.

8. Static Regression for the series in log-levels (revisited) and Spurious Regressions. Re-consider the static
regression model for the series in log-levels:

ln IOt = β0 + β1 ln YOt + ut. (14)

(a) Given the outcome of your unit root tests, is the static regression model (14) possibly a spurious
regression?
Explain what a spurious regression means and what drives this!
(b) Is it “safe” to interpret the regression output of model (14)?
(c) Re-inspect the value of the R2 . Is it spurious? Should we interpret it?
(d) What are solutions to the spurious regression problem?

(e) Which solutions have we considered already in earlier tutorials, which haven’t we considered yet?

9. Static Regression for the series in first differences and Spurious Regressions. Re-consider the static
regression model for the series in first differences:

∆ ln IOt = β0 + β1 ∆ ln YOt + ut. (15)

(a) Given the outcome of your unit root tests, is the static regression model (15) possibly a spurious
regression?
(b) Is it “safe” to interpret the regression output of model (15)?
(c) Re-inspect the value of the R2 . Is it spurious? Should we interpret it?

10. ARDL models: short-run and long-run effects. In this tutorial, we will zoom into one of the solutions
for spurious regression problems, namely ARDL models. Consider the ARDL model

yt = β0 + β1 xt + β2 xt−1 + β3 yt−1 + ut (16)

where you may take yt ≡ ln IOt and xt ≡ ln YOt.


(a) Revisit the assumptions needed for OLS to be unbiased or consistent. Can we still rely on strict
exogeneity of the regressors in ARDL models? Why (not)?
(b) Estimate the ARDL(1,1) model. Note that you have estimated this model already in Tutorial 2!
Below, we further investigate the estimation output.

We now examine how a permanent rise in xt (a “permanent shock”) affects the conditional mean
of yt in the following years. Define three time horizons: the same year as the shock (short-run),
one year later (medium-run), and many years later (long-run), with corresponding effects analyzed
below.
(c) Short-run. Define the same-year effect, known as the “impact multiplier”, as

θ1 ≡ (∂/∂xt) E(yt | xt, yt−1, xt−1, . . .). (17)

Finding the impact multiplier for the ARDL(1,1) model should be easy (you did this already in
Tutorial 2)! It is simply the instantaneous partial derivative:

θ1 ≡ (∂/∂xt) E(yt | xt, . . .) = β1.

What is the value of the impact multiplier for the ARDL(1,1) model you estimated?
(d) Medium-run. Define the cumulative effect after two years, known as the “two-year (interim)
multiplier”, as

θ2 ≡ (∂/∂xt + ∂/∂xt−1) E(yt | xt, yt−1, xt−1, . . .)
   = θ1 + (∂/∂xt−1) E(yt | xt, yt−1, xt−1, . . .). (18)

This is the sum of the impact multiplier and the second-year partial effect of the shock.
To obtain the two-year (interim) multiplier for model (16), start by substituting away yt−1 as
follows:

yt = β0 + β1 xt + β2 xt−1 + β3 (β0 + β1 xt−1 + β2 xt−2 + β3 yt−2 + ut−1) + ut
   = β0 (1 + β3) + β1 xt + (β2 + β3 β1) xt−1 + β3 (β2 xt−2 + β3 yt−2 + ut−1) + ut.

From this expression, you can easily obtain the second-year partial effect as the partial derivative

(∂/∂xt−1) E(yt | xt, . . .) = β2 + β3 β1,

i.e., the coefficient in front of xt−1. The two-year multiplier is found as the sum of these two partials,

θ2 ≡ β1 + β2 + β3 β1.

What is the value of the two-year multiplier for the ARDL(1,1) model you estimated?
(e) Long-run. Define the cumulative long-run effect, known as the “total multiplier”, as

θ∞ ≡ Σ_{i=0}^{∞} (∂/∂xt−i) E(yt | xt, yt−1, xt−1, . . .). (4.6.iii)

This is the sum of all partial effects, at impact and in the entire sequel of years.
To determine long-run effects in a model, we establish whether the model admits a state where all
variables have converged to some static “equilibrium” level. See what happens when you drop all
time subscripts and replace them by stars (to indicate constant equilibrium values), then solve the
resulting relationship for the dependent variable. For instance, the ARDL(1,1) model becomes

y∗ = β0 + β1 x∗ + β2 x∗ + β3 y∗ + u∗ .

Setting u∗ = 0, we can solve for y∗:


y∗ = β0/(1 − β3) + ((β1 + β2)/(1 − β3)) x∗ = β0/(1 − β3) + θ∞ x∗. (4.6.v)

This is a stationary state (assuming β3 < 1) which can be viewed as a hypothetical long-run
equilibrium of the model. The coefficient of x∗, here denoted as θ∞ = (β1 + β2)/(1 − β3), is the
long-run effect on y∗ of shocks in the explanatory variable (cf. Wooldridge § 10.2, Problem 10.3).
What is the value of the long-run multiplier for the ARDL(1,1) model you estimated?
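As a sketch, all three multipliers can be computed directly from the estimated coefficients (assuming
your fitted ARDL(1,1) is stored as fit_ARDL with regressors named lnYO_0, lnYO_1 and lnIO_1;
adjust the names to your own script):

b <- coef(fit_ARDL)
theta_1 <- b["lnYO_0"]                                            # impact multiplier
theta_2 <- b["lnYO_0"] + b["lnYO_1"] + b["lnIO_1"] * b["lnYO_0"]  # two-year multiplier
theta_inf <- (b["lnYO_0"] + b["lnYO_1"]) / (1 - b["lnIO_1"])      # long-run multiplier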

(f) Now, let us investigate whether the impact, two-year and long-run multipliers are significantly
different from zero.
Obtain the standard error of the estimated impact multiplier; this should be easy. Is the impact
multiplier significantly different from zero?
(g) Obtaining the standard error for the two-year and long-run multipliers is more difficult. Let us
consider the two-year multiplier.
To obtain a standard error for the two-year multiplier estimate θ̂2, you need to apply the reshuffling
or “theta trick” (Wooldridge § 4.4). In this example, the reshuffling trick is to substitute out one
of the βs, say β1, from the estimating equation (16) in favor of θ2, using

β1 = (θ2 − β2)/(1 + β3),

and then to rearrange the terms in (16) so as to estimate θ2:

yt = β0 + ((θ2 − β2)/(1 + β3)) xt + β2 xt−1 + β3 yt−1 + ut.

The above equation is nonlinear in its coefficients. To implement the last nonlinear regression in
R, you need to use the function nls and enter it as an explicit algebraic equation in your software:
nls_theta2 <- nls(lnIO_0 ~ beta0 + ((theta2 - beta2)/(1 + beta3))*lnYO_0 + beta2*lnYO_1 +
    beta3*lnIO_1, start = list(beta0 = 1, theta2 = 1, beta2 = 1, beta3 = 1))
The estimated coefficient “theta2” is a direct estimate of the two-year multiplier θ2, and its standard
error is reported along with the estimate! Note that nonlinear estimation procedures are iterative,
so we must provide starting values (the values in the list start). In the code above, we initialize
all parameters at one, but you can use the actual values you computed before as starting values,
to ensure faster convergence of the nonlinear least squares estimation. As a double check: verify
that the estimated values of the betas and the thetas in the output of the nls estimation coincide
with the values you obtained above, as they should!
Implement this procedure to get the standard error for θ̂2 . Is the two-year multiplier significant?
(h) Implement a similar “theta trick” to get the standard error for θ̂∞. Start by re-expressing β1 in
terms of θ∞. Which expression do you get? Run the nonlinear regression to get the standard
error. Is the long-run multiplier significant?

Tutorial 4: Cointegration and More On ARDL Models
In this tutorial, you will learn to investigate whether two series are cointegrated and you will discuss more
on ARDL models.

1. Setting up R. Set up your R script as you did for the previous tutorials, so: set your working directory
and import your data into R.
2. Explaining the set-up. We will investigate the hypothesis that the pair (ln IOt , ln Y Ot ) is “cointe-
grated”. To this end, we will follow two different paths. First, we will assume we know the exact
value of the long-run parameter binding the two series. Second, we will recognize our ignorance, and
estimate the long-run parameter.
Before we start: Briefly explain in words what it means that two series are cointegrated!
3. Cointegration between investment and output with known long-run parameter.
(a) Define the following series:
LNAPIOt ≡ ln IOt − ln YOt.
This is the logarithm of the Average Propensity to Invest, APIOt ≡ IOt/YOt (in constant prices).
Generate this variable in R, for instance as sketched below.
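A one-line construction (assuming IO and YO are available after attaching your data, and adjusting
the start year to your country):

LNAPIO <- log(IO) - log(YO)   # equivalently: log(IO / YO)
LNAPIO_ts <- ts(LNAPIO, start = 1950, frequency = 1)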
(b) If LNAPIOt is I(0), while its components ln IOt and ln YOt are I(1), then ln IOt and ln YOt
are cointegrated. In this case, investment expenditures will, over time, adapt proportionally to
variations in output.
What is the known value of the long-run parameter binding the two series ln IO and ln YO in this
case?
(c) Make a line graph of the series LNAPIOt. Does it look stationary?
(d) Now apply the Augmented Dickey-Fuller (ADF) test to the LNAPIOt series using the adf
function in the bootUR package. Which deterministic terms do you include in your unit root test?
(e) What is your conclusion: is LNAPIOt stationary or not? What does this mean in terms of the
pair of series (ln IOt, ln YOt): are they cointegrated or not?

4. Cointegration between investment and output with estimated long-run parameter. Consider the regres-
sion model
ln IOt = β0 + β1 ln YOt + ut. (19)

(a) Where have you encountered this model before? What did we discuss back then?
(b) Estimate the regression model (19). What is the value of β̂1?
(c) Save the residuals of the estimated regression model (19) and make a time series plot of the
residuals. Do they look stationary?
(d) Now apply the Engle-Granger approach to test for cointegration. The first step is done already:
you estimated the static cointegrating regression (19). The second step is to test for a unit root in
the residuals ût of the cointegrating regression. For this, use an ADF test. Which deterministic
terms do you include in your unit root test?
(e) Obtain the output of the ADF test on the residuals. Do not inspect the output yet, but first
explain an important difference between the ADF test performed here on the residuals and any
ADF test we performed earlier, for instance the ADF test performed on the series ln IOt to
determine its order of integration.
(f) Note that the ADF critical values and p-values reported by R (or any other software package)
are in this case NOT appropriate for the cointegration test. This is because they ignore the fact
that the test is applied to a residual series ût rather than the “true errors” ut , which of course
are unobservable. The residual series is a stand-in for the “true errors” and is calculated so as to

minimise its sum of squares (the OLS principle). This tends to make it appear somewhat more
stationary than it actually is.

Critical values for Engle-Granger ADF cointegration test


Number of variables involved α = 0.01 α = 0.05 α = 0.10
2 −3.96 −3.41 −3.12
3 −4.36 −3.80 −3.52
4 −4.73 −4.16 −3.84
5 −5.07 −4.49 −4.20

So do not interpret the p-value of the ADF unit root test on the residuals that R provides,
but compare the value of the test statistic (from the output) to the correct critical value in the
provided table. What do you conclude: does the residual series have a unit root or not? What
does this mean in terms of the pair of series (ln IOt , ln Y Ot ): are they cointegrated or not?
(g) Maybe some of you rejected the null hypothesis of no cointegration in part 3, yet failed to do so
in part 4. Strangely, it looks as if the arbitrarily postulated elasticity in part 3 is closer than the
freely estimated one in part 4 to the true long-run effect. Could it be that the estimation step
in part 4 is causing a loss of power in the cointegration test? Try to explain the phenomenon.
(Hint: Which test is more powerful, if valid?)

Finally a note to end: Cointegration tests can be misleading. One pitfall is low power: the
probability of a Type II (“acceptance”) error can be high. Failing to reject the null hypothesis
should not automatically lead to accepting it. Another pitfall is the possibility that more than
two variables are involved in a cointegration relationship. For instance, the relationship between
investment and GDP may only emerge if we also control for the interest rate at which businesses
can get bank loans. The Engle-Granger testing technique extends to the case of more variables,
but there are two caveats. First, the critical values for the test need to be adjusted (see table).
Second, when there are three or more variables involved, there can potentially also be multiple
cointegration relationships, which complicates the theory a lot. A system approach tackling such
complications by assuming Normality and using Maximum Likelihood was developed by Johansen.
It is, however, beyond the scope of this course.

5. Back to the ARDL model. Re-consider once more the ARDL(1,1) model

yt = β0 + β1 xt + β2 xt−1 + β3 yt−1 + ut , (20)

where you can take yt ≡ ln IOt and xt ≡ ln YOt (as in Tutorial 3).


The ARDL(1,1) model in equation (20) is relatively general. We will now consider a couple of special
cases. We will investigate whether it is statistically acceptable to simplify the general model into
simpler ones, and we will determine the impact, two-year and long-run multipliers for each model
(similarly to Tutorial 3).
Start by obtaining the regression output of the ARDL(1,1) model (once more; no discussion needed).
We will not go through all specific models in the tutorials, but consider a selection of them!
6. Two-period Distributed Lag Model. Consider the Finite Distributed Lag Model of order one:

yt = β0 + β1 xt + β2 xt−1 + ut . (21)

This is a finite (two-period) distributed lag model, where the lagged dependent variable does not
appear and only one lag of the explanatory variable does (Wooldridge § 10.2). This can also be seen
as an ARDL(0,1) model. Remember that we have encountered this model before in the tutorials...

(a) Formulate the restriction under which the ARDL(1,1) simplifies into the ARDL(0,1) model as a
null hypothesis.

(b) Test whether the simplifying restriction is statistically acceptable. What is your conclusion?
(c) Estimate the ARDL(0,1) model (21). What is the value of the impact multiplier? Is it statistically
significant?
(d) Derive the expression of the value of the two-year multiplier (see definition Tutorial 3). What is
the value of the estimated two-year multiplier in your case? Is it statistically significant?
Reminder: You need to test a hypothesis involving a combination of parameters. If the combina-
tion is linear in the parameters, you may use the function linearHypothesis from the previous
tutorial. If the combination is non-linear in the parameters, you need to use the theta-trick and
the nls function from the previous tutorial. (A sketch of both mechanics follows this task.)
(e) Derive the expression of the value of the long-run multiplier (see definition Tutorial 3). What is
the value of the estimated long-run multiplier in your case? Is it statistically significant?
Final note: While you should derive the expressions for the impact, two-year and long-run mul-
tipliers from regression model (21) above, notice (as a double check) that you can also use the
expressions you derived for the general ARDL(1,1) during the last tutorial for all the special
cases, by imposing the proper coefficient restrictions (which in some cases is very easy)...
Illustrate this double check here!
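A hedged sketch of the testing mechanics referred to above. The object and variable names (fit_ardl11, lnIO_0, lnIO_1, lnYO_0, lnYO_1) follow the conventions of the earlier tutorials and are assumptions, as is the particular restriction shown:
library(car)
# a single linear (zero) restriction on one ARDL(1,1) coefficient:
linearHypothesis(fit_ardl11, "lnIO_1 = 0")
# a linear combination of coefficients, here in the ARDL(0,1):
fit_fdl <- lm(lnIO_0 ~ lnYO_0 + lnYO_1)
linearHypothesis(fit_fdl, "lnYO_0 + lnYO_1 = 0")
# For combinations that are non-linear in the parameters, reparametrize the
# model via the theta-trick so the multiplier of interest appears as a single
# parameter, and read its estimate and standard error from summary(nls(...)).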

7. Partial Adjustment Model. Consider the ARDL(1,0) model

yt = β0 + β1 xt + β3 yt−1 + ut . (22)

Here, the only lagged variable on the right-hand side is the lagged dependent variable. This specification
is called a partial adjustment model in the jargon of econometrics.
(a) The apparent simplicity of model (22) is misleading. Show that (22) actually implies an infinite
distributed lag. You can show this by recursive substitution (i.e. plug the model equation for
yt−1 into the right-hand side and continue from there; the first step of the recursion is sketched
after this task).
(b) Formulate the restriction under which the ARDL(1,1) simplifies into the ARDL(1,0) model as a
null hypothesis.
(c) Test whether the simplifying restriction is statistically acceptable. What is your conclusion?
(d) Estimate the ARDL(1,0) model (22). What is the value of the impact multiplier? Is it statistically
significant?
(e) Derive the expression of the value of the two-year multiplier (see definition Tutorial 3). What is
the value of the estimated two-year multiplier in your case? Is it statistically significant?
(f) Derive the expression of the value of the long-run multiplier (see definition Tutorial 3). What is
the value of the estimated long-run multiplier in your case? Is it statistically significant?
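As a nudge for item (a), here is only the first step of the recursive substitution, using equation (22) shifted one period back to substitute for yt−1 :

yt = β0 + β1 xt + β3 (β0 + β1 xt−1 + β3 yt−2 + ut−1 ) + ut
   = β0 (1 + β3 ) + β1 xt + β3 β1 xt−1 + β3² yt−2 + ut + β3 ut−1 .

Repeating the substitution for yt−2 , yt−3 , . . . reveals the pattern.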

8. Error Correcting Model. In an Error Correction Model (ECM), lags of both the dependent and the
explanatory variables appear:

∆yt = α0 + α1 ∆xt + δ (yt−1 − βxt−1 ) + ut . (23)

[Cf. Eq. (18.38) of Wooldridge.]

(a) Estimate this equation in two steps. First, regress yt on xt and obtain the residuals ût from that
regression. Then, regress ∆yt on ∆xt and ût−1 .
Note: Be careful, the series ∆yt , ∆xt and ût will have different lengths! Which observation of
which time series do you need to remove to correctly estimate the ECM model? (One way to
align the series is sketched after this task.)
(b) You should observe that there is a close connection between ECMs and co-integration: they
depend on one another! This is the famous “representation theorem” of Engle & Granger (1987).
Explain this connection!
Hint: What is the term in parentheses, namely yt−1 − βxt−1 ? What do we know about this term
if yt and xt are cointegrated?

(c) If there is cointegration, what sign do you expect for the coefficient δ of the error correction term?
If there is no cointegration relationship, what should the coefficient δ tend to in large samples?
(d) Lastly, why are you not asked here to test restrictions under which the ARDL(1,1) simplifies into
this ECM model?
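A minimal sketch of the two-step estimation in item (a), assuming the log series are stored as lnIO_0 and lnYO_0 with n observations (all names are assumptions following the earlier tutorials):
# Step 1: static (cointegrating) regression and its residuals
fit_static <- lm(lnIO_0 ~ lnYO_0)
u_hat <- resid(fit_static)              # length n
# Step 2: align the series; diff() drops the first observation (t = 2..n)
dy <- diff(lnIO_0)                      # length n - 1
dx <- diff(lnYO_0)                      # length n - 1
u_hat_lag <- u_hat[1:(n - 1)]           # u_hat_{t-1}, aligned with t = 2..n
fit_ecm <- lm(dy ~ dx + u_hat_lag)
summary(fit_ecm)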

9. Unit-elasticity Restricted ECM. An important special case of the ECM imposes the condition that the
long-run elasticity of the dependent variable with respect to the independent one equals 1. In equation
(23), this means that β = 1, and the model may therefore be written

∆yt = α0 + α1 ∆xt + δ (yt−1 − xt−1 ) + ut . (24)

(a) Where have you encountered the cointegration relation with known long-run elasticity before?
What did you conclude then?
(b) Formulate the restriction under which the ARDL(1,1) simplifies into model (24) as a null hypoth-
esis.
Hint: Demonstrate that here one linear constraint is imposed on the coefficients of the ARDL(1,1)
model to arrive at this restricted model.
(c) Test whether the simplifying restriction is statistically acceptable. What is your conclusion?
(d) Estimate the restricted unit-elasticity ECM. To this end, first define a new variable as the
difference between yt−1 and xt−1 , which you can then directly use as the (second) explanatory
variable in (24). (Note that you have defined the variable yt − xt already before. Which variable
was this? Now you need to obtain its lag....) Pay attention to the lengths of the variables
involved! (A sketch follows this task.)
What is the value of the impact multiplier? Is it statistically significant?
(e) Derive the expression of the value of the two-year multiplier (see definition Tutorial 3). What is
the value of the estimated two-year multiplier in your case? Is it statistically significant?
(f) Derive the expression of the value of the long-run multiplier (see definition Tutorial 3). What is
the value of the long-run multiplier in your case? Can you obtain a standard error here?
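A hedged sketch for item (d), again with assumed names from the earlier tutorials:
# lag of (y_{t-1} - x_{t-1}), aligned with the differenced series (t = 2..n)
ec_lag <- (lnIO_0 - lnYO_0)[1:(n - 1)]
fit_ecm1 <- lm(diff(lnIO_0) ~ diff(lnYO_0) + ec_lag)
summary(fit_ecm1)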

10. Model in Growth Rates. Consider a simple model in growth rates:

∆yt = α0 + α1 ∆xt + ut . (25)

[Cf. Eq. (18.36) of Wooldridge.] Lags of both xt and yt are now implicit in ∆xt and ∆yt .

(a) Formulate the restriction under which the ARDL(1,1) simplifies into model (25) as a null hypoth-
esis.
Hint: Demonstrate that here a set of two linear constraints is imposed on the coefficients of the
ARDL(1,1) model.
(b) Test whether the simplifying restriction is statistically acceptable. What is your conclusion?
(c) Estimate the model (25) in growth rates. What is the value of the impact multiplier? Is it
statistically significant?
(d) Derive the expression of the value of the two-year multiplier (see definition Tutorial 3). What is
the value of the estimated two-year multiplier in your case? Is it statistically significant?
(e) Can you derive the expression of the value of the long-run multiplier (see definition Tutorial 3)?
Why (not)? Explain this intuitively.

11. Static Model. Re-consider the static model

yt = β0 + β1 xt + ut . (26)

(a) Formulate the restriction under which the ARDL(1,1) simplifies into model (26) as a null hypoth-
esis.
(b) Test whether the simplifying restriction is statistically acceptable. What is your conclusion?
(c) Estimate the static model (26). What is the value of the impact multiplier? Is it statistically
significant?
(d) Derive the expression of the value of the two-year multiplier (see definition Tutorial 3). What is
the value of the estimated two-year multiplier in your case? Is it statistically significant?
(e) Derive the expression of the value of the long-run multiplier (see definition Tutorial 3). What is
the value of the estimated long-run multiplier in your case? Is it statistically significant?

12. Implied long-run equilibrium. For the models in equations (20) (ARDL(1,1)), (21) (ARDL(0,1)), (22)
(ARDL(1,0)), (24) (unit-elasticity ECM), and (26) (static model) of the previous tasks, write out
the long-run equilibrium (see the notation of Tutorial 3) in terms of the original variables. Which one
of these relationships implies that the “Average Propensity to Invest in real terms”, defined as
AP IOt ≡ IOt /Y Ot , has a constant long-run equilibrium value, independent of Y Ot ? What is the
estimate of that long-run equilibrium value, and how does it compare to the historical values of AP IOt
in your data set?

Important remark for your interim assignment, to be submitted by Friday November 22, 17:00: You are
expected to hand in a 6 × 4 table presenting estimates of the multipliers for the six different models we
discussed. The six rows of the table are for the six different models, all of which you already estimated. The
first column provides the null hypothesis of the restrictions you tested in the previous assignment together
with the corresponding p-value you obtained in R. In the last three columns, each cell should contain three
ingredients: (i) the formula for the elasticity, (ii) the estimated elasticity you obtained from R and (iii)
below it, in parentheses, its standard error (SE). The estimated elasticity is the expected percentage change
in investment resulting from a 1% permanent rise in output. The three columns correspond to the effect
(i) within the same year (the impact multiplier); (ii) within two years (the two-year multiplier); and (iii) in
the far future (the long-run or total multiplier). As a recap, the six models are:
(M1) the general dynamic ARDL(1,1), see equation (20)
(M2) the two-period distributed lag model, see equation (21)
(M3) the partial adjustment model, see equation (22)
(M4) the unit-elasticity restricted ECM, see equation (24)
(M5) the model in growth rates, see equation (25); and finally,
(M6) the static model, see equation (26).

For your interim assignment, hand in a .pdf file on Canvas containing


• Your name, student number and country choice.
• The table for your country, see Table 3 for an expected outline.
• After the table, include your derivations for the restrictions as well as for the multipliers. A scan of
hand-written notes is fine provided your solutions are readable!
• After your derivations, you can paste your R-code, organized per model, that you used to generate the
content of the table.

Table 3: Outline of Table to be handed in.


Model Restrictions Impact Multiplier θ1 Two-year Multiplier θ2 Long-run Multiplier θ∞
M1 None θ1 = ... θ2 = ... θ∞ = ...
θ̂1 = ... θ̂2 = ... θ̂∞ = ...
(SE = ...) (SE = ...) (SE = ...)
M2 H0 : ... θ1 = ... θ2 = ... θ∞ = ...
p-value=... θ̂1 = ... θ̂2 = ... θ̂∞ = ...
(SE = ...) (SE = ...) (SE = ...)
M3 H0 : ... θ1 = ... θ2 = ... θ∞ = ...
p-value=... θ̂1 = ... θ̂2 = ... θ̂∞ = ...
(SE = ...) (SE = ...) (SE = ...)
M4 H0 : ... θ1 = ... θ2 = ... θ∞ = ...
p-value=... θ̂1 = ... θ̂2 = ... θ̂∞ = ...
(SE = ...) (SE = ...) (SE = ...)
M5 H0 : ... θ1 = ... θ2 = ... θ∞ = ...
p-value=... θ̂1 = ... θ̂2 = ... θ̂∞ = ...
(SE = ...) (SE = ...) (SE = ...)
M6 H0 : ... θ1 = ... θ2 = ... θ∞ = ...
p-value=... θ̂1 = ... θ̂2 = ... θ̂∞ = ...
(SE = ...) (SE = ...) (SE = ...)

Tutorial 5: Specification Tests
In this tutorial, you will submit your investment-growth specification to several quality controls. You can
(and should) do this for all the regressions you run (in your paper). In this tutorial, we focus, as an example,
on the two-period distributed lag model

ln IOt = β0 + β1 ln Y Ot + β2 ln Y Ot−1 + ut . (27)

1. Setting up R. Set up your R script as you did for the previous tutorials: set your working directory
and import your data into R.
2. Tests of the distributional assumptions. Let us submit equation (27) to more tests concerning the
adequacy of the statistical model. The aspect of the model that we examine is the set of assumptions
made about the statistical distribution of the error term.

(a) Estimate regression model (27) and save the estimated model in R as object fit_fdl. Save the
residuals of the estimated model (27). Form a visual impression of the acceptability of the constant
variance (homoskedasticity) assumption (a sketch follows this item). Does heteroskedasticity bias
the estimated coefficients? Does it invalidate statistical tests?
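A hedged sketch of the visual check (the variable names, following the earlier tutorials, are assumptions):
fit_fdl <- lm(lnIO_0 ~ lnYO_0 + lnYO_1)
res_fdl <- resid(fit_fdl)
# residuals against fitted values: look for systematic changes in the spread
plot(fitted(fit_fdl), res_fdl, xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)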
(b) A more formal test is the Breusch-Pagan test for heteroskedasticity. Explain how the test works,
and which null hypothesis (versus which alternative hypothesis) is being tested.
(c) In R, start by installing the package lmtest, which contains the Breusch-Pagan test in the function
bptest. You can then load the package and use the command:
library(lmtest)
bptest(fit_fdl, varformula = ~ lnYO_0 + lnYO_1)
to get the value of the test statistic and the corresponding p-value. The argument varformula
indicates the explanatory variables for explaining the residual variance. What do you conclude?
(d) There is an important drawback of the Breusch-Pagan test, which one? To circumvent this
drawback we turn to the White test instead. Explain how the test works, and which null hypothesis
(versus which alternative hypothesis) is being tested.
(e) To perform the White test in R, you may again use the bptest function, you only need to adjust
the argument varformula:
bptest(fit_fdl, varformula = ~ lnYO_0 + lnYO_1 + I(lnYO_0^2) + I(lnYO_1^2) +
lnYO_0*lnYO_1)
to correctly reflect that squared terms as well as cross-products should also be considered as
explanatory variables for explaining the residual variance. Note that the function I(·) is used
around the squared terms such that R correctly recognizes the expression for taking the square.
What do you conclude?
(f) Re-estimate your equation replacing the ordinary coefficient standard errors by heteroskedasticity-
robust standard errors, which were (along with the corresponding test) promoted in econometrics
by White.
In R, start by installing the package sandwich. After loading the library into R, you can use the
commands:
library(sandwich)
coeftest(fit_fdl, vcov = vcovHC(fit_fdl, type = "HC"))
to get the estimated regression coefficients and corresponding standard errors, t-statistics and
p-values that are robust to heteroskedasticity.
Do the estimated coefficients change in the regression output? Do the standard errors, t-statistics,
p-values change? Explain.
(g) Finally, perform a joint hypothesis test H0 : β1 = β2 = 0 with heteroskedasticity robust standard
errors using the function linearHypothesis in the R package car. You can do so by simply
adding the argument vcov. with the heteroskedasticity-robust standard errors:
linearHypothesis(fit_fdl, c("lnYO_0 = 0", "lnYO_1 = 0"),
test="F", vcov. = vcovHC(fit_fdl, type = "HC"))
after loading the library into your R session. What do you conclude?

3. Testing for autocorrelation. Now we investigate the possibility of first-order serial correlation, a.k.a.
autocorrelation, among the disturbances ut . Its presence would indicate a violation of the assumption
that the disturbances are independent or at least not serially correlated.

(a) Does autocorrelation bias the estimated coefficients? Does it invalidate the tests conducted so
far?
(b) Re-consider regression (27). We will now test for first-order positive autocorrelation in the resid-
uals ût using different approaches.
To begin, inspect two graphs: the time series plot of the residuals (ût on t) and a scatter of the
residuals on their lagged values (ût on ût−1 ). Recall that you can use the function embed to
create the pair of series ût and ût−1 . What do the graphs suggest?
(c) Now formally test whether the residuals are white noise using the Ljung-Box test. What is the
null hypothesis, what is the alternative hypothesis?
In R, you can use the commands:
k <- round(sqrt(n))
p <- length(coef(fit_fdl))
Box.test(fit_fdl$residuals, type = "Ljung-Box", lag = k, fitdf = p)
where k is the rounded square root of the sample size n, and fitdf is the number of degrees of
freedom that is subtracted from the lag order; setting it equal to the number of estimated
parameters p means the test is evaluated with k − p degrees of freedom.
What do you conclude?
(d) Secondly, try an asymptotic t-test in a regression of the residuals on the lagged residuals (ût on
ût−1 ); a sketch follows this item. What do you conclude?
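A minimal sketch of this auxiliary regression, assuming the fitted model object is fit_fdl:
res <- resid(fit_fdl)
res_pairs <- embed(res, 2)      # column 1: u_hat_t, column 2: u_hat_{t-1}
fit_ac <- lm(res_pairs[, 1] ~ res_pairs[, 2])
summary(fit_ac)                 # inspect the t-test on the lagged residual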
(e) There are important drawbacks of the asymptotic t-test approach, which ones? To circumvent
these drawbacks we turn to the Breusch-Godfrey test. How does it relate to (or differ from) the
test in the preceding item?
(f) Implement the Breusch-Godfrey test in R using the function bgtest in the package lmtest:
bgtest(fit_fdl, order = 2)
where the argument order = 2 indicates that you test for 2nd order autocorrelation (you can
argue to include more lags for your country if needed). What do you conclude?

4. Autocorrelation-robust standard errors. The White variance-covariance estimator used earlier still
assumes that the disturbances of the estimated equation are uncorrelated. There exists another for-
mula, due to Newey & West, for the calculation of standard errors that are robust against both serial
correlation and heteroskedasticity (a.k.a. HAC: Heteroskedasticity and Autocorrelation Consistent).
Re-estimate your investment equation (27), replacing the ordinary coefficient standard errors by the
Newey-West (HAC) standard errors. In R you can use the sandwich package for this; simply use
another type of covariance estimator in the function, namely one that allows for HAC standard errors:
coeftest(fit_fdl, vcov = vcovHAC(fit_fdl))

(a) Do the estimated coefficients change in the regression output?


(b) Do the standard errors, t-statistics, p-values, confidence intervals change? Explain.
(c) Finally, note that to perform any linear hypothesis test with the function linearHypothesis, you
can also use vcovHAC(fit_fdl) as the argument vcov. to make your inference robust against
the presence of serial correlation and heteroskedasticity. As an example, perform the joint
hypothesis test H0 : β1 = β2 = 0 with HAC standard errors (see the sketch below). What do you conclude?
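A hedged example of this joint test, reusing the variable names from above:
linearHypothesis(fit_fdl, c("lnYO_0 = 0", "lnYO_1 = 0"),
                 test = "F", vcov. = vcovHAC(fit_fdl))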

5. Test of the linearity of the specification. We continue submitting the estimated equation (27) to tests
that examine the adequacy of the linear specification of the equation.

(a) We will submit our regression to Ramsey’s RESET specification test. Explain how the test works,
and which null hypothesis (versus which alternative hypothesis) is tested.
(b) In R, you can use the resettest function that is contained in the library lmtest:
resettest(fit_fdl)
Interpret the test outcome and draw your conclusions.

6. Testing for structural breaks. We submit equation (27) to a test for structural change, a.k.a. test of
structural stability, or test of parameter constancy. Note that, for ease of exposition, we make use of
the classical inference tools here. In case your output above indicated presence of heteroskedasticity
and/or serial correlation, you know that you should use the corresponding standard errors by adjusting
the argument vcov. in function linearHypothesis.

(a) Select a plausible break point so as to split your sample (period) in two subsamples (subperiods).
The best way to pick a break date is by reference to a major historical event.
(b) Re-estimate equation (27) for each subsample as well as for the complete sample. Note that
the ‘break date’ is the first year affected by the parameter shift! To run the regressions on the
subsamples in R, you need to subset all of your variables involved in the regressions. Assume
you work with a break year corresponding to observation 30. Your first subsample will run from
observation 1 until 29. Your second subsample will run from observation 30 to n, with n denoting
the sample size. You can subset your response variable lnIO 0 (using notation introduced in
tutorial 2) for instance like this:
lnIO_0_sub1 <- lnIO_0[1:29]
which can then be included in the regression for the first subsample. Similarly for all other
variables in this regression.
(c) Start by visually comparing the estimated coefficients across the three regression models.
(d) Formally test the null hypothesis of no structural break yourself, based on the formula of the F -test
given in the lecture slides (a sketch follows this item). What is the value of the F -statistic? What
do you conclude?
Recall: You manually computed a similar F -statistic in tutorial 2...
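A hedged sketch of this computation, assuming the three fitted models are stored as fit_full, fit_sub1 and fit_sub2 (names are assumptions), with k = 3 parameters in model (27):
ssr_r <- sum(resid(fit_full)^2)                            # restricted: full sample
ssr_u <- sum(resid(fit_sub1)^2) + sum(resid(fit_sub2)^2)   # unrestricted
k <- 3                                                     # parameters per regression
n_used <- length(resid(fit_full))                          # obs used in full regression
F_stat <- ((ssr_r - ssr_u) / k) / (ssr_u / (n_used - 2 * k))
F_stat
pf(F_stat, df1 = k, df2 = n_used - 2 * k, lower.tail = FALSE)  # p-value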
(e) An alternative way to test for shifts in the regression coefficients is to introduce a dummy variable
equal to 0 before the break point and 1 from then on, as well as interactions between the dummy
and regressors. So estimate the model

ln IOt = β0 + α0 dummyt + β1 ln Y Ot + β2 ln Y Ot−1 + α1 ln Y Ot × dummyt + α2 ln Y Ot−1 × dummyt + ut   (28)
with different intercepts and slopes for the two subperiods; see Wooldridge § 7.4.
Generate the dummy variable in R. For the example with a break in observation 30, this can
simply be done with the command:
dummy <- c(rep(0, 29), rep(1, n-29))
where you repeat the value 0 for 29 times, and then repeat the value 1 for n − 29 times. You can
then also create the interaction terms, as for example
lnYO_0_dum <- lnYO_0*dummy
to then include the interaction terms in the regression model.
(f) Estimate equation (28). For subperiod 1: what is the value of the intercept, the effect of ln Y Ot
and the effect of ln Y Ot−1 ? For subperiod 2: what is the value of the intercept, the effect of
ln Y Ot and the effect of ln Y Ot−1 ?
(g) Compare the estimated coefficients in equation (28) to the estimated coefficients of equation (27)
obtained on both subsamples. What do you observe?
(h) Assume we want to test the null hypothesis that there is no structural break in any of the
parameters. Formulate this null hypothesis in terms of the parameters in equation (28).
(i) Test this null hypothesis in R using the function linearHypothesis. What is the value of the
F -statistic? What is the p-value? Does the value of the F -statistic match the one you computed
in part (d)?

(j) What is your final conclusion? Does your equation appear to have constant coefficients, i.e., does
it seem to be ‘stable’ over time?

Tutorial 6: Vector AutoRegressive Models
In this tutorial, you will learn how to estimate Vector Autoregressive Models. We will consider a VAR model
for investment growth and output growth:
[ ∆ ln IOt  ]   [ c1 ]    p  [ φj,11  φj,12 ] [ ∆ ln IOt−j  ]   [ u1,t ]
[ ∆ ln Y Ot ] = [ c2 ] +  Σ  [ φj,21  φj,22 ] [ ∆ ln Y Ot−j ] + [ u2,t ] .   (29)
                         j=1

1. Setting up R. Set up your R script as you did for the previous tutorials: set your working directory
and import your data into R.

2. Obtaining stationary time series. Generate the series in growth rates: ∆ ln IOt and ∆ ln Y Ot (a sketch follows this task).
(a) Make a time series plot of ∆ ln IOt . Does the series look stationary?
(b) Test whether the series ∆ ln IOt is stationary by running an appropriate unit root test in the
package bootUR. Be precise on the test you run!
(c) What is your conclusion for ∆ ln IOt : is the series stationary or not?
(d) Repeat the same steps for ∆ ln Y Ot .
(e) In case you conclude that (one or more) series is non-stationary, discuss possible reasons for this.
Should we be careful in the remainder then?
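A minimal sketch for generating the growth-rate series, assuming the log series lnIO_0 and lnYO_0 from the earlier tutorials (the object names dlnIO_ts and dlnYO_ts are reused below):
dlnIO_ts <- diff(lnIO_0)    # ∆ ln IO_t, one observation shorter
dlnYO_ts <- diff(lnYO_0)    # ∆ ln YO_t
plot.ts(dlnIO_ts)           # time series plot for item (a)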

3. Estimating a VAR(1). Estimate the VAR model in equation (29) taking p = 1, so including one lag.
In R, start by installing the package vars and loading it into R. You can then use the function VAR
via the commands:
library(vars)
VAR_data <- data.frame(dlnIO_ts, dlnYO_ts)
fit_var1 <- VAR(VAR_data, p = 1)
summary(fit_var1)

(a) Inspect the estimation output for each equation. Write down the two equations of the estimated
VAR model with the estimated coefficients.
(b) What is the value of the estimated coefficient φ̂1,12 ? Interpret it. Is it significant?
(c) What is the value of the estimated coefficient φ̂1,21 ? Interpret it. Is it significant?
(d) R reports two values of the R² . What are these values? Interpret them.

4. Validating the VAR(1). Save the residuals of the estimated VAR(1).


In R, you can use the function resid to get the residual series from the estimated VAR:
resid_var1 <- resid(fit_var1)
resid_dlnIO <- resid_var1[, 1]
resid_dlnYO <- resid_var1[, 2]
We hereby name the residuals of the first series resid dlnIO, the residuals of the second series
resid dlnYO.

(a) Make a line graph of û1,t . Do you suspect autocorrelation? Is this problematic?
(b) Make a correlogram of û1,t . Recall that you can use the function acf to this end. Is autocorrelation
present? Is this problematic?
(c) Repeat the same steps for û2,t .

(d) Now we consider a cross-correlogram of the series û1,t and û2,t . It displays the correlation between
û1,t+k and û2,t for different values of k (lags of û1,t for negative values of k, leads of û1,t for
positive values of k). You can use the function ccf in R to this end:
ccf(x = resid_dlnIO, y = resid_dlnYO)
In the VAR framework, is correlation of û2,t with the lags of û1,t allowed? Is correlation of û2,t
with the leads of û1,t allowed? Is correlation between both residual series at k = 0 allowed?
What is the latter called? Discuss the cross-correlogram.

5. Selecting the order of the VAR. Now we use automatic lag selection with information criteria to select
the order of the VAR model.
In R, you may use the command:
VARselect(VAR_data)
to consider VAR models for p = 1, . . . , 10.
(a) What is the lag order you select based on AIC?
(b) What is the lag order you select based on BIC (indicated as SC in R)?
(c) Re-estimate the VAR model making use of one of the two selected orders (do choose p ≠ 1, as we
already estimated the VAR(1) above). For instance, to estimate a VAR(3), use:
fit_var3 <- VAR(VAR_data, p = 3)

(d) What changes in the estimation output?


(e) Validate your newly estimated VAR in the same way as you did for the VAR(1) model.

6. Granger Causality in VARs. Consider the VAR(1) you estimated. We will now investigate Granger
causality relations.
(a) Write down the null hypothesis for the test that output growth does not Granger cause investment
growth.
(b) In R, use the command:
causality(x = fit_var1, cause = "dlnYO_ts")$Granger
to test whether output growth does not Granger cause investment growth.
What is the p-value? What do you conclude?
(c) Repeat the same steps to test the null hypothesis that investment growth does not Granger cause
output growth.
(d) Have a closer look at the p-values of the Granger causality output. Did you also see them in the
output of the estimated VAR(1)? Explain.
(e) Repeat the same exercise for the higher order VAR (with p > 1) that you estimated.
(f) Does your conclusion change on whether output growth Granger causes investment growth?
(g) Does your conclusion change on whether investment growth Granger causes output growth?
(h) Did you now see the p-values of the Granger causality output popping up in the output of the
estimated VAR(3)?

7. Impulse response functions in VARs. Finally, we will inspect the impulse response functions of the
higher order VAR (with p > 1) that you estimated.

(a) Explain in words what the impulse response functions represent.


(b) In R, you can obtain the impulse response functions via the commands:
irf_var3 <- irf(fit_var3, ortho = F)
plot(irf_var3)
Note that in the R console the output “Hit <Return> to see next plot: ” appears. The function
starts by giving the first set of impulse response functions in the plot-window. To see the second
set, you first need to hit return.

(c) Carefully interpret both impulse responses. Which responses are significant?
