Econometrics for finance

Part 1: True/False Questions

1. False - Homoscedasticity refers to the assumption that the variance of the errors (residuals) is constant across all levels of the independent variable(s). It does not refer to the linearity of the random errors for all values of x.

2. True - Plotting values of X against corresponding residuals is indeed a method for determining
the linearity of X-Y data. If the residuals show a pattern or trend, it may indicate non-linearity in
the relationship between X and Y.
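As a minimal sketch of this check (hypothetical data; numpy and matplotlib assumed available):

# Sketch: plot residuals against X to assess linearity (hypothetical data).
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1, 14.2, 15.9])

b1, b0 = np.polyfit(x, y, 1)    # least-squares slope and intercept
residuals = y - (b0 + b1 * x)   # observed minus fitted values

# A patternless scatter around zero supports linearity; a curve suggests otherwise.
plt.scatter(x, residuals)
plt.axhline(0, linestyle="--")
plt.xlabel("X")
plt.ylabel("Residuals")
plt.show()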

3. True - The prediction line in a regression model provides a point estimate of the value of Y for
any given x. It is an estimate of the expected value of Y based on the regression equation.

4. False - A strong linear relationship between X and Y does not necessarily imply causality.
Correlation and causation are distinct concepts. A strong linear relationship simply indicates that
there is a strong association between X and Y, but it does not establish a cause-and-effect
relationship.

5. True - A high coefficient of determination (R-squared) indicates that a larger proportion of the
variation in Y is explained by X. It suggests that the regression model accounts for a significant
portion of the variability in the dependent variable (Y).

6. False - Plotting X against the residuals reveals patterns in their variability, but it is not a method for determining the variance of the difference between observed and predicted values of Y in simple linear regression. That variance is typically estimated by the mean squared error (MSE) or the standard deviation of the residuals.
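For reference, in simple linear regression this variance is estimated as $MSE = \frac{\sum (y_i - \hat{y}_i)^2}{n-2}$, where the denominator $n-2$ reflects the two estimated parameters $b_0$ and $b_1$.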

Multiple Choice Questions

1. b) $49\%$ of variations in Y are explained by X

Explanation: The correlation coefficient (r) measures the strength and direction of the linear
relationship between two variables. In this case, a correlation coefficient of 0.7 indicates a
moderate to strong positive linear relationship. The coefficient of determination (R-squared) is
the square of the correlation coefficient, which represents the proportion of variance in the
dependent variable (Y) that is predictable from the independent variable (X). Therefore, $r^2 =
0.7^2 = 0.49$, indicating that 49% of the variations in Y are explained by X.
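A quick numerical check of this arithmetic (a minimal sketch with hypothetical data; numpy assumed available):

# Sketch: correlation coefficient and coefficient of determination (hypothetical data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
r_squared = r ** 2            # proportion of variation in Y explained by X
print(f"r = {r:.3f}, r^2 = {r_squared:.3f}")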

2. b) Dickey-Fuller test - Heteroscedasticity

Explanation: The Dickey-Fuller test is used to test for the presence of a unit root in a time series, which relates to stationarity. Heteroscedasticity, on the other hand, is the condition in which the variance of the errors (residuals) is not constant across all levels of the independent variable(s); it is detected with tests such as the Breusch-Pagan or White test. Therefore, the pair "Dickey-Fuller test - Heteroscedasticity" is not correctly matched.
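For context, a minimal sketch of the (augmented) Dickey-Fuller test in Python, using a simulated random walk; statsmodels is assumed available. Note that the test addresses stationarity, not heteroscedasticity:

# Sketch: augmented Dickey-Fuller unit-root test (simulated data).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = rng.normal(size=200).cumsum()   # a random walk has a unit root by construction

adf_stat, p_value = adfuller(y)[:2]
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")
# A large p-value means we fail to reject the unit root, i.e. the series is non-stationary.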

3.

4. C

5. B
6. B

7. D

8. B

9. A

10. Dropping a variable from a model may lead to what is called:

a) Specification error.

Explanation: Dropping a relevant variable from a model results in a specification error: the model no longer accurately represents the relationship between the remaining variables and the dependent variable, and the coefficient estimates can suffer from omitted-variable bias.

11. What do residuals represent?

b) difference between the actual Y values and the predicted Y values

Explanation: Residuals represent the difference between the actual observed values of the
dependent variable (Y) and the predicted values obtained from the regression model.
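In symbols, the residual for observation $i$ is $e_i = y_i - \hat{y}_i$, where $y_i$ is the observed value and $\hat{y}_i$ the value predicted by the regression.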

12. Assuming a linear relationship between X and Y, which of the following is true if the
coefficient of correlation $(r)$ equals 0?

d. There is no correlation
Explanation: If the coefficient of correlation $(r)$ equals 0, it indicates that there is no linear
correlation between the variables X and Y.

13. What information is provided by the coefficient of determination?

c. The proportion of total variation in Y that is explained by X

Explanation: The coefficient of determination, denoted as $R^2$, represents the proportion of the total variation in the dependent variable (Y) that is explained by the independent variable (X) in a regression model.
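In formula terms, $R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$, where $SST$ is the total sum of squares, $SSR$ the regression (explained) sum of squares, and $SSE$ the residual sum of squares.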

14. In linear regression, the variable being predicted is usually called which of the following?

a. Dependent variable.

Explanation: In linear regression, the variable being predicted is referred to as the dependent
variable.

15. What does the Y intercept $(b_0)$ represent?

c. The predicted value of Y when $X=0$

Explanation: The Y intercept $(b_0)$ represents the predicted value of the dependent variable (Y) when the independent variable (X) is equal to zero.

Part III: Discussion Questions

1. What is a variable? List types of variables.


A variable is a characteristic or measurement that can vary among individuals or over time.
Types of variables include:

- Continuous variables: Variables that can take on an infinite number of values within a given
range (e.g., height, weight).

- Discrete variables: Variables that can only take on specific, distinct values (e.g., number of children, number of transactions).

- Categorical variables: Variables that represent categories or groups (e.g., gender, race, marital status).

2. Differentiate covariance, correlation, and coefficient of determination.

- Covariance: Measures the average product of the deviations of two variables from their respective means. It indicates the direction of a linear relationship, but its magnitude depends on the units of measurement.

- Correlation: Measures the strength and direction of the linear relationship between two variables. It is covariance standardized by the two standard deviations, so it is unit-free and bounded between -1 and 1.

- Coefficient of determination: Also known as $R^2$, it represents the proportion of the total
variation in the dependent variable that is explained by the independent variable in a regression
model.
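A minimal numerical sketch of all three measures (hypothetical data; numpy assumed available):

# Sketch: covariance, correlation, and coefficient of determination (hypothetical data).
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.5, 3.8, 5.6, 8.4, 9.7])

cov_xy = np.cov(x, y)[0, 1]    # sample covariance: direction, but unit-dependent
r = np.corrcoef(x, y)[0, 1]    # correlation: covariance standardized to [-1, 1]
r_squared = r ** 2             # coefficient of determination: share of variation explained

print(f"cov = {cov_xy:.3f}, r = {r:.3f}, R^2 = {r_squared:.3f}")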

3. What happens when you include an irrelevant variable in a regression?

When you include an irrelevant variable in a regression, the coefficient estimates remain unbiased, but the model becomes less efficient: standard errors are inflated, degrees of freedom are wasted, and multicollinearity issues may arise. The irrelevant variable contributes nothing to predicting the dependent variable.

4. What is multicollinearity and why is it a problem?


Multicollinearity occurs when two or more independent variables in a regression model are
highly correlated with each other. It is a problem because it can lead to inflated standard errors
for the regression coefficients, making them less reliable and potentially leading to incorrect
conclusions about the relationships between the variables.
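One common diagnostic, offered here as a sketch under the assumption that statsmodels is available, is the variance inflation factor (VIF), which measures how much a coefficient's variance is inflated by correlation among the regressors:

# Sketch: detecting multicollinearity with variance inflation factors (simulated data).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)   # nearly collinear with x1 by construction

X = sm.add_constant(np.column_stack([x1, x2]))
# A VIF above 10 is a common rule of thumb for serious multicollinearity.
for i, name in enumerate(["const", "x1", "x2"]):
    print(name, variance_inflation_factor(X, i))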
