Introductory Econometrics: A Modern Approach, 6th Edition, Wooldridge Solutions Manual

Solutions Manual, Instructor Manual, Answer Key for all chapters, Appendix chapter, Data Sets - Minitab, and Data Sets - R are included. Download link:
https://testbankarea.com/download/introductory-econometrics-modern-approach-6th-edition-jeffrey-m-wooldridge-solutions-manual/

Test Bank for Introductory Econometrics: A Modern Approach, 6th Edition by Jeffrey M. Wooldridge. Completed download:
https://testbankarea.com/download/introductory-econometrics-modern-approach-6th-edition-jeffrey-m-wooldridge-test-bank/

CHAPTER 3

SOLUTIONS TO PROBLEMS

3.1 (i) hsperc is defined so that the smaller it is, the better the student's standing in high school (hsperc is the percentile in the graduating class, measured from the top). Everything else equal, the worse the student's standing in high school, the lower his/her expected college GPA.

(ii) Just plug these values into the equation:

colgpa-hat = 1.392 − .0135(20) + .00148(1050) = 2.676.

(iii) The difference between A and B is simply 140 times the coefficient on sat, because hsperc is the same for both students. So A is predicted to have a GPA .00148(140) ≈ .207 points higher.

(iv) With hsperc fixed, Δcolgpa-hat = .00148 Δsat. Now, we want to find Δsat such that Δcolgpa-hat = .5, so .5 = .00148(Δsat), or Δsat = .5/.00148 ≈ 338. Perhaps not surprisingly, a large ceteris paribus difference in SAT score (almost two and one-half standard deviations) is needed to obtain a predicted difference in college GPA of half a point.
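A quick numeric check of parts (ii)-(iv) in Python, using only the coefficients of the fitted equation above (a minimal sketch for illustration):

    # Fitted equation: colgpa-hat = 1.392 - .0135*hsperc + .00148*sat
    b0, b_hsperc, b_sat = 1.392, -0.0135, 0.00148

    # (ii) predicted GPA at hsperc = 20, sat = 1050
    print(round(b0 + b_hsperc * 20 + b_sat * 1050, 3))  # 2.676

    # (iii) predicted GPA gap for a 140-point SAT difference, hsperc fixed
    print(round(b_sat * 140, 3))                         # 0.207

    # (iv) SAT difference needed for a 0.5 difference in predicted GPA
    print(round(0.5 / b_sat))                            # 338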

3.3 (i) If adults trade off sleep for work, more work implies less sleep (other things equal), so β₁ < 0.

(ii) The signs of β₂ and β₃ are not obvious, at least to me. One could argue that more educated people like to get more out of life, and so, other things equal, they sleep less (β₂ < 0).

The relationship between sleeping and age is more complicated than this model suggests, and economists are not in the best position to judge such things.

(iii) Since totwrk is in minutes, we must convert five hours into minutes: Δtotwrk = 5(60) = 300. Then sleep is predicted to fall by .148(300) = 44.4 minutes. For a week, 45 minutes less sleep is not an overwhelming change.

(iv) More education implies less predicted time sleeping, but the effect is quite small. If we assume the difference between college and high school is four years, the college graduate sleeps about 45 minutes less per week, other things equal.

(v) Not surprisingly, the three explanatory variables explain only about 11.3% of the variation in sleep. One important factor in the error term is general health. Another is marital status and whether the person has children. Health (however we measure that), marital status, and number and ages of children would generally be correlated with totwrk. (For example, less healthy people would tend to work less.)

3.5 (i) No. By definition, study + sleep + work + leisure = 168. Therefore, if we change study, we must change at least one of the other categories so that the sum is still 168.

(ii) From part (i), we can write, say, study as a perfect linear function of the other independent variables: study = 168 − sleep − work − leisure. This holds for every observation, so MLR.3 is violated.

(iii) Simply drop one of the independent variables, say leisure:

GPA = β₀ + β₁ study + β₂ sleep + β₃ work + u.

Now, for example, β₁ is interpreted as the change in GPA when study increases by one hour, where sleep, work, and u are all held fixed. If we are holding sleep and work fixed but increasing study by one hour, then we must be reducing leisure by one hour. The other slope parameters have a similar interpretation.
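A small numerical illustration of why MLR.3 fails when all four time-use categories are included (a sketch with made-up data, not part of the original solution):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    study = rng.uniform(10, 40, n)
    sleep = rng.uniform(40, 60, n)
    work = rng.uniform(0, 40, n)
    leisure = 168 - study - sleep - work   # the identity forces perfect collinearity

    X_all = np.column_stack([np.ones(n), study, sleep, work, leisure])
    X_drop = np.column_stack([np.ones(n), study, sleep, work])

    print(np.linalg.matrix_rank(X_all))    # 4: five columns but rank 4, so MLR.3 fails
    print(np.linalg.matrix_rank(X_drop))   # 4: dropping leisure restores full column rank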

3.7 Only (ii), omitting an important variable, can cause bias, and this is true only when the omitted variable is correlated with the included explanatory variables. The homoskedasticity assumption, MLR.5, played no role in showing that the OLS estimators are unbiased.

(Homoskedasticity was used to obtain the usual variance formulas for the β̂ⱼ.) Further, the degree of collinearity between the explanatory variables in the sample, even if it is reflected in a correlation as high as .95, does not affect the Gauss-Markov assumptions. Only if there is a perfect linear relationship among two or more explanatory variables is MLR.3 violated.

3.9 (i) β₁ < 0 because more pollution can be expected to lower housing values; note that β₁ is the elasticity of price with respect to nox. β₂ is probably positive because rooms roughly measures the size of a house. (However, it does not allow us to distinguish homes where each room is large from homes where each room is small.)

\tilde{\beta}_1 = \beta_1 + \beta_3 \frac{\sum_{i=1}^{n} \hat{r}_{i1} x_{i3}}{\sum_{i=1}^{n} \hat{r}_{i1}^2} + \frac{\sum_{i=1}^{n} \hat{r}_{i1} u_i}{\sum_{i=1}^{n} \hat{r}_{i1}^2}.

Conditional on all sample values of x₁, x₂, and x₃, only the last term is random, due to its dependence on uᵢ. But E(uᵢ) = 0, and so

E(\tilde{\beta}_1) = \beta_1 + \beta_3 \frac{\sum_{i=1}^{n} \hat{r}_{i1} x_{i3}}{\sum_{i=1}^{n} \hat{r}_{i1}^2},

which is what we wanted to show. Notice that the term multiplying β₃ is the regression coefficient from the simple regression of xᵢ₃ on r̂ᵢ₁.
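A short Monte Carlo check of this result (a sketch only; the coefficient values and data-generating process are made up, and it assumes the usual setup in which r̂ᵢ₁ are the OLS residuals from regressing x₁ on x₂ and β̃₁ = Σᵢ r̂ᵢ₁yᵢ / Σᵢ r̂ᵢ₁²):

    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 500, 2000
    b0, b1, b2, b3 = 1.0, 0.5, -0.3, 0.8          # made-up population coefficients

    # Regressors held fixed across replications, with x3 related to x1 and x2
    x2 = rng.normal(size=n)
    x1 = 0.6 * x2 + rng.normal(size=n)
    x3 = 0.4 * x1 - 0.2 * x2 + rng.normal(size=n)

    # r1: residuals from regressing x1 on x2 (with an intercept)
    Z = np.column_stack([np.ones(n), x2])
    r1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]

    est = np.empty(reps)
    for r in range(reps):
        u = rng.normal(size=n)
        y = b0 + b1 * x1 + b2 * x2 + b3 * x3 + u
        est[r] = (r1 @ y) / (r1 @ r1)             # beta1-tilde for this sample

    print(est.mean())                              # simulated average of beta1-tilde
    print(b1 + b3 * (r1 @ x3) / (r1 @ r1))         # beta1 + beta3 * sum(r1*x3)/sum(r1^2)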

3.13 (i) For notational simplicity, define s_zx = Σᵢ₌₁ⁿ (zᵢ − z̄)xᵢ; this is not quite the sample covariance between z and x because we do not divide by n − 1, but we are only using it to simplify notation. Then we can write β̃₁ as

\tilde{\beta}_1 = \frac{\sum_{i=1}^{n} (z_i - \bar{z})\, y_i}{s_{zx}}.

This is clearly a linear function of the yᵢ: take the weights to be wᵢ = (zᵢ − z̄)/s_zx. To show unbiasedness, as usual we plug yᵢ = β₀ + β₁xᵢ + uᵢ into this equation and simplify:

\tilde{\beta}_1 = \frac{\sum_{i=1}^{n} (z_i - \bar{z})(\beta_0 + \beta_1 x_i + u_i)}{s_{zx}}
= \frac{\beta_0 \sum_{i=1}^{n} (z_i - \bar{z}) + \beta_1 s_{zx} + \sum_{i=1}^{n} (z_i - \bar{z}) u_i}{s_{zx}}
= \beta_1 + \frac{\sum_{i=1}^{n} (z_i - \bar{z}) u_i}{s_{zx}},

where we use the fact that Σᵢ₌₁ⁿ (zᵢ − z̄) = 0 always. Now s_zx is a function of the zᵢ and xᵢ, and the expected value of each uᵢ is zero conditional on all zᵢ and xᵢ in the sample. Therefore, conditional on these values,

E(\tilde{\beta}_1) = \beta_1 + \frac{\sum_{i=1}^{n} (z_i - \bar{z})\, \mathrm{E}(u_i)}{s_{zx}} = \beta_1,

because E(uᵢ) = 0 for all i.

(ii) From the last expression in part (i) we have (again conditional on the zᵢ and xᵢ in the sample),

\mathrm{Var}(\tilde{\beta}_1) = \frac{\mathrm{Var}\!\left( \sum_{i=1}^{n} (z_i - \bar{z}) u_i \right)}{s_{zx}^2}
= \frac{\sum_{i=1}^{n} (z_i - \bar{z})^2\, \mathrm{Var}(u_i)}{s_{zx}^2}
= \frac{\sigma^2 \sum_{i=1}^{n} (z_i - \bar{z})^2}{s_{zx}^2},

because of the homoskedasticity assumption [Var(uᵢ) = σ² for all i]. Given the definition of s_zx, this is what we wanted to show.

(iii) We know that Var(β̂₁) = σ² / [Σᵢ₌₁ⁿ (xᵢ − x̄)²]. Now we can rearrange the inequality in the hint, drop x̄ from the sample covariance, and cancel n⁻¹ everywhere, to get

\frac{\sum_{i=1}^{n} (z_i - \bar{z})^2}{s_{zx}^2} \ge \frac{1}{\sum_{i=1}^{n} (x_i - \bar{x})^2}.

When we multiply through by σ² we get Var(β̃₁) ≥ Var(β̂₁), which is what we wanted to show.
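A quick numerical check of the inequality in part (iii) (a sketch with made-up z and x; σ² is set to 1):

    import numpy as np

    rng = np.random.default_rng(2)
    n, sigma2 = 200, 1.0
    x = rng.normal(size=n)
    z = 0.5 * x + rng.normal(size=n)        # any z correlated with x

    s_zx = np.sum((z - z.mean()) * x)
    var_tilde = sigma2 * np.sum((z - z.mean())**2) / s_zx**2   # Var(beta1-tilde)
    var_hat = sigma2 / np.sum((x - x.mean())**2)               # Var(beta1-hat)

    print(var_tilde >= var_hat)             # True: OLS is at least as precise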

3.15 (i) The degrees of freedom of the first regression is n − k − 1 = 353 − 1 − 1 = 351. The degrees of freedom of the second regression is n − k − 1 = 353 − 2 − 1 = 350. The standard error of the regression is smaller in the second equation than in the simple regression because one more explanatory variable is included: the SSR falls from 326.196 to 198.475 when the extra variable is added, while the degrees of freedom fall by only one, and both of these enter the standard error.

(ii) Yes, there is a moderate positive correlation between years and rbisyr. VIF_years = 1/(1 − R²_years) = 1/(1 − 0.597) = 2.48139; from this value, we can say that there is little collinearity between years and rbisyr.

(ii) We cannot include profits in logarithmic form because profits are negative for nine of the companies in the sample. When we add it in levels form, we get

log(salary)-hat = 4.69 + .161 log(sales) + .098 log(mktval) + .000036 profits

n = 177, R² = .299.

The coefficient on profits is very small. Here, profits are measured in millions, so if profits increase by $1 billion, which means Δprofits = 1,000 (a huge change), predicted salary increases by only about 3.6%. However, remember that we are holding sales and market value fixed. Together, these variables (and we could drop profits without losing anything) explain almost 30% of the sample variation in log(salary). This is certainly not "most" of the variation.

(iii) Adding ceoten to the equation gives

log(salary)-hat = 4.56 + .162 log(sales) + .102 log(mktval) + .000029 profits + .012 ceoten

n = 177, R² = .318.

This means that one more year as CEO increases predicted salary by about 1.2%.

(iv) The sample correlation between log(mktval) and profits is about .78, which is fairly high. As we know, this causes no bias in the OLS estimators, although it can cause their variances to be large. Given the fairly substantial correlation between market value and firm profits, it is not too surprising that the latter adds nothing to explaining CEO salaries. Also, profits is a short-term measure of how the firm is doing, while mktval is based on past, current, and expected future profitability.

C3.5 The regression of educ on exper and tenure yields

educ = 13.57 − .074 exper + .048 tenure + r̂₁

n = 526, R² = .101.

Now, when we regress log(wage) on r̂₁ we obtain

log(wage)-hat = 1.62 + .092 r̂₁

n = 526, R² = .207.

As expected, the coefficient on r̂₁ in the second regression is identical to the coefficient on educ in equation (3.19). Notice that the R-squared from the above regression is below that in (3.19). In effect, the regression of log(wage) on r̂₁ explains log(wage) using only the part of educ that is uncorrelated with exper and tenure; separate effects of exper and tenure are not included.
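A sketch of the same two-step (partialling-out) calculation in Python, assuming the WAGE1 data are available as a CSV file with columns wage, educ, exper, and tenure (the file name and loading step are hypothetical):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    wage1 = pd.read_csv("wage1.csv")        # hypothetical loading step
    wage1["lwage"] = np.log(wage1["wage"])

    # Step 1: regress educ on exper and tenure; keep the residuals r1_hat.
    step1 = smf.ols("educ ~ exper + tenure", data=wage1).fit()
    wage1["r1_hat"] = step1.resid

    # Step 2: regress log(wage) on r1_hat alone.
    step2 = smf.ols("lwage ~ r1_hat", data=wage1).fit()
    print(step2.params["r1_hat"])

    # The full multiple regression gives the identical coefficient on educ.
    full = smf.ols("lwage ~ educ + exper + tenure", data=wage1).fit()
    print(full.params["educ"])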

C3.7 (i) The results of the regression are

math 10  20.36  6.23 log( expend ) .305 lnchprg

n = 408, R^2 = .180.

The signs of the estimated slopes imply that more spending increases the pass rate (holding lnchprg fixed) and a higher poverty rate (proxied well by lnchprg ) decreases the pass rate (holding spending fixed). These are what we expect.

(ii) As usual, the estimated intercept is the predicted value of the dependent variable when all regressors are set to zero. Setting lnchprg = 0 makes sense, as there are schools with low poverty rates. Setting log(expend) = 0 does not make sense, because it is the same as setting expend = 1, and spending is measured in dollars per student. Presumably this is well outside any sensible range. Not surprisingly, the prediction of a −20 pass rate is nonsensical.

(iii) The simple regression results are

math10-hat = −69.34 + 11.16 log(expend)

n = 408, R² = .030, and the estimated spending effect is larger than it was in part (i), almost double.

(iv) The sample correlation between lexpend and lnchprg is about −.19, which means that, on average, high schools with poorer students spent less per student. This makes sense, especially in 1993 in Michigan, where school funding was essentially determined by local property tax collections.

(v) We can use equation (3.23). Because Corr(x₁, x₂) < 0, which means δ̃₁ < 0, and β̂₂ < 0, the simple regression estimate, β̃₁, is larger than the multiple regression estimate, β̂₁. Intuitively, failing to account for the poverty rate leads to an overestimate of the effect of spending.
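For reference, the relationship being invoked here (equation (3.23) in the text, written in the book's notation) is

\tilde{\beta}_1 = \hat{\beta}_1 + \hat{\beta}_2 \tilde{\delta}_1,

where δ̃₁ is the slope from the simple regression of lnchprg on log(expend). With β̂₂ < 0 and δ̃₁ < 0, the product β̂₂δ̃₁ is positive, so β̃₁ > β̂₁.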

C3.9 (i) The estimated equation is the regression of gift on mailsyear, giftlast, and propresp.

The R-squared is now about .083, compared with about .014 for the simple regression case. Therefore, the variables giftlast and propresp help to explain significantly more variation in gifts in the sample (although still just over eight percent).

(v) VIF_pctsgle = 1/(1 − R²) = 1/(1 − 0.3795) = 1.6116, VIF_free = 1/(1 − R²) = 1/(1 − 0.4455) = 1.8034, and VIF_lmedinc = 1/(1 − R²) = 1/(1 − 0.3212) = 1.4732, where each R² comes from regressing that explanatory variable on the other explanatory variables.

By comparing the three variables, it is clear that free has the highest VIF. No, this knowledge does not change how we would specify the model for studying the causal effect of single parenthood on math performance.
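A sketch of how such VIFs can be computed, assuming the data are in a pandas DataFrame with columns pctsgle, free, and lmedinc and that these are the model's only explanatory variables (the file name and DataFrame name are hypothetical); each VIF is 1/(1 − R²) from an auxiliary regression of one explanatory variable on the others:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("meap_single.csv")     # hypothetical file with pctsgle, free, lmedinc

    def vif(formula, data):
        """VIF = 1 / (1 - R^2) from the auxiliary regression given by `formula`."""
        r2 = smf.ols(formula, data=data).fit().rsquared
        return 1.0 / (1.0 - r2)

    print(vif("pctsgle ~ free + lmedinc", df))
    print(vif("free ~ pctsgle + lmedinc", df))
    print(vif("lmedinc ~ pctsgle + free", df))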
