(This article was first published on **Econometrics Beat: Dave Giles' Blog**, and kindly contributed to R-bloggers)

Consider the standard linear regression model, with a non-random regressor matrix and a normally distributed error term:

y = Xβ + ε ; ε ~ N[0 , σ^{2}I_{n}] , (1)

The OLS estimator of β, namely b = (X’X)^{-1}X’y, has the following sampling distribution:

b ~ N[β , σ^{2}(X’X)^{-1}] . (2)

Notice that: (i) the diagonal elements of (X’X)^{-1} will generally __not__ all be the same, so each element of b will usually have a different variance; and (ii) the off-diagonal elements of (X’X)^{-1} will generally be non-zero, so the elements of b will usually have non-zero covariances with each other.

To construct a confidence interval for a single coefficient, β_{i}, we start off with the following probability statement:

Pr.[-t_{c} < (b_{i} – β_{i}) / s.e.(b_{i}) < t_{c}] = (1 – α) , (3)

where the critical value, t_{c}, is chosen to ensure that the desired probability of (1 – α) is achieved. Equation (3) is then re-written (equivalently) as:

Pr.[β_{i} – t_{c} s.e.(b_{i}) < b_{i} < β_{i} + t_{c} s.e.(b_{i})] = (1 – α) , (4)

This tells us that if we were to construct intervals of the following form, over many repeated samples, then such (random) intervals would cover the true value of β_{i}, 100(1 – α)% of the time:

[b_{i} – t_{c} s.e.(b_{i}) , b_{i} + t_{c} s.e.(b_{i})] . (5)

Notice that this interval is centered at the value of the point estimator, b_{i}. Making the interval symmetric about this point ensures that we get the shortest (and hence most informative) interval for any fixed values of n, the sample size, and α. (See **here** and **here** for more details.)
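The arithmetic in (3) to (5) is easy to check directly. Here is a minimal sketch in Python (rather than the R used later in the post); the coefficient estimate, standard error, and degrees of freedom below are purely hypothetical:

```python
from scipy import stats

# Hypothetical values: a coefficient estimate, its standard error,
# and the regression's degrees of freedom, (n - k)
b_i, se_i, dof = 1.42, 0.12, 40
alpha = 0.05

# Two-sided critical value, t_c, from the Student-t distribution,
# chosen so that (1 - alpha) of the probability lies between -t_c and t_c
t_c = stats.t.ppf(1 - alpha / 2, dof)

# The interval in equation (5): [b_i - t_c s.e.(b_i) , b_i + t_c s.e.(b_i)]
lower, upper = b_i - t_c * se_i, b_i + t_c * se_i
print(round(lower, 3), round(upper, 3))
```

For a 95% interval with 40 degrees of freedom, t_{c} ≈ 2.02, so the interval is roughly the point estimate plus or minus two standard errors.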

Now let's extend the notion of a confidence *interval* (that applies to a single element of b) to that of a confidence *region*, which can be associated with two elements of b at once.

Note that equation (4) is a statement that gives us the probability that the scalar random variable, b_{i}, lies in some interval on the real line. So, if we are interested in two elements of b at once, consider a probability statement of the following type:

Pr.[(b_{i} , b_{j})’ is in R] = (1 – α) . (6)

This is just a statement that gives us the probability that a random vector lies in some two-dimensional region, say R. Just as equation (4) can be manipulated to give us the confidence interval in equation (5), the statement in equation (6) can be manipulated to give us a confidence region that has the corresponding interpretation. That is, it will be a region whose boundaries are random, and the interpretation will be that if we construct such a region many, many times, then such regions will cover the true value of the vector, (β_{i} , β_{j})’, 100(1 – α)% of the time.

Now, the question is, “what does such a region look like?”. Look back at the sampling distribution for the full b vector in equation (2), and comments (i) and (ii) that follow it. In addition, recall that if the full b vector follows this multivariate normal distribution, then all of the marginal distributions associated with its elements will also be normal. In particular, the sub-vector associated with the pair of elements that we are interested in will have the following bivariate normal sampling distribution:

(b_{i} , b_{j})’ ~ N[(β_{i} , β_{j})’ , V*] , (7)

where the elements in the (estimated) covariance matrix, V*, come from the appropriate (2 × 2) sub-matrix of σ^{2}(X’X)^{-1} in (2). The leading diagonal elements of this sub-matrix will be var.(b_{i}) and var.(b_{j}), and the off-diagonal elements will each be cov.(b_{i} , b_{j}). Given points (i) and (ii) above, in general var.(b_{i}) ≠ var.(b_{j}) and cov.(b_{i} , b_{j}) ≠ 0.

Just as the univariate (scalar) normal random variable b_{i} becomes a univariate Student-t random variable when we standardize it, and replace the unobserved s.d.(b_{i}) with the observable s.e.(b_{i}), the bivariate normal random variable becomes a bivariate Student-t random variable when we essentially standardize each element and replace the unobserved cov.(b_{i} , b_{j}), s.d.(b_{i}) and s.d.(b_{j}) with the observable côv(b_{i} , b_{j}), s.e.(b_{i}) and s.e.(b_{j}). (Of course, these quantities are obtained from the appropriate sub-matrix of V*.) __Strictly speaking__, what we do is to work with the (b_{i} , b_{j})’ vector in its entirety, and construct a new vector,

(b_{i}* , b_{j}*)’ = V*^{-1/2}(b_{i} , b_{j})’, (8)

where V* is the *estimated* covariance matrix for (b_{i} , b_{j})’, and V*^{-1/2} satisfies the relationship V*^{-1/2}V*^{-1/2} = V*^{-1}.
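The matrix V*^{-1/2} in (8) is straightforward to construct numerically. A minimal sketch in Python, using a hypothetical V* with unequal variances and a negative covariance:

```python
import numpy as np

# Hypothetical 2x2 estimated covariance matrix, V*, for (b_i , b_j)':
# unequal variances and a negative covariance
V = np.array([[0.0144, -0.000251],
              [-0.000251, 0.00004]])

# A symmetric inverse square root satisfying V^{-1/2} V^{-1/2} = V^{-1},
# built from the eigendecomposition of the symmetric matrix V
eigvals, Q = np.linalg.eigh(V)
V_inv_sqrt = Q @ np.diag(eigvals ** -0.5) @ Q.T

# Verify the defining relationship used for equation (8)
print(np.allclose(V_inv_sqrt @ V_inv_sqrt, np.linalg.inv(V)))
```

Pre-multiplying (b_{i} , b_{j})’ by this matrix standardizes the vector, which is exactly the role that V*^{-1/2} plays in equation (8).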

Just as the confidence interval in (5) was centered at b_{i}, our bivariate confidence region will be centered at the point located by the value taken by the vector point estimator, (b_{i} , b_{j})’.

The shapes of bivariate density functions were discussed in a **recent post**. In that handout we saw that, in the case of a bivariate normal density with a fixed mean vector, the factors that really mattered were any differences between the variances of the two random variables, and the magnitude and sign of the covariance between them. The same is going to apply here when we consider the bivariate Student-t distribution. The variances in this case depend solely on the “degrees of freedom” parameter, which is just (n – k) in our case.


Density plots of this type are easily produced with the **fMultivar package** in R. The code I used is **on this blog’s code page,** **here**. (In fact, the code also generates dynamic “animated” views of the bivariate densities as they are rotated and viewed from various perspectives.)

As you would anticipate from the earlier **blog post**, if the two elements of the random vector have different variances and/or there is a non-zero covariance between them, the plots change. Specifically, the circular contours will become elliptical. The following two graphs relate to the case where the correlation is changed from 0.0 to -0.7, and you know already (from the **earlier post**) what would happen if the correlation were (say) 0.4:
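The same tilting of the contours can be verified numerically. Here is a sketch using Python's scipy (rather than R's fMultivar); the 10 degrees of freedom are a hypothetical choice, and the correlation of -0.7 matches the plots described above:

```python
import numpy as np
from scipy import stats

# Standard bivariate Student-t densities with 10 degrees of freedom:
# one with zero correlation, one with correlation -0.7
df, rho = 10, -0.7
t_indep = stats.multivariate_t(loc=[0, 0], shape=np.eye(2), df=df)
t_corr = stats.multivariate_t(loc=[0, 0], shape=[[1, rho], [rho, 1]], df=df)

# Zero correlation: circular contours, so the density takes the same value
# at (1, 1) and (1, -1). With rho = -0.7 the contours become ellipses
# tilted along the (+, -) direction, so (1, -1) has the higher density.
print(np.isclose(t_indep.pdf([1, 1]), t_indep.pdf([1, -1])))
print(t_corr.pdf([1, -1]) > t_corr.pdf([1, 1]))
```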

Now let's relate these contour plots to the notion of a *confidence region*. Look at the last contour plot, and focus on the contour that is labeled 0.1. This is giving us a region within which 90% of the bivariate density lies. That single elliptical line marks out a (random) region that has 90% probability content. Notice that the elliptical contours are all “centered” at the point (0 , 0).

When we relate this to the construction of a confidence region for (β_{1} , β_{2})’ in our regression model, we can see that once we choose the confidence level (say, 95%) we are concentrating on just one of the contours, and that the region will be “centered” at the point determined by (b_{1} , b_{2})’. This is what we see when we estimate a regression model with EViews, and then select **View**, **Coefficient Diagnostics**, **Confidence Ellipse**. We then have the opportunity to specify the confidence level, which coefficients are to be considered in a pair-wise manner, and how we want to display the individual confidence intervals for each individual coefficient:

In this last plot, we see that the confidence ellipse for a 95% confidence level is “centered” at the point (1.42, -0.007), which corresponds to the OLS estimates for the intercept and slope coefficients in the regression output above. If we repeated this exercise many, many times, then 95% of the regions created would cover the true values of the intercept and slope coefficients in this model. Of course, we will never know if this particular region does. Notice that the dotted vertical lines in the confidence ellipse plot give us the limits for a 95% confidence interval for just the intercept coefficient by itself. In this case, the interval has a lower limit of just under 1.2, and an upper limit of just under 1.7. Using the OLS regression output above, you should be able to quickly determine the exact values for the limits of this interval. In the same manner, the two horizontal dotted lines give us the lower and upper limits of a 95% confidence interval for just the slope coefficient by itself. Again, you can use the OLS regression output to convince yourself that these limits are correct.
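EViews draws the ellipse for us, but it is straightforward to trace one out directly, using the standard result that the quadratic form (b – β)’V*^{-1}(b – β)/2 has an F(2, n – k) distribution. A sketch in Python (rather than EViews or R): the point estimates and the off-diagonal covariance of -0.000251 come from the example above, while the two variances and the degrees of freedom are hypothetical:

```python
import numpy as np
from scipy import stats

# Point estimates for (intercept, slope) and an estimated covariance
# matrix; the variances and degrees of freedom are hypothetical
b = np.array([1.42, -0.007])
V = np.array([[0.0144, -0.000251],
              [-0.000251, 0.00004]])
dof, alpha = 40, 0.05

# (b - beta)' V^{-1} (b - beta) / 2 ~ F(2, n - k), so the boundary of
# the 95% confidence ellipse is where the quadratic form equals 2*F_crit
c = 2 * stats.f.ppf(1 - alpha, 2, dof)

# Trace the boundary: beta = b + sqrt(c) * L u, where L L' = V (Cholesky)
# and u runs around the unit circle
L = np.linalg.cholesky(V)
theta = np.linspace(0, 2 * np.pi, 200)
unit_circle = np.vstack([np.cos(theta), np.sin(theta)])
boundary = b[:, None] + np.sqrt(c) * (L @ unit_circle)

# Check: every boundary point returns the quadratic form to the value c
d = boundary - b[:, None]
q = np.sum(d * (np.linalg.inv(V) @ d), axis=0)
print(np.allclose(q, c))
```

Feeding `boundary` to any plotting routine reproduces an ellipse of the kind EViews displays, centered at (1.42, -0.007), and the negative covariance gives it the negative slope discussed below.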

Note that as you increase the confidence level, the area of the confidence ellipse will increase, in the same way that a confidence interval becomes wider as you increase the confidence level, *ceteris paribus*. Finally, the direction in which the confidence ellipse is “sloped” in this example indicates that b_{1} and b_{2} must have a negative covariance. This is readily verified by selecting **View**, **Covariance Matrix**, and observing that the covariance is -0.000251:

The EViews workfile that I used for this example is available on the code page for this blog, **here**, and the data are **here**.

*Postscript ~ What would the confidence region look like if we were dealing with 3 coefficients?*
