Parametric Inference: Karlin-Rubin Theorem


A family of pdfs or pmfs $\{g(t|\theta):\theta\in\Theta\}$ for a univariate random variable $T$ with real-valued parameter $\theta$ has a monotone likelihood ratio (MLR) if, for every $\theta_2>\theta_1$, $g(t|\theta_2)/g(t|\theta_1)$ is a monotone (nonincreasing or nondecreasing) function of $t$ on $\{t:g(t|\theta_1)>0\;\text{or}\;g(t|\theta_2)>0\}$. Note that $c/0$ is defined as $\infty$ if $0< c$.
Consider testing $H_0:\theta\leq \theta_0$ versus $H_1:\theta>\theta_0$. Suppose that $T$ is a sufficient statistic for $\theta$ and the family of pdfs or pmfs $\{g(t|\theta):\theta\in\Theta\}$ of $T$ has a nondecreasing MLR. Then for any $t_0$, the test that rejects $H_0$ if and only if $T>t_0$ is a UMP level $\alpha$ test, where $\alpha=P_{\theta_0}(T>t_0)$.
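As a quick sanity check of the MLR definition, the ratio can be evaluated on a grid. Here is a minimal R sketch, assuming the $\mathrm{n}(\theta,1)$ family with the hypothetical choices $\theta_1=1$ and $\theta_2=2$:

```r
# Numeric check of the MLR property for the n(theta, 1) family:
# for theta2 > theta1, g(t | theta2) / g(t | theta1) should be
# monotone in t (here, nondecreasing).
t     <- seq(-5, 5, by = 0.01)
ratio <- dnorm(t, mean = 2) / dnorm(t, mean = 1)  # theta2 = 2, theta1 = 1
all(diff(ratio) >= 0)  # expect TRUE: the ratio equals exp(t - 3/2), increasing in t
```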
Example 1
To better understand the theorem, consider a single observation, $X$, from $\mathrm{n}(\theta,1)$, and test the hypotheses $$ H_0:\theta\leq \theta_0\quad\mathrm{versus}\quad H_1:\theta>\theta_0. $$ For any fixed $\theta_1>\theta_0$, the likelihood ratio test statistic is $$ \lambda(x)=\frac{f(x|\theta_1)}{f(x|\theta_0)}, $$ and the null hypothesis is rejected if $\lambda(x)>k$. To see whether the distribution of the sample has the MLR property, we simplify this ratio as follows: $$ \begin{aligned} \lambda(x)&=\frac{\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{(x-\theta_1)^2}{2}\right]}{\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{(x-\theta_0)^2}{2}\right]}\\ &=\exp \left[-\frac{x^2-2x\theta_1+\theta_1^2}{2}+\frac{x^2-2x\theta_0+\theta_0^2}{2}\right]\\ &=\exp\left[\frac{2x\theta_1-\theta_1^2-2x\theta_0+\theta_0^2}{2}\right]\\ &=\exp\left[\frac{2x(\theta_1-\theta_0)-(\theta_1^2-\theta_0^2)}{2}\right]\\ &=\exp\left[x(\theta_1-\theta_0)\right]\times\exp\left[-\frac{\theta_1^2-\theta_0^2}{2}\right], \end{aligned} $$ which is increasing as a function of $x$, since $\theta_1>\theta_0$.
Figure 1. Normal Densities with $\mu=1,2$.
As an illustration, consider Figure 1. The likelihood ratio of these two models is monotone increasing, as seen in Figure 2, so rejecting $H_0$ when $\lambda(x)>k$ is equivalent to rejecting it when $T>t_0$.
Figure 2. Likelihood Ratio of the Normal Densities.
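The original figures were rendered as images; an R sketch along the following lines reproduces them, assuming $\theta_0=1$ and $\theta_1=2$ as in the caption of Figure 1:

```r
# Figure 1: densities of n(1, 1) and n(2, 1)
curve(dnorm(x, mean = 1), from = -3, to = 6, ylab = "f(x)")
curve(dnorm(x, mean = 2), add = TRUE, lty = 2)
legend("topright", legend = c("n(1, 1)", "n(2, 1)"), lty = 1:2)

# Figure 2: likelihood ratio lambda(x) = f(x | 2) / f(x | 1),
# monotone increasing in x
curve(dnorm(x, mean = 2) / dnorm(x, mean = 1), from = -3, to = 6,
      ylab = expression(lambda(x)))
```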
By the factorization theorem, the likelihood ratio test statistic can be written as a function of the sufficient statistic, since the term $h(x)$ cancels out. That is, $$ \lambda(t)=\frac{g(t|\theta_1)}{g(t|\theta_0)}. $$ By the Karlin-Rubin theorem, the test with rejection region $R=\{t:t>t_0\}$ is a uniformly most powerful level-$\alpha$ test, where $t_0$ satisfies the following (here $T=X$, since the single observation is itself sufficient): $$ \begin{aligned} \mathrm{P}(T>t_0|\theta_0)&=\mathrm{P}(T\in R|\theta_0)\\ \alpha&=1-\mathrm{P}(X\leq t_0|\theta_0)\\ 1-\alpha&=\int_{-\infty}^{t_0}\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{(x-\theta_0)^2}{2}\right]\operatorname{d}x. \end{aligned} $$ Hence $t_0$ is the $(1-\alpha)$ quantile of the $\mathrm{n}(\theta_0,1)$ distribution, that is, $t_0=\theta_0+z_{\alpha}$, where $z_{\alpha}$ is the upper-$\alpha$ quantile of the standard normal; thus we reject $H_0$ if $T>\theta_0+z_{\alpha}$.
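In R, the cutoff comes straight from qnorm. A minimal sketch, assuming the hypothetical values $\alpha=0.05$ and $\theta_0=0$ (so that $t_0=z_{\alpha}\approx 1.645$):

```r
# Cutoff t0 for H0: theta <= theta0 vs H1: theta > theta0 at level alpha;
# t0 is the (1 - alpha) quantile of n(theta0, 1), i.e. theta0 + z_alpha.
alpha  <- 0.05   # assumed level
theta0 <- 0      # assumed null value
t0 <- qnorm(1 - alpha, mean = theta0, sd = 1)

# Decision rule: reject H0 when the observed x exceeds t0.
x <- 1.9         # hypothetical observation
x > t0           # TRUE here, since t0 is about 1.645
```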

Example 2
Now consider testing the hypotheses $H_0:\theta\geq \theta_0$ versus $H_1:\theta< \theta_0$ using a single observation $X$ from Beta($\theta$, 2); to be specific, let $\theta_0=4$ and $\theta_1=3$. Can we apply Karlin-Rubin? Of course! Visually, we have something like Figure 3.
Figure 3. Beta Densities Under Different Parameters.
Note that for this test $\theta_1<\theta_0$, and the likelihood ratio test statistic simplifies as follows: $$ \begin{aligned} \lambda(x)&=\frac{f(x|\theta_1=3, 2)}{f(x|\theta_0=4, 2)}=\frac{\displaystyle\frac{\Gamma(\theta_1+2)}{\Gamma(\theta_1)\Gamma(2)}x^{\theta_1-1}(1-x)^{2-1}}{\displaystyle\frac{\Gamma(\theta_0+2)}{\Gamma(\theta_0)\Gamma(2)}x^{\theta_0-1}(1-x)^{2-1}}\\ &=\frac{\displaystyle\frac{\Gamma(5)}{\Gamma(3)\Gamma(2)}x^{2}(1-x)}{\displaystyle\frac{\Gamma(6)}{\Gamma(4)\Gamma(2)}x^{3}(1-x)}=\frac{\displaystyle\frac{12\Gamma(3)}{\Gamma(3)\Gamma(2)}x^{2}(1-x)}{\displaystyle\frac{20\Gamma(4)}{\Gamma(4)\Gamma(2)}x^{3}(1-x)}\\ &=\frac{3}{5x}, \end{aligned} $$ which is decreasing as a function of $x$; see the plot in Figure 4. Since $\lambda$ is decreasing, rejecting $H_0$ when $\lambda(x)>k$ is equivalent to rejecting when $T<t_0$, where $t_0$ satisfies the following: $$ \begin{aligned} \mathrm{P}(T < t_0|\theta_0)&=\mathrm{P}(X < t_0|\theta_0)\\ \alpha&=\int_{0}^{t_0}\frac{\Gamma(\theta_0+2)}{\Gamma(\theta_0)\Gamma(2)}x^{\theta_0-1}(1-x)^{2-1}\operatorname{d}x\\ \alpha&=\int_{0}^{t_0}\frac{\Gamma(6)}{\Gamma(4)\Gamma(2)}x^{3}(1-x)\operatorname{d}x. \end{aligned} $$ Hence $t_0$ is the $\alpha$ quantile of the Beta$(4,2)$ distribution, $x_{\alpha}=t_0$, and thus we reject $H_0$ if $T < x_{\alpha}$.
Figure 4. Likelihood Ratio of the Beta Densities.
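As in Example 1, the cutoff is a quantile, now of the Beta$(4,2)$ density, and qbeta gives it directly. A sketch with the assumed level $\alpha=0.05$:

```r
# Cutoff t0 for H0: theta >= 4 vs H1: theta < 4 with X ~ Beta(theta, 2);
# t0 is the alpha quantile of Beta(4, 2), and we reject H0 when X < t0.
alpha <- 0.05    # assumed level
t0 <- qbeta(alpha, shape1 = 4, shape2 = 2)
t0

# Figure 4: the likelihood ratio lambda(x) = 3 / (5x), decreasing in x
curve(3 / (5 * x), from = 0.05, to = 1, ylab = expression(lambda(x)))
```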

Reference

  1. Casella, G. and Berger, R.L. (2001). Statistical Inference. Thomson Learning, Inc.
