Parametric Inference: Karlin-Rubin Theorem
A family of pdfs or pmfs $\{g(t|\theta) : \theta \in \Theta\}$ for a univariate random variable $T$ with real-valued parameter $\theta$ has a monotone likelihood ratio (MLR) if, for every $\theta_2 > \theta_1$, $g(t|\theta_2)/g(t|\theta_1)$ is a monotone (nonincreasing or nondecreasing) function of $t$ on $\{t : g(t|\theta_1) > 0 \text{ or } g(t|\theta_2) > 0\}$. Note that $c/0$ is defined as $\infty$ if $0 < c$.
Consider testing $H_0: \theta \leq \theta_0$ versus $H_1: \theta > \theta_0$. Suppose that $T$ is a sufficient statistic for $\theta$ and the family of pdfs or pmfs $\{g(t|\theta) : \theta \in \Theta\}$ of $T$ has an MLR. Then for any $t_0$, the test that rejects $H_0$ if and only if $T > t_0$ is a UMP level $\alpha$ test, where $\alpha = P_{\theta_0}(T > t_0)$.
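To make the roles of $t_0$ and $\alpha$ concrete, here is a minimal R sketch, assuming $T$ is a single observation from $\mathrm{n}(\theta, 1)$ (anticipating Example 1); the values `theta0 = 0` and `alpha = 0.05` are illustrative choices, not part of the theorem.

```r
# UMP level-alpha test of H0: theta <= theta0 vs H1: theta > theta0,
# assuming T ~ n(theta, 1): reject H0 iff T > t0, where alpha = P_theta0(T > t0)
theta0 <- 0     # illustrative null value
alpha  <- 0.05  # illustrative level
t0 <- qnorm(1 - alpha, mean = theta0, sd = 1)  # cutoff with P_theta0(T > t0) = alpha

# Power function: P_theta(T > t0) equals alpha at theta0 and
# increases in theta, as the theorem guarantees for an MLR family
power_fn <- function(theta) 1 - pnorm(t0, mean = theta, sd = 1)
power_fn(theta0)        # 0.05, the size of the test
power_fn(c(0.5, 1, 2))  # increasing power under the alternative
```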
Example 1

To better understand the theorem, consider a single observation, $X$, from $\mathrm{n}(\theta, 1)$, and test the hypotheses $$ H_0: \theta \leq \theta_0 \quad \mathrm{versus} \quad H_1: \theta > \theta_0. $$ For $\theta_1 > \theta_0$, the likelihood ratio test statistic is $$ \lambda(x) = \frac{f(x|\theta_1)}{f(x|\theta_0)}, $$ and the null hypothesis is rejected if $\lambda(x) > k$. To see whether the family has the MLR property, we simplify this ratio as follows: $$ \begin{aligned} \lambda(x) &= \frac{\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{(x-\theta_1)^2}{2}\right]}{\frac{1}{\sqrt{2\pi}}\exp\left[-\frac{(x-\theta_0)^2}{2}\right]}\\ &= \exp\left[-\frac{x^2 - 2x\theta_1 + \theta_1^2}{2} + \frac{x^2 - 2x\theta_0 + \theta_0^2}{2}\right]\\ &= \exp\left[\frac{2x\theta_1 - \theta_1^2 - 2x\theta_0 + \theta_0^2}{2}\right]\\ &= \exp\left[\frac{2x(\theta_1 - \theta_0) - (\theta_1^2 - \theta_0^2)}{2}\right]\\ &= \exp\left[x(\theta_1 - \theta_0)\right] \times \exp\left[-\frac{\theta_1^2 - \theta_0^2}{2}\right], \end{aligned} $$ which is increasing as a function of $x$, since $\theta_1 > \theta_0$.
Figure 1. Normal Densities with $\mu = 1, 2$.
Figure 2. Likelihood Ratio of the Normal Densities.
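Figures 1 and 2 can be reproduced with a few lines of base R; this is a sketch of one way to draw them (not necessarily the original code behind the figures), using $\theta_0 = 1$ and $\theta_1 = 2$ as in Figure 1, with a numerical check of the closed form derived above.

```r
# Normal densities (Figure 1) and their likelihood ratio (Figure 2)
theta0 <- 1; theta1 <- 2
x <- seq(-3, 6, length.out = 500)

# Figure 1: the two n(theta, 1) densities
plot(x, dnorm(x, mean = theta0), type = "l", ylab = "density")
lines(x, dnorm(x, mean = theta1), lty = 2)

# Figure 2: lambda(x) = f(x | theta1) / f(x | theta0), increasing in x
lambda <- dnorm(x, mean = theta1) / dnorm(x, mean = theta0)
plot(x, lambda, type = "l", ylab = expression(lambda(x)))

# Numerical check against the closed form derived above
all.equal(lambda, exp(x * (theta1 - theta0)) * exp(-(theta1^2 - theta0^2) / 2))
```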
Example 2
Now consider testing the hypotheses $H_0: \theta \geq \theta_0$ versus $H_1: \theta < \theta_0$ using a single observation $X$ from Beta($\theta$, 2); to be specific, let $\theta_0 = 4$ and $\theta_1 = 3$. Can we apply Karlin-Rubin? Of course! The family still has an MLR, since for $\theta_2 > \theta_1$ the ratio $f(x|\theta_2)/f(x|\theta_1)$ is proportional to $x^{\theta_2 - \theta_1}$, which is increasing in $x$. Because the inequalities in the hypotheses are reversed, the UMP level $\alpha$ test now rejects $H_0$ if and only if $X < t_0$, where $\alpha = P_{\theta_0}(X < t_0)$. Visually, we have something like Figure 3.
Figure 3. Beta Densities Under Different Parameters.
Figure 4. Likelihood Ratio of the Beta Densities.
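As a companion to Figures 3 and 4, the following base-R sketch (again, one possible rendering, not the original code) plots the two Beta densities and their ratio for $\theta_0 = 4$ and $\theta_1 = 3$, and confirms numerically that $\lambda(x) = f(x|3)/f(x|4) = 0.6/x$, a decreasing function of $x$.

```r
# Beta(theta, 2) densities (Figure 3) and their likelihood ratio (Figure 4)
theta0 <- 4; theta1 <- 3
x <- seq(0.01, 0.99, length.out = 500)

# Figure 3: densities under the null and alternative parameter values
plot(x, dbeta(x, shape1 = theta0, shape2 = 2), type = "l", ylab = "density")
lines(x, dbeta(x, shape1 = theta1, shape2 = 2), lty = 2)

# Figure 4: lambda(x) = f(x | theta1) / f(x | theta0), decreasing in x,
# so the UMP test rejects for small observed values of X
lambda <- dbeta(x, theta1, 2) / dbeta(x, theta0, 2)
plot(x, lambda, type = "l", ylab = expression(lambda(x)))

# Check the closed form: [12 x^2 (1-x)] / [20 x^3 (1-x)] = 0.6 / x
all.equal(lambda, 0.6 / x)
```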