STA 261 – Winter 2011 – Practice Problems Week 4 – Solution
9.69
It is easy to show that μ = E(Y) = (θ + 1)/(θ + 2), so that θ = (2μ − 1)/(1 − μ). Thus, the MOM estimator is θ̂ = (2Ȳ − 1)/(1 − Ȳ).
Since Ȳ is a consistent estimator of μ by the Law of Large Numbers, θ̂ converges in probability to θ. However, this estimator is not a function of the sufficient statistic, so it cannot be the MVUE.
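As a numerical sanity check (a sketch, not part of the original solution): assuming the Ex. 9.69 density f(y) = (θ + 1)y^θ on (0, 1), the CDF is F(y) = y^(θ + 1), so draws can be simulated by inverse transform, and the MOM estimate should settle near the true θ for large n.

```python
import random

def mom_estimate(theta, n, seed=1):
    """Simulate n draws from f(y) = (theta + 1) * y**theta on (0, 1)
    via the inverse CDF (Y = U**(1/(theta + 1))), then apply the MOM
    estimator theta_hat = (2*Ybar - 1) / (1 - Ybar)."""
    rng = random.Random(seed)
    ybar = sum(rng.random() ** (1 / (theta + 1)) for _ in range(n)) / n
    return (2 * ybar - 1) / (1 - ybar)

print(mom_estimate(2.0, 200_000))  # close to the true theta = 2
```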
9.70
Since μ = λ, the MOM estimator of λ is λ̂ = m₁′ = Ȳ.
9.71
Since E(Y) = μ₁′ = 0 and E(Y²) = μ₂′ = V(Y) = σ², we have that σ̂² = m₂′ = (1/n) Σᵢ₌₁ⁿ Yᵢ².
9.74
a. First, calculate μ₁′ = E(Y) = ∫₀^θ 2y(θ − y)/θ² dy = θ/3. Thus, the MOM estimator of θ is θ̂ = 3Ȳ.
b. The likelihood is L(θ) = 2^n θ^(−2n) ∏ᵢ₌₁ⁿ (θ − yᵢ). Clearly, the likelihood cannot be factored into a function that depends on the data only through Ȳ, so the MOM estimator is not a sufficient statistic for θ.
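A quick simulation sketch of part a (using the density f(y) = 2(θ − y)/θ² on (0, θ) from the exercise): the CDF is F(y) = 1 − (1 − y/θ)², so Y = θ(1 − √(1 − U)), and 3Ȳ should settle near the true θ.

```python
import math
import random

def mom_estimate(theta, n, seed=1):
    """Draw n values from f(y) = 2*(theta - y)/theta**2 on (0, theta) via
    the inverse CDF y = theta*(1 - sqrt(1 - u)), then apply theta_hat = 3*Ybar."""
    rng = random.Random(seed)
    ybar = sum(theta * (1 - math.sqrt(1 - rng.random())) for _ in range(n)) / n
    return 3 * ybar

print(mom_estimate(3.0, 200_000))  # close to the true theta = 3
```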
9.77
Here, μ₁′ = E(Y) = 3θ/2. So, the MOM estimator of θ is θ̂ = (2/3)Ȳ.
9.79
For Y following the given Pareto distribution,
E(Y) = ∫_β^∞ y · αβ^α y^(−(α+1)) dy = αβ^α [ y^(−α+1)/(−α+1) ]_β^∞ = αβ/(α − 1), provided α > 1.
The mean is not defined if α ≤ 1. Thus, a generalized MOM estimator for α cannot be expressed.
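To illustrate why the first moment fails here (a hedged sketch; β = 1 and the α values are arbitrary choices): for α > 1 the sample mean stabilizes near αβ/(α − 1), but for α ≤ 1 the running mean is dominated by rare extreme draws and never settles.

```python
import random

def pareto_mean(alpha, beta, n, seed=1):
    """Sample mean of n draws from the Pareto density
    f(y) = alpha * beta**alpha / y**(alpha + 1), y >= beta, simulated via
    Y = beta / U**(1/alpha); 1 - random() keeps U in (0, 1], avoiding U = 0."""
    rng = random.Random(seed)
    return sum(beta / (1 - rng.random()) ** (1 / alpha) for _ in range(n)) / n

# alpha > 1: the sample mean settles near alpha*beta/(alpha - 1) = 1.5
print(pareto_mean(3.0, 1.0, 200_000))
# alpha <= 1: E(Y) does not exist, so the sample mean never stabilizes
print(pareto_mean(0.8, 1.0, 200_000))
```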
9.82
The likelihood function is L()  n r n
 y 
r 1
n
i 1
i
a. By Theorem 9.4, a sufficient statistic for θ is
b. The log–likelihood is


exp  i 1 yir /  .

n
n
Yr .
i 1 i


ln L( )   n ln   n ln r  ( r  1) ln i 1 yi  i 1 yir /  .
n
By taking a derivative w.r.t. θ and equating to 0, we find ˆ 
n
1
n

n
Yr .
i 1 i
c. Note that ˆ is a function of the sufficient statistic. Since it is easily shown that
E (Y r )   , ˆ is then unbiased and the MVUE for θ.
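A simulation sketch for part c (assuming the density from the exercise, under which W = Y^r is exponential with mean θ; the parameter values below are arbitrary):

```python
import random

def mle_theta(theta, r, n, seed=1):
    """Under f(y) = (r/theta) * y**(r-1) * exp(-y**r/theta), W = Y**r is
    exponential with mean theta, so simulate Y = W**(1/r) with
    W ~ Exp(mean theta) and apply the MLE theta_hat = mean of Y_i**r."""
    rng = random.Random(seed)
    ys = [(theta * rng.expovariate(1.0)) ** (1 / r) for _ in range(n)]
    return sum(y ** r for y in ys) / n

print(mle_theta(2.0, 3.0, 200_000))  # close to theta = 2, since E(Y**r) = theta
```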
9.83
a. The likelihood function is L(θ) = (2θ + 1)^(−n). Let γ = γ(θ) = 2θ + 1. Then the likelihood can be expressed as L(γ) = γ^(−n), which is maximized for small values of γ. The smallest value of γ that maximizes the likelihood without violating the support (see Example 9.16) is γ̂ = Y₍ₙ₎. Thus, by the invariance property of MLEs, θ̂ = (Y₍ₙ₎ − 1)/2.
b. Since V(Y) = (2θ + 1)²/12 = γ²/12, the MLE of V(Y) is Y₍ₙ₎²/12 by the invariance principle.
9.84
This exercise is a special case of Ex. 9.85, so we will refer to those results.
a. The MLE is θ̂ = Ȳ/2, so the maximum likelihood estimate is ȳ/2 = 63.
b. E(θ̂) = θ and V(θ̂) = V(Ȳ/2) = θ²/6.
c. The bound on the error of estimation is 2√V(θ̂) = 2√((130)²/6) = 106.14.
d. Note that V(Y) = 2θ² = 2(130)². Thus, the MLE of V(Y) is 2(θ̂)².
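A one-line check of the arithmetic in part c (using the value 130 from the solution above):

```python
import math

# Bound on the error of estimation: 2*sqrt(V(theta_hat)) with V = 130**2/6
bound = 2 * math.sqrt(130 ** 2 / 6)
print(round(bound, 2))  # 106.14
```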
9.85
a. For α > 0 known, the likelihood function is
L(θ) = [Γ(α)]^(−n) θ^(−αn) (∏ᵢ₌₁ⁿ yᵢ)^(α−1) exp(−Σᵢ₌₁ⁿ yᵢ/θ).
The log-likelihood is then
ln L(θ) = −n ln Γ(α) − αn ln θ + (α − 1) Σᵢ₌₁ⁿ ln yᵢ − Σᵢ₌₁ⁿ yᵢ/θ,
so that
(d/dθ) ln L(θ) = −αn/θ + Σᵢ₌₁ⁿ yᵢ/θ².
Equating this to 0 and solving for θ, we find the MLE of θ to be
θ̂ = (1/(nα)) Σᵢ₌₁ⁿ Yᵢ = Ȳ/α.
b. Since E(Y) = αθ and V(Y) = αθ², E(θ̂) = θ and V(θ̂) = θ²/(nα).
c. Since Ȳ is a consistent estimator of μ = αθ, it is clear that θ̂ = Ȳ/α must be consistent for θ.
d. From the likelihood function, it is seen from Theorem 9.4 that U = Σᵢ₌₁ⁿ Yᵢ is a sufficient statistic for θ. Since the gamma distribution is in the exponential family of distributions, U is also a minimal sufficient statistic.
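A simulation sketch of parts a and b (α and θ values are arbitrary; the standard library's `random.gammavariate(alpha, beta)` parameterizes the gamma by shape and scale):

```python
import random

def mle_theta(alpha, theta, n, seed=1):
    """Simulate a gamma(shape alpha, scale theta) sample with alpha known
    and apply the MLE theta_hat = Ybar / alpha."""
    rng = random.Random(seed)
    ybar = sum(rng.gammavariate(alpha, theta) for _ in range(n)) / n
    return ybar / alpha

print(mle_theta(2.0, 3.0, 200_000))  # close to theta = 3
```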
9.88
The likelihood function is L()  (  1) n
 y  . The MLE is ˆ  n / 

n
i 1
i
n
i 1
ln Yi .
This is a different estimator that the MOM estimator from Ex. 9.69, however note that the
MLE is a function of the sufficient statistic.
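A sketch comparing the two estimators on the same simulated data (assuming, as in Ex. 9.69, the density f(y) = (θ + 1)y^θ on (0, 1)): both are consistent, so for large n both should land near the true θ.

```python
import math
import random

def estimates(theta, n, seed=1):
    """Simulate n draws from f(y) = (theta + 1) * y**theta on (0, 1)
    (inverse CDF: Y = U**(1/(theta + 1))) and return the MLE
    -n/sum(ln Y_i) - 1 and the Ex. 9.69 MOM (2*Ybar - 1)/(1 - Ybar)."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], avoiding log(0) below
    ys = [(1 - rng.random()) ** (1 / (theta + 1)) for _ in range(n)]
    mle = -n / sum(math.log(y) for y in ys) - 1
    ybar = sum(ys) / n
    mom = (2 * ybar - 1) / (1 - ybar)
    return mle, mom

print(estimates(2.0, 200_000))  # both entries close to theta = 2
```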
9.91
Refer to Ex. 9.83 and Example 9.16. Let γ = 2θ. Then the MLE for γ is γ̂ = Y₍ₙ₎, and by the invariance principle the MLE for θ is θ̂ = Y₍ₙ₎/2.
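A minimal simulation sketch (assuming Y ~ Uniform(0, 2θ), consistent with γ = 2θ being the upper support point; θ = 5 is an arbitrary choice):

```python
import random

def mle_theta(theta, n, seed=1):
    """Assuming Y ~ Uniform(0, 2*theta): the MLE of gamma = 2*theta is
    the sample maximum Y_(n), so theta_hat = Y_(n) / 2."""
    rng = random.Random(seed)
    return max(rng.uniform(0, 2 * theta) for _ in range(n)) / 2

print(mle_theta(5.0, 200_000))  # just below the true theta = 5
```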
9.95
The quantity to be estimated is R = p/(1 − p). Since p̂ = Y/n is the MLE of p, by the invariance principle the MLE for R is R̂ = p̂/(1 − p̂).
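A tiny worked example of the invariance step (the counts y = 60 and n = 100 are hypothetical):

```python
# Hypothetical data: y = 60 successes in n = 100 Bernoulli trials.
y, n = 60, 100
p_hat = y / n                    # MLE of p
r_hat = p_hat / (1 - p_hat)      # MLE of R = p/(1 - p), by invariance
print(round(p_hat, 4), round(r_hat, 4))  # 0.6 1.5
```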