Statistics
1) Consider iid random variables X1, X2, . . . , Xn having pdf
f(x|θ) = exp(θ − x) for x ≥ θ, and 0 for x < θ,
for some θ ∈ (−∞, ∞).
(a) (3 points) The MLE of θ is the sample minimum, X(1). Give the expected value of the MLE.
(b) (3 points) Give the MSE of the MLE.
(c) (10 points) Give the appropriate Pitman estimator of θ (choosing between the estimator for a
location parameter and the estimator for a scale parameter).
(d) (3 points) Give the MSE of the Pitman estimator. (It can be noted that the Pitman estimator has
a smaller MSE than the MLE. For this problem, the UMVUE is equal to the Pitman estimator,
and so they have the same MSE value. But for some estimation settings, the Pitman estimator has
a smaller MSE than both the MLE and the UMVUE.)
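As an optional numerical check of parts (a) and (b) (in the same spirit as the checks suggested in problem 2), here is a short Monte Carlo sketch. The values of θ, n, and the number of replications are arbitrary assumptions for illustration; the printed estimates are meant to be compared against the formulas you derive.

```python
import math
import random

# Monte Carlo sketch for the shifted-exponential model
# f(x|theta) = exp(theta - x), x >= theta.
# If U ~ Uniform(0,1), then theta - log(1 - U) has this pdf
# (inverse-CDF method), since -log(1 - U) ~ Exponential(1).
random.seed(0)
theta, n, reps = 2.0, 10, 100_000  # arbitrary choices for the check

mins = []
for _ in range(reps):
    sample = [theta - math.log(1.0 - random.random()) for _ in range(n)]
    mins.append(min(sample))

mean_mle = sum(mins) / reps                           # estimates E[X(1)]
mse_mle = sum((m - theta) ** 2 for m in mins) / reps  # estimates MSE of X(1)
print(f"estimated E[X(1)] = {mean_mle:.4f}")
print(f"estimated MSE     = {mse_mle:.4f}")
```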
2) Suppose that we observe iid random variables X1, X2, . . . , Xn having pmf
fX(x|θ) = θ(1 − θ)^(x−1) I_{1,2,3,...}(x),
where θ ∈ Θ = (0, 1). Consider the prior density π(θ) = I(0,1)(θ). (Note: Parts (b) and (d) are worth
no points, and solutions for them should not be submitted. But I encourage you to do them, since they
can provide checks of your work for parts (a) and (c).)
(a) (4 points) Give the pdf of the posterior distribution.
(b) (0 points — don’t submit a solution for this part) The mode of the posterior density (the value of θ
which maximizes π(θ|x)) is sometimes called the generalized maximum likelihood estimate. (Note
that the generalized mle doesn’t depend on the choice of the loss function. We can employ this
estimate even if we don’t identify an appropriate loss function. Of course, if you can identify an
appropriate loss function, then it makes sense to use it by finding the estimator which minimizes
the Bayes risk. In part (c), we consider a squared-error loss function, and the corresponding Bayes
estimator is the mean of the posterior distribution instead of its mode.) Use the posterior density
requested in part (a) and find the generalized mle. (The estimate is 1/x̄, which is different from, but similar to, the estimate requested for part (c); for large sample sizes the two estimates will be very close to one another.)
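The 1/x̄ value quoted above can be checked numerically. This sketch assumes a small hypothetical data set, and maximizes the posterior kernel θ^n (1 − θ)^(Σxᵢ − n) (the uniform prior times the geometric likelihood; check this kernel against your own part (a) derivation) over a fine grid.

```python
import math

# Hypothetical geometric data -- an assumption for illustration only
x = [3, 1, 5, 2, 4, 1, 2, 3]
n, s = len(x), sum(x)

# Log of the posterior kernel theta^n * (1 - theta)^(s - n),
# i.e. the uniform prior times the geometric likelihood.
# Verify this kernel against your own part (a) answer.
def log_post(theta):
    return n * math.log(theta) + (s - n) * math.log(1 - theta)

# Grid search for the posterior mode (the generalized mle)
grid = [i / 100_000 for i in range(1, 100_000)]
mode = max(grid, key=log_post)
print(f"grid mode = {mode:.4f},  1/xbar = {n / s:.4f}")
```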
(c) (4 points) Give the Bayes estimate of θ based on the squared-error loss function.
(d) (0 points — don’t submit a solution for this part) Show that the Bayes estimator corresponding to
part (c) is a consistent estimator.
3) (10 points) Consider the N(θ, 1) random variable X, the prior density π(θ) = (2π)^(−1/2) exp(−θ²/2) (for all θ ∈ (−∞, ∞)), and the loss function
l(θ, θ̂) = exp((θ̂ − θ)/2) − (θ̂ − θ)/2 − 1.
(Note: This is a special case of Zellner’s LINEX loss function. It is asymmetric in that it penalizes
overestimation differently than underestimation.) Find the posterior density, and give the Bayes estimate
of θ based on the loss function given above. (Note: Be sure to give both items that are requested —
draw a box around (or highlight) each on your submitted solutions.) Note that the estimate is to be
based on a single observation, x, and not a sample of size n. (Note: With this loss function we do not
have that the Bayes estimate of θ is E(θ | X = x). Instead, the Bayes estimate is the value of a which
minimizes
∫_Θ l(θ, a) p(θ|x) dθ = ∫_Θ e^((a−θ)/2) p(θ|x) dθ − (a/2) ∫_Θ p(θ|x) dθ + (1/2) ∫_Θ θ p(θ|x) dθ − ∫_Θ p(θ|x) dθ.
While evaluating the first of these integrals may be a bit messy, the other ones should be relatively easy
if you keep in mind that p(θ|x) is a valid pdf.)
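The note above characterizes the Bayes estimate as the minimizer of the posterior expected loss, which suggests an optional grid-search cross-check of your final answer. The standard normal density used below is only a stand-in (an assumption for illustration, NOT the posterior asked for in the problem); substitute the posterior density you derive.

```python
import math

def loss(theta, a):
    # LINEX-type loss from the problem statement
    d = a - theta
    return math.exp(d / 2) - d / 2 - 1

def bayes_estimate(post_pdf, lo=-8.0, hi=8.0, steps=2000):
    """Grid-minimize the expected loss: integral of l(theta, a) p(theta|x) dtheta."""
    h = (hi - lo) / steps
    thetas = [lo + (i + 0.5) * h for i in range(steps)]
    weights = [post_pdf(t) * h for t in thetas]  # midpoint rule

    def expected_loss(a):
        return sum(w * loss(t, a) for t, w in zip(thetas, weights))

    actions = [-4.0 + 0.01 * i for i in range(801)]  # candidate estimates
    return min(actions, key=expected_loss)

# Stand-in posterior: a standard normal density.  Replace this with
# the posterior density you derive to check your own answer.
std_normal = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
a_star = bayes_estimate(std_normal)
print(f"grid minimizer of the expected loss = {a_star:.2f}")
```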