
Question #85788
Which of the following statements are true or false? Give a short proof or a counterexample in support of your answer.
(i) If the correlation coefficient between x and y is 0.75, then the correlation coefficient between (2 + 5x) and (-2y + 3) is -0.75.
(ii) If P(A) = 0.5, P(A ∪ B) = 0.7 and A and B are independent events, then P(B) = 2/5.
(iii) Let X_1, X_2, ..., X_n be a random sample of size n from N(0, σ²). Then S_0² = (∑_{i=1}^n X_i²)/σ² follows a normal distribution.
(iv) A maximum likelihood estimator is always unbiased.
(v) The mean deviation is least when deviations are taken about the mean.
Expert's answer

(i) The correlation coefficient is:


"r(x, y) = \\frac{cov(x, y)}{\\sqrt{\\mathbb V(x) \\mathbb V(y)}} = 0.75"

Hence, using the bilinearity of covariance and the identity $\mathbb{V}(a + bx) = b^2\, \mathbb{V}(x)$:

"r(2 + 5 x, -2 y + 3) = \\frac{cov(2 + 5 x, -2 y + 3)}{\\sqrt{\\mathbb V(2 + 5 x) \\mathbb V(-2 y + 3)}} = \\frac{-10 cov(x,y)}{25 \\sqrt{\\mathbb V(x) 4 \\mathbb V(y)}} = \\frac{-10}{10} 0.75 = -0.75"

So the first is true.
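
To make this concrete, here is a minimal numerical check in Python. The bivariate-normal construction of the correlated pair and the use of NumPy are illustrative assumptions, not part of the original problem; the point is only that affine maps preserve the magnitude of the correlation and a negative slope flips its sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Build a pair (x, y) with correlation 0.75; this bivariate normal
# construction is just one convenient, illustrative choice.
x = rng.standard_normal(n)
y = 0.75 * x + np.sqrt(1 - 0.75**2) * rng.standard_normal(n)

r_xy = np.corrcoef(x, y)[0, 1]
r_uv = np.corrcoef(2 + 5 * x, -2 * y + 3)[0, 1]

print(f"r(x, y)        = {r_xy:+.3f}")  # about +0.75
print(f"r(2+5x, -2y+3) = {r_uv:+.3f}")  # about -0.75: the -2 flips the sign
```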

(ii) If P(A) = 0.5, P(A ∪ B) = 0.7, and A and B are independent, then by the inclusion–exclusion principle:


"P(A \\cup B) = P(A) + P(B) - P(A \\cap B) = P(A) + P(B) - P(A) P(B)""P(B) = \\frac{P(A \\cup B) - P(A)}{1 - P(A)} = \\frac{0.2}{0.5} = 2\/5"

So the second is true.
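
As a sanity check, one can simulate two independent events with the computed P(B) = 2/5 and confirm that the union has probability 0.7. The simulation below is an illustrative sketch, not part of the original answer.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Independent events with P(A) = 0.5 and the claimed P(B) = 2/5.
A = rng.random(n) < 0.5
B = rng.random(n) < 0.4

# P(A ∪ B) should come out near 0.7 = 0.5 + 0.4 - 0.5 * 0.4.
print("P(A ∪ B) ≈", (A | B).mean())
```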

(iii) Let's take n = 1, then

"S = \\frac{X_1 ^2}{\\sigma ^2}"

Since $X_1 \sim N(0, \sigma^2)$, the ratio $X_1/\sigma$ is standard normal, so $S$ is the square of a standard normal variable; that is, $S$ follows the chi-square distribution with one degree of freedom, $\chi_1^2$, not a normal distribution. To see that $S$ cannot be Gaussian, ask what the probability of $S$ being less than zero is. Clearly it is zero, since $S$ is a square. But any normally distributed random variable has a non-zero probability of this event, hence the statement is false.
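
A short simulation illustrates this (an illustrative sketch; the choice $\sigma = 2$ is arbitrary): $S$ never falls below zero, and its sample mean and variance match the $\chi_1^2$ values 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 2.0  # arbitrary illustrative choice
n = 100_000

# S = X^2 / sigma^2 with X ~ N(0, sigma^2) is the square of a
# standard normal variable, i.e. chi-square with 1 degree of freedom.
x = rng.normal(0.0, sigma, size=n)
s = x**2 / sigma**2

print("P(S < 0) =", (s < 0).mean())  # exactly 0: S is a square
print("mean(S)  ≈", s.mean())        # ≈ 1, the mean of chi²₁
print("var(S)   ≈", s.var())         # ≈ 2, the variance of chi²₁
```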

(iv) A simple counterexample is the following:

Consider a sample $X_1, \ldots, X_n$ of random variables uniformly distributed on the interval $(0, \theta)$.

The maximum likelihood estimator for $\theta$ is

$$\hat\theta = \max_{1 \leq i \leq n} X_i.$$

To see this, consider the likelihood, i.e. the product of the densities:


"L = \\frac{1}{\\theta} 1_{0 < X_1 < \\theta} \\cdot \\frac{1}{\\theta} 1_{0 < X_2 < \\theta} ... \\frac{1}{\\theta} 1_{0 < X_n < \\theta}"


If "\\bar \\theta < max X_i" then one of the indicators is 0 (the function "1_{X \\in A}" is called the indicator of the event A, it takes value 1 if X in A, 0 if not) and the whole L is 0.

if "\\bar \\theta \\geq max X_i" then

"\\frac{1}{\\bar \\theta^n} \\geq \\frac{1}{(\\max_i X_i)^n}"

so $L$ is largest at the smallest value the indicators allow. Since the MLE is the argmax of $L$,

$$\hat\theta = \max_i X_i.$$


Now, since each $X_i < \theta$ almost surely, the estimator satisfies

$$P(\hat\theta < \theta) = 1, \qquad P(\hat\theta \geq \theta) = 0.$$

Hence $\mathbb{E}\,\hat\theta < \theta$ (a direct computation gives $\mathbb{E}\,\hat\theta = \frac{n}{n+1}\theta$), so this estimator is biased, and the statement is false.
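
The bias is easy to see numerically. The sketch below (with arbitrary illustrative values $\theta = 5$, $n = 10$) estimates $\mathbb{E}\,\hat\theta$ by simulation and recovers the known value $\frac{n}{n+1}\theta$, which is strictly below $\theta$.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, trials = 5.0, 10, 100_000  # arbitrary illustrative values

# MLE for Uniform(0, theta) is the sample maximum; averaging it over
# many trials estimates E[theta_hat] = n/(n+1) * theta < theta.
samples = rng.uniform(0.0, theta, size=(trials, n))
theta_hat = samples.max(axis=1)

print("E[theta_hat] ≈", theta_hat.mean())          # ≈ 10/11 * 5 ≈ 4.545
print("bias         ≈", theta_hat.mean() - theta)  # ≈ -0.455, nonzero
```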

(v) I assume the mean squared deviation is meant. We want to prove that

"\\mathbb E X = \\arg \\min_F \\sum_{i=1}^n (X_i - F)^2"


where $\bar X$ denotes the sample mean,

$$\bar X = \frac{1}{n} \sum_{i=1}^n X_i.$$


Let's expand the sum by adding and subtracting $\bar X$:


"\\sum_{i=1}^n (X_i - F)^2 = \\sum_{i=1}^n (X_i \\pm \\mathbb E X - F)^2 = \\sum_{i=1}^n [(X_i - \\mathbb E X)^2 + ( \\mathbb E X - F)^2 + 2 (X_i - \\mathbb E X) ( \\mathbb E X - F)]"


The cross term here is 0:


"\\sum_i [2 (X_i - \\mathbb E X) ( \\mathbb E X - F)] =2 ( \\mathbb E X - F) \\sum_i (X_i - \\frac{1}{n}\\sum_j X_j)=2 ( \\mathbb E X - F) (\\sum_i X_i -\\sum_j X_j) = 0"

And we can rewrite:


"\\sum_{i=1}^n (X_i - \\mathbb E X)^2 +n ( \\mathbb E X - F)^2 \\geq \\sum_{i=1}^n (X_i - \\mathbb E X)^2"

Thus $\sum_{i=1}^n (X_i - F)^2$ is at least $\sum_{i=1}^n (X_i - \bar X)^2$ for any choice of $F$, with equality exactly when $F = \bar X$. So the statement (with squared deviations) is true.
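
Here is a quick numerical confirmation (an illustrative sketch; the sample and the grid of candidate centers $F$ are arbitrary choices): scanning candidate centers, the sum of squared deviations bottoms out at the sample mean.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(10.0, 3.0, size=200)  # arbitrary illustrative sample

# Scan candidate centers F and find where the sum of squared
# deviations is smallest; it should coincide with the sample mean.
grid = np.linspace(x.min(), x.max(), 2_001)
ssd = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)

print("argmin over F ≈", grid[ssd.argmin()])
print("sample mean   =", x.mean())  # the two agree up to grid spacing
```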




