(i) The correlation coefficient is defined as

$$r(x, y) = \frac{\operatorname{cov}(x, y)}{\sqrt{\mathbb{V}(x)\,\mathbb{V}(y)}}.$$

Hence, using the bilinearity of covariance and the fact that $\mathbb{V}(ax + b) = a^2\,\mathbb{V}(x)$:

$$r(2 + 5x, -2y + 3) = \frac{\operatorname{cov}(2 + 5x, -2y + 3)}{\sqrt{\mathbb{V}(2 + 5x)\,\mathbb{V}(-2y + 3)}} = \frac{-10\,\operatorname{cov}(x, y)}{\sqrt{25\,\mathbb{V}(x) \cdot 4\,\mathbb{V}(y)}} = -\frac{10}{10} \cdot 0.75 = -0.75.$$

So the first statement is true.
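As a quick numerical sanity check, here is a small NumPy sketch (the sample size and the way the correlated pair is generated are my own choices, not part of the problem) confirming that the affine transformation only flips the sign of the correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate (x, y) with correlation 0.75, matching the problem's value.
n = 1_000_000
x = rng.standard_normal(n)
y = 0.75 * x + np.sqrt(1 - 0.75**2) * rng.standard_normal(n)

r_xy = np.corrcoef(x, y)[0, 1]
r_transformed = np.corrcoef(2 + 5 * x, -2 * y + 3)[0, 1]

print(r_xy)           # ~0.75
print(r_transformed)  # ~-0.75: the affine maps only flip the sign
```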
(ii) If $P(A) = 0.5$, $P(A \cup B) = 0.7$, and $A$ and $B$ are independent, then by the inclusion–exclusion principle:

$$P(A \cup B) = P(A) + P(B) - P(A)P(B), \qquad 0.7 = 0.5 + 0.5\,P(B), \qquad P(B) = 0.4.$$

So the second statement is true.
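For what it's worth, a tiny Monte Carlo sketch (sample size arbitrary) confirms that independent events with these probabilities give $P(A \cup B) \approx 0.7$:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1_000_000
# Independent events with P(A) = 0.5 and the derived P(B) = 0.4.
A = rng.random(n) < 0.5
B = rng.random(n) < 0.4

print(np.mean(A | B))  # ~0.7, matching P(A or B) from the problem
```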
(iii) Let's take $n = 1$; then

$$S = \frac{X_1^2}{\sigma^2}.$$

Since $X_1 \sim N(0, \sigma^2)$, we have $X_1/\sigma \sim N(0, 1)$, so $S$ is the square of a standard normal variable, i.e. $S \sim \chi^2(1)$. To see that it is not Gaussian, ask what the probability of $S$ being less than zero is. Clearly it is zero. But any normally distributed random variable has a non-zero probability of this event, hence the statement is wrong.
(iv) A simple counterexample is the following:

Consider a sample $X_1, \dots, X_n$ of random variables uniformly distributed on the interval $(0, \theta)$.

The maximum likelihood estimator for $\theta$ is

$$\bar\theta = \max_{1 \leq i \leq n} X_i.$$

To see this, consider the likelihood, i.e. the product of densities evaluated at a candidate value $\bar\theta$:

$$L(\bar\theta) = \prod_{i=1}^n \frac{1}{\bar\theta}\, 1_{X_i \in (0, \bar\theta)}.$$

If $\bar\theta < \max_i X_i$, then one of the indicators is 0 (the function $1_{X \in A}$ is called the indicator of the event $A$; it takes value 1 if $X \in A$ and 0 if not), and the whole $L$ is 0.

If $\bar\theta \geq \max_i X_i$, then

$$L(\bar\theta) = \frac{1}{\bar\theta^n} \leq \frac{1}{(\max_i X_i)^n},$$

and since the MLE is the argmax of $L$,

$$\bar\theta = \max_i X_i.$$

Now consider:

$$P(\bar\theta < \theta) = 1, \qquad P(\bar\theta \geq \theta) = 0.$$

Since $\bar\theta < \theta$ almost surely, $\mathbb{E}\,\bar\theta < \theta$ (in fact $\mathbb{E}\,\bar\theta = \frac{n}{n+1}\,\theta$), so clearly such an estimator cannot be unbiased.
(v) I assume you meant mean squared deviation. We want to prove that

$$\mathbb{E} X = \arg\min_F \sum_{i=1}^n (X_i - F)^2,$$

where

$$\mathbb{E} X = \frac{1}{n} \sum_{i=1}^n X_i.$$

Let's expand the sum around $\mathbb{E} X$:

$$\sum_{i=1}^n (X_i - F)^2 = \sum_{i=1}^n (X_i - \mathbb{E} X + \mathbb{E} X - F)^2 = \sum_{i=1}^n (X_i - \mathbb{E} X)^2 + n(\mathbb{E} X - F)^2 + 2(\mathbb{E} X - F) \sum_{i=1}^n (X_i - \mathbb{E} X).$$

The last term here is 0:

$$\sum_{i=1}^n (X_i - \mathbb{E} X) = \sum_{i=1}^n X_i - n\,\mathbb{E} X = 0.$$

And we can rewrite:

$$\sum_{i=1}^n (X_i - F)^2 = \sum_{i=1}^n (X_i - \mathbb{E} X)^2 + n(\mathbb{E} X - F)^2 \geq \sum_{i=1}^n (X_i - \mathbb{E} X)^2.$$

Thus we see that $\sum_{i=1}^n (X_i - F)^2$ is at least $\sum_{i=1}^n (X_i - \mathbb{E} X)^2$ for any choice of $F$, with equality only when $F = \mathbb{E} X$.
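A quick numeric check of this (the random data and the grid of candidate values $F$ are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(5.0, 2.0, size=1000)

def sse(f):
    # Sum of squared deviations from a candidate value f.
    return np.sum((x - f) ** 2)

# Evaluate the objective on a grid of candidate values F.
grid = np.linspace(x.min(), x.max(), 10_001)
best = grid[np.argmin([sse(f) for f in grid])]

print(best)       # ~= x.mean(), up to grid resolution
print(x.mean())
```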