A sample of 40 children from Kafue state showed that the mean time they spend watching television is 28.50 hours per week, with a standard deviation of 4 hours. Another sample of 35 children from Chilanga showed a mean of 23.5 hours per week, with a standard deviation of 5 hours. Using a 2.5% significance level, can you conclude that the mean time spent watching television by children in Kafue state is greater than that for children in Chilanga? Assume that the standard deviations for the two populations are equal.
"H_0:\\mu _1\\leqslant \\mu _2\\\\H_1:\\mu _1>\\mu _2\\\\n_1=40\\\\n_2=35\\\\\\bar{x}_1=28.5\\\\\\bar{x}_2=23.5\\\\s_1=4\\\\s_2=5\\\\s^2=\\frac{\\left( n_1-1 \\right) {s_1}^2+\\left( n_2-1 \\right) {s_2}^2}{n_1+n_2-2}=20.1918\\\\T=\\frac{\\bar{x}_1-\\bar{x}_2}{\\sqrt{s^2\\left( \\frac{1}{n_1}+\\frac{1}{n_2} \\right)}}=\\frac{40-35}{\\sqrt{20.1918\\left( \\frac{1}{40}+\\frac{1}{35} \\right)}}=4.80746\\sim t_{n_1+n_2-2}=t_{83}\\\\P-value:\\\\P\\left( T\\geqslant 4.80746 \\right) =F_{t,83}\\left( -4.80746 \\right) =3.36\\cdot 10^{-6}"
Since the P-value is less than the significance level of 0.025, the null hypothesis is rejected: the mean time spent watching television by children in Kafue state is greater than that for children in Chilanga.