Applied Statistics and Probability for Engineers, 5th edition
10 February 2010
CHAPTER 7

Section 7-2

7-1.
The proportion of arrivals for chest pain is 8 among 103 total arrivals. The proportion is 8/103 ≅ 0.0777.
7-2.
The proportion is 10/80 = 1/8 = 0.125.
7-3.
P(2.560 ≤ X̄ ≤ 2.570) = P( (2.560 − 2.565)/(0.008/√9) ≤ (X̄ − µ)/(σ/√n) ≤ (2.570 − 2.565)/(0.008/√9) )
= P(−1.875 ≤ Z ≤ 1.875) = P(Z ≤ 1.875) − P(Z ≤ −1.875) = 0.9696 − 0.0304 = 0.9392
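As a numeric cross-check of this calculation, the probability can be evaluated with the Python standard library (the normal CDF is built from math.erf; the numbers are the ones used in the solution):

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma, n = 2.565, 0.008, 9
se = sigma / sqrt(n)          # standard error of the sample mean
z_lo = (2.560 - mu) / se      # -1.875
z_hi = (2.570 - mu) / se      #  1.875
prob = phi(z_hi) - phi(z_lo)
print(round(prob, 4))         # close to 0.9392
```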
7-4.
Xi ~ N(100, 10²), n = 25
µX̄ = 100, σX̄ = σ/√n = 10/√25 = 2
P[(100 − 1.8(2)) ≤ X̄ ≤ (100 + 2)] = P(96.4 ≤ X̄ ≤ 102) = P( (96.4 − 100)/2 ≤ (X̄ − µ)/(σ/√n) ≤ (102 − 100)/2 )
= P(−1.8 ≤ Z ≤ 1) = P(Z ≤ 1) − P(Z ≤ −1.8) = 0.8413 − 0.0359 = 0.8054
7-5.
n = 6
µX̄ = 520 kN/m²; σX̄ = σ/√n = 25/√6 = 10.206
P(X̄ ≥ 525) = P( (X̄ − µ)/(σ/√n) ≥ (525 − 520)/10.206 ) = P(Z ≥ 0.4899) = 1 − P(Z ≤ 0.4899) = 1 − 0.6879 = 0.3121
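A quick stdlib check of this tail probability (normal CDF via math.erf, values from the solution):

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma, n = 520.0, 25.0, 6
se = sigma / sqrt(n)        # 25/sqrt(6), about 10.206
z = (525.0 - mu) / se       # about 0.4899
prob = 1.0 - phi(z)
print(round(se, 3), round(prob, 4))   # close to 10.206 and 0.3121
```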
7-6. For n = 6: σX̄ = σ/√n = 3.5/√6 = 1.429 psi
For n = 49: σX̄ = 3.5/√49 = 0.5 psi
σX̄ is reduced by 0.929 psi.

7-7. Assuming a normal distribution,
µX̄ = 17237; σX̄ = σ/√n = 345/√5 = 154.289
P(17230 ≤ X̄ ≤ 17305) = P( (17230 − 17237)/154.289 ≤ (X̄ − µ)/(σ/√n) ≤ (17305 − 17237)/154.289 )
= P(−0.045 ≤ Z ≤ 0.4407) = P(Z ≤ 0.44) − P(Z ≤ −0.045) = 0.6700 − 0.482 = 0.188
7-8. σX̄ = σ/√n = 50/√5 = 22.361 psi = standard error of X̄

7-9. σ² = 25, so σ = 5
n = (σ/σX̄)² = (5/1.5)² = 11.11 ≅ 12

7-10.
Let Y = X̄ − 6.
µX = (a + b)/2 = (0 + 1)/2 = 1/2 and σX² = (b − a)²/12 = 1/12
µX̄ = µX = 1/2 and σX̄² = σX²/n = (1/12)/12 = 1/144
µY = 1/2 − 6 = −11/2 and σY² = 1/144
Y = X̄ − 6 ~ N(−11/2, 1/144), approximately, using the central limit theorem.
7-11.
n = 36
µX = (a + b)/2 = (3 + 1)/2 = 2
σX = √( ((b − a + 1)² − 1)/12 ) = √( ((3 − 1 + 1)² − 1)/12 ) = √(8/12) = √(2/3)
µX̄ = 2 and σX̄ = √(2/3)/√36 = √(2/3)/6
Using the central limit theorem with z = (X̄ − µ)/(σ/√n):
P(2.1 < X̄ < 2.5) = P( (2.1 − 2)/(√(2/3)/6) < Z < (2.5 − 2)/(√(2/3)/6) )
= P(0.7348 < Z < 3.6742) = P(Z < 3.6742) − P(Z < 0.7348) = 1 − 0.7688 = 0.2312
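This central-limit approximation can be checked numerically; a stdlib sketch using the same discrete-uniform moments:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n = 36
mu = 2.0
sigma = sqrt(((3 - 1 + 1) ** 2 - 1) / 12)   # sqrt(2/3) for the discrete uniform on {1,2,3}
se = sigma / sqrt(n)
prob = phi((2.5 - mu) / se) - phi((2.1 - mu) / se)
print(round(prob, 4))   # close to the 0.2312 quoted in the solution
```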
7-12. µX = 8.2 minutes, σX = 1.5 minutes, n = 49
σX̄ = σX/√n = 1.5/√49 = 0.2143
µX̄ = µX = 8.2 minutes
Using the central limit theorem, X̄ is approximately normally distributed.
a) P(X̄ < 10) = P(Z < (10 − 8.2)/0.2143) = P(Z < 8.4) = 1
7-13. a) P(X̄1 − X̄2 > 4) = P(Z > (4 − 5)/√20) = P(Z > −0.2236) = 1 − P(Z ≤ −0.2236) = 1 − 0.4115 = 0.5885
b) P(3.5 ≤ X̄1 − X̄2 ≤ 5.5) = P( (3.5 − 5)/√20 ≤ Z ≤ (5.5 − 5)/√20 ) = P(Z ≤ 0.1118) − P(Z ≤ −0.3354) = 0.5445 − 0.3687 = 0.1759

7-14.
If µB = µA, then X̄B − X̄A is approximately normal with mean 0 and variance σB²/25 + σA²/25 = 20.48.
Then, P(X̄B − X̄A > 3.5) = P(Z > (3.5 − 0)/√20.48) = P(Z > 0.773) = 0.2196
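A stdlib numeric check of this tail probability (math.erf gives the normal CDF; 20.48 is the variance from the solution):

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

var_diff = 20.48                  # sigma_B^2/25 + sigma_A^2/25
z = (3.5 - 0.0) / sqrt(var_diff)  # about 0.773
prob = 1.0 - phi(z)
print(round(z, 3), round(prob, 4))   # close to 0.773 and 0.2196
```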
The probability that X̄B exceeds X̄A by 3.5 or more is not that unusual when µB and µA are equal. Therefore, there is not strong evidence that µB is greater than µA.

7-15.
Assume approximate normal distributions.
(X̄high − X̄low) ~ N(60 − 55, 4²/16 + 4²/16) = N(5, 2)
P(X̄high − X̄low ≥ 2) = P(Z ≥ (2 − 5)/√2) = P(Z ≥ −2.12) = 1 − P(Z ≤ −2.12) = 1 − 0.0170 = 0.983
Section 7-3
7-16. a) SE Mean = σX̄ = s/√n = 1.815/√20 = 0.4058
Variance = s² = 1.815² = 3.294
b) Estimate of the mean of the population = sample mean = 50.184

7-17. a) SE Mean = s/√N → 10.25/√N = 2.05 → N = 25
Mean = 3761.70/25 = 150.468
Variance = s² = 10.25² = 105.0625
Variance = SS/(n − 1) → 105.0625 = SS/(25 − 1) → SS = 2521.5
b) Estimate of the mean of the population = sample mean = 150.468

7-18. a)
b)
7-19. E(θ̂) = E( Σ(Xi − X̄)²/c ) = (n − 1)σ²/c
Bias = E(θ̂) − θ = (n − 1)σ²/c − σ² = σ²( (n − 1)/c − 1 )

7-20. E(X̄1) = E( (Σᵢ₌₁²ⁿ Xi)/2n ) = (1/2n)E(ΣXi) = (1/2n)(2nµ) = µ
E(X̄2) = E( (Σᵢ₌₁ⁿ Xi)/n ) = (1/n)E(ΣXi) = (1/n)(nµ) = µ
X̄1 and X̄2 are unbiased estimators of µ. The variances are V(X̄1) = σ²/2n and V(X̄2) = σ²/n; compare the MSE (variance in this case):
MSE(Θ̂1)/MSE(Θ̂2) = (σ²/2n)/(σ²/n) = n/2n = 1/2
Since both estimators are unbiased, examination of the variances would conclude that X̄1 is the "better" estimator with the smaller variance.

7-21.
a) E(Θ̂1) = E[ (X1 + X2 + ⋯ + X7)/7 ] = (1/7)(7E(X)) = (1/7)(7µ) = µ
E(Θ̂2) = E[ (2X1 − X6 + X4)/2 ] = (1/2)[2E(X1) − E(X6) + E(X4)] = (1/2)[2µ − µ + µ] = µ
Both Θ̂1 and Θ̂2 are unbiased estimates of µ since the expected values of these statistics are equivalent to the true mean, µ.
b) V(Θ̂1) = V[ (X1 + X2 + ⋯ + X7)/7 ] = (1/7²)( V(X1) + V(X2) + ⋯ + V(X7) ) = (1/49)(7σ²) = σ²/7
V(Θ̂2) = V[ (2X1 − X6 + X4)/2 ] = (1/2²)( V(2X1) + V(X6) + V(X4) ) = (1/4)( 4V(X1) + V(X6) + V(X4) ) = (1/4)(4σ² + σ² + σ²) = (1/4)(6σ²) = 3σ²/2
Since both estimators are unbiased, the variances can be compared to select the better estimator. The variance of Θ̂1 is smaller than that of Θ̂2, so Θ̂1 is the better estimator.

7-22.
Since both Θ̂1 and Θ̂2 are unbiased, the variances of the estimators can be compared to select the better estimator. The variance of Θ̂2 is smaller than that of Θ̂1, thus Θ̂2 may be the better estimator.
Relative Efficiency = MSE(Θ̂1)/MSE(Θ̂2) = V(Θ̂1)/V(Θ̂2) = 10/4 = 2.5

7-23. E(Θ̂1) = θ and E(Θ̂2) = θ/2
Bias = E(Θ̂2) − θ = θ/2 − θ = −θ/2
V(Θ̂1) = 10 and V(Θ̂2) = 4
For unbiasedness, use Θ̂1 since it is the only unbiased estimator. As for minimum variance and efficiency we have:
Relative Efficiency = ( V(Θ̂1) + Bias1² )/( V(Θ̂2) + Bias2² ), where Bias1 = 0 for Θ̂1.
Thus, Relative Efficiency = (10 + 0)/( 4 + (−θ/2)² ) = 40/(16 + θ²)
If the relative efficiency is less than or equal to 1, Θ̂1 is the better estimator.
Use Θ̂1 when 40/(16 + θ²) ≤ 1, that is, when 40 ≤ 16 + θ², i.e., 24 ≤ θ², i.e., θ ≤ −4.899 or θ ≥ 4.899.
If −4.899 < θ < 4.899 then use Θ̂2.
For unbiasedness, use Θ̂1. For efficiency, use Θ̂1 when θ ≤ −4.899 or θ ≥ 4.899 and use Θ̂2 when −4.899 < θ < 4.899.

7-24.
E(Θ̂1) = θ, no bias, V(Θ̂1) = 12 = MSE(Θ̂1)
E(Θ̂2) = θ, no bias, V(Θ̂2) = 10 = MSE(Θ̂2)
E(Θ̂3) ≠ θ, bias, MSE(Θ̂3) = 6 [note that this includes (bias)²]
To compare the three estimators, calculate the relative efficiencies:
MSE(Θ̂1)/MSE(Θ̂2) = 12/10 = 1.2; since rel. eff. > 1, use Θ̂2 as the estimator for θ
MSE(Θ̂1)/MSE(Θ̂3) = 12/6 = 2; since rel. eff. > 1, use Θ̂3 as the estimator for θ
MSE(Θ̂2)/MSE(Θ̂3) = 10/6 = 1.8; since rel. eff. > 1, use Θ̂3 as the estimator for θ
Conclusion: Θ̂3 is the most efficient estimator, but it is biased. Θ̂2 is the best "unbiased" estimator.

7-25. n1 = 20, n2 = 10, n3 = 8
Show that S² is unbiased:
E(S²) = E[ (20S1² + 10S2² + 8S3²)/38 ] = (1/38)[ 20E(S1²) + 10E(S2²) + 8E(S3²) ] = (1/38)( 20σ1² + 10σ2² + 8σ3² ) = (1/38)(38σ²) = σ²
Therefore, S² is an unbiased estimator of σ².
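A seeded Monte Carlo sketch of this unbiasedness claim, under the assumption of a common variance (the data are simulated, not from the text):

```python
import random
import statistics

# With sigma1^2 = sigma2^2 = sigma3^2 = sigma^2, the weighted combination
# (20*S1^2 + 10*S2^2 + 8*S3^2)/38 should average out to sigma^2.
random.seed(1)
sigma = 2.0
reps = 5000
total = 0.0
for _ in range(reps):
    s1 = statistics.variance([random.gauss(0, sigma) for _ in range(20)])
    s2 = statistics.variance([random.gauss(0, sigma) for _ in range(10)])
    s3 = statistics.variance([random.gauss(0, sigma) for _ in range(8)])
    total += (20 * s1 + 10 * s2 + 8 * s3) / 38
print(total / reps)   # close to sigma^2 = 4
```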
7-26. a) Show that Σ(Xi − X̄)²/n is a biased estimator of σ²:
E[ Σ(Xi − X̄)²/n ] = (1/n) E( ΣXi² − nX̄² ) = (1/n)( ΣE(Xi²) − nE(X̄²) ) = (1/n)( Σ(µ² + σ²) − n(µ² + σ²/n) )
= (1/n)( nµ² + nσ² − nµ² − σ² ) = ((n − 1)σ²)/n = σ² − σ²/n
Therefore, Σ(Xi − X̄)²/n is a biased estimator of σ².
b) Bias = E[ Σ(Xi − X̄)²/n ] − σ² = σ² − σ²/n − σ² = −σ²/n
c) Bias decreases as n increases.
7-27. a) Show that X̄² is a biased estimator of µ². Using E(X̄²) = V(X̄) + [E(X̄)]²:
E(X̄²) = (1/n²)[ V(ΣXi) + (E(ΣXi))² ] = (1/n²)[ nσ² + (nµ)² ] = σ²/n + µ²
Therefore, X̄² is a biased estimator of µ².
b) Bias = E(X̄²) − µ² = σ²/n + µ² − µ² = σ²/n
c) Bias decreases as n increases.

7-28.
a) The average of the 26 observations provided can be used as an estimator of the mean pull force because we know it is unbiased. This value is 336.36 N.
b) The median of the sample can be used as an estimate of the point that divides the population into a "weak" and "strong" half. This estimate is 334.55 N.
c) Our estimate of the population variance is the sample variance, 54.16 N². Similarly, our estimate of the population standard deviation is the sample standard deviation, 7.36 N.
d) The estimated standard error of the mean pull force is 7.36/√26 = 1.44. This value is the standard deviation, not of the pull force, but of the mean pull force of the sample.
e) No connector in the sample has a pull force measurement under 324 N.
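The 26 raw observations are not reproduced here, so this snippet only re-derives the reported standard error of the mean from the reported s and n:

```python
from math import sqrt

n = 26
s = 7.36            # sample standard deviation reported in part (c)
se_mean = s / sqrt(n)
print(round(se_mean, 2))   # 1.44
```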
7-29.
Descriptive Statistics

Variable          N    Mean    Median  TrMean  StDev  SE Mean
Oxide Thickness   24   423.33  424.00  423.36  9.08   1.85

a) The mean oxide thickness, as estimated by Minitab from the sample, is 423.33 Angstroms.
b) The standard deviation of the population can be estimated by the sample standard deviation, 9.08 Angstroms.
c) The standard error of the mean is 1.85 Angstroms.
d) Our estimate of the median is 424 Angstroms.
e) Seven of the measurements exceed 430 Angstroms, so our estimate of the proportion requested is 7/24 = 0.2917.
7-30. a) E(p̂) = E(X/n) = (1/n)E(X) = (1/n)np = p
b) We know that the variance of p̂ is p(1 − p)/n, so its standard error must be √( p(1 − p)/n ). To estimate this parameter we would substitute our estimate of p into it.
7-31. a) E(X̄1 − X̄2) = E(X̄1) − E(X̄2) = µ1 − µ2
b) s.e. = √( V(X̄1 − X̄2) ) = √( V(X̄1) + V(X̄2) − 2COV(X̄1, X̄2) ) = √( σ1²/n1 + σ2²/n2 ), since the samples are independent and the covariance term is zero.
This standard error could be estimated by using the estimates for the standard deviations of populations 1 and 2.
c) E(Sp²) = E[ ( (n1 − 1)S1² + (n2 − 1)S2² )/( n1 + n2 − 2 ) ] = (1/(n1 + n2 − 2))[ (n1 − 1)E(S1²) + (n2 − 1)E(S2²) ]
= (1/(n1 + n2 − 2))[ (n1 − 1)σ1² + (n2 − 1)σ2² ] = ((n1 + n2 − 2)/(n1 + n2 − 2))σ² = σ²
7-32. a) E(µ̂) = E( αX̄1 + (1 − α)X̄2 ) = αE(X̄1) + (1 − α)E(X̄2) = αµ + (1 − α)µ = µ
b) s.e.(µ̂) = √( V(αX̄1 + (1 − α)X̄2) ) = √( α²V(X̄1) + (1 − α)²V(X̄2) ) = √( α²σ1²/n1 + (1 − α)²σ2²/n2 )
With σ2² = aσ1², this becomes σ1√( (α²n2 + (1 − α)²a n1)/(n1n2) ).
c) The value of alpha that minimizes the standard error is α = a n1/(n2 + a n1).
d) With a = 4 and n1 = 2n2, the value of alpha to choose is 8/9. The arbitrary value of α = 0.5 is too small and will result in a larger standard error. With α = 8/9 the standard error is
s.e.(µ̂) = σ1√( ((8/9)²n2 + (1/9)²·8n2)/(2n2²) ) = 0.667σ1/√n2
If α = 0.5 the standard error is
s.e.(µ̂) = σ1√( ((0.5)²n2 + (0.5)²·8n2)/(2n2²) ) = 1.0607σ1/√n2

7-33.
a) E( X1/n1 − X2/n2 ) = (1/n1)E(X1) − (1/n2)E(X2) = (1/n1)n1p1 − (1/n2)n2p2 = p1 − p2 = E(p̂1 − p̂2)
b) s.e. = √( p1(1 − p1)/n1 + p2(1 − p2)/n2 )
c) An estimate of the standard error could be obtained by substituting X1/n1 for p1 and X2/n2 for p2 in the equation shown in (b).
d) Our estimate of the difference in proportions is 0.01.
e) The estimated standard error is 0.0413.
Section 7-4

7-34.
f(x) = p(1 − p)^(x−1)
L(p) = Πᵢ₌₁ⁿ p(1 − p)^(xi−1) = pⁿ(1 − p)^(Σxi − n)
ln L(p) = n ln p + (Σxi − n) ln(1 − p)
∂ln L(p)/∂p = n/p − (Σxi − n)/(1 − p) ≡ 0
0 = ( (1 − p)n − p(Σxi − n) )/( p(1 − p) ) = ( n − pΣxi )/( p(1 − p) )
0 = n − pΣxi
p̂ = n / Σᵢ₌₁ⁿ xi
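A simulation sketch of the geometric MLE p̂ = n/Σxi; the parameter value and sample below are illustrative assumptions, not data from the text:

```python
import random

# Geometric draws: number of trials until the first success (support 1, 2, ...)
random.seed(7)
p = 0.3
n = 5000
xs = []
for _ in range(n):
    k = 1
    while random.random() > p:
        k += 1
    xs.append(k)
p_hat = n / sum(xs)        # the MLE derived above
print(round(p_hat, 2))     # close to the true p = 0.3
```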
7-35.
f(x) = e^(−λ) λ^x / x!
L(λ) = Πᵢ₌₁ⁿ e^(−λ) λ^(xi) / xi! = e^(−nλ) λ^(Σxi) / Π xi!
ln L(λ) = −nλ + Σxi ln λ − Σ ln xi!
d ln L(λ)/dλ = −n + (1/λ)Σxi ≡ 0
Σxi = nλ
λ̂ = Σᵢ₌₁ⁿ xi / n
7-36. f(x) = (θ + 1)x^θ
L(θ) = Πᵢ₌₁ⁿ (θ + 1)xi^θ = (θ + 1)ⁿ Π xi^θ
ln L(θ) = n ln(θ + 1) + θ Σ ln xi
∂ln L(θ)/∂θ = n/(θ + 1) + Σ ln xi = 0
n/(θ + 1) = −Σ ln xi
θ̂ = −n / Σᵢ₌₁ⁿ ln xi − 1
7-37. f(x) = λe^(−λ(x−θ)) for x ≥ θ
L(λ, θ) = Πᵢ₌₁ⁿ λe^(−λ(xi−θ)) = λⁿ e^(−λ(Σxi − nθ))
ln L(λ, θ) = n ln λ − λΣxi + λnθ
∂ln L(λ, θ)/∂λ = n/λ − (Σxi − nθ) ≡ 0
n/λ = Σxi − nθ
λ̂ = n / ( Σᵢ₌₁ⁿ xi − nθ ) = 1/(x̄ − θ)
The other parameter θ cannot be estimated by setting the derivative of the log likelihood with respect to θ to zero because the log likelihood is a linear function of θ. The range of the likelihood is important. The joint density function, and therefore the likelihood, is zero for θ > Min(X1, X2, …, Xn). The term +λnθ in the log likelihood is increasing in θ, so the log likelihood is maximized for θ as large as possible within the range of nonzero likelihood. Therefore, the log likelihood is maximized at θ̂ = xmin = Min(X1, X2, …, Xn).
b) Example: Consider traffic flow, and let the time that has elapsed between one car passing a fixed point and the instant that the next car begins to pass that point be the time headway. This headway can be modeled by the shifted exponential distribution. Example in reliability: Consider a process where failures are of interest. Suppose that a unit is put into operation at x = 0, but no failures occur until θ periods of operation have elapsed. Failures occur only after the time θ.
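A simulation sketch of the two shifted-exponential estimators, θ̂ = min(x) and λ̂ = 1/(x̄ − θ̂); the λ and θ values are assumed for illustration:

```python
import random

random.seed(42)
lam, theta = 2.0, 5.0
n = 5000
xs = [theta + random.expovariate(lam) for _ in range(n)]
theta_hat = min(xs)                        # MLE of the shift
lam_hat = 1.0 / (sum(xs) / n - theta_hat)  # MLE of the rate
print(round(theta_hat, 3), round(lam_hat, 2))   # close to 5.0 and 2.0
```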
7-38. L(θ) = Πᵢ₌₁ⁿ (xi/θ²) e^(−xi/θ)
ln L(θ) = Σ ln xi − Σxi/θ − 2n ln θ
∂ln L(θ)/∂θ = (1/θ²)Σxi − 2n/θ
Setting the last equation equal to zero and solving for θ yields
θ̂ = Σᵢ₌₁ⁿ xi / (2n)

7-39.
E(X) = (a − 0)/2 = (1/n)ΣXi = X̄, therefore â = 2X̄.
The expected value of this estimator is the true parameter, so it is unbiased. This estimator is reasonable in one sense because it is unbiased. However, there are obvious problems. Consider the sample x1 = 1, x2 = 2 and x3 = 10. Then x̄ = 4.33 and â = 2x̄ = 8.667. This is an unreasonable estimate of a, because clearly a ≥ 10.
7-40. a) ∫₋₁¹ c(1 + θx) dx = 1, and since (cx + cθx²/2) evaluated from −1 to 1 equals 2c, the constant c should equal 0.5.
b) E(X) = θ/3 = (1/n)ΣXi, so θ̂ = 3·(1/n)ΣXi = 3X̄.
c) E(θ̂) = E( 3·(1/n)ΣXi ) = E(3X̄) = 3E(X̄) = 3(θ/3) = θ, so θ̂ is unbiased.
d) L(θ) = Πᵢ₌₁ⁿ (1/2)(1 + θXi)
ln L(θ) = n ln(1/2) + Σ ln(1 + θXi)
∂ln L(θ)/∂θ = Σ Xi/(1 + θXi)
By inspection, the value of θ that maximizes the likelihood is max(Xi).
7-41. a) E(X²) = 2θ = (1/n)ΣXi², so θ̂ = (1/2n)ΣXi².
b) L(θ) = Πᵢ₌₁ⁿ (xi/θ) e^(−xi²/2θ)
ln L(θ) = Σ ln xi − Σxi²/(2θ) − n ln θ
∂ln L(θ)/∂θ = −n/θ + (1/2θ²)Σxi²
Setting the last equation equal to zero, the maximum likelihood estimate is
θ̂ = (1/2n)ΣXi²
and this is the same result obtained in part (a).
c) ∫₀ᵃ f(x) dx = 0.5 = 1 − e^(−a²/2θ)
a = √(−2θ ln 0.5) = √(2θ ln 2)
We can estimate the median (a) by substituting our estimate for θ into the equation for a.

7-42.
a) â cannot be unbiased since it will always be less than a.
b) bias = E(Y) − a = na/(n + 1) − a(n + 1)/(n + 1) = −a/(n + 1), which goes to 0 as n → ∞.
c) 2X̄
d) P(Y ≤ y) = P(X1, …, Xn ≤ y) = ( P(X1 ≤ y) )ⁿ = (y/a)ⁿ. Thus, f(y) is as given:
f(y) = n y^(n−1)/aⁿ for 0 ≤ y ≤ a, and bias = E(Y) − a = na/(n + 1) − a = −a/(n + 1).
e) For any n > 1, n(n + 2) > 3n, so the variance of â2 is less than that of â1. It is in this sense that the second estimator is better than the first.

7-43.
a) L(β, δ) = Πᵢ₌₁ⁿ (β/δ)(xi/δ)^(β−1) e^(−(xi/δ)^β)
ln L(β, δ) = Σ ln[ (β/δ)(xi/δ)^(β−1) ] − Σ(xi/δ)^β = n ln(β/δ) + (β − 1)Σ ln(xi/δ) − Σ(xi/δ)^β
b) ∂ln L(β, δ)/∂β = n/β + Σ ln(xi/δ) − Σ (xi/δ)^β ln(xi/δ)
∂ln L(β, δ)/∂δ = −n/δ − (β − 1)(n/δ) + (β/δ^(β+1)) Σxi^β
Upon setting ∂ln L(β, δ)/∂δ equal to zero, we obtain
δ^β = Σxi^β / n  and  δ = [ Σxi^β / n ]^(1/β)
Upon setting ∂ln L(β, δ)/∂β equal to zero and substituting for δ, we obtain
n/β + Σ ln(xi/δ) − Σ (xi/δ)^β ln(xi/δ) = 0, which reduces to
1/β = Σ xi^β ln xi / Σ xi^β − (1/n) Σ ln xi
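One way to carry out the required numerical iteration for β is a damped fixed-point loop on the equation 1/β = Σx^β ln x / Σx^β − (1/n)Σ ln x, followed by the explicit formula for δ. The data here are simulated under assumed parameter values, not taken from the text:

```python
import math
import random

random.seed(3)
beta_true, delta_true = 2.0, 10.0
n = 2000
# Weibull draws by inverse CDF: x = delta * (-ln(1-u))^(1/beta)
xs = [delta_true * (-math.log(1.0 - random.random())) ** (1.0 / beta_true)
      for _ in range(n)]
logs = [math.log(x) for x in xs]
mean_log = sum(logs) / n

b = 1.0                                   # starting guess for beta
for _ in range(100):
    pw = [x ** b for x in xs]
    b_new = 1.0 / (sum(p * l for p, l in zip(pw, logs)) / sum(pw) - mean_log)
    b = 0.5 * (b + b_new)                 # damped update for stability
d = (sum(x ** b for x in xs) / n) ** (1.0 / b)
print(round(b, 2), round(d, 2))           # should land near 2.0 and 10.0
```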
c) Numerical iteration is required.

7-44.
a) Using the results from Example 7-12, we obtain that the estimate of the mean is 423.33 and the estimate of the variance is 82.4464.
b) [Surface plot of the log likelihood versus the mean and standard deviation omitted.]
The function seems to have a ridge and its curvature is not too pronounced. The maximum with respect to the standard deviation is at 9.08, although it is difficult to see on the graph.
c) When n is increased to 40, the graph looks the same although the curvature is more pronounced. As n increases, it is easier to locate the maximizing value of the standard deviation on the graph.
7-45. From Example 7-16, the posterior distribution for µ is normal with mean
( (σ²/n)µ0 + σ0²x̄ ) / ( σ0² + σ²/n )
and variance
( σ0²(σ²/n) ) / ( σ0² + σ²/n )
The Bayes estimator for µ goes to the MLE as n increases. This follows since σ²/n goes to 0, so the estimator approaches σ0²x̄/σ0² (the σ0²'s cancel). Thus, in the limit, µ̂ = x̄.
7-46. a) Because f(x|µ) = (1/(√(2π)σ)) e^(−(x−µ)²/2σ²) and f(µ) = 1/(b − a) for a ≤ µ ≤ b, the joint distribution is
f(x, µ) = (1/((b − a)√(2π)σ)) e^(−(x−µ)²/2σ²) for −∞ < x < ∞ and a ≤ µ ≤ b.
Then,
f(x) = (1/(b − a)) ∫ₐᵇ (1/(√(2π)σ)) e^(−(x−µ)²/2σ²) dµ
and this integral is recognized as a normal probability. Therefore,
f(x) = (1/(b − a)) [ Φ((b − x)/σ) − Φ((a − x)/σ) ]
where Φ(x) is the standard normal cumulative distribution function. Then,
f(µ|x) = f(x, µ)/f(x) = e^(−(x−µ)²/2σ²) / ( √(2π)σ [ Φ((b − x)/σ) − Φ((a − x)/σ) ] )
b) The Bayes estimator is
µ̃ = ∫ₐᵇ µ e^(−(x−µ)²/2σ²) dµ / ( √(2π)σ [ Φ((b − x)/σ) − Φ((a − x)/σ) ] )
Let v = x − µ. Then dv = −dµ and
µ̃ = ∫ from x−b to x−a of (x − v) e^(−v²/2σ²) dv / ( √(2π)σ [ Φ((b − x)/σ) − Φ((a − x)/σ) ] )
= x [ Φ((x − a)/σ) − Φ((x − b)/σ) ] / [ Φ((b − x)/σ) − Φ((a − x)/σ) ] − ∫ from x−b to x−a of v e^(−v²/2σ²) dv / ( √(2π)σ [ Φ((b − x)/σ) − Φ((a − x)/σ) ] )
Let w = v²/2σ². Then dw = (v/σ²)dv and
µ̃ = x − ∫ from (x−b)²/2σ² to (x−a)²/2σ² of σ e^(−w) dw / ( √(2π) [ Φ((b − x)/σ) − Φ((a − x)/σ) ] )
= x + σ [ e^(−(x−a)²/2σ²) − e^(−(x−b)²/2σ²) ] / ( √(2π) [ Φ((b − x)/σ) − Φ((a − x)/σ) ] )

7-47.
a) f(x) = e^(−λ)λ^x / x! for x = 0, 1, 2, …, and
f(λ) = ((m + 1)/λ0)^(m+1) λ^m e^(−(m+1)λ/λ0) / Γ(m + 1) for λ > 0.
Then,
f(x, λ) = (m + 1)^(m+1) λ^(m+x) e^(−λ − (m+1)λ/λ0) / ( λ0^(m+1) Γ(m + 1) x! )
This last density is recognized to be a gamma density as a function of λ. Therefore, the posterior distribution of λ is a gamma distribution with parameters m + x + 1 and 1 + (m + 1)/λ0.
b) The mean of the posterior distribution can be obtained from the results for the gamma distribution:
(m + x + 1) / ( 1 + (m + 1)/λ0 ) = λ0(m + x + 1)/(λ0 + m + 1)

7-48. a) From Example 7-16, the Bayes estimate is
µ̃ = ( (16/25)(4) + 1(4.85) ) / (16/25 + 1) = 4.518
b) µ̂ = x̄ = 4.85. The Bayes estimate appears to underestimate the mean.

7-49. a) From Example 7-16, the Bayes estimate is
µ̃ = ( (0.0045)(2.28) + (0.018)(2.29) ) / (0.0045 + 0.018) = 2.288
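Both Bayes estimates in 7-48 and 7-49 use the normal-prior posterior mean from Example 7-16; a quick stdlib check of the two reported values:

```python
def posterior_mean(var_over_n, mu0, var0, xbar):
    # Normal-prior posterior mean: ((sigma^2/n)*mu0 + var0*xbar) / (var0 + sigma^2/n)
    return (var_over_n * mu0 + var0 * xbar) / (var0 + var_over_n)

est_748 = posterior_mean(16 / 25, 4.0, 1.0, 4.85)
est_749 = posterior_mean(0.0045, 2.28, 0.018, 2.29)
print(round(est_748, 3), round(est_749, 3))   # 4.518 and 2.288
```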
b) µ̂ = x̄ = 2.29. The Bayes estimate is very close to the MLE of the mean.

7-50.
a) f(x|λ) = λe^(−λx) for x ≥ 0, and f(λ) = 0.01e^(−0.01λ). Then,
f(x1, x2, λ) = λ² e^(−λ(x1+x2)) · 0.01 e^(−0.01λ) = 0.01 λ² e^(−λ(x1 + x2 + 0.01))
As a function of λ, this is recognized as a gamma density with parameters 3 and x1 + x2 + 0.01. Therefore, the posterior mean for λ is
λ̃ = 3/(x1 + x2 + 0.01) = 3/(2x̄ + 0.01) = 0.00133
b) Using the Bayes estimate for λ, P(X < 1000) = 1 − e^(−0.00133(1000)) = 0.7355.

7-51. f(x1, x2, …, xn) = Πᵢ₌₁ⁿ λe^(−λxi) = λⁿ e^(−λΣxi) for x1 > 0, x2 > 0, …, xn > 0
7-52. f(x1, x2, x3, x4, x5) = ( 1/(√(2π)σ) )⁵ exp( −Σᵢ₌₁⁵ (xi − µ)²/2σ² )

7-53. f(x1, x2, x3, x4) = 1 for 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1, 0 ≤ x3 ≤ 1, 0 ≤ x4 ≤ 1

7-54. X̄1 − X̄2 ~ N(100 − 105, 2.0²/25 + 2.5²/30) = N(−5, 0.3683)

7-55. X ~ N(50, 144)
P(47 ≤ X̄ ≤ 53) = P( (47 − 50)/(12/√36) ≤ Z ≤ (53 − 50)/(12/√36) )
= P(−1.5 ≤ Z ≤ 1.5) = P(Z ≤ 1.5) − P(Z ≤ −1.5) = 0.9332 − 0.0668 = 0.8664
No, the normality assumption is not needed, because the central limit theorem states that with large samples (n ≥ 30), X̄ is approximately normally distributed.

7-56.
Assume X̄ is approximately normally distributed.
P(X̄ > 34) = 1 − P(X̄ ≤ 34) = 1 − P( Z ≤ (34 − 38)/(0.7/√9) )
= 1 − P(Z ≤ −17.14) = 1 − 0 = 1

7-57.
z = (x̄ − µ)/(s/√n) = (52 − 50)/√(2/16) = 5.6569
P(Z > z) ≈ 0. The results are very unusual.

7-58.
P(X̄ ≤ 37) = P(Z ≤ −5.36) ≈ 0
7-59.
Binomial with p equal to the proportion of defective chips and n = 100.
7-60. E( aX̄1 + (1 − a)X̄2 ) = aµ + (1 − a)µ = µ
V(X̄) = V[ aX̄1 + (1 − a)X̄2 ] = a²V(X̄1) + (1 − a)²V(X̄2) = a²(σ²/n1) + (1 − 2a + a²)(σ²/n2)
= ( n2a² + n1 − 2n1a + n1a² )( σ²/(n1n2) )
∂V(X̄)/∂a = ( σ²/(n1n2) )( 2n2a − 2n1 + 2n1a ) ≡ 0
0 = 2n2a − 2n1 + 2n1a
2a(n2 + n1) = 2n1
a = n1/(n2 + n1)
7-61.
L(θ) = ( 1/(2θ³) )ⁿ ( Π xi² ) e^(−Σxi/θ)
ln L(θ) = n ln( 1/(2θ³) ) + 2Σ ln xi − Σxi/θ
∂ln L(θ)/∂θ = −3n/θ + Σxi/θ²
Making the last equation equal to zero and solving for θ, we obtain
θ̂ = Σᵢ₌₁ⁿ xi / (3n)
as the maximum likelihood estimate.

7-62.
L(θ) = θⁿ Π xi^(θ−1)
ln L(θ) = n ln θ + (θ − 1)Σ ln xi
∂ln L(θ)/∂θ = n/θ + Σ ln xi
Making the last equation equal to zero and solving for θ, we obtain the maximum likelihood estimate
θ̂ = −n / Σᵢ₌₁ⁿ ln xi
7-63. L(θ) = (1/θⁿ) Π xi^((1−θ)/θ)
ln L(θ) = −n ln θ + ((1 − θ)/θ) Σ ln xi
∂ln L(θ)/∂θ = −n/θ − (1/θ²) Σ ln xi
Making the last equation equal to zero and solving for the parameter of interest, we obtain the maximum likelihood estimate
θ̂ = −(1/n) Σᵢ₌₁ⁿ ln xi
To show unbiasedness:
E(θ̂) = E[ −(1/n)Σ ln Xi ] = −(1/n) Σ E[ln Xi] = −(1/n) Σ (−θ) = nθ/n = θ
because
E(ln X) = ∫₀¹ (ln x)(1/θ) x^((1−θ)/θ) dx = −∫₀¹ x^((1−θ)/θ) dx = −θ
using integration by parts with u = ln x and dv = (1/θ)x^((1−θ)/θ) dx (so v = x^(1/θ)).
Therefore, θ̂ is an unbiased estimator of θ.
7-64.
a) …
b) …

7-65.
µ̂ = x̄ = (23.5 + 15.6 + 17.4 + ⋯ + 28.7)/10 = 21.9
Demand for all 5000 houses is θ = 5000µ, so
θ̂ = 5000µ̂ = 5000(21.9) = 109,500
The proportion estimate is
Mind-Expanding Exercises

7-66.
P(X1 = 0, X2 = 0) = M(M − 1)/( N(N − 1) )
P(X1 = 0, X2 = 1) = M(N − M)/( N(N − 1) )
P(X1 = 1, X2 = 0) = (N − M)M/( N(N − 1) )
P(X1 = 1, X2 = 1) = (N − M)(N − M − 1)/( N(N − 1) )
P(X1 = 0) = M/N and P(X1 = 1) = (N − M)/N
P(X2 = 0) = P(X2 = 0|X1 = 0)P(X1 = 0) + P(X2 = 0|X1 = 1)P(X1 = 1) = ((M − 1)/(N − 1))(M/N) + (M/(N − 1))((N − M)/N) = M/N
P(X2 = 1) = P(X2 = 1|X1 = 0)P(X1 = 0) + P(X2 = 1|X1 = 1)P(X1 = 1) = ((N − M)/(N − 1))(M/N) + ((N − M − 1)/(N − 1))((N − M)/N) = (N − M)/N
Because P(X2 = 0|X1 = 0) = (M − 1)/(N − 1) is not equal to P(X2 = 0) = M/N, X1 and X2 are not independent.
a) cn = Γ[(n − 1)/2] / ( Γ(n/2) √(2/(n − 1)) )
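The constant can be evaluated directly with math.gamma; a quick check of the values quoted in part (b):

```python
from math import gamma, sqrt

def c_n(n):
    # c_n = Gamma((n-1)/2) / (Gamma(n/2) * sqrt(2/(n-1)))
    return gamma((n - 1) / 2) / (gamma(n / 2) * sqrt(2 / (n - 1)))

print(round(c_n(10), 4), round(c_n(25), 4))   # close to 1.0281 and 1.0105
```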
b) When n = 10, cn = 1.0281. When n = 25, cn = 1.0105. So S is a fairly good estimator for the standard deviation even when relatively small sample sizes are used.

7-68. a)
Zi = Yi − Xi, so Zi is N(0, 2σ²). Let σ*² = 2σ². The likelihood function is
L(σ*²) = Πᵢ₌₁ⁿ ( 1/(σ*√(2π)) ) e^(−zi²/2σ*²) = ( 1/(2πσ*²)^(n/2) ) e^(−(1/2σ*²)Σzi²)
The log-likelihood is
ln[L(σ*²)] = −(n/2) ln(2πσ*²) − (1/2σ*²) Σzi²
Finding the maximum likelihood estimator:
d ln[L(σ*²)]/dσ*² = −n/σ*² + (1/2σ*⁴) Σzi² = 0
2nσ*² = Σzi²
σ̂*² = (1/2n) Σzi² = (1/2n) Σ(yi − xi)²
But σ*² = 2σ², so the MLE is
2σ̂² = (1/2n) Σ(yi − xi)², that is, σ̂² = (1/4n) Σᵢ₌₁ⁿ (yi − xi)²
b) E(σ̂²) = E[ (1/4n) Σ(Yi − Xi)² ] = (1/4n) Σ E(Yi − Xi)² = (1/4n) Σ [ E(Yi²) − 2E(YiXi) + E(Xi²) ] = (1/4n) Σ [ σ² − 0 + σ² ] = 2nσ²/(4n) = σ²/2
So the estimator is biased. The bias is independent of n.
c) An unbiased estimator of σ² is given by 2σ̂².

7-69. P( |X̄ − µ| ≥ cσ/√n ) ≤ 1/c² from Chebyshev's inequality. Then, P( |X̄ − µ| < cσ/√n ) ≥ 1 − 1/c². Given an ε, n and c can be chosen sufficiently large that the last probability is near 1 and cσ/√n is equal to ε.
7-70.
a) P(X(n) ≤ t) = P(Xi ≤ t for i = 1, …, n) = [F(t)]ⁿ
P(X(1) > t) = P(Xi > t for i = 1, …, n) = [1 − F(t)]ⁿ
Then, P(X(1) ≤ t) = 1 − [1 − F(t)]ⁿ
b) fX(1)(t) = (∂/∂t) FX(1)(t) = n[1 − F(t)]^(n−1) f(t)
fX(n)(t) = (∂/∂t) FX(n)(t) = n[F(t)]^(n−1) f(t)
c) P(X(1) = 0) = FX(1)(0) = 1 − [1 − F(0)]ⁿ = 1 − pⁿ because F(0) = 1 − p.
P(X(n) = 1) = 1 − FX(n)(0) = 1 − [F(0)]ⁿ = 1 − (1 − p)ⁿ
d) P(X ≤ t) = F(t) = Φ( (t − µ)/σ ). From part (b),
fX(1)(t) = n{ 1 − Φ((t − µ)/σ) }^(n−1) ( 1/(√(2π)σ) ) e^(−(t−µ)²/2σ²)
fX(n)(t) = n{ Φ((t − µ)/σ) }^(n−1) ( 1/(√(2π)σ) ) e^(−(t−µ)²/2σ²)
e) P(X ≤ t) = 1 − e^(−λt). From part (b),
FX(1)(t) = 1 − e^(−nλt) and fX(1)(t) = nλe^(−nλt)
FX(n)(t) = [1 − e^(−λt)]ⁿ and fX(n)(t) = n[1 − e^(−λt)]^(n−1) λe^(−λt)

7-71.
P( F(X(n)) ≤ t ) = P( X(n) ≤ F⁻¹(t) ) = tⁿ from Exercise 7-70 for 0 ≤ t ≤ 1.
If Y = F(X(n)), then fY(y) = n y^(n−1) for 0 ≤ y ≤ 1. Then, E(Y) = ∫₀¹ n yⁿ dy = n/(n + 1).
P( F(X(1)) ≤ t ) = P( X(1) ≤ F⁻¹(t) ) = 1 − (1 − t)ⁿ from Exercise 7-70 for 0 ≤ t ≤ 1.
If Y = F(X(1)), then fY(y) = n(1 − y)^(n−1) for 0 ≤ y ≤ 1. Then, E(Y) = ∫₀¹ y n(1 − y)^(n−1) dy = 1/(n + 1), where integration by parts is used. Therefore,
E[F(X(n))] = n/(n + 1) and E[F(X(1))] = 1/(n + 1)

7-72.
E(V) = k Σᵢ₌₁^(n−1) [ E(Xᵢ₊₁²) + E(Xᵢ²) − 2E(XᵢXᵢ₊₁) ] = k Σᵢ₌₁^(n−1) ( σ² + µ² + σ² + µ² − 2µ² ) = k(n − 1)2σ²
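A seeded simulation sketch of this expectation, using normal data with assumed µ = 0 and σ = 3 (so σ² = 9): dividing the sum of squared successive differences by 2(n − 1) should average to σ².

```python
import random

random.seed(5)
n, sigma = 10, 3.0
reps = 10000
total = 0.0
for _ in range(reps):
    xs = [random.gauss(0, sigma) for _ in range(n)]
    v = sum((xs[i + 1] - xs[i]) ** 2 for i in range(n - 1))
    total += v / (2 * (n - 1))
print(total / reps)   # close to sigma^2 = 9
```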
Therefore, k = 1/( 2(n − 1) ).

7-73.
a) The traditional estimate of the standard deviation, S, is 3.26. The mean of the sample is 13.43, so the values of |Xi − X̄| corresponding to the given observations are 3.43, 1.43, 4.43, 0.57, 4.57, 1.57 and 2.57. The median of these new quantities is 2.57, so the new estimate of the standard deviation is 3.81, slightly larger than the value obtained with the traditional estimator.
b) Making the first observation in the original sample equal to 50 produces the following results. The traditional estimator, S, is equal to 13.91. The new estimator remains unchanged.
7-74.
a) Tr = X1 + [X1 + (X2 − X1)] + ⋯ + [X1 + (X2 − X1) + ⋯ + (Xr − Xr−1)] + (n − r)[X1 + (X2 − X1) + ⋯ + (Xr − Xr−1)]
Because X1 is the minimum lifetime of n items, E(X1) = 1/(nλ).
Then, X2 − X1 is the minimum lifetime of (n − 1) items from the memoryless property of the exponential, and E(X2 − X1) = 1/((n − 1)λ).
Similarly, E(Xk − Xk−1) = 1/((n − k + 1)λ). Then,
E(Tr) = n/(nλ) + (n − 1)/((n − 1)λ) + ⋯ + (n − r + 1)/((n − r + 1)λ) = r/λ
and E(Tr/r) = 1/λ = µ.
b) V(Tr/r) = 1/(λ²r) is related to the variance of the Erlang distribution, V(X) = r/λ². They are related by the value 1/r². The censored variance is (1/r²) times the
uncensored variance.
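A seeded simulation sketch of the censored life test: generate n exponential lifetimes, stop at the r-th failure, form the total time on test Tr, and average Tr/r (the parameter values below are assumed for illustration):

```python
import random

random.seed(11)
lam, n, r = 0.5, 10, 4
reps = 10000
total = 0.0
for _ in range(reps):
    times = sorted(random.expovariate(lam) for _ in range(n))
    # failed units contribute their lifetimes; survivors are censored at x_(r)
    tr = sum(times[:r]) + (n - r) * times[r - 1]
    total += tr / r
print(total / reps)   # close to 1/lambda = 2.0
```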