Student Solutions Manual for Introduction to Probability with Statistical Applications
Geza Schay
University of Massachusetts at Boston
1.1.1. a) The sample points are
and the elementary events are
b) The event that corresponds to the statement at least one tail is obtained is c) The event that corresponds to at most one tail is obtained is 1.1.3. a) Four different sample spaces to describe three tosses of a coin are:
{an even # of …s, an odd # of …s}, where the fourth letter is to be ignored in each sample point.
b) For … the event corresponding to the statement "at most one tail is obtained in three tosses" is …. For … it is …, and in … it is not possible to find such an event. For … the event corresponding to the statement "at most one tail is obtained in the first three tosses" is ….
c) It is not possible to find an event corresponding to the statement "at most one tail is obtained in three tosses" in every conceivable sample space for the tossing of three coins, because some sample spaces are too coarse, that is, the sample points that contain this outcome also contain opposite outcomes. For instance, in … above, the sample point "an even # of …s" contains outcomes for which our statement is true and outcomes for which it is not true.
1.1.5. In the 52-element sample space for the drawing of a card
a) the events corresponding to the statements An Ace or a red King is drawn, and
The card drawn is neither red, nor odd, nor a face card are
!" # $ $ $ $ !" $ % and , and b) statements corresponding to the events & ' () * + * , * - * % ' (# !" # $ $ $ $ !" $ % ' , and . are / 0 1 ' 0 The Ace of hearts or a heart face card is drawn, and 2 An even numbered black card is 1 drawn. 1.1.7. Three possible sample spaces are: % $ 3 ' ( The 365 days of the year 1
$ $
(
' '
%
January, February,. . . , December ( % Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday
1.2.1. a) … b) … c) … d) … e) …
1.2.3. …
1.2.5. …
1.2.7. a) but b) and c) and 1.2.9. The Venn diagram below illustrates the relation the diagram, we have and Similarly, !
#
!
"
!
2
Figure 1.
&
#
"
Using the region numbers from which is the region outside both the whole sample space. !
#
1.2.11. that is, that whenever then Then 1. Assume that or or On the other hand, clearly Thus, implies 2. Conversely, assume that that is, that or Hence, if then must also belong to which means that Alternatively, by the de nition of unions, and so, if then substituting for in the previous relation, we obtain that implies 1.3.1. a) The event corresponding to is 4 or 5 is the shaded region consisting of the fourth and fth columns in the gure below, that is, and . !
…
Figure 2.
%
$
&
%
'
or
$
3
&
is 4 or 5
the shaded region in the gure below, that is, and .
%
corresponding to
b) The event corresponding to or is and $
& %
Figure 3.
'
c) The event corresponding to but not and below, that is, $
%
corresponding to or is
&
Figure 4.
"
"
.
corresponding to but not
4
the darkly shaded region in the gure
d) The event corresponding to gure below, that is, $
% $
and %
"
"
"
the darkly shaded region in the
corresponding to and and
and , but not
% $
is
e) The event corresponding to the gure below, that is,
%
"
Figure 5.
$
$ &
and
%
$
%
$
5
is
%
"
"
the darkly shaded region in
"
Figure 6.
"
corresponding to
and , but not
1.3.3.
1.3.5. or (that is, at least one of them) is certain to occur.
6
2.1.1. Let
set of drinkers, and Hence and so,
2.1.3. If then and have no common element. Hence common element either. Alternatively, if then The proofs of the other cases just need changing letters. 2.1.5.
set of smokers. Then From Theorem 2.1.2,
and
cannot have any
2.2.1. a)
b)
Figure 7.
7
c) 2.2.3.
2.2.5. a) 2.2.7. a) 2.3.1.
b)
2.3.3.
b)
2.3.5.
The number of permutations is … and each of the four marked sets containing six permutations corresponds to an unordered selection, that is, to a combination. Thus, by the division principle, the number of combinations must be …, and this is, indeed, how many sets we got. 2.3.7.
,
b)
2.3.9. a) 2.3.11. a) b) c) d) 2.4.1.
, ,
1 1 1
2
1
1
9
4
1
10
5
8
6
35 70
126
1
15
35 56
84
1
20
21 28
36
6 10
15
7 8
3
4 5
6
1 1
1
3
1 1 1
1
21 56
126
1 7 28
84
8 36
2.4.3.
2.4.5.
2.4.7. a) … b) … for any ….
2.4.9. a) … b) …
2.5.1. a) …, b) …, …, c) …, d) …
2.5.3. a) …, b) …, c) …, d) …
2.5.5. a) …, b) …
2.5.7. a) This is like putting … indistinguishable balls into … distinguishable boxes. It can be done in … ways. b) There are 9 spaces between the 10 balls if we put them in a row. With two dividing bars, we can divide the balls into 3 groups. So, the number of ways of dividing them into 3 nonempty groups is ….
2.5.9. You have to choose … boxes out of …. This can be done in … ways.
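As a quick numerical check of the counting in 2.5.7 b) above (the 10 balls and 3 nonempty groups are taken from that solution; the code is only an illustrative sketch), the "two bars in the nine gaps" count C(9, 2) can be compared with a brute-force enumeration of the ordered ways to split 10 balls into 3 nonempty groups.

# Check of 2.5.7 b): 10 balls in a row have 9 gaps, and choosing 2 of the
# 9 gaps for dividing bars splits them into 3 nonempty ordered groups.
from itertools import product
from math import comb

balls, groups = 10, 3               # taken from the solution text above

# Brute force: ordered triples of positive integers summing to 10.
brute = sum(1 for c in product(range(1, balls + 1), repeat=groups)
            if sum(c) == balls)
formula = comb(balls - 1, groups - 1)   # C(9, 2)

print(brute, formula)               # both print 36
assert brute == formula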
…
3.1.1. a) P( b) P( c) P( d) P( e) P( f) P( g) P( h) P 3.1.3.
, , ,
, , ,
,
P(
.
and Thus, by Axiom 3, P P P Similarly, P P( P Adding, we get P P P 2P P( Here, again by Axiom 3, we have P P P( P( Hence, P P P( P and so P P P P 3.1.5. we have, as in Problem 3.1.3, P P P and so Since we always have P P P Thus, P P P if and only if P P This relation is true, in particular, if that is, if But ! P P can also hold if but P( because P P( P for any and 3.1.7. a) This result follows at once from Theorem 3.1.2 because we are subtracting the (by Axiom from P P on the right of Equation 3.1.1 to get P( 1) nonnegative quantity P ). b) Apply the result of Part a) with in place of and in place of Then we get " P P P Now, apply the result of Part a) to and we " obtain P P P P Since unions are associative, this proves the required result.
$ c) This relation can be proved by induction: As seen above, it is true for # and 3. For any larger # assume the formula to be true for # Then we can prove it for # as follows: 5 6 A B and by Part a), P 7 8 9< =: ; < ? P( C DG HE F G J K P L M N J O P % &' ( ) * ' + , P - % &0 1. / 0 3 N Q R / 2 2 4 ; > > @ F I S V W X Y Z [ \] By the induction hypothesis, P L P S T R U P ^ _ ` a b and so, putting all these relations [ together, we get P c d e` f g _ ` a h i jk l m P n o p q r 3.2.1. a) s t u o s o v w o s x s w o s x v w o v o s w o v x s w o v x v w x s o s w x s o v w x s x v w x v o s wx v o v wx v x s y r
10
b) P(o and x) = …. c) There are 6 possible unordered pairs, 4 of which are favorable. So, P(o and x) = …. d) Here we are drawing without replacement and so each pair consists of two different cards. Thus, each unordered pair corresponds to two ordered pairs and therefore each one has probability …. In Example 3.2.2, some unordered pairs correspond to two ordered pairs and some to one.
3.2.3. We did not get P(at least one six) = 1, in spite of the fact that on each throw the probability of getting a six is …, and 6 times … is 1, for two reasons: First, we would be justified in taking … six times here only if the events of getting a six on the different throws were mutually exclusive; then the probability of getting a six on one of the throws could be computed by Axiom 3 as …, but these are not mutually exclusive events. Second, the event of getting at least one six is not the same as the event of getting a six on the first throw, or on the second, etc.
3.2.5. P(different numbers with three dice) = ….
3.2.7. … people can be seated in … ways. The number of favorable cases is …, because the group of men can start at any one of the seats and must be followed by the group of women, and in each case the men can be permuted … ways amongst themselves and the women … ways. Thus, P(…) = ….
3.2.9. This problem is like sampling good and bad items without replacement. The good items are the player's numbers and the bad ones are the rest. Thus,
! & "
#
!
+
P(jackpot
/
P QR P Q S T Q
. /
,
-
/
$ ! #
%#
&
%
$ %
'
!
!
"
%#
#
^
\
Z c
@
A B
C
D
) $ % *
and P(match 5
: ; 8 7 :9 ; < =
-
5
X W ] X_ X` W \ a b
%
(
G H
? 4 75 68 4 5
,
P UR P V WX Y Z W[ \ ]
F
"
, -
. 0 /
1 , . 2 3
d
e
f g
4 i
>
E
F
G J H
I E K G L M H
@
E F
N
O
@ N
E
h
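To attach a number to the discussion in 3.2.3 above: for six throws of a fair die, P(at least one six) = 1 − (5/6)^6 ≈ 0.665, not 6 · (1/6) = 1. The sketch below computes the complement-rule value and adds a small Monte Carlo check; the simulation is only illustrative and not part of the book's solution.

# P(at least one six in 6 throws) via the complement rule, plus a rough
# Monte Carlo check; 6 * (1/6) = 1 would double-count overlapping events.
import random

exact = 1 - (5 / 6) ** 6
print(f"exact: {exact:.4f}")              # 0.6651

random.seed(0)
trials = 100_000
hits = sum(any(random.randint(1, 6) == 6 for _ in range(6))
           for _ in range(trials))
print(f"simulated: {hits / trials:.4f}")  # close to 0.665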
3.2.11. a) … b) … c) …
3.2.13. To get 5 cards of different denominations, we may first choose the 5 denominations out of the 13 possible ones and then choose one card from the 4 cards of each of the selected
j kl m
j kl m
j kn m
j n
j
j
m k
lk r
l r k
o
m
n
m
p k
j ls r
q
n
iu u
o
m
v
t
q
i v w x
j
ls r
m
o
i
t
y
~
denominations. Thus, P(all different) = ….
(Note that we have included straights and flushes in the count, that is, cards with five consecutive denominations or five cards of the same suit, which are very valuable hands, while the other cases of different denominations are poor hands.)
3.2.15. For the pair we have 13 possible denominations and then for the triple, 12 possible denominations. For the pair we have … choices from the 4 cards of the selected denomination and
for the triple
Thus, P full house in poker
3.2.17. In poker dice, we have 6 possible numbers for the pair and then 5 for the triple. These ways. Thus selections can be ordered in
"
# $
&
' () * + , (
P full house in poker dice 3.2.19. If , and then the last inequality is equivalent to which together with means that is greater than or equal to both 0 and so The middle two inequalities say that is less and than or equal to both and and so . Thus, , and imply . Conversely, if then the rst part implies that and or and the second part implies that and Thus, implies , and 3.3.1. Let even and odd and consider the sample space for throwing three dice. Then and The elementary events are equally likely, and so P P and P Hence, P P P 3.3.3. P P and P Also, and so P Thus, P P P and and are not independent, but P P P P 3.3.5. P P P P P a) Let and be independent. Then P P P P P b) Similarly, P( P P P P P P P P( P
…
3.3.7.
K N
H
K N
H
Z JI [
V
K ^ N
H
Z JI [
V
K N
H
^
Z JI [
V
H
K U N
^
Z JI [
V
K N
H
Z JI [
V
Z JI [ O
Figure 8.
3.3.9. The probability that a ball picked at random is red is …; similarly, the probability is the same for white and also for blue (a quick numerical check of this exercise appears after 3.4.5 below). Thus, the probability for any color combination in a given order for six independently chosen balls is …. We can obtain two of each color in
I
QI
O
R
R
Z QI [
Z J
O
different orders. Thus, P(two of each color 3.3.11. If , and are pairwise independent and is independent of then, on the one hand, P P P P P P P P P P and on the other hand, P P P P P P P ! P P P P P P P Thus, P P P P and this relation, together with the assumed pairwise independence, proves that and are totally independent. 3.4.1. a) " # %$& " ' %( &( " # ' %) & " # * + , .- . / 0 + * 1 , 2- 3 . . . b) P 4 5 6 , 7 / P 4 8 6 , 7 / P 4 5 8 6 , 9 / P 4 5 : ; < = ?> @ P A ; : B < = ?> C 3.4.3. If B = D E or F G and ; = D H @ I @ E G @ then B ; = D E G and P A B : ; < = J> C 3.4.5. By Theorem 3.4.1, Part 3, P(Republican : under 30) + P(Democrat : under 30) + P(Independent
13
:
under 30 <
=
P( and under 30) P(under 30)
P(Republican or Democrat or Independent : under 30 < =
P(under 30) P(under 30)
=
=
P(
:
under 30 <
=
C
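As a numerical check of 3.3.9 above (the only inputs are the six independent draws and the three colors, which the solution treats as equally likely): two of each color can occur in 6!/(2! 2! 2!) = 90 orders, each of probability (1/3)^6, and a brute-force enumeration gives the same answer.

# Check of 3.3.9: six balls drawn independently, each equally likely to be
# red, white, or blue; two of each color can occur in 6!/(2!2!2!) orders.
from math import factorial
from itertools import product

orders = factorial(6) // (factorial(2) ** 3)          # 90
print(orders, orders * (1 / 3) ** 6)                  # 90, about 0.1235

# Brute force over all 3**6 equally likely color sequences.
count = sum(1 for seq in product("rwb", repeat=6)
            if seq.count("r") == seq.count("w") == seq.count("b") == 2)
print(count / 3 ** 6)                                 # the same value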
3.4.7. Whether the selected girl is the first, second or third child in the family, her siblings, in the order of their births, can be … or …. In two of these cases does the family have two girls and one boy. Thus, P(two girls and one boy | one child is a girl) = ….
3.4.9. The number of ways of drawing two Kings, which are also face cards, without replacement, is …, and the number of ways of drawing two face cards is …. Thus, P(two Kings | two face cards) = ….
3.4.11. P(exactly one King | at most one King) = …
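A check of the count in 3.4.9: in a standard 52-card deck there are C(4, 2) pairs of Kings among the C(12, 2) pairs of face cards, so the conditional probability is 6/66 = 1/11. The sketch below confirms it by direct enumeration.

# 3.4.9: P(two Kings | two face cards) = C(4,2) / C(12,2) = 1/11.
from itertools import combinations
from math import comb

ranks = ["J", "Q", "K"]
suits = ["S", "H", "D", "C"]
face_cards = [(r, s) for r in ranks for s in suits]        # 12 face cards

pairs = list(combinations(face_cards, 2))                  # C(12,2) = 66
king_pairs = [p for p in pairs if all(c[0] == "K" for c in p)]  # C(4,2) = 6

print(len(king_pairs), len(pairs), len(king_pairs) / len(pairs))
assert len(king_pairs) / len(pairs) == comb(4, 2) / comb(12, 2)  # 1/11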
3.5.1.
!
!$
Figure 9. P * # & ,+ - .+ / ,+ - (0 & .+ 21 ) 3.5.3. a) P(both are Aces 3 one is an Ace#
&
b) P(both are Aces = one is a red Ace> c) P(both are Aces H one is ASI
J
. 45 6 4 7 8 9 6 : 46 ;
;
8 5 5 <
P(a red ace plus another ace) P(one is a red ace) KF F
F KL M L F KN F M N F K F
J
F O G
P(AS plus another ace) P(both are Aces)
QR P
;
@
@ 9? ?
K S M S K Q QR Q T K S P Q PR Q P
?
4 A 9 4 @ B @ @ B 9? C D ? 4 4 @ B ? @ @ B ? ? ? ? ?
? @
;
7 8E F G
F
U V d) P(one is AS H both are AcesI J J J 3.5.5. P Equation 3.5.22 becomes P W X Y Z [ P W X Y \ ] Z ^ _ ` P W X Y a b c d e for f g h g i j where e k l m n and o p denotes the event that the gambler with initial capital q is ruined. First,
14
p
we try to find constants … such that P(…) = … for …, just as in the analogous, but more familiar, case of linear homogeneous differential equations with constant coefficients. Substituting from here into the first equation, we get … and, canceling …, … or, equivalently, the quadratic equation …. The solutions are …. Separating the two roots, we obtain … and …. Now, … and so … and … for the general solution of the difference
equation P B C D conditions P O P
Q R
Consequently,
W
b
c
E
D F = >
? S
T
H =
G
and P O P
?
F
U R
S
G
X
I KJ L
S
As in Example 3.5.5, we have the boundary and we use them to determine the constants W and X N
H
V
Q
Y
8
8
D ;
I KJ L M
N
and
W
U
W
Y
X
S
T
Y
X
I KJ L
S
V N
Hence,
X
S
and
Z Z [ \] ^ _ ` a
d ef g h ij
Thus, the probability of the gambler's ruin is P(…) = … if he starts with … dollars and stops if he reaches … dollars. If …, that is, the game is favorable for our gambler, then … and so the gambler may play forever without getting ruined, and the probability that he does not get ruined is ….
3.5.7. P(…) = …
3.5.9. P(…) = …
É É Ã Æ Ç È
Ë Ì Æ
ÊÆ Ç ÉÈ Ã
Ê
È Ç È
È Ç È
É
Å
For other ways of solving this problem, see Example 3.4.5.
3.5.11. P(…) = …
3.5.13. Let … = "The hit-and-run taxi was blue," … = "The witness says the hit-and-run taxi was blue," and … = "The hit-and-run taxi was black." Then P(… | …) = …. Thus, the evidence against the blue taxi is very weak.
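As a numerical check of the ruin probability derived in 3.5.5 above: for p ≠ q the derivation leads to the standard closed form P(ruin from capital i, stopping at N) = ((q/p)^i − (q/p)^N) / (1 − (q/p)^N), which must also satisfy the difference equation with the boundary conditions "ruin from 0 is certain" and "ruin from N is impossible". The values of p, i and N below are illustrative assumptions, since the exercise's own numbers are not shown above.

# Gambler's ruin (3.5.5): the ruin probabilities r_i satisfy
#   r_i = p * r_{i+1} + q * r_{i-1},   r_0 = 1,   r_N = 0.
# Closed form for p != q:  r_i = ((q/p)**i - (q/p)**N) / (1 - (q/p)**N).
p, q = 0.6, 0.4        # assumed illustrative parameters
N, i = 20, 5

ratio = q / p
closed_form = (ratio ** i - ratio ** N) / (1 - ratio ** N)

# Solve the boundary-value recurrence by "shooting": r_N depends linearly
# on the unknown r_1, so two trial values of r_1 determine the right one.
def run(r1):
    prev, cur = 1.0, r1                      # r_0, r_1
    for _ in range(N - 1):
        prev, cur = cur, (cur - q * prev) / p   # r_{k+1} = (r_k - q r_{k-1}) / p
    return cur                               # this is r_N

a, b = run(0.0), run(1.0)
r1 = -a / (b - a)                            # choose r_1 so that r_N = 0
prev, cur = 1.0, r1
for _ in range(i - 1):
    prev, cur = cur, (cur - q * prev) / p
print(closed_form, cur)                      # the two values agree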
4.1.1. The p.f. of
is given by
for
with histogram
and the d.f. of X is given by
!
"
if if if if if if if
# # # % & & ' ( ' ) ' ' ' '
$ $ # $
" " "
#
& " * & $ * " $ +
with graph
4.1.3. The possible values of are
1
, # ,
2
*
x
and -
3
16
4
P
.
5
P(2 heads
/ 01 2 3 14 2 0 5
4
heads
5
5
4
4
P(1 head 5 P(3 heads The histogram is
5
4
0
and
5
P
5
5
P(0 heads
5
P(4
The d.f. is given by
5
if if if if if if
with graph
4.1.5. The possible values of P ) %
!
"
!
!
0
-2
are and P(1 or 4 heads
!
2
P
!
#
$ % ' (
17
"
$ &(
!
' %
!
x
!
% ( *
4
P(2 or 3 heads and P
!
!
"
#
$ %& '
!
$ &(
!
' %
!
P(0
or 5 heads
!
#
$ &(
' %
!
The histogram is
( ( *
The d.f. is given by
if if if if
!
with graph
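A sketch that tabulates the p.f. and d.f. in 4.1.5, assuming (consistently with the groupings "1 or 4 heads", "2 or 3 heads", "0 or 5 heads" quoted above) that the random variable is the absolute difference between the numbers of heads and tails in five fair tosses.

# 4.1.5 under the assumption X = |#heads - #tails| in five fair tosses.
from math import comb
from fractions import Fraction

n = 5
pf = {}
for heads in range(n + 1):
    x = abs(heads - (n - heads))                 # |heads - tails|
    pf[x] = pf.get(x, Fraction(0)) + Fraction(comb(n, heads), 2 ** n)

F = Fraction(0)
for x in sorted(pf):                             # d.f. jumps at x = 1, 3, 5
    F += pf[x]
    print(f"P(X = {x}) = {pf[x]},   F({x}) = {F}")
# P(X = 1) = 5/8,  P(X = 3) = 5/16,  P(X = 5) = 1/16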
4.1.7.
P
"
and % & ' ( 4.1.9. The p.f. is 4
) *
+
# * , &
56 7
.
98
for -
for 6
.
.
Thus, in general,
/ 0 1 0 2 0 3 3 3 3
/ 0 1 0 3 3 3 0 : 0
with histogram
and the d.f is < =
; 56 7
? AB C
D E
. > F
19
if if if
6
@
F B
G I
/ B E J
H
E
!
"
# ! $
with graph
4.1.11. First, we display the possible values of dice:
7
E
F
F
F
F
F
F
F
F
F
F
E
from here we can read off the values of the p.d.f. as
E D
F
D D
E D
Hence the histogram of
6
F
B
5
D F
x 4
in a table as a function of the outcomes on the two
F
E
Since each box has probability
3
D D
E
E E E E E
is
20
if if if if if if
B
B
B
B
B
B
F
and the d.f. is given by
B
E D
E D F
D D D
if if if if if if if
E
E E E E
F
B
H
G
F
G
G
G
B
G I
B
H
B
H
B
H
B
H
B
H
F
with graph
4.1.13. Since … is a nondecreasing sequence of events and the terms of the union are disjoint, Axiom 2 gives P(…) = P(…) = …. By the definition of infinite sums, the expression on the right is the limit of the partial sums,
that is, …. Applying Axiom 2 again, we get ….
4.1.15. Let … be a sequence of real numbers decreasing to … and let … for every …. Then P(…) = … and … for …. Furthermore, …, because there is no … for which the real number … can be … for every …, considering that …. Thus, by the result of Exercise 4.1.14, …. Hence, by the theorem from real analysis quoted in the hint, ….
4.1.17. Consider any fixed real number … and let … be a sequence of real numbers decreasing to …, and … for every …. Then P(…) = … and … for …. Furthermore, …. Thus, by the result of Exercise 4.1.14, …. By a theorem from real analysis, if … for every sequence decreasing to …, then …, that is, … is continuous from the right at …. Since we have proved this result for any real number …, … is continuous from the right everywhere.
4.2.1. 1. Let …. Then … if … and … if … or …
= > ?
E
e
f
V
"
#
%
&
'&
) * (
X Y W
P
X
Z [
\
q
r
s
f
]
J >
+
E
J D
.
-
) /
~
g p
r
D
0 1 2 1
w p
q
s
r
d
t
c
f
3
4 5 6
t
`
´ µ ¶
±
«
«
¬
¬
®
®
¯ °
±
±
¯ °
7
9
8
B
? R
C
È
É Ê
Ë
Ì
È
É Ê
²
²
Ö
E
T C U
F G
C O
O O
H
I
J G K
L
>
?
M
O
b
k
c _ d
l
l
]
d
t
v
v
{ y
|
}
³
´ µ ¶
³
®
± «
¯ °
¯ ·
±
²
³
¡
¯ ·
¸ ²
¸ ²
¢
£
³
¤
¥
®
º »
¯ ° ¸
¼
¦
² ¹
½
»
¾
¿
Í
Î
Ï
Ñ
Õ
Ø
ï
ß ã å ã
ê
2. Hence
ç
Ø
Ù
Ñ
Ñ
Û
Ø
Ù
ú
ë
û ü
ý
÷
æ
þ
Ò
Ú
Ñ
Ü
Ò
Ú
Ý
Õ
Þ àß
ð
ò
ó
ô õ
ö
Thus,
ðñ ù ü ø ì í ÿ
à
ß
ð é å ã
Í
Ð
î
ì
?
E
×
Ñ
Ô Ñ
Ó
è
<
D
N
S
Ð
Í Î Ï
Ñ
ë
1
Å Æ Ç
ç
7
: ;
a
j
ª
D
Q
l
d
§ ¨ ©
?
i
g
c u
t
K
y z
|
?
_
x
z
à Ä
f g
m n o
d
K
?
c
À Á Â
)
, -
^
m n o
^
g
c
y
A
O
g
g p
$
h
m n o
@
N
D
!
if if
ÿ ü
ÿ
÷
ü
õ
or
ü
22
ù ø
ö
and its graph is
á
â ã ä å ã
æ
ç
è ß
é å ã
ê
3.
û ü
ý
÷
if if if
2
3
x
4
5
and its graph is
4. P 5. P ! 4.2.3. 1.
Let C
&
> DF DE
"
'
G
H I
2.
Hence K
Thus, H
LM N G
F
O
P Q U
!
if if
)( * .
2
3
x
4
5
P
=
1
G
"
+
,
-
/
0
-
R
S
T
R
V
T
1
!
Then
-
"
2
#
3 54
2 4
6 78 9 : 8
J I
if if
and its graph is
23
$ %
.
3 5; 4
3 4 : 8 <
;
>? @ A =
B
3.
R
3
if if
4
x
5
6
and its graph is
4. 5.
P P
! #
" "
# $ % ,!
,$ %
2
&
P
4 *)
# $ %
' (
!
( # $ -
"
%
P
x
6
8
10
*) + !
.
# $ %
&
( # $ -
/' (
&
# $ 0 %
1 -
' (
*)
%
*) +
4.2.5. 1.
2. 3. 4. 5. 6. 7.
Roll a die. If the number six comes up, then also spin a needle that can point with uniform probability density to any point on a scale from 0 to 1 and let X be the number the needle points to. If the die shows 1, then let X = …, and if the die shows any number other than 1 or 6, then let X = …. Then P(…) = …, P(…) = …, P(…) = …, P(…) = …, P(…) = …, and P(…) = ….
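A simulation sketch of the mixed random variable described in 4.2.5. The two constants assigned when the die does not show a six are not given above, so a = 2 and b = 0 below are placeholder assumptions; the point is only how the d.f. combines jumps from the discrete part with the uniform spinner's continuous ramp.

# 4.2.5: roll a fair die; on a six, X is a uniform spinner value on [0, 1];
# otherwise X is a constant.  a and b are assumed placeholder values.
import random

a, b = 2.0, 0.0

def sample():
    die = random.randint(1, 6)
    if die == 6:
        return random.random()      # uniform spinner on [0, 1)
    return a if die == 1 else b

def exact_cdf(x):
    # P(X <= x) = (4/6)*[b <= x] + (1/6)*[a <= x] + (1/6)*min(max(x, 0), 1)
    return (4/6) * (b <= x) + (1/6) * (a <= x) + (1/6) * min(max(x, 0.0), 1.0)

random.seed(1)
n = 200_000
samples = [sample() for _ in range(n)]
for x in [-0.5, 0.0, 0.25, 0.5, 1.0, 2.0]:
    est = sum(s <= x for s in samples) / n
    print(f"x={x:5.2f}   simulated F(x)={est:.3f}   exact F(x)={exact_cdf(x):.3f}")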
4.2.7.
if if if if
1.
&
2. 3. 4. 5. 6. 7.
P P P P P
P
$ %
4.3.1. The p.f. of and the possible values of can be tabulated as
!" # $
' ( " ) * +"
%& %%
%& %%
%& %%
%& % %
%& % %
%& % %
%& %%
%& %%
%& %%
%& %%
%& % %
,-
./
%/
%-
,
-
* .
* .
-
,
%-
Thus, the p.f. of 0 is given by the table ' $ 1 2 !' # $
* .
-
,
%-
%/
./
,-
.& %%
.& % %
.& % %
.& %%
%& %%
%& %%
%& %%
4.3.3. The possible values of 3 are 0 and 1, and so the possible values of 0 are 4 5 6 7 4 8 ( % ( 9 1 2 !' # ( = > ? @ if A B C : ; < Thus, 4 5674 8 if A B D ? E F >? @ if A N C L C Hence, G H I A J B K > ? @ if C O A N D ? E M if D ? E O A F > 4.3.5. if Q N C L C G P IQ J B K > if C O Q N > and so G H I A J B P I S O A J B P I T U V W X Y Z P [ V M R if > O Q > ^ _ if X a b \] Differentiating both sides, we get the p.d.f. as f g [\ ] Y Z ` c if X d b e if X a b \] ` b if X d b e 4.3.7. b if X ^ g [ X Y Z P [ h W X Y Z P [ iV i W X Y Z ` P [ j X W V W X Y Z k ml if t l n o pq r s q
25
-
and
W \] Y Z
[X Y Z
a b u v
w
Differentiating both sides, we get the p.d.f. as 4.3.9. For a given
Thus, $
Hence, K L
. So, if
&" '
%
MG N
O
4.4.1. The values of
`
!
P LQ
a
"
MG N
b
y
n
f
c
O
R
if if
u v
w
otherwise.
j
j
k p _
k
l p
l
m p
m
n p
n
o p
o
q p
j
f f f f f
a
b
m
f
for given b and d are
d l
m
n
o
l p j
m p k
n p l
o p m
q p n
m p _
n p j
o p k
q p l
r p m
o p _
q p j
r p k
s p l
r p _
s p j
j _ p k
k
n p
l
o p
m
q p
n
r p
l
f
k
j
f f f f
k
q p
l
r p
m
s p
u vw
xy
k
f
f f f
j k
s p
l
j _ p
p z {
of
f
j _ p _
j k
and
_
j
f
`
f
j j p
f
j j p j j k p _
j
is given by the table below:
(Every nonzero entry of the table is 1/36.)
4.4.3. t
x
j p j p j p j p j p j {
a
} ~
~
}
~
2.
_
o
1.
v
S
_
and e
d
g h i
f
v
, there are two solutions modulo : and " , then # falls in the angle on the bottom between these two values. if , - . / ) 0 1 2 34 5 2 4 6 7 8 1 9 8: ; 1: < = > ? @ A BC ' ( if D E F G H E * +I if E F G J [ \ ] ^ ] \ if T U V W X Y Z
and so the joint probability function t h z
!
P&
~
4.4.5. For
¡
P¢ £
¤
¥
P¦§
¨
©
ª
¨
¨ «
26
¥
¬ ¬ ® ¯ ° ® ± ² ®
³
´µ ¶ · ¸ ¹ µ ¹ ·
º
j | l o
We can also obtain this result much more simply, without integration, because the random point being uniformly distributed on the unit disc implies that, for …, P(…) = (area of disc with radius …)/(area of disc with radius …) = …. Thus, … = … if …, … if …, … if …, and … otherwise.
4.4.7.
¹ ¹
" #
1.
$ % &
?
'
@
º
+
%
,
) (
%
+
-
.
*
0
0
-
. 1
B DC
A
B FE
/
G H I J G J H
0
. 5
2 3
K
L
M FN
O P Q I
d a
e f
+
T R S
6
7
4
!
8
. 5
6
9
: ;
<
;
=
>
?
4
3
U V W U
<
X
Y
Z
T
[
W U
X
T
Y
S V\
c
^ ` ^ ^_
S[ ]
a
b
Thus,
` c
_
2.
g h
ij k
if
{
l nm
a
m o
pq
r s t u s v
and
w x
y
| } ~ }
|
z
otherwise. Similarly, for
¡
¢ £
¤ ¥ ¡
3.
otherwise. If ÌÏ
É Ç Í
Ð ã
ê ë ã
ì ëí ù
î þ
and 4.
P
ø
è õ ï
©
ª«
Ñ
É ñ ò ó
ÿ
þ
¬ ® ¯ «
then ô ñ
Ò
°
ÌÏ
õ
± ²
³
¶ · ¸ ¹ ¶
ö ù ú
ð
ú ù ö û ÷
then
õ
ø
ü
ý
and
þ
ø
ý
þ
 à Ä
Ý
ý
ý
þ
õ
ÿ
ø ü
Å
Þ ß
Ô
and
ø
Æ Ç È É
Ö
ÿ õ
ù
ø
Ê Ë
ç è
ø
Ç Í
Å
ù
õ
ö û ø
Î
õ ø
ø
Ì
é
then
þ
ù
If
ÿ
þ
#
and
à áãâ ä å æ
Ù
Õ
27
then otherwise.
Á ¿ ¿ ¿À
Ù Ú Û Ü Ú Ü Ù
ü
ù
ø
½ ¾
ù
If and
÷
õ þ ü
Ô Ø×
ù
If
ù
» ¼
¸
Ö Ó Î Ô Õ ú ù ö û
É Ç Í Å ÷ ù ú ð
ö
º
´
÷ ø ÷
§
ð
ú ó
µ
¦ ¨§
!
$ %
"
&
'
%
(
)
%
&
*
& +
%
,
4.4.9.
Clearly, using the numbered regions from the Figure, P P P P P P and P P P P P P Thus, ! P P P P ! ! " P P 4.5.1. %& %& () ' ! $ ! $ % % *( and are not independent: For instance, # P ( %
&
'
%
&
&
%
"
"
./ 0 1
%
"
%
,
'
%
"
&
&
%
"
&
" &
"
# -
'
&
%
"
&
"
+
"
'
%
'
,
&
&
%
&
%
'
%
&
'
&
. 24 3 0 . 3 6 5 0 6 . 76 0
1
:8 ;9 <
"
and = >
. ? 0 1
.230 .350 2 6 2 . 76 0
1
: :8 ; @
"
Now,
:8 ;9
A BC C D
E F
GB C B H
4.5.3. I I and J are not independent: By Example 4.4.5, and J are both uniform on the interval KL M N O I M and, by Example 4.5.2, P J Q is then uniform on the unit square and not on R H 4.5.5. 1.
By the definition of indicators, I_{AB}(s) = 1 ⟺ s ∈ AB. By the definition of intersection, s ∈ AB ⟺ (s ∈ A and s ∈ B), and, by the definition of indicators, (s ∈ A and s ∈ B) ⟺ (I_A(s) = 1 and I_B(s) = 1). Since 1 · 1 = 1 and 1 · 0 = 0 · 1 = 0 · 0 = 0, clearly, (I_A(s) = 1 and I_B(s) = 1) ⟺ I_A(s)I_B(s) = 1. Now, by the transitivity of equivalence relations, I_{AB}(s) = 1 ⟺ I_A(s)I_B(s) = 1, which is equivalent to I_{AB} = I_A I_B.
2. By definition, …
m
i
n
o
p
c U f g q
are disjoint, and is x u l v x k transitivity, s t f g
w
hi j
m p
s t l
k
Similarly, s t h i j u s g h i j v s t g h i j k p o q w where the three constituents h i j v s t g h i j is l u x v x k l for i n o q w q , and l u l v l k l for i n o q r Thus, by s t hi j u s g hi j v s t g hi j k l w which is equivalent to
l
because o hi j
for
hi j
k
u i
s g o
n l
m
i
q
n
o
k
o q
p
q r p
28
o q
^
s t f g
3.
k
s t
u
s g
v
s t g r
By definition, … and … are independent if and only if P(…) = P(…)P(…) for …. By the definition of indicators, this relation holds if and only if P(…) = P(…)P(…), P(…) = P(…)P(…), P(…) = P(…)P(…), and P(…) = P(…)P(…) all hold. By the definition of independence of two events and the result of Exercise 3.3.5, the last four equations are equivalent to the independence of … and …. Thus, by transitivity, … and … are independent if and only if … and … are.
4.5.7.
If … and … denote the arrival times of Alice and Bob, respectively, then they will meet if and only if … or, equivalently, …. Now, … is uniformly distributed on the square …, and the above condition is satisfied by the points of the shaded region, whose area is … minus the area of the two triangles, …. Thus, P(… and … meet) = ….
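A Monte Carlo check of the meeting probability in 4.5.7. The waiting-time fraction w below is an assumed illustrative value (the exercise's own number is not shown above); for arrival times uniform on [0, 1] the square-minus-two-triangles argument gives P(meet) = 1 − (1 − w)^2.

# 4.5.7: X, Y independent Uniform(0,1); they meet when |X - Y| <= w.
import random

w = 0.25                              # assumed waiting fraction
geometric = 1 - (1 - w) ** 2          # 0.4375 for w = 0.25

random.seed(2)
n = 200_000
meet = sum(abs(random.random() - random.random()) <= w for _ in range(n))
print(geometric, meet / n)            # the Monte Carlo estimate is close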
4.5.9.
Figure 10. Let the circle have radius … and choose a coordinate system with origin at the center of the circle and so that the first random point is …. Then, if the second random point is within a distance … of the point …, it must lie in the intersection of the original circle and another circle of radius … centered at …. From elementary geometry, the angle … is …, and so the area of the sector is …. The area of the triangle is
Therefore the area of the intersection of the circles is
! #"
& '
( ' *)
,. - /
0
$
%
+
=
Thus, P(the two random points will be nearer to each other than B C
1 2
3
5 64 7 8 9
: ; =<
>
? ;
C D
E B F
@
A
G
4.5.11. 1.
H
I
J K L
and 2.
b q
d r f
¤ ²
¦ ³ ¨
¿ Ä
ÁÈ Ã
P
M
´
O
s
P µ
JN
t
P
K L
M
Q Q R S T U
vu
w x y
z
{ }|
~
V W
X Y Z V [
~
X \ Z ] Y ] \
_ a`
^
b c
de f
g h ai j k
b l
d m f n m o
·¶
¸ ¹ º
»
¼ ¾½
¿ À
ÁÂ Ã ¿ Ä
Á Å Â Ã Â Æ Â
É
£
¡
¤ ¥
¢
¦§ ¨
© ª £« ¬
¤
¦ ® ¨ ¯ ® °
¯ §
Ç
if if otherwise otherwise if and, from Part 1, otherwise and otherwise.
Ê
Ì
ÑË
Í
Î
Ï
Ð
Ð Ñ
Ò
Ó Ô
ÕÖ ×
Ø
ó ô
÷ õ ö ö ö ö
Í
Ï
Ð
and so
Ö
Ù
Ò
Ñ
Ü Õ Û
Ó Ú ê
Ð Ñ
×
Ø
Í
Ý
Ï
Ö
Ý
Ù
ë
Ó Þ
ê
é
ì
í î
ï ð
ñ é ò
å
30
p
Ñ
3.
n e
ø
ù ú û ü ý
Õ
×
ü ü ý
Ø
ß
â
á à
à
ã ä
å
æ
ç è
é
if
±
and
4.5.13.
From Definition 4.2.3, P
P . /
*+ P P(all $ ! ! " # ! ! " % ! &' ( ) % , ' ! P 0 1 2 3 4 5 6 7 8 9 : ; < P(all = > ? @ A B C D < A E F F A G H C A E I I I I J K E
1. 2. 3. 4.
ù ù
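The repeated "P(all …)" steps in 4.5.13 above suggest that the exercise derives the d.f. of the maximum of independent random variables, P(max ≤ x) = P(all Xi ≤ x) = F1(x)···Fn(x). The sketch below illustrates this identity for i.i.d. Uniform(0, 1) variables, where the product is simply x^n; the uniform choice and the values of n and x are assumptions made only for illustration.

# P(max of n i.i.d. Uniform(0,1) variables <= x) = x**n, checked by simulation.
import random

n, x = 4, 0.8
closed_form = x ** n                   # 0.4096

random.seed(3)
trials = 200_000
count = sum(max(random.random() for _ in range(n)) <= x for _ in range(trials))
print(closed_form, count / trials)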
4.5.15.
ø
T
By Definition 4.2.3 and Equation 4.5.31, …
fg h
M N O PQ R
U
t
u v w x
U
if if
X Y Z [ \
S
i kj l m n o p q l m r o s
`
û
y
V
W
V
]
for
^
_
`
and from Equation
a b c b
z |{ } ~ } ~
} ~
… if … and, clearly, … if ….
4.6.1. The joint distribution of X and Y is trinomial (with the third possibility being that we get any number other than 1 or 6), and so P(X = i, Y = j) = … for i, j = 0, 1, …, 4 with i + j ≤ 4. The table below shows the values of this function:
¡ ¢ £ ¤ ¥ ¦
i \ j        0         1        2        3        4     | P(X = i)
  0        16/81     16/81     6/81     1/81    1/1296  | 625/1296
  1        16/81     12/81     3/81    1/324      0     | 125/324
  2         6/81      3/81    1/216      0        0     | 25/216
  3         1/81     1/324      0        0        0     | 5/324
  4        1/1296      0        0        0        0     | 1/1296
P(Y = j)  625/1296  125/324   25/216   5/324    1/1296  | 1
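The joint table above can be regenerated mechanically. The sketch below does so, taking n = 4 throws (as the 1/1296 = 1/6^4 entries indicate), with X counting 1's and Y counting 6's; these variable names are the ones used in the cleaned-up statement above.

# Regenerates the joint trinomial table of 4.6.1:
#   P(X=i, Y=j) = n!/(i! j! (n-i-j)!) * (1/6)**i * (1/6)**j * (4/6)**(n-i-j).
from fractions import Fraction
from math import factorial

n = 4

def p(i, j):
    if i + j > n:
        return Fraction(0)
    c = factorial(n) // (factorial(i) * factorial(j) * factorial(n - i - j))
    return c * Fraction(1, 6) ** (i + j) * Fraction(4, 6) ** (n - i - j)

print("i\\j " + "".join(f"{j:>10}" for j in range(n + 1)) + "   P(X=i)")
for i in range(n + 1):
    cells = "".join(f"{str(p(i, j)):>10}" for j in range(n + 1))
    print(f"{i:<4}" + cells + f"   {sum(p(i, j) for j in range(n + 1))}")
margins = "".join(f"{str(sum(p(i, j) for i in range(n + 1))):>10}" for j in range(n + 1))
print("P(Y=j)" + margins + "   1")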
Now, the conditional p.f. is given by.Ò Ó ì í î
ï
ï
ô õ ö
ð
ö ô õ ô õ ö
ñ
ö ô õ ú ö
ò
ö ô õ ø ö
ó
ö ô õ ø
ÝÜ
Ô Õ ÞÌ Ö
äç è
ë
ð
ñ
ö ÷ ø ô õ ÷ ù ø ô õ ø ô ø ô õ ø
ø ö
÷
ð
ô õ ù
õ ø
ï
ô õ ø
õ ï
ï
ô õ ï
ï
ï
ï
ï
ï
ø ô õ ï
ÔÕ Ö
and so its values are
à á é âêã ä å æ ç è
ß
Ò Ó
ò
ó
ö ô õ
4.6.3. By Example 4.5.4, P û ü
ýþ
ÿ
ÿ
and, by Equation 4.6.26,
ø
Pô P
ø
P
ô
! " # $ %
P
31
#
if if
'
" #
& (
) * + ) 0 1
2
* 3
P
if + , if / * otherwise. .
-
*
-
/
. 1
4.6.5.
First, we compute the marginal densities. By de nition, and so $ %
C
&'
(
)
*
.
! / 0 1
+ , -
/
5
6 7
2
3 4
if 8 9 6 9 otherwise.
8
D
if ! " " otherwise
:
#
if
otherwise
and
Thus, by Equation 4.6.12, ; <
?@ A 6 7
=>
B
if G H I J K L M Note that O P J is the length of the horizontal line segment inside otherwise. the triangle at the height J I and so the conditional distribution of Q given R S J turns out N to H S O P J I as expected. Similarly, or by to be uniform over the interval from H S if ^ _ ` a b c d symmetry, we obtain T U V W X J Y H K S Z e [ otherwise. [ \ ] 4.6.7. D E F N
1.
First, we can easily compute P f g
h
i jk
l
_ b
for
e h
_
and if { | if if {
m
x
p q rs f i j_ b
Hence,
l
Pf _
t
h
u
i b
l
Pf u
h
i
v
By Equation 4.6.28, ² ² · if ³ ´ µ ¶
³
¡ ¢£ ¤
and
ample 4.4.9, we computed
² ¼ ¸
½ ¾
¹ µ
¹º » ¼
l
if otherwise
¿
w y
{ z
~
}
z
h
i
m
{
~
}
}
32
as
o
}
Here the numerator is ° ª« ® , ¯ and it is ± 0 otherwise. In Ex² µ if º ´ µ ¶ Á ² à À Ã Ä µ µ ¶ if ´ à  º if µ ¶ º or ´ µ ±
¦ § ¨© ª « ¬ ® ¯ © ª ®
¥
³
_ b
e
n
Thus,
½
.
" #
%$,
&' ( ) *
+ -
/ 7 8 9 : B
if if otherwise
if 0 1 2 3 if ; < = > otherwise
4
5
; ?
Also, by symmetry,
!
6 @
>
A C
Q O
2.
From the picture above, P D E
IH J
F G
PD K
<
L J M
<
; F G
N P
p
and so,
\ ] ^_
`a b
differentiation, { |
c
}~
Pd]
e f gh e i j i k l
n c
m o
s t s
u s t
33
if if if
q v w
if otherwise
T U T U
T U
V
q p
r x
q
z
U W X V
p q r s
s y
R Y
if if if
R Q R
S Z [
Hence, by
Q R X
Z
X
5.1.1.
5.1.3.
! " # $ ! % & ' () * +
Substituting
. * /
, -
we get
gration by parts, with ? @ 8 9 > = A @ : ; < = 8 > gives B _ ` a c b d e f g h d i A second integration by parts, with r s t u v w x y z |{
}
~ |
5.1.5. From the hint,
Ï
e f g h d l
k
results in
n
op q k
and
ÍÎ
d l h m
k
and inte-
¸¼ ® ¯ ° ± ² ³ ´ µ¸ ¹¶ · ¬ ¡ ¢ £ ¤ ¢ ¥ ¦ § ¨ ¦ © ª « · ½ · º» Ð Ñ Ò Ó Ô Õ Ö × Ø Ö Ù Ú Ù Û Ü Ý ¦ Þ¨ « ß© à á â ã Þ ä å ä æ ç è é ê ë
by the geometric series formula, ì íð ñî ï ý þïÿ ò computed values, we obtain ÷ øû üù ú
GF
CD E @ j
6 7 8 9 : ; < = 8 > 4 ' 5 H I J K L M N O QP R S U T V W X Y Z [ \ ] ^
12 3 -
· ¾¿À Á Â Ã Ä Å Æ Ç Å ÈÉ Ê Å É Ë ÌÍ
0
! "
$ % & 'ú ( )
#
í î ï ô
õ ö
Furthermore, Adding the two sums and their
ð î ï
û
ó
ò
ù ú
*
+ ,
-
from which Equation 5.1.20 follows at once
by rearrangement. 5.1.7. The distribution of a discrete . is symmetric about a number / if all possible values of . are paired so that for each 0 1 2 / there is a possible value 0 3 4 / - and vice versa, such that / * - : * 6 7 0 3 8 9 For such . 7. 8 5 0 1 5 0 3 / and 6 7 0 1 8 5 ; all 1 0 1 6 7 0 1 8 5 ; < = > ? @ A B C @ A D E F F F F is not a possible value of P .) In the B C D E G H I J ? @ K B C @ K D L (Here B C D M N O if last term, we apply the symmetry conditions: G H I J ? @ K B C @ K D M G H Q > ? C R F S T U V W X T U V Y Thus, Z X [ V \ ] ^ _ ` a b c d e f g h i c d e c h j k l m ` a c d e f g h i k l m n a c d e f g h i c d e c h j k
all
g c d
ef g h
j
k
c
g d
all
ef g h
j
c o
5.1.9. By Theorem 5.1.3, p e q r s t u wv v x r y z x the solution of Exercise 5.1.3, we obtain 5.1.11.
s {
t
´ µ ¶
·
¸ ¹
º
³
´ »
ing the variable Ò to Ó ö ÷
Ô
¹
º
¼
Õ Ö Ø ×
÷ î
îí í
í
í
ð ñ ò ù ï
ò ï öó ô ï
ø
õ
ó
5.1.15. For a binomial
#
ù ú
òû
ó
½ ¯
« ¿ ½
¾
´
ù
¹ À ¾
leads to Ù ö ü If õ
|
r } ~ w
v
Using the integral from
x
5.1.13. Example 4.3.1 gives, for continuous and So, ³
u
x
º
¼
¾
ÚÛ
Ü
Ô
Ý
½ ¯
, where
ÂÃ Â Ä Å ½
Þ
¾ Ú ß Ó
¡
¢ £ ¤ ¥
Æ Ç È Ã É Ê
Ë Ì Í
¦§ ¨
If Î
ý
þ
Ï
Ð Ñ
® ¯ ª ° ±
then chang-
à
á Ü
ãâ ä å
æ ç è é ê ç
ë
é
ì îí í
ï ð ñ
òï ó ô ï õ
then the same change of variable yields ÿ ú ò ó ø ! " # just as before.
Equation 5.1.49 and De$ nition 5.1.1 give 34
²
Á
Ö Þ
©ª © « ¬
- 0
%
&' ( ) * + ,- . /
- 2 , 1
Hence,
# $ % &' ( ) * & + , + ,./
+ , 0 1 + , + 2 + , 3 4 5 6 76 8 9 : ; < = 4 5 6 >
!
"
5.1.17. In this case, Theorem 5.1.6 does not apply, because ? and @ are not independent. Nevertheless, Equation 5.1.52 may still be true, and we have to check it directly: By Equations 4.4.14 and 4.4.15 and by Theorem 5.1.1, A 7 ? : 4 A 7 @ : 4 B > On the other side, by Theorem 5.1.4, A 7 ? @ : 4 C ED C ED O P P Q R S T R TS U V V X W [ \ O P W X Y WZ X W [ \ ] ^ _ ^ _ ] Z { | v w
D
5.1.19.
r x y wz s
|
if otherwise t } r }
t
M
N
Thus, Equation 5.1.52 is true.
n l m |n o p q r
and
s
t u
So, by Theorem 5.1.4, ~ t
F G H IF J G K L F L G
j k l mn o p gh i
ba c da e f
`
D
}
y
}
u
w
¦ § ¨ © ª « ¬ ® ¯ ° ±
5.2.1.
¬
²
¶ · ¸ ¹ ² ³ ´ µ
¶ º
Let Å be a continuous r.v. with density
»
¼ ¼½
Æ ÇÈ É Ê
º
¾ ½ ¾ ¿ Ë
Á
À
¾¿
ÒÌ Í
º
½ÂÃ
5.1.21. r y zq r qy s Using the hint and the formula for the sum of a geometric series, we have, for ¡ ¢£ ¤ ¥
z s
À¾ Ä
if Ï Ð È Ð Ñ Ä This function is otherwise Ü Ý Now, Þ ß à á Û â äã æ å ç è é ê ë ì If we choose û then,
ý does exist.
È Î
indeed a density, because it is nonnegative and Ó ÕÔ ×Ö Ø Ù Ú Û and í î ï ð ñ ò does not exist, because ó õô ÷ö ø ù ú û ü ý þ ÿ ú clearly, ý also does not exist but, since
5.2.3. First, ý ý for
ý
ý û and here the two s cancel. Second, by Theorem 4.5.4, and are independent, and so, by Theorem 5.2.3, ! " # $ % & ! " # % $ ! " # % ' Third, by Theorem 5.2.1, ! " # % & ( ! " # % and ! " # % & ( ! " # % ' Thus ! " # $ $ ) % & ( ! " # % $ ( ! " # % ' 5.2.5. 1.
2.
&
*
+ # #
, - . /
5
6 0- , 1 . 2 3
4
0- , 1 . . 2 3
9 2 / : /
From the < rst part above, = that is, when G
J
H F
5
6 07
, - .23
/
5
08 07
, - . 0- , 1 . . /
0- , 1 .2 ; > ?@
and then L
5.2.7. By Examples 5.1.4 and 5.2.4,
4
L
A BCD E F MNO
NO P
as a function of G F is smallest when H
I GPD E J
J
L
NV P
35
Q D J
J
I G J
K F
R S T NO P U
XW Y Z
[\ ] ^
_
Z
[` ] ^
_
X] a Y
and
1.
Thus,
and
2.
3.
%
&'
(
;
>4
7 ? ;
4.
K
LM
N O
5.
t
u vw
x
y
z { |
)
*
%
&'
>5
) %
Now,
¡
Á
Ä Å
Æ
ÂÃ
¢ £
A HB
C
AI B J
WX
Y Z
[
\
WX
] Z
^
{
x
~
Á
¥
ÂÇ
¨ ©
and Ê
È É À
â ê
Ý
Ý
é
ö
à ù
ý
ú
and so, *
+
,
ì í ö
-
Thus, 5.3.1.
ë
I
E
Let us write
N R
F
G H
Ý ä
û
/ 0
ï
å
øù
.
æ î
ü
1.
3
L
4
M N
ý
1.
O L
M P
2 34
Y Z ` Y
N
S
[
c
e _
a Zb
f c
f
X
6 7 8
9
:;
¯°
· ¸
34
5
6
ýþ
ú û
'
+ 23
(
2)
0 5 6 7
4
8
9 :
;
< :
=
/ 0
@
A B
C
D B
E
F
A G C H
R
IJ
H
F
A G C K
M
L
A NO
H
G
P F
NC
A G C H
J
E
R
F
Q
@
ü
ý þ
ö
ö
ý þ
ð
ñ
õ
ñ
ý þ
þ
ñ
õ
Q
û
õ
A B
C
V
A G C
E
T S
U
G
@
R
U
M
A NO
H
G
NC
Q
G
E
U
W?X
W
M
A NO
H
G
NC
E
P @
A G C
V
X
Y
Z
[
f l
39
m
c
no
g
p
e
\]
f c
^
no
_
g
h
` a
b c
ij
q
d
e
f c
g
h
ij
k
i
P such that f l
c
no g
p
e
!
"# $ %
&' &
. Also, by the definition of … as the minimum, we have P(…) … for …. Hence, P(…) …; thus, … satisfies the two conditions in the definition of the median.
no g
m
5.6.9.
0 1
The d.f. of this r.v. is F G H IJ K L
ij
M N J O
C
+
,- . /
for
J
P
IQ R
i j
3
if - 4 if > ? if C D C Its graph is
78 9 : ; < =
2 C
p
K
E
5 6 @
A B
C
Thus, the quantile function is
A E
1
0
1
p
-1
5.6.11. From Equation 4.2.15,
F G H IJ K L
T S U
VW
if if if
] VW _ `
40
W Z [ \ X Y [ W Z ^ \ Y \ ^ W ] \ Y Y
and its graph is
2
1
0
p
41
1
6.1.1. 1.
]
is Poisson with
] _
! "
2. 3.
P7 8
4.
is Poisson with #
9 7:; <
and 8
:
PO P
Thus P
$
= 7:; <
]
] _
P
]
X
! " % " $
?P 7 8
7:; <
:;@
=
& ' ( )+ *, >
)/ ., -
A: B 6 C D EFG H
}
§ ¦¢ £
U WP
]
) 0, 1 2 )
I JK L M N J
OU S T X S T
V ORS T
P
]
Thus P ! :; >
]
P Y Z [ Y Q \ ] ^ and _ ` a b \ ] ^ \ d Pa_ a ^ \ ] c \
and P
Q ORS T U p qr s t q
]
3 45 6 5 4 l
e fm g h i h j j jk g j ijn nk
o
6.1.3. u v
w
.per x
y z. w
v
1.
P{ |
{}~
2.
P
¡ ¢ £
¥ ¤¢ £
3.
P² ³
²±´ µ ¶ ´ ·
± ¸ ¹ º » ¼ ½¿ ¾À Á
½Ã ÂÀ Á
4. 5.
PÐ Ñ Ò PÜÝ Þ
Ó
ÔÕ Ö
ß
Þà á â
~
6.1.5. ð ñ ð P(even P(odd
3
4
5
PÐ × PÜã ò
ó
! "
#
ÐÔÕ Ö
Í Õ Ö
Ü ä å Þà á â
Ø Ù Ú Û æ á â
è ô õ ö ÷ ôú õ ûø ù ü $ %
© ¨¢ £
ª¢ « ¬
®¯ ° ± ®
½ ÄÀ Á ½Æ ÅÀ Á ½ Í ÌÔ Ï Î Ì
½È ÇÀ É Ê
Ë ÌÍ Î Ï Ì
ç è é ê
ë ìíî ï ì
÷ ô þ õ ûø ý ü ÿ ÿ ÿ
& '
#
( ( ( )
*
+ , - . + , - .
*
+ , / 0 1 2
6 7
jk vl
n
j o v m
m
n
jo vl
t o vl
p
3 5
p u l
q r n
jk v s
p
xy z { | } ~ } ~ | } ~ } ~
¡ ¢ £ ¤¡¥ ¦ § ¥¨
©
t k vl
m
n
jk vl
p
u wn
jo l
m
n
On the other hand, P(even 6 4
9 :
P(odd and so, adding the two equations, we get 8 P(even 3 5 6 ; < = > ? @ A ing them, 8 P(odd 6.1.7. ; C D E D E F E F G C F H D I J C D I E D I Consider the instants B and let K L and K M denote two distinct interarrival times. Then N O P Q O R ] ^ _ ` a b c d e f b c d e a g b c d e f g b c e f he f g P(i j k l m n j o l p q r n j k s t k l m n n
jo
m
/ 0 1
E
ST
F I
U
jk l
t o l
, and subtract-
F I G
V W T X Y p
p
E
C F I
Z X[
\
j o vl
m
u r u r n
|
|
ª « ¬ ®
¯
° ± ² ³ ´
® µ ¶
· µ¸ ¹
where in the last step, we used part 2 of Theorem 6.1.7. If º » the proof would be similar. 6.2.1. Using the table, we obtain · µ ¹
1.
P¼ ½
¾
¿
¸ À
ÁÂ Ã Ã
¿
42
3
2. 3. 4. 5. 6. 7. 8. 9. 10.
¸ À ¶ ÁÂ Ã Ã Á ¿ ¿ » ¿ ¿ P¼ ½ ¸ P¼ ½ » ¿ » ¸ À Á ¿ ¿ ¿ P¼ ½ ¾ ¶ ¿ ¸ » P¼ ½ ¶ ¸ ¸ ¶ P¼ ¿ ¾ ½ ¾ ¿ » P¼ ½ ¾ ¿ P¼ ½ ¾ ¶ ¿ ¸ À Á Â Ã Ã ¿ ¶ Á ¿ ¿ P¼ ½ ¿ ¸ » ¶ P¼ ¶ ¿ ¾ ½ ¾ ¿ ¸ À ¶ Á Â » Á P¼ ¶ ¿ ¾ ½ ¾ ¸ » P¼ ½ ¾ ¸ ¶ P¼ ½ ¾ ¶ ¿ ¸ À Á ¶ Á ¿ ¿ P¼ ¾ ½ ¸ » Á P ¼ ½ ¾ ¸ » Á Â À Á P¼ ¶ ¾ ½ ¾ ¸ » Á Â P ¼ ½ ¾ ¸ » Á Â À Á P¼ ¶ ¾ ½ ¾ ¸ » Á P ¼ ½ ¾ ¸ » Á Â À Á ¿
»
»
ÁÂ
Á
6.2.3. 1.
¼
¸
B D E F G
»
changes sign at B E H FI 2.
that is,
23 4 5 6 7 8 9 : ; < = > ? @@ AB C ! " # $ % & ' ( ) * * + , - . 0 / B E H F I '1 K at Thus, J has points of in ection at and only
at
k o p g k q r ss t u v w PR N O P Q R S T UV T W XY Z Y [ \ ] ^ _ ` a b d e c f g h gi k j l i m h i j n ¥¦ § ¨ © } x z { y| } ~
¢ ¡ ª « ¬ ® ¯ ° ± ²³ ´ µ ¶· ¸ ¹ º · » £ ¤ ¼ ½½ ¾ ¿ À changes sign at Á Â ÃÅ Ä Æ Ç È É Ê that is, at Ë ÌÎ Í È Ï Ð Ñ Thus, Ò has points of Ó in ection at and only at Ô Õ Ö Ï × Ñ AM C
L
E
6.2.5. 1.
Assume Ø Ù Ú Ñ Then the d.f. of Û can be computed as Ü Ý Þ ß à Õ P Þ Û á ß à Õ P Þ Ø â ã ä á ß à Õ P Þ â å æ çé è ê ë í îìï ð ñ óòô and from here the é ÷ ø ùú ø û üý þ ÿ ý ç èõö Chain Rule and the Fundamental Theorem of Calculus give its p.d.f. as A comparison with De nition 6.1.2 shows that this function is the p.d.f. of a normal r.v. with in place of and in place of For any A comparison with Theorem 6.2.5 shows that this function is the m.g.f. of a normal r.v. with in place of and in place of
; <
2.
L
T S
T
<
E D I
U
F G
H
V \ ] ^
! "
#
$ ! % & $
'
(
) % *
+" + &
, - ./ - .0 1 2 3 4 4 5 6 7 . 0 8 4 5 9
:
>
; K [ \
I Z
S
=
I J
_
K L
? ; @
MN O
` ab c d
P
e f g d
A B
Q
I J
@
K L V W
R S T U
B CL
P
S
I
T Q
R S
J
W
L
T
P
S
T X
J
M Y N O
P
ab h f i g i ] ^ j
T
^
k l
^ j
m
n
l
o k p
q
p
6.2.7. Comparing with the general normal p.d.f. and this distribution is normal with 6.2.9. and denote the two weights. Then, according to Theorem 6.2.6, Let with and Thus, P P P ` s tu v w x y z w {
s tu s x y z w y
} w|~
r
· ¸¹
¸
º
» ¼ » ½ ¾ ¿
À
Á
 »
Ã
Ä
 » ¼ » ½ ¾ ¿ ¿
¡
Å
¡
¢
£
½ ¼Á Æ ¼
43
we can see that
® © ¥ ¤ ¥ ¦ª «§ ¬ ¨ ¦
¯
is normal ²
°± °³ ± ´ ± µ
¶
6.2.11. then, since is strictly increasing, we can solve this equation for to If get or Here is the area of the tail to the right of under the standard normal curve, which equals the area of the corresponding left tail, that is, So, Solving this equation results in , which, when we substitute from the rst equation, yields 6.3.1. and :P We use the binomial p.f. with
À
Ä
 »
Ã
¿
Ã
Ä
Â
¿
À
»
Ä
Â
¿
À
Ä
Ä
»
Ã
Ä
Â
¿
À
¼
À
¼
»
Ã
Ä
Â
¿
»
Ã
 Ã
¿
¼
Ä
 Ã
¿
À
Ã
 ¿
Ä
Using the normal approximation, we have P
0 ' &1
2
3
4
( &1 5
P
+
C D E FG L
M
CE F K L D
N
OK
*
+
' %
6 78 9 : ; < = D
N
, !
>
C E FG L P
?
M
@
N
P
8 < =
CE F K L Q
A
-
!
+
' %
.
C D E FG
H
I
#
"!
,
J
,
!
$
"!
E F K L
% &' ( ) &
"
+
M
and so,
/
N
C E F K L
D
B
C E FG L D
N
and
+
= 78 9 : ; < =
8 < = N
K
E FU S K V D
E FG R S T Q
K
B
E FW R K W F B
6.3.3. and By CorolA single random number is a uniform random variable with lary 6.3.2, the average of i.i.d. copies of is approximately normal with [
X
Y
]
`
]
f
^ [ g g
\ e
1.
We want
2.
¦
§¨
P
©
Ç ÈÉ
^
^ [ _
\
b c c
^
]
a
`
[
d e
^
and P 6.3.5.
[Z
M
]
h
ËÍ ÎÌ
È Ê
Ï
Thus, P
« ¬
ª
i ji k l j
mi jn l
Ð
Ñ Ò
table, amounts to
Ó ÔÍ ÎÕ
éë ìê
í
Ï
Ö
×
§ ¨
Ø
or
î ïð ñ ò
®
í
i jq r s
P
t
u v wx y z { w|
z { w|
{ w{ } y
~
{ w | z { w|
{ w { } y
~
{ w{ } y
¡
« ¬
¢
and
£
£
« ° ¬ ©
¯
¥ ¥
¤
P
ª
±
²³
â
ã
¹´ ºµ ¶ · ¸
Á ¼ ½¾ ¿ À
»
¿ À Â Ã Ä
Equivalently,
Ù ÚÛ Ú
ó
o
and so
and so we want P
p
o
Ý Þà áß
Ü
Å
Æ
which, from the
ä åæ ç è
ô õ ö ï
6.4.1. successes will occur before failures if and only if the th success occurs at the th trial, for any . Thus, using the negative binomial p.f., we obtain P( successes before failures 6.4.3. If the number of failures before the th success is then the total number of trials up to and including the th success is Thus, P P where is negative binomial, and so P 6.4.5. P P P for and 6.4.7. ÷
÷
ø
ù
ú
û
÷
ü
ø ý
ø
ý
þ
ÿ
ù
÷
# $
@ >
(
?
? A
)
*
< B >
=
C >
+ ' >
?
( ,
-
)
. /
)
& '
(
)
*
/
& '
-
)
.
0
*
/
P
T
)
1 2 5 3
?
3
A D
E F
G H I J
K E F
G L
M
N
O
! "
6 7
& '
< = >
%
P O
Q
R P O
Q
44
S
T
T
U
5
8
4
6 7
5
1 9 3: 2 3
2 3
4
N
3
M
Q
V
P M
Q
V
: 8
:
4
9 3
2
4
Q
R P T
T
T
T
3
;
Letting
denote the gamma density from De nition 6.4.2, we have for This expression equals 0 if
is positive and bounded, it must have a maximum at this critical point. 6.4.9.
8 9: = > ;
+ 0 1 2 0
1.
! "
M
N O
PQ
) '* (+ , - . /
' 3 4 -
k
lm n
R
f
8 69 : 7 ;
Since
: 9: = A B CCCDE F G H A B
<
5
for any positive
I J
L S
T
UQ
V W
X
YT
o qp
R ZV
PQ
v y z { y u
3.
5
6 7 ? @
integer K 2.
$ &%
#
t ru v s w x
a ^ _ [ \ [` ]
S
c ` d e [
b
x
r
f
hg i j
y |w } ~
for
6.4.11. ¡ ¢ for ¥ We prove it reduces to ¦ § ¨ © ¨ ª ¨ « « « by induction. For ¥ ¦ § £ ¤ ¢ Equation 6.4.30, which was proved in the book. Next, assume that it is³ ¸ true for ¥ ¬ © « Then, using also the reduction Formula 6.4.11, we get ® ¯ ° ³± ² ´ µ ¶ · ° ¹ ³ ² º ± ² » ¼ ½ µ ³ ¸
³ ¸ ° ¹³² º ± ² ¶
·
³ ¸ ° ¹³² º ± ² ½
µ
å æ ç
¸³ ¸ ° ¹³² º ± ²
³ ¾Á Â¿Ã Ä Å Æ° Ç ¹ È É² º ʺ ËÀ Ì
Í
Thus, the
Î Ï ÐÑ Ò Ó Ô Õ Ö Ñ × Ø Ù Ú ÛÜ Ý Þ ß à
á
â ã è é
â ã
truth of the formula for any í î ï implies its truth for í ð and so it isâ proved í ã ä âfor ê ë ã any é ì 6.4.13. ì ñ and óò are i.i.d. standard normal variables, and so òô õ is ö ÷ with 2 degrees of freeò dom. Thus, ø ù ú û ü ý P ú þ ÿ û ü ý P Hence, if , which shows if that is exponential with parameter In particular, the distribution is the same as the exponential with parameter 6.4.15. if if and By Theorem 4.5.8, if if if Thus, if if if and otherwise otherwise Comparing these expressions with De nition 6.4.4, we see that is beta with and and is beta with and . 6.4.17.
8 9 : 7
;
!
# $ % "
&
'
)( * *
- * . ,
+
/
1 02
4 5 2 2
3
6
8
B
? F G
C D E
N
Q RP S
O
A
J
K
?
E I E H
Q
M
@
T
U
L
Q
QP T
`
V
W
XY Z
O
[V \
X Y Z ]^
O
_
c e
a
b g
b
`
g
o
pg
o
i q
k l m r e
n
_
g
o
k g
{
|
}
y ~
o
b
d l
d
l
h
b g
l m e
b g
a
l
c
d c
d
c
h
b g
g
g
t u
¨
©
¤
¥
§
¥
kc m
n
i
w v
xy z
s
¦
45
¡
£
¥
n
¢
§
k l m
f
j
f
i
¤
¥
¦
2
1.5
1
0.5
0
0.2
0.4
0.6
x
Beta density for
¤
¥
0.8 ¨
§
1
¥ ¡
2
1.5
1
0.5
0
0.2
0.4
0.6
x
Beta density for
¤
¥
0.8 ¨
§
¥
1
¡
46
0.5 0.4 0.3 0.2 0.1
0
0.2
0.4
0.6
x
Beta density for
¤
0.8 ¨
¥
§
1
¥ ¡
1 0.8 0.6 0.4 0.2
0
0.2
0.4
0.6
x
Beta density for
¤
0.8 ¨
¥
§
1
¥ ¡
7 6 5 4 3 2 1 0
0.2
0.4
0.6
x
Beta density for
¤
¥
0.8 ¨
§
¥
1 ¡
47
4 3 2 1
0
0.2
0.4
0.6
x
Beta density for
¤
0.8 §
¨
¥
1
¥
¡
6.4.19. In Theorem 4.6.2, Equation 4.6.26, we )substitute * + , - . / 0
for
# (
if 1 2 3 2 otherwise
! "# $% & '
and
1
two expressions together, we obtain ; B C D E F
G H
I
J K L M N O P M Q
J
R
4
Then, multiplying these 5
ST U V W
T U V U Z Z Z U [
U for and X Y where we otherwise ] left the constant \ undetermined. Its value could be determined by the coef cients in ^ _ ` a and b a and the integral in the denominator of Bayes c Theorem, but we can d nd it much more easily by noting that the variable part being a power of e times a power of f g h i the posterior density j k l m must be beta. Thus, j k l m is beta with parameters n o p and q g n o r i and 6 7
89
:; < = >
?
@
A
T
s
t
u v w x y z {| } x y ~
6.5.1. Clearly, u and as linear combinations of normals, are normal. To show that they are standard normal, we compute their expectations and variances:
u
and
because ¾ é ê
ë
· ¸ ¿ º èî
¼
ì íç è
é ê ÷
ò
ó ô
õÿ ú ø
ù
÷
· ¸ ¹ º ï
ð ñ
° ´ ¢ ¡
Now, Á
¾
ä å æ æ ç è
¼
£
 Ã
½ À ò
ó ô
õ ö
¦
È É ·Ä ¿ º
because
¢¡ ¡ ¤ ¡ ¥
÷ ø
ù
¼ ò
§ ¨ © ª
Ê Ë
Ì
) ) * + ,
@ Z
1, 0 - . / ,
2
/
10 0 0
Í Î
ó ô
õ ö ú ø
+ 3 4 5
9 6 7 88 8
Ñ Ò
±
² ¯³ ³
Ó
µ
Ô ÕØ Ö Ù× Ú Û
¶
· ¸ ¹ º »
and
! "
ú ü ÷
ù
ú ó ÷ ÷
Ü
ý
#$ % & '
9 7 ;8 < :
; = > ? A R W X R Y A C D E F G HI G H H JKL M N O M P Q R S T T U V Q T S R R P Q R S T R U V e fg e h i j i i h i i h [ a T j j j j k l m n o n j p q j o o o o j r T \ ] ^ _` ^ ` ^ ^ bc d d d B
Ï Ð
û
ù
¯ ° ¬ ® ®
48
Ý Þ
¼
ß à á â
½ ã
Å Æ Ç Ç
(
«
j j n j p
si j h o o
i j oj k t
ú ó ÷ ú þ
Also,
o Similarly, $ ! " # ! %&
o
> ?@ A ( 2 2 < B ? C D EF G G / 114 (* +, -. /01 3 1 5 6 7 standard 8 9 : ; ; normal, and so, using the fact that H I and H E are0 independent b e f ; = J K L MN N T I U R I O E P Q Y Y Y a a a I S T U VW XX W UX [ W X X ^ _`d X Z X \ ] ^ _` ^ `` g i h _ ` c _ r r r h j k l m ln m l l op q p r s t q q t r q u t q r t r r v w p r x t q q u t q r y z { | } q 6.5.3.q ~
' )
Equating the coefficients of like powers in the exponents in Equation 6.5.14 and in the present problem, we get … for the coefficient of …, … for the coefficient of …, and … for the coefficient of …. These three equa-
and
Ì É
ñ ò î
îò õ
óô
constant
ô
ò
õ
ô î
Ó Ô Ñ Ò ÒÑ Õ Ö×
Furthermore,
Ê Ë Ð
Í
´ É Íº ÝÑ Ü
Ø Ù Ú Û
¹ Ä ¶º » ¼ à ŠNow,
for the coefHence,å æ Ì È Í Ë
Ê Ë and Ê Ì È Ê Î Ì É Í Ï Ð Ó ß à á â ã äå æ ç å æ ç é ê ëì ì é í è è ì
Û
ê ë ì
ì é î ï
è ì
ð
Û Þ exponent differs from the given one by the This , which can be split off, and since we obtain
î ô î ÷ ø ù ú û
ö
° ± ² ³ ´½ µ ¶ · ¸ ¶ ¶ Á Â Ã ¾ ¿ Ä Á
¼ ½ ¾ ¿ À
tions for the three unknowns yield µ Æ cients of Ç È and Ç É we get Ê Ë Ì È Ê ¹ Ì
üù ý û
þÿ
Thus, by Theorem 6.5.2
! "
#
"
$
is a bivariate normal pair with the above
%
parameters. 6.5.5. By the result of Exercise 5.4.8, &
'
(
#
! )
)
$
*
+
,
-
, . /
-
0
1
2
%
#
! "
% 3
4 5
6
7 8
9
:
8
7 ; <
9
=
>
?
@
A B
C
D
B
: 8
5
6
7
9
7
; <
9
F
G
H
A B
E
8
E
Hence, By Theorems 5.4.2, 6.5.1 and 6.5.4, and are independent if and only if their covariance is zero, that is, This equation is equivalent to 6.5.7. The conditional expected score on the second exam is given by Equation 6.5.9 as The conditional variance of is given by Equation 6.5.10 as From the table, the 90th percentile of the standard normal distribution is Since under the condition is normal with and we obtain J
>
?
@
A K
C
D
K
7
8
E
; <
J
9
A F
G
H
A B
C
:
L
F
G
H
A B
J
E
K
C
8
E E
5
6
7
J
9
>
?
@
A B
C
D
B
M
8
E
I
K
8
7
; <
J
9
A F
G
X
g h
i
jh
k
l
m
n
A B
Y Z
V
R
f
H
V8 R
OS P T UQ
o
F
:
C
[ \
a
] E^ _
X
[ \
`
r
s
b c
G
Y Z
H
A B
[ \
d\
]
J
` ^
^
8
E E
5
l
p
n
q
g h
n
p
n
t
r
u
v
w x v k i
l
p
9
>
?
@
A B
C
D
B
E
L
N
I
p
z
i
h
i
e
k
jh
l
m
n
o
l
| }
~
}
{
J
W
y
7
8
e
6
M
e
49
I
6.5.9. is bivariate normal as given by De nition 6.5.1, then is a linear If combination of the independent normals and plus a constant, and so Theorems 6.2.4 and 6.2.6 show that it is normal. To prove the converse, assume that all linear combinations of and are normal, and choose two linear combinations, and such that Such a choice is always possible, since if then and will do, and otherwise the rotation from Exercise 6.5.5 achieves it. Next, we proceed much as in the proof of Theorem 6.5.1: Let denote the bivariate moment generating function of that is, let Now, is normal, because it is a linear combination of and Denoting the parameters of and by and respectively, we have and (There is no term here with because we have chosen and so that Denote the mgf. of by that is, let Then,
$ %
&
'
"
!
#
(
%
)
*
&
*
)
+
(
%
)
&
%
( )
)
$
.
1
,
&
&
- .
,
)
- .
)
,
-
/
#
$ ,
)
'
&
,
)
.
+
)
& 0) -
.
%
4
&
%
4 (
%
#
/
.
)
'
&
2 3
)
& - %
#
) 5
6
7
+5
by Equation 6.2.15,
4
8
/
8
-
4 >
(
E H EI
/
/
#
5
9
)
=
: ; <
+
E H E
which we can factor as This equation shows that and are independent. Now, de ne and as the standardizations of and Inverting the transformations that have led from to the independent normals we can write and in the form given in De nition 6.5.1, showing thereby that is bivariate normal. 6.5.11. N
I
F O
M P
J
Q
G D
R
8
$ -
B
G
8
#
5
E H EI
/
K L
R
C D E B
N
E K L
C
W
X
Y
a
b c
s
t
]
Z
d
e
f
v wx
y
; ? @ A B
S
Y
X
C D E B
N
I
Q
]
]
#
5
E H E
F O J
F
E
g
u
h e
v u
i
x z
d
h
g
y
{
|
Y ^ ]
Ì
M
P
J
U
T
h
}
|
j
k l
m n
d
~
y
X
Z
o
p
t
u
e
q r
d
e
¥
¦
and
§
¨
© ª ¨ « è
Í É Î Ï ÐÑ Ò Ó Ô Õ Ö Ñ Ñ ÐÊ ×Ø ÐÙ
u
v
h e
i
g
m
d
h
Ú Û
Ü
Ý
Þ
Z
\
^ X
Z
^
Y ^ ]
l
h
¯ ° ° ± ²® ³ ´ ± µ
j
k l
¢
£ ¢ ¤
¶
·
¸
¹º
¡
½
¾ ¿ À ¾ Á
» ¼
¹
Thus
þ
ÿ
ú
é ê ì ì íî
Z `
m n
g
m
d
ï
Þ Ý á âã ä å æ ç
50
ð
ñ
ò
óô õ ö
ë ß à
Y
[
¬
l
U
x y
f
V
L
Y
_ ]
É Ê Ë
E J K L
W
C
[
B
Z
x u
F G
S
÷
ú ø û ûù ü ý
ü þ
 Ã
Ä Å Ã
Æ Ç
È
7.1.1. Replacing
!" # $ %
&
' (
) * +
'
, -
$
< < =
>? @ X
[
AB C D E
\ ]
F
G
IH =
cd ` e
_` a b
^
by in Equation 7.1.10, we get J
I =K L
f g h
'
M
12 3 4 5 6 2 7
0
NO P Q R S O T
which is kj
i
. - /
U V W
X
Y Z
Hence,
8 9 :
and to ; nd the critical point we set Solving for [ results in
h l
If … is a discrete r.v. with … possible values … and p.f. … for all …, then, by Equation 5.1.5, … and, by Definition 5.2.1 and Theorem 5.1.3,
7.1.5. a) ÁÃ
È
~
® ¡
¢
É Ä Ê
£ ¤ ¥ ¦ § ¨
©
and so
¿À Ë Ì
Í
º
«¬ ® ¯ ° ³± ²
ª ¿À
ÁÂ
ÁÃ Ä Ä
´
Å
¯ µ
Ï Ç
Í Î
Ê
¶
¿À Ë Ì
Å
Ð
ë
û
ìí
î
ï
ô ú û
7.1.7. By Theorem 4.5.8, if d.f. then %
&
ÿ
!
%
'
( )
*
+%
&
( )
* ,
à
àá â â
ã
ä
Ýå Ü
æ
Ô ÒÕ ÖÓ × Ø Ù
9
:
;
.
? <
@ =
>
!
"
!
!
#
$
%
A
B
C D
E
F
G
H
L
I
G
f
d
r
m
s t u
r
'
( )
*
/ .
-
0
\
1
^
2
_
3
4
p
o
q
x
y
z
{|
}
~ |
z
{|
}
¨
©
ª
«
ª
°
Å Æ
Æ
Á Â
Ç
Å Æ
Ñ
Ï
Á Â
É
É
Ò
Á Ó
Î
±
Á Â
Ç
Ê
Ì
Æ
Á Ê
²
³ ´
È
Æ
Í
Á
Ç
Ð
µ
É
Æ
S P
T Q
U
R
V
W
X [ Y\
Z
]
` a b
X
Y
c
d
which
e
Z
g
¬ ® ¯
O
i h
b j k
c
Í
¶
Á Ê
Í
b l
c
z z
N
5
l
q
w
M
J
3
`
v
and We are given the successive values to obtain the equation and vals are g n
þ
and Therefore, shows that is an unbiased estimator of 7.1.9. This problem is an instance of the general case considered in Example 7.1.8. Here 8 4
and so
`
7
Ç
é ê è
ç
þ
K
6
Ã
Þß
¿À
for i.id. random variables with common Thus, in the present case, for
#
Hence,
úÿ û
þ
Thus, the method of moments gives
Æ
at
Ú
Û Ý Ü
üý ü ü
Û Ü
û
ô ö ÷ ø ù
é ó ô õ ó
ð òñ
ï
¾
Î
The function Ú has a maximum at this value, because b)
Hence, ¿ À Á Â Á Ã Ä Ä Å yields the critical value ÑÃ Å
·¸ ¹ º » ¸¼ ½
·
Ë
Ð
Ì
Ñ
Æ
Á
Æ
Å Æ
¸
Í
Á Â
¹
Í
Í
º
Î
Á Â
Æ
Ç
¹
Î
Æ
»½¼ ¼
¾
Ç
Ï
Á Ê
É
É
Ð
¿
Ò
, and , for each of which we need to solve Hence, Now, So, the required approximate con dence inter-
À
Á Ê
Î
Á Â
Ë
Ã
Ì
and
Æ
¡
¢
£
¤ ¥
¦
§
Ä
Á
Á
Æ
Å Æ
Í
Á Â
Í
Ç
Ð
Ñ
È
Å Æ
Í
Á Â
Á Â
Ò
Ã
Ì
Ë
Æ
Î
Á
Á Ê
Æ
Æ
Í
Í
Æ
Î
Ê
Æ
Ð
Î
Á Â
Å Æ
Ç
Ï
Á Â
Ç
Í
È
Á Â
Ã
É
Ì
Á Ó
Æ
Ê
Á
Æ
Ì
Í
Á
Æ
Í
Æ
Í
Í
Î
Ð
Á
7.2.1. We use a large-sample Z-test. The null hypothesis is that the sample was selected from the student population with mean grade 66 and SD 24, that is, … is …. The alternative is that the students in the sample come from a different population, for which …. The test statistic is …, which we take to be approximately normal, because … is sufficiently
Ô
Õ
Ö
×
Ê
Ê
Á
Ø
Ù
à
Ý
Þ
ß
51
Ú
Û
Û
Ü
large for the CLT to apply. The rejection region is the set …. We compute the P-value as P(…) = P(…) = …. This probability is high enough for us to accept the null hypothesis, that is, that the low average of this class is due to chance; these students may well come from a population with mean grade 66.
7.2.3. We use a large-sample paired Z-test for the mean increase of the weights, with … denoting the hypothetical mean weight of the cow population before the diet and … that after the diet. We take … and …. The test statistic is …, the mean weight increase of the cows in the sample, which we assume is approximately normal with SD …. The rejection region is …. We compute the P-value as P
! " #
$
&
&
Þ
Ü
" % !
'
&
(
&
)
)
&
*
+
,
&
'
*
-
,
&
.
/
1 2
( + 6 7 8
9
4
3
5
(
0
/
: ; < = ;
> ?
@
A B
CD
E F
G
P(…) = …. Thus, we reject the null hypothesis: the diet is very likely to be effective; however, the improvement is slight and the decision might hinge on other factors, like the price and availability of the new diet.
7.3.1. a) Here a Type 2 error means that we erroneously reject an effective drug. b) Accepting the drug as effective means the same as rejecting …. Thus, we want P(…), which, from Equation 7.3.4, approximately equals …. If …, then the drug has really reduced the duration of the cold from 7 to 6.5 days, and the test will correctly show with probability … that the drug works.
7.3.3. We use a large-sample Z-test. The null hypothesis is that the sample was selected from the student population with mean grade 66 and SD 24, that is, … is …. The alternative is that the students in the sample come from a different population, for which …. The test statistic is …, which we take to be approximately normal, because … is sufficiently large for the CLT to apply. The rejection region is the set …. We compute the P-value as P(…) = P(…) = …. Thus, solving this equation
P
L QM NK O
R
S
T
U
V
W X
YZ X [
\
]
Y] ] ] ^
Y
_
`
e f
g
h
ij
k
l
mn o
a
Y
b
p
q
e r s tw u v
YZ [
d
z
{
| } ~
t su
s x u
W c
y
for is
¼
Å Ç
yields Ê
Ë À ¾
¼
ã
æ
½
¾ ¾
¡
¢ £
¿
À Á Â
¦¤ § ¥ ¥
Ã
Ä
¨
Å ¹
© ª¬ « «
º¹
» Æ
¸
®
¯
°
¾ ¾
Ç
The power function is given by The graph is given below.
º» Ì º
Ý Ï Þ ß àäá å â
ç
52
± ² ³µ ¶´ ´
À Á
Í
¸
É º ¾ Á »
È
Å Î
·
Æ
½
P
¹
º¹
¸
Ï Ð
» º
À ¾
Ñ
º»
Ò
and the rejection region P
ÓÔ
Õ
Ö
Ï Ð
×
Ø Ù
Ú Û
ÓÔ
Õ
Ü
1 0.8 0.6 y 0.4 0.2
0
20
40
µ
60
80
100
7.3.5. Let denote the number of nondefective chips. The rejection region is the set of integers
The operating characteristic function is P and its plot is given below.
1 0.8 0.6 y 0.4 0.2
0
0.6
0.7
p
0.8
0.9
1
This is not a very good test: For instance, when the probability of a chip's being nondefective is .8, the graph shows that the test still accepts the lot with the fairly high probability of about .3. We could improve the test by sampling more chips or by rejecting the lot if even one defective is found in the sample.
7.4.1. For the given data, we find … and …. We use the t-distribution with 4 degrees of freedom. We want to find … such that P(…) = …. From a t-table we obtain …, and so P(…) = …. Substituting the observed values … and
F
9 :B
for G and
HC I
we get P J ? B
Q ? K 9 : ; < L M NQ O P R S R T U T V W X Y Z [ \ Q O P ] ^
53
_` a b
that is,
_
as an approximate 95% con dence interval for _
To nd a 95% con dence interval for b we can proceed much as in Example 7.4.1. By
Theorem 7.4.2, has a chi-square distribution with degrees of freedom. We obtain, from a table or by computer, the and the quantiles of the chi-square distribution with ! ! " " " and P Thus, degrees of freedom, that is, P
P#
' ( % & ) (
$
a _
7
*
+ + , + - .
D D E D F
/
0 ,1 2 ,
EF D EG
H
I
J
8
Substituting 43 / 5 6 , - for 3 we get 0 , - 9 : ; < =B > ? @ A M K L D EF I CE A as an approximate 95% con dence interval for
or, equivalently, 7.4.3. M D P L EG I R K D ES E and Q We use the T -distribution with 4 degrees For the given data, we nd N O of freedom. We reject U V if W X Y Z [ or, equivalently, if \ X ] ^ where ] _ de f` ga hb c k l m no p k q m a i j r k ns t u v w x y z z { y Thus, from a table or by computer, P | } ~ yz z { y and so we accept the null hypothesis, the truth of the store s claim. 7.4.5. We can write
| where is the normalization constant given ex
plicitly in Equation 7.4.32. A basic limit formula in Calculus states that ¥ © ª Ë Ì Í Ô Õ Ö × Ø Ù ¦ « ¬Ä Å Æ Ç ®È É ¯ , Êwe £ § Thus, writing ¨ ½ and À Á Â Ã Ë Ì obtain Í Î Ï Ð Ñ ÒË Ó °± ²
³ ´
µ
¶ ·
¸ ¹
º
øù ú
øù ú
Ý Þ
ß
à Ý á
ô û
ü
ó ô
ý
û
ü
ó
Ú ÛÜ
÷
ý
ó
¡
¢
¥
£ ¤
· »½¼ ¾ ¿
Ö ÚÛ Ü
Ý Þ
ß
âã ä
å
æ ç è
ç é ê ë ì í
îï ð ç ñ
ã ä
ò
å
îï ð
ê ë ì í æ ç è
ç ñ
provided exists. Assuming this result for the moment, we get
þ ÿ
ò
ó ô õ ö
÷
-
Now,
!
"
# $ &%
%
'( )
, & - . / 0 12 345 / *+
and, by a theorem of Advanced Calculus we can
take the limit here under the integral sign, and so 6 ?@
` a
= A =
G
89: ; < =
bc d e f g
BC D
P
7
QR S
; EAF G H I J K L M N O I
T
89: ; < =
>; 7
U T VW X Y Z [ \ ] ^ _
S
Putting all these results together, we obtain
hij k l m
Thus, by Theorem 6.2.1, n o pq r
a
s t uv w x y z { |
}
~
7.4.7. For
¡
{
¢ £ ¤ ¥¦ §
¨
}
© ª
and so,
« ¬®¯ §
¨
²
© ° ±
}
´³ µ ¶ ·¸ ¹ º » ¼ º
½
Also,
¾ ¿ ÁÀ
Æ Ó Ô Õ ÂÃ Å Æ Ç È ÉÄ Ê Ë Ì Å Í Î È Ï Ð Ñ Ï Ò
Ö
× Ø Ù ÚÛ Ü Ý Þ ß à á â âã ä å æ ç è æ å é ê ë á âã ä å æ
ì
í
í î ï ð 7.5.1. We are testing ñ ò ó ô against ñ ì õ ÷ to be ÿ ber of yellow seeds turnedð öout
ø ú
In a sample of size ÿ ú û û û the numý ý þ . Using the binomial distribution, we compute
ó ô
ú ù
54
û üý þ ü
ú
the observed -value as
responding approximate P-value is P
#
From a normal table, the cor-
!
"
!
#
$
7.5.3. as in Equation 7.5.20. For each subpopulation, the sum The number of terms is again of the entries is prescribed, and so the degrees of freedom are reduced by Also, the sums in each category are estimated from the data, but only of them are are really estimated, because the sum of the column sums must equal the sum of the row sums. Thus, the nal number for the degrees of freedom is 7.5.5. We divide the interval into four equal parts (in order to have the expected numbers equal not less than ve) and the list gives the following observed frequencies for them: %
&
' (
'
(
!
)
' (
!
'
!
(
!
&
'
!
(
!
)
Intervals Frequ.
#
# # * #
#
+ ,
+
+
#
#
,
+
# * #
23 4 5 6 7 0 1 /
* #
+
4 5 6 7
- + ,
#
- +
+
5 4 5 6 7
*
# #
,
.
4 5 6 7
Thus, We have 3 degrees of freedom, and so a table gives P Hence, the calculator seems to generate random numbers very well. 7.5.7. We can extend the table to include the marginal frequencies: Sex Grade M F Either sex L
5
8
9:
C @A B
D
E A B F
A
5
8
G
9
5
8
9;
5
<
H IJ K I
B
C
D
F
P
M
N
K
K
O
M
J
P P
J
P P
J
Q
17
13
15
14
= >? >
16
Any grade 31 57 88
13
Hence, the expected frequencies under the assumption of independence can be obtained by multiplying each row frequency with each column frequency and dividing by 88. So, the expected numbers are Sex Grade M F L
B
C
D
K IJ R
A
M IJ J
K IM Q
M IS Q
M IN K
F
K IM Q
J IH O
P P IH P
Q IK S
J IO S
P H IR N
Q IK S
55
P
Thus, M
E A B
K IJ R B
N
M IJ J B
K
K IM Q B
K
M IS Q B
O
M IN K B
G K IJ R M
K IM Q
P P
M IJ J
K IM Q B
J
P P
J IH O
J IO S B
K IM Q
J IH O B
J
J
P P IH P
P H IR N B
M IS Q
P P IH P B
Q
P H IR N
Q IK S
Q IK S B
G J IO S
M IN K
Q IK S B
P I P N
Q IK S
The number of degrees of freedom is and so, from a table, P which gives overwhelming support to the hypothesis of independence. 7.6.1. We take and From the data, and so P 7.6.3. By the de nition of and the independence of the chi-square variables involved, Now, and N
P
S
P
D
@A B
M
E A B F
G
H IJ M
.
/
0 1 23 4
8
9 : ; ?
7
! !
"
# $ !% &! '
0 5 21
(
) *+ ,
-
¬
¯
#
0 26 1
7
< =
8 9 : ;
>
@ 9@ @ 8 A
9
B
C
G
G
< C
l
D
EF
?
I
H
D
EF
J K LM
N
J
K LO
P
N
Q
R
S T
V U
W
R
X
m n
Z Y[ \
]
^
o p
q r st
u v
w
x v
y
z
|
} ~
{
_
` a
°
± ²
³
©
ª
e
¢ £ ¤ ¥
Hence, 7.7.1. Use the large sample formula µ
d
«
f
©
´©
c b
_
g
i hj k
]
©
ª
e
¡
¦
§
if
«
®
¨
¯
§
À Ç ¶
± ·
©
¸
¹ º
»
¼
½
À Á ¿
¾
Â
à Ä
Â
È É
Ê Ë Ì Í Ì
Î
From
Å Æ
ë ò
Ï í
ÑÐ ó ô
Ò
Ó
Ô
Ñ
õ ö ÷ ø ùú û ü ÷
ý
we get and so Thus, and, this value being fairly large, we accept 7.7.3. Since has only a nite number of values, it does assume its supremum at some values of that is, its supremum is its maximum. Also, since and are right-continuous step functions with jumps at the , is assumed at every point of an interval and, in particular, at ! 7.7.5. $ ) % Use the large sample formula P " # $ % & ' ( $ % * + , - /. 0 1 2 3 4 5 6 7 8 9 : ; < = > = ? From @ A B C Ï
þ
Õ ÖÐ Ö
Ò
Ó
×
Ø× Ù
Ú
Ò
Û
Ü
Ý Þ ß à
á
Ý
â ã
ä
ÿ
å
æ ä
ç
Û
è
é
ë ì ê
í
î ï
ð ñ
ÿ
B F
_
we get G
B D ` a b c d
A
H I I C J I I
and so S
b i dH Ij I kK Ll Mm M n o pNq r sO n P O tQ R u v w x v e f
g h
Thus, P W X
T O PQ U V P
We accept y
56
v z
Y Z
[
\ Y Z ]
^
D D
E