Kay - Solutions

ECE 531: Detection and Estimation Theory, Spring 2011, Homework 1 Solutions. All solutions by Shu Wang (thanks for submitting a LaTeX file for the first homework!).

Problem 1 (2.1)
Solution:

E[σ̂²] = E[ (1/N) Σ_{n=0}^{N−1} x²[n] ]
       = (1/N) Σ_{n=0}^{N−1} E[x²[n]]
       = (1/N) N σ²
       = σ²,

so this is an unbiased estimator.

Var(σ̂²) = Var( (1/N) Σ_{n=0}^{N−1} x²[n] )
         = (1/N²) Σ_{n=0}^{N−1} Var(x²[n])
         = (1/N²) N Var(x²[n])
         = (1/N) Var(x²[n]).

Since the x[n] are iid, the x²[n] are also iid. We know that Var(x²[n]) = E[x⁴[n]] − (E[x²[n]])², and from the central moments of a Gaussian random variable,

E[(x − µ)^p] = 0 if p is odd,   σ^p (p − 1)!! if p is even,

where n!! denotes the double factorial (the product of every odd number from n down to 1) and µ is the mean of x. In this problem the mean of x is 0, so E[x⁴[n]] = 3σ⁴ and Var(x²[n]) = 3σ⁴ − σ⁴ = 2σ⁴. Then

Var(σ̂²) = (1/N²) N 2σ⁴ = 2σ⁴/N,

and Var(σ̂²) → 0 as N → ∞.
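As a quick numerical sanity check (not part of the original solution), the following Python sketch simulates the estimator and compares its empirical mean and variance with σ² and 2σ⁴/N. The values of N, σ, and the number of trials are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of E[sigma2_hat] = sigma^2 and Var(sigma2_hat) = 2*sigma^4/N
rng = np.random.default_rng(0)
N, sigma, trials = 100, 2.0, 200_000

x = rng.normal(0.0, sigma, size=(trials, N))   # x[n] ~ N(0, sigma^2), iid
sigma2_hat = np.mean(x**2, axis=1)             # one estimate per trial

print("mean of estimates:", sigma2_hat.mean(), "(theory:", sigma**2, ")")
print("var  of estimates:", sigma2_hat.var(),  "(theory:", 2 * sigma**4 / N, ")")
```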

Problem 2 (2.3)
Solution:

E[Â] = E[ (1/N) Σ_{n=0}^{N−1} x[n] ]
     = (1/N) Σ_{n=0}^{N−1} E[x[n]]
     = (1/N) N A
     = A.

Var(Â) = Var( (1/N) Σ_{n=0}^{N−1} x[n] )
       = (1/N²) N Var(x[n])
       = σ²/N.

Since the x[n] are iid Gaussian, Â ∼ N(A, σ²/N).

Problem 3 (2.8)
Solution:

From Problem 2.3 we know that Â ∼ N(A, σ²/N). Then

lim_{N→∞} Pr{|Â − A| > ε} = lim_{N→∞} Pr{ |Â − A|/√(σ²/N) > ε/√(σ²/N) }.

Using the Q-function (and accounting for both tails of the symmetric Gaussian),

lim_{N→∞} Pr{|Â − A| > ε} = lim_{N→∞} 2Q( ε/√(σ²/N) ) = lim_{N→∞} 2Q( ε√N/σ ) = 0.

So Â is consistent.

Now consider Ǎ = (1/(2N)) Σ_{n=0}^{N−1} x[n].

E[Ǎ] = E[ (1/(2N)) Σ_{n=0}^{N−1} x[n] ]
     = (1/(2N)) Σ_{n=0}^{N−1} E[x[n]]
     = (1/(2N)) N A
     = A/2.

Var(Ǎ) = Var( (1/(2N)) Σ_{n=0}^{N−1} x[n] )
       = (1/(4N²)) Σ_{n=0}^{N−1} Var(x[n])
       = (1/(4N²)) N σ²
       = σ²/(4N).

Since the x[n] are iid white Gaussian, Ǎ ∼ N(A/2, σ²/(4N)). So Ǎ is a biased estimator: it is centered at A/2. Because Var(Ǎ) → 0 as N → ∞, Ǎ converges in probability to A/2 rather than to A. Taking A > 0 for concreteness and any 0 < ε < A/2,

lim_{N→∞} Pr{|Ǎ − A| > ε} ≥ lim_{N→∞} Pr{Ǎ < A − ε}
                           = lim_{N→∞} [ 1 − Q( (A/2 − ε) √(4N)/σ ) ]
                           = 1,

so Pr{|Ǎ − A| > ε} does not tend to 0. So Ǎ is not consistent.
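A small Monte Carlo illustration of the contrast (a sketch, not part of the original solution; A, σ, ε and the trial count are arbitrary): Â concentrates around A, while Ǎ concentrates around A/2, so Pr{|Ǎ − A| > ε} approaches 1 rather than 0.

```python
import numpy as np

# Compare A_hat = (1/N)*sum x[n] (consistent) with A_check = (1/(2N))*sum x[n]
# (biased, centered at A/2) for x[n] ~ N(A, sigma^2) iid. Illustrative values only.
rng = np.random.default_rng(1)
A, sigma, eps, trials = 1.0, 1.0, 0.1, 10_000

for N in (10, 100, 1000):
    x = rng.normal(A, sigma, size=(trials, N))
    A_hat = x.mean(axis=1)                # -> A
    A_check = x.sum(axis=1) / (2 * N)     # -> A/2
    print(N,
          "Pr{|A_hat - A| > eps} =", np.mean(np.abs(A_hat - A) > eps),
          "Pr{|A_check - A| > eps} =", np.mean(np.abs(A_check - A) > eps))
```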

Problem 4 (2.9)
Solution:

E[θ̂] = E[ ( (1/N) Σ_{n=0}^{N−1} x[n] )² ]
      = Var( (1/N) Σ_{n=0}^{N−1} x[n] ) + ( E[ (1/N) Σ_{n=0}^{N−1} x[n] ] )²
      = σ²/N + A²
      ≠ θ,

so this is a biased estimator. Since E[θ̂] → A² as N → ∞, the estimator is asymptotically unbiased.

ECE 531 - Detection and Estimation Theory Homework 2 Solutions

3.3 (Luke Vercimak) The data x[n] = A rⁿ + w[n] for n = 0, 1, . . . , N − 1 are observed, where w[n] is WGN with variance σ² and r > 0 is known. Find the CRLB for A. Show that an efficient estimator exists and find its variance. What happens to the variance as N → ∞ for various values of r?

The pdf is

p(x; A) = 1/(2πσ²)^{N/2} exp[ −(1/(2σ²)) Σ_{n=0}^{N−1} (x[n] − A rⁿ)² ].

∂ ln p(x; A)/∂A = (1/σ²) Σ_{n=0}^{N−1} (x[n] − A rⁿ) rⁿ
               = (1/σ²) ( Σ_{n=0}^{N−1} x[n] rⁿ − A Σ_{n=0}^{N−1} r^{2n} ),

∂² ln p(x; A)/∂A² = −(1/σ²) Σ_{n=0}^{N−1} r^{2n},

I(A) = −E[ ∂² ln p(x; A)/∂A² ] = (1/σ²) Σ_{n=0}^{N−1} r^{2n} = (1/σ²) (r^{2N} − 1)/(r² − 1)   (r ≠ 1),

CRLB(A) = σ²/N if r = 1,   σ²(r² − 1)/(r^{2N} − 1) otherwise.

The score factors as ∂ ln p(x; A)/∂A = I(A)(Â − A) with Â = Σ_{n=0}^{N−1} x[n] rⁿ / Σ_{n=0}^{N−1} r^{2n}, so an efficient estimator exists and its variance equals the CRLB. As N → ∞: for r ≥ 1 the CRLB → 0, so the variance of Â goes to zero; for r < 1, Σ r^{2n} → 1/(1 − r²) and the CRLB approaches the constant σ²(1 − r²), so the variance does not decrease to zero.
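A short simulation sketch (not from the original solution; A, σ, r, N and the trial count are arbitrary) comparing the variance of the efficient estimator above with the CRLB σ²/Σ r^{2n}:

```python
import numpy as np

# CRLB(A) = sigma^2 / sum r^{2n}; compare with the variance of
# A_hat = sum(x[n]*r^n) / sum(r^{2n}) estimated by simulation.
rng = np.random.default_rng(2)
A, sigma, trials = 1.0, 1.0, 50_000

for r in (0.5, 1.0, 1.5):
    for N in (10, 50):
        n = np.arange(N)
        s = r ** n                               # signal shape r^n
        crlb = sigma**2 / np.sum(r ** (2 * n))
        x = A * s + rng.normal(0.0, sigma, size=(trials, N))
        A_hat = x @ s / np.sum(s**2)             # efficient estimator
        print(f"r={r}, N={N}: var(A_hat)={A_hat.var():.4g}, CRLB={crlb:.4g}")
```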

3.11 (Luke Vercimak) For a 2 × 2 Fisher information matrix

I(θ) = [ a  b
         b  c ]

which is positive definite, show that

[I⁻¹(θ)]₁₁ = c/(ac − b²) ≥ 1/a = 1/[I(θ)]₁₁.

What does this say about estimating a parameter when a second parameter is either known or unknown? When does equality hold and why?

Since I(θ) is positive definite, all principal minors are positive, so a > 0 and det I(θ) = ac − b² > 0. Then ac ≥ ac − b² > 0, and

[I⁻¹(θ)]₁₁ = c/(ac − b²) ≥ c/(ac) = 1/a = 1/[I(θ)]₁₁.

This shows that the bound on the variance of a parameter estimate when a second parameter must also be estimated is greater than or equal to the bound when only the single parameter is unknown. Equality holds when b = 0, i.e., when the Fisher information matrix is diagonal, so that the two parameter estimates are uncorrelated and estimating the second parameter costs nothing.

3.15 (Shu Wang) We know that the x[n] ∼ N(0, C) are independent. Let i(ρ) be the Fisher information contributed by a single x[n]; then I(ρ) = N i(ρ). According to equation 3.32 of the textbook,

i(ρ) = [∂µ(ρ)/∂ρ]ᵀ C⁻¹(ρ) [∂µ(ρ)/∂ρ] + (1/2) tr[ (C⁻¹(ρ) ∂C(ρ)/∂ρ)² ]
     = (1/2) tr[ (C⁻¹(ρ) ∂C(ρ)/∂ρ)² ],

since x[n] ∼ N(0, C) has zero mean. Because

C = [ 1  ρ
      ρ  1 ],

we have

∂C(ρ)/∂ρ = [ 0  1
             1  0 ],      C⁻¹(ρ) = 1/(1 − ρ²) [ 1   −ρ
                                                −ρ   1 ].

Then

(C⁻¹(ρ) ∂C(ρ)/∂ρ)² = 1/(1 − ρ²)² [ 1 + ρ²   −2ρ
                                    −2ρ    1 + ρ² ],

so (1/2) tr[(C⁻¹(ρ) ∂C(ρ)/∂ρ)²] = (1 + ρ²)/(1 − ρ²)². Then

I(ρ) = N(1 + ρ²)/(1 − ρ²)²   and   CRLB = 1/I(ρ) = (1 − ρ²)²/(N(1 + ρ²)).
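A quick numeric check of the single-sample information i(ρ) (a sketch, not part of the original solution; the value of ρ is an arbitrary choice):

```python
import numpy as np

# Check i(rho) = 0.5 * tr[(C^{-1} dC/drho)^2] against (1 + rho^2)/(1 - rho^2)^2.
rho = 0.3                                   # arbitrary test value
C = np.array([[1.0, rho], [rho, 1.0]])
dC = np.array([[0.0, 1.0], [1.0, 0.0]])     # dC/drho
M = np.linalg.inv(C) @ dC
i_rho = 0.5 * np.trace(M @ M)
print(i_rho, (1 + rho**2) / (1 - rho**2) ** 2)   # the two numbers should match
```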

ECE 531 - Detection and Estimation Theory Homework 3

4.6 (Correction – Shu Wang) In this problem we have only a single sinusoidal component, so θ̂ = [â_k, b̂_k]ᵀ. According to Example 4.2,

C = [ 2σ²/N     0
        0     2σ²/N ],

so â_k ∼ N(a_k, 2σ²/N), b̂_k ∼ N(b_k, 2σ²/N), and â_k and b̂_k are independent.

E[P̂] = E[ (â_k² + b̂_k²)/2 ]
     = (1/2)( E[â_k²] + E[b̂_k²] )
     = (1/2)[ Var(â_k) + E²[â_k] + Var(b̂_k) + E²[b̂_k] ]
     = (1/2)[ 2σ²/N + a_k² + 2σ²/N + b_k² ]
     = 2σ²/N + (a_k² + b_k²)/2.

Let P = (a_k² + b_k²)/2. Then E[P̂] = 2σ²/N + P and E²[P̂] = (2σ²/N + P)².

Var(P̂) = Var( (â_k² + b̂_k²)/2 ) = (1/4)[ Var(â_k²) + Var(b̂_k²) ].

From Eq. 3.19 (textbook, p. 38): if ξ ∼ N(µ, σ²), then E[ξ²] = µ² + σ², E[ξ⁴] = µ⁴ + 6µ²σ² + 3σ⁴, and Var(ξ²) = 4µ²σ² + 2σ⁴. So

Var(â_k²) = 4a_k²(2σ²/N) + 2(2σ²/N)²,   Var(b̂_k²) = 4b_k²(2σ²/N) + 2(2σ²/N)²,

and therefore

Var(P̂) = (a_k² + b_k²)(2σ²/N) + (2σ²/N)² = (2σ²/N)[ 2P + 2σ²/N ].

Hence

E²[P̂]/Var(P̂) = (2σ²/N + P)² / ( (2σ²/N)[2P + 2σ²/N] ) = 1 + P²N²/(4[PNσ² + σ⁴]).

If a_k = b_k = 0, then P = 0 and E²[P̂]/Var(P̂) = 1. But if P ≫ 2σ²/N, then E²[P̂]/Var(P̂) ≈ PN/(4σ²) ≫ 1, and the signal will be easily detected.
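A Monte Carlo check of the two moment formulas (a sketch, not in the original; a_k, b_k, σ, N and the trial count are arbitrary), simulating â_k and b̂_k directly from their N(a_k, 2σ²/N) and N(b_k, 2σ²/N) distributions:

```python
import numpy as np

# Check E[P_hat] = 2*sigma^2/N + P and Var(P_hat) = (2*sigma^2/N)*(2P + 2*sigma^2/N).
rng = np.random.default_rng(3)
a, b, sigma, N, trials = 1.0, 0.5, 1.0, 50, 500_000

v = 2 * sigma**2 / N                       # variance of a_hat and b_hat
a_hat = rng.normal(a, np.sqrt(v), trials)
b_hat = rng.normal(b, np.sqrt(v), trials)
P_hat = (a_hat**2 + b_hat**2) / 2
P = (a**2 + b**2) / 2

print("E[P_hat]  :", P_hat.mean(), " theory:", v + P)
print("Var[P_hat]:", P_hat.var(),  " theory:", v * (2 * P + v))
```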

4.13 (Shu Wang) In practice we sometimes encounter the "linear model" x = Hθ + w but with H composed of random variables. Suppose we ignore this difference and use our usual estimator θ̂ = (HᵀH)⁻¹Hᵀx, where we assume that the particular realization of H is known to us. Show that if H and w are independent, the mean and covariance of θ̂ are

E(θ̂) = θ,   C_θ̂ = σ² E_H[(HᵀH)⁻¹],

where E_H denotes the expectation with respect to the PDF of H. What happens if the independence assumption is not made?

E[θ̂] = E[(HᵀH)⁻¹Hᵀx]
      = E[(HᵀH)⁻¹Hᵀ(Hθ + w)]
      = E[(HᵀH)⁻¹HᵀHθ] + E[(HᵀH)⁻¹Hᵀw].

Since H and w are independent and w has zero mean,

E[θ̂] = E[θ] + E[(HᵀH)⁻¹Hᵀ]E[w] = θ.

C_θ̂ = E[(θ̂ − θ)(θ̂ − θ)ᵀ]
    = E[((HᵀH)⁻¹Hᵀx − (HᵀH)⁻¹HᵀHθ)((HᵀH)⁻¹Hᵀx − (HᵀH)⁻¹HᵀHθ)ᵀ]
    = E[((HᵀH)⁻¹Hᵀw)((HᵀH)⁻¹Hᵀw)ᵀ]
    = E_{H,w}[(HᵀH)⁻¹Hᵀ w wᵀ H (HᵀH)⁻¹]
    = E_H[(HᵀH)⁻¹Hᵀ E_w[wwᵀ] H (HᵀH)⁻¹]   (using the independence of H and w)
    = E_H[(HᵀH)⁻¹Hᵀ σ²I H (HᵀH)⁻¹]
    = σ² E_H[(HᵀH)⁻¹].

If H and w are not independent, then E[(HᵀH)⁻¹Hᵀw] need not be zero, so E[θ̂] may not equal θ and θ̂ may be biased.

5.3 (Luke Vercimak) The IID observations x[n] for n = 0, 1, . . . , N − 1 have the exponential PDF

p(x[n]; λ) = λ exp(−λx[n]) for x[n] > 0, and 0 for x[n] < 0.

Find a sufficient statistic for λ.

Since the observations are IID, the joint distribution is

p(x; λ) = λᴺ exp[ −λ Σ_{n=0}^{N−1} x[n] ]
        = ( λᴺ exp[−λ T(x)] ) · (1)
        = g(T(x), λ) h(x).

By the Neyman-Fisher factorization theorem,

T(x) = Σ_{n=0}^{N−1} x[n]

is a sufficient statistic for λ.

5.9 (Luke Vercimak) Assume that x[n] is the result of a Bernoulli trial (a coin toss) with

Pr{x[n] = 1} = θ,   Pr{x[n] = 0} = 1 − θ,

and that N IID observations have been made. Assuming the Neyman-Fisher factorization theorem holds for discrete random variables, find a sufficient statistic for θ. Then, assuming completeness, find the MVU estimator of θ.

Let p = number of times x[n] = 1, i.e., p = Σ_{n=0}^{N−1} x[n]. Since the observations are IID,

Pr[x] = Π_{n=0}^{N−1} Pr[x[n]]
      = θᵖ (1 − θ)^{N−p}
      = (1 − θ)ᴺ [ θ/(1 − θ) ]ᵖ
      = (1 − θ)ᴺ [ θ/(1 − θ) ]^{T(x)} · (1)
      = g(T(x), θ) h(x).

By the Neyman-Fisher factorization theorem,

T(x) = p = Σ_{n=0}^{N−1} x[n]

is a sufficient statistic for θ. To get the MVU estimator, the RBLS theorem says we need:

1. T(x) is complete. This is given in the problem statement.
2. An unbiased estimator that is a function of T(x):

E[T(x)] = E[ Σ_{n=0}^{N−1} x[n] ]
        = Σ_{n=0}^{N−1} E[x[n]]
        = Σ_{n=0}^{N−1} [ Pr(x[n] = 1) · 1 + Pr(x[n] = 0) · 0 ]
        = Σ_{n=0}^{N−1} θ
        = Nθ.

Therefore an unbiased estimator of θ that is a function of the sufficient statistic is

θ̂ = (1/N) Σ_{n=0}^{N−1} x[n].

By the RBLS theorem, this is also the MVUE.
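As an illustration of the RBLS conclusion (a sketch, not part of the original solution; θ, N and the trial count are arbitrary), compare the MVUE θ̂ with another unbiased estimator that ignores most of the data, here the hypothetical single-sample estimator x[0]:

```python
import numpy as np

# Bernoulli(theta) samples: both estimators are unbiased, but the MVUE,
# a function of the sufficient statistic sum(x), has much smaller variance.
rng = np.random.default_rng(9)
theta, N, trials = 0.3, 20, 200_000

x = rng.binomial(1, theta, size=(trials, N))
theta_mvue = x.mean(axis=1)      # (1/N) * sum x[n]
theta_single = x[:, 0]           # unbiased but wasteful

print("means:", theta_mvue.mean(), theta_single.mean(), "(theta =", theta, ")")
print("vars :", theta_mvue.var(), theta_single.var())
```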

ECE 531 - Detection and Estimation Theory Homework 4 February 5, 2011

6.7 (Shu Wang) Assume that x[n] = A s[n] + w[n] for n = 0, 1, . . . , N − 1 are observed, where w[n] is zero mean noise with covariance matrix C and s[n] is a known signal. The amplitude A is to be estimated using a BLUE. Find the BLUE and discuss what happens if s = [s[0], s[1], . . . , s[N − 1]]ᵀ is an eigenvector of C. Also, find the minimum variance.

Because the s[n] are known, E[x[n]] = A s[n], so the BLUE is

Â = sᵀC⁻¹x / (sᵀC⁻¹s),

with minimum variance var(Â) = 1/(sᵀC⁻¹s). From the problem we know that s is an eigenvector of C. By the properties of eigenvectors, if s is an eigenvector of C corresponding to the eigenvalue λ and C is invertible, then s is an eigenvector of C⁻¹ corresponding to the eigenvalue 1/λ. Proof:

Cs = λs ⇒ C⁻¹Cs = λC⁻¹s ⇒ s = λC⁻¹s ⇒ C⁻¹s = (1/λ)s.

So

var(Â) = 1/(sᵀC⁻¹s) = 1/( (1/λ) sᵀs ) = λ/(sᵀs),

which equals λ when s has unit norm. In this case, since s is an eigenvector, no pre-whitening filter is needed!
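A small numeric illustration (not in the original): pick an arbitrary covariance C, take one of its unit-norm eigenvectors as s, and check that the BLUE variance 1/(sᵀC⁻¹s) equals the corresponding eigenvalue λ. The covariance model below is an arbitrary choice.

```python
import numpy as np

# BLUE variance 1/(s^T C^{-1} s) when s is a unit-norm eigenvector of C.
N = 8
C = 0.9 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))  # C_ij = 0.9^|i-j|
eigvals, eigvecs = np.linalg.eigh(C)
lam, s = eigvals[0], eigvecs[:, 0]           # one eigenpair; s has unit norm

blue_var = 1.0 / (s @ np.linalg.solve(C, s))
print(blue_var, lam)                          # the two numbers should agree
```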

6.9 (Luke Vercimak) OOK communication system. Given:

x[n] = A cos(2πf₁n) + w[n],   n = 0, 1, . . . , N − 1,   E[w[n]] = 0,   C = σ²I.

Find the BLUE for A (Â) and interpret the resultant detector. Find the best frequency in the range 0 ≤ f₁ ≤ 1/2 to use at the transmitter.

Writing x = HA + w, where

H = [ 1, cos(2πf₁), cos(2πf₁·2), . . . , cos(2πf₁(N − 1)) ]ᵀ   and   C⁻¹ = (1/σ²)I,

the Gauss-Markov theorem gives

Â = (HᵀC⁻¹H)⁻¹ HᵀC⁻¹x
  = ( (1/σ²) Σ_{n=0}^{N−1} cos²(2πf₁n) )⁻¹ ( (1/σ²) Σ_{n=0}^{N−1} cos(2πf₁n) x[n] )
  = Σ_{n=0}^{N−1} cos(2πf₁n) x[n] / Σ_{n=0}^{N−1} cos²(2πf₁n).

The detector is the ratio of the cross-correlation between the carrier and the received signal to the autocorrelation (energy) of the carrier; it is a measurement of how much the received signal looks like the carrier signal. The threshold γ would be chosen as A/2, since this minimizes both the number of false positives and false negatives. The best carrier frequency is the one that reduces the variance of Â the most:

C_Â = (HᵀC⁻¹H)⁻¹ = ( (1/σ²) Σ_{n=0}^{N−1} cos²(2πf₁n) )⁻¹ = σ² / Σ_{n=0}^{N−1} cos²(2πf₁n).

Maximizing the denominator reduces C_Â the most. If f₁ is chosen to be 0 (no carrier), or to be 1/2 with the added constraint that the transmitting clock and the sampling clock are phase aligned with no phase shift, the variance is minimized.
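A small simulation sketch of the OOK BLUE and its variance σ²/Σ cos²(2πf₁n) for a few carrier frequencies (not in the original; A, σ, N and the frequencies are arbitrary), illustrating that f₁ = 1/2 gives the smallest variance among the nonzero frequencies tried:

```python
import numpy as np

# BLUE for A in x[n] = A*cos(2*pi*f1*n) + w[n], w[n] ~ N(0, sigma^2) iid.
rng = np.random.default_rng(5)
A, sigma, N, trials = 1.0, 1.0, 32, 100_000
n = np.arange(N)

for f1 in (0.1, 0.25, 0.5):
    h = np.cos(2 * np.pi * f1 * n)
    x = A * h + rng.normal(0.0, sigma, size=(trials, N))
    A_hat = x @ h / np.sum(h**2)
    print(f"f1={f1}: var(A_hat)={A_hat.var():.4f}, "
          f"theory sigma^2/sum(cos^2)={sigma**2 / np.sum(h**2):.4f}")
```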

7.3 (Luke Vercimak) We observe N IID samples from the PDFs:

1. Gaussian:

p(x; µ) = (1/√(2π)) exp[ −(1/2)(x − µ)² ].

2. Exponential:

p(x; λ) = λ exp(−λx) for x > 0, and 0 for x < 0.





We decide H1 when the log-likelihood ratio exceeds ln γ, which gives

(A/σ²) Σ_{n=0}^{N−1} x[n] > ln γ + NA²/(2σ²).

Since A < 0, dividing by NA reverses the inequality:

(1/N) Σ_{n=0}^{N−1} x[n] < (σ²/(NA)) ln γ + A/2 = γ′,

so with T(x) = x̄ we decide H1 if x̄ < γ′ and H0 if x̄ > γ′. Now

T(x) ∼ N(0, σ²/N) under H0,   T(x) ∼ N(A, σ²/N) under H1,

so

PFA = Pr{T(x) < γ′; H0} = 1 − Pr{T(x) > γ′; H0} = 1 − Q( γ′/√(σ²/N) ),

PD = Pr{T(x) < γ′; H1} = 1 − Q( (γ′ − A)/√(σ²/N) ).

From 1 − PFA = Q( γ′/√(σ²/N) ) and the identity Q⁻¹(1 − x) = −Q⁻¹(x),

γ′ = √(σ²/N) Q⁻¹(1 − PFA) = −√(σ²/N) Q⁻¹(PFA).

Then, using Q(−x) = 1 − Q(x),

PD = 1 − Q( −Q⁻¹(PFA) − A/√(σ²/N) ) = Q( Q⁻¹(PFA) + A/√(σ²/N) ).

Since A < 0,

PD = Q( Q⁻¹(PFA) − |A|/√(σ²/N) ) = Q( Q⁻¹(PFA) − √(A²N/σ²) ),

which is the same as for A > 0.
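A quick sketch (not in the original; A, σ, N, PFA and the trial count are arbitrary) that checks PD = Q(Q⁻¹(PFA) − √(A²N/σ²)) against a direct simulation of the sample-mean detector with A < 0, using scipy's norm.sf for Q and norm.isf for Q⁻¹:

```python
import numpy as np
from scipy.stats import norm

# Sample-mean detector for a DC level A < 0 in WGN:
# decide H1 when xbar < gamma', with gamma' = -sqrt(sigma^2/N)*Qinv(PFA).
rng = np.random.default_rng(6)
A, sigma, N, PFA, trials = -0.5, 1.0, 25, 0.1, 200_000

gamma = -np.sqrt(sigma**2 / N) * norm.isf(PFA)     # Q^{-1} = norm.isf
xbar_H0 = rng.normal(0.0, sigma / np.sqrt(N), trials)
xbar_H1 = rng.normal(A,   sigma / np.sqrt(N), trials)

PD_theory = norm.sf(norm.isf(PFA) - np.sqrt(A**2 * N / sigma**2))  # Q(.) = norm.sf
print("empirical PFA:", np.mean(xbar_H0 < gamma), " target:", PFA)
print("empirical PD :", np.mean(xbar_H1 < gamma), " theory:", PD_theory)
```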

Problem 3 (3.12) If we want a perfect detector, the PDFs under H0 and H1 cannot overlap. That requires 1 − c > c, i.e., c < 1/2.

Problem 4 (3.18)

H0: x[0] ∼ N(0, 1),   H1: x[0] ∼ N(0, 2).

We decide H1 if

P(H1|x) > P(H0|x) ⇒ P(x|H1)P(H1) > P(x|H0)P(H0) ⇒ P(x|H1)/P(x|H0) > P(H0)/P(H1) = γ.

The likelihood ratio is

P(x|H1)/P(x|H0) = [ (1/√(4π)) e^{−x²[0]/4} ] / [ (1/√(2π)) e^{−x²[0]/2} ]
                = (1/√2) e^{x²[0]/4} > γ
⇒ x²[0] > 4 ln(√2 γ) ⇒ |x[0]| > 2√(ln(√2 γ)).

For P(H0) = 1/2 we have P(H1) = 1/2, so γ = P(H0)/P(H1) = 1 and the decision region for H1 is

|x[0]| > 2√(ln √2) = 1.1774 ≈ 1.18.

For P(H0) = 3/4 we have P(H1) = 1/4, so γ = P(H0)/P(H1) = 3 and the decision region for H1 is

|x[0]| > 2√(ln(3√2)) = 2.4043 ≈ 2.4.
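A two-line check of the numeric thresholds quoted above (not in the original):

```python
import numpy as np

# Threshold |x[0]| > 2*sqrt(ln(sqrt(2)*gamma)) for gamma = P(H0)/P(H1).
for gamma in (1.0, 3.0):
    print(gamma, 2 * np.sqrt(np.log(np.sqrt(2) * gamma)))
# prints approximately 1.1774 and 2.4043
```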

ECE 531: Detection and Estimation Theory, Spring 2011 Homework 10

Problem 1 (4.6 – Luke Vercimak) This is a known signal in WGN. Per Eq. 4.3, the test statistic is

T(x) = Σ_{n=0}^{N−1} x[n] s[n] > γ′.

In this case s[n] = A rⁿ, so the signal energy is

E = Σ_{n=0}^{N−1} s²[n] = A² Σ_{n=0}^{N−1} r^{2n}.

For 0 < r < 1:

E = A² Σ_{n=0}^{N−1} r^{2n} → A²/(1 − r²) as N → ∞,

so as we gain additional samples the detector performance approaches a constant (obtained by plugging E into Eq. 4.14).

For r = 1:

E = A² Σ_{n=0}^{N−1} r^{2n} = NA² → ∞ as N → ∞,

so per Eq. 4.14, PD approaches 1 as N → ∞.

For r > 1:

E = A² Σ_{n=0}^{N−1} r^{2n} → ∞ as N → ∞,

so per Eq. 4.14, PD approaches 1 as N → ∞.

In all cases the detector threshold γ′ is determined by plugging E into

γ′ = √(σ²E) Q⁻¹(PFA).
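Assuming Eq. 4.14 is the usual deterministic-signal result PD = Q(Q⁻¹(PFA) − √(E/σ²)) (consistent with the threshold expression above), a short sketch showing how PD behaves as N grows for r < 1, r = 1 and r > 1 (A, σ, PFA and the grid of values are arbitrary):

```python
import numpy as np
from scipy.stats import norm

# PD for the replica correlator, assuming PD = Q(Qinv(PFA) - sqrt(E/sigma^2))
# with E = A^2 * sum r^{2n}.
A, sigma, PFA = 0.5, 1.0, 0.01

for r in (0.9, 1.0, 1.1):
    for N in (10, 50, 200):
        E = A**2 * np.sum(r ** (2 * np.arange(N)))
        PD = norm.sf(norm.isf(PFA) - np.sqrt(E / sigma**2))
        print(f"r={r}, N={N}: PD={PD:.4f}")
# r < 1: PD levels off at a constant; r >= 1: PD -> 1 as N grows.
```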

Problem 2 (4.10 – Shu Wang)

VᵀCV = Λ with Vᵀ = V⁻¹, so C = VΛVᵀ and C⁻¹ = VΛ⁻¹Vᵀ. Writing C⁻¹ = DᵀD gives the prewhitening matrix

D = Λ^{−1/2} Vᵀ.

First we calculate the eigenvalues of C: det(λI − C) = 0 easily gives λ = 1 ± ρ. It is then easy to find the matrix of (unit-norm) eigenvectors:

V = Vᵀ = (1/√2) [ 1   1
                  1  −1 ],

and

Λ^{−1/2} = [ 1/√(1+ρ)      0
                 0      1/√(1−ρ) ].

Therefore

D = Λ^{−1/2} Vᵀ = [ 1/√(2(1+ρ))    1/√(2(1+ρ))
                    1/√(2(1−ρ))   −1/√(2(1−ρ)) ].
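A short check (illustrative ρ only, not in the original) that the D found above actually whitens C, i.e., D C Dᵀ = I:

```python
import numpy as np

# Verify that D = Lambda^{-1/2} V^T whitens C = [[1, rho], [rho, 1]].
rho = 0.6                                    # arbitrary test value
C = np.array([[1.0, rho], [rho, 1.0]])
V = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # eigenvectors of C
Lam_inv_sqrt = np.diag([1.0 / np.sqrt(1 + rho), 1.0 / np.sqrt(1 - rho)])
D = Lam_inv_sqrt @ V.T

print(np.round(D @ C @ D.T, 10))             # identity matrix
```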



 

Problem 3 (4.19 – Siyao Gu) Since s0[0] = s1[0] = 1, the two signals differ only in their second sample, so we can concentrate the decision regions around s0[1] and s1[1]. The test simplifies to

T ∼ N(−1, σ²) under H0,   T ∼ N(1, σ²) under H1,

where T = x[1]. The likelihood ratio test with threshold γ = P(H0)/P(H1) decides H1 when

p(x; H1)/p(x; H0) > P(H0)/P(H1).

Taking σ² = 1 as in the densities below,

p(x; H1)/p(x; H0) = [ (1/√(2π)) exp(−(x[1] − 1)²/2) ] / [ (1/√(2π)) exp(−(x[1] + 1)²/2) ]
                  = exp[ ((x[1] + 1)² − (x[1] − 1)²)/2 ]
                  = exp[2x[1]],

so we decide H1 when exp[2x[1]] > P(H0)/P(H1), i.e., when

x[1] > (1/2) ln[ P(H0)/P(H1) ].

Thus the chosen decision boundary is the zero-slope line through this threshold value of x[1], perpendicular to the line running between s0 and s1. If P(H0) = P(H1), the boundary is x[1] = 0.

Problem 4 (4.24 – Shu Wang) According to the textbook, we compute Ti(x) = Σ_{n=0}^{N−1} x[n] si[n] − (1/2)εi and choose the hypothesis Hi whose statistic Ti(x) is the maximum. The block diagram of the optimal receiver is Figure 4.13 on page 120 of the text. When M = 2, Eq. 4.25 gives

Pe = Q( √( ε̄(1 − ρs)/(2σ²) ) ).

To minimize Pe we need to minimize

ρs = s1ᵀs0 / [ (1/2)(s1ᵀs1 + s0ᵀs0) ] = N A0 A1 / [ (1/2) N (A0² + A1²) ] = 2A0A1/(A0² + A1²).

Since |ρs| ≤ 1, the minimum ρs = −1 is attained when A0 = −A1, and then Pe is minimized.

ECE 531: Detection and Estimation Theory, Spring 2011 Homework 11 Solutions

Problem 1 (5.14 – Shu Wang) From Eqs. 5.5 and 5.6 we have T(x) = xᵀCs(Cs + σ²I)⁻¹x. With s = Ah,

Cs = E[ssᵀ] = E[A²] hhᵀ = σ_A² hhᵀ,

so

T(x) = xᵀ σ_A² hhᵀ (σ_A² hhᵀ + σ²I)⁻¹ x.

By the matrix inversion lemma, (A + BCD)⁻¹ = A⁻¹ − A⁻¹B(DA⁻¹B + C⁻¹)⁻¹DA⁻¹. Setting A = σ²I, B = σ_A²h, C = 1 and D = hᵀ gives

(σ²I + σ_A² hhᵀ)⁻¹ = (1/σ²)I − (1/σ²) [ (σ_A²/σ²) / (1 + (σ_A²/σ²) hᵀh) ] hhᵀ.

Then

T(x) = xᵀ σ_A² hhᵀ [ (1/σ²)I − (1/σ²)( (σ_A²/σ²)/(1 + (σ_A²/σ²) hᵀh) ) hhᵀ ] x
     = (hᵀx)ᵀ(hᵀx) [ σ_A² / (σ_A² hᵀh + σ²) ],

so an equivalent statistic is

T′(x) = (hᵀx)² > γ′ = γ (σ_A² hᵀh + σ²)/σ_A².

Now

x ∼ N(0, σ²I) under H0,   x ∼ N(0, Cs + σ²I) under H1,

so

hᵀx ∼ N(0, σ² hᵀh) under H0,   hᵀx ∼ N(0, σ_A²(hᵀh)² + σ² hᵀh) under H1.

From Chapter 2, under H0,

(hᵀx)²/(σ² hᵀh) ∼ χ²₁,

so

PFA = Pr{T′(x) > γ′; H0} = Pr{ T′(x)/(σ² hᵀh) > γ′/(σ² hᵀh); H0 }.

Also from Chapter 2, the χ²₁ right-tail probability satisfies Pr{χ²₁ > x} = 2Q(√x), so

PFA = 2Q( √( γ′/(σ² hᵀh) ) ).

Similarly, under H1, (hᵀx)²/(σ_A²(hᵀh)² + σ² hᵀh) ∼ χ²₁, so

PD = Pr{T′(x) > γ′; H1} = 2Q( √( γ′/(σ_A²(hᵀh)² + σ² hᵀh) ) ).
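A Monte Carlo sketch (not in the original; h, σ, the threshold and the trial count are arbitrary) checking the PFA expression 2Q(√(γ′/(σ² hᵀh))) against direct simulation of (hᵀx)² under H0:

```python
import numpy as np
from scipy.stats import norm

# Under H0, (h^T x)^2 / (sigma^2 * h^T h) ~ chi^2_1, so
# PFA = Pr{(h^T x)^2 > gamma'} = 2*Q(sqrt(gamma'/(sigma^2 * h^T h))).
rng = np.random.default_rng(7)
sigma, trials = 1.0, 500_000
h = np.array([1.0, 2.0, -0.5, 0.3])          # arbitrary signal shape
gamma = 3.0                                  # arbitrary threshold on (h^T x)^2

x = rng.normal(0.0, sigma, size=(trials, h.size))
T = (x @ h) ** 2
pfa_mc = np.mean(T > gamma)
pfa_th = 2 * norm.sf(np.sqrt(gamma / (sigma**2 * (h @ h))))
print(pfa_mc, pfa_th)
```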

Problem 5.16 for Avinash (book)


Problem 2 (5.17 – Yao Feng)

The deflection coefficient is defined as

d² = ( E(T; H1) − E(T; H0) )² / Var(T; H0).

With T(x) = Σ_{n=0}^{N−1} x[n] A cos(2πf0n) and x[n] = A cos(2πf0n + φ) + w[n] under H1,

E(T; H1) = Σ_{n=0}^{N−1} E[ (A cos(2πf0n + φ) + w[n]) A cos(2πf0n) ]
         = cos φ Σ_{n=0}^{N−1} A² cos²(2πf0n) − sin φ Σ_{n=0}^{N−1} A² cos(2πf0n) sin(2πf0n)
         ≈ (NA²/2) cos φ,

E(T; H0) = Σ_{n=0}^{N−1} E[ w[n] A cos(2πf0n) ] = 0,

Var(T; H0) = Var( Σ_{n=0}^{N−1} w[n] A cos(2πf0n) )
           = Σ_{n=0}^{N−1} σ² A² cos²(2πf0n)
           ≈ (NA²/2) σ².

So

d² = ( (NA²/2) cos φ )² / ( (NA²/2) σ² ) = ( NA²/(2σ²) ) cos² φ.

We can see that if φ = 0, which means our phase assumption is right, we get the maximum d² and hence the maximum PD. If φ = π, which means the signal actually sent is −A cos(2πf0n), the mean of T under H1 changes sign and we get the minimum PD.

Problem 3 (6.2 – Shu Wang)

L(x) = P(x[0], x[1]; H1) / P(x[0], x[1]; H0)
     = λ² e^{−λ(x[0]+x[1])} / ( λ0² e^{−λ0(x[0]+x[1])} ) > γ
⇒ e^{−(λ−λ0)(x[0]+x[1])} > γλ0²/λ²
⇒ −(λ − λ0)(x[0] + x[1]) > ln(γλ0²/λ²).

If λ > λ0, we decide H1 when

T(x) = x[0] + x[1] < −ln(γλ0²/λ²)/(λ − λ0) = γ′.

Then

PFA = Pr{T(x) < γ′; H0}.

The region T(x) < γ′ is the triangle in the (x[0], x[1]) plane bounded by x[0] ≥ 0, x[1] ≥ 0 and x[0] + x[1] < γ′, so

PFA = ∫₀^{γ′} ∫₀^{γ′−x[0]} λ0² e^{−λ0(x[0]+x[1])} dx[1] dx[0]
    = ∫₀^{γ′} [ −λ0 e^{−λ0(x[0]+x[1])} ]₀^{γ′−x[0]} dx[0]
    = ∫₀^{γ′} ( λ0 e^{−λ0 x[0]} − λ0 e^{−λ0 γ′} ) dx[0]
    = 1 − e^{−λ0 γ′} − γ′ λ0 e^{−λ0 γ′}.

For a given PFA, the threshold γ′ does not depend on the unknown parameter λ, so the UMP test exists.
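A Monte Carlo check of the PFA formula (not in the original; λ0, γ′ and the trial count are arbitrary), simulating iid Exp(λ0) pairs:

```python
import numpy as np

# Check PFA = Pr{x[0] + x[1] < gamma'} under H0 (x[i] iid Exp(lambda0))
# against 1 - exp(-l0*g) - g*l0*exp(-l0*g).
rng = np.random.default_rng(8)
lambda0, gamma, trials = 2.0, 0.8, 1_000_000

x = rng.exponential(scale=1.0 / lambda0, size=(trials, 2))
pfa_mc = np.mean(x.sum(axis=1) < gamma)
pfa_th = 1 - np.exp(-lambda0 * gamma) - gamma * lambda0 * np.exp(-lambda0 * gamma)
print(pfa_mc, pfa_th)
```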
