
Solutions Manual
"A First Course in Digital Communications"
Cambridge University Press

Ha H. Nguyen
Department of Electrical & Computer Engineering
University of Saskatchewan
Saskatoon, SK, CANADA S7N 5A9

Ed Shwedyk
Department of Electrical & Computer Engineering
University of Manitoba
Winnipeg, MB, CANADA R3T 5V6

August 5, 2011

Preface


This Solutions Manual was last updated in August, 2011. We appreciate receiving comments and corrections you might have on the current version. Please send emails to [email protected] or [email protected].


Chapter 2


Deterministic Signal Characterization and Analysis

P2.1 First let us establish (or review) the relationships for real signals.

(a) Given {A_k, B_k}, i.e., s(t) = Σ_{k=0}^∞ [A_k cos(2πkf_r t) + B_k sin(2πkf_r t)], where A_k, B_k are real. Then C_k = √(A_k² + B_k²) and θ_k = tan⁻¹(B_k/A_k), where we write s(t) as

    s(t) = Σ_{k=0}^∞ C_k cos(2πkf_r t − θ_k).

If we choose to write it as

    s(t) = Σ_{k=0}^∞ C_k cos(2πkf_r t + θ_k),

then the phase becomes θ_k = −tan⁻¹(B_k/A_k). Further,

    D_k = (A_k − jB_k)/2, D_{−k} = (A_k + jB_k)/2 = D_k*, k = 1, 2, 3, . . .
    D_0 = A_0 (B_0 = 0 always).

(b) Given {C_k, θ_k}, then A_k = C_k cos θ_k, B_k = C_k sin θ_k. They are obtained from:

    s(t) = Σ_{k=0}^∞ C_k cos(2πkf_r t − θ_k) = Σ_{k=0}^∞ [C_k cos θ_k cos(2πkf_r t) + C_k sin θ_k sin(2πkf_r t)].

Now s(t) can be written as:

    s(t) = Σ_{k=0}^∞ C_k {e^{j[2πkf_r t − θ_k]} + e^{−j[2πkf_r t − θ_k]}}/2
         = C_0 (e^{−jθ_0} + e^{jθ_0})/2 + Σ_{k=1}^∞ [(C_k/2)e^{−jθ_k} e^{j2πkf_r t} + (C_k/2)e^{jθ_k} e^{−j2πkf_r t}],

where (e^{−jθ_0} + e^{jθ_0})/2 = cos θ_0.


Therefore

    D_0 = C_0 cos θ_0, where θ_0 is either 0 or π,
    D_k = (C_k/2) e^{−jθ_k}, k = 1, 2, 3, . . .

What about negative frequencies? Write the third term as (C_k/2)e^{jθ_k} e^{j2πk(−f_r)t}, where (−f_r) is interpreted as negative frequency. Therefore D_{−k} = (C_k/2)e^{+jθ_k}, i.e., D_{−k} = D_k*.

(c) Given {D_k}, then A_k = 2ℜ{D_k} and B_k = −2ℑ{D_k}. Also C_k = 2|D_k|, θ_k = −∠D_k, where D_k is in general complex and written as D_k = |D_k|e^{j∠D_k}.

Remark: Even though, given any one set of coefficients, we can find the other two sets, we can only determine {A_k, B_k} or {D_k} directly from the signal s(t); there is no method to determine {C_k, θ_k} directly.
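The coefficient conversions above are easy to sanity-check numerically. A minimal sketch in plain Python follows; the example values A = 3, B = 4 are arbitrary, not from the text.

```python
import cmath
import math

# Sketch of the {A_k, B_k} <-> {D_k} <-> {C_k, theta_k} relationships of P2.1
# (real-signal case, s(t) = sum_k C_k cos(2 pi k f_r t - theta_k)).

def ab_to_d(A, B):
    """D_k = (A_k - j B_k)/2; for a real signal D_{-k} = D_k*."""
    return (A - 1j * B) / 2

def d_to_ab(D):
    """A_k = 2 Re{D_k}, B_k = -2 Im{D_k} (part (c))."""
    return 2 * D.real, -2 * D.imag

def d_to_c_theta(D):
    """C_k = 2|D_k|, theta_k = -angle(D_k), matching D_k = (C_k/2) e^{-j theta_k}."""
    return 2 * abs(D), -cmath.phase(D)

A, B = 3.0, 4.0          # arbitrary example values
D = ab_to_d(A, B)
C, theta = d_to_c_theta(D)
print(C, theta)          # C = sqrt(A^2 + B^2) = 5, theta = atan(B/A)
print(d_to_ab(D))        # recovers (3.0, 4.0)
```

The round trip through {D_k} recovers {A_k, B_k} exactly, and C_k, θ_k agree with the closed forms √(A_k² + B_k²) and tan⁻¹(B_k/A_k).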


Consider now that s(t) is a complex, periodic time signal with period T = 1/f_r, i.e., s(t) = s_R(t) + js_I(t), where the real and imaginary components s_R(t), s_I(t) are each periodic with period T = 1/f_r. Again we represent s(t) in terms of the orthogonal basis set {cos(2πkf_r t), sin(2πkf_r t)}_{k=1,2,...}. That is,

    s(t) = Σ_{k=0}^∞ [A_k cos(2πkf_r t) + B_k sin(2πkf_r t)],    (2.1)

where A_k = (2/T)∫_{t∈T} s(t) cos(2πkf_r t)dt and B_k = (2/T)∫_{t∈T} s(t) sin(2πkf_r t)dt are now complex numbers.

One approach to finding A_k, B_k is to express s_R(t), s_I(t) in their own individual Fourier series, and then combine to determine A_k, B_k. That is,

    s_R(t) = Σ_{k=0}^∞ [A_k^(R) cos(2πkf_r t) + B_k^(R) sin(2πkf_r t)]
    s_I(t) = Σ_{k=0}^∞ [A_k^(I) cos(2πkf_r t) + B_k^(I) sin(2πkf_r t)]
    ⇒ s(t) = Σ_{k=0}^∞ [(A_k^(R) + jA_k^(I)) cos(2πkf_r t) + (B_k^(R) + jB_k^(I)) sin(2πkf_r t)],

so A_k = A_k^(R) + jA_k^(I) and B_k = B_k^(R) + jB_k^(I).

(d) So now suppose we are given {A_k, B_k}. How do we determine {C_k, θ_k} and D_k? Again, (2.1) can be written as

    s(t) = Σ_{k=0}^∞ [((A_k − jB_k)/2) e^{j2πkf_r t} + ((A_k + jB_k)/2) e^{−j2πkf_r t}].

As before, define D_k ≡ (A_k − jB_k)/2 and note that the term Σ_{k=0}^∞ ((A_k + jB_k)/2) e^{−j2πkf_r t} can be written as

    Σ_{k=−∞}^0 ((A_{−k} + jB_{−k})/2) e^{j2πkf_r t} = Σ_{k=−∞}^0 ((A_k − jB_k)/2) e^{j2πkf_r t} = Σ_{k=−∞}^0 D_k e^{j2πkf_r t}.

Therefore s(t) = Σ_{k=−∞}^∞ D_k e^{j2πkf_r t}, where D_k = (A_k − jB_k)/2, k = ±1, ±2, . . ., and D_0 = A_0 − jB_0 (note that B_0 is not necessarily equal to zero now). Equation (2.1) can also be written as:

    s(t) = Σ_{k=0}^∞ √(A_k² + B_k²) cos(2πkf_r t − tan⁻¹(B_k/A_k)),

from which it follows that C_k ≡ √(A_k² + B_k²) and θ_k = tan⁻¹(B_k/A_k).

Remarks:

(i) D_{−k} ≠ D_k* if s(t) is complex.

(ii) The amplitude C_k and phase θ_k are in general complex quantities and as such lose their usual physical meanings.

(iii) A complex signal s(t) would arise not from physical phenomena but from our analysis procedure. This occurs, for instance, when we go to an equivalent baseband model.

Consider now that the {D_k} are given. Then since D_k = (A_k − jB_k)/2 and D_{−k} = (A_k + jB_k)/2, we have, upon adding and subtracting D_k, D_{−k}: A_k = D_k + D_{−k}; B_k = j(D_k − D_{−k}). {C_k, θ_k} can then be determined from {A_k, B_k}, which in turn are determined from the D_k as above.

P2.2 (a) A_k = (2/T)∫_{t∈T} s(t) cos(2πkf_r t)dt = (2/T)∫_{t∈T} s(t) cos(2π(−k)f_r t)dt = A_{−k}, i.e., even.

(b) B_k = (2/T)∫_{t∈T} s(t) sin(2πkf_r t)dt = −(2/T)∫_{t∈T} s(t) sin(2π(−k)f_r t)dt = −B_{−k}, i.e., odd.

(c) C_k = √(A_k² + B_k²); C_{−k} = √(A_{−k}² + B_{−k}²) = √(A_k² + (−B_k)²) = C_k, i.e., even (note that squaring an odd real function always gives an even function).

(d) θ_k = tan⁻¹(B_k/A_k); θ_{−k} = tan⁻¹(B_{−k}/A_{−k}) = tan⁻¹(−B_k/A_k) = −tan⁻¹(B_k/A_k) = −θ_k, i.e., odd (note that tan⁻¹(·) is an odd function).

(e) D_k = (A_k − jB_k)/2; D_{−k} = (A_{−k} − jB_{−k})/2 = (A_k + jB_k)/2 = D_k*.

Remarks: Properties (a), (b) are true for complex signals as well. But not (c), (d), (e), at least not always.

P2.3 f_r = 1/T = 1/8 (Hz).

    A_1 = 2ℜ(D_1) = 0; B_1 = −2ℑ(D_1) = −2. A_5 = 2ℜ(D_5) = 4; B_5 = −2ℑ(D_5) = 0.
    C_1 = 2|D_1| = 2; θ_1 = ∠D_1 = π/2. C_5 = 2|D_5| = 4; θ_5 = 0.
    ∴ s(t) = −2 sin(2π(1/8)t) + 4 cos(2π(5/8)t) = 2 cos(2π(1/8)t + π/2) + 4 cos(2π(5/8)t).
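A quick numerical cross-check of this result, sketched in plain Python: it compares the exponential-form series built from the given D_k against the trigonometric form.

```python
import cmath
import math

# P2.3 check: with f_r = 1/8 Hz and D_{+-1}, D_{+-5} built from
# A1 = 0, B1 = -2, A5 = 4, B5 = 0, the exponential series must reproduce
# s(t) = -2 sin(2 pi t/8) + 4 cos(2 pi 5t/8).
fr = 1 / 8
D = {1: 1j, -1: -1j, 5: 2.0, -5: 2.0}   # D_k = (A_k - j B_k)/2, D_{-k} = D_k*

def s_from_D(t):
    return sum(Dk * cmath.exp(2j * math.pi * k * fr * t)
               for k, Dk in D.items()).real

def s_direct(t):
    return -2 * math.sin(2 * math.pi * t / 8) + 4 * math.cos(2 * math.pi * 5 * t / 8)

err = max(abs(s_from_D(t) - s_direct(t)) for t in (0.0, 0.3, 1.7, 4.2, 7.9))
print(err)   # on the order of machine precision
```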

P2.4 The fundamental frequency is f_r = f_c (Hz).

(a) Write s(t) as s(t) = (V/2)e^{jα} e^{j2πf_c t} + (V/2)e^{−jα} e^{−j2πf_c t}, i.e., the k = 1 and k = −1 terms. Thus D_1 = (V/2)e^{jα}, D_{−1} = (V/2)e^{−jα} = D_1*.

(i) General α: as above.
(ii) α = 0 ⇒ D_1 = V/2 = D_{−1}.
(iii) α = π ⇒ D_1 = −V/2 = D_{−1}.
(iv) α = −π ⇒ same as (iii); not much difference between +π and −π.
(v) α = π/2 ⇒ D_1 = jV/2, D_{−1} = −jV/2.
(vi) α = −π/2 ⇒ D_1 = −jV/2, D_{−1} = jV/2.

(b) The magnitude spectrum is the same for all the cases, as expected, since only the phase of the sinusoid changes: two impulses of strength V/2 at f = ±f_c.

Figure 2.1: magnitude spectrum |D_k| (volts) versus f (Hz): V/2 at f = −f_c and at f = f_c.

The phase spectra change as shown in Fig. 2.2. For (iii) and (iv), which plot(s) do you prefer and why? Note: in all cases the magnitude and phase spectra are even and odd functions, respectively, as they should be.

(c) s(t) = V cos α cos(2πf_c t) − V sin α sin(2πf_c t), so A_1 = V cos α, B_1 = −V sin α.

(i) General α: as above.
(ii) A_1 = V, B_1 = 0.
(iii) A_1 = −V, B_1 = 0.
(iv) Same as (iii).
(v) A_1 = 0, B_1 = −V.
(vi) A_1 = 0, B_1 = V.

For {C_k, θ_k} we have C_1 = V always and θ_1 = +α.

P2.5 s(t) = 2 sin(2πt − 3) + sin(6πt).

(a) The frequency of the first sinusoid is 1 Hz and that of the second one is 6π/2π = 3 Hz. The fundamental frequency is f_r = GCD{1, 3} = 1 Hz. Rewrite sin(x) as cos(x − π/2). Then

    s(t) = 2 cos(2πt − (3 + π/2)) + 1·cos(2π(3)t − π/2),

i.e., C_1 = 2, θ_1 = 3 + π/2 and C_3 = 1, θ_3 = π/2. Use the relationships |D_k| = C_k/2, ∠D_k = −θ_k, D_{−k} = D_k* to obtain:

    D_1 = 1e^{−j(3+π/2)}, D_3 = (1/2)e^{−jπ/2};
    D_{−1} = 1e^{j(3+π/2)}, D_{−3} = (1/2)e^{jπ/2}.

(b) See Fig. 2.3.
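Because the coefficient signs are easy to get wrong here, a direct numerical integration of D_k = (1/T)∫ s(t)e^{−j2πkf_r t} dt (T = 1 s) is a worthwhile sketch; uniform sampling is exact in this case because s(t) is a trigonometric polynomial.

```python
import cmath
import math

# P2.5 check: D_1 should equal e^{-j(3 + pi/2)} and D_3 should equal
# (1/2) e^{-j pi/2} = -j/2 for s(t) = 2 sin(2 pi t - 3) + sin(6 pi t).
def coeff(k, n=64):
    # average of s(t) e^{-j 2 pi k t} over one period (T = 1 s); exact for
    # a trigonometric polynomial once n exceeds the highest harmonic
    acc = 0j
    for i in range(n):
        t = (i + 0.5) / n
        s = 2 * math.sin(2 * math.pi * t - 3) + math.sin(6 * math.pi * t)
        acc += s * cmath.exp(-2j * math.pi * k * t)
    return acc / n

D1, D3 = coeff(1), coeff(3)
print(abs(D1))   # 1.0
print(D3)        # approximately -0.5j
```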


Figure 2.2: phase spectra ∠D_k (radians) versus f (Hz) for the cases of P2.4: (i) α at f_c and −α at −f_c; (ii) zero phase (phase spectrum of a cos(·)); (iii), (iv) ±π at ±f_c, which may be drawn with either sign at either frequency (phase spectrum of a −cos(·)); (v) π/2 at f_c and −π/2 at −f_c (phase spectrum of a −sin(·)); (vi) −π/2 at f_c and π/2 at −f_c (phase spectrum of a sin(·)).

P2.6 Classification of signals is as follows:



Figure 2.3: spectra for P2.5. Magnitude |D_k| (volts) versus f (Hz): 1 at f = ±1 and 1/2 at f = ±3. Phase ∠D_k (radians): −(3 + π/2) (mod 2π) at f = 1 and −π/2 at f = 3, with odd symmetry at negative frequencies.

(a) It has odd (but not halfwave) symmetry.

(b) Neither even nor odd, but it shows halfwave symmetry.

(c) Even symmetry, also halfwave symmetry, therefore even quarterwave symmetry.

(d) (i) Odd symmetry, also halfwave ⇒ odd quarterwave symmetry. (ii) Neither even, nor odd, nor halfwave. (iii) Even but not halfwave.

(e) Neither even, nor odd, nor halfwave.

(f) Any even signal still remains even, since s_0(t) = DC + s(t) and s_0(−t) = DC + s(−t) = DC + s(t) = s_0(t).

Any odd signal is no longer odd. Note that a DC component is an even signal.

Any halfwave symmetric signal is no longer halfwave symmetric. A DC signal is not halfwave symmetric and this component destroys the halfwave symmetry. Note that any signal with a nonzero DC component cannot be halfwave symmetric.

Regarding the values of the Fourier series coefficients, all that changes is the D_0, C_0 or A_0 value, i.e., the DC component value. All other coefficients are unchanged.

(g) In general the classification changes completely, but for certain specific time shifts the classification may remain the same. To see the effect of a time shift on the coefficients, consider first

    D_k^(shifted) = (1/T)∫_{t∈T} s(t − τ)e^{−j2πkf_r t} dt = e^{−j2πkf_r τ} [(1/T)∫_{λ∈T} s(λ)e^{−j2πkf_r λ} dλ]    (λ = t − τ)
    ∴ D_k^(shifted) = e^{−j2πkf_r τ} D_k.

In terms of {A_k, B_k}:

    (A_k^(shifted) − jB_k^(shifted))/2 = [cos(2πkf_r τ) − j sin(2πkf_r τ)] (A_k − jB_k)/2
    ∴ A_k^(shifted) = A_k cos(2πkf_r τ) − B_k sin(2πkf_r τ)
    ∴ B_k^(shifted) = A_k sin(2πkf_r τ) + B_k cos(2πkf_r τ).

Now

    D_k^(shifted) = (C_k^(shifted)/2) e^{jθ_k^(shifted)} = e^{−j2πkf_r τ} (C_k/2) e^{jθ_k}
    ∴ C_k^(shifted) = C_k, θ_k^(shifted) = θ_k − 2πkf_r τ.

Basically a time shift leaves the amplitude of each sinusoid unchanged and results in a linear change in the phase.

P2.7 (a) s(−t) = s_1(−t) + s_2(−t) = s_1(t) + s_2(t) = s(t) ⇒ even.

(b) s(−t) = s_1(−t) + s_2(−t) = s_1(t) − s_2(t), no symmetry.

(c) s(t ± T/2) = s_1(t ± T/2) + s_2(t ± T/2) = −s_1(t) − s_2(t) = −s(t). Therefore s(t) is halfwave symmetric.

(d) Can't say anything.

(e) Again cannot say anything.

(f) Consider s(t ± T/2) = s_1(t ± T/2) + s_2(t ± T/2) = −s_1(t) − s_2(t) = −s(t). Therefore s(t) is halfwave symmetric.

Now if both s_1(t), s_2(t) are even, then s(t) is even and therefore it is even quarterwave symmetric. If both are odd, s(t) is odd and s(t) is odd quarterwave symmetric. However, if one is even and the other is odd, then s(t) is only halfwave symmetric.

(g) s(t) is even but not halfwave symmetric.

P2.8 s(t) = s_1(t)s_2(t).

(a) s(−t) = s_1(−t)s_2(−t) = s_1(t)s_2(t) = s(t) ⇒ even.

(b) s(−t) = s_1(−t)s_2(−t) = −s_1(t)s_2(t) = −s(t) ⇒ odd.

(c) s(t ± T/2) = s_1(t ± T/2)s_2(t ± T/2) = [−s_1(t)][−s_2(t)] = s(t). Therefore s(t) is not halfwave symmetric. Note that the fundamental period changes to T/2.

(d) Neither even nor halfwave symmetric.

(e) Neither odd nor halfwave symmetric.

(f) There are 3 possibilities: (i) s_1(t), s_2(t) are both even quarterwave; (ii) s_1(t), s_2(t) are both odd quarterwave; (iii) one is odd quarterwave, the other is even quarterwave.

(i) From (a) it follows that s(t) is even. From (c) it follows that it is not halfwave symmetric.

(ii) Easy to show that s(t) is even. But s(t ± T/2) = s_1(t ± T/2)s_2(t ± T/2) = [−s_1(t)][−s_2(t)] = s_1(t)s_2(t) = s(t) ≠ −s(t), therefore not halfwave symmetric.

(iii) s(t) is odd (follows from (b)), but again s(t ± T/2) = s(t) ≠ −s(t), therefore not halfwave symmetric.

Again note that in each case the fundamental period changes to T/2.

(g) s(t) is even. Is it halfwave symmetric?

    s(t ± T/2) = s_1(t ± T/2)s_2(t ± T/2) = −s_1(t)s_2(t ± T/2) ≠ −s(t)

No.

P2.9 (a) Check for 2 conditions: that the magnitude spectrum is even and the phase spectrum is odd. If both are satisfied, then the signal is real. If one or the other or neither condition is satisfied, then the signal is complex. (i) real; (ii) real (note that a phase of π is the same as −π); (iii) complex (the phase spectrum is not odd).

(b) If a harmonic frequency has a phase of π (or 0), then there is only a cos(·) term at this frequency. Similarly, if the phase is ±π/2, then there is only a sin(·) term at that frequency. Finally, if the signal is even then it only has cos(·) terms in the Fourier series; if odd, then only sin(·) terms.

Therefore the signal in (ii) is even (and real). No signal is odd.

P2.10 The two spectra, and that of the product, are shown in Figure 2.4.

Figure 2.4: D_k coefficients of V_m cos(2πf_m t) (V_m/2 at f = ±f_m), of V_c cos(2πf_c t) (V_c/2 at f = ±f_c), and of their product (V_mV_c/4 at f = ±f_c ± f_m).


P2.11 The difference between the two modulations of P2.10 and P2.11 is that the one in P2.11 has a spectral component at f_c, or equivalently the modulating signal, s_1(t) = V_DC + V_m cos(2πf_m t), has a spectral component at f = 0 (i.e., a DC component).

Figure 2.5: S_1(f) (V/Hz): V_DC at f = 0 and V_m/2 at f = ±f_m; S_2(f): V_c/2 at f = ±f_c; S(f): V_cV_DC/2 at f = ±f_c and V_cV_m/4 at f = ±f_c ± f_m.

Remarks:

(i) One can use the frequency shift property to determine the spectrum of s(t).

(ii) The phases of all the spectra are zero, a consequence of only cosines being involved, both as carrier and message signals. A good exercise is to change some (or all) of the signals to sines and redo the spectra.

(iii) The simplest way to find the spectra is to use some high-school trig, namely cos(x)cos(y) = [cos(x + y) + cos(x − y)]/2. No need to know convolution, the frequency shift property, etc.

P2.12 (a) f_i(t) = (1/2π) d/dt [2πf_c t − V_m sin(2πf_m t)] = f_c − V_m f_m cos(2πf_m t) (Hz).

Figure 2.6: f_i(t) (Hz) versus t (sec): a cosine oscillating between f_c − V_m f_m and f_c + V_m f_m with period 1/f_m.


(b) Consider

    s(t + k/f_m) = V_c cos[2πf_c(t + k/f_m) − V_m sin(2πf_m(t + k/f_m))]
                 = V_c cos[2πf_c t + 2πk(f_c/f_m) − V_m sin(2πf_m t + 2πk)] = s(t),

since f_c/f_m = n, an integer. Choose k = 1, i.e., s(t + 1/f_m) = s(t) ⇒ periodic with period T = 1/f_m (sec).

(c) Consider s_1(t) = e^{j[2πf_c t − V_m sin(2πf_m t)]}. Then

    s_1(t + k/f_m) = e^{j[2πf_c t + kn2π − V_m sin(2πf_m t + k2π)]} = e^{j2πkn} s_1(t) = s_1(t).

Therefore s_1(t) is periodic with period 1/f_m (sec). Its Fourier coefficients are determined as follows:

    D_k = (1/T) ∫_{−T/2}^{T/2} e^{j[2πf_c t − V_m sin(2πf_m t)]} e^{−j2πkf_m t} dt, T = 1/f_m.

Change variable to λ = 2πf_m t:

    D_k = (1/2π) ∫_{−π}^{π} e^{j[(n−k)λ − V_m sin λ]} dλ = J_{n−k}(V_m),

where by definition J_n(x) = (1/2π) ∫_{−π}^{π} e^{−j[nλ − x sin λ]} dλ is the nth-order Bessel function.

Remarks: The spectrum depends on V_m, the amplitude of the modulating signal. In fact V_m determines the maximum instantaneous frequency (see the plot in (a)), which implies that the larger V_m is, the wider the spectrum bandwidth becomes. Of course, in theory there are spectral components out to infinity, but in practice a finite bandwidth would capture most of the power of the transmitted signal. The restriction that f_c is an integer multiple of f_m is easily removed, but for this the Fourier transform is needed since the signal is no longer periodic in general.
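The Bessel-coefficient result can be sketched numerically in plain Python. The series below evaluates J_m from its power-series definition; n = 3 and V_m = 1.5 are arbitrary example values, not from the text.

```python
import cmath
import math

# Check of P2.12(c): with f_c = n f_m, the k-th Fourier coefficient of
# e^{j[2 pi f_c t - V_m sin(2 pi f_m t)]} equals J_{n-k}(V_m).
def bessel(m, x, terms=40):
    # power series J_m(x) = sum_p (-1)^p / (p! (p+m)!) (x/2)^{2p+m}, m >= 0
    return sum((-1) ** p / (math.factorial(p) * math.factorial(p + m))
               * (x / 2) ** (2 * p + m) for p in range(terms))

def J(m, x):
    # J_{-m}(x) = (-1)^m J_m(x)
    return bessel(m, x) if m >= 0 else (-1) ** (-m) * bessel(-m, x)

n, Vm, fm, N = 3, 1.5, 1.0, 4096   # example values
def Dk(k):
    acc = 0j
    for i in range(N):
        t = i / (N * fm)   # uniform samples over one period 1/f_m
        acc += cmath.exp(1j * (2 * math.pi * n * fm * t
                               - Vm * math.sin(2 * math.pi * fm * t))) \
               * cmath.exp(-2j * math.pi * k * fm * t)
    return acc / N

err = max(abs(Dk(k) - J(n - k, Vm)) for k in (0, 1, 2, 3, 5))
print(err)   # tiny; the harmonics of the FM exponential decay extremely fast
```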

P2.13 (a)

    s_1(t) = 1 + √2 e^{jπ/4} e^{j2πt} + √2 e^{−jπ/4} e^{−j2πt} + 5e^{j2.21} e^{j2π(2t)} + 5e^{−j2.21} e^{−j2π(2t)}
             + 5e^{−j2.21} e^{j2π(3t)} + 5e^{j2.21} e^{−j2π(3t)}
           = 1 + 2√2 cos(2πt + π/4) + 10 cos(4πt + 2.21) + 10 cos(6πt − 2.21),

where f_r is taken to be 1 Hz.

    s_2(t) = 2 cos(2πt + π/4)

    s_1(t)s_2(t) = 2 cos(2πt + π/4) + 4√2 cos²(2πt + π/4)
                   + 20 cos(4πt + 2.21) cos(2πt + π/4) + 20 cos(6πt − 2.21) cos(2πt + π/4)
                 = 2√2 + 2 cos(2πt + π/4) + 10 cos(2πt + 2.21 − π/4)
                   + 2√2 cos(4πt + π/2) + 10 cos(4πt − 2.21 − π/4)
                   + 10 cos(6πt + 2.21 + π/4) + 10 cos(8πt + π/4 − 2.21).

Picking off the coefficients (D_k = (C_k/2)e^{jθ_k} for a term C_k cos(2πkf_r t + θ_k)):

    D_0 = 2√2; D_1 = e^{jπ/4} + 5e^{j(2.21 − π/4)}; D_2 = √2 e^{jπ/2} + 5e^{−j(2.21 + π/4)};
    D_3 = 5e^{j(2.21 + π/4)}; D_4 = 5e^{j(π/4 − 2.21)}; D_{−k} = D_k*.
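Because sign slips are easy to make in this expansion, here is a numerical cross-check sketched in plain Python: it integrates D_k of s_1(t)s_2(t) directly and compares with the closed-form coefficients above.

```python
import cmath
import math

pi = math.pi

def s1(t):
    return (1 + 2 * math.sqrt(2) * math.cos(2 * pi * t + pi / 4)
            + 10 * math.cos(4 * pi * t + 2.21)
            + 10 * math.cos(6 * pi * t - 2.21))

def s2(t):
    return 2 * math.cos(2 * pi * t + pi / 4)

def Dk(k, n=1024):
    # (1/T) integral of s1 s2 e^{-j 2 pi k t} over T = 1 s; exact for a
    # trigonometric polynomial with uniform sampling
    return sum(s1((i + 0.5) / n) * s2((i + 0.5) / n)
               * cmath.exp(-2j * pi * k * (i + 0.5) / n) for i in range(n)) / n

expected = {
    0: 2 * math.sqrt(2) + 0j,
    1: cmath.exp(1j * pi / 4) + 5 * cmath.exp(1j * (2.21 - pi / 4)),
    2: math.sqrt(2) * cmath.exp(1j * pi / 2) + 5 * cmath.exp(-1j * (2.21 + pi / 4)),
    3: 5 * cmath.exp(1j * (2.21 + pi / 4)),
    4: 5 * cmath.exp(1j * (pi / 4 - 2.21)),
}
err = max(abs(Dk(k) - v) for k, v in expected.items())
print(err)   # on the order of machine precision
```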

P2.14 The signals are shown below.

Figure 2.7: V cos(2πf_c t), and the signal gated on over (−T/4, T/4) in each period, T = 1/f_c.

The Fourier coefficients of s_1(t) are given by

    D_k^[s_1(t)] = (1/T) ∫_{−T/4}^{T/4} e^{−j2πkt/T} dt = sin(πk/2)/(πk),

while

    D_k^[V cos(2πf_c t)] = V/2 for k = ±1, and 0 otherwise = (V/2)δ_D(k − 1) + (V/2)δ_D(k + 1),

where δ_D(·) is interpreted as a "discrete" delta function, i.e., δ_D(x) = 1 for x = 0 and δ_D(x) = 0 for x ≠ 0. Therefore

    D_k^[s(t)] = D_k^[s_1(t)] ∗ D_k^[V cos(2πf_c t)]
               = Σ_{n=−∞}^∞ [sin(π(k−n)/2)/(π(k−n))] [(V/2)δ_D(n − 1) + (V/2)δ_D(n + 1)]
               = (V/2) sin(π(k−1)/2)/(π(k−1)) + (V/2) sin(π(k+1)/2)/(π(k+1)).
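The discrete convolution can be sketched in plain Python (V = 2 is an arbitrary example value); a direct numerical integration of the gated cosine confirms the convolution formula.

```python
import cmath
import math

V = 2.0   # arbitrary example amplitude

def d_gate(k):
    # coefficients of the unit square wave, open on (-T/4, T/4): sin(pi k/2)/(pi k)
    return 0.5 if k == 0 else math.sin(math.pi * k / 2) / (math.pi * k)

def d_product(k):
    # discrete convolution with the carrier's coefficients +-V/2 at k = +-1
    return V / 2 * d_gate(k - 1) + V / 2 * d_gate(k + 1)

def d_direct(k, n=20000):
    # (1/T) integral of V cos(2 pi t) e^{-j 2 pi k t} over the open interval, T = 1
    acc = 0j
    for i in range(n):
        t = -0.25 + 0.5 * (i + 0.5) / n
        acc += V * math.cos(2 * math.pi * t) * cmath.exp(-2j * math.pi * k * t)
    return acc * 0.5 / n

err = max(abs(d_direct(k) - d_product(k)) for k in range(-4, 5))
print(err)   # small discretization error
```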

Figure 2.8: the limiter characteristic s_out versus s (saturating at ±V_1), the input s(t) = V cos(2πf_m t), and the decomposition waveforms s_1(t) and s_2(t), with switching times ±t_1, ±t_2.

P2.15 It is obvious that we are interested in the case V_1 < V; otherwise there is no saturation (see Figure 2.8).

The time t_1 is given by: V cos(2πf_m t_1) = V_1 ⇒ t_1 = (1/(2πf_m)) cos⁻¹(V_1/V).
The time t_2 is given by: t_2 = 1/(4f_m) + (1/(4f_m) − t_1) = 1/(2f_m) − (1/(2πf_m)) cos⁻¹(V_1/V).
Lastly, t_2 − t_1 = 1/(2f_m) − (1/(πf_m)) cos⁻¹(V_1/V) ≡ ∆t.

We are now in a position to draw the waveforms s_1(t), s_2(t) as in Figure 2.9. Note that s_1(t) is periodic with period 1/(2f_m) and has a DC component equal to (t_2 − t_1)/T. On the other hand, s_2(t) is periodic with period 1/f_m.

Now we determine the Fourier coefficients of s_1(t) and s_2(t). The basic pulse shapes for s_1(t) (over the interval −1/(4f_m) to 1/(4f_m)) and for s_2(t) (over the interval −1/(2f_m) to 1/(2f_m)) are shown in Fig. 2.9.

Figure 2.9: the basic pulse shapes: s_1(t), equal to 1 for t_1 < |t| < T/2 with T = 1/(2f_m); and s_2(t), equal to V_1 for |t| < t_1 and to −V_1 for t_2 < |t| < T/2, with T = 1/f_m.

Fourier coefficients of s_1(t) (T = 1/(2f_m)):

    D_k^[s_1] = (1/T) ∫_{−T/2}^{T/2} s_1(t) e^{−j2πkt/T} dt
              = (1/T) [∫_{−T/2}^{−t_1} e^{−j2πkt/T} dt + ∫_{t_1}^{T/2} e^{−j2πkt/T} dt]
              = [sin(πk) − sin(2πkt_1/T)]/(πk) = −sin(2k cos⁻¹(V_1/V))/(πk), k = ±1, ±2, . . . ,

where t_1/T = 2f_m t_1 = (1/π) cos⁻¹(V_1/V), and D_0^[s_1] = (t_2 − t_1)/T.

Fourier coefficients of s_2(t) (T = 1/f_m):

    D_k^[s_2] = (1/T) [∫_{−T/2}^{−t_2} (−V_1) e^{−j2πkt/T} dt + ∫_{−t_1}^{t_1} V_1 e^{−j2πkt/T} dt + ∫_{t_2}^{T/2} (−V_1) e^{−j2πkt/T} dt]
              = (V_1/(πk)) [sin(2πkt_2/T) + sin(2πkt_1/T)].

Now t_1/T = (1/2π) cos⁻¹(V_1/V) and t_2/T = 1/2 − (1/2π) cos⁻¹(V_1/V). Therefore

    D_k^[s_2] = (V_1/(πk)) [1 − cos(πk)] sin(k cos⁻¹(V_1/V)).

Consider now the coefficients of s_1(t)s(t), which are the (discrete) convolution of the Fourier coefficients of s_1(t) and s(t). The Fourier coefficients of s(t) are simply D_1 = V/2 (k = 1, or f = f_m) and D_{−1} = V/2 (k = −1, or f = −f_m); all other coefficients are zero.

Graphically, the result is shown in Figure 2.10.

Figure 2.10: D_k^[s(t)] (V/2 at f = ±f_m); D_k^[s_1(t)] (lines at f = 0, ±2f_m, ±4f_m, . . ., with D_{−k} = D_k*); and D_k^[s_1(t)s(t)], e.g., (V/2)(D_1^[s_1] + D_0^[s_1]) at f_m, (V/2)(D_2^[s_1] + D_1^[s_1]) at 3f_m, (V/2)(D_3^[s_1] + D_2^[s_1]) at 5f_m.

Convolve the Fourier series of s_1(t) and s(t) graphically: flip D_k^[s(t)] around the vertical axis, slide it along the horizontal axis, multiply by D_k^[s_1(t)] and sum the products. The plot looks as in Fig. 2.10. Since s_1(t) has period 1/(2f_m), its mth harmonic lies at 2mf_m, and

    D_k^[s_1(t)s(t)] = (V/2) (D_{(k+1)/2}^[s_1] + D_{(k−1)/2}^[s_1]), k = ±1, ±3, ±5, . . .
    D_k^[s_1(t)s(t)] = 0, k = 0, ±2, ±4, . . . , with D_{−k}^[s_1(t)s(t)] = {D_k^[s_1(t)s(t)]}*.

Finally, D_k^[s_out(t)] = D_k^[s_1(t)s(t)] + D_k^[s_2(t)].


P2.16 (a)

    S(f) = V ∫_{−β}^{β} e^{−j2πft} dt = V (e^{j2πfβ} − e^{−j2πfβ})/(j2πf) = V sin(2πfβ)/(πf) = 2Vβ sin(2πfβ)/(2πfβ).

(b)

    D_k^[αT] = (1/(αT)) · 2Vβ sin(2πfβ)/(2πfβ) |_{f = kf_r = k/(αT)} = (2Vβ/(αT)) sin(2πkβ/(αT))/(2πkβ/(αT)).

(c) With V = β = 1 and T = 4 this is D_k^[αT] = (1/(2α)) sin(πk/(2α))/(πk/(2α)). Plots for α = 1, 1.5, 2, 3, 5 are shown in Fig. 2.11.

Figure 2.11: D_k^[αT] (V/Hz) versus f (Hz) for α = 1, 1.5, 2, 3, 5: as α increases the spectral lines become more closely spaced and smaller, all lying under the same sinc-shaped envelope.

(d) As α → ∞, the amplitude (magnitude) of Dk goes to zero; however the “envelope” is always a sinc(·), i.e., that of S(f ). Note also that the frequency spacing between the frequency components, ∆f is proportional to α1 and it goes to zero as α → ∞.

Ng

P2.17 The basic difference in the magnitude/phase plots of the Fourier series representation and the Fourier transform representation is that the Fourier transform magnitude plot will have impulses at the fundamental (and harmonic) frequency(ies) of strength = |Dk |. This reflects the fact that the signal is periodic and that we have finite power concentrated in a “zero” interval. The phase spectrum is the same. √ √ ¤ £ P2.18 (a) (i) S(f ) = V21 δ(f − 2) + δ(f + 2) + V22 [δ(f − 2) + δ(f + 2)] (V/Hz) √ √ ¤ √ √ ¤ £ £ (ii) S(f ) = V21 δ(f − 2) + δ(f + 2) + V22 δ(f − 2 2) + δ(f + 2 2) √ (b) (i) Not periodic. The ratio of the two frequencies of the 2 sinusoids, 2 and 2 Hz, is not a rational number. √ (ii) Periodic, since the ratio of the 2 frequencies is a rational number, 2√22 = 2. The spectra look as follows: Solutions

Page 2–15

A First Course in Digital Communications

V2 2 −2 2

Dk (volts)

V1 2

− 2

0

2

Figure 2.12 R∞

h

−∞ m(t)

ej2πfc t −e−j2πfc t 2

i

(b) See Figure 2.13.

V2 2

f k (Hz)

2 2

e−j2πf t dt =

M (f −fc )+M (f +fc ) 2

we

P2.19 (a) S(f ) =

V1 2

dy k

Nguyen & Shwedyk

M ( f ) (V/Hz)

M ( f ) (V/Hz)

Sh

A

θ

0

−W

f k (Hz)

W

−θ

∠M ( f ) (rad)

&

S( f ) S( f )

A/ 2

θ

− fc + W

en

− fc

−θ

fc − W

0

fc

−θ

fc + W

f k (Hz)

∠S ( f )

Figure 2.13

uy

− fc − W

θ

Ng

P2.20 First find the Fourier series of e−jVm sin(2πfm t) : Z 1 T −jVm sin(2πfm t) −j2πkfm t 1 Dk = e e dt where T = T 0 fm Z 2πfm T 1 dλ λ=2πfm t = e−jVm sin λ e−jkλ T 0 2πfm Z 2π 1 fm T =1 e−j(kλ−Vm sin λ) dλ = 2π 0 = Jk (Vm ) (kth order Bessel function). Then the Fourier transform is: above spectrum by fc , i.e.,

P∞

k=−∞ Jk (Vm )δ(f

s(t) ←→ S(f ) = Vc

∞ X

(2.2)

− kfm ). Multiplying by ej2πfc t shifts the

Jk (Vm )δ(f − fc − kfm ).

k=−∞

Solutions

Page 2–16

Nguyen & Shwedyk

A First Course in Digital Communications

P2.21 Differentiate the 1st signal twice.

Figure 2.14: s(t) (triangle of height A over (−T, T)), ds(t)/dt (rectangles of height A/T and −A/T), and d²s(t)/dt² (impulses of strength A/T at t = ±T and −2A/T at t = 0).

d2 s(t) dt2

¾ =

A j2πf T 2A A −j2πf T 2A e − + e = [cos(2πf T ) − 1] T T T T

Sh

Then

n

S(f ) =

o

d2 s(t) dt2 (j2πf )2

F

=

A 1 − cos(2πf T ) (V/Hz). T 2π 2 f 2

&

Consider now the 2nd signal. The 1st derivation looks like ds (t ) dt

−T

0 T

t

(−2 A)

Figure 2.15

uy

en

A/T

Ng

We could differentiate the rectangular pulse again, but we have done it so many times that its transform is almost engrained in our minds:

    F{ds(t)/dt} = −2A + A sin(2πfT)/(πfT)
    F{s(t)} = (1/(j2πf)) [−2A + A sin(2πfT)/(πfT)].
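The triangular-pulse result can be checked numerically; a sketch in plain Python, where A = 2 and T = 1.5 are arbitrary example values and the transform integral is a midpoint Riemann sum:

```python
import cmath
import math

# P2.21 check: the triangle of height A on (-T, T) has
# S(f) = (A/T) (1 - cos(2 pi f T)) / (2 pi^2 f^2).
A, T = 2.0, 1.5

def s(t):
    return A * (1 - abs(t) / T) if abs(t) < T else 0.0

def S_numeric(f, n=20000):
    h = 2 * T / n
    return sum(s(-T + (i + 0.5) * h)
               * cmath.exp(-2j * math.pi * f * (-T + (i + 0.5) * h))
               for i in range(n)) * h

def S_formula(f):
    return (A / T) * (1 - math.cos(2 * math.pi * f * T)) / (2 * math.pi ** 2 * f ** 2)

for f in (0.2, 0.7, 1.3):
    print(abs(S_numeric(f) - S_formula(f)))   # small discretization error
```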

P2.22 For reference, the Fourier transform of a rectangular pulse centered at t = 0, of width 2T and height A, is 2AT sin(2πfT)/(2πfT). Now decompose s(t) into the sum of rectangular pulses s_1(t) and s_2(t) (note that this decomposition is not unique):

Figure 2.16: s(t) (2 on (1, 2), 1 on (2, 3), 2 on (3, 4)) = s_1(t) (2 on (1, 4)) + s_2(t) (−1 on (2, 3)).

s_1(t) is a shifted version of a rectangular pulse of width 2T = 3 and height A = 2, shifted by 2.5 seconds. Therefore, with F{s(t − τ)} = e^{−j2πfτ}S(f) and τ = 2.5,

    F{s_1(t)} = e^{−j2π(2.5)f} · 2(2)(3/2) sin(2πf(3/2))/(2πf(3/2)) = 2e^{−j5πf} sin(3πf)/(πf).

s_2(t) is a rectangular pulse of width 2T = 1, height A = −1, with the same shift of 2.5 seconds. Thus,

    F{s_2(t)} = −e^{−j5πf} sin(πf)/(πf).

Finally, adding the two transforms:

    F{s(t)} = F{s_1(t)} + F{s_2(t)} = e^{−j5πf} [2 sin(3πf) − sin(πf)]/(πf).
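Since scale factors are easy to drop here, a numerical check of the final transform is a useful sketch (plain Python); note that S(0) must equal the pulse area 2·3 − 1 = 5.

```python
import cmath
import math

# P2.22 check: S(f) = e^{-j 5 pi f} [2 sin(3 pi f) - sin(pi f)] / (pi f).
def s(t):
    return (2.0 if 1 < t < 4 else 0.0) + (-1.0 if 2 < t < 3 else 0.0)

def S_numeric(f, n=20000):
    a, b = 1.0, 4.0
    h = (b - a) / n
    return sum(s(a + (i + 0.5) * h)
               * cmath.exp(-2j * math.pi * f * (a + (i + 0.5) * h))
               for i in range(n)) * h

def S_formula(f):
    if f == 0:
        return 5.0 + 0j   # the pulse area
    return cmath.exp(-5j * math.pi * f) * (2 * math.sin(3 * math.pi * f)
                                           - math.sin(math.pi * f)) / (math.pi * f)

err = max(abs(S_numeric(f) - S_formula(f)) for f in (0.15, 0.6, 1.05))
print(err)   # small (the integrand has step discontinuities)
```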

P2.23 Consider the 1st figure:

    S(f) = 1e^{jπ/2}[u(f + 2) − u(f)] + 1e^{−jπ/2}[u(f) − u(f − 2)]
    s(t) = ∫_{−2}^{0} e^{jπ/2} e^{j2πft} df + ∫_{0}^{2} e^{−jπ/2} e^{j2πft} df = [1 − cos(4πt)]/(πt).

The spectrum in the 2nd figure is S(f) = 1e^{−j(π/4)f}[u(f + 2) − u(f − 2)].

    ∴ s(t) = ∫_{−2}^{2} e^{−j(π/4)f} e^{j2πft} df = −2 cos(4πt)/(2πt − π/4).

Remark: The problem is a small illustration that the phase is also important in determining the shape of a signal. An interesting question is: is the ear more sensitive to phase or magnitude distortions in an audio signal? Is the eye more sensitive to phase or magnitude distortions in an image? For this one needs to resort to Matlab and experiment, since we are now in the realm of psychophysics.

P2.24 s_cos(t) = A cos(2πf_m t), where f_m = 1/(2T) ⇒ S_cos(f) = (A/2)[δ(f − f_m) + δ(f + f_m)].
s_rect(t) = u(t + T/2) − u(t − T/2) ⇒ S_rect(f) = T sin(πfT)/(πfT).

    S(f) = S_cos(f) ∗ S_rect(f) = (AT/2) ∫_{−∞}^{∞} [δ(λ − f_m) + δ(λ + f_m)] sin(π(f − λ)T)/(π(f − λ)T) dλ
         = (2AT/π) cos(πfT)/(1 − 4f²T²).
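A numerical sketch of this transform pair in plain Python; A = 1 and T = 2 are arbitrary example values, and f = 1/(2T) is avoided since the closed form has a removable singularity there.

```python
import cmath
import math

# P2.24 check: the half-cosine pulse A cos(pi t / T) on (-T/2, T/2) has
# S(f) = (2 A T / pi) cos(pi f T) / (1 - 4 f^2 T^2).
A, T = 1.0, 2.0

def S_numeric(f, n=20000):
    h = T / n
    return sum(A * math.cos(math.pi * (-T / 2 + (i + 0.5) * h) / T)
               * cmath.exp(-2j * math.pi * f * (-T / 2 + (i + 0.5) * h))
               for i in range(n)) * h

def S_formula(f):
    return (2 * A * T / math.pi) * math.cos(math.pi * f * T) / (1 - 4 * f * f * T * T)

err = max(abs(S_numeric(f) - S_formula(f)) for f in (0.0, 0.1, 0.4, 0.9))
print(err)   # small discretization error
```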

P2.25 A rectangular signal of width W (sec) and amplitude A (volts), i.e., s_rect(t) = A[u(t + W/2) − u(t − W/2)], has Fourier transform AW sin(πfW)/(πfW).

(a) The signal can be decomposed into 2 shifted rectangular signals, each of width T seconds and amplitude ±A volts: one shifted T/2 seconds to the left, the other T/2 seconds to the right. Now F{s(t − τ)} = e^{−j2πfτ}S(f); here τ = +T/2 (or −T/2), and W = T:

    S(f) = AT e^{−j2πf(−T/2)} sin(πfT)/(πfT) − AT e^{−j2πf(T/2)} sin(πfT)/(πfT)
         = 2jAT sin²(πfT)/(πfT) (V/Hz).

(b) The derivative of s(t) results in 3 impulses, i.e.,

    ds(t)/dt = Aδ(t + T) − 2Aδ(t) + Aδ(t − T)
    F{ds(t)/dt} = Ae^{j2πfT} − 2A + Ae^{−j2πfT} = 2A[cos(2πfT) − 1]

    S(f) = F{ds(t)/dt}/(j2πf) = 2A[cos(2πfT) − 1]/(j2πf) = 2jAT sin²(πfT)/(πfT) (V/Hz).

P2.26 (a)

    X(f) = ∫_{−∞}^{∞} x(t)e^{−j2πft} dt
    X*(f) = [∫_{−∞}^{∞} x(t)e^{−j2πft} dt]* = ∫_{−∞}^{∞} x*(t)(e^{−j2πft})* dt = ∫_{−∞}^{∞} x(t)e^{j2πft} dt
          = ∫_{−∞}^{∞} x(t)e^{−j2π(−f)t} dt = X(−f),

where the following were used: x(t) is real, as is dt; the complex conjugate of a sum equals the sum of the complex conjugates (and integration, for engineers, is essentially a summation operation); and the complex conjugate of a product equals the product of the complex conjugates.

(b) Now

    ∫_{−∞}^{∞} x(t)y(t) dt = ∫_{t=−∞}^{∞} dt x(t) ∫_{f=−∞}^{∞} Y(f)e^{j2πft} df
                           = ∫_{f=−∞}^{∞} df Y(f) ∫_{t=−∞}^{∞} x(t)e^{−j2π(−f)t} dt = ∫_{f=−∞}^{∞} X*(f)Y(f) df,

since the inner integral is X(−f) = X*(f). Interchanging the roles of x(t) and y(t),

    ∫_{f=−∞}^{∞} X*(f)Y(f) df = ∫_{f=−∞}^{∞} X(f)Y*(f) df.

If x(t) = y(t), we get

    ∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df,

where the left-hand side has units of (volts)²·sec = joules, which means |X(f)|² has units of joules/Hz,


i.e., |X(f)|² is an energy density spectrum: it tells us how the energy of x(t) is distributed in the frequency domain.

P2.27 (a)

    E_s = ∫_0^∞ e^{−2at} dt = 1/(2a) (joules).

(b)

    E_s = ∫_{−∞}^{∞} |X(f)|² df = ∫_{−∞}^{∞} df/(a² + 4π²f²) = (1/(πa)) tan⁻¹(f/(a/2π)) |_{f=0}^{∞} = 1/(2a) (joules).

(c)

    0.95 (1/(2a)) = 2 ∫_0^W df/(a² + 4π²f²) = (1/(πa)) tan⁻¹(2πW/a)
    ⇒ tan⁻¹(2πW/a) = 0.95π/2
    ⇒ W = (a/(2π)) tan(0.95π/2) = 2.02a (Hz).
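The 95% bandwidth figure of P2.27(c) can be reproduced numerically; a sketch in plain Python with a = 1:

```python
import math

# P2.27(c) check: W = (a/(2 pi)) tan(0.95 pi/2) is about 2.02 a, and the
# energy in |f| <= W should be 95% of E_s = 1/(2a).
a = 1.0
W = (a / (2 * math.pi)) * math.tan(0.95 * math.pi / 2)
print(W)   # roughly 2.02

n = 200000
h = W / n
captured = 2 * sum(h / (a * a + 4 * math.pi ** 2 * ((i + 0.5) * h) ** 2)
                   for i in range(n))
print(captured * 2 * a)   # roughly 0.95 (fraction of total energy)
```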

Note that the required bandwidth is proportional to a = 1/(time constant) ⇒ the smaller the time constant, the larger the bandwidth, and vice versa. Makes intuitive sense, n'est-ce pas?

P2.28 (a) Since s_T(t) = ∫_{−∞}^{∞} S_T(f)e^{j2πft} df, write s_T²(t) as

    s_T²(t) = ∫_{f=−∞}^{∞} S_T(f)e^{j2πft} df · ∫_{λ=−∞}^{∞} S_T(λ)e^{j2πλt} dλ,

so that

    P = lim_{T→∞} (1/T) ∫_{−∞}^{∞} s_T²(t) dt
      = lim_{T→∞} (1/T) ∫_{t=−∞}^{∞} [∫_{f=−∞}^{∞} S_T(f)e^{j2πft} df ∫_{λ=−∞}^{∞} S_T(λ)e^{j2πλt} dλ] dt.

Interchange the order of integration:

    P = lim_{T→∞} (1/T) ∫_{f=−∞}^{∞} df S_T(f) ∫_{λ=−∞}^{∞} dλ S_T(λ) ∫_{t=−∞}^{∞} e^{j2π(λ+f)t} dt
      = lim_{T→∞} (1/T) ∫_{f=−∞}^{∞} df S_T(f) ∫_{λ=−∞}^{∞} dλ δ(λ + f)S_T(λ)
      = lim_{T→∞} (1/T) ∫_{−∞}^{∞} S_T(f)S_T*(f) df = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |S_T(f)|² df.

In the above derivation we made use of the facts that ∫_{t=−∞}^{∞} e^{j2π(λ+f)t} dt = δ(λ + f) and ∫_{λ=−∞}^{∞} δ(λ + f)S_T(λ) dλ = S_T(−f) = S_T*(f) (since s_T(t) is real).

Note that as T → ∞, the integral ∫_{−∞}^{∞} |S_T(f)|² df → ∫_{−∞}^{∞} s_T²(t) dt → ∞. To overcome this difficulty, interchange the integration and limiting operations (mathematicians may worry about this; engineers shall assume the functions exhibit "reasonable" behaviour). Then

    P = ∫_{−∞}^{∞} lim_{T→∞} (|S_T(f)|²/T) df.

(b) From P = lim_{T→∞} (1/T) ∫_{−∞}^{∞} s_T²(t) dt, P has units of volts²·sec/sec = joules/sec = watts ⇒ |S_T(f)|²/T has units of watts/Hz. Call the limit a power spectral density, i.e., it shows how the power of s(t) is distributed in the frequency domain.

P2.29

    F{R(τ)} = ∫_{τ=−∞}^{∞} R(τ)e^{−j2πfτ} dτ
             = ∫_{τ=−∞}^{∞} [lim_{T→∞} (1/T) ∫_{t=−∞}^{∞} s_T(t)s_T(t + τ) dt] e^{−j2πfτ} dτ
             = lim_{T→∞} (1/T) ∫_{t=−∞}^{∞} dt s_T(t) ∫_{τ=−∞}^{∞} s_T(t + τ)e^{−j2πfτ} dτ.

With λ = t + τ, the inner integral is e^{j2πft} ∫_{λ=−∞}^{∞} s_T(λ)e^{−j2πfλ} dλ = e^{j2πft} S_T(f). Hence

    F{R(τ)} = lim_{T→∞} (1/T) S_T(f) ∫_{t=−∞}^{∞} s_T(t)e^{j2πft} dt = lim_{T→∞} (1/T) S_T(f)S_T*(f)

    ∴ F{R(τ)} = lim_{T→∞} |S_T(f)|²/T,

where we used ∫_{t=−∞}^{∞} s_T(t)e^{j2πft} dt = S_T(−f) = S_T*(f).

P2.30 (a)

    R(τ) = lim_{T→∞} (1/T) ∫_{−∞}^{∞} s_T(t)s_T(t + τ) dt = lim_{T→∞} V²(T − τ)/T = V² watts
    S(f) = V²δ(f) watts/Hz; all the power is concentrated at f = 0 (DC only).

(b)

    R(τ) = lim_{T→∞} (1/T) ∫_{−∞}^{∞} s_T(t)s_T(t + τ) dt = lim_{T→∞} V²(T/2 − τ)/T = V²/2 watts
    ⇒ S(f) = (V²/2)δ(f) watts/Hz (half a DC, half the power).

(c)

    R(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} V cos(2πf_c t + θ) V cos[2πf_c(t + τ) + θ] dt
         = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} (V²/2) cos[2πf_c(2t + τ) + 2θ] dt + lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} (V²/2) cos(2πf_c τ) dt,

where the first limit is 0 and the second is (V²/2) cos(2πf_c τ).

    ∴ R(τ) = (V²/2) cos(2πf_c τ) watts and S(f) = (V²/4)[δ(f − f_c) + δ(f + f_c)] watts/Hz.

Figure 2.17: s_T(t) (height V over (−T/2, T/2)) and s_T(t + τ) (height V over (−T/2 − τ, T/2 − τ)), overlapping over an interval of length T − τ.

Figure 2.18: s_T(t) (height V over (0, T/2)) and s_T(t + τ) (height V over (−τ, T/2 − τ)), overlapping over an interval of length T/2 − τ.

The average power of a sinusoid is half at f = −fc . P2.31

V2 2

watts. Half of it is concentrated at f = fc and

uy

Z 1 ∞ R(τ ) = lim sT (t)sT (t + τ )dt T →∞ T −∞ Z ∞ 1 R(τ + Ts ) = lim sT (t) sT (t + τ + Ts ) dt = R(τ ) | {z } T →∞ T −∞ sT (t+τ )

Ng

where Ts is the period of s(t).
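P2.30(c) can be sanity-checked numerically. The sketch below (Python in place of Matlab; V, f_c, τ, T and the sample count are arbitrary choices) approximates R(τ) = (1/T)∫ s_T(t)s_T(t+τ) dt for a finite but large T and compares it with (V²/2)cos(2πf_c τ):

```python
import math

def autocorr(V=2.0, fc=5.0, theta=0.7, tau=0.013, T=200.0, N=400000):
    # Approximate R(tau) = (1/T) * integral of s(t)s(t+tau) over [-T/2, T/2]
    dt = T / N
    acc = 0.0
    for i in range(N):
        t = -T / 2 + i * dt
        acc += math.cos(2*math.pi*fc*t + theta) * math.cos(2*math.pi*fc*(t + tau) + theta)
    return V * V * acc * dt / T

V, fc, tau = 2.0, 5.0, 0.013
est = autocorr(V=V, fc=fc, tau=tau)
theory = (V**2 / 2) * math.cos(2*math.pi*fc*tau)   # P2.30(c)
print(est, theory)
```

The estimate converges to the analytical value as T grows; the phase θ drops out, as the derivation predicts.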

P2.32 (a) Multiplication by t in the time domain results in a differentiation in the frequency domain.
(b)

S(f) = ∫_{−∞}^{∞} s(t)e^{−j2πft} dt
dS(f)/df = ∫_{−∞}^{∞} (−j2πt)s(t)e^{−j2πft} dt = −j2π ∫_{−∞}^{∞} ts(t)e^{−j2πft} dt,

which shows that ts(t) and −(1/(2πj)) dS(f)/df are a Fourier transform pair. Continuing, it is easily shown that tⁿs(t) ←→ (−1/(2πj))ⁿ dⁿS(f)/dfⁿ are a Fourier transform pair.


P2.33 Duality states that if s(t) ←→ S(f), then S(t) ←→ s(−f). Therefore, since

V e^{−a|t|} ←→ 2Va/(a² + 4π²f²),

duality gives

2Va/(a² + 4π²t²) = (2V/a)/(1 + (4π²/a²)t²) ←→ V e^{−a|f|}.

Adjust V, a to get 1/(1+t²): 4π²/a² = 1 ⇒ a = 2π, and 2V/a = 2V/(2π) = 1 ⇒ V = π. Therefore

1/(1+t²) ←→ πe^{−2π|f|}.

It is easy enough to check that 1/(1+t²) = F^{−1}{πe^{−2π|f|}} = ∫_{−∞}^{∞} πe^{−2π|f|} e^{j2πft} df.

P2.34 P2.33 shows that s(t) = 1/(1+t²) ←→ πe^{−2π|f|} = S(f) are a Fourier transform pair. From P2.32 we have that ts(t) = t/(1+t²) ←→ −(1/(j2π)) dS(f)/df are a Fourier transform pair, or

t/(1+t²) ←→ −(1/(2j)) d(e^{−2π|f|})/df.

Now with f < 0 we have d(e^{2πf})/df = 2πe^{2πf}, and with f > 0 we have d(e^{−2πf})/df = −2πe^{−2πf}. Thus,

d(e^{−2π|f|})/df = −2πe^{−2π|f|} sgn(f)
and t/(1+t²) ←→ −jπe^{−2π|f|} sgn(f).
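The pair 1/(1+t²) ←→ πe^{−2π|f|} can be spot-checked by evaluating the inverse-transform integral numerically (a sketch; the truncation limit F and the step count are arbitrary choices):

```python
import math, cmath

def inv_transform(t, F=8.0, N=100000):
    # Midpoint-rule evaluation of the integral over [-F, F] of
    # pi * exp(-2*pi*|f|) * exp(j*2*pi*f*t) df; the tail beyond F is negligible.
    df = 2 * F / N
    acc = 0.0 + 0.0j
    for i in range(N):
        f = -F + (i + 0.5) * df
        acc += math.pi * math.exp(-2*math.pi*abs(f)) * cmath.exp(2j*math.pi*f*t)
    return acc * df

for t in (0.0, 0.5, 2.0):
    val = inv_transform(t)
    print(t, val.real, 1/(1 + t*t))
```

The real part matches 1/(1+t²) at each test point and the imaginary part vanishes, as it must for a real, even time function.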

P2.35 Certainly ∫_{−∞}^{∞} [s₁(t) + λs₂(t)]² dt ≥ 0 for all λ, or

[∫_{−∞}^{∞} s₂²(t)dt] λ² + [2∫_{−∞}^{∞} s₁(t)s₂(t)dt] λ + [∫_{−∞}^{∞} s₁²(t)dt] ≥ 0.

The above is a quadratic in λ with coefficients a = ∫s₂²(t)dt, b = 2∫s₁(t)s₂(t)dt, c = ∫s₁²(t)dt and roots λ₁,₂ = (−b ± √(b² − 4ac))/(2a). For the polynomial to be always ≥ 0 requires the roots to be complex (at the very minimum a double root), i.e., the quadratic should never cross the λ axis; at best it may only touch it. Therefore b² − 4ac ≤ 0, or

4[∫_{−∞}^{∞} s₁(t)s₂(t)dt]² ≤ 4 ∫_{−∞}^{∞} s₁²(t)dt ∫_{−∞}^{∞} s₂²(t)dt
or |∫_{−∞}^{∞} s₁(t)s₂(t)dt| ≤ √(∫_{−∞}^{∞} s₁²(t)dt) √(∫_{−∞}^{∞} s₂²(t)dt) = √(E₁E₂).

Equality holds when s₁(t) = Ks₂(t), i.e., one signal is a scaled version of the other.


P2.36 By Parseval's theorem and P2.35,

|∫_{−∞}^{∞} S₁*(f)S₂(f) df| ≤ √(∫_{−∞}^{∞} |S₁(f)|² df) √(∫_{−∞}^{∞} |S₂(f)|² df).

P2.37 Let s₁(t) = s(t), s₂(t) = s(t+τ). Then

|∫_{−∞}^{∞} s(t)s(t+τ) dt| ≤ √(∫_{−∞}^{∞} s²(t) dt) √(∫_{−∞}^{∞} s²(t+τ) dt),

where the left-hand side is |R_s(τ)| and both integrals under the square roots equal R_s(0). Therefore, |R_s(τ)| ≤ R_s(0).

Remark: The above was done for energy signals but is readily shown to be true for power signals.
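The bound |R_s(τ)| ≤ R_s(0) is easy to confirm on a concrete sampled energy signal (a sketch; the decaying-oscillation waveform is an arbitrary choice):

```python
import math

# Sampled energy signal (arbitrary choice): a decaying oscillation
dt = 0.001
s = [math.exp(-abs(i*dt - 2.0)) * math.cos(10 * i * dt) for i in range(4000)]

def corr(shift):
    # Discrete approximation of the integral of s(t)s(t + shift*dt) dt
    return sum(s[i] * s[i + shift] for i in range(len(s) - shift)) * dt

R0 = corr(0)
peaks = [corr(k) for k in range(0, 1000, 50)]
print(R0, max(abs(p) for p in peaks))
```

No lag value exceeds R_s(0); in the discrete case this follows from exactly the Cauchy-Schwarz argument of P2.35 applied to the sample vectors.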

P2.38

s(0) = ∫_{−∞}^{∞} S(f) df ≤ ∫_{−∞}^{∞} |S(f)| df;   S(0) = ∫_{−∞}^{∞} s(t) dt ≤ ∫_{−∞}^{∞} |s(t)| dt.

Therefore

WT = [∫_{−∞}^{∞} |S(f)| df / (2S(0))] · [∫_{−∞}^{∞} |s(t)| dt / s(0)] ≥ s(0)S(0)/(2S(0)s(0)) = 1/2.

P2.39 (a) s(t) = V e^{−at}u(t):

E_s = ∫_{−∞}^{∞} s²(t) dt = V² ∫_0^∞ e^{−2at} dt = V²/(2a) (joules).

(b) Let V = 1. Then S(f) = 1/(a + j2πf) and |S(f)|² = 1/(a² + 4π²f²), so f_κ is determined from

2 ∫_0^{f_κ} df/(a² + 4π²f²) = (2/a²) ∫_0^{f_κ} df/(1 + 4π²f²/a²) = κ/(2a).

Let f_n = 2πf/a, i.e., change variables. Then

(2/a²)(a/(2π)) ∫_0^{2πf_κ/a} df_n/(1 + f_n²) = κ/(2a)
or (1/(πa)) tan⁻¹(2πf_κ/a) = κ/(2a) ⇒ (2π/a)f_κ = tan(πκ/2).

(c) s(0) = 1 ⇒ T = ∫_0^∞ e^{−at} dt = 1/a. With this T and the f_κ found in (b) we have

Tf_κ = f_κ/a = (1/(2π)) tan(πκ/2).

(d) Matlab plot.
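The closed form f_κ = (a/2π) tan(πκ/2) can be checked numerically (Python in place of the Matlab plot of part (d); a, κ and the integration step are arbitrary choices) by integrating |S(f)|² = 1/(a² + 4π²f²) up to f_κ and comparing the captured energy with κE_s:

```python
import math

def energy_fraction(a, fk, N=200000):
    # 2 * integral over [0, fk] of df/(a^2 + 4*pi^2*f^2), divided by E_s = 1/(2a)
    df = fk / N
    acc = sum(1.0 / (a*a + 4*math.pi**2 * ((i + 0.5)*df)**2) for i in range(N))
    return 2 * acc * df / (1.0 / (2*a))

a = 3.0
for kappa in (0.5, 0.9):
    fk = (a / (2*math.pi)) * math.tan(math.pi * kappa / 2)   # result of part (b)
    print(kappa, energy_fraction(a, fk))
```

Each printed fraction lands on its target κ, confirming the arctangent identity used in (b).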

P2.40 (a)

E_s = ∫_{−∞}^{∞} s²(t) dt = ∫_{−∞}^{∞} e^{−2a|t|} dt = 2 ∫_0^∞ e^{−2at} dt = 1/a (joules), where V = 1.

(b) Using Equation (2.62c), f_κ is determined from

∫_0^{f_κ} 16π²f²/(a² + 4π²f²)² df = κ/(2a) ⇔ ∫_0^{f_κ} f²/(1 + 4π²f²/a²)² df = κa³/(32π²).

Let the integration variable be λ = 2πf/a. Then

∫_{λ=0}^{2πf_κ/a} λ²/(1 + λ²)² dλ = κπ/4.

Change variable again to x = λ² ⇒ dλ = dx/(2√x):

∫_{x=0}^{4π²f_κ²/a²} √x/(2(1 + x)²) dx = κπ/4.

To integrate, use G&R, p. 71, 2.313.5, where z₁ = a + bx, a = b = 1. Let f_n² = 4π²f_κ²/a². The integral becomes

−√x/(2(1 + x)) |_0^{f_n²} + (1/4) ∫_0^{f_n²} dx/((1 + x)√x),

and using 2.211 (same page in G&R), ∫_0^{f_n²} dx/((1 + x)√x) = 2 tan⁻¹√x |_0^{f_n²}. Putting it all together we have

−f_n/(2(1 + f_n²)) + (1/2) tan⁻¹(f_n) = κπ/4   (2.3)

as the equation that defines f_n. Need to solve numerically (e.g., in Matlab) for various values of κ.

(c) s(0) = 1 ⇒ T = ∫_{−∞}^{∞} |e^{−a|t|} sgn(t)| dt = 2 ∫_0^∞ e^{−at} dt = 2/a.
In (b), after solving for f_n, we get f_κ = cf_n, c = a/(2π) ⇒ Tf_κ = (2/a)(a/(2π))f_n = f_n/π (where, to repeat, f_n is the solution of (2.3) for various values of κ).

(d) Matlab work.

P2.41

s(t) ←→ S(f), and write dⁿs(t)/dtⁿ = s₁(t) + Kδ(t) ←→ S₁(f) + K, where S₁(f) → 0 as f → ∞. Now

F{dⁿs(t)/dtⁿ} = (j2πf)ⁿ S(f) ⇒ S(f) = S₁(f)/(j2πf)ⁿ + K/(j2πf)ⁿ.

Since S₁(f) → 0 as f → ∞, the above means that S(f) → K/(j2πf)ⁿ for large f, i.e., the asymptotic behaviour of the spectrum depends on how many times the signal can be differentiated before an impulse (or impulses) appears.

How many times can one differentiate a pure sinusoid (i.e., one that lasts from −∞ to +∞) before an impulse appears? How "concentrated" is the amplitude spectrum of the sinusoid?
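The Matlab root-finding step for (2.3) can be sketched with a simple bisection (an illustrative Python stand-in; the bracket and iteration count are arbitrary choices). The function g(f_n) below, the left side of (2.3) minus the right side, is monotonically increasing (its derivative is f_n²/(1+f_n²)²), so bisection is safe for 0 < κ < 1:

```python
import math

def g(fn, kappa):
    # Left side of Eq. (2.3) minus the right side
    return -fn / (2 * (1 + fn*fn)) + 0.5 * math.atan(fn) - kappa * math.pi / 4

def solve_fn(kappa, lo=0.0, hi=1e6):
    # g(0) = -kappa*pi/4 < 0 and g -> (1-kappa)*pi/4 > 0, so a root is bracketed
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid, kappa) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for kappa in (0.5, 0.9):
    fn = solve_fn(kappa)
    print(kappa, fn, fn / math.pi)   # T*f_kappa = fn/pi from part (c)
```

Larger κ (more energy captured) requires a larger f_n, i.e., a larger time-bandwidth product, as expected.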


P2.42 (a)

S_DSB-SC(f) = ∫_{−∞}^{∞} m(t) [(e^{j2πf_ct} + e^{−j2πf_ct})/2] e^{−j2πft} dt
            = (1/2) ∫_{−∞}^{∞} m(t)e^{−j2π(f−f_c)t} dt + (1/2) ∫_{−∞}^{∞} m(t)e^{−j2π(f+f_c)t} dt
            = M(f − f_c)/2 + M(f + f_c)/2.

[Figure 2.19: magnitude spectrum |S_DSB-SC(f)| (V/Hz), of height A/2 over f_c − W ≤ |f| ≤ f_c + W, and phase spectrum ∠S_DSB-SC(f) (rad), taking values ±θ.]

(b) Note: The magnitude spectrum is a shifted and scaled version of the original spectrum while the phase spectrum is simply a shifted version.
(c) The passband bandwidth is 2W Hz.
How would the above change if the carrier is sin(2πf_ct) instead of cos(2πf_ct)?
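The spectrum shift in (a) can be illustrated with a discrete Fourier transform (a sketch using an arbitrary single-tone message m(t) = cos(2πf_mt); the sample rate and frequencies are illustrative choices). The product m(t)cos(2πf_ct) should show energy only at f_c ± f_m:

```python
import cmath, math

N, fs = 1024, 1024.0          # N samples at fs Hz -> 1 Hz bin spacing
fm, fc = 20.0, 200.0          # message tone and carrier (arbitrary)

x = [math.cos(2*math.pi*fm*n/fs) * math.cos(2*math.pi*fc*n/fs) for n in range(N)]

def dft_mag(x, k):
    # Normalized DFT magnitude at bin k
    return abs(sum(x[n] * cmath.exp(-2j*math.pi*k*n/N) for n in range(N))) / N

# Sidebands appear at fc - fm and fc + fm; nothing remains at fm itself
print(dft_mag(x, 180), dft_mag(x, 220), dft_mag(x, 20))
```

Each sideband bin carries one quarter of the unit amplitude (the factor-of-two split between positive/negative frequencies times the 1/2 in the product formula), matching M(f ∓ f_c)/2.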

P2.43 The difference between S_DSB-SC(f) of P2.42 and S_AM(f) is a spectral component at ±f_c due to the carrier (or, if you wish, a DC component added to m(t)). It looks like Fig. 2.20.

[Figure 2.20: S_AM(f) (V/Hz): the DSB-SC magnitude spectrum of height A/2 plus carrier impulses of strength V_C/2 at f = ±f_c; phase spectrum ∠S_AM(f) (rad) takes values ±θ.]

Note: We invariably assume that the message, m(t), has zero DC. DC is useful for powering stuff but contains no information of interest to be transmitted. The reason to insert a DC is to simplify the demodulator, as we see in the next 2 problems.

P2.44 By now we are beginning to appreciate that multiplying a signal in the time domain by a pure sinusoid simply shifts the spectrum and scales it. It is the shift that is important; the scaling is somewhat arbitrary and practically would be eventually controlled by a "volume" control.

(a) For DSB-SC: S₁(f) = [S_DSB-SC(f − f_c) + S_DSB-SC(f + f_c)]/2 and looks like Fig. 2.21 (note that the impulses in the figure only occur for AM, not for DSB-SC).

[Figure 2.21: S₁(f): a baseband component of height A/2 over |f| ≤ W (the output of the lowpass filter) and components of height A/4 around ±2f_c (filtered out by the lowpass filter); for AM there are additional impulses of strength V_C/2 at f = 0 and V_C/4 at f = ±2f_c.]

(b) The AM spectrum is the same as the DSB-SC except for the DC component that results in the impulses at 0, ±2f_c. Normally the demodulated signal, i.e., s₁(t), would not only be passed through a lowpass filter but also through a (well designed) highpass filter to eliminate the DC component.

(c) Of course, as usual, multiplying by a sinusoid shifts the spectrum, scales it, etc. But here it is the scaling that is important; perhaps a more appropriate terminology would be scaling and phasing. Let us look at the problem in 2 ways (not completely distinct). First in the time domain:

s₁(t) = s(t) sin(2πf_ct) = m(t) cos(2πf_ct) sin(2πf_ct).

Now sin x cos y = [sin(x+y) + sin(x−y)]/2 ⇒ s₁(t) = (1/2)m(t) sin[2π(2f_c)t],

which shows that the spectrum of s₁(t) is that of M(f) shifted by ±2f_c ⇒ the output of the lowpass filter is ZERO.

In the frequency domain:

s₁(t) = s(t) [(e^{j2πf_ct} − e^{−j2πf_ct})/(2j)] ⇒ S₁(f) = (1/(2j))[S(f − f_c) − S(f + f_c)].

Considering 1/(2j) to be just a scaling factor, albeit a complex one, S₁(f) is S(f) shifted up and down (more appropriately, to the left and right) by f_c. But note the minus sign in front of S(f + f_c). This results in the spectrum around f = 0 canceling out, which we know already from the time domain analysis.

Finally, the 1/(2j) not only scales but adds a phase of ±π/2. Putting this all together, the spectrum S₁(f) looks like Fig. 2.22.

[Figure 2.22: |S₁(f)| of height A/4 around ±2f_c; the phase ∠S₁(f) takes the values ±θ ± π/2.]

To generalize, one should consider s₁(t) = s(t) cos(2πf_ct + θ), where θ would represent the phase difference between the oscillator at the transmitter (modulator) and the one at the receiver (demodulator), and see what happens to the output of the demodulator.

The last chapter in the text considers a circuit (phase-locked loop) which estimates the incoming phase of the received signal. The next problem looks at a demodulation technique that does not require knowledge of the phase, hence it is called noncoherent. But alas, it only works for AM.
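Carrying out the suggested generalization: s₁(t) = m(t)cos(2πf_ct)cos(2πf_ct + θ) = (m(t)/2)[cos θ + cos(4πf_ct + θ)], so the lowpass output is (m(t)/2)cos θ, which is zero at θ = π/2, in agreement with (c). A numerical sketch (arbitrary tone message; the correlation against the message tone acts as a crude lowpass filter):

```python
import math

fs, N = 8000.0, 8000
fc, fm = 1000.0, 10.0        # carrier and message frequencies (arbitrary)

def demod_gain(theta):
    # Multiply m(t)cos(2*pi*fc*t) by cos(2*pi*fc*t + theta), then extract
    # the baseband m(t)/2 term by correlating against the message tone.
    acc = 0.0
    for n in range(N):
        t = n / fs
        s1 = math.cos(2*math.pi*fm*t) * math.cos(2*math.pi*fc*t) \
             * math.cos(2*math.pi*fc*t + theta)
        acc += s1 * math.cos(2*math.pi*fm*t)
    return acc / N * 4.0   # normalized so that the gain equals cos(theta)

for theta in (0.0, math.pi/3, math.pi/2):
    print(theta, demod_gain(theta))
```

The recovered amplitude falls off as cos θ, which is exactly why a phase-locked loop (or a noncoherent detector, next problem) is needed in practice.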

P2.45 (a) s₁(t) = |[V_c + m(t)] cos 2πf_ct| = |V_c + m(t)| |cos 2πf_ct|. But V_c + m(t) ≥ 0. Therefore |V_c + m(t)| = V_c + m(t). And |cos 2πf_ct| is a fullwave rectified sinusoid of period 1/(2f_c). It therefore has a Fourier series Σ_{k=−∞}^{∞} D_k e^{j2πkf_rt}, where, most importantly, f_r = 2f_c and D₀ (the DC value) ≠ 0.

The spectrum of s₁(t) = [V_c + m(t)] |cos 2πf_ct| = Σ_{k=−∞}^{∞} D_k [V_c + m(t)] e^{j2πkf_rt} then consists of shifted versions of the spectrum of [V_c + m(t)], shifted by kf_r = 2kf_c Hz, with the kth shift weighted by D_k.

The output of the lowpass filter is s_out = D₀[V_c + m(t)] since the spectral components at ±2f_c, ±4f_c, etc., are filtered out. The DC term D₀V_c is easily filtered out by a highpass filter whose output would be D₀m(t) (assuming m(t) does not have significant energy around f = 0, typically the case).

(b) s₁(t) now is s₁(t) = |m(t)| |cos 2πf_ct| = Σ_{k=−∞}^{∞} D_k |m(t)| e^{j2πkf_rt}. Therefore the spectrum of |m(t)| will lie in a larger bandwidth, theoretically infinite but practically in some bandwidth, say W_c = κW (κ > 1 but, say, less than 3). Then by adjusting the bandwidth of the lowpass filter to W_c, the output would be D₀|m(t)|, a distorted version of m(t). Of course, V_c = 0 is an extreme condition, but distortion shall exist as long as V_c < max|m(t)| since then |V_c + m(t)| ≠ V_c + m(t).
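The solution hinges on D₀ ≠ 0; for the full-wave rectified cosine this DC coefficient equals 2/π. A numerical sketch confirming this (f_c and the step count are arbitrary choices):

```python
import math

def D0(fc=1.0, N=200000):
    # Average of |cos(2*pi*fc*t)| over one period T_r = 1/(2*fc) of the
    # rectified wave, computed by the midpoint rule; exact value is 2/pi.
    Tr = 1.0 / (2 * fc)
    dt = Tr / N
    return sum(abs(math.cos(2*math.pi*fc*(i + 0.5)*dt)) for i in range(N)) * dt / Tr

print(D0(), 2/math.pi)
```

So the envelope detector's lowpass output is about 0.637[V_c + m(t)], a nonzero replica of the message plus DC.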

P2.46 (a) The double-sideband suppressed-carrier spectrum looks like:

[Figure 2.23: S_DSB-SC(f) (V/Hz), magnitude A/2 over f_c − W ≤ |f| ≤ f_c + W, phase ±θ.]

And the function [1 + sgn(f − f_c)]/2 + [1 − sgn(f + f_c)]/2 plots as:

[Figure 2.24: a frequency function equal to 1 for |f| ≥ f_c and 0 for |f| < f_c.]

Multiplying the 2 frequency functions together results in:

[Figure 2.25: S_USSB(f) (V/Hz): magnitude A/2 over f_c ≤ |f| ≤ f_c + W; phase ∠S_USSB(f) (rad) takes values ±θ.]

Note that the magnitude function is even and the phase function is odd. This tells us that the time-domain signal is real.

(b) s(t) = s_USSB(t)·2cos 2πf_ct = s_USSB(t)[e^{j2πf_ct} + e^{−j2πf_ct}]. Therefore, S(f) = S_USSB(f − f_c) + S_USSB(f + f_c), i.e., the upper-sideband spectrum shifted by ±f_c Hz.

(c) Since the spectrum is exactly that of m(t) and the Fourier transform is unique.

P2.47 (a) S_DSB-SC(f) = [M(f − f_c) + M(f + f_c)]/2 and some very simple algebra gives (P2.11).

(b) m(t) cos(2πf_ct).

(c) M(f) sgn(f) is a product in the frequency domain, which means that in the time domain we have a convolution, i.e., m(t) ∗ h(t) where h(t) = F^{−1}{sgn(f)}. A shift of the spectrum by f_c is accomplished by multiplying the time function by e^{j2πf_ct}. Therefore,

M(f − f_c) sgn(f − f_c) ←→ [m(t) ∗ F^{−1}{sgn(f)}] e^{j2πf_ct}.

(d) The shift is now −f_c Hz, hence M(f + f_c) sgn(f + f_c) ←→ [m(t) ∗ F^{−1}{sgn(f)}] e^{−j2πf_ct}.

(e) Use the fact that the inverse transform is a linear operation, i.e., superposition holds.

[Figure 2.26: the demodulation picture of P2.46(b): shifting by ±f_c produces a baseband component of height A/2 over |f| ≤ W, the output of the lowpass filter = m(t) (actually m(t)/2), and components of height A/2 around ±2f_c.]

(f) No. The frequency function sgn(f) has a magnitude spectrum |sgn(f)| = 1, an even function, but its phase spectrum is ∠sgn(f) = 0 for f ≥ 0 and π for f < 0, which is not an odd function.

(g) Yes. The magnitude spectrum is |j sgn(f)| = 1, an even function, and the phase spectrum is ∠j sgn(f) = π/2 for f ≥ 0 and −π/2 for f < 0, an odd function. Recall that a real time function always has an even magnitude spectrum and an odd phase spectrum.

(h) jh(t) ←→ j sgn(f).

(i) Note that [e^{j2πf_ct} − e^{−j2πf_ct}]/(2j) = sin(2πf_ct). From (b) and (h), we get the block diagram of Fig. P2.42.

P2.48 In the frequency domain the lower single-sideband signal looks like:

[Figure 2.27: S_LSSB(f) (V/Hz), magnitude A/2 over f_c − W ≤ |f| ≤ f_c, phase ±θ; the piece around −f_c is [M(f + f_c)/2]·[1 + sgn(f + f_c)]/2 and the piece around +f_c is [M(f − f_c)/2]·[1 − sgn(f − f_c)]/2.]

Therefore,

S_LSSB(f) = [M(f + f_c) + M(f − f_c)]/4 + M(f + f_c) sgn(f + f_c)/4 − M(f − f_c) sgn(f − f_c)/4.

In the time domain:

(m(t)/2) cos(2πf_ct) ←→ [M(f + f_c) + M(f − f_c)]/4.

Let h(t) = F^{−1}{j sgn(f)}. Recognize

[(m(t)/2) ∗ h(t)] e^{−j2πf_ct}/(2j) ←→ [M(f + f_c)/2] · [j sgn(f + f_c)]/(2j)
[(m(t)/2) ∗ h(t)] e^{j2πf_ct}/(2j) ←→ [M(f − f_c)/2] · [j sgn(f − f_c)]/(2j)

Therefore,

s_LSSB(t) = (m(t)/2) cos(2πf_ct) + [(m(t)/2) ∗ h(t)] [e^{−j2πf_ct} − e^{j2πf_ct}]/(2j)
          = (m(t)/2) cos(2πf_ct) − [(m(t)/2) ∗ h(t)] sin(2πf_ct).

The above result means that the summer in Fig. 2.42 would now be a "subtractor".

P2.49 Nothing, except that there would be a carrier at ±f_c.

P2.50

h_a(t) = j ∫_{−∞}^{∞} e^{−a|f|} sgn(f) e^{j2πft} df = j [−∫_{−∞}^{0} e^{(a+j2πt)f} df + ∫_{0}^{∞} e^{(−a+j2πt)f} df]
       = j [−1/(a + j2πt) + 1/(a − j2πt)] = j · j4πt/(a² + 4π²t²) = −4πt/(a² + 4π²t²) → −1/(πt) as a → 0.

P2.51

∫_{−∞}^{∞} s(t)ŝ(t) dt = ∫_{−∞}^{∞} S(f)Ŝ*(f) df   (Parseval)
                       = ∫_{−∞}^{∞} S(f)S*(f)(−j sgn(f)) df = −j ∫_{−∞}^{∞} |S(f)|² sgn(f) df.

|S(f)|² is even and sgn(f) is odd, so |S(f)|² sgn(f) is odd ∴ ∫_{−∞}^{∞} s(t)ŝ(t) dt = 0, i.e., they are orthogonal.

[Figure 2.28: s(t) ↔ S(f) passed through the filter j sgn(f) (Hilbert transform) gives ŝ(t) ↔ Ŝ(f) = S(f) j sgn(f).]
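The orthogonality in P2.51 can be demonstrated with a discrete version of Fig. 2.28: take the DFT, multiply by j·sgn(f), invert, and form the inner product (a sketch; the two-tone test signal is an arbitrary choice):

```python
import cmath, math

N = 256
s = [math.cos(2*math.pi*3*n/N) + 0.5*math.sin(2*math.pi*7*n/N) for n in range(N)]

# Forward DFT
S = [sum(s[n] * cmath.exp(-2j*math.pi*k*n/N) for n in range(N)) for k in range(N)]

# Multiply by j*sgn(f): bins 1..N/2-1 are positive frequencies,
# bins N/2+1..N-1 negative; bins 0 and N/2 carry sgn = 0.
Sh = [0j] * N
for k in range(1, N//2):
    Sh[k] = 1j * S[k]
    Sh[N-k] = -1j * S[N-k]

# Inverse DFT gives the (real) Hilbert transform of s
sh = [sum(Sh[k] * cmath.exp(2j*math.pi*k*n/N) for k in range(N)) / N for n in range(N)]

inner = sum(s[n] * sh[n].real for n in range(N))
print(inner)
```

The inner product is zero to machine precision, and (per P2.52) the two sequences also carry identical energy since |j sgn(f)| = 1 at every occupied bin.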


P2.52 We have established that R(τ) ←→ |S(f)|² are a Fourier transform pair. Ŝ(f) = j sgn(f)S(f), therefore |Ŝ(f)|² = |j sgn(f)|²|S(f)|² = |S(f)|², i.e., the autocorrelation functions are the same.

P2.53 In general, M̂(f) = j sgn(f)M(f). Here

M̂(f) = j sgn(f) · V[δ(f − f_c) + δ(f + f_c)]/2   (the term multiplying j sgn(f) is F{V cos(2πf_ct)})
      = jV[δ(f − f_c) sgn(f) + δ(f + f_c) sgn(f)]/2
      = −V [δ(f − f_c) − δ(f + f_c)]/(2j),

and [δ(f − f_c) − δ(f + f_c)]/(2j) ←→ sin(2πf_ct)
⇒ m̂(t) = −V sin(2πf_ct).

P2.54 (a)

S(f) = S_PB(f) − jŜ_PB(f) = S_PB(f) − j(j sgn(f))S_PB(f) = S_PB(f)(1 + sgn(f)) = { 2S_PB(f), f ≥ 0; 0, f < 0 }.

P3.5 (a) For x > 10 one has F_x(x) = B; since 1 = P(x ≤ ∞) = F_x(∞) = B, it follows that B = 1. Moreover, F_x(x) reaches (or should reach) a value of 1 at x = 10. Therefore A × 10³ = 1 ⇒ A = 10⁻³. So the cdf F_x(x) is as follows:

F_x(x) = { 0, x ≤ 0; 10⁻³x³, 0 ≤ x ≤ 10; 1, x > 10 }   (3.16)

(b) The pdf of x is simply:

f_x(x) = dF_x(x)/dx = { 3 × 10⁻³x², 0 ≤ x ≤ 10; 0, elsewhere }   (3.17)

Both the cdf and pdf are shown in Fig. 3.4.

[Figure 3.4: Plots of F_x(x), rising to 1.0 at x = 10, and f_x(x), rising to 0.3 at x = 10.]

(c) The mean value of x is:

m_x = E{x} = ∫_{−∞}^{+∞} x f_x(x) dx = ∫_0^{10} 3 × 10⁻³ x³ dx = 30/4 = 7.5.

The variance of x is:

σ_x² = var(x) = E{(x − m_x)²} = ∫_{−∞}^{+∞} (x − m_x)² f_x(x) dx = ∫_0^{10} (x − 7.5)² f_x(x) dx = 3.75.

(d)

P(3 ≤ x ≤ 7) = F_x(7) − F_x(3) = 0.316.   (3.18)

uy

P3.6 To find fy (y), we first find the cdf of y, i.e., Fy (y). Consider the following cases for y (see Fig. 3.5): (i) For y < −b. Note that y can only receive values in the range −b ≤ y ≤ b. Therefore the event y ≤ y is an impossible event and for this range of y one has Fy (y) = P (y ≤ y) = 0.

Ng

(ii) For y = −b. Then

Fy (y) = Fy (−b) = P (y ≤ −b) = P (y < −b) + P (y = −b) = P (y = −b) = P (x ≤ −b) = Fx (−b) Z −b Z −b 1 2 2 √ = fx (x)d(x) = e−x /(2σ ) dx 2πσ 2 −∞ −∞

(3.19)

(iii) For y ≥ b. In contrast to case (i), for the range of y in this case the event y ≤ y is a certain event and therefore Fy (y) = P (y ≤ y) = 1. (iv) For −b < y < b. For this range of y, the random variable y and x are the same. Thus Fy (y) = P (y ≤ y) = P (x ≤ y) = Fx (y). This also implies that fy (y) = fx (y) = 2 2 √ 1 e−y /(2σ ) for y in this range. 2 2πσ

Solutions

Page 3–6

Nguyen & Shwedyk

A First Course in Digital Communications

b -b 0

x

b

we

-b

dy k

y = g (x )

f x ( x)

1 2πσ 2

Fx ( x)

1

Sh

12

x

0

fy ( y)

1

x 0

Fy ( y )

0

uy

-b

In summary,

1 Fx (b)

( Fx (−b))

en

( Fx (−b))

&

2πσ 2

b

12

y

Fx (−b)

-b

0

y

Figure 3.5

 y < −b  0, Fx (y), −b ≤ y < b Fy (y) =  1, y≥b

Now, differentiating Fy (y) gives fy (y), the pdf of y:   0,     Fx (−b)δ(y + b), dFy (y)  2 2 fx (y) = √ 1 2 e−y /(2σ ) , fy (y) = = 2πσ  dy   [1 − Fx (b)]δ(y − b) = Fx (−b)δ(y − b),    0,

Ng

b

(3.20)

y < −b y = −b −b < y < b y=b y>b

(3.21)

Note that the two delta functions are due to the discontinuities of Fy (y) at y = −b and y = b. The strengths of these two delta functions are equal to the value of the “jump” of the cdf at the discontinuities, which is Fx (−b). Plots of Fx (x), fx (x), Fy (y) and fy (y) are provided in Fig. 3.5. Solutions

Page 3–7

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

P3.7 It is straightforward enough to determine the mean mx and variance σx2 of a random variable x from the definitions: Z ∞ mx = E{x} = xfx (x)dx (3.22) −∞ mx )2 }

σx2

= E{(x − = E{x2 } − m2x Z ∞ = x2 fx (x)dx − m2x . −∞

(3.23)

we

The only challenge perhaps is doing the integrations. However using the fact that the mean is the center of gravity (or centroid) of the probability “mass” density and that the variance is the variation around the mean can simplify (even eliminate sometimes) the necessity for integrating. (a) The pdf looks as in Fig. 3.6(a). Obviously the centroid is mx = 12 (a + b).

Z =

b−a 2

− b−a 2

2 1 dx = x (b − a) (b − a)

Z

2

b−a 2

x2 dx =

0

&

σx2

Sh

To find the variance, shift fx (x) along the x-axis so that it lies symmetrically about x = 0, i.e., work with the pdf in Fig. 3.6(b). Note that the shift does not affect the variance. Then 1 (b − a)2 12

For special case of a = −b then mx = 0 and σx2 = b2 /3. f x ( x)

f x ( x) 1 b−a

en

1 b−a

0

a

b

x

uy

(a)

a

0

b

x

(b)

Figure 3.6: Uniform densities.

Ng

(b) For the Rayleigh density, we have no choice but to do the integrations. The following general results from G&R, p. 337 are useful: r Z ∞ (2n − 1)!! π 2 x2n e−px dx = , for p > 0 (Eqn. 3.461-2) 2(2p)n p 0 Z ∞ n! 2 , for p > 0 (Eqn. 3.461-3) x2n+1 e−px dx = 2pn+1 0 These equations allow us to compute all the moments of a Rayleigh random variable, if we so desired. But here we are interested in only the 1st and 2nd moments. The first moment is: s r Z ∞ x2 1 1!! 1 π π 2 − 2σ2 mx = 2 σ ≈ 1.25σ. x e dx = 2 ¡ 1 ¢ = 1 σ 0 σ 2 2 2σ2 2 2σ 2

Solutions

Page 3–8

Nguyen & Shwedyk

A First Course in Digital Communications

The second moment is: Z



2

x 3 − 2σ2

x e 0

1 1! dx = 2 ¡ ¢2 σ 2 12 2σ

Therefore the variance is σx2 = E{x2 } − m2x = 2σ 2 −

s

dy k

1 E{x } = 2 σ 2

π

1 2σ 2

= 2σ 2 .

π 2 ³ π´ 2 σ = 2− σ ≈ 0.429σ 2 . 2 2

we

Remark: The Rayleigh random variable is important in the study of fading channels (Chapter 10). As shall be seen there, the parameter σ 2 is also a variance (as the notation suggests), but of course not of the Rayleigh random variable but of the 2 underlying Gaussian random variables that make up the Rayleigh random variable. (c) For the Laplacian density:

Sh

c fx (x) = e−c|x| (3.24) 2 By inspection fx (x) is an even function (symmetric about x = 0), hence mx = 0. The variance of x is Z Z ∞ c ∞ 2 −c|x| 2 σx = x e =c x2 e−cx dx 2 −∞ 0

&

Integrate by parts to obtain: Z 2 1 2 x2 e−cx dx = − x2 e−cx − 2 xe−cx − 3 e−cx c c c Thus

Z



(3.25)

2 2 ⇒ σx2 = 2 (3.26) 3 c c 0 P (d) y is discrete random variable, given by y = ni=1 xi where xi are statistically independent and identically distributed random variables with the pmf: P (xi = 1) = p and P (xi = 0) = 1 − p.

uy

en

x2 e−cx dx =

Ng

First, the mean and variance of each random variable xi can be found as: mxi E{x2i } σx2 i

= E(xi ) = p × 1 + (1 − p) × 0 = p 2

(3.27)

2

= p × 1 + (1 − p) × 0 = p = E{x2i } − m2xi = p − p2 = p(1 − p)

Using linear property of the expectation operation E{·}, one has: ) ( n n X X xi = E{xi } = np E{y} = E i=1

(3.28)

(3.29)

i=1

To compute the variance of y, first compute E{y2 }. Write y2 as 2

Y =

à n X i=1

Solutions

!2 xi

=

n X n X i=1 j=1

xi xj =

n X i=1

x2i +

n n X X

xi xj

i=1 j=1,j6=i

Page 3–9

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

Since xi , i = 1, 2, . . . , n, are statistically independent, then ½ 2 p i 6= j E{xi xj } = p i=j

(3.30)

Using (3.30) and the linear property of E{·}, E{y2 } can be computed as:   ( n ) n n X  X X E{y2 } = E x2i + E xi xj = np + n(n − 1)p2   i=1

i=1 j=1,j6=i

2

2

= E{y } − [E{y}] = np + n(n − 1)p2 − (np)2 = np(1 − p)

we



σy2

(3.31) (3.32)

P3.8 (a) P (A∪B) = area A+area B−common area (which is added twice). But area A = P (A), area B = P (B) and common area = P (A ∩ B). Hence P (A ∪ B) = P (A) + P (B) − P (A ∩ B).

Sh

(b)

P (A ∪ B ∪ C) = P (A) + P (B ∪ C) − P (A ∩ (B ∪ C), where we consider B ∪ C as a single event. But A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C). Therefore P (A∪B∪C) = P (A)+P (B)+P (C)−P (B∩C)−P (A∩B)−P (A∩C)+P (A ∩ A ∩ C}). | ∩ B {z

&

A∩B∩C

P3.9 (a) Given B then probability of A occurring is that it “lies” in the shaded area. Also B is the new sample space. Therefore

en

P (A|B) =

P (A ∩ B) . P (B)

Note that P (A) can be written as P (A ∩ Ω) . P (Ω)

uy

P (A) =

(b) Statistical independence can be expressed in 2 ways.

Ng

(i) P (A|B) = P (A). In this case the interpretation is that the ratio of the common area (P (A ∩ B)) to the new sample space area (P (B)) is the same as the ratio of the original area (P (A)) to the original sample space (P (Ω)). (ii) P (A ∩ B) = P (A)P (B). Here the common area P (A ∩ B) is equal to the product of the areas P (A) and P (B).

(c) No. One could say they are totally dependent. Knowledge of one occurring tells you a lot about the other occurring, i.e., P (A|B) = 0 and vice versa.

P3.10 Need to determine if the 3 axioms are satisfied. Given B, all possible events now lie in the shaded area (Fig. 3.8).
Axiom 1: P(any event) is certainly ≥ 0 and also ≤ 1.
Axiom 2: P(B|B) = 1.
Axiom 3: Holds. It is inherited from Ω.

[Figure 3.7: Venn diagram of sample space Ω with events A, B and their intersection A∩B.]

[Figure 3.8: Venn diagram of sample space Ω with B as the conditioning event and A∩B shaded.]

P3.11 (a) P(A∩B∩C) = P(A)P(B)P(C), P(A∩B) = P(A)P(B), P(A∩C) = P(A)P(C), P(B∩C) = P(B)P(C).

(b) (i) All conditions of (a) satisfied, therefore statistically independent.
(ii) Note that P(A∩B∩C) = 0 ≠ P(A)P(B)P(C), therefore statistically dependent.
(iii) P(B∩C) = 1/8 ≠ P(B)P(C) = (1/2)(1/2) = 1/4, therefore statistically dependent.

(c) Yes. Consider the Venn diagram in Fig. 3.9. Let a² = b, where a = P(A) = P(B) = P(C). Then P(A∩B∩C) = 0 while P(A∩B) = P(A∩C) = P(B∩C) = b = a² = P(A)P(B) = P(A)P(C) = P(B)P(C).

(d) The number of conditions is Σ_{k=2}^{n} C(n, k) = Σ_{k=2}^{n} n!/(k!(n−k)!).

P3.12 First calculate P(A), P(B) and P(C) as follows:

P(A) = 0.08 + 0.03 + 0.02 + 0.07 = 0.20
P(B) = 0.08 + 0.07 + 0.03 + 0.02 = 0.20
P(C) = 0.245 + 0.07 + 0.07 + 0.02 = 0.405

Now P(A∩B∩C) = 0.02 ≠ P(A)P(B)P(C) = (0.20)(0.20)(0.405) = 0.0162. Therefore these events are statistically dependent.

[Figure 3.9: Venn diagram for P3.11(c) with three events A, B, C each of probability a, pairwise overlaps of probability b, and empty triple intersection.]

Next,

P(A∩B|C) = P(A∩B∩C)/P(C) = 0.02/0.405 = 4/81
P(A|C) = P(A∩C)/P(C) = 0.09/0.405 = 2/9
P(B|C) = P(B∩C)/P(C) = 0.09/0.405 = 2/9
P(A|C)P(B|C) = (2/9)² = 4/81 = P(A∩B|C).

Therefore the events A and B are conditionally independent.

Problems 3.3, 3.4, 3.13, 3.14 and 3.15 are important special cases of discrete-input discrete-output channel models. The general model is described in Fig. 3.10.

[Figure 3.10: a discrete channel with inputs x₁, …, x_m (priors P₁, …, P_m) and outputs y₁, …, y_n, connected by transition probabilities P_ij = P[y = y_j | x = x_i]; typically n ≥ m, Σ_{i=1}^{m} P_i = 1 with 0 ≤ P_i ≤ 1, and typically most of the P_ij would be equal to 0.]

The quantities of interest are usually P[y = y_j], P[x = x_i | y = y_j] and also E{xy}. The general relationships for these quantities are:

P[y = y_j] = Σ_{i=1}^{m} P[(y = y_j) ∩ (x = x_i)] = Σ_{i=1}^{m} P[y = y_j | x = x_i] P[x = x_i] = Σ_{i=1}^{m} P_ij P_i

P[x = x_i | y = y_j] = P[(x = x_i) ∩ (y = y_j)]/P[y = y_j] = P[y = y_j | x = x_i] P[x = x_i]/P[y = y_j] = P_ij P_i / Σ_{k=1}^{m} P_kj P_k

E{xy} = Σ_{i=1}^{m} Σ_{j=1}^{n} x_i y_j P[(x = x_i) ∩ (y = y_j)] = Σ_{i=1}^{m} Σ_{j=1}^{n} x_i y_j P[y = y_j | x = x_i] P[x = x_i] = Σ_{i=1}^{m} Σ_{j=1}^{n} x_i y_j P_ij P_i

P3.13 Consider the "toy" channel model shown in Fig. 3.25 in the textbook.

(a) From the figure, we have P(x = 1) = 0.5, P(x = −1) = 0.5. All the transition probabilities of y given x are:

P(y = 1|x = 1) = 0.4, P(y = 0|x = 1) = 0.2, P(y = −1|x = 1) = 0.4;
P(y = 1|x = −1) = 0.4, P(y = 0|x = −1) = 0.2, P(y = −1|x = −1) = 0.4.   (3.33)

Then

P(y = 1) = P(y = 1|x = 1)P(x = 1) + P(y = 1|x = −1)P(x = −1) = 0.4 × 0.5 + 0.4 × 0.5 = 0.4.   (3.34)

Observe that P(y = 1) = P(y = 1|x = 1) = P(y = 1|x = −1) = 0.4. Similarly, one can show that P(y = 0) = P(y = 0|x = 1) = P(y = 0|x = −1), and P(y = −1) = P(y = −1|x = 1) = P(y = −1|x = −1). Since the distribution of the output y does not depend on which value of the input x was transmitted, the two random variables x and y are statistically independent.

(b) We have

E{x} = x₁ × P(x = x₁) + x₂ × P(x = x₂) = 1 × 0.5 + (−1) × 0.5 = 0,   (3.35)

and

E{y} = y₁ × P(y = y₁) + y₂ × P(y = y₂) + y₃ × P(y = y₃) = 1 × 0.4 + 0 × 0.2 + (−1) × 0.4 = 0.   (3.36)

Since x and y are statistically independent, one can compute E{xy} as

E{xy} = E{x}E{y} = 0.   (3.37)

P3.14 Consider P(y = 1) = (1/2)(0.3) + (1/2)(0.4) = 0.35. But P(y = 1|x = 1) = 0.3 ≠ P[y = 1] ⇒ statistical dependence. The correlation is

E{xy} = Σ_{i=1}^{2} Σ_{j=1}^{3} x_i y_j P_ij P_i = (1/2) Σ_{i=1}^{2} Σ_{j=1}^{3} x_i y_j P_ij = (1/2) [Σ_{j=1}^{3} y_j P_1j − Σ_{j=1}^{3} y_j P_2j]
      = (1/2)[1(0.3) + 0(0.3) − 1(0.4)] − (1/2)[1(0.4) + 0(0.1) − 1(0.5)] = 0.   (3.38)

Therefore the random variables are uncorrelated.

Notes:

1) Uncorrelatedness does not mean (or imply) statistical independence. But statistical independence always implies uncorrelatedness.

2) One extremely important case where uncorrelatedness implies statistical independence is the case of jointly Gaussian random variables. Communication theory depends very crucially on this fact, as shall be seen in later chapters.

3) Statistical dependence can increase or decrease the conditional probability of an event. Two extreme cases are:
(a) Mutually exclusive events A, B: then P(A|B) = 0.
(b) An event B contained in event A, i.e., B ⊂ A: then P(A|B) = 1, where P(A), P(B) ≠ 0.

P3.15 Definitely statistically dependent since

P(y = 1) = (1/2)(1 − ε) ≈ 1/2 ≠ P(y = 1|x = 1) = 1 − p − ε ≈ 1.   (3.39)

Also correlated because

E{xy} = (1/2) [Σ_{j=1}^{3} y_j P_1j − Σ_{j=1}^{3} y_j P_2j] = (1/2){(1 − p − ε − p) − [p − (1 − p − ε)]} = 1 − 2p − ε > 0, for p, ε ≪ 1.



Therefore:

(a) −∞ < y < −1: x_r = 1 + √(1−y) ⇒ f_y(y) = (1/2)e^{−|1+√(1−y)|} / |2(−√(1−y))|.
(b) y = −1: f_y(y) = (1/2)e^{−1}δ(y + 1).
(d) 0 < y < 1: f_y(y) = (1/2)e^{−|1−√(1−y)|} / |2(−√(1−y))| + (1/2)e^{−|1+√(1−y)|} / |2(−√(1−y))|.
(e) Finally, for y > 1 there are no roots ⇒ f_y(y) = 0.

A plot of the pdf is shown in Fig. 3.13.

[Figure 3.13: plot of f_y(y) for −3 ≤ y ≤ 3, with the impulse (1/2)e^{−1}δ(y+1) marked at y = −1.]

P3.26 (a) y = g(θ) = cos(α + θ). Consider the graph of the nonlinearity g(θ) in Fig. 3.14. The roots are θ₁ = cos⁻¹y − α and θ₂ = π − α + d = 2π − 2α − θ₁ = 2π − α − cos⁻¹y, where d = π − α − θ₁, and dy/dθ = −sin(α + θ).

[Figure 3.14: Plot of the nonlinearity y = g(θ) = cos(α + θ) over one period, showing the two roots θ₁ and θ₂ and the distance d = π − α − θ₁.]

∴ f_y(y) = f_θ(θ)|_{θ₁} / |dy/dθ|_{θ₁} + f_θ(θ)|_{θ₂} / |dy/dθ|_{θ₂}.

Now f_θ(θ)|_{θ₁} = f_θ(θ)|_{θ₂} = 1/(2π) (θ is uniform over (0, 2π]).   (3.69)

dy/dθ|_{θ₁} = −sin(α + cos⁻¹y − α) = −sin(cos⁻¹y) = −√(1−y²)   (3.70)
dy/dθ|_{θ₂} = −sin(α + 2π − α − cos⁻¹y) = sin(cos⁻¹y) = √(1−y²)   (3.71)

Therefore

f_y(y) = { 1/(π√(1−y²)), −1 ≤ y ≤ 1; 0, elsewhere }   (3.72)

which plots as in Fig. 3.15. As a check, we should have

∫_{−∞}^{+∞} f_y(y) dy = ∫_{−1}^{1} (1/π) dy/√(1−y²) =? 1,   (3.73)
or ∫_0^1 dy/√(1−y²) =? π/2.   (3.74)

The LHS is sin⁻¹(y)|₀¹ = sin⁻¹(1) − sin⁻¹(0) = π/2 − 0 = π/2. So the question mark can be removed.

No, it does not depend on α.
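That f_y(y) does not depend on α is easy to check empirically (a Monte Carlo sketch; the sample count, the evaluation point y₀ and the two α values are arbitrary choices). From (3.72) the cdf is F_y(y) = 1/2 + sin⁻¹(y)/π:

```python
import math, random

def cdf_estimate(alpha, y0, trials=200000, seed=1):
    # Empirical P(cos(alpha + theta) <= y0) with theta uniform on [0, 2*pi)
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        theta = rng.uniform(0.0, 2*math.pi)
        if math.cos(alpha + theta) <= y0:
            count += 1
    return count / trials

y0 = 0.3
F = 0.5 + math.asin(y0) / math.pi   # cdf implied by (3.72)
c0 = cdf_estimate(0.0, y0)
c1 = cdf_estimate(1.234, y0)
print(c0, c1, F)
```

Both empirical values agree with F regardless of α, consistent with the first-order stationarity remark below.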

(b) A sketch of the relationship y = cos(θ + α), where 0 ≤ θ ≤ π/9 (see Fig. 3.16), shows that there is only one real root at θ_r = cos⁻¹y − α and that y lies in the range cos(α + π/9) ≤ y ≤ cos α.

[Figure 3.15: plot of f_y(y) of (3.72); the minimum value is 1/π at y = 0 and the density grows without bound as y → ±1.]

[Figure 3.16: sketch of y = cos(θ + α) for 0 ≤ θ ≤ π/9 (20°), showing the single root θ_r.]

Again, |dy/dθ|_{θ=θ_r}| = |−sin(α + cos⁻¹y − α)| = √(1−y²).

Now f_θ(θ) = (9/π)[u(θ) − u(θ − π/9)], and therefore f_θ(θ)|_{θ_r} = (9/π)[u(cos⁻¹y − α) − u(cos⁻¹y − α − π/9)], where cos(α + π/9) ≤ y ≤ cos α. Therefore

f_y(y) = (9/π) · 1/√(1−y²), for cos(α + π/9) ≤ y ≤ cos α (and 0 elsewhere),   (3.75)

which depends on α. In communication models α is invariably 2πft, i.e., y = cos(2πft + θ). The above shows that if θ is uniform over [0, 2π) then the random process is stationary (at least 1st order). However, if it is restricted then in general it becomes nonstationary.


P3.27 (a) ≥ 0 (the quantity E{(x + λy)²} is the expectation of a non-negative random variable).

(b) Since it is always ≥ 0, the function g(λ) = E{(x + λy)²} cannot cross the λ axis, i.e.,

g(λ) = E{x²} + 2λE{xy} + λ²E{y²} ≥ 0.   (3.76)

This means that the equation g(λ) = 0 has two complex roots at

λ₁,₂ = [−2E{xy} ± √(4E²{xy} − 4E{x²}E{y²})] / (2E{y²}).   (3.77)

(Recall from high school that the roots of ax² + bx + c = 0 are at (−b ± √(b² − 4ac))/(2a).) For the roots to be complex the expression under the square root must be ≤ 0. This requires that E²{xy} ≤ E{x²}E{y²}, or E{xy} ≤ √(E{x²}E{y²}).

(c) Equality holds when y = kx, i.e., there is a linear relationship between the random variables x, y. (Statisticians are wont to call this a linear regression.)

(d) Consider the random variables x − m_x and y − m_y. Then from (b) we have

E{(x − m_x)(y − m_y)} ≤ √(E{(x − m_x)²} E{(y − m_y)²}) = √(σ_x²σ_y²) = σ_xσ_y
or ρ_{x,y} = E{(x − m_x)(y − m_y)}/(σ_xσ_y) ≤ 1.

But the expression in (b) should properly be written as

−√(E{x²}E{y²}) ≤ E{xy} ≤ √(E{x²}E{y²})   (3.78)
(or |E{xy}| ≤ √(E{x²}E{y²})),   (3.79)

from which it follows that −1 ≤ ρ ≤ 1.

(e)

|E{x(t)x(t+τ)}| ≤ √(E{x²(t)} E{x²(t+τ)})   (3.80)
or |R_x(τ)| ≤ √(R_x(0)R_x(0)) = R_x(0),   (3.81)

where it is assumed that the process is stationary.

P3.28 (a)

P (y < y ≤ y + ∆y|x < x ≤ x + ∆x) =

P (y < y ≤ y + ∆y, x < x ≤ x + ∆x) . (3.82) P (x < x ≤ x + ∆x)

(b) Assuming continuous random variables the expression becomes 00 .

Solutions

Page 3–24

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

(c) Rewrite the expression in (a) as: P (y < y ≤ y + ∆y, x < x ≤ x + ∆x) P (y < y ≤ y + ∆y|x < x ≤ x + ∆x) = . (3.83) ∆y ∆x∆y P (x 0,

x y(x) = (1 + S) 1+Sx .

1+S dy 1+S dy = ⇒ = , 2 dx (1 + Sx) dx (1 + S|x|)2

Solutions

|x| < 1

(4.93)

Page 4–20

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

0.03

0.025

0.015

0.01

0.005

−1

−0.5

0 y/ ymax

Sh

0

we

f y( y)

0.02

0.5

1

Figure 4.14: Pdf plot of the output of the µ-law compressor when input is Gaussian.

Z1

(1 + S|x|)4 fx (x)dx,

&

1 Nq = 3L2 (1 + S)2

−1

where x =

m . mmax

(4.94)

In terms of the unnormalized variable m, one has

Then

m2max 3L2 (1 + S)2

m Zmax

−mmax m Zmax

or

dy 1 dy = . dm mmax dx

(4.95)

¶ µ |m| 4 fm (m)dm 1+S mmax

uy

Nq =

en

dy dm dy dy = = mmax dx dm dx dm

=

Ng

=

µ ¶ |m|2 |m| |m|3 |m|4 + 6S 2 2 + 4S 3 3 + S 4 4 1 + 4S fm (m)dm mmax mmax mmax mmax −mmax ¶ µ 2 2 3 4 mmax E{|m|} 2 E{|m| } 3 E{|m| } 4 E{|m| } + 6S + 4S +S 1 + 4S 3L2 (1 + S)2 mmax m2max m3max m4max m2max 2 3L (1 + S)2

2 /m2 To write this in terms of σn2 = σm max note that E{|m|} σm E{|m|} p 2 E{|m|} = = σn (as seen in P4.15, Eqn. (P4.9)) (4.96) mmax mmax σm σm E{|m|2 } (4.97) = σn2 m2max 3 E{|m|3 } 3 E{|m|3 } σm 2 3/2 E{|m| } (4.98) = = (σ ) n 3 3 m3max m3max σm σm 4 E{|m|4 } 4 E{|m|4 } σm 2 2 E{|m| } (4.99) = = (σ ) n 4 4 m4max m4max σm σm

Solutions

Page 4–21

Nguyen & Shwedyk

A First Course in Digital Communications 3L2 (1 + S)2 σn2 p E{|m|} 3} 4} 1 + 4S σn2 σm + 6S 2 σn2 + 4S 3 (σn2 )3/2 E{|m| + S 4 σn4 E{|m| σ3 σ4

dy k

∴ SNRq =

m

(4.100)

m

2 the quantities are: For a zero-mean Gaussian signal with power level σm r r E{|m|} 2 E{|m|3 } 2 E{|m|4 } = ; = 2 ; = 3. 3 4 σm π σm π σm

½

n

E{x } = ( n

E{|x| } =

1 · 3 · · · (n − 1)σ n , n even 0, n odd

we

Aside: If x is N (0, σ 2 ) then

(4.101)

1 ·q 3 · · · (n − 1)σ n , 2 k 2k+1 , π 2 k!σ

(4.102)

n = 2k

(4.103)

n = 2k + 1

50 40

en

30

&

Sh

Plots for S = 5, 10, 50, 100 are shown in Fig. 4.15. Note that the performance is not that attractive in that the SNRq varies considerably with σn2 . Some reflection leads one to conclude that this is due to the “higher” order moments presented in the Nq expression, namely σn3 , σn4 . One likes no moments higher than σn2 because then SNRq → constant as σn2 becomes larger. The bottom line is that any old saturating nonlinearity is not suitable in a compander. It has to be one that results in a σn2 term in Nq and this is what ln(·) nonlinearities of the µ-law and A-law give you.

10

q

SNR (dB)

20

uy

0

−10 −20

S=5 S=10 S=50 S=100

Ng

−30

−40 −100

−80 −60 −40 −20 Normalized signal power 10log10(σ2n) (dB)

0

Figure 4.15: Plots of SNRq for 8-bit S-law quantizer with Gaussian input signal.

P4.19 Since we would like the mapping to be unique then the number of possible mappings from the L = 2R target levels to the R-bit sequences is L! which for R = 2, 4, 8, 16 bits are 4! = 24; 16! = 2.0923 × 101 3; 256! = Matlab says inf; 216 ! = more than 256! but still countable. Though some of these mappings may be considered to be trivially different, there is still a humongous number of them. Solutions

Page 4–22

Nguyen & Shwedyk

A First Course in Digital Communications

P4.20 Let P [bit error] = Pe . Then E{q2 } = E{q2 |no bit error}(1 − Pe ) + E{q2 |1 bit error}Pe .

dy k

Assume a uniform quantizer and also a uniform pdf for the quantization error (Eqn. (4.21)). Then E{q2 |no bit error} is that of Equation (4.22) and it is ∆2 /12 (watts). With one bit error the quantization error, q, shall still have a uniform pdf but with a nonzero mean. The mean value depends on which bit is in error and the bit sequence considered. The picture looks as shown in Fig. 4.16. k∆

mj m

ml

Because of bit error we demodulate to this target level

Sh

Transmitted bit pattern

ml = m j + k∆

we



Figure 4.16

∴ E{(m − ml )2 } = E{(m − mj − k∆)2 }

&

= E{(m − mj )2 } − E{2k∆(m − mj )} + E{k2 ∆2 } ∆2 Now E{(m − mj )2 } = (i.e., as if no bit error occurred) 12 E{(m − mj )} = 0 X k 2 ∆2 P [k] E{k2 ∆2 } =

en

k

Ã

∴ E{q2 } = σq2 (1 − Pe ) + + Pe

X

(4.105) (4.106) (4.107)

! k∆2 P [k] Pe

k 2

2

k ∆ P [k].

(4.108)

k

uy

=

σq2

σq2 +

2 X

(4.104)

So now it is a matter of determining the k profile along with the value of P [k]. We do this for the three mappings of Fig. 4.19 and L = 8. 000 001 010 011 100 101 110 111 {1, 2, 4} {1, 2, 4} {1, 2, 4} {1, 2, 4} {1, 2, 4} {1, 2, 4} {1, 2, 4} {1, 2, 4} {1, 3, 7} {1, 1, 5} {1, 3, 1} {1, 1, 3} {1, 3, 7} {1, 1, 5} {1, 3, 1} {1, 1, 3} {1, 2, 1} {1, 1, 3} {1, 2, 5} {1, 2, 7} {1, 2, 1} {1, 2, 3} {1, 2, 5} {1, 2, 7} | {z } values of k

Ng

Bit pattern NBC Gray FBC

Now assuming that each bit sequence is equally probable and that which bit is in error is equally probable, the P [k] is as follows: NBC: P [k = 1] = 1/3; P [k = 2] = 1/3; P [k = 4] = 1/3 Gray: P [k = 1] = 14/24 = 7/12; P [k = 3] = 6/24 = 3/12; P [k = 5, 7] = 2/24 = 1/12 FBC: P [k = 1] = 11/24; P [k = 2] = 7/24; P [k = 3, 5, 7] = 2/24 = 1/12

Solutions

Page 4–23

Nguyen & Shwedyk

A First Course in Digital Communications

Therefore the quantization noise variance is: 1 E{q2 } = σq2 + 13 [∆2 + 4∆2 + 16∆2 ]Pe = ∆2 [ 12 + 7Pe ].

Gray:

7 E{q2 } = σq2 + [ 12 ∆2 +

E{q2 }

FBC:

=

dy k

NBC:

3 1 1 2 2 2 2 1 12 (9∆ ) + 12 (25∆ ) + 12 (25∆ )]Pe = ∆ [ 12 7 2 2 2 1 ∆2 + 24 (4∆2 )+ 24 (9∆2 )+ 24 (25∆2 )+ 24 (49∆2 )]Pe = σq2 +[ 24

If Pe ≤ 10−3 the effect is negligible.

+ 9Pe ]. 1 ∆2 [ 12 +8.46Pe ].

2 = Since σm

σq2

∆2 12

2 Vmax 2 3 ⇒ Vmax 2 Vmax = 2116 . 12(27 )2

= 3.

we

max max P4.21 (a) ∆ = 2V256 = V128 . (Note: Here the optimum quantizer is a uniform quantizer). 2 σm = Rm (τ = 0) = 1 (watts)

= = £ © ª¤ ∂ (b) ∂α E [m(k) − αm(k − 1)]2 = 0 ⇒ α =

Rm (1) Rm (0)

=

Rm (Ts ) Rm (0)

= e−1/100 .

Sh

(c) y(k) = m(k) − αm(k − 1) = m(k) − e−1/100 m(k − 1). E{y(k)} = 0; E{y2 (k)} = (1 + α2 ) Rm (0) −2α Rm (1) = 1 − α2 = 0.0198. | {z } | {z } =1



en

&

y0 (kTs ) has a mean of zero and a variance of 0.0198. Because it is uniform we know that 0 0 the variance is (ymax )2 /12 ⇒ ymax = 0.48745. ¯ 0 ¯ 0.2376 (ymax )2 ∴ σq2 ¯¯ = (4.109) = 16 3·2 3 · 216 diff. quan. ¯ ¯ 1 2¯ σq ¯ (4.110) = 216 no diff. quan. ¯ ¯ 2 σq ¯¯ 0.2376 ¯ diff. quan. = 0.0792 = −11 dB. (4.111) = ¯ 3 2 ¯ σq ¯ no diff. quan.

uy

P4.22 The USB drive can store 8 × 109 bits. (a) 4 kHz speech, 8 bits per sample. The Nyquist sampling rate is 8 kHz, giving a bit rate of 8 × 8, 000 = 64, 000 bits/sec. 8 × 109 bits = 1.25 × 105 seconds = 34.7 hrs = 1.446 days 64, 000 bits/sec

Ng

TD =

(b) 22 kHz audio signal, stereo, 16 bits per sample in each stereo channel. Have 44, 000 samples/sec × 16 bits/sample × 2 channels. TD =

8 × 109 bits = 5.682 × 103 seconds = 94.7 minutes = 1.58 hrs 44, 000 × 16 × 2 bits/sec

(c) 5 MHz video signal, 12 bits per sample, combined with the audio signal above. Have 10 × 106 samples/sec × 12 bits/sample + 44, 000 × 16 × 2bits/sec. TD =

Solutions

8 × 109 bits 8 × 109 bits = = 65.9 seconds = 1.1 minutes 120 × 106 + 1.408 × 106 121.408 × 106 bits/sec Page 4–24

Nguyen & Shwedyk

A First Course in Digital Communications

TD =

dy k

(d) Digital surveillance video, 1024 × 768 pixels per frame, 8 bits per pixel, 1 frame per second. Have (1024 × 768) pixels/frame × 8 bits/pixel × 1 frame/sec = 6, 291, 456 bits/sec. 8 × 109 bits = 1.272 × 103 seconds = 21.20 minutes 6, 291, 456 bits/sec

we

P4.23 We divide the range [−mmax , mmax ] into 2R equally spaced quantization regions, each of width ∆ = 2mmax /2R . If we place a target level in the middle of each quantization region, the maximum value of the quantization noise is qmax = ∆/2 = mmax /2R . The PSNRq is then given by ¶ µ ¡ R¢ mmax = 20 log PSNRq = 20 log10 2 = 6.02R (dB) (4.112) 10 mmax /2R

Sh

To achieve distortion D requires R at least D/6.02, so the smallest value of R is R = dD/6.02e bits

(4.113)

Ng

uy

en

&

Note that with this definition, the 6dB/bit rule-of-thumb hold as well.

Solutions

Page 4–25

dy k

Chapter 5

P5.1 First consider Z Tb Z 0 [φ2 (t)]2 dt = 0

Tb

0

=

1 E2

·

Z |0

Sh

we

Optimum Receiver for Binary Data Transmission s2 (t) √ − ρφ1 (t) E2 Tb

s22 (t)dt −2ρ {z }

=E2

{z

}

&

|

=1

2

¸2

dt

Z

Tb

s2 (t) √ φ1 (t)dt +ρ2 E |0 {z2 } Z Tb 1 √ =√ s2 (t)s1 (t)dt E2 E1 0 | {z }

Z

Tb

0

φ21 (t)dt



2

2

·

uy

P5.2 (a)

en

= 1 − 2ρ + ρ = 1 − ρ . · ¸ · ¸ 0 s2 (t) s2 (t) s1 (t) 1 1 1 ∴ φ2 (t) = √ 2 φ2 (t) = √ 2 √E2 − ρ φ1 (t) = √ 2 √E2 − ρ √E . 1 1−ρ 1−ρ 1−ρ | {z }

Ng

φˆ1 (t) · φˆ2 (t) =

φˆ1 (t) φˆ2 (t)

s1 (t) √ E1

¸

· =

cos θ sin θ − sin θ cos θ

¸·

φ1 (t) φ2 (t)

¸ (5.1)

φˆ1 (t) = cos θφ1 (t) + sin θφ2 (t), φˆ2 (t) = − sin θφ1 (t) + cos θφ2 (t),

− sin θ cos θφ21 (t)

+

sin θ cos θφ22 (t)

ZTb

(5.2)

2

2

+ [cos θ − sin θ]φ1 (t)φ2 (t),

ZTb φˆ1 (t)φˆ2 (t)dt = − sin θ cos θ

0

ZTb φ21 (t)dt + sin θ cos θ

|0

{z

|0

}

=1

£

2

2

+ cos θ − sin θ

¤

φ22 (t)dt {z

}

=1

ZTb φ1 (t)φ2 (t)dt |0

{z

}

=0

= − sin θ cos θ + sin θ cos θ = 0 1

(5.3)

Nguyen & Shwedyk

A First Course in Digital Communications

Therefore {φˆ1 (t), φˆ2 (t)} are orthogonal. To see if they are normal, consider first φˆ21 (t)dt

ZTb [cos θφ1 (t) + sin θφ2 (t)]2 dt =

0

0

ZTb = cos2 θ

dy k

ZTb

ZTb

φ21 (t)dt +2 cos θ sin θ |0

{z =1 2

φ1 (t)φ2 (t)dt + sin2 θ

|0

}

ZTb

{z

}

φ22 (t)dt |0

=0

we

= cos2 θ + sin θ = 1.

{z

}

=1

(5.4)

Sh

Similar derivation shows that φˆ2 (t) also has unit energy. Therefore {φˆ1 (t), φˆ2 (t)} form an orthonormal set. · ¸ · ¸ cos θ − sin θ − cos θ sin θ What if the minus sign is put elsewhere, i.e., or . sin θ cos θ sin θ cos θ Are {φ1 (t), φ2 (t)} still orthonormal? If so does the matrix represent just a rotation? RT (b) Need to find θ such that 0 b φˆ1 (t)[s1 (t) − s2 (t)]dt = 0: ZTb

ZTb

φˆ1 (t) [s1 (t) − s2 (t)] dt =

0

 T Zb ZTb = cos θ  s1 (t)φ1 (t)dt − s2 (t)φ1 (t)dt

&

0

[cos θφ1 (t) + sin θφ2 (t)] [s1 (t) − s2 (t)] dt

0

 T Zb ZTb + sin θ  s1 (t)φ2 (t)dt − s2 (t)φ2 (t)dt 0

en uy

0

0

= cos θ(s11 − s21 ) + sin θ(s12 − s22 ) = 0.

sin θ(s12 − s22 ) = cos θ(s21 − s11 ) s21 − s11 tan θ = s12 − s22 s21 − s11 θ = tan−1 . s12 − s22

(5.5)

(5.6)

Ng

Of course θ is determined by the signal set. You must know the signal set in order to rotate the original basis set as desired: φˆ2 (t) is perpendicular to the line joining s1 (t) and s2 (t). The above result of θ can also be obtained geometrically (see the signal space diagram on Fig. 5.1). Use the following fact from geometry: Given the equation of a straight 1 line y = mx + b, then the perpendicular to the line has a slope of − m . s22 −s12 The slope of the line joining s1 (t) and s2 (t) is . Therefore the slope of φˆ1 (t) is −1 21 −s11 tan θ = − ss22 −s12 , or θ = tan Since we choose θ such that

s21 −s11 s12 −s22 .

s21 −s11

ZTb φˆ1 (t) [s1 (t) − s2 (t)] dt = 0,

(5.7)

0

Solutions

Page 5–2

Nguyen & Shwedyk

A First Course in Digital Communications φ2 (t )

φˆ1 (t )

s22

dy k

s2 (t )

sˆ11 = sˆ21

φˆ2 (t )

θ s12

s1 (t )

θ 0

φ1 (t )

s11

we

s21

Figure 5.1: Determining the rotation angle θ.

Sh

It follows that ZTb

ZTb

s1 (t)φˆ1 (t)dt =

0

s2 (t)φˆ1 (t)dt

0

sˆ11 = sˆ21 ,

(5.8)

&

i.e., the components of s1 (t) and s2 (t) along φˆ1 (t) are the same. µ ¶ Z ∞ Z ∞ (x−µ)2 ) (λ= x−µ λ2 1 1 T −µ − σ − 2 √ e 2σ dx P5.3 Area = √ = e 2 dλ = Q . σ 2πσ T 2π T −µ (∴dλ= dx ) σ σ

en

As shown in Fig. P5.40 of the text, T − µ > 0 (i.e., T > µ) but result is true even if T < µ. Of course the argument of Q(x) is now negative and one would use the relationship Q(−x) = 1 − Q(x).

Ng

uy

Stated in English, the area is a Q function whose argument is the distance from the mean to the threshold divided by the RMS value (standard deviation). ( x − µ )2

− 1 2 e 2σ Late 2πσ Laggards Majority

Early Majority

Early Adoptors Innovators

"The Chasm"

T

Technology Adoption Process µ

0

x

Area = ?

Figure 5.2: Writing the shaded area in terms of the Q function.

Of course one must pay attention on which side of the mean the threshold lies and write the expression accordingly. For instance in terms of the Q function what is the shaded area in Fig. 5.2?

Solutions

Page 5–3

Nguyen & Shwedyk

A First Course in Digital Communications

+

=1 2 si2

=0

(b) Now let

we

=

s2i1

dy k

P5.4 (a) Since si (t) = si1 φ1 (t) + si2 φ2 (t), i = 1, 2, one has: Z Tb Z Tb Ei = s2i (t)dt = [si1 φ1 (t) + si2 φ2 (t)]2 dt 0 0 Z Tb = [s2i1 φ21 (t) + 2si1 si2 φ1 (t)φ2 (t) + s2i2 φ22 (t)]dt 0 Z Tb Z Tb Z Tb 2 2 2 = si1 φ1 (t)dt +2si1 si2 φ1 (t)φ2 (t)dt +si2 φ22 (t)dt 0 0 0 {z } {z } {z } | | |

(q φ1 (t) =

φ2 (t) = Compute

otherwise

φ1 (t)φ2 (t) = =

2 Tb

sin(2πfc t),

0 ≤ t ≤ Tb

0,

2 Tb

Z

&

0

Tb

0 ≤ t ≤ Tb

0,

(q

Z

cos(2πfc t),

Sh

and

2 Tb

1 Tb

Tb

0

otherwise

(5.9)

(5.10)

cos(2πfc t) sin(2πfc t)dt ¯Tb ¯ 1 sin(4πfc t)dt = − cos(4πfc t)¯¯ 4Tb πfc 0

1 [cos(4πfc Tb ) − 1] 4Tb πfc

(5.11)

en

= −

Tb

0

Z

=1

Ng

uy

Thus, φ1 (t) and φ2 (t) are orthogonal if cos(4πfc Tb ) = 1 ⇒ 4πfc Tb = 2kπ ⇒ fc = 2Tk b , where k is a positive integer (k = 1, 2, . . .). There must be an integer number of half cycles of the cos and sin in the interval [0, Tb ] for them to be orthogonal over the interval of Tb seconds. No other frequency suffices. Finally, (fc )min = 2T1 b when k = 1. It is graphically illustrated in Fig. 5.3. sin

0

cos

Tb

t

Figure 5.3: Orthogonality of sin and cos with a minimum frequency.

P5.5 Consider the following two signals s1 (t) and s2 (t) (this is the same signal set considered in Example 5.5): s1 (t) = V cos(2πfc t) s2 (t) = V cos(2πfc t + θ),

Solutions

0 ≤ t ≤ Tb , fc =

k , k integer 2Tb

(5.12)

Page 5–4

Nguyen & Shwedyk

A First Course in Digital Communications

Z E1 =

0

= V

Z

Tb

s21 (t)dt =

Z

2

Tb

0

Z

Tb

0

s22 (t)dt

Z

Tb

= V2 0

V 2 cos2 (2πfc t + θ)dt 0 ·Z Tb ¸ Z Tb V2 [1 + cos(4πfc t + 2θ)] dt + cos(4πfc t + 2θ)dt dt = 2 2 0 0 =

Sh

V 2 Tb 2

=

Tb

(5.13)

we

E2 =

V 2 cos2 (2πfc t)dt 0 ·Z Tb ¸ Z Tb [1 + cos(4πfc t)] V2 dt = dt + cos(4πfc t)dt 2 2 0 0

V 2 Tb 2

= Z

Tb

dy k

(a) The energies of two signals can be computed as:

(5.14)

where we have used the fact that cos(4πfc t+β) is a periodic function with a fundamental period of Tb /k (regardless of the value of the phase β) and an integration over multiple periods of this function equals to zero. V 2 Tb 2

= 1. Therefore,

&

The two signals s1 (t) and s2 (t) have unit energy if E1 = E2 = q V = T2b .

en

Remark: In communications we invariably take fc to be an integer multiple of T1b typically, sometimes 2T1 b so the results for E1 and E2 hold (and those in P5.4). Typically fc is very large, on the order of 106 (MHz) or 109 (GHz) so that even if there aren’t an integer number of cycles (or half cycles) in the time interval, for engineering purposes E1 and E2 still are closely approximated by V 2 Tb /2. (b) Since E1 = E2 = E, the correlation coefficient of the two signals is

uy

ρ =

Z Z 1 Tb V 2 Tb s1 (t)s2 (t)dt = cos(2πfc t) cos(2πfc t + θ)dt E 0 E 0 Z Z Tb V 2 Tb V 2 Tb cos θ = cos θ cos θdt + cos(4πfc t + θ)dt = 2E 0 2E {z } |0 k = 0 since fc = 2Tb

Ng

=

(5.15)

Two signals are orthogonal when the correlation coefficient ρ = 0, which is equivalent to cos θ = 0. So, θ = k π2 , k = ±1, ±2, ±3, . . .

(c) Plot of ρ as a function of θ is shown in Fig. 5.4. √ √ √ p √ (d) d = 2E 1 − ρ ⇒ dmax = 2E 1 − (−1) = 2 E when ρ = −1, or θ = π (antipodal signals).

To relate θ of part (c) to that of P5.4b note when θ = −π/2 then s2 (t) = V cos(2πfc t−π/2) = V sin(2πfc t), i.e., except for a scale factor it is φ2 (t) of P5.4b. On the other hand θ = π/2 results in −φ2 (t) of P5.4b. The signal space of the signal set is shown in Fig. 5.12 of the text and it is reproduced in Fig. 5.5. Solutions

Page 5–5

Nguyen & Shwedyk

A First Course in Digital Communications ρ = cosθ π

π 2

0

dy k

+1

3π 2 2π

−1

θ

we

Figure 5.4: Plot of ρ = cos θ. 2 sin(2π f c t ) Ts

φ2 (t ) =

Sh

θ = 3π 2 ρ =0

s1 (t )

0

θ =π ρ = −1

θ

locus of s2 (t ) as θ

E

&

2 cos(2π f c t ) Ts

φ1 (t ) =

varies from 0 to 2π . s 2 (t )

en

θ =π 2 ρ =0

Figure 5.5: Signal space representation of Problem 5.5.

uy

P5.6 (a)

ρ =

Ng

=

1 √ √ E1 E2

Z

V V2 √ 1√ 2 E1 E2

"0

Tb

V1 V2 cos(2π(fc − ∆f /2)t) cos(2π(fc + ∆f /2)t)dt # Z Z Tb

|

0

cos(2π(2fc )t)dt {z }

=0 (integer number of cycles in Tb )

since cos x cos y =

√ V1√ Tb 2

V V T sin(2π∆f Tb ) √1 2√ b 2 E1 E2 2π∆f Tb √ √ √ V2 Tb √ = E. Therefore = E1 = E and √ E2 ρ(∆f ) =

Solutions

|

0

cos(2π∆f t)dt {z }

=

sin(2π∆f Tb ) 2π∆f

cos(x + y) + cos(x − y) 2

=

But

Tb

+

(5.16)

sin(2π∆f Tb ) . 2π∆f Tb Page 5–6

Nguyen & Shwedyk

A First Course in Digital Communications

Plot of ρ versus the normalized frequency ∆f Tb is shown in Fig. 5.6.

∆fTb =

dy k

ρ

1 1 or ∆f = (minimum frequency separation 2 2Tb for orthogonality)

∆fTb

0

we

ρ min

minimum ρ point ⇒ maximum distance point between the 2 points

Figure 5.6: Plotting of ρ(∆f ).

&

Sh

(b) One can differentiate ρ with respect to the (normalized) variable ∆f Tb to find where the minimum occurs. From the plot of ρ we expect it to be between 0.5 and 0.75. Formally cos x sin x let x ≡ 2π∆f Tb . Then ρ(x) = sinx x and dρ(x) dx = x − x2 = 0 (for maximum/minimum points) or x = tan x. Solve this numerically. Another approach (still numerical) is to set up a table of values of sinx x between x = 2π × 0.65 and 2π × 0.75 in increments of say 0.01 and pick out value of x where the minimum occurs. ? The simplest approach is to trust the answer and check it by seeing if x= tan x at x = 2π× ? 0.715 (within acceptable engineering numerical accuracy), i.e., 2π×0.715= tan(2π × 0.715).

uy

en

p For ∆f Tb = 0.715, ρ = −0.2172. Therefore d = 2E(1.2172). √ When ρ = 0, d = 2E. The distance has increased by a factor of 1.1033. A more relevant way to “quantify” this increase is to state that for ρ = 0 the energy E needs to be increased by 1.2172 to achieve the same distance between the 2 signals as in the ρ = −0.2172 case. This is a 10 log10 (1.2172) = 0.85 dB increase in energy (& hence power). Nonetheless for other reasons, namely synchronization, ∆f is chosen so that ρ = 0.

Remarks:

1 2Tb

the two signals are said to 1 be “coherently” orthogonal, when ∆f is chosen to be k (k ≥ 2) they are called Tb “noncoherently” orthogonal. Chapter 7 discusses this further.

Ng

(i) When the frequency separation ∆f is chosen to be

(ii) To obtain the signal space representation is not that straightforward, except for ρ = 0. All that can be said of the top of one’s head √ is that the signal space is 2–dimensional, and that the 2 signals lie at a distance of E (because of how we adjust V1 , V2 ) from the origin. One basis function can be chosen to be, say φ1 (t) = s√1 (t) . The other basis E function is a function of ∆f .

P5.7 Using the Gram-Schmidt procedure, construct an orthonormal basis for the space of quadratic polynomials {a2 t2 + a1 t + a0 ; a0 , a1 , a2 ∈ R} over the interval −1 ≤ t ≤ 1.

Solutions

Page 5–7

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

It is simple to see that the equivalent problem is to find an orthonormal basis for three signals s1 (t) = 1, s2 (t) = t and s3 (t) = t2 over the interval −1 ≤ t ≤ 1. The first orthonormal basis function is simply φ1 (t) = the interval −1 ≤ t ≤ 1 is 2.

√1 2

since the energy of s1 (t) = 1 over

Next observe that s2 (t) = t is an odd function, while φ1 (t) is an even function. It follows that s2 (t) is already orthogonal to φ1 (t). Thus, we set:

we

p s2 (t) t φ2 (t) = qR = qR = t 3/2 1 2 1 2 −1 s2 (t)dt −1 t dt

Next we project s3 (t) = t2 on the subspace spanned by φ1 (t) and φ2 (t). It is clear that since s3 (t) is R 1an even function and φ2 (t) is an odd function, their product is odd, and we have s32 = −1 s3 (t)φ2 (t)dt = 0. On the other hand, Z

1

1 2 √ t2 dt = √ 2 3 2

Sh

s31 =

−1

The orthogonal signal is given by

2 1 0 φ3 (t) = s3 (t) − √ φ1 (t) − 0 · φ2 (t) = t2 − 3 3 2

&

Finally, the third orthonormal basis function is r µ ¶ ¶ r µ p 45 1 5 2 1 / 8/45 = t2 − = (3t − 1). φ3 (t) = t2 − 3 8 3 8 In summary,

en

(r

1 , 2

r

3 t, 2

r

5 2 (3t − 1) 8

)

is an orthonormal basis set for the quadratic functions over [−1, 1]. The set is plotted in Fig. 5.7.

uy

To find the coefficients of any quadratic polynomial, p(t) = a2 t2 + a1 t + a0 , in terms of φ1 (t), φ2 (t) and φ3 (t) we find the projections of p(t) onto the 3 basis functions, i.e., p(t) = p1 φ1 (t) + p2 φ2 (t) + p3 φ3 (t)

Ng

where

r √ 1 2 a2 + 2a0 = p(t)φ1 (t)dt = [a2 t2 + a1 t + a0 ] √ dt = 3 2 −1 −1 r r Z 1 Z 1 3 2 = p(t)φ2 (t)dt = [a2 t2 + a1 t + a0 ] tdt = a1 2 3 −1 −1 "r # r Z 1 Z 1 3 2 4 3 2 = p(t)φ1 (t)dt = [a2 t + a1 t + a0 ] (3t − 1) dt = a2 . 8 5 8 −1 −1 Z

p1

p2 p3

1

Z

1

(5.17) (5.18) (5.19)

One, of course, does not achieve very much by approximating a quadratic polynomial by another quadratic polynomial. Of more practical interest is the situation where we are given Solutions

Page 5–8

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

2 1.5

we

1 0.5

Sh

0 −0.5 −1

&

−1.5 −1

−0.5

0 t

0.5

1

en

Figure 5.7: Orthonormal basis set for quadratic polynomials.

uy

a more general time function, say s(t) of finite energy, and wish to approximate it by a quadratic polynomial over the interval of [−1, 1]. Then the set of {φ1 (t), φ2 (t), φ3 (t)} found above can be used to approximate s(t) where the coefficients in the approximation s(t) ≈ sˆ(t) = s1 φ1 (t) + s2 φ2 (t) + s3 φ3 (t) are found by: Z

s1 =

Z

1

−1

s(t)φ1 (t)dt, s2 =

Z

1

−1

s(t)φ2 (t)dt, s3 =

1

−1

s(t)φ3 (t)dt.

Furthermore the mean-squared error [s(t) − sˆ(t)]2 is minimum (see P5.9).

Ng

Remark: The 3 polynomials {φ1 (t), φ2 (t), φ3 (t)} are normalized versions of the first three members of what are referred to as Legendre polynomials. These are orthogonal polynomials over [−1, 1] that are solutions of Legendre’s differential equations: (1 − t2 )

d2 pn (t) dpn (t) − 2t + n(n + 1)pn (t) = 0. 2 dt dt

The nth member is given by pn (t) = Z

− 1)n (Rodrique’s formula) and

½



−∞

Solutions

1 dn 2 2n n! dtn (t

pn (t)pm (t)dt =

2 2n+1 ,

0,

n=m . n 6= m

Page 5–9

Nguyen & Shwedyk

A First Course in Digital Communications Z

T

si (t)sj (t)dt = 0, (i 6= j)

0

Equal energy:

Z

T

E= 0

dy k

P5.8 Orthogonal :

Z

s21 (t)dt

= ··· =

0

First, let us find the energy, say in sm (t):

Z

T

= 0

sˆ2m (t)dt =

= E− +

1 M2

= E− +

0

Z s2m (t) −

1 M2

= E−

2 M ½Z

Z

2 M ½Z

0

#2 M 1 X sm (t) − sk (t) dt M k=1

M

X 2 sm (t) sk (t)dt + M k=1

T

Z

T

0

"M #2 1 X sk (t) dt M2 k=1

sm (t) [s1 (t) + . . . + sM (t)] dt

s21 (t)

Z

0

T

"

T

0 T £

0

T

we

0

Z

s2M (t)dt.

Sh

ˆ = E

T

+ ... +

£

s2M (t)

¤ + s1 (t)s2 (t) + . . . + sM −1 (t)sM (t) dt

¤ s1 (t)sm (t) + . . . + s2m (t) + . . . + sm (t)sM (t) dt 0 Z Z T Z T T 2 2 s1 (t)s2 (t)dt + . . . + sM (t)dt + s1 (t)dt + . . . +

&

Z

T

0

0

0

¾

¾

T

sM −1 (t)sM (t)dt

2 1 E 1 E + 2 (M E + 0) = E − E= (M − 1) M M M M

Ng

uy

en

Now let us find the correlation coefficient: Z T sˆm (t)ˆ sn (t) dt ρmn = ˆ E 0 #" # Z " M M 1 X 1 X 1 T sk (t) sn (t) − sk (t) dt sm (t) − = ˆ 0 M M E k=1 k=1 # Z T" M M M X X 1 1 1 1 X = sm (t)sn (t) − sm (t) sk (t) − sn (t) sk (t) + ( sk (t))2 dt ˆ 0 M M M E k=1 k=1 k=1 Z T Z T Z M M T X X 1 1 1 = sm (t)sn (t)dt − sm (t) sk (t)dt − sn (t) sk (t)dt ˆ 0 M 0 M 0 E k=1 k=1 #2 Z T "X M 1 + sk (t) dt M2 0 k=1 · ¸ 1 E 1 1 1 1 −E/M 1 0− = E− E + 2 M E = (− ) = = ˆ ˆ M M M M E/M (M − 1) 1 − M E E The signal space plots for M = 2, 3, 4 are as follows. 2 (t) M = 2: It is easy to show that sˆ1 (t) = s1 (t)−s and sˆ2 (t) = 2 ˆ signals are antipodal with energy E = E/2.

Solutions

s2 (t)−s1 (t) 2

= −ˆ s1 (t), i.e., the

Page 5–10

Nguyen & Shwedyk

A First Course in Digital Communications

Note that sˆm (t) =

m=1

M X m=1

M 1 X sm (t) − M

ÃM X

m=1

!

M X

M 1 X sm (t) − sk (t) M

dy k

M X

sk (t)

=

m=1

k=1

k=1

à |

!

M X

1

m=1

{z

= 0. }

=M

Therefore the signals {ˆ sm (t)}M m=1 are linearly independent since any one of them is a linear combination of the remaining q M − 1 of them. They therefore lie in an (M − 1)-dimensional

Sh

we

M −1 signal space at distance M from the origin. Further, since they are equally correlated the angle between any 2 vectors from the origin to the signal points is the same for each pair. Putting this all together we have for: q M = 3: A 2-dimensional signal space. Signal points are at distance 23 E from origin, laying at a angle spacing of 120◦ , i.e., on vertices of an equilateral triangle. q M = 4: Lying in a 3-dimensional space, distance of 34 E, on the vertices of a regular tetrahedron.



sˆ1 (t )

0

E 2

sˆ2 (t )

2 3

&

sˆ2 (t )

φ2 (t )

φ1 (t )

E

φ1 (t )

0

E 2

sˆ3 (t )

en

M =2

sˆ1 (t )

φ2 (t )

M =3

uy

sˆ1 (t )

Ng

sˆ3 (t )

φ3 (t )

φ1 (t )

0 3 4

E sˆ2 (t )

M =4 sˆ4 (t )

Figure 5.8

P5.9 (a) The approximation error is simply e(t) = s(t) − sˆ(t) = s(t) −

Solutions

PN

k=1 sk φk (t).

Thus the

Page 5–11

A First Course in Digital Communications

energy of the estimation error is: #2 Z +∞ " N X Ee = s(t) − sk φk (t) dt −∞

Z

= Es − 2

k=1

Z

+∞

−∞ +∞

Z = Es − 2

Z s(t)ˆ s(t)dt +

−∞ 2 2 sN φN (t)

= Es − 2

= Es − 2

+∞

s(t)ˆ s(t)dt + −∞

Z = Es − 2

+ s1 s2 φ1 (t)φ2 (t) + . . . + sN −1 sN φN −1 (t)φN (t)]dt Z +∞ X N +∞ s(t)ˆ s(t)dt + s2k φ2k (t)dt + 0

−∞

Z = Es − 2

[s21 φ21 (t) +

we

Z

−∞

[s1 φ1 (t) + . . . + sN φN (t)]2 dt

+∞

s(t)ˆ s(t)dt +

−∞ N X

−∞ k=1 N Z +∞ X k=1 −∞ N X s2k k=1

s2k φ2k (t)dt

Sh

+ ... +

+∞

s(t)ˆ s(t)dt + −∞ +∞

dy k

Nguyen & Shwedyk

Z

sk

k=1

+∞

−∞

s(t)φk (t)dt +

N X

s2k

(5.20)

k=1

en

&

Obviously Ee is a function of sk and therefore sk can be chosen to minimize Ee . To find the optimal sk , differentiate Ee with respect to each sk and set the result to zero: i i hP hP R +∞ N N 2 s s(t)φ (t)dt s 2d d k k k=1 k=1 k −∞ dEe = 0− + dsk dsk dsk Z +∞ = −2 s(t)φk (t)dt + 2sk = 0 −∞ Z +∞ ⇒ sk = s(t)φk (t)dt R +∞

s(t)φk (t)dt = sk into (5.20), the minimum mean square approximation

uy

(b) Substituting error is given by:

−∞

−∞

Ng

(Ee )min = Es − 2 = Es − 2

N X k=1 N X k=1

Z sk

+∞

−∞

s2k +

N X

s(t)φk (t)dt + s2k = Es −

k=1

N X

s2k

k=1 N X s2k (joules) k=1

Basically the energy of the approximation error is the signal energy minus the energy in the approximation signal.

P5.10 (a) The coefficients sij , i, j ∈ {1, 2} are found as follows: Z 1 Z 1 s11 = s1 (t)φ1 (t)dt = 1, s12 = s1 (t)φ2 (t)dt = 1, 0 0 Z 1 Z 1 s21 = s2 (t)φ1 (t)dt = 1, s22 = s2 (t)φ2 (t)dt = −1. 0

Solutions

0

Page 5–12

Nguyen & Shwedyk

A First Course in Digital Communications

(b)

·

φR 2 (t)

¸

·

cos θ sin θ − sin θ cos θ

=

= (cos θ)φ1 (t) + (sin θ)φ2 (t)

¸·

θ=600 =

φ1 (t) φ2 (t)

¸

(5.21)

√ 1 3 φ1 (t) + φ2 (t) 2 2

we

φR 1 (t)

φR 1 (t) φR 2 (t)

dy k

Note that these results can also be found by inspecting that s1 (t) = φ1 (t) + φ2 (t) and s2 (t) = φ1 (t) − φ2 (t).

= −(sin θ)φ1 (t) + (cos θ)φ2 (t)

θ=600 =

√ 3 1 − φ1 (t) + φ2 (t) 2 2

(5.22)

φ1R (t ) (1 + 3 ) 2 0 .5

0

1

t

φ2R (t )

(1 − 3 ) 2

&

(1 − 3 ) 2

Sh

R 0 The time waveforms of φR 1 (t) and φ2 (t) for θ = 60 are shown in Fig. 5.9.



0

0 .5

1

t

(1 + 3 ) 2

en

R Figure 5.9: Plots of φR 1 (t) and φ2 (t).

(c) Determine the new set of coefficients sR ij , i, j ∈ {1, 2} in the representation:

uy

·

s1 (t) s2 (t)

¸

· =

R sR 11 s12 R s21 sR 22

¸·

φR 1 (t) φR 2 (t)

• One method is a straightforward calculation: sR ij =

Z 0

1

¸ (5.23) si (t)φR j (t)dt. Thus

Ã

Ng

sR 11 = sR 21 =

à √ ! √ √ ! √ 1 1+ 3 1+ 3 R 1 1− 3 1− 3 2× = , s12 = 2× = 2 2 2 2 2 2 à ! à ! √ √ √ √ 1 1− 3 1− 3 R 1 −(1 + 3) 1+ 3 2× = , s21 = 2× =− . 2 2 2 2 2 2

• A second method is based on sij and θ directly: · ¸ · R ¸· R ¸ · R ¸· ¸· ¸ s1 (t) s11 sR φ1 (t) s11 sR cos θ sin θ φ1 (t) 12 12 = = R R s2 (t) sR φR sR − sin θ cos θ φ2 (t) 21 s22 2 (t) 21 s22 · ¸· ¸ s11 s12 φ1 (t) = s21 s22 φ2 (t)

Solutions

Page 5–13

Nguyen & Shwedyk

A First Course in Digital Communications

· ⇒

·

R sR 11 s12 R s21 sR 22

s11 s12 s21 s22

¸

· =

¸

· = ·

cos θ sin θ − sin θ cos θ

s11 s12 s21 s22

¸·

cos θ sin θ − sin θ cos θ

¸

¸−1

we

=

¸·

¸· ¸ s11 s12 cos θ − sin θ s21 s22 sin θ cos θ " √ √ # · ¸" 1 3 3+1 1 1 − 2 2√ √2 = 3 1− 3 1 1 −1

= θ=600

R sR 11 s12 R sR 21 s22

dy k

Therefore

2

2

2

√ 1− 3 √2 − 3+1 2

#

Sh

(d) Geometrical picture for both the basis function sets (original and rotated) and for the two signal points is shown in Fig. 5.10. Note: As seen from the geometrical picture,

φ2 ( t )

φ1R (t )

s1 (t )

&

φ2R (t )

θ = 60 0 −450 φ1 (t )

en

d

uy

s2 (t )

Figure 5.10: Geometrical representation of the basis function sets (original and rotated) and the two signals.

Ng

R R R sR 11 = −s22 and s12 = s21 . This is seen in (c) above and is true for any θ. But do not jump to conclusions that this is true all the time. It comes about because of the locations of s1 (t), s2 (t) in the signal space.

(e) Determine the distance d between the two signals s1 (t) and s2 (t) in two ways: (i) Algebraically: Z 2

d = 0

1

Z 2

[s1 (t) − s2 (t)] dt =

0.5

Z 2

1

2 dt + 0

22 dt = 4 ⇒ d = 2

(ii) Geometrically: From the signal space plot of (d) one has d = both signal have energy E1 = E2 = 2).

Solutions

(5.24)

0.5

√ 2 + 2 = 2 (Note that

Page 5–14

Nguyen & Shwedyk

A First Course in Digital Communications

φ3 ( t )

dy k

(f) Thought φ1 (t) and φ2 (t) can represent the two given signals, they are by no means a complete basis set because they cannot represent an arbitrary, finite-energy signal defined on the time interval [0, 1]. As a start to complete the basis, we wish to plot the next two possible orthonormal functions φ3 (t) and φ4 (t). Obviously there are may possible choices for φ3 (t) and φ4 (t). One simple choice is shown below. φ4 ( t )

2 1

0 1 4 − 2

1 2

we

2

t

t

1

0

1 3 2 4

− 2

Sh

Figure 5.11: An example of φ3 (t) and φ4 (t).

P5.11 (a) The first orthonormal basis function is chosen, somewhat arbitrarily, as φ1(t) = s1(t)/√E1 = (1/√2)s1(t). Next, s2(t) is already orthogonal to φ1(t). Thus, we set:

    φ2(t) = s2(t)/√(∫_0^2 s2²(t)dt) = { 1/√2, 0 ≤ t ≤ 1;  −1/√2, 1 ≤ t ≤ 2;  0, elsewhere }        (5.25)

Then project s3(t) onto φ1(t) and φ2(t) to obtain

    s31 = ∫_0^2 (1/√2) dt = √2,    s32 = ∫_0^1 (1/√2) dt − ∫_1^2 (1/√2) dt = 0.

The third orthogonal signal is given by

    φ3′(t) = s3(t) − √2 φ1(t) − 0 · φ2(t) = { −1, 2 ≤ t ≤ 3;  0, elsewhere }.

Since the energy of φ3′(t) equals 1, one sets φ3(t) = φ3′(t). Finally, the fourth orthonormal basis function is obtained by first projecting s4(t) onto φ1(t), φ2(t) and φ3(t):

    s41 = ∫_0^2 (1/√2)(−1) dt = −√2,
    s42 = ∫_0^1 (1/√2)(−1) dt + ∫_1^2 (−1/√2)(−1) dt = 0,
    s43 = ∫_2^3 (−1)(−1) dt = 1.

The remaining orthogonal signal is

    φ4′(t) = s4(t) + √2 φ1(t) − 0 · φ2(t) − φ3(t) = 0.

Solutions                                                                Page 5–15

Nguyen & Shwedyk

In conclusion, we need only 3 orthonormal basis functions to represent the signal set. They are plotted in Fig. 5.12:

    φ1(t) = { 1/√2, 0 ≤ t ≤ 2;  0, elsewhere },
    φ2(t) = { 1/√2, 0 ≤ t ≤ 1;  −1/√2, 1 ≤ t ≤ 2;  0, elsewhere },
    φ3(t) = { −1, 2 ≤ t ≤ 3;  0, elsewhere }.

[Figure 5.12: Three basis functions.]

In terms of the above basis functions, the four signals are:

    [ s1(t) ]   [  √2   0   0 ]
    [ s2(t) ] = [   0  √2   0 ] [ φ1(t) ]
    [ s3(t) ]   [  √2   0   1 ] [ φ2(t) ]
    [ s4(t) ]   [ −√2   0   1 ] [ φ3(t) ]

(b) First, the 3 functions are certainly orthogonal (they are what one calls "time orthogonal" since they do not overlap in time) and obviously each of them has unit energy. The question is whether they represent the 4 time functions exactly. Well,

    s1(t) = v1(t) + v2(t) + 0 · v3(t),    s2(t) = v1(t) − v2(t) + 0 · v3(t),
    s3(t) = v1(t) + v2(t) − v3(t),        s4(t) = −v1(t) − v2(t) − v3(t),

so they do.

(c) The geometrical representation of the four signals is plotted in Fig. 5.13.

[Figure 5.13: Geometrical representation of the 4 signals by {v1(t), v2(t), v3(t)}.]
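The Gram-Schmidt procedure above can be cross-checked numerically on sampled versions of the four signals (a sketch; the rectangular signal shapes are those implied by the figures and the coordinates derived above):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 3, dt)

def box(a, b):
    """Unit-height pulse on [a, b)."""
    return ((t >= a) & (t < b)).astype(float)

# Sampled signals (assumed shapes): s1 = 1 on [0,2]; s2 = +1 on [0,1], -1 on [1,2];
# s3 = s1 with an extra -1 on [2,3]; s4 = -1 on [0,3].
s1 = box(0, 2)
s2 = box(0, 1) - box(1, 2)
s3 = box(0, 2) - box(2, 3)
s4 = -box(0, 3)

def inner(x, y):                # approximate integral of x(t)y(t)
    return np.sum(x * y) * dt

basis = []
for s in (s1, s2, s3, s4):
    r = s - sum(inner(s, phi) * phi for phi in basis)   # remove projections
    e = inner(r, r)                                     # residual energy
    if e > 1e-6:
        basis.append(r / np.sqrt(e))                    # normalize survivor

print(len(basis))   # -> 3
coords = [[round(float(inner(s, phi)), 3) + 0.0 for phi in basis]
          for s in (s1, s2, s3, s4)]
print(coords)       # rows ~ (sqrt(2),0,0), (0,sqrt(2),0), (sqrt(2),0,1), (-sqrt(2),0,1)
```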

P5.12 The signal set is given as:

    s1(t) = { √3 A cos(2πt/Tb),  0 ≤ t ≤ Tb;  0, otherwise },
    s2(t) = { A sin(2πt/Tb),     0 ≤ t ≤ Tb;  0, otherwise }.

(a) There is one cycle of the cosine and one cycle of the sine in the interval [0, Tb]; see Fig. 5.14. Obviously the area of the product is zero, regardless of the amplitudes of the sine and cosine. Therefore s1(t) and s2(t) are orthogonal.

[Figure 5.14: One cycle each of the cosine and the sine over [0, Tb].]

Shown mathematically:

    ∫_0^{Tb} s1(t)s2(t) dt = √3 A² ∫_0^{Tb} cos(2πt/Tb) sin(2πt/Tb) dt
                           = (√3/2) A² ∫_0^{Tb} sin(4πt/Tb) dt
                           = −(√3/2) A² (Tb/(4π)) cos(4πt/Tb) |_0^{Tb} = 0.

The energies of the two signals are:

    E1 = ∫_0^{Tb} s1²(t) dt = (√3 A)² Tb/2 = 3A²Tb/2,
    E2 = ∫_0^{Tb} s2²(t) dt = A²Tb/2.

Simply choose the basis functions {φ1(t), φ2(t)} to be scaled versions of s1(t) and s2(t):

    φ1(t) = s1(t)/√E1 = √(2/Tb) cos(2πt/Tb),  0 ≤ t ≤ Tb;
    φ2(t) = s2(t)/√E2 = √(2/Tb) sin(2πt/Tb),  0 ≤ t ≤ Tb.

[Figure 5.15: The basis functions φ1(t) and φ2(t).]

(b) The optimum decision rule for the minimum-distance receiver is (choose 0D if the left side is the larger, 1D otherwise):

    (r1 − s21)² + (r2 − s22)² ≷ (r1 − s11)² + (r2 − s12)²
    ⇔ r1² + (r2 − √E2)² ≷ (r1 − √E1)² + r2²
    ⇔ −2 r2 √E2 + E2 ≷ −2 r1 √E1 + E1
    ⇔ −2 r2 A√(Tb/2) + A²Tb/2 + 2 r1 A√(3Tb/2) − 3A²Tb/2 ≷ 0
    ⇔ √3 r1 − r2 − A√Tb/√2 ≷ 0.

[Figure 5.16: Signal space diagram and decision regions; tan θ = √E1/√E2 gives θ = π/3.]

(c) From the signal space diagram, the squared distance between the two signals is:

    d21² = E1 + E2 = 3A²Tb/2 + A²Tb/2 = 2A²Tb = 2Tb   (when A = 1 volt).

It follows that the bit error probability is:

    P[error] = Q( (d21/2)/√(N0/2) ) = Q( (√(2Tb)/2)/√(N0/2) ) = Q(√(Tb/N0)) ≤ 10⁻⁶
    ⇒ √(Tb/N0) ≥ 4.75 ⇒ Tb ≥ (4.75)² N0
    ⇒ rb = 1/Tb ≤ 1/((4.75)² N0) = 1/((4.75)² × 10⁻⁸) = 4.43 × 10⁶ bits/sec = 4.43 Mbps.

(d) To implement the optimum receiver that uses only one matched filter, one needs to rotate the basis functions as shown in the signal space diagram to obtain φ̂1(t) and φ̂2(t), where φ̂1(t) is perpendicular to the line joining s1(t) and s2(t). Since

    tan θ = √E1/√E2 = √(3A²Tb/2)/√(A²Tb/2) = √3 ⇒ θ = π/3,

it follows that

    φ̂2(t) = −sin θ φ1(t) + cos θ φ2(t)
           = √(2/Tb) [−cos(2πt/Tb) sin θ + sin(2πt/Tb) cos θ]
           = √(2/Tb) sin(2πt/Tb − θ) = √(2/Tb) sin(2πt/Tb − π/3).

The impulse response of the matched filter is found as:

    h(t) = φ̂2(Tb − t) = √(2/Tb) sin(2π(Tb − t)/Tb − π/3)
         = √(2/Tb) sin(5π/3 − 2πt/Tb) = −√(2/Tb) sin(2πt/Tb + π/3).

[Figure 5.17: r(t) → matched filter h(t) → sample at t = Tb; decide 1D if r̂2 ≥ T, 0D if r̂2 < T.]

The threshold can be found from the signal space diagram to be the midpoint of the two projections on φ̂2:

    T = −√E1 sin(π/3) + d21/2 = −(3A/2)√(Tb/2) + A√(Tb/2) = −(A/2)√(Tb/2).

(e) The average energy of the two signals s1(t) and s2(t) is:

    Eb = (E1 + E2)/2 = A²Tb.

For a given (fixed) energy per bit Eb, antipodal signalling achieves the smallest error probability amongst all binary signalling schemes. This is because the distance between the two signals in antipodal signalling is the largest (again, for a fixed Eb). Thus, the two signals can be modified as:

    ŝ1(t) = −ŝ2(t) = √Eb φ1(t) = A√Tb √(2/Tb) cos(2πt/Tb) = √2 A cos(2πt/Tb),  0 ≤ t ≤ Tb.
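As a quick numerical check of the rate bound in part (c) (a sketch; Q(x) is computed from the complementary error function, and N0 = 10⁻⁸ and Q⁻¹(10⁻⁶) ≈ 4.75 are the values used in the solution):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

N0 = 1e-8                      # noise PSD level used in the solution
x = 4.75                       # Q(4.75) is approximately 1e-6
print(Q(x))                    # close to 1e-6
Tb_min = x**2 * N0             # smallest allowed bit duration
rb_max = 1 / Tb_min            # largest allowed bit rate
print(round(rb_max / 1e6, 2))  # -> 4.43  (Mbps)
```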

P5.13 The signal set is plotted in Fig. 5.18: s1(t) is a triangular pulse of peak value V at t = Tb/2, with energy E1 = V²Tb/3, while s2(t) takes the value V on the first half of the bit interval and −V on the second half, with energy E2 = V²Tb.

[Figure 5.18: The two signals s1(t) and s2(t).]

(a) To show s1(t) is orthogonal to s2(t), one needs to show that ∫_0^{Tb} s1(t)s2(t) dt = 0. This is quite obvious by plotting s1(t)s2(t): the total area is zero. Since s1(t) and s2(t) are orthogonal, choose:

    φ1(t) = s1(t)/√E1 = (√3/(V√Tb)) s1(t);    φ2(t) = s2(t)/√E2 = (1/(V√Tb)) s2(t).

[Figure 5.19: The basis functions φ1(t) and φ2(t).]

(b) The signal space diagram and the optimum decision regions are shown in Fig. 5.20.

[Figure 5.20: Signal space diagram and decision regions; tan θ = √E1/√E2 gives θ = π/6.]

The optimum decision rule is the minimum-distance rule (choose 0D if the left side is the larger, 1D otherwise):

    (r1 − s21)² + (r2 − s22)² ≷ (r1 − s11)² + (r2 − s12)²
    ⇔ −2 r2 √E2 + E2 ≷ −2 r1 √E1 + E1
    ⇔ −2 r2 V√Tb + V²Tb + 2 r1 V√(Tb/3) − V²Tb/3 ≷ 0
    ⇔ r1/√3 − r2 + V√Tb/3 ≷ 0.

(c) The squared distance between the two signals is:

    d21² = E1 + E2 = V²Tb/3 + V²Tb = 4V²Tb/3 = 4Tb/3   (when V = 1 volt).

The error probability of the minimum-distance receiver is:

    P[error] = Q( (d21/2)/√(N0/2) ) = Q(√(2Tb/(3N0))) ≤ 10⁻⁶
    ⇒ √(2Tb/(3N0)) ≥ 4.75 ⇒ Tb ≥ (4.75)² (3/2) N0
    ⇒ rb = 1/Tb ≤ 1/((4.75)² × 1.5 × 10⁻⁸) = 2.955 × 10⁶ bits/sec = 2.955 Mbps.

(d) One needs to rotate φ1(t) and φ2(t) to obtain φ̂1(t) and φ̂2(t) as shown in the signal space diagram: φ̂1(t) is perpendicular to the line joining s1(t) and s2(t). Since

    tan θ = √E1/√E2 = (V√(Tb/3))/(V√Tb) = 1/√3 ⇒ θ = π/6 ⇒ sin θ = 1/2, cos θ = √3/2.

To realize the optimum receiver that uses only one matched filter, one needs φ̂2(t) and the threshold T:

    φ̂2(t) = −sin θ φ1(t) + cos θ φ2(t) = −(1/2) φ1(t) + (√3/2) φ2(t),

and the filter's impulse response is h(t) = φ̂2(Tb − t). Both φ̂2(t) and h(t) are plotted in Fig. 5.21.

[Figure 5.21: φ̂2(t) and h(t) = φ̂2(Tb − t).]

The block diagram of the optimum receiver is shown in Fig. 5.22: r(t) → h(t), sampled at t = Tb; decide 1D if r̂2 ≥ T and 0D if r̂2 < T, where the threshold can be found from the signal space diagram to be:

    T = −√E1 sin(π/6) + d21/2 = −(1/2) V√(Tb/3) + V√(Tb/3) = V√Tb/(2√3).

[Figure 5.22: The single-matched-filter receiver.]

Note: One can also make use of the result obtained in Question 2 to come up with the following implementation:

    ∫_0^{Tb} r(t)[s2(t) − s1(t)] dt ≷ (E2 − E1)/2   (1D if ≥, 0D if <).

[Figure 5.23: Filter s2(Tb − t) − s1(Tb − t), sampled at t = Tb, compared against (E2 − E1)/2.]

(e) Need to modify s2(t) so that the Euclidean distance d21 is maximized. Since the energy of s2(t) cannot be changed, move s2(t) to the point (−√E2, 0) so that the maximized distance is

    d21 = √E2 + √E1 = V√Tb (1 + 1/√3) = V√Tb (√3 + 1)/√3.

The modified signal is ŝ2(t) = −√E2 φ1(t) = −√3 s1(t), a triangular pulse of peak value −√3 V at t = Tb/2, shown in Fig. 5.24.
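The energies, orthogonality, and distance used above can be verified numerically (a sketch; the triangle and half-interval square-wave shapes are those assumed from the figures):

```python
import numpy as np

V, Tb, dt = 1.0, 1.0, 1e-5
t = (np.arange(int(Tb / dt)) + 0.5) * dt            # midpoint sampling grid

# Assumed shapes: s1 is a triangle peaking at V at Tb/2;
# s2 is +V on the first half-bit and -V on the second half.
s1 = np.where(t < Tb / 2, 2 * V * t / Tb, 2 * V * (Tb - t) / Tb)
s2 = np.where(t < Tb / 2, V, -V)

E1 = float(np.sum(s1 ** 2) * dt)
E2 = float(np.sum(s2 ** 2) * dt)
cross = float(np.sum(s1 * s2) * dt)

print(round(E1, 4), round(E2, 4))   # -> 0.3333 1.0   (V^2 Tb/3 and V^2 Tb)
print(abs(cross) < 1e-6)            # -> True: the signals are orthogonal
print(round(E1 + E2, 4))            # -> 1.3333  (d21^2 = 4 V^2 Tb/3)
```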

[Figure 5.24: The modified signal ŝ2(t) = −√E2 φ1(t) = −√3 s1(t).]

P5.14 (a) The two signals are (time) orthogonal with equal energy E = A²T/2 (joules). The orthonormal basis is φ1(t) = s1(t)/√E and φ2(t) = s2(t)/√E.

[Figure 5.25: The basis functions φ1(t) and φ2(t), each of amplitude √(2/T) = A/√E over its half of the interval.]

(b)+(c) See the above plots and the signal space diagram of Fig. 5.26.

(d) P[bit error] = Q(√(E/N0)) = Q(√(A²T/(2N0))) = Q(√(1/(2 rb × 10⁻³))) = 10⁻⁶

    ⇒ (1/√2)√(1/(2 rb × 10⁻³)) = erfc⁻¹(2 × 10⁻⁶)
    ⇒ rb = 1/((4 × 10⁻³)[erfc⁻¹(2 × 10⁻⁶)]²).

(e) Rotate the orthogonal basis by 45° and look at the projection along φ2^R(t) = (s2(t) − s1(t))/(normalizing factor); the receiver then needs only the single correlator of Fig. 5.29.

(f) We want s2(t) as far away from s1(t) as possible. Therefore make the signalling antipodal, i.e., s2(t) = −s1(t).

[Figure 5.26: Signal space diagram: s1(t) ↔ 0T at (√E, 0) and s2(t) ↔ 1T at (0, √E).]
[Figure 5.27: Two-correlator receiver forming r2 − r1, compared against a threshold of 0.]
[Figure 5.28: The rotated basis function φ2^R(t), taking values −1/√T on [0, T/2] and +1/√T on [T/2, T].]
[Figure 5.29: Single-correlator receiver using φ2^R(t).]
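The rate bound in part (d) needs erfc⁻¹, which the standard library does not provide; a bisection sketch (the constants follow the solution's numbers):

```python
import math

def erfc_inv(y, lo=0.0, hi=10.0, iters=100):
    """Invert erfc on [0, inf) by bisection (erfc is decreasing there)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = erfc_inv(2e-6)                 # erfc^-1(2e-6)
rb = 1 / (4e-3 * x**2)             # the solution's formula for the maximum bit rate
print(round(math.sqrt(2) * x, 2))  # -> 4.75 (i.e., Q^-1(1e-6), consistent with P5.12/P5.13)
print(round(rb, 2))                # maximum bit rate in bits/second
```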

P5.15 The noise, n(t) = n, is simply a DC level, but the amplitude of the level is random and Gaussian distributed with probability density function

    f_n(n) = (1/√(2πN0)) exp(−n²/(2N0)).        (5.26)

[Figure 5.30: A binary communication system with additive noise: r(t) = si(t) + n, 0 ≤ t ≤ T.]

The received signal in the first bit interval can be written as:

    r(t) = si(t) + n(t) = si(t) + n,  0 ≤ t ≤ T.        (5.27)

(a) The autocorrelation function of the noise n(t) is

    R_n(τ) = E{n(t)n(t + τ)} = E{n × n} = E{n²} = N0.

Since R_n(τ) = N0 for every τ, it follows that any two noise samples, no matter how far apart they are, are correlated. In fact, because the noise is modeled as a random DC level, knowing the value of the noise at one time instant gives full knowledge of the noise at any other time instant.

The power spectral density of the noise is

    S_n(f) = F{R_n(τ)} = F{N0} = N0 δ(f).

Note that S_n(f) ≠ 0 only when f = 0, i.e., S_n(f) = 0 for f ≠ 0. This is of course due to the noise being modeled as a random DC level!

(b) Consider the signal set and the receiver shown in Fig. 5.31 (as usual, s1(t) is used for the transmission of bit "0" and s2(t) for the transmission of bit "1"): s1(t) = V and s2(t) = −V over 0 ≤ t ≤ T. In essence, the system uses antipodal signalling. The decision variable is r1 = (1/T)∫_0^T r(t) dt. More specifically,

    r1 = V + n1    if "0" was transmitted,
    r1 = −V + n1   if "1" was transmitted,

where n1 = (1/T)∫_0^T n dt = n is Gaussian and zero-mean with variance N0 (averaging does not reduce a DC noise level). Therefore the error probability of the receiver is:

    P[error] = Q(V/√N0).

[Figure 5.31: Antipodal signal set and the integrate-and-dump receiver: r1 ≥ 0 ⇒ 0D, r1 < 0 ⇒ 1D.]

(c) Consider the signal set plotted in Fig. 5.32: s1(t) = V over the whole interval, while s2(t) = V for 0 ≤ t ≤ T/2 and −V for T/2 ≤ t ≤ T.

[Figure 5.32: A signal set.]

To achieve an error probability of zero, the receiver needs to remove the noise completely. There are many possibilities. Perhaps the simplest is to take 2 samples of r(t), at r(t1) and r(t2), where 0 < t1 < T/2 and T/2 < t2 < T, i.e., they are in the first and second half of the bit interval, respectively. The decision rule is as follows:

    If sample r(t1) equals r(t2), then decide s1(t).
    If sample r(t1) is not equal to r(t2), then decide s2(t) (note that then r(t1) > r(t2)).

When s1(t) is transmitted, r(t) = V + n, 0 ≤ t ≤ T ⇒ r(t1) = r(t2).

When s2(t) is transmitted, then

    r(t) = { V + n, 0 ≤ t ≤ T/2;  −V + n, T/2 ≤ t ≤ T } ⇒ r(t1) ≠ r(t2).

Therefore it is possible to determine whether s1(t) or s2(t) was transmitted perfectly. Another receiver implementation, shown in Fig. 5.33, forms r = ∫_0^{T/2} r(t) dt − ∫_{T/2}^T r(t) dt (the DC noise cancels exactly) and decides r ≥ VT/2 ⇒ 1D, r < VT/2 ⇒ 0D.

[Figure 5.33: A receiver that cancels the DC noise.]

The noise process in this problem is an extreme example of colored noise. Using the signal space ideas developed in this chapter, one can see that the noise lies along one axis only, namely that given by φ1(t) = s1(t)/√E. It does not have any component along φ2(t) = s2(t)/√E. This is illustrated in Fig. 5.34, and it allows one to determine exactly which signal was sent. In this context, another receiver implementation is: r2 = ∫_0^T r(t)φ2(t) dt; r2 > 0 ⇒ 1D, r2 = 0 ⇒ 0D.

[Figure 5.34: Noise projection (on φ1(t) only) and another receiver implementation.]
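The zero-error claim of part (c) is easy to confirm by simulation (a sketch; V, the noise variance, and the bit count are arbitrary choices):

```python
import random

random.seed(1)
V, N0 = 1.0, 4.0               # noise variance deliberately large

def receive(bit):
    n = random.gauss(0.0, N0 ** 0.5)       # one random DC noise level per bit
    # r(t1), r(t2): samples in the first and second half of the bit interval
    r_t1 = V + n
    r_t2 = (V if bit == 0 else -V) + n     # s1 = +V everywhere; s2 flips sign at T/2
    return 0 if abs(r_t1 - r_t2) < 1e-12 else 1

bits = [random.randint(0, 1) for _ in range(10000)]
errors = sum(receive(b) != b for b in bits)
print(errors)   # -> 0: the two-sample receiver is never wrong
```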

P5.16 (Matched Filter)

[Figure 5.35: s(t) + w(t) → h(t) ⇔ H(f) → s_out(t) + w_out(t).]

(a) The response of the filter to the input signal component can be written as

    s_out(t) = F⁻¹{S(f)H(f)} = ∫_{−∞}^{∞} S(f)H(f)e^{j2πft} df
    ⇒ s_out(t0) = ∫_{−∞}^{∞} S(f)H(f)e^{j2πft0} df
    ⇒ [s_out(t0)]² = [∫_{−∞}^{∞} S(f)H(f)e^{j2πft0} df]².        (5.28)

(b) Since the input noise is white with power spectral density (PSD) N0/2, it follows that the PSD of the output noise is (N0/2)|H(f)|². The total power of the output noise is thus

    P_wout = ∫_{−∞}^{∞} (N0/2)|H(f)|² df = (N0/2) ∫_{−∞}^{∞} |H(f)|² df.        (5.29)

From (5.28) and (5.29) the output SNR can be written as

    SNR_out = [s_out(t0)]² / P_wout = [∫_{−∞}^{∞} S(f)H(f)e^{j2πft0} df]² / ((N0/2) ∫_{−∞}^{∞} |H(f)|² df).        (5.30)

(c) Now ∫_{−∞}^{∞} [a(t) + λb(t)]² dt ≥ 0 (always). Considered as a quadratic in λ:

    αλ² + βλ + γ ≥ 0,        (5.31)

with coefficients α, β and γ given as:

    α = ∫_{−∞}^{∞} b²(t) dt,        (5.32)
    β = 2 ∫_{−∞}^{∞} a(t)b(t) dt,        (5.33)
    γ = ∫_{−∞}^{∞} a²(t) dt.        (5.34)

The above inequality requires that the quadratic αλ² + βλ + γ have no two distinct real roots; otherwise it would cross the λ-axis at two points and be less than 0 between them. From the formula for the roots this means ∆ = β² − 4αγ ≤ 0, which, upon substituting for α, β, γ, gives

    [∫_{−∞}^{∞} a(t)b(t) dt]² ≤ ∫_{−∞}^{∞} a²(t) dt ∫_{−∞}^{∞} b²(t) dt,        (5.35)

which is known as the Cauchy-Schwartz inequality. Equality holds when a(t) = Kb(t), where K is an arbitrary (real) scaling factor, i.e., when the shapes of a(t) and b(t) are the same.

(d) Using Parseval's relationship, ∫_{−∞}^{∞} a(t)b(t) dt = ∫_{−∞}^{∞} A(f)B*(f) df (see P2.26, page 69), (5.35) becomes

    [∫_{−∞}^{∞} A(f)B*(f) df]² ≤ ∫_{−∞}^{∞} |A(f)|² df ∫_{−∞}^{∞} |B(f)|² df,        (5.36)

where equality holds when A(f) = KB(f), K an arbitrary (real) scaling constant.

(e) Now let A(f) = H(f) and B*(f) = S(f)e^{j2πft0}; then (5.36) becomes

    [∫_{−∞}^{∞} H(f)S(f)e^{j2πft0} df]² ≤ ∫_{−∞}^{∞} |H(f)|² df ∫_{−∞}^{∞} |S(f)|² df.        (5.37)

Combining (5.30) and (5.37) gives:

    SNR_out ≤ (∫|H(f)|² df ∫|S(f)|² df) / ((N0/2) ∫|H(f)|² df) = ∫_{−∞}^{∞} |S(f)|²/(N0/2) df.        (5.38)

(f) The maximum value of SNR_out is achieved by choosing A(f) = KB(f), i.e., H(f) = KS*(f)e^{−j2πft0}. Finally, the corresponding impulse response of the filter is

    h(t) = Ks(t0 − t),        (5.39)

where K is any real number. Note that to arrive at the above relation, the following two properties of the Fourier transform have been used (x(t) is a real function here):

    1. Time shift: x(t − t0) ⇔ X(f)e^{−j2πft0}.
    2. Time reversal: x(−t) ⇔ X*(f).

[Figure 5.36: Signal set matched to the filter's impulse response: amplitudes ±K, piecewise over [0, T] with breakpoints at T/3 and 2T/3.]
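The conclusion of part (f) can be sanity-checked numerically: among several candidate filters, the time-reversed signal maximizes the output SNR of (5.30), and the maximum equals 2E/N0 as in (5.38). A discrete-time sketch (the pulse shape and N0 are arbitrary choices):

```python
import numpy as np

dt, N0 = 1e-3, 2.0
s = np.sin(np.pi * np.arange(0, 1, dt)) ** 2       # arbitrary finite-energy pulse on [0,1]

def snr_out(h):
    # Discrete version of (5.30): peak output sample squared over output noise power.
    s_peak = np.sum(s * h[::-1]) * dt              # (s*h) convolution sampled at t0 = 1
    noise_power = (N0 / 2) * np.sum(h ** 2) * dt   # (N0/2) * integral of h^2
    return s_peak ** 2 / noise_power

matched = s[::-1]                                  # h(t) = s(t0 - t), with K = 1
others = [np.ones_like(s),
          np.cos(np.pi * np.arange(0, 1, dt)) ** 2,
          np.roll(s[::-1], 100)]
print(all(snr_out(matched) >= snr_out(h) for h in others))          # -> True
E = np.sum(s ** 2) * dt
print(round(float(snr_out(matched)), 4), round(float(2 * E / N0), 4))  # both equal 2E/N0
```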

In our situation we take t0 to be Tb, the bit duration. Therefore h(t) = Ks(Tb − t).

P5.17 (a) The signal set would be ±Kh(Tb − t), where K is a scaling factor that determines the transmitted energy (or average power). The signals plot as in Fig. 5.36. This choice makes h(t) a matched filter to the signal set.

(b) Assume K = 1. The transmitted energy of either signal is

    E = ∫_0^{Tb} s1²(t) dt = ∫_0^{Tb/3} (3t/Tb)² dt + ∫_{Tb/3}^{Tb} (−(3/(2Tb))t + 3/2)² dt = Tb/3   (joules).

    P[bit error] = Q(√(2E/N0)) = Q(√(2Tb/(3N0))) = Q(√(2/(3 rb × 10⁻⁹))) ≤ 10⁻³
    ⇒ rb ≤ 2 × 10⁹ / (3[Q⁻¹(10⁻³)]²) = 6.98 × 10⁷ bits/second = 69.8 Mbps.
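A quick numerical check of part (b) (a sketch; the piecewise-linear shape is the integrand used above, and Q⁻¹(10⁻³) ≈ 3.0902 is a standard tabulated value):

```python
import numpy as np

Tb = 1.0
t = np.linspace(0.0, Tb, 300001)
s1 = np.where(t < Tb / 3, 3 * t / Tb, -1.5 * t / Tb + 1.5)  # the two pieces of the integrand
E = float(np.sum(s1 ** 2) * (t[1] - t[0]))
print(round(E, 4))             # -> 0.3333  (= Tb/3)

Qinv = 3.0902                  # Q^-1(1e-3)
rb_max = 2e9 / (3 * Qinv ** 2)
print(round(rb_max / 1e6, 1))  # -> 69.8  (Mbps)
```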

P5.18 (Antipodal signalling)

[Figure 5.37: The signal s(t), of amplitude A, with breakpoints at Tb/3 and 2Tb/3.]

(a) The impulse response of the filter matched to s(t) is

    h(t) = s(Tb − t) = s(t),        (5.40)

where to arrive at the last equality we recognize that s(t) is even about t = Tb/2. Therefore, the impulse response h(t) looks exactly the same as s(t).

(b) The output of the matched filter when s(t) is applied at the input is

    y(t) = s(t) ∗ h(t) = ∫_0^t s(λ)s(t − λ) dλ,  0 ≤ t ≤ 2Tb,

which rises to its peak value y(Tb) = E = 2A²Tb/3 at the sampling instant t = Tb.

[Figure: the matched filter output y(t), with breakpoints at 2Tb/3, Tb and 4Tb/3.]

… and Table 6.3. If the bits bk are statistically independent and equally probable, then each combination of (ck, ck+n) is equally probable, i.e., P[ck, ck+n] = 1/9.

    ∴ E{ck ck+n} = 1·(1/9) − 1·(1/9) − 1·(1/9) + 1·(1/9) = 0.

Therefore they are uncorrelated. Now Equation (5.127) in the textbook states that:

    S(f) = (|P(f)|²/Tb) Σ_{m=−∞}^{∞} Rc(m) e^{−j2πmfTb},

    Table 6.3
    ck      ck+n    ck · ck+n
     1       1        1
     1      −1       −1
     1       0        0
    −1       1       −1
    −1      −1        1
    −1       0        0
     0       1        0
     0      −1        0
     0       0        0

where Rc(m) is the autocorrelation we have just determined and

    P(f) = F{NRZ pulse} = ∫_0^{Tb} V e^{−j2πft} dt = (V/(πf)) e^{−jπfTb} (e^{jπfTb} − e^{−jπfTb})/(2j)
         = V Tb e^{−jπfTb} sin(πfTb)/(πfTb)
    ⇒ |P(f)|² = V²Tb² sin²(πfTb)/(πfTb)².

Finally,

    S(f) = V²Tb (sin²(πfTb)/(πfTb)²) [Rc(−1)e^{j2πfTb} + Rc(0) + Rc(1)e^{−j2πfTb}]
         = V²Tb (sin²(πfTb)/(πfTb)²) [1/2 − (1/4)(2 cos(2πfTb))]
         = V²Tb (sin²(πfTb)/(πfTb)²) sin²(πfTb)
         = V²Tb sin⁴(πfTb)/(πfTb)².
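Both steps above are easy to verify numerically (a sketch; the values Rc(0) = 1/2 and Rc(±1) = −1/4 are carried over from the earlier parts of the problem):

```python
import numpy as np

# Table 6.3: all nine equally likely (ck, ck+n) combinations, for n > 1.
pairs = [(a, b) for a in (1, -1, 0) for b in (1, -1, 0)]
E_prod = sum(a * b for a, b in pairs) / 9.0
print(E_prod)   # -> 0.0  (uncorrelated)

# Check that Rc(-1)e^{j2pi fTb} + Rc(0) + Rc(1)e^{-j2pi fTb} = sin^2(pi fTb)
# with Rc(0) = 1/2 and Rc(+-1) = -1/4.
f_Tb = np.linspace(0, 3, 1000)                      # f*Tb, dimensionless
bracket = 0.5 - 0.25 * 2 * np.cos(2 * np.pi * f_Tb)
print(bool(np.allclose(bracket, np.sin(np.pi * f_Tb) ** 2)))   # -> True
```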

P6.21 (a) The decision rule of (6.14) is: compute di = √(∫_0^{nTb} [r(t) − Si(t)]² dt) for each possible transmitted sequence and choose the smallest. Since di is obviously ≥ 0, this decision rule can be restated as: compute di² for each possible transmitted sequence and choose the smallest, i.e., compute

    di² = ∫_0^{nTb} r²(t) dt − 2 ∫_0^{nTb} r(t)Si(t) dt + ∫_0^{nTb} Si²(t) dt,  i = 1, ..., M = 2ⁿ,

and choose the smallest.

Now the term ∫_0^{nTb} r²(t) dt is the same for each i and can be discarded. Similarly, ∫_0^{nTb} Si²(t) dt is the same regardless of i, i.e., for Miller modulation each transmitted signal (sequence) has the same energy; it may also be discarded. Therefore the decision rule can be stated as: compute

    −2 ∫_0^{nTb} r(t)Si(t) dt,  i = 1, ..., M = 2ⁿ,

and choose the smallest. The factor 2 does not affect which i yields the smallest relative value, and multiplying through by −1 changes the smallest into the largest. Therefore the decision rule is: compute

    ∫_0^{nTb} r(t)Si(t) dt,  i = 1, ..., M = 2ⁿ,

and choose the largest. Writing this as

    Σ_{j=1}^{n} ∫_{(j−1)Tb}^{jTb} r(t)Sij(t) dt = Σ_{j=1}^{n} [r1^(j), r2^(j)] [si1^(j); si2^(j)],

we get the desired result.

(b) The steps are essentially the same as before, except now one computes the cross-correlation between the signal projections and the signal components along a branch, adds it to the surviving value of the state from which the branch emanates, and compares with all the incoming path values at the state on which the branch terminates. Choose the largest and discard all others. Do this for each state.

    Table 6.4: Cross-correlation table: r1 si1 + r2 si2
    (si1, si2)           0 → Tb    Tb → 2Tb    2Tb → 3Tb    3Tb → 4Tb
    s1(t) → (1, 0)       −0.2       0.2        −0.61        −1.1
    s2(t) → (0, 1)       −0.4      −0.8         0.5          0.1
    s3(t) → (−1, 0)       0.2      −0.2         0.61         1.1
    s4(t) → (0, −1)       0.4       0.8        −0.5         −0.1

(c) The trellis showing the branch correlations, i.e., r1^(j) si1^(j) + r2^(j) si2^(j), is in Fig. 6.47. The pruned trellis showing the survivors and partial path correlations is in Fig. 6.48. To compare with the minimum-distance approach, remember that minimum distance corresponds to maximum correlation, i.e., the relationship is an inverse one.

P6.22 (a) By error probability, what is meant is bit error probability; we are not all that interested in symbol error probability. From the symmetry, we can say that P[bit error] = P[bit error|0T] = P[bit error|s1(t)] (or P[bit error|s2(t)]).

A bit error occurs, when s1(t) is transmitted, whenever the sufficient statistics (r1, r2) are closer to either s3(t) or s4(t) than to s1(t) (or s2(t)). The geometrical picture looks as in Fig. 6.49. Clearly,

    P[bit error] = Q( (√E/√2)/√(N0/2) ) = Q(√(E/N0)).

[Figure 6.47: Trellis labelled with the branch correlations r1^(j) si1^(j) + r2^(j) si2^(j); the received projections are (r1^(1), r2^(1)) = (−0.2, −0.4), (r1^(2), r2^(2)) = (0.2, −0.8), (r1^(3), r2^(3)) = (−0.61, 0.5), (r1^(4), r2^(4)) = (−1.1, 0.1).]

[Figure 6.48: Pruned trellis showing the survivors and the partial path correlations over 0 to 4Tb.]
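The entries of Table 6.4 follow directly from the received projections listed with Fig. 6.47; as a quick sketch:

```python
# Received projections (r1, r2) for the four bit intervals (from Fig. 6.47)
# and the signal-space coordinates of the four Miller signals.
r = [(-0.2, -0.4), (0.2, -0.8), (-0.61, 0.5), (-1.1, 0.1)]
s = {"s1": (1, 0), "s2": (0, 1), "s3": (-1, 0), "s4": (0, -1)}

table = {name: [round(r1 * a + r2 * b, 2) for (r1, r2) in r]
         for name, (a, b) in s.items()}
for name, row in table.items():
    print(name, row)
# s1 [-0.2, 0.2, -0.61, -1.1]
# s2 [-0.4, -0.8, 0.5, 0.1]
# s3 [0.2, -0.2, 0.61, 1.1]
# s4 [0.4, 0.8, -0.5, -0.1]
```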

(b) Matlab works to be added.

P6.23 Matlab works to be added.

[Figure 6.49: Signal space picture for P6.22: the transmitted signal s1(t) at distance √E from the origin; an error occurs if (r1, r2) falls on the far side of the boundary r2 = −r1, which lies a perpendicular distance √E/√2 away.]
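The Matlab work is not included; as a hedged Python sketch, the result of P6.22(a) can be checked by Monte Carlo simulation (E and N0 are arbitrary test values; the error event is crossing the boundary r2 = −r1):

```python
import math, random

random.seed(2)
E, N0 = 1.0, 0.5
sigma = math.sqrt(N0 / 2)            # per-dimension noise standard deviation

trials, errors = 200_000, 0
for _ in range(trials):
    r1 = math.sqrt(E) + random.gauss(0, sigma)   # transmit s1 at (sqrt(E), 0)
    r2 = random.gauss(0, sigma)
    if r1 + r2 < 0:                  # beyond the boundary r2 = -r1 -> bit error
        errors += 1

theory = 0.5 * math.erfc(math.sqrt(E / N0) / math.sqrt(2))   # Q(sqrt(E/N0))
print(round(errors / trials, 4), round(theory, 4))           # the two should agree
```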

Chapter 7

Basic Digital Passband Modulation

P7.1 The antenna diameter is d = λ/4, where the wavelength is λ = c/f = (3 × 10⁸)/f. Thus

    d = c/(4f) = (3 × 10⁸)/(4f).

(a) Direct coupling:

    f = 4 kHz ⇒ d = (3 × 10⁸)/(4 × 4 × 10³) = 18.75 × 10³ (m) = 18.75 (km).

(b) Via modulation with carrier frequency fc = 1.2 GHz:

    d = (3 × 10⁸)/(4fc) = (3 × 10⁸)/(4 × 1.2 × 10⁹) = 62.5 × 10⁻³ (m) = 6.25 (cm).

The above results clearly show that carrier-wave or passband modulation is an essential step for all systems involving radio transmission.

P7.2 Express the NRZ signal in terms of the NRZ-L signal as follows:

    sNRZ(t) = (1/2)[sNRZ−L(t) + 1].        (7.1)

Note that the factor 1/2 scales the autocorrelation by (1/2)² = 1/4, the constant 1 shifts (vertically) the autocorrelation by 1, and sNRZ−L(t) has an autocorrelation of RsNRZ−L(τ). Putting all this together we have

    RsNRZ(τ) = (1/4)RsNRZ−L(τ) + 1/4 = { (1/4)[1 − |τ|/Tb] + 1/4, |τ| ≤ Tb;  1/4, |τ| > Tb }.

Try to generalize this: Let y(t) = K[x(t) + A], where K, A are constants and x(t) has mean value mx and autocorrelation Rx(τ). What then is Ry(τ)?

4

Try to generalize this: Let y(t) = K[x(t) + A] where K, A are constants. x(t) has mean value mx and autocorrelation, Rx (τ ). What then is Ry (τ )?

P7.3 Consider (7.41), P [m2 |m1 ]. This probability is the volume under f (ˆ r2 , rˆ1 |m1 ) in the 1st quadrant as shown below. Note that given m1 , ˆr2 and q ˆr1 are q statistically independent Gaussian random variables, variance = N20 , and means − E2s , E2s , respectively. The volume is therefore ·Z ∞ ¸ ·Z ∞ ¸ Z ∞ Z ∞ f (ˆ r2 |m1 )f (ˆ r1 |m1 )dˆ r2 dˆ r1 = f (ˆ r2 |m1 )dˆ r2 f (ˆ r1 |m1 )dˆ r1 rˆ2 =0

rˆ1 =0

rˆ2 =0

1

rˆ1 =0

A First Course in Digital Communications 



Z ∞  1  P [m2 |m1 ] =  √ q e  2π N0 rˆ2 =0 2

µ ¶2 Es r ˆ2 + 2 µ ¶ − N0 2 2

"





Z ∞  1  dˆ r2   √ q e   2π N0 rˆ1 =0 2

µ ¶2 Es r ˆ1 + 2 µ ¶ − N0 2 2

dy k

Nguyen & Shwedyk

   dˆ r1  

Ãp !# à p ! · µ ¶¸ µ ¶ Es /2 Es /2 Es Es Q 1−Q Q = 1−Q N0 /2 N0 /2 N0 N0

=

we

Graphically the 2 integrals above compute the areas illustrated in Fig. 7.1 r2 rˆ2

rˆ1

2nd integral

 Es  = 1 − Q    N0  m1 ( s1 ( t ) ) r1

f ( rˆ1 | m1 )

0

Sh

f ( rˆ2 | m1 )

Es 2



1st integral = Q   

Es N0

   

&

Figure 7.1: Areas to compute P [m2 |m1 ]. Similarly P [m3 |m1 ] is given by the volume under f (ˆ r2 , rˆ1 |m1 ) in the 2nd quadrant, i.e., by the product of the areas in Fig. 7.2. r2

rˆ1

en

rˆ2

 Es 2   Es  = Q   = Q   2 N 0    N0 

Es 2 r1

Ng

uy

0

Es 2

 Es 2   = Q   = Q  2 N 0     P [ m3 m1 ] = Q 2  

Es   N 0  Es   N 0 

Figure 7.2: Areas to compute P [m3 |m1 ].

Finally P [m4 |m1 ] = P [m2 |m1 ] – as can be seen graphically.
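The product form of P[m2|m1] can be checked by Monte Carlo over the two independent Gaussian projections (a sketch; Es and N0 are arbitrary test values):

```python
import math, random

random.seed(3)
Es, N0 = 1.0, 1.0
sigma = math.sqrt(N0 / 2)
m = math.sqrt(Es / 2)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# Monte Carlo estimate of the first-quadrant volume: r1 has mean +m, r2 mean -m.
N = 200_000
hits = sum((random.gauss(m, sigma) > 0) and (random.gauss(-m, sigma) > 0)
           for _ in range(N))
q = Q(math.sqrt(Es / N0))
print(round(hits / N, 3), round(q * (1 - q), 3))   # the two should agree
```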

P7.4 (a) As in the case of QPSK with Gray mapping, to determine the bit error probability for the above mapping it is necessary to distinguish between the different message errors (or signal errors). Again, because of symmetry it is sufficient to consider only a specific signal, say s1(t). Then the different error probabilities are

    P[s2(t)|s1(t)] = Q(√(2Eb/N0)) [1 − Q(√(2Eb/N0))],        (7.2)
    P[s3(t)|s1(t)] = Q²(√(2Eb/N0)),        (7.3)
    P[s4(t)|s1(t)] = Q(√(2Eb/N0)) [1 − Q(√(2Eb/N0))].        (7.4)

[Figure 7.3: QPSK modulation with anti-Gray mapping: bit patterns 00, 11, 10, 01 are transmitted as s1(t), s2(t), s3(t), s4(t), respectively.]

Note that the above result does not depend on the specific mapping. Then the bit error probability is given by

    P[bit error] = 1.0 P[s2(t)|s1(t)] + 0.5 P[s4(t)|s1(t)] + 0.5 P[s3(t)|s1(t)]
                 = 1.5 Q(√(2Eb/N0)) − Q²(√(2Eb/N0)) ≈ 1.5 Q(√(2Eb/N0)).        (7.5)

(b) Recall that the bit error probability of QPSK employing Gray mapping is Q(√(2Eb/N0)). Compared to (7.5), we see that with anti-Gray mapping the bit error probability is increased by a factor of 3/2. The plots of both error probabilities are shown in Figure 7.4.

´ ³q 2Eb (b) Recall that the bit error probability of QPSK employing Gray mapping is Q N0 . Compared to (7.5) we see that with anti-Gray mapping the bit error probability is increased by a factor of 3/2. The plots of both error probabilities are shown in Figure 7.4.

P7.5 (a) Since the symbols are equiprobable and the channel is AWGN the demodulator is a minimum distance one. The decision boundary and decision regions are shown in Fig. 7.5.

Solutions

Page 7–3

A First Course in Digital Communications

Pr[error]

10

10

10

10

10

−1

−2

we

10

0

−3

−4

−5

Sh

10

dy k

Nguyen & Shwedyk

Gray mapping Anti−Gray mapping

−6

0

2

4 Eb/N0 (dB)

6

8

10

en

&

Figure 7.4: Error performance of QPSK with Gray and Anti-Gray mapping.

2 sin(2π f c t ) Ts

φ2 ( t ) =

r2

Decision region for s1 (t )

ℜ2

Ng

uy

b1b2 01 s 2 (t )

s3 ( t ) b1b2

ℜ3

00

d1

Es

d2

b1b2 11 s1 (t )

π /6

0

r1

ℜ1

φ1 (t ) =

2 cos(2π f c t ) Ts

s 4 (t )

b1b2 10

ℜ4

Figure 7.5: Asymmetric QPSK.

Solutions

Page 7–4

Nguyen & Shwedyk

A First Course in Digital Communications

P [symbol error] = 1 − Pr[symbol correct] = 1 − Pr[symbol correct|s1 (t)]

dy k

√ √ Es cos θ and d2 = Es sin θ, where θ = π/6.

(b) In Fig. 7.5, d1 =

(due to symmetry)

we

= 1 − P [(r1 , r2 ) ∈ ∆¯ sT = − , b2 = 1 P sT = − ¯b2 = 1 2 2 · µ ¶ µ ¶¸ µ ¶ µ ¶ µ ¶ ∆ 3∆ 1 ∆ 3∆ 2 Q √ +Q √ =Q √ +Q √ 2 2N0 2N0 2N 2N0 ¶ µ ¶ µ ¶¸ 0 · µ 3∆ 5∆ 1 ∆ +Q √ −Q √ 2Q √ 2 2N 2N0 2N0 µ ¶ 0 µ ¶ µ ¶ ∆ 1 3∆ 1 5∆ √ √ √ Q + Q − Q . 2 2 2N0 2N0 2N0 ·

Page 8–3

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

The arithmetic averaging of the two individual bit error probabilities gives: µ µ µ ¶ ¶ ¶ 3 ∆ 3∆ 1 5∆ P [bit error] = Q √ +Q √ − Q √ (same as in (a)). 4 4 2N0 2N0 2N0

Sh

we

The first observation is that the individual bit errors are not the same, the probability of b2 being in error is higher by approximately a factor of 2. The overall bit error probability is somewhere between the two individual bit error probabilities. If one makes the reasonable approximations based on the result of P8.2(b) then ¶ µ 3 ∆ P [bit error] = Q √ 4 2N0 µ ¶ 1 ∆ √ P [b1 in error] = Q 2 2N0 µ ¶ ∆ P [b2 in error] = Q √ 2N0 ³ ´ ∆ Using the approximate expression we have P [bit error] = 12 Q √2N , which is close to 0 the exact expression. P8.4 (a) From Fig. 8.27 in the textbook, one can obtain the decision rule for bit b2 as follows: b2 =0

&

|rI | R ∆.

(8.5)

b2 =1

(b) Denote s1 (t) = {00}, s2 (t) = {01}, s3 (t) = {11}, s4 (t) = {10}. The error probabilities can be calculated as: 4

1X P[b1 in error|si (t)] 4 i=1 · µ ¶ µ ¶ µ ¶ µ ¶¸ 1 3∆ ∆ ∆ 3∆ Q √ +Q √ +Q √ +Q √ 4 2N0 2N0 2N0 2N0 · µ ¶ µ ¶¸ 1 ∆ 3∆ Q √ +Q √ . (8.6) 2 2N0 2N0

en

P[b1 in error] = =

uy

=

4

Ng

1X 1 P[b2 in error] = P[b2 in error|si (t)] = (P[b2 in error|s1 (t)] + P[b2 in error|s2 (t)]) 4 2 i=1 · µ ¶ µ ¶ µ ¶ µ ¶¸ ∆ 5∆ ∆ 3∆ 1 Q √ −Q √ +Q √ +Q √ = 2 2N0 2N0 2N0 2N0 µ ¶ µ ¶ µ ¶ ∆ 1 3∆ 1 5∆ = Q √ + Q √ − Q √ . (8.7) 2 2 2N0 2N0 2N0 Then the average bit error probability can be calculated as P[bit error] = =

P[b1 in error] + P[b2 in error] µ ¶2 µ ¶ µ ¶ 3 ∆ 1 3∆ 1 5∆ Q √ + Q √ − Q √ . 4 2 4 2N0 2N0 2N0

(8.8)

The error probabilities are identical to those of P8.3. Solutions

Page 8–4

A First Course in Digital Communications 00

01

3∆ − 2

∆ − 2

0

11

10

∆ 2

3∆ 2

Figure 8.1

rI

dy k

Nguyen & Shwedyk

we
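Equation (8.8) can be checked by simulating the Gray-mapped 4-PAM constellation of Fig. 8.1 (a sketch; ∆ and N0 are arbitrary test values):

```python
import math, random

random.seed(4)
delta, N0 = 2.0, 1.0
sigma = math.sqrt(N0 / 2)
levels = {-1.5 * delta: "00", -0.5 * delta: "01",
           0.5 * delta: "11",  1.5 * delta: "10"}   # Gray mapping of Fig. 8.1
points = list(levels)

def demap(rI):
    """Symbol-by-symbol: pick the nearest level, then read its bits."""
    return levels[min(points, key=lambda a: abs(rI - a))]

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

N, bit_errs = 100_000, 0
for _ in range(N):
    a = random.choice(points)
    rx = demap(a + random.gauss(0, sigma))
    bit_errs += (levels[a][0] != rx[0]) + (levels[a][1] != rx[1])

sim = bit_errs / (2 * N)
x = delta / math.sqrt(2 * N0)
theory = 0.75 * Q(x) + 0.5 * Q(3 * x) - 0.25 * Q(5 * x)   # Eq. (8.8)
print(round(sim, 3), round(theory, 3))                    # the two should agree
```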

P8.5 (a) With symbol demodulation, P [b1 in error] remains unchanged from P8.3(a) since the pattern for b1 is identical. On the other hand, P [b2 in error] = P [b2 in error|b2 = 0]P [b2 = 0] + P [b2 in error|b2 = 1]P [b2 = 1] 1 1 = P [b2 in error|b2 = 0] + P [b2 in error|b2 = 1] 2 2 ¯ ¯ ¸ ¸ · ¯ ¯ 3∆ 3∆ ¯b2 = 0 P −∆ < rI < 0¯¯ sT = − , b2 = 0 P sT = − 2 2 ¯ ¯ ¯ · ¸ · ¸ ¯ 3∆ 3∆ ¯¯ ¯ +P rI > ∆¯ sT = − , b2 = 0 P sT = − b2 = 0 2 2 ¯ ¯ ¯ ¸ · · ¸ ¯ ∆¯ ∆ +P −∆ < rI < 0¯¯ sT = , b2 = 0 P sT = ¯¯b2 = 0 2 2 ¯ ¯ · ¸ · ¸ ¯ ∆ ∆ ¯¯ ¯ +P rI > ∆¯ sT = , b2 = 0 P sT = ¯b2 = 0 2 2 ¶ µ ¶ µ ¶ · µ 3∆ 5∆ ∆ 1 −Q √ +Q √ Q √ 2 2N 2N 2N µ 0 ¶ µ 0 ¶ µ 0 ¶¸ ∆ 3∆ ∆ +Q √ −Q √ +Q √ 2N0 2N0 2N0 µ ¶ µ ¶ µ ¶ 3 ∆ 3∆ 1 5∆ √ √ √ Q −Q + Q 2 2 2N0 2N0 2N0 ¶ µ ¶ µ ¶ µ 1 3∆ 1 5∆ ∆ − Q √ + Q √ . Q √ 4 4 2N0 2N0 2N0

&

P [b2 in error|b2 = 0] =

Sh

·

en

=

=

uy

⇒ P [bit error] =

Compared ³ ´ with Gray coding ³the bit´ error probability is somewhat larger, basically ∆ ∆ Q √2N as compared to 34 Q √2N , but not dramatically so. 0

0

Ng

(b) Now consider demodulation directly to the bits. The decision rules are: b1 =1

rI R 0 b1 =0

½

(rI < −∆) or (0 < rI < ∆) ⇒ b2 = 0 (−∆ < rI < 0) or (rI > 0) ⇒ b2 = 1

In terms of error probabilities, P [b1 in error] should be the same as in P8.4(b). P [b2 in error]

Solutions

Page 8–5

Nguyen & Shwedyk

A First Course in Digital Communications

is determined as follows:

P[b2 in error|b2 = 0]
= P[−∆ < rI < 0 | sT = −3∆/2, b2 = 0] P[sT = −3∆/2 | b2 = 0]
+ P[rI > ∆ | sT = −3∆/2, b2 = 0] P[sT = −3∆/2 | b2 = 0]
+ P[−∆ < rI < 0 | sT = ∆/2, b2 = 0] P[sT = ∆/2 | b2 = 0]
+ P[rI > ∆ | sT = ∆/2, b2 = 0] P[sT = ∆/2 | b2 = 0]
= (1/2)[Q(∆/√(2N0)) − Q(3∆/√(2N0)) + Q(5∆/√(2N0))] + (1/2)[2Q(∆/√(2N0)) − Q(3∆/√(2N0))]
= (3/2)Q(∆/√(2N0)) − Q(3∆/√(2N0)) + (1/2)Q(5∆/√(2N0)).

P[b2 in error|b2 = 1]
= (1/2)P[rI < −∆ | sT = −∆/2, b2 = 1] + (1/2)P[0 < rI < ∆ | sT = −∆/2, b2 = 1]
+ (1/2)P[rI < −∆ | sT = 3∆/2, b2 = 1] + (1/2)P[0 < rI < ∆ | sT = 3∆/2, b2 = 1]
= (1/2){Q(∆/√(2N0)) + [Q(∆/√(2N0)) − Q(3∆/√(2N0))]} + (1/2){Q(5∆/√(2N0)) + [Q(∆/√(2N0)) − Q(3∆/√(2N0))]}
= (3/2)Q(∆/√(2N0)) − Q(3∆/√(2N0)) + (1/2)Q(5∆/√(2N0)).

⇒ P[b2 in error] = (3/2)Q(∆/√(2N0)) − Q(3∆/√(2N0)) + (1/2)Q(5∆/√(2N0))

P[bit error] = (1/2)P[b1 in error] + (1/2)P[b2 in error]
= (1/2)[2Q(∆/√(2N0)) − (1/2)Q(3∆/√(2N0)) + (1/2)Q(5∆/√(2N0))]
= Q(∆/√(2N0)) − (1/4)Q(3∆/√(2N0)) + (1/4)Q(5∆/√(2N0)).

Again, the result is the same as demodulating to the symbol.
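As a quick numerical sanity check on the expressions above (a sketch; the helper name and the test values of ∆ and N0 are ours, and Q(·) is implemented with the complementary error function):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pb_natural_4ask(delta, N0):
    # P[bit error] for 4-ASK with the natural (non-Gray) mapping derived above:
    # Q(a) - (1/4)Q(3a) + (1/4)Q(5a), with a = delta/sqrt(2 N0).
    a = delta / math.sqrt(2.0 * N0)
    return Q(a) - 0.25 * Q(3 * a) + 0.25 * Q(5 * a)
```

At moderate-to-high SNR the result is dominated by the Q(∆/√(2N0)) term and, as noted in the comparison with Gray coding, exceeds the (3/4)Q(∆/√(2N0)) leading term of the Gray-coded mapping.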


P8.6 (a) Figure 8.2 shows the Gray-coded 8-ASK constellation: bit patterns b1b2b3 = 000, 001, 011, 010, 110, 111, 101, 100 assigned to the amplitudes rI = −7∆/2, −5∆/2, −3∆/2, −∆/2, ∆/2, 3∆/2, 5∆/2, 7∆/2, respectively. The decision rules are:

rI ≷ 0 (b1 = 1 if rI > 0, b1 = 0 if rI < 0);
|rI| ≷ 2∆ (b2 = 0 if |rI| > 2∆, b2 = 1 if |rI| < 2∆);
If {|rI| > 3∆ or |rI| < ∆} ⇒ b3 = 0, else b3 = 1.

Graphically, the decision regions are illustrated in Fig. 8.3.

Figure 8.3: Decision regions - b1 = 0 for rI < 0 and b1 = 1 for rI > 0; b2 = 1 for |rI| < 2∆ and b2 = 0 for |rI| > 2∆; b3 = 1 for ∆ < |rI| < 3∆ and b3 = 0 for |rI| < ∆ or |rI| > 3∆.


(b) Consider first bit b1:

P[b1 in error] = P[b1 in error|b1 = 0]P[b1 = 0] + P[b1 in error|b1 = 1]P[b1 = 1]
= P[b1 in error|b1 = 0]  (by symmetry)
= P[rI > 0 | sT is one of −7∆/2, −5∆/2, −3∆/2, −∆/2, b1 = 0] × P[sT is one of −7∆/2, −5∆/2, −3∆/2, −∆/2 | b1 = 0]  (each = 1/4)

⇒ P[b1 in error] = (1/4)[Q(∆/√(2N0)) + Q(3∆/√(2N0)) + Q(5∆/√(2N0)) + Q(7∆/√(2N0))].

Now consider bit b2:

P[b2 in error] = P[b2 in error|b2 = 0]P[b2 = 0] + P[b2 in error|b2 = 1]P[b2 = 1]


P[b2 in error|b2 = 0]
= P[−2∆ < rI < 2∆ | sT is one of −7∆/2, −5∆/2, 5∆/2, 7∆/2, b2 = 0] × P[sT is one of −7∆/2, −5∆/2, 5∆/2, 7∆/2 | b2 = 0]  (each = 1/4)
= 2[Q(3∆/√(2N0)) − Q(11∆/√(2N0)) + Q(∆/√(2N0)) − Q(9∆/√(2N0))](1/4)
= (1/2)[Q(∆/√(2N0)) + Q(3∆/√(2N0)) − Q(9∆/√(2N0)) − Q(11∆/√(2N0))].

P[b2 in error|b2 = 1]
= P[rI < −2∆ or rI > 2∆ | sT is one of −3∆/2, −∆/2, ∆/2, 3∆/2, b2 = 1] × (1/4 each)
= 2[Q(∆/√(2N0)) + Q(3∆/√(2N0)) + Q(5∆/√(2N0)) + Q(7∆/√(2N0))](1/4)
= (1/2)[Q(∆/√(2N0)) + Q(3∆/√(2N0)) + Q(5∆/√(2N0)) + Q(7∆/√(2N0))].

⇒ P[b2 in error] = (1/4)[2Q(∆/√(2N0)) + 2Q(3∆/√(2N0)) + Q(5∆/√(2N0)) + Q(7∆/√(2N0)) − Q(9∆/√(2N0)) − Q(11∆/√(2N0))].

Lastly consider bit b3:

P[b3 in error|b3 = 0]
= P[−3∆ < rI < −∆ or ∆ < rI < 3∆ | sT is one of −7∆/2, −∆/2, ∆/2, 7∆/2, b3 = 0] × (1/4 each)
= (1/4){2[Q(∆/√(2N0)) − Q(5∆/√(2N0)) + Q(9∆/√(2N0)) − Q(13∆/√(2N0))] + 2[Q(∆/√(2N0)) + Q(3∆/√(2N0)) − Q(5∆/√(2N0)) − Q(7∆/√(2N0))]}
= (1/2)[2Q(∆/√(2N0)) + Q(3∆/√(2N0)) − 2Q(5∆/√(2N0)) − Q(7∆/√(2N0)) + Q(9∆/√(2N0)) − Q(13∆/√(2N0))].

P[b3 in error|b3 = 1]
= P[(rI < −3∆) or (−∆ < rI < ∆) or (rI > 3∆) | sT is one of −5∆/2, −3∆/2, 3∆/2, 5∆/2, b3 = 1] × (1/4 each)
= (1/4)·2·{[Q(∆/√(2N0)) + Q(3∆/√(2N0)) − Q(7∆/√(2N0)) + Q(11∆/√(2N0))] + [Q(∆/√(2N0)) + Q(3∆/√(2N0)) − Q(5∆/√(2N0)) + Q(9∆/√(2N0))]}
= (1/2)[2Q(∆/√(2N0)) + 2Q(3∆/√(2N0)) − Q(5∆/√(2N0)) − Q(7∆/√(2N0)) + Q(9∆/√(2N0)) + Q(11∆/√(2N0))].

⇒ P[b3 in error] = (1/4)[4Q(∆/√(2N0)) + 3Q(3∆/√(2N0)) − 3Q(5∆/√(2N0)) − 2Q(7∆/√(2N0)) + 2Q(9∆/√(2N0)) + Q(11∆/√(2N0)) − Q(13∆/√(2N0))].

Using the result of P8.2 we ignore, with confidence, all terms except Q(∆/√(2N0)). Therefore:

P[b1 in error] ≈ (1/4)Q(∆/√(2N0))
P[b2 in error] ≈ (1/2)Q(∆/√(2N0))
P[b3 in error] ≈ Q(∆/√(2N0))

∴ P[bit error] = (1/3)·(7/4)·Q(∆/√(2N0)) = (7/12)Q(∆/√(2N0)).
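The exact per-bit expressions and the (7/12)Q(·) approximation can be compared numerically (a sketch; the function name and the test values are ours):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def gray_8ask_bit_errors(delta, N0):
    # Exact P[b1], P[b2], P[b3] for Gray-coded 8-ASK (the three results above).
    Qm = lambda m: Q(m * delta / math.sqrt(2.0 * N0))
    p1 = 0.25 * (Qm(1) + Qm(3) + Qm(5) + Qm(7))
    p2 = 0.25 * (2*Qm(1) + 2*Qm(3) + Qm(5) + Qm(7) - Qm(9) - Qm(11))
    p3 = 0.25 * (4*Qm(1) + 3*Qm(3) - 3*Qm(5) - 2*Qm(7)
                 + 2*Qm(9) + Qm(11) - Qm(13))
    return p1, p2, p3
```

At high SNR the average of the three tends to (7/12)Q(∆/√(2N0)), in line with the simplification above.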

P8.7 Consider rectangular QAM of λ bits with λI bits assigned to the I axis, λQ bits to the Q axis (λ = λI + λQ ). Gray code the λI , λQ bit patterns separately, concatenate them and assign the corresponding bit pattern to the I, Q signal.


Example: λI = λQ = 2. Gray code for either axis is 00, 01, 11, 10. Therefore the result is as shown in Fig. 8.4.

P8.8 (a) See Fig. 8.4 for the bit pattern b1b2b3b4. Then the decision rules are:

rQ ≷ 0 (b1 = 1 if rQ > 0, b1 = 0 if rQ < 0);  |rQ| ≷ ∆ (b2 = 0 if |rQ| > ∆, b2 = 1 if |rQ| < ∆);
rI ≷ 0 (b3 = 1 if rI > 0, b3 = 0 if rI < 0);  |rI| ≷ ∆ (b4 = 0 if |rI| > ∆, b4 = 1 if |rI| < ∆).

Note: rI, rQ are statistically independent Gaussian random variables, variance N0/2 and a mean value which depends on the transmitted signal.

(b) Since rectangular 16-QAM is basically 2 independent 4-ASK modulations the results of P8.2 apply directly.

Figure 8.4: Gray-coded rectangular 16-QAM, bit pattern b1b2b3b4: the λQ Gray code 0, 1 (with second bit 0, 1) along the Q axis and the λI Gray code 00, 01, 11, 10 along the I axis, giving labels 0000 through 1000.

(c) Make this bit a b1 or b3 bit.


P8.9 (a) Constellation (a): The average symbol energy is equal to the squared distance of any signal from the origin. In terms of ∆ this is Es = (∆/(2 sin 22.5°))² = 1.71∆² joules/symbol.

Constellation (b): The sum of the squared distances of each signal from the origin is 4(∆²/4 + ∆²/4) + 4(9∆²/4 + ∆²/4) = 12∆². The average energy is Es = 12∆²/8 = 1.5∆² joules/symbol.

Constellation (c): Again the sum of the squared distances is 4(∆²/4 + ∆²/4) + 4(∆ + ∆/√2)² = (8 + 4√2)∆². Therefore Es = (1 + √2/2)∆² = 1.71∆² joules/symbol.

Constellation (d): The sum of the squared distances is 2(∆²/4) + 2(9∆²/4) + 4(∆²/4 + ∆²) = 10∆². Therefore Es = 10∆²/8 = 1.25∆² joules/symbol.

So from most efficient to least efficient the ranking is (d), (b), (c), (a). Note: Since each symbol (in each constellation) represents 3 bits, the average bit energy, Eb = Es /3 joules/bit.
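The four averages can be verified quickly (energies in units of ∆²; the per-constellation squared-distance sums are those computed above, and the variable names are ours):

```python
import math

# Average symbol energies in units of Delta^2 for the four 8-ary constellations.
Es_a = (1.0 / (2.0 * math.sin(math.radians(22.5)))) ** 2        # 8-PSK radius squared
Es_b = (4 * (0.25 + 0.25) + 4 * (2.25 + 0.25)) / 8.0            # sum 12, /8
Es_c = (8 + 4 * math.sqrt(2.0)) / 8.0                           # sum (8+4*sqrt(2)), /8
Es_d = (2 * 0.25 + 2 * 2.25 + 4 * (0.25 + 1.0)) / 8.0           # sum 10, /8
```

Note that constellations (a) and (c) come out numerically identical (both 1 + √2/2 ≈ 1.71), consistent with the tie in the ranking.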

(b) See Fig. 8.5. (c) See Fig. 8.6.

(d) The triangular constellation is most efficient at 1.125∆² joules/symbol, which is 10 log10(1.25/1.125) = 0.46 dB better than the best constellation in Fig. 8.28.

Figure 8.5: Gray-coded 8-point constellation in the (φ1(t), φ2(t)) plane, using the Gray code 0, 1 for the Q-axis bit and 00, 01, 11, 10 for the I-axis bits, giving labels 000 through 100.

Figure 8.6: The marked signals are most susceptible to error. They have the least space to maneuver in, at least the corresponding received signal does.

P8.10 (a) Fig. 8.7 shows the signal constellation with the signals to be Gray coded numbered. A solution is also given and explained on the figure.


(b) Assume that all the 16 signal points are equally likely. The optimum decision boundaries of the minimum distance receiver are also sketched in Fig. 8.7.


A graph of the signals to be Gray coded and their nearest neighbours looks as follows:

1 → 6 → 11
2 → 7 → 12
3 → 8 → 9
4 → 5 → 10

The 4-bit Gray code is: 0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000. Somewhat heuristically assign the bit patterns to signals as follows: 0000 → 1, 0001 → 2, 0011 → 3, 0010 → 4, 0110 → 5, 0111 → 10, 0101 → 7, 1101 → 12, 1010 → 11, 1001 → 9, 1011 → 8, 1000 → 6. The remaining 4 bit patterns (0100, 1100, 1111, 1110) can be arbitrarily assigned to the 4 remaining signals.

Figure 8.7: V.29 constellation and “Gray mapping”.


P8.11 (a) The two constellations have roughly the same symbol error performance since the minimum distance for both constellations is 2. To see which one is more power-efficient, one needs to compute the average (or total) energies for the two constellations.

Ē(a) = (1/16)·4·{(3² + 3²) + (1² + 1²) + 2(3² + 1²)} = 10.0 (joules)

Ē(b) = (1/16)·4·[(√2)² + (√2 + 2)² + (√2 + 4)² + (√2 + 6)²] = 24.49 (joules)

From the above results it can be seen that constellation (a) is much more power-efficient than constellation (b).

(b) See Fig. 8.8. Gray code by trial and error - but intelligent trial and error.
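A two-line check of the two averages (coordinates as in the sums above; variable names are ours):

```python
import math

# Average energies of the two 16-point constellations of P8.11(a).
E_a = 4 * ((3**2 + 3**2) + (1**2 + 1**2) + 2 * (3**2 + 1**2)) / 16.0
r2 = math.sqrt(2.0)
E_b = 4 * (r2**2 + (r2 + 2)**2 + (r2 + 4)**2 + (r2 + 6)**2) / 16.0
```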

Figure 8.8: Gray-coded 16-point constellation in the (φ1(t), φ2(t)) plane with labels 0000 through 1000.

(c) See Fig. 8.8. The four outer (corner) signals are least susceptible to error since they have the least nearest neighbors, hence and the most space for the received signal to roam in.

P8.12 (a) By observation, dmin is maximized when ∆h = ∆v = ∆ (= dmin), which means that tan θ = (∆/2)/(3∆/2) = 1/3 ⇒ θ = 18.4°.

Page 8–13

Nguyen & Shwedyk

A First Course in Digital Communications φQ (t ) 101

d min

0 0

110

dy k

111

d min

φI (t )

000

001

011

010

00

01

11

10

we

vertical bits →

100

1

horizontal bits →

Figure 8.9

0

d min = 2 Es sin ( 22.50 ) = 0.765 Es

Sh

450

Figure 8.10



uy

en

&

d min 2



2 d min ∆2 + = ∆2 4 4

⇒d

min

= 3∆

Figure 8.11

Ng

P8.13 (a) See Fig. 8.11. (b) – Have 6 signals with energy = d2min = 3∆2 joules. – 1 signal with zero energy. ³ ´2 ¡ ¢ – 1 signal with (dmin cos 30◦ )2 + 3d2min = 34 + 49 3∆2 = 9∆2 . Therefore Es = Eb =

[6(3) + 9] ∆2 27 = ∆2 joules/signal. 8 8 Es (joules/signal) 9 = ∆2 joules/bit. 3( bits/signal) 8

(c) See Fig. 8.12. (d) No, not possible. The signal at the origin has 6 nearest neighbors. Not possible to find six 3-bit patterns that differ from its 3-bit pattern by only 1 bit. At most there are three 3-bit patterns. Solutions

Page 8–14

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

rQ decision

boundaries

rI

we

0

Sh

Figure 8.12 Q

1010

0010

1000

1100

&

1110

0110

0111

en

1111

R1

0100



π /12

I

R2



0101

1101

0011

uy

0000

0001

1011

1001

Figure 8.13: The 16APSK constellation used in DVB-S2.

Ng

P8.14 (a) From Fig. 8.13, one can calculate R1 , R2 in terms of ∆ as follows: ∆ 2

π ∆ ⇒ R2 = π = 1.932∆. 12 2 sin 12 √ ∆ ∆ = R1 2 ⇒ R1 = √ = 0.707∆. 2 = R2 sin

(8.9) (8.10)

The average symbol energy is: Es(1) =

Solutions

¢ ¢ 1 ¡ 1 ¡ 12R22 + 4R12 = 44.79∆2 + 2∆2 = 2.9245∆2 . 16 16

(8.11)

Page 8–15

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

Q (8,8)

R2 R1

we

I

π

∆ 8



Sh



π

6

Figure 8.14: The modified 16APSK constellation.

&

(b) Similarly, from Fig. 8.14, R1 , R2 can be calculated as: R1 =

en

R2 =

∆ = 1.307∆. 2 sin π8 ∆ π = 2.073∆. π + ∆ cos 2 tan 8 6

(8.12) (8.13)

The average symbol energy is:

uy

Es(2) = =

¢ 1¡ 2 ¢ 1 ¡ 2 8R1 + 8R22 = R1 + R22 16 2 ¢ 1¡ 2 1.708∆ + 4.297∆2 = 3.003∆2 . 2

(8.14)

Ng

Since the minimum distances of both constellations are the same, the two constellations perform approximately identically in AWGN. The modified constellation has a larger (2) (1) average energy (Es > Es ). Therefore it is less power-efficient compared to DVB-S2 16APSK. Furthermore, the demodulator for the (8,8) signal constellation appears to be more complicated than that of the (4,12) constellation. But to back this up you should propose block diagrams of the demodulators.

(c) The (8,8) constellation cannot be Gray coded. This is based on the observation that 3 signals which are at equal distance (dmin ) from each other, i.e., lie on the vertices of an equilateral triangle, cannot be Gray coded. This is true regardless of the length of the bit sequence. Proof: 1) Let a, b, c be 3 unique n-bit sequences.

Solutions

Page 8–16

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

2) Let a and b differ in one bit, say in the jth position. 3) Now suppose c differs from a by only one bit. This cannot be in the jth position because then it would be identical to b. So let it differ in the ith position. 4) But this means that c differs from b in the jth and ith positions, hence violating the Gray mapping rule. Remember Gray code means that 2 binary sequences that are at minimum distance differ by only one bit.

we

The (4,12) constellation, however, can be Gray coded. See a Gray code table for 4-bit sequences in Fig. 8.15. 0000

0000

0001

0100

0011

1100

0010

1000

0110

0001

0101 0100 1100 1101 1111

0011

Sh

0111

For the signals on the inner circle

0110

0111 0101

1101

For the signals on the outer circle

1111

&

1110

0010

1110

1011

1010

1001

1011

1000

1001

en

1010

Figure 8.15

P8.15 (a) The probability of symbol error can be written as

uy

P [symbol error] = P [(error on I axis) or (error on Q axis)] = P [(error on I axis)] + P [(error on Q axis)] −P [(error on I axis) and (error on Q axis)]

Ng

Now P [(error on I axis)] is that of M -ASK with M = MI = 2λI (number of signal components along I axis). Similarly P [(error on Q axis)] is that of M -ASK with M = MQ = 2λQ . Finally P [(error on I axis) and (error on Q axis)] = P [(error on I axis)]P [(error on Q axis)] because the 2 events are statistically independent. This is because the noise components wI , wQ along the respective I and Q axes are statistically independent. Using Eqn. (8.18) on page 307 we obtain ¶ ¶ µ µ 2(MQ − 1) 2(MI − 1) ∆ ∆ + P [symbol error] = Q √ Q √ MI MQ 2N0 2N0 µ ¶ 4(MI − 1)(MQ − 1) 2 ∆ − Q √ MI MQ 2N0

Solutions

Page 8–17

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

Since it is more meaningful to express the error probability in terms of Eb , the average transmitted energy per bit, we find the relationship between Eb and ∆. First we determine the average transmitted energy per symbol (or signal). Now the transmitted signal can be written as r r ∆ 2 ∆ 2 sij (t) = (2i − 1 − MI ) cos(2πfc t) + (2i − 1 − MQ ) cos(2πfc t) 2 Ts 2 Ts i = 1, 2, . . . , MI = 2λI ; j = 1, 2, . . . , MQ = 2λQ 2

2

we

The energy of the (i, j)th signal is Eij = (2i − 1 − MI )2 ∆4 + (2j − 1 − MQ )2 ∆4 . Therefore MQ MQ MI X MI X X £ ¤ 1 ∆2 X Es = Eij = (2i − 1 − MI )2 + (2j − 1 − MQ )2 MI · MQ 4m i=1 j=1

i=1 j=1

(MI2 −1)M , 3

Sh

PMI PMQ (2i − 1 − MI )2 (other sum is the same except MI , MQ i=1 j=1 nP £ ¤ PMQ o MI 2 − 4(1 + M )i + (1 + M )2 are interchanged). It equals 4i I I i=1 j=1 1 . Using the P Pn n(n+1) n ; and straightforward algebra, the identities k=1 k 2 = n(n+1)(2n+1) k=1 k = 6 2 Consider the sum

where M = MI · MQ = 2λ . n o 2 2 + M 2 − 2 joules/symbol. More algebra yields Es = ∆ M I Q 12 √ √ 2 3 log M E Now Eb = Eλs = logEsM joules/bit which means that ∆ = q 2 2 2 b . Therefore

&

sum becomes

MI +MQ −2

2

! 6 log2 M Eb P [symbol error] = 2 −2 · N MI2 + MQ 0 Ãs ! 2(MQ − 1) 6 log2 M Eb + Q 2 −2 · N MQ MI2 + MQ 0 Ãs ! 4(MI − 1)(MQ − 1) 2 6 log2 M Eb Q − 2 −2 · N M MI2 + MQ 0

Ãs

uy

en

2(MI − 1) Q MI

Ng

It is worthwhile, because of all the algebra, to see if the above error probability √ expression reduces to (8.47), page 320 of the text for square QAM, i.e., MI = MQ = M . Also Eb plots of P [symbol error] versus N for non-square QAM are also of interest. To reduce 0 λ−1 the possibilities consider the special case of λI = λQ + 1 ⇒ λI = λ+1 2 , λQ = 2 , λ odd. Special case but the most practical one. Left as a further exercise. It is also of interest to see how the average energies divide between the inphase and quadrature axes. Along the quadrature axis we have MQ 1 X ∆2 ∆2 2 EQ = = (MQ − 1) (2i − 1 − MI )2 MQ 4 12

joules/“quadrature symbol”

j=1

Similarly EI = Solutions

∆2 2 12 (MI

− 1)

joules/“inphase symbol”. Page 8–18

Nguyen & Shwedyk

A First Course in Digital Communications Es 2 ,

dy k

Note EQ + EI = Es (as would be expected); further for square QAM, EI = EQ = i.e., the energy is divided evenly. Now consider non-square QAM M2 − 1 EI 22λI − 1 = I2 = 2λ EQ MQ − 1 2 Q −1 Now let λI = λQ + 1. Then the ratio becomes

As λ becomes large,

EI EQ

we

EI 22λ − 1 = 2λ , EQ 2 −1

λ odd.

→ 4, indeed it does this quite rapidly.

Sh

(b) The union upper bound is obtained by simply ignoring the overlapping regions, i.e., P [(error on I axis) and (error on Q axis)] or the Q2 (·) term. P8.16 In general, P [symbol error] =

M X

P [error|si (t)] P [si (t)]

i=1

M 1 X P [error|si (t)] M

(since signals are equally probable)

&

=

i=1

≤ P [worst-case conditional error]

Ng

uy

en

From the geometry the worst-case conditional error occurs when an “inner” signal is transmitted and equals the volume under the 2-dimensional pdf in the region outside the square or 1 − (volume under pdf inside the square). See Fig. 8.16. ∆

∆ signal point

Figure 8.16 ·

¶¸2

µ

Volume under pdf inside the square = 1 − 2Q

√∆ 2 N0 /2

"

Ã

P [symbol error] ≤ 1 − 1 − 2Q

Solutions

. Therefore,

∆ p 2 N0 /2

!#2 .

Page 8–19

Nguyen & Shwedyk

A First Course in Digital Communications √ √ q 2 3 Es . 2 −2 MI2 +MQ

It then follows that "

dy k

Now from P8.15, ∆ =

Ãs

P [symbol error] ≤ 1 − 1 − 2Q

6Es 2 2 − 2)N (MI + MQ 0

!#2

.


A somewhat looser upper bound can be obtained by noting that 2M ≤ MI² + MQ² < 2M². For square QAM, MI² + MQ² = 2M and the result, Eqn. (8.48), follows immediately. For non-square QAM, where MI² + MQ² < 2M², the reasoning is as follows: replacing MI² + MQ² by 2M² ⇒ the argument of the Q(·) function is smaller ⇒ that Q(·) is larger ⇒ 1 − 2Q(·) is smaller ⇒ [1 − 2Q(·)]² is smaller ⇒ 1 − [1 − 2Q(·)]² is larger ⇒

P[symbol error] ≤ 1 − [1 − 2Q(√(6Es/((2M² − 2)N0)))]².   (8.15)

Note that 6Es/((2M² − 2)N0) = 3Es/((M² − 1)N0) = 3λEb/((M² − 1)N0).
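Both bounds can be compared numerically; by the replacement argument above, the looser bound (8.15) must always lie above the tighter one (helper names and test values are ours):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(MI, MQ, EsN0):
    # Tighter bound: 1 - [1 - 2Q(sqrt(6 Es / ((MI^2 + MQ^2 - 2) N0)))]^2
    return 1.0 - (1.0 - 2.0 * Q(math.sqrt(6.0 * EsN0 / (MI**2 + MQ**2 - 2)))) ** 2

def looser_bound(MI, MQ, EsN0):
    # Bound (8.15): MI^2 + MQ^2 replaced by 2 M^2, M = MI*MQ
    M = MI * MQ
    return 1.0 - (1.0 - 2.0 * Q(math.sqrt(6.0 * EsN0 / (2 * M**2 - 2)))) ** 2
```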


Remark: We show that indeed MI² + MQ² ≥ 2M, or 2^(2λI) + 2^(2λQ) ≥ 2·2^λ. Let λI = λQ + k, k ≥ 1 ⇒ λI = (λ+k)/2, λQ = (λ−k)/2. Then 2^(2λI) + 2^(2λQ) = 2^λ 2^k + 2^λ 2^(−k) = 2^λ(2^k + 2^(−k)), and since 2^k + 2^(−k) > 2 for any k ≥ 1, indeed MI² + MQ² > 2M. The inequality MI² + MQ² < 2M² is quite obvious.

P8.17 The verification involves 2 changes of variable. First, in the integral w.r.t. λ, let x = √(2/N0)λ. The integral becomes

(1/√(2π)) ∫(x = −∞ .. √(2/N0)r1) e^(−x²/2) dx.

Now let y = √(2/N0)r1. Then

P[correct] = ∫(y = −∞ .. ∞) [(1/√(2π)) ∫(x = −∞ .. y) e^(−x²/2) dx]^(M−1) (1/√(2π)) e^(−(y − √(2Es/N0))²/2) dy.

Finally observe that Es = λEb = log2 M Eb and that P [error] = 1 − P [correct].
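The verified expression can also be evaluated numerically; for M = 2 it must reduce to the binary orthogonal-signalling result Q(√(Es/N0)). A midpoint-rule sketch (the integration span, step count and function name are our choices):

```python
import math

def Phi(x):
    # Standard normal CDF
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def orthogonal_sym_err(M, EsN0, n=100000, span=12.0):
    # P[error] = 1 - integral of Phi(y)^(M-1) * N(sqrt(2Es/N0), 1) over y
    m = math.sqrt(2.0 * EsN0)
    lo = m - span
    h = 2.0 * span / n
    acc = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * h
        acc += Phi(y) ** (M - 1) * math.exp(-0.5 * (y - m) ** 2)
    return 1.0 - acc * h / math.sqrt(2.0 * math.pi)
```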


P8.18 The transmitted signal can be written as:

si(t) = (±√Eb)φ1(t) + (±√Eb)φ2(t) + ⋯ + (±√Eb)φj(t) + ⋯ + (±√Eb)φλ(t),  i = 1, 2, …, M,

where the jth component is +√Eb if the jth bit of the transmitted bit sequence is 1 and −√Eb if the jth bit is 0; j = 1, 2, …, λ. The demodulator consists of projecting the received signal r(t) = si(t) + w(t) onto φ1(t), φ2(t), …, φλ(t) to generate the sufficient statistics r1, r2, …, rλ, and using minimum distance to carve up the decision space. Geometrically this consists of choosing the signal point that lies in the quadrant that r1, r2, …, rλ fall in (convince yourself of this for λ = 2 and λ = 3; perhaps λ = 4 if you are really ambitious).


Now because of symmetry (and the fact that the signals are equally probable), one has

P[error] = P[error|a specific signal or bit sequence] = 1 − P[correct|a specific signal].

Choose for the specific signal the bit sequence (111⋯11), i.e., (√Eb, √Eb, …, √Eb), the signal that lies in the first quadrant. Now,

P[correct|this specific signal] = P[r1 > 0, r2 > 0, ⋯, rλ > 0],

where the rj's are Gaussian, mean value √Eb, variance N0/2 and statistically independent. Therefore,

P[correct|this specific signal] = Π(j=1..λ) P[rj > 0] = [1 − Q(√(2Eb/N0))]^λ.

Finally,

P[error] = 1 − [1 − Q(√(2Eb/N0))]^λ.

Remark: The signal points also lie on the surface of a hypersphere of radius √(λEb).
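A short check of the hypercube result (a sketch; the helper name and Eb/N0 value are ours):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def hypercube_sym_err(lam, EbN0):
    # P[error] = 1 - (1 - Q(sqrt(2 Eb/N0)))^lam for the hypercube constellation
    return 1.0 - (1.0 - Q(math.sqrt(2.0 * EbN0))) ** lam
```

For λ = 1 this is just BPSK, and for any λ it is upper-bounded by the union bound λQ(√(2Eb/N0)).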

P8.19 (a) Simply put, the transmission bandwidth is halved. It is directly proportional to the dimensionality of the signal space.


(b) The sufficient statistics are generated by projecting the received signal onto the M/2 orthonormal bases (which are the si(t) normalized to unit energy) to obtain r1, r2, …, rM/2. Statistically rj is Gaussian with variance N0/2 and a mean value of 0, +√Eb, or −√Eb: zero if the transmitted signal is not ±sj(t), +√Eb if the transmitted signal is +sj(t) and −√Eb if the transmitted signal is −sj(t). Note that we assume, for convenience, that the signal sj(t) has energy Eb. To obtain the decision rule consider just the special case of M = 4 and generalize from there.

Figure 8.17: Signal space for M = 4 - signals ±s1(t) at (±√Eb, 0) and ±s2(t) at (0, ±√Eb) in the (φ1(t), φ2(t)) plane, with decision boundaries along the lines r2 = r1 and r2 = −r1.

The decision rule is: if |r1| > |r2| and r1 > 0, choose s1(t); if |r1| > |r2| and r1 < 0, choose −s1(t) (and similarly with s2(t) when |r2| > |r1|).

To generalize: compute |rj|, j = 1, 2, …, M/2, and determine the maximum, say |ri|. Then the decision is si(t) if ri > 0 and −si(t) if ri < 0.


(c) Due to the symmetry of the decision space we have P[symbol error] = P[symbol error|any specific transmitted signal] = 1 − P[correct decision|any specific transmitted signal]. Choose the specific transmitted signal to be s1(t). Then s1(t) is chosen if

(r1 > 0) and (−r1 ≤ r2 ≤ r1) and (−r1 ≤ r3 ≤ r1) and ⋯ and (−r1 ≤ rM/2 ≤ r1).

To determine the probability of the above happening, fix the random variable r1 at a specific value, say r1 = r1 ≥ 0. Note that the probability of this is fr1(r1|s1(t))dr1, where r1 is Gaussian with mean √Eb and variance N0/2. Then the probability of the event

[(−r1 ≤ r2 ≤ r1) and (−r1 ≤ r3 ≤ r1) and ⋯ and (−r1 ≤ rM/2 ≤ r1) | s1(t)]

is given by Π(i=2..M/2) P[−r1 ≤ ri ≤ r1 | s1(t)], where the random variables ri are Gaussian, zero-mean (given that the transmitted signal is s1(t)), variance N0/2 and statistically independent. Therefore the probability becomes

[(1/√(2π(N0/2))) ∫(−r1 .. r1) e^(−ri²/(2(N0/2))) dri]^(M/2 − 1) = [1 − 2Q(√(2/N0) r1)]^(M/2 − 1).

Now 0 ≤ r1 ≤ ∞, i.e., we find the weighted sum of the above, weighted by the probability of r1 = r1:

P[symbol error] = 1 − ∫(r1 = 0 .. ∞) [1 − 2Q(√(2/N0) r1)]^(M/2 − 1) (1/√(πN0)) e^(−(r1 − √Eb)²/N0) dr1.


It would be useful to plot and compare the above error probability with that of orthogonal modulation (such as M -FSK), either for the same number of signal points, or the same dimensionality of the signal space.
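As a step toward that comparison, the integral above can be evaluated numerically (a midpoint-rule sketch in the normalized variable t = r1/√(N0/2), so that t has unit variance and mean √(2Eb/N0); for M = 2 the constellation degenerates to antipodal signalling, so the result must equal Q(√(2Eb/N0))):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def biorth_sym_err(M, EbN0, n=100000, span=12.0):
    # Numerical version of the biorthogonal P[symbol error] integral above
    m = math.sqrt(2.0 * EbN0)
    h = (m + span) / n
    acc = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        acc += (1.0 - 2.0 * Q(t)) ** (M // 2 - 1) * math.exp(-0.5 * (t - m) ** 2)
    return 1.0 - acc * h / math.sqrt(2.0 * math.pi)
```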

P8.20 (a) See Fig. 8.18.


(b) It can be seen from Fig. 8.18 that the USSB waveform does not have the desirable property of constant envelope of BPSK signal. This is the price one has to pay for a reduced (half) bandwidth of the USSB signal.

P8.21 One has to establish a bandwidth definition and also the error probability at which the signalling techniques are compared. In the text (Section 8.7 and Fig. 8.26) the bandwidth W is defined as the reciprocal of the symbol duration. Other possible definitions that could have been used are null-to-null bandwidth, 95% power bandwidth and so on. The error probability used to determine the SNR is 10⁻⁵.

(a) Using Eqn. (8.33), rb/W = 2 log2 M/M, we have for FSK:

• 2-FSK: rb/W = 2 log2 2/2 = 1.

Figure 8.18: (a) BPSK signal; (b) USSB-BPSK signal (amplitude V, bit interval Tb, shown over several bit intervals).

• 4-FSK: rb/W = 2 log2 4/4 = 1.

And using Fig. 8.23 we have at P[symbol error] = 10⁻⁵:

• 2-FSK: Eb/N0 = 12.6 dB.
• 4-FSK: Eb/N0 = 10.2 dB.

uy

(i) For FSK, use M Eb @ 10−2 N0 Eb −7 N0 ¡@ 10 ¢ rb W

Eb rb 8.83 to determine N and Eqn. (8.83) for W . The results are: 0 8 16 32 64 5.1 4.5 4.2 3.5 (dB) 10.4 9.3 8.5 7.8 (dB) 0.75 0.5 0.3125 0.1875 (bits/sec/Hz)

Ng

(ii) For PSK, use Fig. 8.15 and Eqn. (8.39) for rb (8.81) for W . The results are: M = 2 4 8 16 Eb −2 = 4.4 5.3 8.4 13 N0 @ 10 Eb −7 @ 10 = 11.25 11.6 14.8 19.5 N0 ¡ ¢ rb = 1 2 3 4 W

(iii) For QAM, use Fig. 8.20 are: M = 4 Eb −2 = 5.3 N0 @ 10 Eb −7 = 11.4 N0 ¡@ 10 ¢ rb = 2 W

Solutions

to determine 16 9.7 15.6 4

64 14.4 20 6

Eb N0

M = 64 to determine 32 18 24.5 5

64 22.7 29.7 6

Eb N0 .

Use Eqn.

(dB) (dB) (bits/sec/Hz)

and use Eqn. (8.81) for

rb W.

The results

(dB) (dB) (bits/sec/Hz) Page 8–23

Nguyen & Shwedyk

A First Course in Digital Communications

Note for QAM: W =

1 Ts

=

1 λTb

=

rb log2 M



rb W

= log2 M .

dy k

The plots on power-bandwidth plane are shown in Fig. 8.19. Note that the points shift toward Shannon’s capacity curve for a larger value of SER. This does not mean that it is beneficial to increase the SER requirement. It merely says that one can approach the Shannon’s curve easier by lowering the transmission reliability requirement. Shannon’s work, however, promises to approach the curve with an arbitrarily low SER!

5

we

10 SER=10−2

M=16

3

M=32

M=8

2

M=4

SER=10−7

Sh

rb/W (bits/s/Hz)

M=64

M=2

1

M=8

0.5

M=16

0.3

−5 −1.6 0

en

0.1 −10

&

0.2

M=32

PSK QAM FSK

M=64

5 10 15 SNR per bit, Eb/N0 (dB)

20

25

30

Figure 8.19

P8.22 (a) Invoking the sampling theorem we have 2W samples/sec.

uy

(b) The answer in (a) means that in T seconds there are 2W T independent time samples; independent in the sense that they are all needed to represent the time signal.

Ng

(c) Any set of orthogonal functions are linearly independent in the usual mathematical sense that there is no set of nonzero PN coefficients {c1 , c2 , . . . , cN }, N being the number of orthogonal functions such that k=1 ck φk (t) = 0. Each orthogonal function can provide an independent sample. Since we need 2W T independent samples then it follows that one needs 2W T orthogonal functions.

(d)

Z SRECT (f ) =

T 2

− T2

e−j2πT dt = V T

sin(πf T ) πf T

and in passband this becomes · ¸ V T sin(π(f − fc )T ) sin(π(f + fc )T ) + . 2 π(f − fc )T π(f + fc )T

Solutions

Page 8–24

Nguyen & Shwedyk

A First Course in Digital Communications

we

dy k

From the sketch of the above function it is (or should be) readily seen that the null-tonull bandwidth is W = T2 Hz. This means that W T = 2 and that 2W T = 4, i.e., we can fit 4 orthogonal time functions in this bandwidth. Remark: The above derivation is very heuristic, it however agrees well with the sophisticated mathematical analysis given by Landau, Pollack & Slepian. Their definition of bandwidth is different; remember no time-limited signal can be bandlimited - the Gordian knot of signal analysis. The achieved result however agrees very well with their conclusions. Perhaps the moral is: one should not let sophisticated mathematics stand in the way of good engineering. Always keep in mind math for engineers is a tool, albeit a very important one.

Sh

P8.23 (a) To show that two R time functions are orthogonal over an interval of T seconds, one needs to show that t∈T s1 (t)s2 (t)dt = 0. Since here we are dealing with sinusoids we can dispense with the integration by recalling that an integer number of cycles of a sinusoid in the time interval T means the area (or the integral of it) under it is zero. Recall further the following trig identities: cos x cos y =

sin x sin y =

&

cos x sin y =

1 [cos(x + y) + cos(x − y)] 2 1 [cos(x − y) − cos(x + y)] 2 1 [sin(x + y) − sin(x − y)] 2

Show φ01 (t) ⊥ φ02 (t)

¶ µ ¶ µ ¶ · µ ¶¸ πt 1 2πt 1 k πt 2 sin cos (2πfc t) = sin 1 + cos 2π t = cos T T 2 T 2 T ½ µ ¶ µ ¶ µ ¶¾ 1 2πt 1 k+1 1 k−1 = sin + sin 2π t − sin 2π t 4 T 2 T 2 T µ

en

φ01 (t)φ02 (t)

uy

The sinusoids have 1, (k + 1), (k − 1) cycles, respectively, over T sec and therefore the area under them is 0. Therefore φ01 (t) and φ02 (t) are orthogonal. Show φ01 (t) ⊥ φ03 (t)

Ng

φ01 (t)φ03 (t)

µ

¶ · µ ¶¸ µ ¶ 1 2πt k πt cos (2πfc t) sin (2πfc t) = 1 + cos sin 2π t = cos T 4 T T ½ µ ¶ µ ¶ µ ¶¾ 1 k 1 k+1 1 k−1 = sin 2π t + sin 2π t + sin 2π t 4 T 2 T 2 T 2

Here there are k, (k+1), (k−1) cycles respectively over T sec ⇒ area = 0 ⇒ φ01 (t) ⊥ φ03 (t). Show φ01 (t) ⊥ φ04 (t) µ

φ01 (t)φ04 (t)

Solutions

¶ µ ¶ µ ¶ µ ¶ πt πt 1 2πt k = cos sin cos (2πfc t) sin (2πfc t) = sin sin 2π t T T 4 T T ½ µ ¶ µ ¶¾ 1 k−1 k+1 = cos 2π t − cos 2π t 8 T T Page 8–25

Nguyen & Shwedyk

A First Course in Digital Communications

Show φ02 (t) ⊥ φ03 (t) µ φ02 (t)φ03 (t)

= sin

πt T



dy k

⇒ (k − 1), (k + 1) cycles over T ⇒ area = 0 ⇒ φ01 (t) ⊥ φ04 (t).

µ cos

πt T



cos (2πfc t) sin (2πfc t)

Show φ02 (t) ⊥ φ04 (t) µ 2

πt T



cos (2πfc t) sin (2πfc t) = sin · µ ¶¸ πt = 1 − cos2 cos (2πfc t) sin (2πfc t) T µ ¶ 2 πt = cos (2πfc t) sin (2πfc t) − cos cos (2πfc t) sin (2πfc t) T

Sh

φ02 (t)φ04 (t)

we

Note this product is the same as φ01 (t)φ04 (t). Therefore it follows that the area under φ02 (t)φ03 (t) is zero over T sec and that φ02 (t) ⊥ φ03 (t).

&

The area under the first term is 0 over T sec because as noted in the problem the two carriers are orthogonal over T sec. The second term is the same as φ01 (t)φ03 (t) and we have shown that this area is zero. Therefore φ02 (t) ⊥ φ04 (t). Show φ03 (t) ⊥ φ04 (t)

¶ µ ¶ πt πt sin sin2 (2πfc t) T T µ ¶ µ ¶ µ ¶ µ ¶ πt πt πt πt cos sin − cos sin cos2 (2πfc t) T T T T | {z } | {z } µ

en

φ03 (t)φ04 (t) = cos =

area=0, from φ01 (t) ⊥ φ02 (t)

area=0 because p1 (t) ⊥ p2 (t)

uy

Therefore φ03 (t) ⊥ φ04 (t).

Remark: Given n functions, one needs to check (n − 1)! conditions to see that they form an orthogonal set. Here n = 4 and (n − 1)! = 3! = 6 conditions.

Ng

(b) Though one should find the normalizing factor for each basis function, we’ll determine it for the 1st one and trust the gods (or intuition) it is the same for the other three. Now µ ¶ Z T Z T ¢2 2 ¡ 2 0 2 πt φ1 (t) dt = cos cos2 (2πfc t) dt T − T2 − T2 µ ¶¸ · µ ¶¸ Z T · 2 2πt 1 1 2π(2fc )t T 1 1 = + cos + cos dt = 2 T 2 2 T 4 −T 2 2

q Therefore multiply each basis function by (c) Let φi (t) = Solutions

√2 φ0 (t), i T i

= 1, 2, 3, 4, − T2 ≤ t ≤

4 T

T 2.

=

√2 T

to normalize.

See Fig. 8.20. Page 8–26

Nguyen & Shwedyk

A First Course in Digital Communications

…b b

b

b

b

4 i 4 i +1 4 i + 2 4 i + 3 4 i + 4

Tb 4Tb = Ts = T



Demultiplexer

b4i b4i +1 b4i + 2 b4i +3

dy k

b−1b0b1b2b3

input bit sequence

φ1 (t − iT )

b4i

φ2 (t − iT )

we

b4i +1

φ3 (t − iT )

b4i + 2



s (t )

φ4 (t − iT )

Sh

b4i +3

NRZ-L modulators L = ± Eb

i = … , −1, 0,1, 2,…

&

Figure 8.20

en

(d) The transmitted signal s(t) is given by p p p p s(t) = ± Eb φ1 (t) ± Eb φ2 (t) ± Eb φ3 (t) ± Eb φ4 (t) √ √ where it is + Eb if the bit is 1 and − Eb if the bit is 0 (assumed).

uy

The signal space is 4 dimensional, with basis functions φi (t), i = 1, 2, 3, 4. The signals of√a four-dimensional with √ √ √ hypercube √ √ coordinates ranging from √ lie on√the vertices + Eb ) corresponding to bit pat(− Eb , − Eb , − Eb , − Eb ) to (+ Eb , + E √b , + Eb , √ terns (0000) to (1111) (assuming that 0 ↔ − Eb , 1 ↔ Eb ).

Ng

There are 16 signals, one on each vertex. The minimum distance√between two signals occurs when a pair differ in only one component and is equal to 2 Eb . Finally the number of nearest neighbors is 4. Note that in an n-dimensional hypercube any vertex has n nearest vertices.

P8.24 (a) The generation of the sufficient statistics is the same for the symbol demodulator and for the bit demodulator(s). See Fig. 8.21. The difference between the two is in the decision rule. The decision rule for the symbol demodulator is: determine which quadrant r1 , r2 , r3 , r4 fall in and choose the signal (or bit pattern it represents) that lies in that quadrant. √ √ √ √ For example, if (r1 > 0, r2 < 0, r3 > 0, r4 > 0) choose signal (+ Eb , − Eb , + Eb , + Eb ) (or bit pattern 1011).

Solutions

Page 8–27

Nguyen & Shwedyk

A First Course in Digital Communications t = kTs

φ1 (t )

∫ ( •)dt

dy k

r1

t∈T

φ2 (t )

∫ ( •)dt

r2

t∈T

r (t )

φ3 (t )

∫ ( •)dt

φ4 (t )

r3

Decision

we

t∈T

Decision rule

∫ ( •)dt

t∈T

r4

Sh

Figure 8.21

For the bit demodulator the decision rule is:

r1 ≷ 0: decide b4i = 1 if r1 ≥ 0, b4i = 0 otherwise;
r2 ≷ 0: decide b4i+1 = 1 if r2 ≥ 0, b4i+1 = 0 otherwise;
r3 ≷ 0: decide b4i+2 = 1 if r3 ≥ 0, b4i+2 = 0 otherwise;
r4 ≷ 0: decide b4i+3 = 1 if r4 ≥ 0, b4i+3 = 0 otherwise.

(b) Due to symmetry we have

P[symbol error] = P[symbol error | a specific signal]
               = 1 − P[correct | a specific signal]
               = 1 − P[correct | signal (+√Eb, +√Eb, +√Eb, +√Eb)]

and

P[correct | (+√Eb, +√Eb, +√Eb, +√Eb)] = P[r1 ≥ 0, r2 ≥ 0, r3 ≥ 0, r4 ≥ 0 | (+√Eb, +√Eb, +√Eb, +√Eb)].

Under the given conditions the random variables ri are Gaussian with mean √Eb, variance N0/2, and statistically independent. The probability is therefore

[ (1/√(2π·N0/2)) ∫_0^∞ e^(−(x − √Eb)²/(2·N0/2)) dx ]⁴ = [1 − Q(√(2Eb/N0))]⁴

Therefore

P[symbol error] = 1 − [1 − Q(√(2Eb/N0))]⁴
P[bit error for the bit demodulator] = Q(√(2Eb/N0))

Note that the above result is just a special case of the more general result derived in P8.18 for a hypercube constellation. The bit error probability of the symbol demodulator is the same as that of the bit demodulator. To see this consider bit b4i, i.e., the one that corresponds to sufficient statistic r1. A symbol error is caused by any one of r1, r2, r3, r4, or any subset of them, being of the wrong sign. However, bit b4i would still be correct if r1 is of the right polarity, regardless of what r2, r3, r4 are. The probability that it is not is Q(√(2Eb/N0)).
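These closed-form expressions are easy to sanity-check numerically; a small sketch (function names are mine, not from the text):

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def hypercube_error(eb_over_n0_db, n=4):
    # P[bit error] = Q(sqrt(2Eb/N0)); P[symbol error] = 1 - (1 - Pb)^n
    gamma = 10 ** (eb_over_n0_db / 10)
    pb = qfunc(math.sqrt(2 * gamma))
    return 1 - (1 - pb) ** n, pb

ps, pb = hypercube_error(9.6)
print(pb, ps)
```

For small Q the union-bound approximation P[symbol error] ≈ 4 Q(√(2Eb/N0)) is essentially exact, consistent with the 4 nearest neighbors found in P8.23.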

The bit error probability of Q²PSK is thus the same as that of BPSK and QPSK/OQPSK/MSK.

P8.25 (a) Substituting in for the φi(t)'s and collecting terms, we have:

s(t) = √(2Eb/T) { [b1 cos(πt/T) + b2 sin(πt/T)] cos(2πfc t) + [b3 cos(πt/T) + b4 sin(πt/T)] sin(2πfc t) }

The envelope is given by (ignoring the constant √(2Eb/T)):

e(t) = { [b1 cos(πt/T) + b2 sin(πt/T)]² + [b3 cos(πt/T) + b4 sin(πt/T)]² }^(1/2)

Using the fact that cos²x + sin²x = 1 (always) and that bi² = 1, one has

e(t) = [2 + 2(b1b2 + b3b4) cos(πt/T) sin(πt/T)]^(1/2) = [2 + (b1b2 + b3b4) sin(2πt/T)]^(1/2)
     = [2 + (b1b2 + b3b4) sin(πt/(2Tb))]^(1/2).   (8.16)

The plot of e(t) for the three possible values of b1b2 + b3b4 is shown in Fig. 8.22.

(b) Need b1b2 + b3b4 = 0 ⇒ b1 = −b3b4/b2. The price paid is a reduction in the information rate: instead of 4 information bits transmitted every T seconds, there are only 3. The coding scheme is represented by the following table:

Real-number arithmetic          Boolean algebra
b2  b3  b4 | b1                 b2  b3  b4 | b1
−1  −1  −1 | +1                  0   0   0 |  1
−1  −1  +1 | −1                  0   0   1 |  0
−1  +1  −1 | −1                  0   1   0 |  0
−1  +1  +1 | +1                  0   1   1 |  1
+1  −1  −1 | −1                  1   0   0 |  0
+1  −1  +1 | +1                  1   0   1 |  1
+1  +1  −1 | +1                  1   1   0 |  1
+1  +1  +1 | −1                  1   1   1 |  0

Note that b1 = 1 if there is an even number of ones among b2, b3, b4; otherwise b1 = 0. The encoder can therefore be implemented by the logic b1 = 1 ⊕ b2 ⊕ b3 ⊕ b4 (the complement of the XOR).

Figure 8.22: Plot of e(t) over 0 ≤ t ≤ 12Tb for b1b2 + b3b4 = 2, 0, and −2.

(c) Eight – one for each allowable 3-bit pattern.

dmin = [ (2√Eb)² + (2√Eb)² ]^(1/2) = 2√2 √Eb.

(Note that the signals closest together differ in 2 components.)

(d) The block diagram is the same as that of P8.24(a). One difference is that the sufficient statistic for b1 (b4i) does not need to be generated, since only bits b2, b3, b4 represent information. The bit error probability is Q(√(2Eb/N0)).

(e) Yes, it can be used for error detection. After demodulating the bits to b̂1, b̂2, b̂3, b̂4, determine whether b̂1 = 1 ⊕ b̂2 ⊕ b̂3 ⊕ b̂4 (the even-parity relationship of part (b)). If this does not hold then an error has been made. Note that the error could be on the "redundant" bit b1. If the relationship is satisfied then either no errors were made or an undetectable number of errors occurred. You may wish to convince yourself that an odd number of errors (i.e., 1 or 3) is detectable while an even number (2 or 4) is not.
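The parity encoder and the error-detection check can be sketched as follows, using the even-parity convention of the truth table in part (b) (function names are mine):

```python
from itertools import product

def encode(b2, b3, b4):
    # Redundant bit: b1 = 1 when the number of ones among b2, b3, b4 is even
    return 1 ^ b2 ^ b3 ^ b4

def detect_error(b1, b2, b3, b4):
    # True when the received 4-tuple violates the parity relationship
    return b1 != (1 ^ b2 ^ b3 ^ b4)

# Every single-bit flip in a valid codeword is detected
for bits in product((0, 1), repeat=3):
    word = [encode(*bits), *bits]
    assert not detect_error(*word)
    for i in range(4):
        flipped = word.copy()
        flipped[i] ^= 1
        assert detect_error(*flipped)
```

The loop confirms that any single error, including one on the redundant bit b1 itself, violates the parity check.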

P8.26 (a) Consider E{si(t)sj(t + τ)}, i ≠ j. This, with a bit of creativity in notation, is:

Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E{bik bjl} (cos or sin){πt/(4Tb)} (cos or sin){π(t + τ)/(4Tb)}

But E{bik bjl} = 0 ∀ i, j; i ≠ j. Therefore any two signals are uncorrelated.

(b)

Rs1(τ) = E{s1(t)s1(t + τ)} = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E{b1k b1l} cos{πt/(4Tb)} cos{π(t + τ)/(4Tb)}
Rs3(τ) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E{b3k b3l} cos{πt/(4Tb)} cos{π(t + τ)/(4Tb)}

But E{b1k b1l} = E{b3k b3l}, since both equal 1 if k = l and 0 if k ≠ l. Therefore the two autocorrelations are the same. Use the same argument to show that Rs2(τ) = Rs4(τ).

(c) The PSD component due to s1(t) (or s3(t)) is (1/(4Tb)) |H(f)|².

To find H(f), determine F{cos(πt/(4Tb))} and F{u(t + 2Tb) − u(t − 2Tb)}, which are (1/2)[δ(f − 1/(8Tb)) + δ(f + 1/(8Tb))] and 4Tb sin(4πf Tb)/(4πf Tb), respectively. Now convolve and simplify to obtain:

H(f) = −2πTb cos(4πf Tb) / [(4πf Tb)² − π²/4]

For s2(t), h(t) = sin(πt/(4Tb))[u(t + 2Tb) − u(t − 2Tb)], and we now convolve (1/(2j))[δ(f − 1/(8Tb)) − δ(f + 1/(8Tb))] and 4Tb sin(4πf Tb)/(4πf Tb). This gives

H(f) = (16π f Tb²/j) cos(4πf Tb) / [(4πf Tb)² − π²/4]

The overall PSD is

S(f) = π² Tb cos²(4πf Tb) [1 + 64(f Tb)²] / [(4πf Tb)² − π²/4]²   (watts/Hz).

(d) The plot of the resultant PSD is shown in Fig. 8.23.
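The closed-form PSD can be evaluated directly; a minimal sketch assuming the expression derived above (function name is mine; Tb = 1 by default):

```python
import math

def q2psk_psd(f, tb=1.0):
    # Closed-form Q^2PSK baseband PSD (watts/Hz), away from the removable
    # singularities at |f| = 1/(8*tb)
    num = math.pi**2 * tb * math.cos(4 * math.pi * f * tb)**2 * (1 + 64 * (f * tb)**2)
    den = ((4 * math.pi * f * tb)**2 - math.pi**2 / 4)**2
    return num / den

# At f = 0 the PSD equals 16*Tb/pi^2
print(q2psk_psd(0.0))
```

Evaluating at f = 0 gives π²Tb/(π²/4)² = 16Tb/π², a quick consistency check on the algebra.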

Figure 8.23: Power spectral density (dB) of the Q²PSK signal versus fTb, plotted for −5 ≤ fTb ≤ 5.

Chapter 9

Signaling Over Bandlimited Channels

P9.1 A bandwidth of 4 kHz in the passband (i.e., around fc) means that in baseband we have 2 kHz of bandwidth. A bit rate of 9600 bits/second means that at minimum 1/(2Tb) = rb/2 = 4800 Hz of bandwidth is needed to transmit with zero ISI, so obviously an M-ary modulation is required to reduce the bandwidth requirement. Since QAM has been specified as the modulation paradigm, determine the number of bits a symbol needs to carry (or represent) to achieve not only zero ISI but also a roll-off factor β ≥ 0.5. Do this by trial and error: start with λ = 3, i.e., 3 bits/symbol. Then the symbol transmission rate is rs = (9600 bits/second)/(3 bits/symbol) = 3200 symbols/sec. Using the hint, this means that the bandwidth required for zero ISI is 1/(2Ts) = rs/2 = 1.6 kHz. Since we have 2 kHz, there is an excess of 0.4 kHz. But is this enough to achieve β ≥ 0.5? Let's check:

(1 + β) rs/2 = 2 kHz ⇒ β = 0.4/1.6 = 0.25 < 0.5

Back to the drawing board. (An auxiliary question is: when the developer of the drawing board made an error in the drawing board, what did she go back to?)

Try λ = 4. Now rs = 9600/4 = 2400 symbols/sec. Since rs/2 = 1.2 kHz we have 800 Hz (or 0.8 kHz) of excess bandwidth ⇒ 1.2 kHz + 0.8 kHz = 2 kHz ⇒ β = 0.8/1.2 = 2/3 > 0.5, so the spectrum requirement is satisfied, i.e., 16-QAM is the modulation to use.

Finally, we need to satisfy the bit error probability of 10⁻⁶. To do this, consider the 16-QAM modulation as two 4-ASK modulations, i.e., 2 bits on the inphase axis, 2 bits on the quadrature axis. From Eqn. (8.27) we have

P[symbol error] = 2P[bit error] = 2 × 10⁻⁶

And from Fig. 8.8, using the M = 4 curve, this means Eb/N0 ≈ 14.5 dB, or 10^1.45 = 28.2, so Eb = 28.2 × 10⁻⁸ joules. The average transmitted power is

Pav = Es/Ts = 4Eb/Ts = 4 × 28.2 × 10⁻⁸ × 2400 = 2.707 × 10⁻³ (watts) = 2.707 (mW)   (9.1)
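The trial-and-error sizing above can be automated; a sketch assuming a raised-cosine spectrum, so the occupied (baseband) bandwidth is (1 + β)rs/2 (names are mine):

```python
def choose_qam(bit_rate, bandwidth_hz, min_beta=0.5):
    # Smallest bits/symbol such that (1 + beta)*rs/2 fits in the bandwidth
    # with roll-off beta >= min_beta, as in the P9.1 trial-and-error search
    for bits_per_symbol in range(1, 13):
        rs = bit_rate / bits_per_symbol
        beta = bandwidth_hz / (rs / 2) - 1
        if beta >= min_beta:
            return bits_per_symbol, rs, beta
    raise ValueError("no suitable constellation found")

lam, rs, beta = choose_qam(9600, 2000)
print(lam, rs, beta)  # lambda = 4 (16-QAM), rs = 2400 symbols/s, beta = 2/3
```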

P9.2 First, in "baseband" the channel looks as in Fig. 9.1, where h′C(t) = hC(t) · 2 cos(2πfc t); fc = 1.8 kHz. We therefore have 1.5 kHz of bandwidth to play with.

(a) Now Ts = λTb, or λ = rb/rs = 9600/2400 = 4 bits/symbol ⇒ 16-QAM.

Figure 9.1: Baseband channel response H′C(f), flat over −1.5 kHz ≤ f ≤ 1.5 kHz.

This means that 1/(2Ts) = rs/2 = 1.2 kHz is the minimum bandwidth needed to achieve zero ISI. But we have 1.5 kHz of bandwidth, or an excess of 0.3 kHz. Therefore β can be larger than 0. Utilizing the entire bandwidth, β is determined from:

(1 + β) rs/2 = 1.2(1 + β) = 1.5 ⇒ β = 0.25

(b) In "passband" the spectrum looks as in Fig. 9.2 (showing just the positive-frequency portion).

Figure 9.2: Passband spectrum centered at fc = 1.8 kHz: flat (amplitude 1) from 0.9 to 2.7 kHz, amplitude 0.5 at 0.6 and 3.0 kHz, and zero outside 0.3–3.3 kHz (β = 0.25).

P9.3 With 75% excess bandwidth, β = 0.75. Now,

|HR(f)| = |SR(f)|^(1/2) / |HC(f)|^(1/2);   |HT(f)| = (K2/K1) |SR(f)|^(1/2) / |HC(f)|^(1/2)

Assume K2 = K1 and that SR(f) is raised-cosine. Therefore

HR(f) = HT(f) =
  √Ts / |1 + α cos(2πf Ts)|^(1/2),                                        |f| ≤ 0.25/(2Ts)
  √Ts cos[(πTs/1.5)(|f| − 0.25/(2Ts))] / |1 + α cos(2πf Ts)|^(1/2),       0.25/(2Ts) ≤ |f| ≤ 1.75/(2Ts)
  0,                                                                      |f| ≥ 1.75/(2Ts)

Note that 1 + α cos(2πf Ts) for 0 < f < 1.75/(2Ts) = 7/(8Ts) plots as in Fig. 9.3 (where it is assumed that 0 < α < 1).

Figure 9.3: Plot of 1 + α cos(2πf Ts) over 0 ≤ f ≤ 7/(8Ts): it starts at 1 + α, dips to 1 − α at f = 1/(2Ts), and ends at 1 + α cos(1.75π).

P9.4 (a) HC(f) = 1 + (α/2)[e^(j2πf t0) + e^(−j2πf t0)];   s(t) ↔ S(f)

Therefore

Y(f) = S(f)HC(f) = S(f) + (α/2)S(f)e^(j2πf t0) + (α/2)S(f)e^(−j2πf t0).

Now, multiplying S(f) by e^(j2πf t0) means a time shift of t0 seconds in the time domain (see Table 2.3 – Property 2). Therefore

y(t) = s(t) + (α/2)s(t + t0) + (α/2)s(t − t0)

Remark: The fact that s(t) is bandlimited to W Hz is necessary. Otherwise we could not make use of the property. Why?

(b) The output of the matched filter, based on the hint, is

y0(t) = ∫_{−∞}^{∞} [s(λ) + (α/2)s(λ + t0) + (α/2)s(λ − t0)] s(T − (t − λ)) dλ.

And at t = kT:

y0(kT) = ∫_{−∞}^{∞} s(λ)s((k − 1)T − λ)dλ + (α/2)∫_{−∞}^{∞} s(λ − t0)s((k − 1)T − λ)dλ + (α/2)∫_{−∞}^{∞} s(λ + t0)s((k − 1)T − λ)dλ

(c) If t0 = T, the above becomes

y0(kT) = ∫_{−∞}^{∞} s(λ)s((k − 1)T − λ)dλ + (α/2)∫_{−∞}^{∞} s(λ − T)s((k − 1)T − λ)dλ + (α/2)∫_{−∞}^{∞} s(λ + T)s((k − 1)T − λ)dλ

Change variables in the last two integrals (λ − T → λ and λ + T → λ, respectively). Then

y0(kT) = ∫_{−∞}^{∞} s(λ)s((k − 1)T − λ)dλ + (α/2)∫_{−∞}^{∞} s(λ)s((k − 2)T − λ)dλ + (α/2)∫_{−∞}^{∞} s(λ)s(kT − λ)dλ

Let x(t) = s(t) ∗ s(t − T). Then the above can also be written as:

y0(kT) = x(kT) + (α/2)x((k − 1)T) + (α/2)x((k + 1)T)

where the first term is the desired signal component, while the last two terms account for ISI.

P9.5 (a) The sufficient statistic is

1T: rk =  √Eb ± 0.25√Eb + wk
0T: rk = −√Eb ± 0.25√Eb + wk

where the ±0.25√Eb term is due to ISI and wk ~ N(0, N0/2). The decision space looks as shown in Fig. 9.4. By symmetry, we have

P[bit error] = (1/2) Q(0.75√Eb/√(N0/2)) + (1/2) Q(1.25√Eb/√(N0/2))
            = (1/2) Q(√((9/8)(Eb/N0))) + (1/2) Q(√((25/8)(Eb/N0)))

Figure 9.4: Decision space: received values cluster at ±0.75√Eb and ±1.25√Eb around the nominal points ±√Eb; threshold at 0 (decide 0D for rk < 0, 1D for rk > 0).

(b) With no ISI, one has P[bit error] = Q(√(2Eb/N0)). The two error probabilities are plotted in Fig. 9.5. Observe that at a bit error probability of 10⁻⁵ the difference in SNR is about 2.2 dB.
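The ≈2.2 dB gap can be reproduced numerically; a sketch using a simple bisection on Eb/N0 (names are mine):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_no_isi(gamma):
    # gamma = Eb/N0 (linear)
    return qfunc(math.sqrt(2 * gamma))

def ber_with_isi(gamma):
    # Two equally likely ISI offsets of +-0.25*sqrt(Eb)
    return 0.5 * qfunc(math.sqrt(9 * gamma / 8)) + 0.5 * qfunc(math.sqrt(25 * gamma / 8))

def snr_db_for_ber(ber_fn, target=1e-5):
    # BER is monotone decreasing in SNR, so bisection on Eb/N0 in dB works
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if ber_fn(10 ** (mid / 10)) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

gap = snr_db_for_ber(ber_with_isi) - snr_db_for_ber(ber_no_isi)
print(gap)  # about 2.2 dB
```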

P9.6 (a) Solve questions of this type graphically (usually), i.e., slide SR(f) along the f axis until the sum results in a constant level between −1/(2Tb) and 1/(2Tb) Hz. See Fig. 9.6.

(b) No. The spectrum would not be flat from 0 to 1/(2Tb) Hz.

(c) The excess bandwidth is 1 kHz.

P9.7 (a) To transmit with zero ISI we need to find a signalling rate rb = 1/Tb bits/sec such that in the frequency band −rb/2 = −1/(2Tb) ≤ f ≤ 1/(2Tb) = rb/2 (Hz) the spectrum SR(f) and its aliases SR(f − 1/Tb) and SR(f + 1/Tb) add up to a constant value. For the two spectra given, the answers are (see Figs. 9.7 and 9.8):

rb = 2 × 2000 = 4000 bits/sec = 4 kbps for (a)
rb = 2 × 1500 = 3000 bits/sec = 3 kbps for (b)

Remark: One could also determine rb/2 from the frequency at which SR(f) has the required symmetry about rb/2 (which is?).

Figure 9.5: P[bit error] versus Eb/N0 (dB) with and without ISI; at 10⁻⁵ the curves are about 2.2 dB apart.

Figure 9.6: SR(f) and its first alias; sliding the alias until the sum is flat over |f| ≤ 1/(2Tb) gives 1/Tb = rb = 4000 bits/sec, with 1 kHz of excess bandwidth.

Figure 9.7: Spectrum (a) and its first alias; 1/(2Tb) = rb/2 = 2 kHz, so rb = 1/Tb = 4 kbps.

Figure 9.8: Spectrum (b) and its first alias; 1/(2Tb) = 1.5 kHz, so rb = 1/Tb = 3 kbps.

(b) Clearly the excess bandwidths are

3000 − 2000 = 1000 Hz (or 50%) for (a)
3000 − 1500 = 1500 Hz (or 100%) for (b)

(c) See Fig. 9.9.

Figure 9.9: Raised-cosine spectrum with β = 0.5: flat out to 1 kHz, rolling off to zero at 3 kHz.

(d) The raised-cosine spectrum is preferred because its eye diagram is considerably more open. The wide opening is desirable under synchronization error and additive white Gaussian noise in order to minimize the effect of ISI. Note that the relative shapes of the two eye diagrams are expected since the RC spectrum is smoother than the other spectrum for the same bit rate and excess bandwidth.

P9.8 As mentioned on page 348, SR(f) should have a certain symmetry about the 1/(2Tb) point. Graphically, at least for SR(f) functions that have zero phase, this symmetry can be checked by seeing whether SR(f) is: (i) flat from 0 to some f1 ≤ 1/(2Tb); (ii) of odd symmetry about the point (1/(2Tb), 1/2) over the portion from f1 to 1/Tb − f1; and (iii) zero for f > 1/Tb − f1. Applying this to the four spectra given:

(i) Yes. Choose 1/(2Tb) to be 2B/3 Hz ⇒ rb = (4/3)B bits/second.
(ii) Yes. Choose 1/(2Tb) to be B/2 Hz ⇒ rb = B bits/second.
(iii) Yes. Here 1/(2Tb) = 2B/3 Hz ⇒ rb = (4/3)B bits/second.

(iv) No. The required symmetry does not exist on the f axis.

P9.9 See Table 9.1.

Table 9.1

k   :  0    1    2    3    4    5    6    7    8    9   10   11   12
bk  :       1    1    0    0    1    0    1    0    1    1    0    1
dk  :  0    1    0    0    0    1    1    0    0    1    0    0    1
Vk  : −V   +V   −V   −V   −V   +V   +V   −V   −V   +V   −V   −V   +V
yk  :       0    0  −2V  −2V    0   2V    0  −2V    0    0  −2V    0
b̂k :       1    1    0    0    1    0    1    0    1    1    0    1

P9.10 (a)

sampled SR(t) = Σ_{k=−∞}^{∞} s(kTb)δ(t − kTb) = Σ_{k=0}^{N−1} sk δ(t − kTb)   (ignoring noise)

Therefore

sampled SR(f) = Σ_{k=0}^{N−1} sk ∫_{−∞}^{∞} δ(t − kTb) e^(−j2πft) dt = Σ_{k=0}^{N−1} sk e^(−j2πfkTb)

From the sampling theorem (Section 4.11, pages 136–139, Eqn. (4.5)) we know that the sampled spectrum consists of replicas of SR(f)/Tb repeated every 1/Tb Hz. Ignoring the aliases, i.e., considering only the k = 0 term:

SR(f) = Tb Σ_{k=0}^{N−1} sk e^(−j2πfkTb),  −1/(2Tb) ≤ f ≤ 1/(2Tb);   SR(f) = 0 otherwise.

(b)

sR(t) = F⁻¹{SR(f)} = Σ_{k=0}^{N−1} Tb sk ∫_{−1/(2Tb)}^{1/(2Tb)} e^(−j2πfkTb) e^(j2πft) df
      = Σ_{k=0}^{N−1} Tb sk ∫_{−1/(2Tb)}^{1/(2Tb)} e^(j2π(t−kTb)f) df
      = Σ_{k=0}^{N−1} Tb sk [e^(j2π(t−kTb)/(2Tb)) − e^(−j2π(t−kTb)/(2Tb))] / [j2π(t − kTb)]
      = Σ_{k=0}^{N−1} sk sin[π(t − kTb)/Tb] / [π(t − kTb)/Tb]
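The duobinary precoding chain of Table 9.1 (dk = bk ⊕ dk−1 with d0 = 0, Vk = ±V, yk = Vk + Vk−1, decode 0 for yk = ±2V and 1 for yk = 0) can be reproduced with a short script (a sketch; V = 1 for convenience, names are mine):

```python
def duobinary(bits, V=1):
    # Precoder: d[k] = b[k] XOR d[k-1], with d[0] = 0 (Table 9.1 convention)
    d = [0]
    for b in bits:
        d.append(b ^ d[-1])
    # Level map V[k] = +-V, then the duobinary channel y[k] = V[k] + V[k-1]
    lev = [V if dk else -V for dk in d]
    y = [lev[k] + lev[k - 1] for k in range(1, len(lev))]
    # Symbol-by-symbol decoding: y = 0 -> 1, y = +-2V -> 0
    decoded = [1 if yk == 0 else 0 for yk in y]
    return y, decoded

bits = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1]
y, decoded = duobinary(bits)
print(y)        # [0, 0, -2, -2, 0, 2, 0, -2, 0, 0, -2, 0]
print(decoded)  # recovers the input bits without memory in the receiver
```

This is exactly the point of the precoder: despite the controlled ISI, each bit is recovered from yk alone.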

P9.11 (a)

sR(t) = j2Tb ∫_{−1/(2Tb)}^{1/(2Tb)} [(e^(j2πfTb) − e^(−j2πfTb))/(2j)] e^(j2πft) df

After integration this becomes

sR(t) = (Tb/π) { sin[π(t + Tb)/Tb]/(t + Tb) − sin[π(t − Tb)/Tb]/(t − Tb) }

Now sin[π(t + Tb)/Tb] = sin[π(t − Tb)/Tb] = −sin(πt/Tb), so

sR(t) = (Tb/π) sin(πt/Tb) { −1/(t + Tb) + 1/(t − Tb) } = (2Tb²/π) sin(πt/Tb)/(t² − Tb²) = (2/π) sin(πt/Tb) / [(t/Tb)² − 1]

sR(t) decays as 1/t², which is expected since SR(f) needs to be differentiated twice to produce impulse(s). Using the duality concept and the result of P2.41: sR(kTb) = 0 at k = 0, ±2, ±3, ±4, …. At k = ±1, i.e., t = ±Tb, sR(±Tb) becomes 0/0; using L'Hôpital's rule, sR(Tb) = −1, sR(−Tb) = +1.

(b) Ignoring any random noise: yk = Vk+1 + Vk−1. Associate the value Vk+1 with bit bk; Vk−1 is due to bit bk−2 and is an ISI term, i.e., the interference comes not from the previous bit but from the one before it. Since Vk = ±V, there are three received levels: ±2V and 0.

(c) As a precoder, form dk = bk ⊕ dk−2. See Fig. 9.10.

Figure 9.10: Precoder: bk is XORed with dk fed back through a delay of 2 units to produce dk.

(d) 2.1 dB. Note that this is very similar to duobinary.

P9.12 (a) Using the result of P9.10, Eqn. (P9.3),

SR(f) = Tb [2 + 2(e^(j2πfTb) + e^(−j2πfTb))/2] = 2Tb(1 + cos(2πfTb)) = 4Tb cos²(πfTb),  −1/(2Tb) ≤ f ≤ 1/(2Tb);
SR(f) = 0, otherwise.

(b) It decays as 1/t³. One needs to differentiate SR(f) three times before an impulse(s) appears.

(c) yk = Vk−1 + 2Vk + Vk+1 (ignoring the random noise).

(d) There are five levels: ±4V, ±2V, 0.

(e) To design the encoder, one wants a unique yk to correspond to bk. Note further that there are 2 interfering terms, which implies that dk = f(bk, dk−1, dk−2). Given this, let us build a truth table of (bk, dk−1, dk−2) and the corresponding dk so that a unique yk is produced. Let bk = 0 correspond to the levels 0, ±4V and bk = 1 to the levels ±2V (with V = 1 in the sums below):

bk  dk−1  dk−2 | dk | yk−1 = Vk−2 + 2Vk−1 + Vk
0    0     0   | 0  | −1 − 2 − 1 = −4
0    0     1   | 1  | +1 − 2 + 1 =  0
0    1     0   | 0  | −1 + 2 − 1 =  0
0    1     1   | 1  | +1 + 2 + 1 = +4
1    0     0   | 1  | −1 − 2 + 1 = −2
1    0     1   | 0  | +1 − 2 − 1 = −2
1    1     0   | 1  | −1 + 2 + 1 = +2
1    1     1   | 0  | +1 + 2 − 1 = +2

Note that if yk−1 = ±4V or 0 then bk = 0, and if yk−1 = ±2V then bk = 1. Therefore the decision space looks as in Fig. 9.11.

Figure 9.11: Decision space for yk: decide 0D around −4V, 0, and +4V, and 1D around −2V and +2V.

Note further that we are assuming, as usual, that the random noise sample wk is Gaussian, zero-mean, and of variance σw² (watts).

(f) To determine the SNR degradation, we need to determine (V²/σw²)max from Eqn. (9.37), where |SR(f)| = 4Tb cos²(πf Tb). Therefore

(V²/σw²)max = PT Tb / { (N0/2) [ ∫_{−1/(2Tb)}^{1/(2Tb)} 4Tb cos²(πf Tb) df ]² }

Note that cos²(πfTb) = (1/2)[1 + cos(2πfTb)]; doing the integration gives (V²/σw²)max = PT Tb/(2N0). Therefore the probability of error is Q(√(PT Tb/(2N0))) (ignoring constant factors before the Q function(s)).

Compared with the ideal binary case, where P[error] ~ Q(√(2PT Tb/N0)), we see that PT needs to be increased by a factor of 4, which implies an SNR degradation of 10 log₁₀ 4 = 6 dB.

P9.13 (a) The Fourier transform of the sampled spectrum is

sampled SR(f) = Tb F{−δ(t + Tb) + 2δ(t) − δ(t − Tb)} = Tb{−e^(j2πfTb) + 2 − e^(−j2πfTb)} = 2Tb[1 − cos(2πfTb)]

The spectrum of the continuous-time signal is

SR(f) = 2Tb[1 − cos(2πfTb)] = 4Tb sin²(πfTb),  |f| ≤ 1/(2Tb);   SR(f) = 0, otherwise.

(b) The spectrum SR(f) looks as in Fig. 9.12. Since it has a discontinuity, sR(t) decays as 1/t.

Figure 9.12: SR(f) = 4Tb sin²(πf Tb) over |f| ≤ 1/(2Tb), rising from 0 at f = 0 to 4Tb at the band edges ±1/(2Tb).

(c) Without random noise, yk = −Vk−1 + 2Vk − Vk+1, where Vj = ±V.

(d) The sampled output is yk = −(±V) + 2(±V) − (±V). Looking at all combinations, yk = −4V, −2V, 0, +2V, +4V, i.e., there are 5 levels.

(e) To establish the coder's table, note that there are 2 interfering terms, which implies that a coded bit should depend on the 2 previous coded bits as well as the present information bit, i.e., dk = f(bk, dk−1, dk−2). Further, we wish bk = 0 and bk = 1 to be associated with distinct level sets. Let bk = 0 correspond to the levels 0, ±4V and bk = 1 to the levels ±2V. Therefore (with V = 1 in the sums):

bk  dk−1  dk−2 | dk | yk−1 = −Vk−2 + 2Vk−1 − Vk
0    0     0   | 0  | +1 − 2 + 1 =  0
0    0     1   | 1  | −1 − 2 − 1 = −4
0    1     0   | 0  | +1 + 2 + 1 = +4
0    1     1   | 1  | −1 + 2 − 1 =  0
1    0     0   | 1  | +1 − 2 − 1 = −2
1    0     1   | 0  | −1 − 2 + 1 = −2
1    1     0   | 1  | +1 + 2 − 1 = +2
1    1     1   | 0  | −1 + 2 + 1 = +2

(Observe that dk = bk ⊕ dk−2, the same precoder as in P9.11(c).)

(f) Using Eqn. (9.39) we have

(V²/σw²)max = PT Tb / { (N0/2) [ ∫_{−1/(2Tb)}^{1/(2Tb)} 2Tb[1 − cos(2πfTb)] df ]² } = PT Tb/(2N0)

so P[error] ~ Q(√(PT Tb/(2N0))) ⇒ a 6 dB degradation in SNR.

´ ⇒ a 6 dB degradation in SNR.

P9.14 See Fig. 9.13. The eye diagram with the duobinary signalling (left figure) is significantly more open, reflecting the fact that its overall impulse response, sR (t), decays as 1/t2 , while that of P9.11 decays as 1/t. Solutions

Page 9–10

3

3

2

2

1

1

0

−1

we

−1

0

dy k

A First Course in Digital Communications

y(t)/V

y(t)/V

Nguyen & Shwedyk

−2

−2

−3 0

−3 0

2.0

Sh

1.0 t/Tb

1.0 t/Tb

2.0

Figure 9.13: Eye diagrams with the duobinary modulation (left) and with the spectrum of P9.11 (right). Note that for duobinary modulation the sampling times need to be delayed by Tb /2. Eye Digram for P9.15 with RC = 0.2T

b

Note that the eye is quite wide open

0.8 0.6

0.2

en

y(t) (normalized)

0.4

&

1

0

−0.2

uy

−0.4

Sampling Time

−0.6 −0.8

Ng

−1 0

0.2

0.4

0.6

0.8

1 t/Tb

1.2

1.4

1.6

1.8

2

Figure 9.14

P9.15 See Figs. 9.14, 9.15, 9.16.

Solutions

Page 9–11

A First Course in Digital Communications

dy k

Nguyen & Shwedyk

Eye Diagram For P9.15 with RC = Tb 1 0.8 0.6

0.2

we

y(t) (normalized)

0.4

0 −0.2 −0.4 −0.6 −0.8 −1 0

0.2

0.4

Sh

Sampling Time

0.6

0.8

1 t/Tb

1.2

1.4

1.6

1.8

2

&

Figure 9.15

Eye Diagram For P9.15: RC = 2T

b

en

Note that the eye is closed − very severe ISI

1

0.8 0.6

0.2 0

−0.2

Ng

y(t) (normalized)

uy

0.4

−0.4 −0.6 −0.8 −1 0

Sampling Time 0.2

0.4

0.6

0.8

1 t/T

1.2

1.4

1.6

1.8

2

b

Figure 9.16

Solutions

Page 9–12

Chapter 10

Signaling Over Fading Channels

P10.1 The two equations of interest are

P[error]BASK = (1/2) e^(−Th²/N0) + (1/2) [1 − Q(√(2E/N0), √(2/N0) Th)],   (10.1)

where Q(a, b) is the (first-order) Marcum Q-function, Th is the threshold used in BASK and E = 2Eb, and

P[error]BFSK = (1/2) e^(−Eb/2N0)   (10.2)

When one chooses Th = √E/2 = √(2Eb)/2, P[error]BASK simplifies to:

P[error]BASK = (1/2) e^(−Eb/2N0) + (1/2) [1 − Q(2√(Eb/N0), √(Eb/N0))]

Since the first term in the above expression is P[error]BFSK, while the second term is always positive for finite Eb/N0, it follows immediately that P[error]BASK > P[error]BFSK.

P10.2 The plot of the normalized optimum threshold, Th^(normalized) = Th/√(Eb/2), is shown in Fig. 10.1. The plot shows that Th ≥ √(2Eb)/2 = √(Eb/2), i.e., Th^(normalized) ≥ 1. This fact can be shown mathematically as follows. Start with Th = (N0/(2√E)) I0⁻¹(e^(E/N0)), where E = 2Eb. Therefore

Th = (N0/(2√(2Eb))) I0⁻¹(e^(2Eb/N0))

or

Th^(normalized) = Th/√(Eb/2) = I0⁻¹(e^(2Eb/N0)) / (2Eb/N0)

Now e^(x cos θ) = I0(x) + 2 Σ_{k=1}^{∞} Ik(x) cos(kθ), with Ik(x) > 0 for all k ≥ 1 and x > 0. Therefore with θ = 0 we have e^x = I0(x) + 2 Σ_{k=1}^{∞} Ik(x) ≥ I0(x), which also means I0⁻¹(e^x) ≥ x, ∀x > 0. Therefore Th^(normalized) ≥ 1.

Figure 10.1: Normalized optimum threshold value versus Eb/N0 (dB), decreasing from about 1.8 at 0 dB toward 1 at 20 dB.

P10.3 The transmitted signal set and signal space are as follows:

1T: s(t) = √E cos(2πfc t)
0T: s(t) = 0   (10.3)

with basis function φT(t) = √(2/Tb) cos(2πfc t); the two signal points sit at 0 and √E.

The received signal set and signal space are

1T: r(t) = √E α cos(2πfc t − θ) + w(t)
0T: r(t) = 0 + w(t)   (10.4)

with basis function φR(t) = √(2/Tb) cos(2πfc t − θ); the received signal points are at 0 and α√E.

Since we assume α, θ are estimated accurately for each transmission, we use θ to set the basis function φR(t) appropriately and α to set the threshold. The decision rule is

r ≷ α√E/2 = α√(Eb/2)   (1D if greater, 0D if smaller)

where Eb = E/2 and r = ∫_0^{Tb} r(t) φR(t) dt.

A First Course in Digital Communications

dy k

¡ ¢ ¡ √ ¢ Note that f (r|0T ) ∼ N 0, N20 and f (r|1T ) ∼ N α 2Eb , N20 . Conditioned on θ and α, the error probability is given by à p ! à r ! α Eb /2 Eb P [error|θ, α] = Q p =Q α N0 N0 /2 The error probability depends on the specific value α (but not on the specific value of θ) that the random variable α assumes during the a specific transmission. Now α takes on values 2 2 from 0 to ∞ according to the Rayleigh pdf, fα (α) = σ2α2 e−α /σF u(α). To find the overall F

1 2

h ³ ´i 1 − erf √x2 . It follows that

Sh

where we have substituted Q(x) =

we

average error probability, average the conditional bit error probability over fα (α), i.e., find à r !# Z " Z ∞ à r ! Eb 1 ∞ Eb fα (α)dα = 1 − erf α fα (α)dα P [error] = Q α N0 2 0 N0 0

1 1 P [error] = − 2 2 σF

Z

à r



α · erf

α

0

Eb N0

!

2

e

− α2

σ F



where Φ(x) =

√2 π

Rx

q

2

e−t dt (G&R, p.930, Eqn. 8.250-1), i.e., Φ(x) = erf(x).

Eb 2N0 ,

µ=

1 2 σF

, and substituting we get

en

Identifying β =

0

&

Perusing Gradstyn & Ryzhik we notice, page 649, Eqn. 6.287-1, that Z ∞ β 2 [Re(µ) > −Re(β 2 ), Re(µ) > 0] xΦ(βx)e−µx dx = p 2 2µ µ + β 0

(10.5)

uy

q σF2 Eb /2N0 1 1 q P [error] = − 2 2 1 + σ 2 E /2N 0 F b

Interpreting σF2 Eb as the received energy per bit and comparing with coherent BFSK (Eqn. (10.80)) we see that coherent BASK’s performance is identical to that of coherent BFSK and both are 3 dB less efficient than coherent BPSK (Eqn. (10.81)).

Ng

As aside, the following shows that σF2 Eb is indeed the average received energy per bit. During any transmission the received energy is ER (α) = 21 (0) + 12 (α2 E) = 21 α2 (2Eb ) = α2 Eb . Therefore the average received energy per bit is Z

E {ER (α)} =

0



(

2 − α2 σ F

2 α Eb αe σF2 {z | 2

)

2

λ= α2 σ

F

z}|{ dα = σF2 Eb }

Z 0



λe−λ dλ = σF2 Eb

(joules/bit).

fα (α)

Note that this is true for both BPSK and BFSK.

Solutions

Page 10–3
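The closed-form average error probability can be checked against direct numerical averaging of Q(α√(Eb/N0)) over the Rayleigh pdf; a sketch with σF = 1 (names are mine):

```python
import math

def avg_ber_rayleigh_bask(gamma_bar):
    # Closed form: 1/2 * (1 - sqrt(g/(1+g))), g = sigma_F^2 * Eb/(2*N0)
    g = gamma_bar / 2
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

def avg_ber_numeric(gamma_bar, steps=20000, alpha_max=8.0):
    # Midpoint-rule average of Q(a*sqrt(Eb/N0)) over f(a) = 2a*exp(-a^2)
    total, d = 0.0, alpha_max / steps
    for i in range(steps):
        a = (i + 0.5) * d
        q = 0.5 * math.erfc(a * math.sqrt(gamma_bar / 2))
        total += q * 2 * a * math.exp(-a * a) * d
    return total

print(avg_ber_rayleigh_bask(10.0), avg_ber_numeric(10.0))
```

Both evaluations agree closely, confirming the G&R-based integration.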

P10.4 We use the approach developed in Section 10.3.3. The main difference is that not only is the received phase θ random but so is the received amplitude, which is scaled by α. Eqn. (10.42), which expresses the received signals over 2 bit intervals, becomes:

r(t) = ±√(2Eb/Tb) α cos(2πfc t − θ) [u(t) − u(t − 2Tb)] + w(t),   "0T"
r(t) = ±√(2Eb/Tb) α cos(2πfc t − θ) {[u(t) − u(t − Tb)] − [u(t − Tb) − u(t − 2Tb)]} + w(t),   "1T"

Rewrite this as

r(t) = ±√(2Eb/Tb) [nF,I cos(2πfc t) + nF,Q sin(2πfc t)] [u(t) − u(t − 2Tb)] + w(t),   "0T"
r(t) = ±√(2Eb/Tb) [nF,I cos(2πfc t) + nF,Q sin(2πfc t)] {[u(t) − u(t − Tb)] − [u(t − Tb) − u(t − 2Tb)]} + w(t),   "1T"

where nF,I = α cos θ, nF,Q = α sin θ are statistically independent, zero-mean Gaussian random variables with identical variance σF²/2.

Again we need the 4 basis functions of Eqn. (10.43) to generate the sufficient statistics. These are as in (10.44), (10.45), the difference being that rather than cos θ, sin θ we have α cos θ, α sin θ, i.e., nF,I, nF,Q.

The sufficient statistics are therefore

0T: r1,I = ±√(2Eb) nF,I + w1,I;   r1,Q = ±√(2Eb) nF,Q + w1,Q;   r2,I = w2,I;   r2,Q = w2,Q
1T: r1,I = w1,I;   r1,Q = w1,Q;   r2,I = ±√(2Eb) nF,I + w2,I;   r2,Q = ±√(2Eb) nF,Q + w2,Q

Observe that, regardless of the case considered, r1,I, r1,Q, r2,I, r2,Q are statistically independent Gaussian random variables. Consider the case of 0T, i.e., f(r1,I, r1,Q, r2,I, r2,Q | 0T). In this situation r1,I is a zero-mean Gaussian random variable of variance σ2² = EbσF² + N0/2, and so is r1,Q, regardless of whether +√(2Eb) or −√(2Eb) is considered. r2,I, r2,Q are zero-mean Gaussian random variables of variance σ1² = N0/2.

With 1T the situation is reversed: r1,I, r1,Q ~ N(0, σ1²) and r2,I, r2,Q ~ N(0, σ2²).

The likelihood ratio test

f(r1,I, r1,Q, r2,I, r2,Q | 1T) / f(r1,I, r1,Q, r2,I, r2,Q | 0T) ≷ 1   (1D/0D)

becomes

e^(−r1,I²/2σ1²) e^(−r1,Q²/2σ1²) e^(−r2,I²/2σ2²) e^(−r2,Q²/2σ2²) ≷ e^(−r1,I²/2σ2²) e^(−r1,Q²/2σ2²) e^(−r2,I²/2σ1²) e^(−r2,Q²/2σ1²)

Taking the natural logarithm, gathering terms and rearranging, the test simplifies to

(r2,I² + r2,Q²)(−1/σ2² + 1/σ1²) ≷ (r1,I² + r1,Q²)(−1/σ2² + 1/σ1²)

Now σ2² > σ1² ⇒ (−1/σ2² + 1/σ1²) > 0 ⇒ this factor can be cancelled from both sides without affecting the inequality relationship. Therefore the decision rule becomes

r2,I² + r2,Q² ≷ r1,I² + r1,Q²   (1D if the left side is larger)

The above makes intuitive sense. Since both the phase and the amplitude information are destroyed by the fading channel, the only way to distinguish whether a 1 or a 0 was transmitted is to compare the energy in the {φ2,I(t), φ2,Q(t)} plane with that in the {φ1,I(t), φ1,Q(t)} plane, which is what the above decision rule does. The decision rule can be rewritten as

√(r2,I² + r2,Q²) ≷ √(r1,I² + r1,Q²)

which looks at the envelope of the received signal in the appropriate plane. To obtain the error performance, the "energy form" of the decision rule is preferred. Also, for purposes of error performance, the variables in the decision rule are considered to be random, i.e., it is written as

ℓ2 ≡ r2,I² + r2,Q² ≷ r1,I² + r1,Q² ≡ ℓ1   (1D/0D)

Due to symmetry the error probability can be written as

P[error] = P[error|0T] = P[ℓ2 ≥ ℓ1 | 0T]

Under the condition that a zero is transmitted, the random variable ℓ2 is chi-square with 2 degrees of freedom and parameter σ1², while ℓ1 is also chi-square, 2 degrees of freedom, parameter σ2². Now fix ℓ1 at a specific value, say ℓ1 = ℓ1. Then

P[error|0T, ℓ1 = ℓ1] = ∫_{ℓ1}^∞ fℓ2(ℓ2|0T) dℓ2 = ∫_{ℓ1}^∞ e^(−ℓ2/2σ1²)/(2Γ(1)σ1²) dℓ2 = e^(−ℓ1/2σ1²)

Sweep ℓ1 over all possible values, weighted, of course, by the probability of that value occurring, i.e.,

P[error|0T] = ∫_0^∞ P[error|0T, ℓ1 = ℓ1] fℓ1(ℓ1|0T) dℓ1 = ∫_0^∞ e^(−ℓ1/2σ1²) e^(−ℓ1/2σ2²)/(2Γ(1)σ2²) dℓ1
            = 1/(1 + σ2²/σ1²) = 1/(1 + (EbσF² + N0/2)/(N0/2)) = (1/2) · 1/(1 + σF²Eb/N0).   (10.6)

Comparison of (10.6) with the performances of coherent BFSK and coherent BPSK (Eqns. (10.80) and (10.81) in Section 10.4.3) is shown in Fig. 10.2. Observe that noncoherently demodulated DBPSK performs basically the same as coherently demodulated BFSK.

Figure 10.2: P[error] versus received SNR per bit (dB) for coherently demodulated BFSK, coherently demodulated BPSK, and noncoherently demodulated DBPSK over the Rayleigh fading channel.

P10.5 (a) The received signal is

1T: rj(t) =  √(E′b) √(2/Tb) αj cos(2πfc t − θj) + w(t)
0T: rj(t) = −√(E′b) √(2/Tb) αj cos(2πfc t − θj) + w(t)

where (j − 1)Tb ≤ t ≤ jTb, j = 1, 2, …, N.

Though the terminology is somewhat ambiguous, indeed one might say misleading, coherent demodulation means that not only the phase θj is estimated perfectly (at least perfectly enough for engineering purposes) but so is the attenuation factor αj.

With θj known, a set of sufficient statistics is generated by projecting the received signal(s) onto the N basis functions √(2/Tb) cos(2πfc t − θj), (j − 1)Tb ≤ t ≤ jTb, j = 1, 2, …, N. They are:

1T: rj =  √(E′b) αj + wj
0T: rj = −√(E′b) αj + wj

where j = 1, 2, …, N. The rj's are independent Gaussian random variables of means ±√(E′b) αj and variance σ² = N0/2. The LRT is

f(r1, r2, …, rN | 1T) / f(r1, r2, …, rN | 0T) ≷ 1   (1D/0D)

which becomes

Π_{j=1}^N (1/(√(2π)σ)) e^(−(rj − √(E′b)αj)²/2σ²) ≷ Π_{j=1}^N (1/(√(2π)σ)) e^(−(rj + √(E′b)αj)²/2σ²)   (1D/0D)

Taking the ln and simplifying gives the following decision rule:

Σ_{j=1}^N αj rj ≷ 0   (1D if ≥ 0, 0D otherwise)

Remark: The above decision rule is actually known as the maximum-ratio-combining (MRC) detection rule. This is because the signal-to-noise ratio of the decision variable is maximized (the proof is left as a further problem).

and that

&

P [error] = P [error|1T ] = P [error|0T ] 

P [error|0T ] = P ` =

N X

 ¯ αj r j ≥ 0¯0T  .

j=1

en

To evaluate the above, start by assuming the αj ’s take on a specific set of values and − − then average over all possible values. That is, let → α = (α1 , α2 , . . . , αN ) = → α = (α1 , α2 , . . . , αN ). Then   N X ¯ → − → → P [error|0T , − α =− α ] = P ` = αj r j ≥ 0¯0T , → α =− α.

uy

j=1

Ng

p Note that the quantities αj rj = − Eb0 αj2 + αj wj are statistically independent Gaussian p 0 2 variables of mean − Eb αj and variance αj2 N0 /2. Therefore ` is a Gaussian random p P N0 PN 2 2 2 variable whose mean is m = − Eb0 N j=1 αj and variance σ = 2 j=1 αj . Therefore, 1 → → P [error|0T , − α =− α] = √ 2πσ

Z



0

2 /2σ 2

e−(`+m)

d` = Q

³m´ σ

s = Q

v

uN X 2Eb0 u t N0

 αj2  .

j=1

But, as mentioned, the above error expression is for a specific set of αj ’s and we view the αj ’s as random. Therefore the error expression should be averaged over all possible values − → Though → of the αj ’s with a weighting provided by the joint pdf of the αj ’s, i.e., f− α ( α ).P 2 this can be done, here we take a (slightly) different approach. Namely let β = N j=1 αj where fαj (αj ) =

2 2αj −α2j /σF u(αj ) 2 e σF

are the statistically independent Rayleigh random

variables. Now recognize that each α2j can be represented as a sum of squares of two Solutions

Page 10–7

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

i.i.d. Gaussian random variables. As such, β is essentially a sum of squares of 2N i.i.d. Gaussian random variables. The pdf of β can be readily determined from Eqn. (10.90), i.e., 1

fβ (β) =

σ 2N (N

2

− 1)!

Therefore

$$P[\text{error}] = P[\text{error}|0_T] = \int_{\beta=0}^{\infty} Q\!\left(\sqrt{\frac{2E_b'\beta}{N_0}}\right) f_\beta(\beta)\,d\beta = \frac{1}{\sigma_F^{2N}}\int_{\beta=0}^{\infty} Q\!\left(\sqrt{\frac{2E_b'\beta}{N_0}}\right)\frac{\beta^{N-1}}{(N-1)!}e^{-\beta/\sigma_F^2}\,d\beta$$

By changing variables $x = \frac{E_b'}{N_0}\beta$, the above expression can be rewritten as

$$P[\text{error}] = \int_{x=0}^{\infty} Q\!\left(\sqrt{2x}\right)\frac{x^{N-1}e^{-x/\mathrm{SNR}}}{\mathrm{SNR}^N(N-1)!}\,dx$$

where $\mathrm{SNR} = \frac{E_b'\sigma_F^2}{N_0}$. Now $Q(\sqrt{2x}) = \frac{1}{\sqrt{2\pi}}\int_{y=\sqrt{2x}}^{\infty}e^{-y^2/2}\,dy$ and therefore

$$P[\text{error}] = \int_{x=0}^{\infty}\left\{\frac{1}{\sqrt{2\pi}}\int_{y=\sqrt{2x}}^{\infty}e^{-y^2/2}\,dy\right\}\frac{x^{N-1}e^{-x/\mathrm{SNR}}}{\mathrm{SNR}^N(N-1)!}\,dx$$

The above integral essentially finds the volume under the 2-dimensional function (surface) $g(x,y) = \frac{1}{\sqrt{2\pi}(N-1)!\,\mathrm{SNR}^N}\,x^{N-1}e^{-x/\mathrm{SNR}}e^{-y^2/2}$ in the region illustrated in Fig. 10.3.

Figure 10.3: Region of integration in the (x, y) plane, bounded below by the curve $y = \sqrt{2x}$ (equivalently $x = y^2/2$).

The key to performing the integration is to change the order of integration, i.e., sweep first w.r.t. x and then w.r.t. y as illustrated in Fig. 10.4.

Figure 10.4: The same region swept in the reversed order: first w.r.t. x from 0 to $y^2/2$ and then w.r.t. y from 0 to $\infty$ (instead of first w.r.t. y from $\sqrt{2x}$ to $\infty$ and then w.r.t. x from 0 to $\infty$).

Therefore:

$$P[\text{error}] = \int_{y=0}^{\infty}\frac{1}{\sqrt{2\pi}}\left\{\int_{x=0}^{y^2/2}\frac{x^{N-1}e^{-x/\mathrm{SNR}}}{\mathrm{SNR}^N(N-1)!}\,dx\right\}e^{-y^2/2}\,dy$$

Consider the inner integral $\int_{x=0}^{y^2/2} x^{N-1}e^{-x/\mathrm{SNR}}\,dx$, which looks like an incomplete Gamma function; from G&R, page 317, Eqn. 3.381.1 with $u = \frac{y^2}{2}$, $\nu = N$, $\mu = \frac{1}{\mathrm{SNR}}$ the integral is $\mathrm{SNR}^{N}\,\gamma\!\left(N, \frac{y^2}{2\,\mathrm{SNR}}\right)$, where $\gamma(\cdot,\cdot)$ is the incomplete Gamma function. Therefore:

$$P[\text{error}] = \frac{1}{(N-1)!}\frac{1}{\sqrt{2\pi}}\int_{y=0}^{\infty}e^{-y^2/2}\,\gamma\!\left(N, \frac{y^2}{2\,\mathrm{SNR}}\right)dy$$

Using the relationship from G&R, page 940, Eqn. 8.352.1 with $1+n \equiv N$, $x \equiv \frac{y^2}{2\,\mathrm{SNR}}$, the incomplete Gamma function becomes:

$$\gamma\!\left(N, \frac{y^2}{2\,\mathrm{SNR}}\right) = (N-1)!\left\{1 - e^{-\frac{y^2}{2\,\mathrm{SNR}}}\sum_{k=0}^{N-1}\frac{y^{2k}}{2^k\,\mathrm{SNR}^k\,k!}\right\}$$

Therefore the error probability can be written as:

$$P[\text{error}] = \frac{1}{\sqrt{2\pi}}\int_{y=0}^{\infty}e^{-y^2/2}\,dy - \frac{1}{\sqrt{2\pi}}\int_{y=0}^{\infty}e^{-\left(\frac{1}{2}+\frac{1}{2\,\mathrm{SNR}}\right)y^2}\sum_{k=0}^{N-1}\frac{y^{2k}}{2^k\,\mathrm{SNR}^k\,k!}\,dy$$

Now $\frac{1}{\sqrt{2\pi}}\int_{y=0}^{\infty}e^{-y^2/2}\,dy = \frac{1}{2}$ (half the area of a zero-mean, unit-variance Gaussian pdf) and, changing variables in the 2nd integral, $\lambda \equiv \frac{y^2}{2}$, simplifying, etc., we get

$$P[\text{error}] = \frac{1}{2} - \frac{1}{2\sqrt{\pi}}\sum_{k=0}^{N-1}\frac{1}{\mathrm{SNR}^k\,k!}\int_{\lambda=0}^{\infty}\lambda^{k-\frac{1}{2}}e^{-\left(\frac{\mathrm{SNR}+1}{\mathrm{SNR}}\right)\lambda}\,d\lambda$$


The integral now looks like a complete Gamma function and from G&R, page 317, Eqn. 3.381.4 with $\gamma - 1 \equiv k - \frac{1}{2} \Rightarrow \gamma = k + \frac{1}{2}$, $\mu = \frac{\mathrm{SNR}+1}{\mathrm{SNR}}$, we see that the integral is $\Gamma\!\left(k+\frac{1}{2}\right)\Big/\left(\frac{\mathrm{SNR}+1}{\mathrm{SNR}}\right)^{k+\frac{1}{2}}$, where $\Gamma(\cdot)$ is the complete Gamma function.

Now $\Gamma\!\left(k+\frac{1}{2}\right) = \frac{\sqrt{\pi}}{2^k}(2k-1)!!$ (notation: $(2k-1)!! = 1\cdot3\cdot5\cdots(2k-3)\cdot(2k-1)$). Using this relationship and after some algebra, one gets:

$$P[\text{error}] = \frac{1}{2} - \frac{1}{2}\sum_{k=0}^{N-1}\frac{(2k-1)!!}{k!\,2^k\,\mathrm{SNR}^k}\left(\frac{\mathrm{SNR}}{\mathrm{SNR}+1}\right)^{k+\frac{1}{2}}$$

Define the parameter $\mu \equiv \sqrt{\frac{\mathrm{SNR}}{\mathrm{SNR}+1}} \Rightarrow \mathrm{SNR} = \frac{\mu^2}{1-\mu^2}$ and $\left(\frac{\mathrm{SNR}}{\mathrm{SNR}+1}\right)^{k+\frac{1}{2}} = \left(\mu^2\right)^{k+\frac{1}{2}} = \mu^{2k+1}$. In terms of $\mu$, the error probability is

$$P[\text{error}] = \frac{1}{2} - \frac{1}{2}\sum_{k=0}^{N-1}\frac{(2k-1)!!}{k!\,2^k}\left(\frac{1-\mu^2}{\mu^2}\right)^k\mu^{2k+1}$$

Now $\left(\frac{1-\mu^2}{\mu^2}\right)^k = \left(\frac{1-\mu}{2}\right)^k\left(\frac{1+\mu}{2}\right)^k\frac{2^{2k}}{\mu^{2k}}$. Therefore

$$P[\text{error}] = \frac{1}{2} - \frac{1}{2}\sum_{k=0}^{N-1}\frac{(2k-1)!!}{k!}2^k\left(\frac{1-\mu}{2}\right)^k\left(\frac{1+\mu}{2}\right)^k\mu$$

Write $\frac{(2k-1)!!}{k!} = \frac{1\cdot3\cdot5\cdots(2k-1)}{k!}\times\frac{2\cdot4\cdot6\cdots(2k)}{2\cdot4\cdot6\cdots(2k)}$. But $2\cdot4\cdot6\cdots(2k-2)\cdot(2k) = 2^k\,k!$, so $\frac{(2k-1)!!}{k!}2^k = \frac{(2k)!}{k!\,k!} = \binom{2k}{k}$. Finally,

$$P[\text{error}] = \frac{1}{2} - \frac{1}{2}\sum_{k=0}^{N-1}\binom{2k}{k}\left(\frac{1-\mu}{2}\right)^k\left(\frac{1+\mu}{2}\right)^k\mu \qquad (10.7)$$

except we want to write it in the form of (P10.1)!

We next show that (10.7) is indeed the same as (P10.1) by induction. This means that we first confirm that (10.7) and (P10.1) are the same for N = 1 (quite obvious) and assume that they are also the same for N = L, i.e.,

$$\frac{1}{2} - \frac{1}{2}\sum_{k=0}^{L-1}\binom{2k}{k}\left(\frac{1-\mu}{2}\right)^k\left(\frac{1+\mu}{2}\right)^k\mu = \left(\frac{1-\mu}{2}\right)^L\sum_{m=0}^{L-1}\binom{L-1+m}{m}\left(\frac{1+\mu}{2}\right)^m. \qquad (10.8)$$

Then we need to show that (10.7) and (P10.1) are indeed the same for N = L + 1. This is accomplished with the following manipulations:

$$\frac{1}{2}\left[1 - \sum_{k=0}^{L}\binom{2k}{k}\left(\frac{1-\mu}{2}\right)^k\left(\frac{1+\mu}{2}\right)^k\mu\right] = \frac{1}{2}\left[1 - \sum_{k=0}^{L-1}\binom{2k}{k}\left(\frac{1-\mu}{2}\right)^k\left(\frac{1+\mu}{2}\right)^k\mu\right] - \frac{1}{2}\binom{2L}{L}\left(\frac{1-\mu}{2}\right)^L\left(\frac{1+\mu}{2}\right)^L\mu$$

Using the assumption of (10.8) this can be written as (where the term $\left(\frac{1-\mu}{2}\right)^L$ is factored out):

$$\left(\frac{1-\mu}{2}\right)^L\left[\sum_{m=0}^{L-1}\binom{L-1+m}{m}\left(\frac{1+\mu}{2}\right)^m - \frac{1}{2}\binom{2L}{L}\left(\frac{1+\mu}{2}\right)^L\mu\right] \qquad (10.9)$$

Now

$$\binom{L-1+m}{m} = \frac{(L-1+m)!}{m!\,(L-1)!} = \frac{(L+m)!}{m!\,L!}\cdot\frac{L}{L+m} = \binom{L+m}{m}\left(1 - \frac{m}{L+m}\right)$$

Therefore (10.9) becomes

$$\left(\frac{1-\mu}{2}\right)^L\left[\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m - \sum_{m=0}^{L-2}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^{m+1} - \frac{1}{2}\binom{2L}{L}\left(\frac{1+\mu}{2}\right)^L\mu\right] \qquad (10.10)$$

where

$$\sum_{m=0}^{L-1}\frac{m}{L+m}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m \overset{k=m-1}{=} \sum_{k=0}^{L-2}\frac{k+1}{L+k+1}\binom{L+k+1}{k+1}\left(\frac{1+\mu}{2}\right)^{k+1}$$

But

$$\frac{k+1}{L+k+1}\binom{L+k+1}{k+1} = \frac{k+1}{L+k+1}\cdot\frac{(L+k+1)!}{L!\,(k+1)!} = \frac{(L+k)!}{L!\,k!} = \binom{L+k}{k}$$

Changing the dummy index k to m we get (10.10) in the following form:

$$\left(\frac{1-\mu}{2}\right)^L\left[\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m - \sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^{m+1} + \frac{1}{2}\binom{2L}{L}\left(\frac{1+\mu}{2}\right)^L - \frac{1}{2}\binom{2L}{L}\left(\frac{1+\mu}{2}\right)^L\mu\right] \qquad (10.11)$$

where the term $\frac{1}{2}\binom{2L}{L}\left(\frac{1+\mu}{2}\right)^L$ is added and subtracted (note $\binom{2L-1}{L-1} = \frac{1}{2}\binom{2L}{L}$).

Since, obviously,

$$\left(\frac{1+\mu}{2}\right)^{m+1} = \left(\frac{1+\mu}{2}\right)^m\left(\frac{1}{2}+\frac{\mu}{2}\right) = \frac{1}{2}\left(\frac{1+\mu}{2}\right)^m + \frac{\mu}{2}\left(\frac{1+\mu}{2}\right)^m$$

(10.11) can be written as

$$\left(\frac{1-\mu}{2}\right)^L\left[\frac{1}{2}\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m - \frac{\mu}{2}\sum_{m=0}^{L-1}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m + \frac{1}{2}\binom{2L}{L}\left(\frac{1+\mu}{2}\right)^L - \frac{\mu}{2}\binom{2L}{L}\left(\frac{1+\mu}{2}\right)^L\right] \qquad (10.12)$$

Gathering terms appropriately (the $\binom{2L}{L}$ terms are exactly the m = L terms of the sums), (10.12) becomes

$$\left(\frac{1-\mu}{2}\right)^L\left[\frac{1}{2}\sum_{m=0}^{L}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m - \frac{\mu}{2}\sum_{m=0}^{L}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m\right],$$

which (finally) equals:

$$\left(\frac{1-\mu}{2}\right)^{L+1}\sum_{m=0}^{L}\binom{L+m}{m}\left(\frac{1+\mu}{2}\right)^m \qquad (10.13)$$

which is (P10.1) with N = L + 1, completing the induction.
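The equivalence just established between (10.7) and the form (P10.1), as well as the high-SNR approximation derived in part (c) below, can be spot-checked numerically. The following sketch (an illustration, not part of the text) evaluates both closed forms:

```python
from math import comb, sqrt

# Spot-check that form (10.7) and form (P10.1) agree, mu = sqrt(SNR/(SNR+1)).

def p_error_form_a(N, snr):          # form (10.7)
    mu = sqrt(snr / (snr + 1))
    s = sum(comb(2 * k, k) * ((1 - mu) / 2) ** k * ((1 + mu) / 2) ** k
            for k in range(N))
    return 0.5 - 0.5 * mu * s

def p_error_form_b(N, snr):          # form (P10.1)
    mu = sqrt(snr / (snr + 1))
    s = sum(comb(N - 1 + m, m) * ((1 + mu) / 2) ** m for m in range(N))
    return ((1 - mu) / 2) ** N * s

def p_error_approx(N, snr):          # high-SNR approximation of part (c)
    return comb(2 * N - 1, N) / (4 * snr) ** N

for N in (1, 2, 3, 5):
    for snr in (0.5, 2.0, 10.0, 100.0):
        assert abs(p_error_form_a(N, snr) - p_error_form_b(N, snr)) < 1e-12

# At large SNR the exact expression approaches the N-th-power approximation.
assert abs(p_error_form_b(2, 1000.0) / p_error_approx(2, 1000.0) - 1) < 0.01
```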

(c) With approximations, (10.13) with L = N − 1 becomes

$$P[\text{error}] \approx \left(\frac{1}{4\,\mathrm{SNR}}\right)^N\sum_{k=0}^{N-1}\binom{N-1+k}{k} = \binom{2N-1}{N}\frac{1}{4^N}\frac{1}{\mathrm{SNR}^N}$$

The error probability decreases as the N-th power of the signal-to-noise ratio; N is usually called the diversity gain (or order) of the system. Note that this performance behavior is the same as with noncoherent FSK, but there is a 3 dB saving in power, at the expense of a more complicated receiver, since the phase and attenuation need to be estimated.

P10.6 To be added.

uy

P10.7 (a) A constant does not vary very much, indeed not at all. Therefore its variance is equal to 0. Further the expected or average value of a constant is the constant itself1 . Therefore the amount of fading is AF = 0/constant = 0. Z ∞ 2 2 (b) var{α2 } = E{α4 } − E 2 {α2 }. E{α2 } = σ22 α3 e−α /σF dα and after a change of variF 0 Z ∞ R∞ 2 2 2 −λ 2 able this becomes σF λe dλ = σF . E{α4 } = σ22 0 α5 e−α /σF dα and with the 0 Z ∞ F 4 same change of variable this becomes σF λ2 e−λ dλ = 2σF4 . Therefore AF =

4 −σ 4 2σF F 2 )2 (σF

0

= 1.

(c) First find a general expression for E{αn } and then evaluate it for n = 4 and n = 2. E{αn } =

2mm Γ(m)σ 2m

Z



2 /σ 2

αn+2m−1 e−mα

dα.

0

1

For an anecdote about a totally absurd proof that the expected value of a constant is the constant, contact the second author.

Solutions

Page 10–12

Nguyen & Shwedyk mα2 . σ2

Then

mm σ n+2m−2 ¢ ¡ E{α } = m Γ(m)σ 2m m n+2m−2 2

Z

n

For n = 2, E{α2 } =

E{α4 } =

λ 0

n+2m−2 2

e−λ dα =

³ σn n´ Γ m + 2 Γ(m)mn/2

σ2 σ 2 mΓ(m) Γ(m + 1) = = σ2 Γ(m)m Γ(m)m

σ4 σ 4 (m + 1) σ4 4 Γ(m + 2) = = σ + Γ(m)m2 m m

we

For n = 4,



dy k

Let λ →

A First Course in Digital Communications

1 Therefore AF = m . Note that when m = 1 the Nakagami distribution is Rayleigh and as m → ∞, it tends to a Gaussian pdf. Plots of the Nakagami-m pdf for various values of m are shown in Fig. 10.5. 2

Sh

Normalized Nakagami−m Distribution (with σ =1) 3

2.5

m=1 (Rayleigh) m=2 m=4 m=10

1.5

en

1

&

α

f (α)

2

0.5

uy

0 0

0.5

1

1.5 α

2

2.5

3

Figure 10.5

Ng

P10.8 (a) If the phase is π radians then the sign(s) of the transmitted signal is reversed, i.e., a bit zero is represented by a signal pattern where the transmitted signal in the (k − 1)th bit interval and kth bit interval are the same: either (−, −) or (+, +). On the other hand, for bit one the pattern is (+, −) or (−, +). In essence, nothing changes. (b) Start by observing that we want dk = dk−1 if the present bit bk = 0 and dk = dk−1 if bk = 1. The truth table for dk , dk−1 , bk is

Solutions

dk−1 0 (π rad) 0 (π rad) 1 (0◦ ) 1 (0◦ )

bk 0 1 0 1

dk 0 (π rad) 0 (0◦ ) 1 (0◦ ) 0 (π rad)

Page 10–13

Nguyen & Shwedyk

A First Course in Digital Communications

bk

dy k

From the truth table one sees (easily) that dk = dk−1 ⊕ bk . In block diagram form this means dk

d k −1

Delay

we

(Tb second)

Sh

Remark: The above is what is discussed in Chapter 6, Section 6.6, differential modulation, pages 251-252. q 2 cos(2πfc t) be the orthogonal basis for the (k − 1)th bit interval (c) Let φ(k−1) (t) = q Tb and φ(k) (t) = T2b cos(2πfc t) for the kth bit interval. Then the signal space plot of the signal set of (10.40) becomes

φ ( k ) (t )

Eb

0T

&

1T − Eb

en

0T

Eb 0

− Eb

φ ( k −1) (t )

1T

uy

It is identical (perhaps one should say isomorphic) to the signal space plot for Miller modulation. (d) Based on (b), consider the following coherent demodulation: first demodulation to dˆk and then to ˆbk . The block diagram looks as follows:

Ng

kTb

r (t )

2 cos(2π f ct ) Tb



( k −1) Tb

(•)dt

dk

b k

t = kTb Comparator

Delay (Tb seconds)

d k −1

Inverse of mapping at modulator

Of course it is possible to demodulate by projecting r(t) onto {φ(k−1) (t), φk (t)} (the signal space of bit dk ) and choosing the closest signal point. But this would require two matched filters (or two correlators).

Solutions

Page 10–14

Nguyen & Shwedyk

A First Course in Digital Communications

we

dy k

(e) First, observe that P [error] = P [error|0T ]. Second, note that ˆbk is in error if (i) dˆk−1 is in error and dˆk is correct or (ii) if dˆk−1 is correct and dˆk is in error, i.e., if both dˆk , dˆk−1 are correct then ˆbk is correct and if both dˆk , dˆk−1 are incorrect then ˆbk is also correct (who said “two wrongs do not make a right”). Since the events dˆk−1 , dˆk are statistically independent (remember noise is white and Gaussian) and the 2 events (i), (ii) above are mutually exclusive, one has à √ !" à √ !# Ãr !" Ãr !# Eb Eb 2Eb 2Eb 1−Q p = 2Q 1−Q . P [error] = 2Q p N0 N0 N0 /2 N0 /2 Plots of the error performance for this phase uncertainty model and that of (10.48) are shown in Fig. 10.6. Observe that coherent demodulation saves about 0.5 dB in Eb /N0 .

10

Figure 10.6: P[bit error] versus Eb/N0 (dB) for DBPSK and coherently demodulated DBPSK.

P10.9 (a) One possible relative phase change, ∆φ, mapping is as follows: 00 → ∆φ = 0 (radians) π 01 → ∆φ = 2 11 → ∆φ = π 3π ³ π´ 10 → ∆φ = or − 2 2

(b) Since there is memory, let us assume an initial condition of 0 radians. Parse the sequence into 2 bit sequences and determine the transmitted phase sequence. Solutions

Page 10–15

Nguyen & Shwedyk

A First Course in Digital Communications 01

11

11

00

π 2

3π 2

π 2

π 2

01 π

10

10 0

π 2

... . . . (transmitted phase)

dy k

0 ↑ initial condition As can be seen and as expected, the absolute transmitted phase has no meaning. It is the relative (or differential) phase that conveys the information. (c) The trellis looks as shown below, where the state is defined as the previous transmitted phase. Note that the trellis is fully developed after one Ts interval.

0 / 00

0

π

π

2

π 11

01 0 /10

2

π



2

11

00 π 01

2

0 / 11

π

2

10

π

2

11

⋅ ⋅ ⋅



&

π 00

π

2

01

π 10

0 / 01



2

00

en



01

Sh



2

we

Output phase Input 2 bit sequence

State d k −1 (rad)

(k − 1)Ts

2

kTs

Ng

uy

(d) Because of the memory of one symbol interval we shall look at the signal in the previous symbol interval along with q the present symbol q interval. A signal in a symbol interval lies 2 in 2 dimensions, that of { Tb cos(2πfc t), T2b sin(2πfc t)}, which means 4 dimensions are needed to represent the signal over 2 symbol intervals. Along each dimension the √ signal component takes on one of 2 possible value (± Eb ). Therefore there are 24 = 16 possible signals lying in the 4-dimensional signal space. Each signal lies in one of the 16 quadrants, indeed each quadrant has only 1 signal lying in it. Remark: More detail as to the actual signal points is provided in the next problem. (e) Start by considering the following “truth table”, perhaps it is more appropriately called a desired table.

Solutions

Page 10–16

A First Course in Digital Communications

0

0



0

1



0



Phase changes by + π2 radians dk−1,Q dk−1,I 1

1

0

0

1

0

0

1



Phase changes by − π2 radians dk−1,Q dk−1,I

dk,I

dk−1,Q

dk−1,I



Now how dk,Q , dk,I behave depends on the bit pattern dk−1,Q , dk−1,I



dk−1,Q

dk−1,I

dk−1,Q

dk−1,I

dk−1,Q

dk−1,I

dk−1,Q

dk−1,I

Again change in dk,Q , dk,I depends on the bit pattern dk−1,Q , dk−1,I

1

dk−1,Q

dk−1,I

0

0

dk−1,Q

dk−1,I

1

0

dk−1,Q

dk−1,I

1

dk−1,Q

dk−1,I

dk−1,Q

dk−1,I

&

1

0

1



uy

1

dk,Q

en

1

Phase stays the same as the previous phase

we

bk,I

Sh

bk,Q

dy k

Nguyen & Shwedyk

Phase changes by +π radians



Remark: To see the above relationships between the {dk,Q , dk,I } bits and the {dk−1,Q , dk−1,I } bits, especially for the 01, 10 bit patterns for bk,Q , bk,I you may wish to sketch a few (I, Q) signal space diagrams.

Ng

Now to form the Boolean expressions. Let us first consider the inphase bit dk,I . The formation shall not proceed by formal logic design procedures but more by intuition, in other words by (hopefully) intelligent trial and error. The first observation is that the pattern for the dk,I bit is similar for two subsets of the bk,Q , bk,I bit pattern, namely (00, 11) and (01, 10). So let us start with the expression bk,Q ⊕ bk,I which would create a selector bit of 0 or 1, respectively for the two subsets. Consider subset (00, 11). Expression bk,Q ⊕ bk,I shall be equal to 1 in this case. Note that bk,I ⊕ dk−1,I has as its output dk−1,I when bk,I = 0 and dk−1,I when bk,I = 1. So AND this with bk,Q ⊕ bk,I and the subset (00,11) is accounted for. Thus far we have dk,I = (bk,Q ⊕ bk,I ) AND (bk,I ⊕ dk−1,I ).

Solutions

Page 10–17

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

To account for the 2nd subset, OR the above expression with an expression to be determined for the 2nd subset. Since when the 2nd subset is relevant, the above expression is 0, we can develop this expression independently. To see what this expression is, consider the table below where just dk,I is considered and the Boolean expressions (bk,Q ⊕ dk−1,Q ) and (bk,I ⊕ dk−1,I ) are also shown. bk,I

dk−1,Q

dk−1,I

dk,I

0

1

1

1

dk,I = dk−1,I = 0

0

0

1 1

0

1 0

1

0

dk,I = dk−1,I = 1

0

1

0

dk,I = dk−1,I = 0

1

0

1

dk,I = dk−1,I = 1

0

1

1

dk,I = dk−1,I = 1

0

1

0

dk,I = dk−1,I = 0

1

0

0

dk,I = dk−1,I = 1

0

1

1

0

&

1

(bk,I ⊕ dk−1,Q )

Sh

1

(bk,Q ⊕ dk−1,Q )

we

bk,Q

0

1

dk,I = dk−1,I = 0

en

Note that both “track” the dk,I bit, albeit one in a complementary fashion. Therefore form the following Boolean expression for the 2nd subset: (bk,Q ⊕bk,I ) AND (bk,Q ⊕ dk−1,Q ). The final overall expression for the inphase bit is £ ¤ £ ¤ dk,I = (bk,Q ⊕ bk,I ) AND (bk,I ⊕ dk−1,I ) OR (bk,Q ⊕ bk,I ) AND (bk,Q ⊕ dk−1,Q )

uy

Remark: One could also use (bk,I ⊕ dk−1,Q ). Do it? Also convince yourself that (bk,I ⊕ dk−1,I ) or (bk,Q ⊕ dk−1,I ) are not appropriate to be used. For the quadrature bit reason as above, or use “symmetry”, or guess, to get £ ¤ £ ¤ dk,Q = (bk,Q ⊕ bk,I ) AND (bk,Q ⊕ dk−1,Q ) OR (bk,Q ⊕ bk,I ) AND (bk,I ⊕ dk−1,Q )
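Because the overbars in the printed Boolean expressions are easy to misread, the same differential mapping can be sketched with rotation arithmetic on the quadrant index instead of gates. The sketch below is an assumption-laden illustration (it assumes the quadrant phase index p stands for π/4 + p·π/2 and uses the ∆φ table of P10.9(a)); it is not the text's gate-level realization:

```python
# Dibit -> quadrant mapping by integer rotation of the phase index.
# Index p in {0,1,2,3} stands (assumed) for phase pi/4 + p*pi/2; the dibit
# (b_kQ, b_kI) selects the rotation per P10.9(a): 00->0, 01->+pi/2,
# 11->+pi, 10->-pi/2.

ROT = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}   # units of pi/2

def next_quadrant(p_prev, bQ, bI):
    return (p_prev + ROT[(bQ, bI)]) % 4

p = 0
p = next_quadrant(p, 0, 0); assert p == 0   # 00: phase unchanged
p = next_quadrant(p, 1, 1); assert p == 2   # 11: +pi
p = next_quadrant(p, 0, 1); assert p == 3   # 01: +pi/2
p = next_quadrant(p, 1, 0); assert p == 2   # 10: -pi/2
```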

P10.10 (a) There are four basis functions, namely

$$\phi_{k,I}(t) = \sqrt{2/T_s}\cos(2\pi f_c t), \qquad \phi_{k,Q}(t) = \sqrt{2/T_s}\sin(2\pi f_c t), \qquad (k-1)T_s \le t \le kT_s \ \text{(present symbol interval)}$$

and

$$\phi_{k-1,I}(t) = \sqrt{2/T_s}\cos(2\pi f_c t), \qquad \phi_{k-1,Q}(t) = \sqrt{2/T_s}\sin(2\pi f_c t), \qquad (k-2)T_s \le t \le (k-1)T_s \ \text{(previous symbol interval)}.$$

(b) $2^4$, i.e., 16 possible bit patterns.

(c) $2^4 = 16$ quadrants.

we

(d) Using Table 10.2 of P10.9 bk−3 bk−2 bk−1 bk Previous phase Present phase π π 0 0 0 0 i) 4 4 3π 3π ii) 4 4 5π 5π iii) 4 4 7π 7π iv) 4 4 Since the present 2 bits are 00, the present phase is the same as the previous phase, i.e., does not change. φk −1,Q (t )

φk ,Q (t ) Eb

− Eb

0

Signal iii)

φk −1,Q (t )

φk −1, I (t )

φk −1, I (t )

0

φk ,Q (t )

− Eb

φk −1,Q (t )

φk ,Q (t )

0

φk −1, I (t )

− Eb

Eb 0

φk , I (t )

− Eb

− Eb

uy

φk , I (t )

0

− Eb

Eb

φk , I (t )

0

− Eb

en

Signal iv)

− Eb

&

− Eb

Sh

Eb

Figure 10.7

To determine which quadrant(s) the signals fall in, the notation needs to be clarified further. Let the components along a specific axis be ±1 as implied and let the axes be ordered as follows: (φk−1,I, φk−1,Q, φk,I, φk,Q). Then the quadrants in this notation are: i) (+1, +1, +1, +1); ii) (−1, +1, −1, +1); iii) (−1, −1, −1, −1); iv) (+1, −1, +1, −1); or, if we let −1 ↔ bit 0, +1 ↔ bit 1, then in binary the quadrants are 1111, 0101, 0000, 1010, or in decimal, taking the left-most bit as being the least significant, 15, 5, 0, 10.

(e) The previous phase can still be any one of the 4 values (π/4, 3π/4, 5π/4, 7π/4) and since the present 2 bits are 00, which means ∆φ = 0, the present phase is also one of (π/4, 3π/4, 5π/4, 7π/4) as in (d).


(f) For bk−3bk−2bk−1bk = 0001 (the present 2 bits are 01, so ∆φ = π/2):

       Previous phase   Present phase
  i)        π/4             3π/4
  ii)      3π/4             5π/4
  iii)     5π/4             7π/4
  iv)      7π/4              π/4

Figure 10.8: Signal space plots of the four signals for bk−1bk = 01.


The signals fall in the following quadrants:

  i)   (+1, +1, −1, +1)   binary 1101   decimal 13
  ii)  (−1, +1, −1, −1)   binary 0100   decimal 4
  iii) (−1, −1, +1, −1)   binary 0010   decimal 2
  iv)  (+1, −1, +1, +1)   binary 1011   decimal 11

Consider now bk−3bk−2bk−1bk = 0011 (the present 2 bits are 11, so ∆φ = π):

       Previous phase   Present phase
  i)        π/4             5π/4
  ii)      3π/4             7π/4
  iii)     5π/4              π/4
  iv)      7π/4             3π/4

If the previous bits bk−3bk−2 are 01, 10, or 11 we still have the same set of signals. As in (e), the previous phase can still be any of the 4 values and the present phase is a "relative" shift (the same ∆φ) with respect to these 4 values.


Figure 10.9: Signal space plots of the four signals for bk−1bk = 11.

The signals fall in the following quadrants:

  i)   (+1, +1, −1, −1)   binary 1100   decimal 12
  ii)  (−1, +1, +1, −1)   binary 0110   decimal 6
  iii) (−1, −1, +1, +1)   binary 0011   decimal 3
  iv)  (+1, −1, −1, +1)   binary 1001   decimal 9


Finally, consider bk−3bk−2bk−1bk = 0010 (the present 2 bits are 10, so ∆φ = 3π/2):

       Previous phase   Present phase
  i)        π/4             7π/4
  ii)      3π/4              π/4
  iii)     5π/4             3π/4
  iv)      7π/4             5π/4

When bk−3bk−2 are 01, 10, or 11, the same argument as before applies, i.e., we have the same signal set.

Figure 10.10: Signal space plots of the four signals for bk−1bk = 10.

The signals fall in the following quadrants:

  i)   (+1, +1, +1, −1)   binary 1110   decimal 14
  ii)  (−1, +1, +1, +1)   binary 0111   decimal 7
  iii) (−1, −1, −1, +1)   binary 0001   decimal 1
  iv)  (+1, −1, −1, −1)   binary 1000   decimal 8

Again bk−3bk−2 = 01, 10, 11 results in the same set of signal points for bk−1bk = 10.


(g) To develop the demodulator, observe that the signal points lie as follows for the various possibilities of bk−1bk (quadrants listed in decimal):

  bk−1bk = 00: quadrants 15, 5, 0, 10
  bk−1bk = 01: quadrants 13, 4, 2, 11
  bk−1bk = 11: quadrants 12, 6, 3, 9
  bk−1bk = 10: quadrants 14, 7, 1, 8

The observation is that they lie in disjoint quadrants (or disjoint quadrant subsets). Therefore, to make a decision on the symbol bk−1bk, decide which quadrant the received signal lies in (more accurately, which quadrant the sufficient statistics lie in) and choose the bk−1bk symbol that belongs to this quadrant. The sufficient statistics are the projections of r(t) onto the basis functions φk−1,I(t), φk−1,Q(t), φk,I(t), φk,Q(t).

Figure 10.11: Block diagram of the demodulator: four correlators integrate r(t) against the basis functions over the appropriate symbol interval and are sampled at t = (k−1)Ts or t = kTs, producing r1,I, r1,Q, r2,I, r2,Q; a decision device determines which quadrant they fall in and chooses the corresponding symbol.

Remarks:

(i) The above block diagram is a minimum-distance receiver, i.e., the signal point r(t) is closest to is chosen and the symbol bk bk−1 it corresponds to is decided on. Remember we are in white Gaussian noise. (ii) As an example quadrant 6 is chosen and symbol ˆbkˆbk−1 = 11 is decided on if (r1,I < 0) and (r1,Q > 0) and (r2,I > 0) and (r2,Q < 0).
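The quadrant decision of remark (ii) can be sketched as a sign-and-lookup operation. The sketch below is an illustration, with one assumption flagged: the quadruple (ordered φk−1,I, φk−1,Q, φk,I, φk,Q, with −1 → 0 and +1 → 1) is read here as an ordinary MSB-first binary number, which reproduces the quadrant numbers of the table above:

```python
# Map sufficient-statistic signs to a quadrant number, then to the dibit.

QUAD_TO_DIBIT = {}
for dibit, quads in {'00': (15, 5, 0, 10), '01': (13, 4, 2, 11),
                     '11': (12, 6, 3, 9),  '10': (14, 7, 1, 8)}.items():
    for q in quads:
        QUAD_TO_DIBIT[q] = dibit

def decide(r1I, r1Q, r2I, r2Q):
    bits = [1 if r > 0 else 0 for r in (r1I, r1Q, r2I, r2Q)]
    quadrant = bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]
    return quadrant, QUAD_TO_DIBIT[quadrant]

# Example from remark (ii): r1I < 0, r1Q > 0, r2I > 0, r2Q < 0 -> quadrant 6
assert decide(-0.3, 0.8, 0.5, -0.1) == (6, '11')
```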


P10.11 From Problems 10.8 and 10.9 let us make the following observations:

(i) The received signal over two symbol intervals can be written as:

$$r(t) = \pm\sqrt{E_b}\,\phi_{k-1,I}(t) \pm \sqrt{E_b}\,\phi_{k-1,Q}(t) \pm \sqrt{E_b}\,\phi_{k,I}(t) \pm \sqrt{E_b}\,\phi_{k,Q}(t) + w(t)$$

Therefore the sufficient statistics $r_{1,I}, r_{1,Q}, r_{2,I}, r_{2,Q}$ are of the form $\pm\sqrt{E_b} + w$, i.e., statistically independent Gaussian random variables of mean $\pm\sqrt{E_b}$ and variance $N_0/2$. Note that $E_b$ is readily interpreted as the transmitted energy per bit.

(ii) Because, as usual, the information bits $b_k$ are assumed to be equally probable and statistically independent, there is symmetry in the transmitted (and received) signal space. We have $P[\text{symbol error}] = P[\text{symbol error}|b_k b_{k-1} = 00]$; indeed we can refine this further as $P[\text{symbol error}] = P[\text{symbol error}|\text{a specific quadrant signal transmitted}]$.

(iii) Let us choose as the specific quadrant signal the one that corresponds to quadrant 5, or (−1, +1, −1, +1), i.e., the signal $-\sqrt{E_b}\,\phi_{k-1,I}(t) + \sqrt{E_b}\,\phi_{k-1,Q}(t) - \sqrt{E_b}\,\phi_{k,I}(t) + \sqrt{E_b}\,\phi_{k,Q}(t)$. A symbol error is made if $r_{1,I}, r_{1,Q}, r_{2,I}, r_{2,Q}$ fall in quadrants 1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14. Note that if they fall in quadrants 0, 10, or 15, though an error is made in the transmitted signal, a correct decision is still made on the transmitted information bits $b_k b_{k-1} = 00$.

(iv) So one needs to calculate the volume under $f(r_{1,I}, r_{1,Q}, r_{2,I}, r_{2,Q}|\text{signal in quadrant 5})$ in the quadrants in which a symbol error is made. This can be done (almost) by inspection. Consider the volume in quadrant 2, i.e., quadrant (−1, −1, +1, −1). Note that there is agreement in 1 of the components, specifically $r_{1,I}$, and disagreement in 3 components, namely $r_{1,Q}, r_{2,I}, r_{2,Q}$.

Figure 10.12: Each sufficient statistic is Gaussian with mean $\pm\sqrt{E_b}$ and variance $N_0/2$; the "agreement contribution" is the probability mass on the correct side of 0, the "disagreement contribution" the probability mass on the wrong side.

Quantified mathematically,

$$\text{Agreement "contribution"} = 1 - Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right), \qquad \text{Disagreement "contribution"} = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right)$$

The volume in a quadrant is then a product of the four "contributions", 1 agreement and 3 disagreements in this case, i.e., $\left[1-Q\!\left(\sqrt{2E_b/N_0}\right)\right]Q^3\!\left(\sqrt{2E_b/N_0}\right)$ for quadrant 2. Based on this reasoning, set up the following table, where the argument $\sqrt{2E_b/N_0}$ of the Q(·) function is understood:

  Quadrant           Decimal   Error volume contribution
  (−1, −1, +1, −1)      2       (1 − Q)Q³
  (−1, +1, −1, −1)      4       (1 − Q)³Q
  (+1, −1, +1, +1)     11       (1 − Q)Q³
  (+1, +1, −1, +1)     13       (1 − Q)³Q
  (−1, −1, −1, +1)      1       (1 − Q)³Q
  (−1, +1, +1, +1)      7       (1 − Q)³Q
  (+1, −1, −1, −1)      8       (1 − Q)Q³
  (+1, +1, +1, −1)     14       (1 − Q)Q³
  (−1, −1, +1, +1)      3       (1 − Q)²Q²
  (−1, +1, +1, −1)      6       (1 − Q)²Q²
  (+1, −1, −1, +1)      9       (1 − Q)²Q²
  (+1, +1, −1, −1)     12       (1 − Q)²Q²

en

&

The probability of symbol error is the sum of the above volumes, remember the events are mutually exclusive. A little algebra yields ! Ãr ! Ãr ! Ãr !# " Ãr 2Eb 2Eb 2Eb 2Eb 2 3 4 − 2Q + 2Q −Q . P [symbol error] = 4 Q N0 N0 N0 N0 (10.14) Remark: An error expression is given in “Telecommunication Systems Engineering”, W. C. Lindsay & M. K. Simon, Doves, 1973, p. 246. The derivation is quite different. Compare. Also the Viterbi algorithm can be applied here. Left as an exercise.

uy

For comparison, the symbol error probability of coherently-demodulated QPSK is given from Eqns. (7.39) and (7.40) in the text as: "

2Eb N0

!#2

Ãr = 2Q

2Eb N0

!

Ãr −Q

2

2Eb N0

! (10.15)

Ng

P [symbol error] = 1 − 1 − Q

Ãr

Fig. 10.13 shows plots of (10.14) and (10.15). Observe that coherently-demodulated QPSK outperforms coherently-demodulated DQPSK over the whole range of Eb /N0 . At high SNR (i.e., high Eb /N0 ), the difference in Eb /N0 to achieve the same symbol error probability is quite small, about 0.2 dB only.
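The comparison in Fig. 10.13 is reproduced by the sketch below (an illustration, not part of the text), which evaluates (10.14) and (10.15) and confirms that at high SNR the DQPSK symbol error probability is roughly twice that of QPSK (the small dB gap quoted above):

```python
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def p_dqpsk(gamma):                  # (10.14), gamma = Eb/N0 (linear)
    q = Q(sqrt(2.0 * gamma))
    return 4.0 * (q - 2.0 * q**2 + 2.0 * q**3 - q**4)

def p_qpsk(gamma):                   # (10.15)
    q = Q(sqrt(2.0 * gamma))
    return 2.0 * q - q**2

gamma = 10.0                         # Eb/N0 = 10 dB
assert p_qpsk(gamma) < p_dqpsk(gamma)                     # QPSK is better
assert abs(p_dqpsk(gamma) / p_qpsk(gamma) - 2.0) < 0.01   # ~factor 2 at high SNR
```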

Figure 10.13: P[symbol error] versus Eb/N0 (dB) for coherently demodulated QPSK and coherently demodulated DQPSK.

&

P10.12 (a) 16-QAM symbols are 4-bit sequences. Consider 2 contiguous symbols bk − 7bk −6bk −5bk − 4

bk −3bk − 2bk −1bk

(k − 1)Ts

kTs

Ts = 4Tb

en

Arbitrarily, let bk−1 bk be the differentially encoded bits, which determine the transmitted phase, i.e.,

Transmitted Transmitted Transmitted Transmitted

uy

bk−1 bk 00 01 01 01

phase phase phase phase

= = = =

Previous Previous Previous Previous

xmitted xmitted xmitted xmitted

phase phase +π/2 phase +π phase +3π/2 (or −π/2)

Ng

(b) The remaining 2 bits, bk−3 bk−2 amplitude modulate the transmitted sinusoid by +∆/2, +3∆/2. q Write the transmitted sinusoid as T1s cos(2πfc t+ϕk ) where ϕk = π/4, 3π/4, 5π/4, 7π/4 radians (ϕk is the relative phase as determined by bk bk−1 ). In the symbol interval {(k − 1)Ts , kTs } the transmitted signal lies in the signal space as follows:

Solutions

Page 10–28

Nguyen & Shwedyk

A First Course in Digital Communications φQ (t )

(11)

(10)

(01)

(01)

(00)

(00)

(11)

∆ 2

(10)

φI (t )

0

(00) ∆ 2

(01)

(11)

(01)

(00)

(10)

φI (t ) =

2 cos(2π f c t ) Ts

we

(10)

dy k

bits bk − 2bk −3

φQ (t ) =

(11)

2 sin(2π f c t ) Ts

Sh

Figure 10.14

Remark: Which quadrant the transmitted signal lies in depends on what the differential encoding tells one.

&

(c) 64-QAM has 6-bit signals bk−5 bk−4 bk−3 bk−2 bk−1 bk . Again we let bk−1 bk be the differentially encoded bits while the bits bk−5 bk−4 bk−3 bk−2 amplitude modulate the transmitted carrier by +∆/2, +3∆/2, +5∆/2, +7∆/2. Showing only one quadrant of the transmitted signal space: φQ (t )

To map the bits bk −5bk − 4bk −3bk − 2

en

use a Gray code mapping

uy

∆ 2

Ng

0

φI (t ) ∆ 2

φI (t ) =

2 cos(2π f c t ) Ts

φQ (t ) =

2 sin(2π f c t ) Ts

Figure 10.15

Again the transmitted signal space occupies all 4 quadrants. The signal points in the other 3 quadrant are rotated version of the ones shown q in the first quadrant. The bits bk−5 bk−4 bk−3 bk−2 would amplitude modulate the carrier T1s cos(2πfc t + ϕk ), where ϕk is the differential phase determined by bk−1 bk . Solutions

Page 10–29

Nguyen & Shwedyk

A First Course in Digital Communications

dy k

(d) To generalize to any square QAM, proceed as above. Simply take the first 2 bits and differentially encode the phase. The remaining bits then amplitude modulate the carrier. Out of curiosity which, if any, of the 16-QAM constellations in Fig. 8.17 lend themselves to differentially encoding (against kπ/2 phase ambiguity)? (e) Modulator: PAM modulator

 ∆ 3∆  + , +  2   2

we

bk

dk ,I

S/P

bk bk −1

Differential d k −1, I mapper

d k ,Q

NRZ-L

Ts

d k −1,Q

PAM modulator

{+1, −1} {+1, −1}

sT (t )

2 sin ( 2π f c t ) Ts

 ∆ 3∆  + , +  2   2

&

bk −1

NRZ-L

Ts

Sh

bk −3bk − 2bk −1bk

2 cos ( 2π f c t ) Ts

Figure 10.16

Ng

uy

en

Demodulator: The block diagram consists of 2 parts. One of these demodulates the differential bits bk bk−1 , and is that of P10.10(g) where one looks at the signal over 2 symbol intervals (k − 2)Ts to (k − 1)Ts . Decide which of the 16 quadrants the (4) sufficient statistics fall in and then determine the differential bits ˆbkˆbk−1 . The second part demodulates the “amplitude bits”. It carves up the signal space of (b) using a minimum-distance criterion (as we are in AWGN). rk ,Q (11)

(10)

(01)

(01)

(00)

(00)

(11)

decision boundaries

(10) ∆ 2

rk , I

0 (10)

(00)

(00)

(01)

(10)

(11)

∆ 2

(11)

(01)

Figure 10.17
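The minimum-distance amplitude decision of Fig. 10.17 amounts to a per-axis threshold. The sketch below is an illustration (the threshold at ∆ is an assumption consistent with minimum-distance slicing between the magnitudes ∆/2 and 3∆/2), not the text's implementation:

```python
# Per-axis amplitude slicer: magnitude threshold at Delta decides between
# the levels Delta/2 and 3*Delta/2 (minimum-distance decision in AWGN).

DELTA = 2.0

def slice_amplitude(r):
    """Return the nearer of the two amplitude magnitudes for one axis."""
    return 0.5 * DELTA if abs(r) < DELTA else 1.5 * DELTA

assert slice_amplitude(0.9 * DELTA) == 0.5 * DELTA
assert slice_amplitude(-1.4 * DELTA) == 1.5 * DELTA
assert slice_amplitude(0.5 * DELTA) == 0.5 * DELTA   # noiseless inner point
```

The sign of r along each axis is handled separately, by the differential-phase demodulator.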

Solutions

Page 10–30

P10.13 (a) The received signal space has 4 basis functions, namely

φ1,I(t) = √(2/Tb) cos(2πf1 t),  (10.16)
φ1,Q(t) = √(2/Tb) sin(2πf1 t),  (10.17)
φ2,I(t) = √(2/Tb) cos(2πf2 t),  (10.18)
φ2,Q(t) = √(2/Tb) sin(2πf2 t).  (10.19)

In terms of these basis functions the received signal can be written as

0T: r(t) = √E1 φ1,I(t) + √E2 nF,I φ1,I(t) + √E2 nF,Q φ1,Q(t) + w(t),  (10.20)
1T: r(t) = √E1 φ2,I(t) + √E2 nF,I φ2,I(t) + √E2 nF,Q φ2,Q(t) + w(t),  (10.21)

where the fading parameters nF,I = α cos θ, nF,Q = α sin θ are statistically independent Gaussian random variables, zero mean, variance σF²/2. Project r(t) onto the 4 basis functions to obtain a set of sufficient statistics:

0T: r1 = √E1 + √E2 nF,I + w1,I        1T: r1 = w1,I
    r2 = √E2 nF,Q + w1,Q                  r2 = w1,Q
    r3 = w2,I                             r3 = √E1 + √E2 nF,I + w2,I
    r4 = w2,Q                             r4 = √E2 nF,Q + w2,Q

where w1,I, w1,Q, w2,I, w2,Q, due to the AWGN, are the usual statistically independent, zero-mean Gaussian random variables, variance σ² ≡ N0/2. Note that whether 0T or 1T, the sufficient statistics r1, r2, r3, r4 are statistically independent Gaussian random variables.

(b) The LRT is therefore (as usual, bits are assumed equally probable):

[∏_{i=1}^{4} f(ri|1T)] / [∏_{i=1}^{4} f(ri|0T)] ≷_{0D}^{1D} 1.  (10.22)

Ignoring the √(2π) factors, which cancel out top and bottom, and letting σt² ≡ E2σF²/2 + N0/2, the LRT is

(1/σ)e^{−r1²/(2σ²)} · (1/σ)e^{−r2²/(2σ²)} · (1/σt)e^{−(r3−√E1)²/(2σt²)} · (1/σt)e^{−r4²/(2σt²)}
≷_{0D}^{1D} (1/σt)e^{−(r1−√E1)²/(2σt²)} · (1/σt)e^{−r2²/(2σt²)} · (1/σ)e^{−r3²/(2σ²)} · (1/σ)e^{−r4²/(2σ²)}.  (10.23)

Canceling and taking ln, we get:

(r1 − √E1)²/σt² + r2²/σt² + r3²/σ² + r4²/σ² ≷_{0D}^{1D} r1²/σ² + r2²/σ² + (r3 − √E1)²/σt² + r4²/σt².  (10.24)

Rewrite this as:

r3²/σ² − (r3 − √E1)²/σt² + r4²/σ² − r4²/σt² ≷_{0D}^{1D} r1²/σ² − (r1 − √E1)²/σt² + r2²/σ² − r2²/σt².  (10.25)

After some algebra, namely completing the square by adding and subtracting terms, the decision rule can be written as:

(r3 + √E1/(cσt²))² + r4² ≷_{0D}^{1D} (r1 + √E1/(cσt²))² + r2²,  (10.26)

where c ≡ 1/σ² − 1/σt² > 0. Also note that cσt² = E2σF²/(2σ²) ⇒ √E1/(cσt²) = 2√E1σ²/(E2σF²) ≡ m. Therefore the decision rule can be written as

∴ (r3 + m)² + r4² ≷_{0D}^{1D} (r1 + m)² + r2².

The parameter m reflects the direct LOS path, which results in a “DC” received component.
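The completing-the-square algebra leading to the final rule can be checked numerically; the following Python sketch (our own check, not part of the original solution) verifies that the simplified statistic agrees exactly — up to the positive factor c/2 — with the log-likelihood ratio, for illustrative parameter values:

```python
import math
import random

def log_lrt(r, E1, sigma2, sigmat2):
    """ln[f(r|1T)/f(r|0T)] for r = (r1, r2, r3, r4); the common
    normalization constants cancel in the ratio."""
    r1, r2, r3, r4 = r
    s = math.sqrt(E1)
    ln_1t = -(r1*r1 + r2*r2) / (2*sigma2) - ((r3 - s)**2 + r4*r4) / (2*sigmat2)
    ln_0t = -((r1 - s)**2 + r2*r2) / (2*sigmat2) - (r3*r3 + r4*r4) / (2*sigma2)
    return ln_1t - ln_0t

def simplified_stat(r, E1, sigma2, sigmat2):
    """(r3 + m)^2 + r4^2 - (r1 + m)^2 - r2^2 with m = sqrt(E1)/(c*sigmat2)."""
    r1, r2, r3, r4 = r
    c = 1/sigma2 - 1/sigmat2           # > 0 since sigmat2 > sigma2
    m = math.sqrt(E1) / (c * sigmat2)
    return (r3 + m)**2 + r4*r4 - (r1 + m)**2 - r2*r2

# The two statistics differ only by the positive factor c/2, so the
# decisions always agree.
rng = random.Random(1)
E1, sigma2, sigmat2 = 2.0, 0.5, 1.5    # illustrative values
c = 1/sigma2 - 1/sigmat2
for _ in range(2000):
    r = tuple(rng.gauss(0, 2) for _ in range(4))
    assert abs(log_lrt(r, E1, sigma2, sigmat2)
               - (c/2) * simplified_stat(r, E1, sigma2, sigmat2)) < 1e-9
```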

P10.14 (a)–(d) Though it is easy enough to find the characteristic functions of x² and y², i.e., E{e^{jωx²}} and E{e^{jωy²}} where x ∼ N(m, σ²) and y ∼ N(0, σ²), and then multiply the 2 characteristic functions, the difficulty is in finding the inverse transform. Specifically,

Φ_{x²}(ω) = E{e^{jωx²}} = (1/(√(2π)σ)) ∫_{−∞}^{∞} e^{jωx²} e^{−(x−m)²/(2σ²)} dx = (1/√(1 − j2ωσ²)) e^{jωm²/(1 − j2ωσ²)}  (10.27)

and

Φ_{y²}(ω) = 1/√(1 − j2ωσ²)  ⇒  Φ_z(ω) = Φ_{x²}(ω)Φ_{y²}(ω) = e^{jωm²/(1 − j2ωσ²)}/(1 − j2ωσ²).

∴ f_z(z) = (1/2π) ∫_{−∞}^{∞} [e^{jωm²/(1 − j2ωσ²)}/(1 − j2ωσ²)] e^{−jωz} dω.

Unfortunately, we are not able to perform this integration. Well, if one fails in one approach, try another one. Consider the random variable z = √(x² + y²), where x, y are statistically independent Gaussian random variables with common variance σ² and means mx, my, respectively. First let us find the probability distribution function F_z(z) = P[√(x² + y²) ≤ z]. Graphically this is finding the probability that z falls in the region indicated in Fig. 10.18.

[Figure 10.18: The region √(x² + y²) ≤ z is the disc of radius z centered at the origin of the (x, y) plane.]

This probability is

∬_Z f_{xy}(x, y) dx dy = (1/(2πσ²)) ∬_Z e^{−(x−mx)²/(2σ²)} e^{−(y−my)²/(2σ²)} dx dy.  (10.28)

Now change to polar coordinates: ρ² = x² + y²; dx dy = ρ dρ dα; x = ρ cos α; y = ρ sin α; and the region Z is 0 ≤ ρ ≤ z, 0 ≤ α < 2π.

∴ F_z(z) = ∫_{ρ=0}^{z} ∫_{α=0}^{2π} (1/(2πσ²)) e^{−ρ²/(2σ²)} e^{−(mx²+my²)/(2σ²)} e^{ρ(mx cos α + my sin α)/σ²} ρ dρ dα
= (e^{−(mx²+my²)/(2σ²)}/σ²) ∫_{ρ=0}^{z} ρ e^{−ρ²/(2σ²)} [(1/2π) ∫_{α=0}^{2π} e^{ρ(mx cos α + my sin α)/σ²} dα] dρ.  (10.29)

Write mx cos α + my sin α as √(mx² + my²) cos(α − tan⁻¹(my/mx)) and recognize the inner integral to be I0(ρ√(mx² + my²)/σ²). Therefore

F_z(z) = (e^{−(mx²+my²)/(2σ²)}/σ²) ∫_{ρ=0}^{z} ρ e^{−ρ²/(2σ²)} I0(ρ√(mx² + my²)/σ²) dρ.  (10.30)

But what is desired is the probability density function f_z(z) = dF_z(z)/dz. Differentiating the expression for F_z(z) with respect to z (using Leibniz's rule), we get

f_z(z) = (z/σ²) e^{−(z² + mx² + my²)/(2σ²)} I0(z√(mx² + my²)/σ²)  (of course it is = 0 for z < 0),  (10.31)

which is (P10.10) with mx = m and my = 0.
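The density (10.31) can be sanity-checked numerically; the Python sketch below (ours, not in the original manual) verifies that it integrates to 1 and that it matches the empirical distribution of √(x² + y²), using NumPy's modified Bessel function np.i0 for I0:

```python
import numpy as np

def rician_pdf(z, mx, my, sigma2):
    """f_z(z) from (10.31), for z >= 0; np.i0 is I0."""
    s = np.hypot(mx, my)
    return (z / sigma2) * np.exp(-(z*z + s*s) / (2*sigma2)) * np.i0(z * s / sigma2)

def trap(y, x):
    """Simple trapezoid rule (avoids version differences in numpy's name)."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

mx, my, sigma2 = 1.0, 0.5, 0.4        # illustrative parameters
z = np.linspace(0.0, 10.0, 20001)

# Total probability is 1.
assert abs(trap(rician_pdf(z, mx, my, sigma2), z) - 1.0) < 1e-6

# Empirical mean of sqrt(x^2 + y^2) matches the density's mean.
rng = np.random.default_rng(0)
x = rng.normal(mx, np.sqrt(sigma2), 200000)
y = rng.normal(my, np.sqrt(sigma2), 200000)
emp_mean = float(np.hypot(x, y).mean())
th_mean = trap(z * rician_pdf(z, mx, my, sigma2), z)
assert abs(emp_mean - th_mean) < 0.01
```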

(e) We have m² = κσF²/(1 + κ) and σ² = σF²/(2(1 + κ)). Substituting and simplifying by letting zn ≡ z/σF, one has

σF f_z(zn) = 2zn(κ + 1) e^{−(zn² + κ/(1+κ))(1+κ)} I0(2zn√(κ(1 + κ))).

Plots of σF f_z(zn) with σF = 1 and various values of κ are shown in Fig. 10.19. As κ increases the pdf tends to a Gaussian pdf. This is expected since the fading becomes less and less of a factor.

[Figure 10.19: Normalized Rician distribution (with σF = 1) for κ = 0 (Rayleigh), κ = 1, κ = 4 and κ = 16.]
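The normalized density of (e) is easy to cross-check in Python (our own check): for κ = 0 it must collapse to the Rayleigh density 2zn e^{−zn²} (since I0(0) = 1), and every curve of Figure 10.19 must integrate to 1:

```python
import numpy as np

def normalized_rician(zn, kappa):
    """sigma_F * f_z(z_n) from part (e); np.i0 is I0."""
    zn = np.asarray(zn, dtype=float)
    return (2*zn*(kappa + 1)
            * np.exp(-(zn*zn + kappa/(1 + kappa)) * (1 + kappa))
            * np.i0(2*zn*np.sqrt(kappa*(1 + kappa))))

# kappa = 0 (no LOS component): Rayleigh.
zn = np.linspace(0.0, 3.0, 301)
assert np.allclose(normalized_rician(zn, 0.0), 2*zn*np.exp(-zn*zn))

# Each curve is a valid pdf in z_n (trapezoidal area ~ 1).
zn6 = np.linspace(0.0, 6.0, 6001)
for kappa in (0.0, 1.0, 4.0, 16.0):
    f = normalized_rician(zn6, kappa)
    area = float(np.sum((f[1:] + f[:-1]) / 2) * (6.0 / 6000))
    assert abs(area - 1.0) < 1e-3
```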

Chapter 11

Advanced Modulation Techniques

P11.13 (a) Because in general s1(t) and s2(t) are correlated, two orthonormal basis functions φ1(t) and φ2(t) are required to completely represent them. Since the four signals y1(t), y2(t), y3(t) and y4(t) are just linear combinations of s1(t) and s2(t), they can also be represented as linear combinations of φ1(t) and φ2(t). Thus the dimensionality of the four signals is two. One possible set of orthonormal functions φ1(t) and φ2(t) is given below (using the Gram–Schmidt procedure):

φ1(t) = s1(t)
φ2(t) = (1/√(1 − ρ²)) s2(t) − (ρ/√(1 − ρ²)) s1(t)  (11.1)

(b) First the two signature waveforms s1(t) and s2(t) can be written in terms of the basis functions as follows:

s1(t) = φ1(t)
s2(t) = ρφ1(t) + √(1 − ρ²) φ2(t)  (11.2)

It follows that the four signals {y1(t), y2(t), y3(t), y4(t)} can be written as follows:

y1(t) = +s1(t) + s2(t) = (1 + ρ)φ1(t) + √(1 − ρ²) φ2(t)
y2(t) = +s1(t) − s2(t) = (1 − ρ)φ1(t) − √(1 − ρ²) φ2(t)
y3(t) = −s1(t) + s2(t) = −(1 − ρ)φ1(t) + √(1 − ρ²) φ2(t)
y4(t) = −s1(t) − s2(t) = −(1 + ρ)φ1(t) − √(1 − ρ²) φ2(t)  (11.3)

These four signals are plotted in Fig. 11.1-(a).

(c) The joint optimum receiver for this CDMA system is a minimum-distance receiver, which jointly demodulates the transmitted bits of both user 1 and user 2 based on the signal (y1(t), y2(t), y3(t) or y4(t)) that is closest to r(t). The decision boundary and the decision regions of this minimum-distance receiver when ρ = 0.5 are shown in Fig. 11.1-(b). Note that for ρ = 0.5 one has √(1 − ρ²) = 0.866.
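Expressions (11.1)–(11.3) can be verified with a few lines of Python (our own check), working with coefficient vectors in the (φ1, φ2) basis so that inner products of signals become dot products:

```python
import math

def inner(u, v):
    """Inner product of signals represented by coefficients in an
    orthonormal basis."""
    return sum(a*b for a, b in zip(u, v))

rho = 0.5
root = math.sqrt(1 - rho**2)          # = 0.866 for rho = 0.5

# Coefficient vectors from (11.2):
s1 = (1.0, 0.0)
s2 = (rho, root)

# Gram-Schmidt output (11.1): phi2 = (s2 - rho*s1)/sqrt(1 - rho^2).
phi2 = tuple((b - rho*a) / root for a, b in zip(s1, s2))

assert abs(inner(s1, s2) - rho) < 1e-12      # correlation is rho
assert abs(inner(phi2, phi2) - 1.0) < 1e-12  # unit energy
assert abs(inner(s1, phi2)) < 1e-12          # orthogonal to phi1 = s1

# The four signals of (11.3) all have phi2-coordinate +-sqrt(1 - rho^2).
ys = [tuple(c1*a + c2*b for a, b in zip(s1, s2))
      for c1, c2 in ((1, 1), (1, -1), (-1, 1), (-1, -1))]
assert all(abs(abs(y[1]) - root) < 1e-12 for y in ys)
```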

[Figure 11.1: Two-User CDMA: (a) Signal space plot of the four possible received signals in the absence of the background noise; (b) Decision regions of the jointly minimum distance receiver when ρ = 0.5.]

Chapter 12

Synchronization

P12.1 The attenuation affects the received energy, Eb, which is now α²Eb. From equations (12.1), (12.2) with √Eb → α√Eb:

BPSK: P[bit error] = Q(α√(2Eb/N0) cos θ),  (12.1)

QPSK: P[bit error] = (1/2) Q(α√(2Eb/N0) (cos θ − sin θ)) + (1/2) Q(α√(2Eb/N0) (cos θ + sin θ)).  (12.2)
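Equations (12.1) and (12.2) are straightforward to evaluate; a Python sketch of the Matlab computation follows (Eb/N0 here is a linear ratio, not dB — an assumption of this sketch):

```python
import math

def qfunc(x):
    """Gaussian Q-function, Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0, theta=0.0, alpha=1.0):
    """(12.1) with attenuation alpha and phase error theta (radians)."""
    return qfunc(alpha * math.sqrt(2*ebn0) * math.cos(theta))

def qpsk_ber(ebn0, theta=0.0, alpha=1.0):
    """(12.2): the two quadrature bits see (cos - sin) and (cos + sin)."""
    a = alpha * math.sqrt(2*ebn0)
    return 0.5*qfunc(a*(math.cos(theta) - math.sin(theta))) + \
           0.5*qfunc(a*(math.cos(theta) + math.sin(theta)))

# With no phase error, QPSK's per-bit error rate equals BPSK's ...
assert abs(qpsk_ber(4.0) - bpsk_ber(4.0)) < 1e-15
# ... and attenuation plus phase error can only degrade performance.
assert bpsk_ber(4.0, theta=0.3, alpha=0.8) > bpsk_ber(4.0)
```

Sweeping alpha over [1:−0.02:0.8] and theta over {0°, 10°, 20°, 30°} reproduces the families of curves in Figures 12.1 and 12.2.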

[Figure 12.1: BPSK P[bit error] versus Eb/N0 (dB) for θ = 0°, 10°, 20°, 30°; in each panel α = [1:−0.02:0.8], curves from left to right.]

[Figure 12.2: QPSK P[bit error] versus Eb/N0 (dB) for θ = 0°, 10°, 20°, 30°; in each panel α = [1:−0.02:0.8], curves from left to right.]

The Matlab plots allow one to make design judgements as to the margins needed for transmitted power and phase-locked-loop performance, based usually on worst-case scenarios.


Note that this is always somewhat of a subjective judgement. For instance – what error probability does one choose to make the decision(s)?

P12.2 (a) Refer to Figure 12.3. Note that x1 = d − (i − 1)∆, where d = r cos(α + θ).

∴ x1 = (∆/2)[(2i − 1)² + (2j − 1)²]^{1/2} cos(tan⁻¹((2j − 1)/(2i − 1)) + θ) − ∆(i − 1)
x2 + x1 = ∆ ⇒ x2 = i∆ − (∆/2)[(2i − 1)² + (2j − 1)²]^{1/2} cos(tan⁻¹((2j − 1)/(2i − 1)) + θ)

Similarly, d1 = r sin(α + θ) and y2 = d1 − (j − 1)∆.

∴ y2 = (∆/2)[(2i − 1)² + (2j − 1)²]^{1/2} sin(tan⁻¹((2j − 1)/(2i − 1)) + θ) − ∆(j − 1)
y1 = ∆ − y2 = j∆ − (∆/2)[(2i − 1)² + (2j − 1)²]^{1/2} sin(tan⁻¹((2j − 1)/(2i − 1)) + θ)

(b) The error probabilities are determined by finding a volume under a 2-dimensional Gaussian pdf over the appropriate regions. Because the 2 Gaussian random variables (nI, nQ if you wish) are statistically independent (each of variance N0/2 and a mean determined by the signal point under consideration), the volume(s) are a product of 2 Q-functions and basically determined by inspection from the geometrical picture. Let σ² ≡ N0/2.

[Figure 12.3: Geometry of the (i, j)th signal point, at radius r = (∆/2)[(2i − 1)² + (2j − 1)²]^{1/2} and angle α = tan⁻¹((2j − 1)/(2i − 1)) (i.e., tan α = (2j − 1)/(2i − 1)). After a phase rotation θ the point lies at d = r cos(α + θ) along the I axis and d1 = r sin(α + θ) along the Q axis, with x1, x2 and y1, y2 the distances to the surrounding decision boundaries.]

[Figure 12.4: Decision regions I, II, III, IV around an inner signal point, with boundary distances x1, x2, y1, y2.]

Inner signals (refer to Figure 12.4). Need to find the volumes under the 2-D Gaussian pdf in the 4 regions I, II, III, IV. These are:

Volume I = Q(x1/σ)  (12.3)
Volume II = [1 − Q(x1/σ) − Q(x2/σ)] Q(y1/σ)  (12.4)
Volume III = Q(x2/σ)  (12.5)
Volume IV = [1 − Q(x1/σ) − Q(x2/σ)] Q(y2/σ).  (12.6)

The total volume is of course the sum, and therefore:

Pinner[symbol error] = Q(x1/σ) + Q(x2/σ) + [Q(y1/σ) + Q(y2/σ)][1 − Q(x1/σ) − Q(x2/σ)].  (12.7)
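Expression (12.7) can be cross-checked (our own Python check): by independence of the I and Q noise components, the probability of a correct decision for an inner point factors into I and Q parts, so the error probability must also equal 1 − (1 − Q(x1/σ) − Q(x2/σ))(1 − Q(y1/σ) − Q(y2/σ)):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_inner(x1, x2, y1, y2, sigma):
    """(12.7): symbol-error probability for an inner signal point."""
    qx = qfunc(x1/sigma) + qfunc(x2/sigma)
    qy = qfunc(y1/sigma) + qfunc(y2/sigma)
    return qx + qy * (1 - qx)

# Factored form of the same probability (illustrative distances).
x1, x2, y1, y2, sigma = 0.4, 0.6, 0.5, 0.7, 0.35
direct = 1 - (1 - qfunc(x1/sigma) - qfunc(x2/sigma)) * \
             (1 - qfunc(y1/sigma) - qfunc(y2/sigma))
assert abs(p_inner(x1, x2, y1, y2, sigma) - direct) < 1e-12
```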

Outer-vertical signals (refer to Figure 12.5)

[Figure 12.5: Decision regions I, II, III around an outer-vertical signal point, with boundary distances x1, y1, y2.]

Volume I = Q(x1/σ)  (12.8)
Volume II = [1 − Q(x1/σ)] Q(y1/σ)  (12.9)
Volume III = [1 − Q(x1/σ)] Q(y2/σ)  (12.10)

Pouter-V[symbol error] = Q(x1/σ) + [Q(y1/σ) + Q(y2/σ)][1 − Q(x1/σ)].  (12.11)

Outer-horizontal signals (refer to Figure 12.6)

[Figure 12.6: Decision regions I, II, III around an outer-horizontal signal point, with boundary distances x1, x2, y2.]

Volume I = Q(y2/σ)  (12.12)
Volume II = [1 − Q(y2/σ)] Q(x1/σ)  (12.13)
Volume III = [1 − Q(y2/σ)] Q(x2/σ)  (12.14)

Pouter-H[symbol error] = Q(y2/σ) + [Q(x1/σ) + Q(x2/σ)][1 − Q(y2/σ)].  (12.15)

Corner signals (refer to Figure 12.7)

[Figure 12.7: Decision regions I and II around a corner signal point, with boundary distances x1 and y2.]

Volume I = Q(x1/σ)  (12.16)
Volume II = [1 − Q(x1/σ)] Q(y2/σ)  (12.17)

Pcorner[symbol error] = Q(x1/σ) + Q(y2/σ)[1 − Q(x1/σ)].  (12.18)

(c) Consider the (i, j)th signal, i.e., the signal at ((2i − 1)∆/2, (2j − 1)∆/2). Its energy is Eij = (∆²/4)(2i − 1)² + (∆²/4)(2j − 1)² joules. (Note: phase rotation does not affect energy, therefore ignore θ.) Let NI = 2^(λI/2), NQ = 2^(λQ/2), λ = λI + λQ, M = 2^λ, and note that the symbols are equally probable, P[signal ij] = 1/M = 1/2^λ. Summing over one quadrant and multiplying by 4 (the 4 quadrants):

Es = Σi Σj Eij P[signal ij] = (4/M) Σ_{i=1}^{NI} Σ_{j=1}^{NQ} [(∆²/4)(2i − 1)² + (∆²/4)(2j − 1)²]  (joules/symbol)
= (∆²/M) Σ_{i=1}^{NI} Σ_{j=1}^{NQ} [(4i² − 4i + 1) + (4j² − 4j + 1)].  (12.19)

Now

Σ_{k=1}^{n} k = n(n + 1)/2;  Σ_{k=1}^{n} k² = n(n + 1)(2n + 1)/6.

After some algebra, and by realizing that Σ_{k=1}^{n} 1 = n, we get

Es = (∆² NI NQ/(3M)) [(4NI² − 1) + (4NQ² − 1)].  (12.20)

But NI NQ = 2^((λI+λQ)/2) = (2^λ)^(1/2) = M^(1/2). Therefore

Es = (2∆²/(3√M)) (2NI² + 2NQ² − 1)  ⇒  Eb = Es/λ = (2∆²/(3λ√M)) (2NI² + 2NQ² − 1).

With square QAM, we have λI = λQ = λ/2 ⇒ NI² = (2^(λ/4))² = 2^(λ/2) = √M = NQ².

∴ Eb = (2∆²/(3λ√M)) (4√M − 1) joules/bit,  (12.21)

or ∆ = [3λ√M Eb/(2(4√M − 1))]^(1/2).  (12.22)

Remark: ∆ can be expressed as a function of λ and M only for square QAM.
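The closed form (12.20) can be verified by brute force before any substitutions are made; this Python check (ours, not part of the original solution) parameterizes the grid by n_i × n_q points per quadrant, so that the total number of points is M = 4·n_i·n_q:

```python
from itertools import product

def es_bruteforce(n_i, n_q, delta=1.0):
    """Average symbol energy of a rectangular QAM grid with n_i x n_q
    points per quadrant, at coordinates (+-(2i-1)*delta/2, +-(2j-1)*delta/2)."""
    levels_i = [(2*i - 1) * delta / 2 for i in range(1, n_i + 1)]
    levels_q = [(2*j - 1) * delta / 2 for j in range(1, n_q + 1)]
    pts = [(si*a, sj*b) for a, b in product(levels_i, levels_q)
           for si in (+1, -1) for sj in (+1, -1)]
    return sum(x*x + y*y for x, y in pts) / len(pts)

def es_formula(n_i, n_q, delta=1.0):
    """(12.20), evaluated with M = 4*n_i*n_q total constellation points."""
    m = 4 * n_i * n_q
    return delta**2 * n_i * n_q / (3*m) * ((4*n_i**2 - 1) + (4*n_q**2 - 1))

for n_i, n_q in ((1, 1), (2, 2), (2, 4), (4, 4)):
    assert abs(es_bruteforce(n_i, n_q) - es_formula(n_i, n_q)) < 1e-12
```

For example, 16-QAM (n_i = n_q = 2) gives Es = 2.5∆² both ways.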

(d) The expression for P[symbol error] is quite lengthy. To simplify it, at least notationally, note that the distances x1, x2, y1, y2 are functions of i, j, which means the error probabilities in (b) are functions of i, j. So write Pinner[error] as Pinner[error, i, j], Pouter-V[error] as Pouter-V[error, i, j], and so on. Then, considering only 1 quadrant and equally probable symbols:

P[symbol error] = (4/M) { Σ_{i=1}^{NI−1} Σ_{j=1}^{NQ−1} Pinner[error, i, j] + Σ_{j=1}^{NQ−1} Pouter-V[error, i = NI, j] + Σ_{i=1}^{NI−1} Pouter-H[error, i, j = NQ] + Pcorner[error, i = NI, j = NQ] }.  (12.23)

(e) Use the expression for ∆ in terms of Eb/N0 derived in (c) and the general expression of (d) to guide you in writing the Matlab program.

P12.3 (a) See Fig. 12.8.

(b)

P[error] = Q(√(2Eb/N0) cos θ),  (12.24)

E{P[error]} = ∫_{0}^{2π} Q(√(2Eb/N0) cos θ) fθ(θ) dθ = (1/(2πI0(Λm))) ∫_{0}^{2π} Q(√(2Eb/N0) cos θ) e^{Λm cos θ} dθ.  (12.25)

There is no “closed-form” integration available. Therefore resort to numerical integration; specifically, make use of the routine quadl in Matlab. Plots of P[bit error] for different values of Λm are shown in Fig. 12.9.
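A Python sketch of the numerical averaging in (12.25) follows (trapezoid rule in place of Matlab's quadl; np.i0 is the modified Bessel function I0, and the Tikhonov pdf is e^{Λm cos θ}/(2πI0(Λm))). The two sanity checks mirror the discussion of Figure 12.9: a uniform phase (Λm = 0) gives coin-flipping, while a sharply peaked pdf approaches the no-phase-error BER:

```python
import math
import numpy as np

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def avg_ber(ebn0, lam, n=20000):
    """Trapezoid-rule evaluation of (12.25) over [0, 2*pi]."""
    theta = np.linspace(0.0, 2*np.pi, n + 1)
    integrand = np.array([qfunc(math.sqrt(2*ebn0) * math.cos(t)) *
                          math.exp(lam * math.cos(t)) for t in theta])
    d = 2*np.pi / n
    integral = float(np.sum((integrand[1:] + integrand[:-1]) / 2) * d)
    return integral / (2*np.pi * float(np.i0(lam)))

# Uniform phase: by the symmetry Q(x) + Q(-x) = 1, the average is 1/2.
assert abs(avg_ber(4.0, 0.0) - 0.5) < 1e-6
# Strong phase tracking: approaches Q(sqrt(2*Eb/N0)).
assert abs(avg_ber(4.0, 100.0) - qfunc(math.sqrt(8.0))) < 1e-3
```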

[Figure 12.8: Tikhonov distribution fθ(θ) over [0, 2π) for Λm = 0, 1, 10 and 20.]

[Figure 12.9: P[bit error] versus Eb/N0 (dB) for Λm = 0, 1, 10 and 20, together with the no-phase-error curve.]

Note that for Λm = 0, i.e., the uniform pdf case, the performance breaks down completely, equivalent to flipping a coin to make a decision. This is expected since the phase (which carries the information) is completely random.

P12.4 The demodulator looks like:

[Figure 12.10: The received signal r(t) = ±√Eb √(2/Tb) cos(2πfc t) + w(t) is multiplied by the offset reference √(2/Tb) cos(2π(fc + ∆f)t), integrated over [0, Tb] and sampled at t = Tb to produce r.]

Performing the integration one gets:

r = ±√Eb [sin(2π∆f Tb)/(2π∆f Tb) + sin(2π(2fc + ∆f)Tb)/(2π(2fc + ∆f)Tb)] + w,  (12.26)

where

w = √(2/Tb) ∫_{0}^{Tb} cos(2π(fc + ∆f)t) w(t) dt

is a Gaussian r.v., zero-mean and with a variance that, strictly speaking, is not N0/2. We shall take it to be N0/2 as a very good approximation.

Remark: The interested reader may wish to show that the variance, E{w²}, is given by (N0/2)[1 + sin(4π(fc + ∆f)Tb)/(4π(fc + ∆f)Tb)]. Since fc is on the order of 10⁶ to 10⁹ while Tb is on the order of 10⁻³ to 10⁻⁴, the second term is neglected.

Therefore the error probability is given by:

P[bit error] = Q(√(2Eb/N0) sin(2π∆f Tb)/(2π∆f Tb)),  (12.27)

where the double-frequency term of (12.26) is neglected in the signal part of r.

From the plot, take ∆f Tb < 0.1 (at an error rate of ≈ 10⁻⁵). Then if Tb ∼ 10⁻³ ⇒ ∆f < 0.1(10³) = 10² Hz, i.e., 100 Hz. So the oscillator specification would read 10⁶ Hz ± 100 Hz; it needs to be quite accurate and stable. Note that as Tb becomes smaller (i.e., a higher bit rate rb) the allowable frequency deviation is larger.

Page 12–8

Nguyen & Shwedyk

10

10

10

dy k

10

−2

−3

we

P[bit error]

10

∆fTb=[0.01:0.02:0.20] (curves are from left to right)

−1

−4

−5

−6

0

2

Sh

10

A First Course in Digital Communications

4

6 8 Eb/N0 (dB)

10

12

14

Figure 12.11

&

P12.5 The demodulator’s block diagram looks as shown in Fig. 12.12, where fi = f1 if 0T and fi = f2 if 1T . t = Tb

r1

∫ ( ) dt Tb

0

en

2 cos ( 2π fi t ) + w (t ) Tb

Ng

uy

r (t ) = Eb

Solutions

bˆk

2 cos ( 2π ( f1 + ∆f ) t ) Tb

t = Tb

∫ ( ) dt Tb

r2

0

2 cos ( 2π ( f 2 + ∆f ) t ) Tb

Figure 12.12

Page 12–9

Nguyen & Shwedyk

A First Course in Digital Communications

0T :

r1 =

p

2 Eb Tb

dy k

The sufficient statistics r1 , r2 are: ZTb

cos(2πf1 t) cos (2π(f1 + ∆f1 )t) dt 0

r ZTb 2 + cos (2π(f1 + ∆f1 )t) w(t)dt Tb 0

we

p sin (2π(f1 + ∆f1 )Tb ) p sin (2π∆f1 Tb ) r1 = Eb + Eb + w1 2π(f1 + ∆f1 )Tb 2π∆f1 Tb

(12.28)

On the other hand, 2 Eb Tb

+

2 Tb

ZTb

cos(2πf1 t) cos (2π(f2 + ∆f2 )t) dt 0

ZTb

cos (2π(f2 + ∆f2 )t) w(t)dt

0

&

r2 =

p

Sh

An easy assumption to make is to ignore the 1st term since f1 >> 1, typically ∼ 106 − 109 . Therefore, p sin (2π∆f1 Tb ) r1 = Eb + w1 2π∆f1 Tb

en

p sin (2π(f1 + f2 + ∆f2 )Tb ) p sin (2π(f2 − f1 + ∆f2 )Tb ) + Eb + w2 = Eb 2π(f1 + f2 + ∆f2 )Tb 2π(f2 − f1 + ∆f2 )Tb (12.29)

uy

Again the 1st term is easily ignored. To ignore the 2nd term is more problematical. Let f2 − f1 =√ 1/Tb , the minimum separation for (noncoherent) Then √ orthogonality. √ the term sin 2π∆f2 Tb sin 2π∆f2 Tb E Eb and is becomes Eb 2π(1+∆f . If ∆f T

