CONTEMPORARY COMMUNICATION SYSTEMS USING MATLAB®

John G. Proakis
Masoud Salehi
Northeastern University

The PWS BookWare Companion Series™

PWS Publishing Company
An International Thomson Publishing Company
Boston • Albany • Bonn • Cincinnati • London • Madrid • Melbourne • Mexico City • New York • Paris • San Francisco • Singapore • Tokyo • Toronto • Washington
PWS Publishing Company, 20 Park Plaza, Boston, MA 02116-4324

Copyright © 1998 by PWS Publishing Company, a division of International Thomson Publishing Inc. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transcribed in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without the prior written permission of PWS Publishing Company.

MATLAB is a trademark of The MathWorks, Inc. The MathWorks, Inc. is the developer of MATLAB, the high-performance computational software introduced in this book. For further information on MATLAB and other MathWorks products, including SIMULINK® and MATLAB Application Toolboxes for math and analysis, control system design, system identification, and other disciplines, contact The MathWorks at 24 Prime Park Way, Natick, MA 01760 (phone: 508-647-7000; fax: 508-647-7001; email: [email protected]). You can also sign up to receive the MathWorks quarterly newsletter and register for the user group.

Macintosh is a trademark of Apple Computer, Inc. MS-DOS is a trademark of Microsoft Corporation. BookWare Companion Series is a trademark of PWS Publishing Company.
I(T)P® International Thomson Publishing
The trademark ITP is used under license. For more information, contact:

PWS Publishing Company
20 Park Plaza
Boston, MA 02116

International Thomson Editores
Campos Eliseos 385, Piso 7
Col. Polanco
11560 Mexico D.F., Mexico

International Thomson Publishing Europe
Berkshire House
168-173 High Holborn
London WC1V 7AA
England

International Thomson Publishing GmbH
Königswinterer Strasse 418
53227 Bonn, Germany

Thomas Nelson Australia
102 Dodds Street
South Melbourne, 3205
Victoria, Australia

Nelson Canada
1120 Birchmount Road
Scarborough, Ontario
Canada M1K 5G4

International Thomson Publishing Asia
221 Henderson Road #05-10
Henderson Building
Singapore 0315

International Thomson Publishing Japan
Hirakawacho Kyowa Building
2-2-1 Hirakawacho
Chiyoda-ku, Tokyo 102
Japan
About the Cover
The BookWare Companion Series cover was created on a Macintosh Quadra 700, using Aldus FreeHand and QuarkXPress. The surface plot on the cover, provided courtesy of The MathWorks, Inc., Natick, MA, was created with MATLAB® and was inserted on the cover mockup with an HP ScanJet IIP scanner. It represents a surface created by assigning the values of different functions to specific matrix elements.

Series Co-originators: Robert D. Strum and Tom Robbins
Editor: Bill Barter
Assistant Editor: Suzanne Jeans
Editorial Assistant: Tricia Kelly
Market Development Manager: Nathan Wilbur
Production Editor: Pamela Rockwell
Manufacturing Coordinator: Andrew Christensen
Cover Designer: Stuart Paterson, Image House, Inc.
Cover Printer: Phoenix Color Corp.
Text Printer and Binder: Malloy Lithographing

Library of Congress Cataloging-in-Publication Data
Proakis, John G.
Contemporary communication systems using MATLAB / John G. Proakis, Masoud Salehi. -- 1st ed.
p. cm. -- (PWS BookWare companion series)
Includes index.
ISBN 0-534-93804-3
1. Data transmission systems--Computer simulation. 2. Telecommunication systems--Computer simulation. 3. MATLAB. I. Salehi, Masoud. II. Title. III. Series.
TK5105.P744 1997
621.382'16'07855369--dc21    97-31815 CIP

Printed and bound in the United States of America.
98 99 00 -- 10 9 8 7 6 5 4 3 2 1
ABC Note

Students learn in a number of ways and in a variety of settings. They learn through lectures, in informal study groups, or alone at their desks or in front of a computer terminal. Wherever the location, students learn most efficiently by solving problems, with frequent feedback from an instructor, following a worked-out problem as a model. Worked-out problems have a number of positive aspects. They can capture the essence of a key concept, often better than paragraphs of explanation. They provide methods for acquiring new knowledge and for evaluating its use. They provide a taste of real-life issues and demonstrate techniques for solving real problems. Most important, they encourage active participation in learning.

We created the BookWare Companion Series because we saw an unfulfilled need for computer-based learning tools that address the computational aspects of problem solving across the curriculum. The BC series concept was also shaped by other forces: a general agreement among instructors that students learn best when they are actively involved in their own learning, and the realization that textbooks have not kept up with or matched student learning needs. Educators and publishers are just beginning to understand that the amount of material crammed into most textbooks cannot be absorbed, let alone the knowledge to be mastered in four years of undergraduate study. Rather than attempting to teach students all the latest knowledge, colleges and universities are now striving to teach them to reason: to understand the relationships and connections between new information and existing knowledge; and to cultivate problem-solving skills, intuition, and critical thinking. The BookWare Companion Series was developed in response to this changing mission.

Specifically, the BookWare Companion Series was designed for educators who wish to integrate their curriculum with computer-based learning tools, and for students who find their current textbooks overwhelming. The former will find in the BookWare Companion Series the means by which to use powerful software tools to support their course activities, without having to customize the applications themselves. The latter will find relevant problems and examples quickly and easily and have instant electronic access to them.

We hope that the BC series will become a clearinghouse for the exchange of reliable teaching ideas and a baseline series for incorporating learning advances from emerging technologies. For example, we intend to reuse the kernel of each BC volume and add electronic scripts from other software programs as desired by customers. We are pursuing the addition of AI/Expert System technology to provide an intelligent tutoring capability for future iterations of BC volumes. We also anticipate a paperless environment in which BC content can
flow freely over high-speed networks to support remote learning activities. In order for these and other goals to be realized, educators, students, software developers, network administrators, and publishers will need to communicate freely and actively with each other. We encourage you to participate in these exciting developments and become involved in the BC Series today. If you have an idea for improving the effectiveness of the BC concept, an example problem, a demonstration using software or multimedia, or an opportunity to explore, contact us. We thank you one and all for your continuing support.

The PWS Electrical Engineering Team:
[email protected] (Acquisitions Editor)
[email protected] (Assistant Editor)
[email protected] (Marketing Manager)
[email protected] (Production Editor)
[email protected] (Editorial Assistant)
Contents

1 Signals and Linear Systems 1
  1.1 Preview 1
  1.2 Fourier Series 1
    1.2.1 Periodic Signals and LTI Systems 12
  1.3 Fourier Transforms 16
    1.3.1 Sampling Theorem 23
    1.3.2 Frequency Domain Analysis of LTI Systems 28
  1.4 Power and Energy 32
  1.5 Lowpass Equivalent of Bandpass Signals 35

2 Random Processes 45
  2.1 Preview 45
  2.2 Generation of Random Variables 45
  2.3 Gaussian and Gauss-Markov Processes 51
  2.4 Power Spectrum of Random Processes and White Processes 57
  2.5 Linear Filtering of Random Processes 63
  2.6 Lowpass and Bandpass Processes 69

3 Analog Modulation 79
  3.1 Preview 79
  3.2 Amplitude Modulation (AM) 79
    3.2.1 DSB-AM 80
    3.2.2 Conventional AM 89
    3.2.3 SSB-AM 96
  3.3 Demodulation of AM Signals 101
    3.3.1 DSB-AM Demodulation 101
    3.3.2 SSB-AM Demodulation 106
    3.3.3 Conventional AM Demodulation 111
  3.4 Angle Modulation 116

4 Analog-to-Digital Conversion 131
  4.1 Preview 131
  4.2 Measure of Information 132
    4.2.1 Noiseless Coding 132
  4.3 Quantization 138
    4.3.1 Scalar Quantization 138
    4.3.2 Pulse-Code Modulation 146

5 Baseband Digital Transmission 165
  5.1 Preview 165
  5.2 Binary Signal Transmission 165
    5.2.1 Optimum Receiver for the AWGN Channel 166
    5.2.2 Signal Correlator 166
    5.2.3 Matched Filter 169
    5.2.4 The Detector 172
    5.2.5 Monte Carlo Simulation of a Binary Communication System 174
    5.2.6 Other Binary Signal Transmission Methods 177
    5.2.7 Antipodal Signals for Binary Signal Transmission 177
    5.2.8 On-Off Signals for Binary Signal Transmission 182
    5.2.9 Signal Constellation Diagrams for Binary Signals 185
  5.3 Multiamplitude Signal Transmission 187
    5.3.1 Signal Waveforms with Four Amplitude Levels 189
    5.3.2 Optimum Receiver for the AWGN Channel 191
    5.3.3 Signal Correlator 191
    5.3.4 The Detector 192
    5.3.5 Signal Waveforms with Multiple Amplitude Levels 196
  5.4 Multidimensional Signals 200
    5.4.1 Multidimensional Orthogonal Signals 201
    5.4.2 Biorthogonal Signals 210

6 Digital Transmission Through Bandlimited Channels 221
  6.1 Preview 221
  6.2 The Power Spectrum of a Digital PAM Signal 221
  6.3 Characterization of Bandlimited Channels and Channel Distortion 226
  6.4 Characterization of Intersymbol Interference 235
  6.5 Communication System Design for Bandlimited Channels 241
    6.5.1 Signal Design for Zero ISI 244
    6.5.2 Signal Design for Controlled ISI 248
    6.5.3 Precoding for Detection of Partial Response Signals 253
  6.6 Linear Equalizers 257
    6.6.1 Adaptive Linear Equalizers 265
  6.7 Nonlinear Equalizers 271

7 Digital Transmission via Carrier Modulation 281
  7.1 Preview 281
  7.2 Carrier-Amplitude Modulation 281
    7.2.1 Demodulation of PAM Signals 285
  7.3 Carrier-Phase Modulation 287
    7.3.1 Phase Demodulation and Detection 291
    7.3.2 Differential Phase Modulation and Demodulation 297
  7.4 Quadrature Amplitude Modulation 304
    7.4.1 Demodulation and Detection of QAM 306
    7.4.2 Probability of Error for QAM in an AWGN Channel 308
  7.5 Carrier-Frequency Modulation 313
    7.5.1 Frequency-Shift Keying 314
    7.5.2 Demodulation and Detection of FSK Signals 316
    7.5.3 Probability of Error for Noncoherent Detection of FSK 321
  7.6 Synchronization in Communication Systems 325
    7.6.1 Carrier Synchronization 326
    7.6.2 Clock Synchronization 332

8 Channel Capacity and Coding 343
  8.1 Preview 343
  8.2 Channel Model and Channel Capacity 343
    8.2.1 Channel Capacity 344
  8.3 Channel Coding 355
    8.3.1 Linear Block Codes 357
    8.3.2 Convolutional Codes 371

9 Spread Spectrum Communication Systems 391
  9.1 Preview 391
  9.2 Direct-Sequence Spread Spectrum Systems 392
    9.2.1 Signal Demodulation 395
    9.2.2 Probability of Error 397
    9.2.3 Two Applications of DS Spread Spectrum Signals 398
  9.3 Generation of PN Sequences 403
  9.4 Frequency-Hopped Spread Spectrum 409
    9.4.1 Probability of Error for FH Signals 412
    9.4.2 Use of Signal Diversity to Overcome Partial-Band Interference 417
PREFACE

There are many textbooks on the market today that treat the basic topics in analog and digital communication systems, including coding and decoding algorithms and modulation and demodulation techniques. The focus of most of these textbooks is by necessity on the theory that underlies the design and performance analysis of the various building blocks, e.g., coders, decoders, modulators, and demodulators, that constitute the basic elements of a communication system. Relatively few of the textbooks, especially those written for undergraduate students, include a number of applications that serve to motivate the students.
SCOPE OF THE BOOK

The objective of this book is to serve as a companion or supplement to any of the comprehensive textbooks in communication systems. The book provides a variety of exercises that may be solved on the computer (generally, a personal computer is sufficient) using the popular student edition of MATLAB. The book is intended to be used primarily by senior-level undergraduate students and graduate students in electrical engineering, computer engineering, and computer science. We assume that the student (or user) is familiar with the fundamentals of MATLAB. Those topics are not covered, because several tutorial books and manuals on MATLAB are available.

By design, the treatment of the various topics is brief. We provide the motivation and a short introduction to each topic, establish the necessary notation, and then illustrate the basic notions by means of an example. The primary text and the instructor are expected to provide the required depth of the topics treated. For example, we introduce the matched filter and the correlator and assert that these devices result in the optimum demodulation of signals corrupted by additive white Gaussian noise (AWGN), but we do not provide a proof of this assertion. Such a proof is generally given in most textbooks on communication systems.
ORGANIZATION OF THE BOOK

The book consists of nine chapters. The first two chapters, on signals and linear systems and on random processes, treat the basic background that is generally required in the study of communication systems. There is one chapter on analog communication techniques, one chapter on analog-to-digital conversion, and the remaining five chapters are focused on digital communications.

Chapter 1: Signals and Linear Systems This chapter provides a review of the basic tools and techniques from linear systems analysis, including both time-domain and frequency-domain characterizations. Frequency-domain analysis techniques are emphasized, since these techniques are most frequently used in the treatment of communication systems.
Chapter 2: Random Processes In this chapter we illustrate methods for generating random variables and samples of random processes. The topics include the generation of random variables with a specified probability distribution function, the generation of samples of Gaussian and Gauss-Markov processes, and the characterization of stationary random processes in the time domain and the frequency domain.

Chapter 3: Analog Modulation The performances of analog modulation and demodulation techniques in the presence and absence of additive noise are treated in this chapter. Systems studied include amplitude modulation (AM), such as double-sideband AM, single-sideband AM, and conventional AM, and angle-modulation schemes, such as frequency modulation (FM) and phase modulation (PM).

Chapter 4: Analog-to-Digital Conversion In this chapter we treat various methods that are used to convert analog source signals into digital sequences in an efficient way. This conversion process allows us to transmit or store the signals digitally. We consider both lossy data compression schemes, such as pulse-code modulation (PCM), and lossless data compression, such as Huffman coding.

Chapter 5: Baseband Digital Transmission The focus of this chapter is on the introduction of baseband digital modulation and demodulation techniques for transmitting digital information through an AWGN channel. Both binary and nonbinary modulation techniques are considered. The optimum demodulation of these signals is described, and the performance of the demodulator is evaluated.

Chapter 6: Digital Transmission Through Bandlimited Channels In this chapter we consider the characterization of bandlimited channels and the problem of designing signal waveforms for such channels. We show that channel distortion results in intersymbol interference (ISI), which causes errors in signal demodulation. Then, we treat the design of channel equalizers that compensate for channel distortion.
Chapter 7: Digital Transmission via Carrier Modulation Four types of carrier-modulated signals that are suitable for transmission through bandpass channels are considered in this chapter. These are amplitude-modulated signals, quadrature-amplitude-modulated signals, phase-shift keying, and frequency-shift keying.

Chapter 8: Channel Capacity and Coding In this chapter we consider appropriate mathematical models for communication channels and introduce a fundamental quantity, called the channel capacity, that gives the limit on the amount of information that can be transmitted through the channel. In particular, we consider two channel models, the binary symmetric channel (BSC) and the additive white
Gaussian noise (AWGN) channel. These channel models are used in the treatment of block and convolutional codes for achieving reliable communication through such channels. Chapter 9: Spread Spectrum Communication Systems The basic elements of a spread spectrum digital communication system are treated in this chapter. In particular, direct sequence (DS) spread spectrum and frequency-hopped (FH) spread spectrum systems are considered in conjunction with phase-shift keying (PSK) and frequency-shift keying (FSK) modulation, respectively. The generation of pseudonoise (PN) sequences for use in spread spectrum systems is also treated.
SOFTWARE

The accompanying diskette includes all the MATLAB files used in the text. The files are in separate directories corresponding to each chapter of the book. In some cases a MATLAB file appears in more than one directory because it is used in more than one chapter. In most cases numerous comments have been added to the MATLAB files to ease their understanding. It should be noted, however, that in developing the MATLAB files, the main objective of the authors has been the clarity of the MATLAB code rather than its efficiency. In cases where efficient code could have made it difficult to follow, we have chosen to use a less efficient but more readable code. In order to use the software, copy the MATLAB files to your personal computer and add the corresponding paths to your matlabpath environment (on an IBM PC compatible machine this is usually done by editing the matlabrc.m file). All MATLAB files have been tested using version 4 of MATLAB.
Chapter 1

Signals and Linear Systems

1.1 Preview
In this chapter we review the basic tools and techniques from linear system analysis used in the analysis of communication systems. Linear systems and their characteristics in the time and frequency domains, together with probability and analysis of random signals, are the two fundamental topics whose thorough understanding is indispensable in the study of communication systems. Most communication channels and many subblocks of transmitters and receivers can be well modeled as linear and time-invariant (LTI) systems, and so the well-known tools and techniques from linear system analysis can be employed in their analysis. We emphasize frequency-domain analysis tools, since these are the most frequently used techniques. We start with the Fourier series and transforms; then we cover power and energy concepts, the sampling theorem, and lowpass representation of bandpass signals.
1.2 Fourier Series
The input-output relation of a linear time-invariant system is given by the convolution integral

$$y(t) = x(t) \star h(t) \tag{1.2.1}$$

where $h(t)$ denotes the impulse response of the system, $x(t)$ is the input signal, and $y(t)$ is the output signal. If the input $x(t)$ is a complex exponential given by

$$x(t) = A e^{j2\pi f_0 t} \tag{1.2.2}$$
then the output is given by

$$y(t) = \int_{-\infty}^{\infty} A e^{j2\pi f_0 (t-\tau)} h(\tau)\, d\tau = A \left[ \int_{-\infty}^{\infty} h(\tau) e^{-j2\pi f_0 \tau}\, d\tau \right] e^{j2\pi f_0 t} \tag{1.2.3}$$

In other words, the output is a complex exponential with the same frequency as the input. The (complex) amplitude of the output, however, is the (complex) amplitude of the input amplified by

$$\int_{-\infty}^{\infty} h(\tau) e^{-j2\pi f_0 \tau}\, d\tau$$
Note that the above quantity is a function of the impulse response of the LTI system, $h(t)$, and the frequency of the input signal, $f_0$. Therefore, computing the response of LTI systems to exponential inputs is particularly easy. Consequently, it is natural in linear system analysis to look for methods of expanding signals as the sum of complex exponentials. Fourier series and Fourier transforms are techniques for expanding signals in terms of complex exponentials.

A Fourier series is the orthogonal expansion of periodic signals with period $T_0$ when the signal set $\{e^{j2\pi nt/T_0}\}$ is employed as the basis for the expansion. With this basis, any periodic signal¹ $x(t)$ with period $T_0$ can be expressed as

$$x(t) = \sum_{n=-\infty}^{\infty} x_n e^{j2\pi nt/T_0} \tag{1.2.4}$$

where the $x_n$'s are called the Fourier series coefficients of the signal $x(t)$ and are given by

$$x_n = \frac{1}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) e^{-j2\pi nt/T_0}\, dt \tag{1.2.5}$$

Here $\alpha$ is an arbitrary constant chosen in such a way that the computation of the integral is simplified. The frequency $f_0 = 1/T_0$ is called the fundamental frequency of the periodic signal, and the frequency $f_n = n f_0$ is called the $n$th harmonic. In most cases either $\alpha = 0$ or $\alpha = -T_0/2$ is a good choice. This type of Fourier series is known as the exponential Fourier series and can be applied to both real-valued and complex-valued signals $x(t)$ as long as they are periodic. In general, the Fourier series coefficients $\{x_n\}$ are complex numbers even when $x(t)$ is a real-valued signal.

¹A sufficient condition for the existence of the Fourier series is that $x(t)$ satisfy the Dirichlet conditions. For details see [1].
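Equation (1.2.5) can be sanity-checked numerically by replacing the integral with a Riemann sum over one period. The sketch below is illustrative only (a plain Python stand-in, not the book's fseries.m); the square-wave test signal and the point count are assumed for the example.

```python
import cmath

def fourier_coeff(x, T0, n, num=2000):
    """Approximate x_n = (1/T0) * integral over one period of
    x(t) * exp(-j*2*pi*n*t/T0) dt by a left Riemann sum with `num` points."""
    dt = T0 / num
    acc = 0j
    for k in range(num):
        t = k * dt
        acc += x(t) * cmath.exp(-2j * cmath.pi * n * t / T0)
    return acc * dt / T0

# Square wave with period T0 = 2: x(t) = 1 on [0, 1), 0 on [1, 2).
sq = lambda t: 1.0 if (t % 2.0) < 1.0 else 0.0
x0 = fourier_coeff(sq, 2.0, 0)   # DC term: the average value, 1/2
x1 = fourier_coeff(sq, 2.0, 1)   # first harmonic; |x_1| = 1/pi
print(abs(x0), abs(x1))
```

Because the sample points span exactly one period, the left Riemann sum coincides with the trapezoidal rule here, so even this crude discretization reproduces the analytical coefficients closely.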
When $x(t)$ is a real-valued periodic signal, we have

$$x_{-n} = \frac{1}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) e^{j2\pi nt/T_0}\, dt = \left[ \frac{1}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) e^{-j2\pi nt/T_0}\, dt \right]^{*} = x_n^{*} \tag{1.2.6}$$

From this it is obvious that

$$|x_n| = |x_{-n}| \tag{1.2.7}$$
Thus the Fourier series coefficients of a real-valued signal have Hermitian symmetry, i.e., their real part is even and their imaginary part is odd (or, equivalently, their magnitude is even and their phase is odd). Another form of Fourier series, known as the trigonometric Fourier series, can be applied only to real, periodic signals and is obtained by defining

$$x_n = \frac{a_n - j b_n}{2} \tag{1.2.8}$$

$$x_{-n} = \frac{a_n + j b_n}{2} \tag{1.2.9}$$
which, after using Euler's relation

$$e^{-j2\pi nt/T_0} = \cos\left(2\pi \frac{nt}{T_0}\right) - j \sin\left(2\pi \frac{nt}{T_0}\right) \tag{1.2.10}$$
results in

$$a_n = \frac{2}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) \cos\left(2\pi \frac{nt}{T_0}\right) dt, \qquad b_n = \frac{2}{T_0} \int_{\alpha}^{\alpha+T_0} x(t) \sin\left(2\pi \frac{nt}{T_0}\right) dt \tag{1.2.11}$$
and, therefore,

$$x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left(2\pi \frac{nt}{T_0}\right) + b_n \sin\left(2\pi \frac{nt}{T_0}\right) \right] \tag{1.2.12}$$

Note that for $n = 0$ we always have $b_0 = 0$, so $a_0 = 2x_0$.
By defining

$$\theta_n = -\arctan\frac{b_n}{a_n} \tag{1.2.13}$$
and using the relation

$$a\cos\phi + b\sin\phi = \sqrt{a^2 + b^2}\,\cos\left(\phi - \arctan\frac{b}{a}\right)$$

For a periodic input $x(t)$ with period $T_0$, the output $y(t)$ can be obtained by employing the convolution integral

$$y(t) = \int_{-\infty}^{\infty} x(t-\tau)\, h(\tau)\, d\tau = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} x_n e^{j2\pi n(t-\tau)/T_0} h(\tau)\, d\tau = \sum_{n=-\infty}^{\infty} x_n \left( \int_{-\infty}^{\infty} h(\tau) e^{-j2\pi n\tau/T_0}\, d\tau \right) e^{j2\pi nt/T_0} \tag{1.2.28}$$

$$= \sum_{n=-\infty}^{\infty} x_n H\!\left(\frac{n}{T_0}\right) e^{j2\pi nt/T_0} \tag{1.2.29}$$
From the above relation we have

$$y_n = x_n H\!\left(\frac{n}{T_0}\right)$$

where $H(f)$ denotes the transfer function³ of the LTI system, given as the Fourier transform of its impulse response $h(t)$:

$$H(f) = \int_{-\infty}^{\infty} h(t) e^{-j2\pi f t}\, dt \tag{1.2.30}$$
²Usually with the same period as the input signal. Can you give an example where the period of the output is different from the period of the input?
³Also known as the frequency response of the system.
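Relation (1.2.29) reduces the filtering of a periodic signal to scaling each harmonic by $H(n/T_0)$. The sketch below illustrates this with an assumed first-order transfer function $H(f) = 1/(1 + j2\pi f)$ and hypothetical input coefficients; neither comes from the text.

```python
import cmath

T0 = 2.0
# Input Fourier series coefficients x_n (hypothetical values for a real signal).
x = {0: 0.5 + 0j, 1: -0.2j, -1: 0.2j}

def H(f):
    # Assumed first-order lowpass transfer function, H(f) = 1/(1 + j*2*pi*f)
    return 1.0 / (1.0 + 2j * cmath.pi * f)

# Output coefficients per (1.2.29): y_n = x_n * H(n / T0)
y = {n: xn * H(n / T0) for n, xn in x.items()}

# Hermitian symmetry of the coefficients is preserved by a real filter,
# so the output signal is still real-valued.
assert abs(y[-1] - y[1].conjugate()) < 1e-12
print(y[0])
```

Note the DC coefficient passes through unchanged here, since $H(0) = 1$ for this filter.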
Figure 1.8; Magnitude and phase spectra for Illustrative Problem 1.3.
Figure 1.9: Periodic signals through LTI systems.
Illustrative Problem 1.4 [Filtering of periodic signals] A triangular pulse train $x(t)$ with period $T_0 = 2$ is defined in one period as

$$x(t) = \begin{cases} t+1, & -1 \le t < 0 \\ -t+1, & 0 \le t < 1 \\ 0, & \text{otherwise} \end{cases} \tag{1.2.31}$$

1. Determine the Fourier series coefficients of $x(t)$.
2. Plot the discrete spectrum of $x(t)$.
3. Assuming that this signal passes through an LTI system whose impulse response is given by

$$h(t) = \begin{cases} t, & 0 \le t \le 1 \\ 0, & \text{otherwise} \end{cases} \tag{1.2.32}$$

plot the discrete spectrum and the output $y(t)$.

Plots of $x(t)$ and $h(t)$ are given in Figure 1.10.
Figure 1.10: The input signal and the system impulse response.
1. We have

$$x_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x(t) e^{-j2\pi nt/T_0}\, dt = \frac{1}{2} \int_{-1}^{1} \Lambda(t) e^{-j\pi nt}\, dt = \frac{1}{2} \operatorname{sinc}^2\!\left(\frac{n}{2}\right) \tag{1.2.33}$$

where we have used the fact that $\Lambda(t)$ vanishes outside the $[-1, 1]$ interval and that the Fourier transform of $\Lambda(t)$ is $\operatorname{sinc}^2(f)$. This result can also be obtained by using the expression for $\Lambda(t)$ and integrating by parts. Obviously, we have $x_n = 0$ for all even values of $n$ except for $n = 0$.

2. A plot of the discrete spectrum of $x(t)$ is shown in Figure 1.11.
Figure 1.11: The discrete spectrum of the signal.
3. First we have to derive $H(f)$, the transfer function of the system. Although this can be done analytically, we will adopt a numerical approach. The resulting magnitude of the transfer function and also the magnitude of $H(n/T_0) = H(n/2)$ are shown in Figure 1.12.
Figure 1.12. 4. To derive the discrete spectrum of the output we employ the relation y„ = x „ / / ^ j
(1.2.38)
1 -1 / « \ / n\ = -sine-(-)*( -)
(1.2.39)
The resulting discrete spectrum of the output is shown in Figure 1.13. The MATLAB script for this problem follows.
% MATLAB script for Illustrative Problem 4, Chapter 1.
echo on
n=[-20:1:20];                          % vector of harmonic indices
% Fourier series coefficients of x(t)
x=.5*(sinc(n/2)).^2;
% sampling interval
ts=1/40;
% time vector
t=[-.5:ts:1.5];
fs=1/ts;
% impulse response
h=[zeros(1,20),t(21:61),zeros(1,20)];
% transfer function
H=fft(h)/fs;
% frequency resolution
df=fs/80;
f=[0:df:fs]-fs/2;
% rearrange H
H1=fftshift(H);
y=x.*H1(21:61);
% plotting commands follow
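The closed form $x_n = \frac{1}{2}\operatorname{sinc}^2(n/2)$ derived in step 1 can be cross-checked numerically. The following is a rough Python analogue of the coefficient computation (illustrative only; the book's workflow is the MATLAB script above):

```python
import cmath, math

def sinc(x):
    # MATLAB-style sinc: sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def tri(t):
    # Triangular pulse of (1.2.31), extended with period T0 = 2
    t = ((t + 1.0) % 2.0) - 1.0
    return 1.0 - abs(t)

def coeff(n, num=4000):
    # x_n = (1/2) * integral_{-1}^{1} tri(t) exp(-j*pi*n*t) dt (Riemann sum)
    dt = 2.0 / num
    s = sum(tri(-1.0 + k * dt) * cmath.exp(-1j * math.pi * n * (-1.0 + k * dt))
            for k in range(num))
    return s * dt / 2.0

for n in range(-5, 6):
    assert abs(coeff(n) - 0.5 * sinc(n / 2.0) ** 2) < 1e-3
print("numerical coefficients match 0.5*sinc(n/2)^2")
```

In particular the even harmonics (other than $n = 0$) come out numerically zero, as claimed in the solution.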
1.3 Fourier Transforms
The Fourier transform is the extension of the Fourier series to nonperiodic signals. The Fourier transform of a signal $x(t)$ that satisfies certain conditions, known as Dirichlet's conditions [1], is denoted by $X(f)$ or, equivalently, $\mathcal{F}[x(t)]$, and is defined by

$$X(f) = \int_{-\infty}^{\infty} x(t) e^{-j2\pi f t}\, dt \tag{1.3.1}$$

The inverse Fourier transform of $X(f)$ is $x(t)$, given by

$$x(t) = \int_{-\infty}^{\infty} X(f) e^{j2\pi f t}\, df \tag{1.3.2}$$
Figure 1.12: The transfer function of the LTI system and the magnitude of $H(n/2)$.
Figure 1.13: The discrete spectrum of the output.
If $x(t)$ is a real signal, then $X(f)$ satisfies the Hermitian symmetry, i.e.,

$$X(-f) = X^{*}(f) \tag{1.3.3}$$
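A discrete analogue of (1.3.3) is easy to verify: the DFT of a real sequence obeys $X[N-k] = X^{*}[k]$. A small standard-library sketch (the test sequence is arbitrary, chosen for illustration):

```python
import cmath

def dft(x):
    # Plain O(N^2) discrete Fourier transform
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0.0, 1.0, 0.5, -0.25, 0.0, 0.75]   # an arbitrary real sequence
X = dft(x)
N = len(x)
# Hermitian symmetry: X[N-k] == conj(X[k]) for k = 1..N-1
for k in range(1, N):
    assert abs(X[N - k] - X[k].conjugate()) < 1e-9
print("Hermitian symmetry holds")
```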
There are certain properties that the Fourier transform satisfies. The most important properties of the Fourier transform are summarized as follows.

1. Linearity: The Fourier transform of a linear combination of two or more signals is the linear combination of the corresponding Fourier transforms:

$$\mathcal{F}[\alpha x_1(t) + \beta x_2(t)] = \alpha \mathcal{F}[x_1(t)] + \beta \mathcal{F}[x_2(t)]$$

2. Duality: If $X(f) = \mathcal{F}[x(t)]$, then

[X1,x11,df1]=fftseq(x1,ts,df);
[X2,x21,df2]=fftseq(x2,ts,df);
X11=X1/fs;
X21=X2/fs;
f=[0:df1:df1*(length(x11)-1)]-fs/2;
plot(f,fftshift(abs(X11)))
plot(f(500:525),fftshift(angle(X11(500:525))),f(500:525),fftshift(angle(X21(500:525))),'--')
Figure 1.16: The phase spectra of the signals $x_1(t)$ and $x_2(t)$.
1.3.1 Sampling Theorem

The sampling theorem is one of the most important results in signal and system analysis; it forms the basis for the relation between continuous-time signals and discrete-time signals. The sampling theorem says that a bandlimited signal, i.e., a signal whose Fourier transform vanishes for $|f| > W$ for some $W$, can be completely described in terms of its sample values taken at intervals $T_s$ as long as $T_s \le 1/2W$. If the sampling is done at intervals $T_s = 1/2W$, known as the Nyquist interval (or Nyquist rate), the signal $x(t)$ can be reconstructed from the sample values $\{x[n] = x(nT_s)\}$ as

$$x(t) = \sum_{n=-\infty}^{\infty} x(nT_s)\, \operatorname{sinc}(2W(t - nT_s)) \tag{1.3.19}$$
This result is based on the fact that the sampled waveform $x_\delta(t)$ defined as

$$x_\delta(t) = \sum_{n=-\infty}^{\infty} x(nT_s)\, \delta(t - nT_s) \tag{1.3.20}$$

has a Fourier transform given by

$$X_\delta(f) = \frac{1}{T_s} \sum_{n=-\infty}^{\infty} X\!\left(f - \frac{n}{T_s}\right) \quad \text{for all } f \tag{1.3.21}$$
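The interpolation formula (1.3.19) can be exercised directly. In the sketch below (illustrative; the tone frequency and bandwidth are assumed, not taken from the text), a 50-Hz cosine is treated as bandlimited to $W = 100$ Hz, sampled at the Nyquist rate $1/T_s = 2W$, and rebuilt from a truncated version of the sinc series:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

W = 100.0            # assumed bandwidth bound (Hz)
Ts = 1.0 / (2 * W)   # sample exactly at the Nyquist interval
x = lambda t: math.cos(2 * math.pi * 50.0 * t)   # 50 Hz tone, within band

def reconstruct(t, n_lo=-400, n_hi=400):
    # Truncated version of (1.3.19): sum over a finite window of samples
    return sum(x(n * Ts) * sinc(2 * W * (t - n * Ts)) for n in range(n_lo, n_hi))

t0 = 0.0123          # an arbitrary off-sample instant
err = abs(reconstruct(t0) - x(t0))
print(err)
```

At the sample instants the series is exact by construction; between samples the error comes only from truncating the infinite sum, and it shrinks as the window widens.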
1.5 Lowpass Equivalent of Bandpass Signals

A bandpass signal is a signal whose frequency components are located in the neighborhood of some frequency $f_0$, i.e., $X(f) \equiv 0$ for $|f - f_0| > W$, where $W < f_0$. A lowpass signal is a signal for which the frequency components are located around the zero frequency, i.e., for $|f| > W$, we have $X(f) = 0$. Corresponding to a bandpass signal $x(t)$ we can define the analytic signal $z(t)$, whose
Fourier transform is given as

$$Z(f) = 2 u_{-1}(f) X(f) \tag{1.5.1}$$

where $u_{-1}(f)$ is the unit step function. In the time domain this relation is written as

$$z(t) = x(t) + j\hat{x}(t)$$
Figure 1.27: The magnitude spectrum of the lowpass equivalent to $x(t)$ in Illustrative Problem 1.9 when $f_0 = 200$ Hz.
Comparing this to

$$x(t) = \operatorname{Re}\!\left[ x_l(t) e^{j2\pi f_0 t} \right] \tag{1.5.16}$$

we conclude that

$$x_l(t) = \operatorname{sinc}(100t) \tag{1.5.17}$$

which means that the lowpass equivalent signal is a real signal in this case. This, in turn, means that $x_c(t) = x_l(t)$ and $x_s(t) = 0$. Also, we conclude that

$$V(t) = |x_l(t)| \qquad \Theta(t) = \begin{cases} 0, & x_l(t) \ge 0 \\ \pi, & x_l(t) < 0 \end{cases} \tag{1.5.18}$$

Plots of $x_c(t)$ and $V(t)$ are given in Figure 1.28.
Figure 1.28: The in-phase component and the envelope of x{t).
Note that choosing this particular value of $f_0$, which is the frequency with respect to which $X(f)$ is symmetric, yields these results.

3. If $f_0 = 100$ Hz, then the above results will not be true in general, and $x_l(t)$ will be a complex signal. The magnitude spectrum of the lowpass equivalent signal is plotted in Figure 1.29. As is seen here, the magnitude spectrum lacks the symmetry present in the Fourier transform of real signals. Plots of the in-phase component of $x(t)$ and its envelope are given in Figure 1.30.
Figure 1.29: The magnitude spectrum of the lowpass equivalent to $x(t)$ in Illustrative Problem 1.9 when $f_0 = 100$ Hz.
Figure 1.30: The in-phase component and the envelope of the signal $x(t)$ when $f_0 = 100$ Hz.
Problems

1.1 Consider the periodic signal of Illustrative Problem 1.1 shown in Figure 1.1. Assuming $A = 1$, $T_0 = 10$, and $t_0 = 1$, determine and plot the discrete spectrum of the signal. Compare your results with those obtained in Illustrative Problem 1.1 and justify the differences.

1.2 In Illustrative Problem 1.1, assuming $A = 1$, $T_0 = 4$, and $t_0 = 2$, determine and plot the discrete spectrum of the signal. Compare your results with those obtained in Illustrative Problem 1.1 and justify the differences.

1.3 Using the m-file fseries.m, determine the Fourier series coefficients of the signal shown in Figure 1.1 with $A = 1$, $T_0 = 4$, and $t_0 = 1$ for $-24 \le n \le 24$. Plot the magnitude spectrum of the signal. Now using Equation (1.2.5) determine the Fourier series coefficients and plot the magnitude spectrum. Why are the results not exactly the same?

1.4 Repeat Problem 1.3 with $T_0 = 4.6$, and compare the results with those obtained by using Equation (1.2.5). Do you observe the same discrepancy between the two results here? Why?

1.5 Using the MATLAB script dis_spct.m, determine and plot the phase spectrum of the periodic signal $x(t)$ with a period of $T_0 = 4.6$ and described in the interval $[-2.3, 2.3]$ by the relation $x(t) = \Lambda(t)$. Plot the phase spectrum for $-24 \le n \le 24$.

% Computation of the pdf of (x1,x2) follows
delta=0.3;
x1=-3:delta:3;
x2=-3:delta:3;
for i=1:length(x1),
  for j=1:length(x2),
    f(i,j)=(1/((2*pi)*det(Cx)^(1/2)))*exp((-1/2)*(([x1(i); x2(j)]-mx)'*inv(Cx)*([x1(i); x2(j)]-mx)));
  end;
end;
% plotting command for pdf follows
mesh(x1,x2,f);
function [x] = multi_gp(m,C)
% [x] = multi_gp(m,C)
% MULTI_GP generates a multivariate Gaussian random process with
% mean vector m (column vector) and covariance matrix C.
54
CHAPTER 2. RANDOM
PROCESSES
N=length(m): for i=1:N, y(i)=gngauss; end; y=y.'; x=sqrrm(C)*y+m; Figure 2.6 illustrates the joint pdf / (x\, XT) for the covariance matrix C given by (2.3.6).
Figure 2.6: Joint probability density function of
and X2.
As indicated above, the most difficult step in the computation is determining C^(1/2). Given the desired covariance matrix, we may determine the eigenvalues {lambda_k, 1 <= k <= n} and the corresponding eigenvectors {v_k, 1 <= k <= n}. Then, the covariance matrix C can be expressed as

C = sum_{k=1}^{n} lambda_k v_k v_k'    (2.3.10)

and since C = C^(1/2) (C^(1/2))', it follows that

C^(1/2) = sum_{k=1}^{n} sqrt(lambda_k) v_k v_k'    (2.3.11)
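In code, (2.3.10) and (2.3.11) amount to an eigendecomposition followed by reassembly with the square roots of the eigenvalues. The sketch below uses Python/NumPy rather than the book's MATLAB; the covariance matrix chosen here is an arbitrary example, not the one in (2.3.6).

```python
import numpy as np

def sqrt_cov(C):
    """C^(1/2) from the eigendecomposition C = sum_k lam_k v_k v_k', as in (2.3.10)-(2.3.11)."""
    lam, V = np.linalg.eigh(C)               # eigenvalues and orthonormal eigenvectors
    return V @ np.diag(np.sqrt(lam)) @ V.T   # sum_k sqrt(lam_k) v_k v_k'

def multi_gp(m, C, rng):
    """One sample of a Gaussian vector with mean m and covariance C (cf. multi_gp.m)."""
    y = rng.standard_normal(len(m))          # i.i.d. zero-mean, unit-variance Gaussians
    return sqrt_cov(C) @ y + m

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.5], [0.5, 2.0]])       # an assumed example covariance matrix
S = sqrt_cov(C)
x = multi_gp(np.array([0.0, 0.0]), C, rng)
print(np.allclose(S @ S.T, C))               # S S' reproduces C -> True
```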
Definition: A Markov process X(t) is a random process whose past has no influence on the future if its present is specified. That is, if t_n > t_{n-1}, then

P[X(t_n) <= x_n | X(t), t <= t_{n-1}] = P[X(t_n) <= x_n | X(t_{n-1})]    (2.3.12)

From this definition, it follows that if t_1 < t_2 < ... < t_n, then

P[X(t_n) <= x_n | X(t_{n-1}), X(t_{n-2}), ..., X(t_1)] = P[X(t_n) <= x_n | X(t_{n-1})]

Definition: A Gauss-Markov process X(t) is a Markov process whose probability density function is Gaussian.
y_{-1} = 0

Compute the output sequence {y_n} and determine the autocorrelation functions Rx(m) and Ry(m), as indicated in (2.4.6). Determine the power spectra Sx(f) and Sy(f) by computing the DFT of Rx(m) and Ry(m).

The MATLAB scripts for these computations are given below. Figures 2.18 and 2.19 illustrate the autocorrelation functions and the power spectra. We note that the plots of the autocorrelation function and the power spectra are averages over 10 realizations of the random process.
% MATLAB script for Illustrative Problem 2.8, Chapter 2.
N=1000; M=50;
Rxav=zeros(1,M+1); Ryav=zeros(1,M+1);
Sxav=zeros(1,M+1); Syav=zeros(1,M+1);
for j=1:10,                        % take the ensemble average over 10 realizations
  X=rand(1,N)-(1/2);               % i.i.d. uniform random variables
  Y(1)=0;
  for n=2:N,
    Y(n)=0.9*Y(n-1)+X(n);          % the filter recursion
  end;
  Rx=Rx_est(X,M);                  % autocorrelation of {Xn}
  Ry=Rx_est(Y,M);                  % autocorrelation of {Yn}
  Sx=fftshift(abs(fft(Rx)));       % power spectrum of {Xn}
  Sy=fftshift(abs(fft(Ry)));       % power spectrum of {Yn}
  Rxav=Rxav+Rx/10; Ryav=Ryav+Ry/10;
  Sxav=Sxav+Sx/10; Syav=Syav+Sy/10;
end;
% plotting commands follow
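A Python/NumPy sketch of the same averaging procedure follows; the estimator `rx_est` stands in for the book's Rx_est m-file, whose exact implementation is assumed here to be the standard biased autocorrelation estimate.

```python
import numpy as np

def rx_est(x, M):
    """Biased autocorrelation estimate Rx(m), m = 0..M (stand-in for the book's Rx_est)."""
    N = len(x)
    return np.array([np.dot(x[:N - m], x[m:]) / N for m in range(M + 1)])

rng = np.random.default_rng(1)
N, M, runs = 1000, 50, 10
Rx_av, Ry_av = np.zeros(M + 1), np.zeros(M + 1)
for _ in range(runs):                       # average over 10 realizations
    x = rng.uniform(-0.5, 0.5, N)           # i.i.d. uniform input
    y = np.zeros(N)
    for n in range(1, N):
        y[n] = 0.9 * y[n - 1] + x[n]        # the filter y(n) = 0.9 y(n-1) + x(n)
    Rx_av += rx_est(x, M) / runs
    Ry_av += rx_est(y, M) / runs
Sx = np.abs(np.fft.fftshift(np.fft.fft(Rx_av)))   # power spectra via the DFT
Sy = np.abs(np.fft.fftshift(np.fft.fft(Ry_av)))
```

With this input, Rx_av(0) estimates the uniform variance 1/12, and the ratio Ry_av(1)/Ry_av(0) estimates the filter pole 0.9.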
2.12 Determine numerically the autocorrelation function of the random process at the output of the linear filter in Problem 2.11.

2.13 Repeat Illustrative Problem 2.8 when

h(n) = (0.8)^n for n >= 0, and h(n) = 0 for n < 0

2.14 Generate an i.i.d. sequence {x_n} of N = 1000 uniformly distributed random numbers in the interval [-1/2, 1/2]. This sequence is passed through a linear filter with impulse response

h(n) = (0.95)^n for n >= 0, and h(n) = 0 for n < 0, with initial condition y_{-1} = 0

Compute the autocorrelation functions Rx(m) and Ry(m) of the sequences {x_n} and {y_n} and the corresponding power spectra Sx(f) and Sy(f) using the relations given in (2.4.6) and (2.4.7). Compare this result for Sy(f) with that obtained in Illustrative Problem 2.8.

2.15 Generate two i.i.d. sequences {w_n} and ...

xlabel('Frequency')
pause % Press a key to see a noise sample
subplot(2,1,1)
plot(t,noise(1:length(t)))
title('noise sample')
xlabel('Time')
pause % Press a key to see the modulated signal and noise
subplot(2,1,2)
plot(t,r(1:length(t)))
title('Signal and noise')
xlabel('Time')
pause % Press a key to see the modulated signal and noise
Pm = 0.1    PR = 0.1    Pn = 0.00247
The MATLAB script for the above problem follows.
% dsb2.m
% Matlab demonstration script for DSB-AM modulation. The message signal
% is m(t) = sinc(100t).
echo on
t0=.2;                            % signal duration
ts=0.001;                         % sampling interval
fc=250;                           % carrier frequency
snr=20;                           % SNR in dB (logarithmic)
fs=1/ts;                          % sampling frequency
df=0.3;                           % required freq. resolution
t=[-t0/2:ts:t0/2];                % time vector
snr_lin=10^(snr/10);              % linear SNR
m=sinc(100*t);                    % the message signal
c=cos(2*pi*fc.*t);                % the carrier signal
u=m.*c;                           % the DSB-AM modulated signal
[M,m,df1]=fftseq(m,ts,df);        % Fourier transform
M=M/fs;                           % scaling
[U,u,df1]=fftseq(u,ts,df);        % Fourier transform
Figure 3.3: Spectra of the message and the modulated signals in Illustrative Problem 3.2.
U=U/fs;                           % scaling
f=[0:df1:df1*(length(m)-1)]-fs/2; % frequency vector
signal_power=power(u(1:length(t)));   % compute modulated signal power
noise_power=signal_power/snr_lin;     % compute noise power
noise_std=sqrt(noise_power);          % compute noise standard deviation
noise=noise_std*randn(1,length(u));   % generate noise sequence
r=u+noise;                        % add noise to the modulated signal
[R,r,df1]=fftseq(r,ts,df);        % Fourier transform
R=R/fs;                           % scaling
pause % Press a key to show the modulated signal power
signal_power
pause % Press any key to see a plot of the message
clf
subplot(2,2,1)
plot(t,m(1:length(t)))
xlabel('Time')
title('The message signal')
pause % Press any key to see a plot of the carrier
subplot(2,2,2)
plot(t,c(1:length(t)))
xlabel('Time')
title('The carrier')
pause % Press any key to see a plot of the modulated signal
subplot(2,2,3)
plot(t,u(1:length(t)))
xlabel('Time')
title('The modulated signal')
pause % Press any key to see plots of the magnitude of the message and the
      % modulated signal in the frequency domain
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(U)))
title('Spectrum of the modulated signal')
xlabel('Frequency')
pause % Press a key to see a noise sample
subplot(2,1,1)
plot(t,noise(1:length(t)))
title('noise sample')
xlabel('Time')
pause % Press a key to see the modulated signal and noise
subplot(2,1,2)
plot(t,r(1:length(t)))
title('Signal and noise')
xlabel('Time')
pause % Press a key to see the modulated signal and noise in freq. domain
subplot(2,1,1)
plot(f,abs(fftshift(U)))
title('Signal spectrum')
xlabel('Frequency')
subplot(2,1,2)
plot(f,abs(fftshift(R)))
title('Signal and noise spectrum')
xlabel('Frequency')
What happens if the duration of the message signal t0 changes; in particular, what is the effect of having large t0's and small t0's? What is the effect on the bandwidth and signal power?

The m-file dsb_mod.m given below is a general DSB modulator of the message signal given in vector m on a carrier of frequency fc.
function u=dsb_mod(m,ts,fc)
% u = dsb_mod(m,ts,fc)
% DSB_MOD takes signal m sampled at ts and carrier frequency fc
% as input and returns the DSB modulated signal.
t=[0:length(m)-1]*ts;             % time vector
u=m.*cos(2*pi*fc*t);              % the modulated signal

u(t) = sum_{n=-inf}^{inf} Ac Jn(beta) cos(2 pi (fc + n fm) t)    (3.4.6)
Obviously, the bandwidth of the modulated signal is not finite. However, we can define the effective bandwidth of the signal as the bandwidth containing 98% to 99% of the modulated signal power. This bandwidth is given by Carson's rule as

BT = 2(beta + 1)W    (3.4.7)
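As a quick numerical illustration of Carson's rule (the figures below are assumed example values, not taken from the text):

```python
def carson_bandwidth(beta, W):
    """Effective (98-99% power) bandwidth of an angle-modulated signal, BT = 2(beta+1)W."""
    return 2.0 * (beta + 1.0) * W

# e.g., FM with modulation index beta = 5 and message bandwidth W = 15 kHz
print(carson_bandwidth(5, 15e3))   # -> 180000.0
```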
where beta is the modulation index, W is the bandwidth of the message, and BT is the bandwidth of the modulated signal.

The expression for the power content of the angle-modulated signals is very simple. Since the modulated signal is sinusoidal, with varying instantaneous frequency and constant amplitude, its power is constant and does not depend on the message signal. The power content for both FM and PM is given by

Pu = Ac^2 / 2    (3.4.8)
The SNR for angle-modulated signals, when no pre-emphasis and de-emphasis filtering is employed, is given by

(S/N)o = beta_p^2 [Pm / (max|m(t)|)^2] (PR / (N0 W))    for PM
(S/N)o = 3 beta_f^2 [Pm / (max|m(t)|)^2] (PR / (N0 W))    for FM    (3.4.9)
Since max|m(t)| denotes the maximum magnitude of the message signal, we can interpret Pm / (max|m(t)|)^2 as the power in the normalized message signal and denote it by Pmn. When pre-emphasis and de-emphasis filters with a 3-dB cutoff frequency equal to f0 are employed, the SNR for FM is given by
(S/N)oPD = (W/f0)^3 / (3[(W/f0) - arctan(W/f0)]) * (S/N)o    (3.4.10)
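The improvement factor in (3.4.10) is easy to evaluate numerically. The sketch below uses assumed FM-broadcast-like values W = 15 kHz and f0 = 2.1 kHz:

```python
import math

def preemphasis_gain(W, f0):
    """SNR improvement factor (W/f0)^3 / (3[(W/f0) - arctan(W/f0)]) from (3.4.10)."""
    r = W / f0
    return r**3 / (3.0 * (r - math.atan(r)))

# assumed example values: W = 15 kHz message bandwidth, f0 = 2.1 kHz cutoff
g = preemphasis_gain(15e3, 2.1e3)
print(10 * math.log10(g))   # improvement in dB, about 13 dB for these values
```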
where (S/N)o is the SNR without pre-emphasis and de-emphasis filtering given by Equation (3.4.9).

ILLUSTRATIVE PROBLEM

Illustrative Problem 3.10 [Frequency modulation] The message signal

m(t) = 1 for 0 <= t < t0/3;  m(t) = -2 for t0/3 <= t < 2t0/3;  m(t) = 0 otherwise
% fm.m
% Matlab demonstration script for frequency modulation. The message signal
% is +1 for 0 < t < t0/3, -2 for t0/3 < t < 2t0/3, and zero otherwise.
echo on
t0=.15;                           % signal duration
ts=0.0005;                        % sampling interval
fc=200;                           % carrier frequency
kf=50;                            % modulation index
fs=1/ts;                          % sampling frequency
t=[0:ts:t0];                      % time vector
df=0.25;                          % required frequency resolution
% message signal
m=[ones(1,t0/(3*ts)),-2*ones(1,t0/(3*ts)),zeros(1,t0/(3*ts)+1)];
int_m(1)=0;
for i=1:length(t)-1               % integral of m
  int_m(i+1)=int_m(i)+m(i)*ts;
end
[M,m,df1]=fftseq(m,ts,df);        % Fourier transform
M=M/fs;                           % scaling
f=[0:df1:df1*(length(m)-1)]-fs/2; % frequency vector
u=cos(2*pi*fc*t+2*pi*kf*int_m);   % modulated signal
[U,u,df1]=fftseq(u,ts,df);        % Fourier transform
U=U/fs;                           % scaling
pause % Press any key to see a plot of the message and the modulated signal
subplot(2,1,1)
plot(t,m(1:length(t)))
axis([0 0.15 -2.1 2.1])
xlabel('Time')
title('The message signal')
subplot(2,1,1)
plot(f,abs(fftshift(M)))
xlabel('Frequency')
title('Magnitude-spectrum of the message signal')
subplot(2,1,2)
plot(f,abs(fftshift(U)))
title('Magnitude-spectrum of the modulated signal')
xlabel('Frequency')
ILLUSTRATIVE PROBLEM

Illustrative Problem 3.11 [Frequency modulation] Let the message signal be

m(t) = sinc(100t) for |t| <= t0, and m(t) = 0 otherwise

where t0 = 0.1. This message modulates the carrier c(t) = cos(2 pi fc t), where fc = 250 Hz. The deviation constant is kf = 100.

1. Plot the modulated signal in the time and frequency domain.

2. Compare the demodulator output and the original message signal.
1. We first integrate the message signal and then use the relation

u(t) = Ac cos(2 pi fc t + 2 pi kf * integral from -inf to t of m(tau) dtau)

to find u(t). A plot of u(t) together with the message signal is shown in Figure 3.26. The integral of the message signal is shown in Figure 3.27.
Figure 3.26: The message and the modulated signals.
A plot of the modulated signal in the frequency domain is shown in Figure 3.28. 2. To demodulate the FM signal, we first find the phase of the modulated signal u(t). This phase is 2 pi kf * integral from -inf to t of m(tau) dtau, which can be differentiated and divided by 2 pi kf to obtain m(t). Note that in order to restore the phase and undo the effect of 2 pi phase foldings, we employ the unwrap.m function of MATLAB. Plots of the message signal and the demodulated signal are shown in Figure 3.29. As you can see, the demodulated signal is quite similar to the message signal.
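The unwrap-and-differentiate demodulator can be sketched in a few lines of Python/NumPy (the book's code is MATLAB). Here the quadrature component is formed analytically for illustration; in practice it would be obtained from the received signal, e.g. via the Hilbert transform. Because the integral is a discrete cumulative sum, the noise-free demodulator output reproduces the message samples almost exactly.

```python
import numpy as np

# parameters mirroring the problem: fc = 250 Hz, kf = 100, ts = 0.001
ts, fc, kf = 0.001, 250.0, 100.0
t = np.arange(-0.1, 0.1 + ts, ts)
m = np.sinc(100 * t)                            # message m(t) = sinc(100t)
int_m = np.cumsum(m) * ts                       # running integral of the message
theta = 2*np.pi*fc*t + 2*np.pi*kf*int_m         # instantaneous phase
u = np.cos(theta)                               # FM signal
v = np.sin(theta)                               # quadrature component (assumed known)
phase = np.unwrap(np.arctan2(v, u))             # undo the 2*pi phase foldings
m_hat = np.diff(phase)/(2*np.pi*kf*ts) - fc/kf  # differentiate, remove carrier term
# with the discrete cumsum integral, m_hat equals m[1:] in this noise-free model
```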
Figure 3.27: Integral of the message signal.
Figure 3.29: The message signal and the demodulated signal.
The MATLAB script for this problem follows.
% fm2.m
% Matlab demonstration script for frequency modulation. The message signal
% is m(t) = sinc(100t).
echo on
t0=.2;                            % signal duration
ts=0.001;                         % sampling interval
fc=250;                           % carrier frequency
snr=20;                           % SNR in dB
fs=1/ts;                          % sampling frequency
df=0.3;                           % required frequency resolution
t=[-t0/2:ts:t0/2];                % time vector
kf=100;                           % deviation constant
df=0.25;                          % required frequency resolution
m=sinc(100*t);                    % the message signal
n0 = integral from 0 to Tb of n(t) s0(t) dt

n1 = integral from 0 to Tb of n(t) s1(t) dt    (5.2.6)
and E = A^2 Tb is the energy of the signals s0(t) and s1(t). We also note that the two signal waveforms are orthogonal; i.e.,

integral from 0 to Tb of s0(t) s1(t) dt = 0    (5.2.7)
On the other hand, when s1(t) is the transmitted signal, the received signal is

r(t) = s1(t) + n(t),   0 <= t <= Tb

It is easy to show that, in this case, the signal correlator outputs are

r0 = n0
r1 = E + n1    (5.2.8)
Figure 5.3 illustrates the two noise-free correlator outputs in the interval 0 <= t <= Tb for each of the two cases, i.e., when s0(t) is transmitted and when s1(t) is transmitted.
Figure 5.3: Noise-free correlator outputs. (a) s0(t) was transmitted. (b) s1(t) was transmitted.

Since n(t) is a sample function of a white Gaussian process with power spectrum N0/2, the noise components n0 and n1 are Gaussian with zero means, i.e.,
E(n0) = integral from 0 to Tb of s0(t) E[n(t)] dt = 0

E(n1) = integral from 0 to Tb of s1(t) E[n(t)] dt = 0    (5.2.9)

and variances sigma_i^2, for i = 0, 1, where
sigma_i^2 = E(n_i^2)
          = integral integral over [0,Tb]x[0,Tb] of s_i(t) s_i(tau) E[n(t) n(tau)] dt dtau
          = (N0/2) integral integral over [0,Tb]x[0,Tb] of s_i(t) s_i(tau) delta(t - tau) dt dtau
          = (N0/2) integral from 0 to Tb of s_i^2(t) dt    (5.2.10)
          = E N0 / 2,   i = 0, 1    (5.2.11)
Therefore, when s0(t) is transmitted, the probability density functions of r0 and r1 are

p(r0 | s0(t) was transmitted) = (1/(sqrt(2 pi) sigma)) e^{-(r0 - E)^2 / (2 sigma^2)}

p(r1 | s0(t) was transmitted) = (1/(sqrt(2 pi) sigma)) e^{-r1^2 / (2 sigma^2)}    (5.2.12)

These two probability density functions, denoted as p(r0 | 0) and p(r1 | 0), are illustrated in Figure 5.4. Similarly, when s1(t) is transmitted, r0 is zero-mean Gaussian with variance sigma^2 and r1 is Gaussian with mean value E and variance sigma^2. The detector decides that a 0 was transmitted when r0 > r1 and that a 1 was transmitted when r1 > r0. Determine the probability of error.
When s0(t) is the transmitted signal waveform, the probability of error is

Pe = P(r1 > r0) = P(n1 > E + n0) = P(n1 - n0 > E)    (5.2.19)
Since n1 and n0 are zero-mean Gaussian random variables, their difference x = n1 - n0 is also zero-mean Gaussian. The variance of the random variable x is

E(x^2) = E[(n1 - n0)^2] = E(n1^2) + E(n0^2) - 2 E(n1 n0)    (5.2.20)

But E(n1 n0) = 0, because the signal waveforms are orthogonal. That is,
E(n1 n0) = E integral integral over [0,Tb]x[0,Tb] of s0(t) s1(tau) n(t) n(tau) dt dtau
         = (N0/2) integral integral over [0,Tb]x[0,Tb] of s0(t) s1(tau) delta(t - tau) dt dtau
         = (N0/2) integral from 0 to Tb of s0(t) s1(t) dt
         = 0    (5.2.21)
Therefore,

E(x^2) = 2 sigma^2 = sigma_x^2    (5.2.22)
Hence, the probability of error is

Pe = (1/sqrt(2 pi sigma_x^2)) integral from E to inf of e^{-x^2/(2 sigma_x^2)} dx = Q(sqrt(E/N0))

smld_err_prb(i)=smldPe54(SNRindB1(i));    % simulated error rate
end;
for i=1:length(SNRindB2),
  SNR=exp(SNRindB2(i)*log(10)/10);
  theo_err_prb(i)=Qfunct(sqrt(SNR));      % theoretical error rate
end;
% plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);
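A vectorized Python/NumPy re-implementation of the simulation is sketched below. The array-at-once structure is our choice; the signal model and the choice of sigma mirror the book's smldPe54.

```python
import math
import numpy as np

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def smld_pe54(snr_db, n=100_000, seed=2):
    """Vectorized Monte Carlo for binary orthogonal signals in AWGN (cf. smldPe54)."""
    rng = np.random.default_rng(seed)
    E = 1.0
    snr = 10 ** (snr_db / 10)
    sgma = E / math.sqrt(2 * snr)            # sigma = sqrt(E*N0/2) with SNR = E/N0
    bits = rng.integers(0, 2, n)             # equiprobable source bits
    r0 = np.where(bits == 0, E, 0.0) + rng.normal(0, sgma, n)
    r1 = np.where(bits == 1, E, 0.0) + rng.normal(0, sgma, n)
    decis = (r1 > r0).astype(int)            # detector: larger correlator output wins
    return np.mean(decis != bits)

p_sim = smld_pe54(6.0)
p_theo = qfunc(math.sqrt(10 ** 0.6))         # theoretical Q(sqrt(E/N0)) at 6 dB
```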
function [p]=smldPe54(snr_in_dB)
% [p] = smldPe54(snr_in_dB)
% SMLDPE54 finds the probability of error for the given
% snr_in_dB, signal-to-noise ratio in dB.
E=1;
SNR=exp(snr_in_dB*log(10)/10);    % signal-to-noise ratio
sgma=E/sqrt(2*SNR);               % sigma, standard deviation of noise
N=10000;
% generation of the binary data source
for i=1:N,
  temp=rand;                      % a uniform random variable over (0,1)
  if (temp<0.5),
    dsource(i)=0;                 % with probability 1/2, source output is 0
  else
    dsource(i)=1;                 % with probability 1/2, source output is 1
  end
end;
% detection, and probability of error calculation
numoferr=0;
for i=1:N,
  % matched filter outputs
  if (dsource(i)==0),             % if the source output is "0"
    r0=E+gngauss(sgma);
    r1=gngauss(sgma);
  else                            % if the source output is "1"
    r0=gngauss(sgma);
    r1=E+gngauss(sgma);
  end;
  % detector follows
  if (r0>r1),
    decis=0;                      % decision is "0"
  else
    decis=1;                      % decision is "1"
  end;
  if (decis~=dsource(i)),         % if it is an error, increase the error counter
    numoferr=numoferr+1;
  end;
end;
p=numoferr/N;                     % probability of error estimate
Figure 5.9: Error probability from Monte Carlo simulation compared with theoretical error probability for orthogonal signaling.
In Figure 5.9, simulation and theoretical results completely agree at low signal-to-noise ratios, whereas at higher SNRs they agree less. Can you explain why? How should we change the simulation process to result in better agreement at higher signal-to-noise ratios?
5.2.6 Other Binary Signal Transmission Methods
The binary signal transmission method described above was based on the use of orthogonal signals. Below, we describe two other methods for transmitting binary information through a communication channel. One method employs antipodal signals. The other method employs an on-off-type signal.
5.2.7 Antipodal Signals for Binary Signal Transmission
Two signal waveforms are said to be antipodal if one signal waveform is the negative of the other. For example, one pair of antipodal signals is illustrated in Figure 5.10(a). A second pair is illustrated in Figure 5.10(b).
Figure 5.10: Examples of antipodal signals. (a) A pair of antipodal signals. (b) Another pair of antipodal signals.

Suppose we use antipodal signal waveforms s0(t) = s(t) and s1(t) = -s(t) to transmit binary information, where s(t) is some arbitrary waveform having energy E. The received signal waveform from an AWGN channel may be expressed as

r(t) = +/- s(t) + n(t),
0 <= t <= Tb

smld_err_prb(i)=smldPe58(SNRindB1(i));    % simulated error rate
end;
% plotting commands follow
semilogy(SNRindB1,smld_err_prb,'*');
hold
semilogy(SNRindB2,theo_err_prb);
function [p]=smldPe58(snr_in_dB)
% [p] = smldPe58(snr_in_dB)
% SMLDPE58 simulates the error probability for the given
% snr_in_dB, signal-to-noise ratio in dB.
M=16;                             % 16-ary PAM
d=1;
SNR=exp(snr_in_dB*log(10)/10);    % signal-to-noise ratio per bit
sgma=sqrt((85*d^2)/(8*SNR));      % sigma, standard deviation of noise
N=10000;                          % number of symbols being simulated
% generation of the 16-ary data source
for i=1:N,
  temp=rand;                      % a uniform random variable over (0,1)
  index=floor(M*temp);            % the index is an integer from 0 to M-1, where
                                  % all the possible values are equally likely
  dsource(i)=index;
end;
% detection, and probability of error calculation
numoferr=0;
for i=1:N,
  % matched filter outputs
  % (2*dsource(i)-M+1)*d is the mapping to the 16-ary constellation
  r=(2*dsource(i)-M+1)*d+gngauss(sgma);
  % the detector
  if (r>(M-2)*d),
    decis=15;
  elseif (r>(M-4)*d),
    decis=14;
  elseif (r>(M-6)*d),
    decis=13;
  elseif (r>(M-8)*d),
    decis=12;
  elseif (r>(M-10)*d),
    decis=11;
  elseif (r>(M-12)*d),
    decis=10;
  elseif (r>(M-14)*d),
    decis=9;
  elseif (r>(M-16)*d),
    decis=8;
  elseif (r>(M-18)*d),
    decis=7;
  elseif (r>(M-20)*d),
    decis=6;
  elseif (r>(M-22)*d),
    decis=5;
  elseif (r>(M-24)*d),
    decis=4;
  elseif (r>(M-26)*d),
    decis=3;
  elseif (r>(M-28)*d),
    decis=2;
  elseif (r>(M-30)*d),
    decis=1;
  else
    decis=0;
  end;
  if (decis~=dsource(i)),         % if it is an error, increase the error counter
    numoferr=numoferr+1;
  end;
end;
p=numoferr/N;                     % probability of error estimate
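The long elseif chain is simply a uniform quantizer, so an equivalent detector can be written as a single rounding step. A Python/NumPy sketch follows (the function name pam_detect is ours):

```python
import numpy as np

def pam_detect(r, M=16, d=1.0):
    """Nearest-level detector for M-ary PAM with points (2k - M + 1)*d, k = 0..M-1.
    Collapses the 15-branch elseif chain into one quantization step; exact ties
    at the thresholds occur with probability zero for Gaussian noise."""
    idx = np.round((np.asarray(r) + (M - 1) * d) / (2 * d)).astype(int)
    return np.clip(idx, 0, M - 1)

levels = (2 * np.arange(16) - 15) * 1.0   # the 16 constellation points
print(pam_detect(levels))                 # noiseless points decode to indices 0..15
```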
Figure 5.25: Error rate from Monte Carlo simulation compared with the theoretical error probability for M = 16 PAM.
5.4 Multidimensional Signals

In the preceding section we constructed multiamplitude signal waveforms, which allowed us to transmit multiple bits per signal waveform. Thus, with signal waveforms having M = 2^k amplitude levels, we are able to transmit k = log2 M bits of information per signal waveform. We also observed that the multiamplitude signals can be represented geometrically as signal points on the real line (see Figure 5.20). Such signal waveforms are called one-dimensional signals. In this section we consider the construction of a class of M = 2^k signal waveforms that have a multidimensional representation. That is, the set of signal waveforms can be represented geometrically as points in N-dimensional space. We have already observed that binary orthogonal signals are represented geometrically as points in two-dimensional space.
5.4.1 Multidimensional Orthogonal Signals
There are many ways to construct multidimensional signal waveforms with various properties. In this section, we consider the construction of a set of M = 2^k waveforms s_i(t), for i = 0, 1, ..., M - 1, which have the properties of (a) mutual orthogonality and (b) equal energy. These two properties may be succinctly expressed as

integral from 0 to T of s_i(t) s_k(t) dt = E delta_ik,   i, k = 0, 1, ..., M - 1    (5.4.1)

where E is the energy of each signal waveform and delta_ik is called the Kronecker delta, which is defined as

delta_ik = 1 for i = k, and delta_ik = 0 for i != k    (5.4.2)
As in our previous discussion, we assume that an information source is providing a sequence of information bits, which are to be transmitted through a communication channel. The information bits occur at a uniform rate of R bits per second. The reciprocal of R is the bit interval, Tb. The modulator takes k bits at a time and maps them into one of M = 2^k signal waveforms. Each block of k bits is called a symbol. The time interval available to transmit each symbol is T = k Tb. Hence, T is the symbol interval.
Figure 5.26: An example of four orthogonal, equal-energy signal waveforms.

The simplest way to construct a set of M = 2^k equal-energy orthogonal waveforms in the interval (0, T) is to subdivide the interval into M equal subintervals of duration T/M and
to assign a signal waveform for each subinterval. Figure 5.26 illustrates such a construction for M = 4 signals. All signal waveforms constructed in this manner have identical energy, given as

E = integral from 0 to T of s_i^2(t) dt = A^2 T / M,   i = 0, 1, 2, ..., M - 1    (5.4.3)
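The construction can be checked numerically. The Python/NumPy sketch below builds the M = 4 subinterval waveforms of Figure 5.26 on a discrete grid and verifies (5.4.1) and (5.4.3); the amplitude, interval, and grid size are assumed example values.

```python
import numpy as np

# Build M = 4 equal-energy orthogonal waveforms on (0, T) by assigning
# amplitude A to the i-th of M subintervals of length T/M (Figure 5.26).
M, T, A, ns = 4, 1.0, 2.0, 400           # ns samples per symbol interval
dt = T / ns
s = np.zeros((M, ns))
for i in range(M):
    s[i, i*ns//M:(i+1)*ns//M] = A

gram = s @ s.T * dt                      # approximates integral of s_i(t) s_k(t) dt
E = A**2 * T / M                         # energy from (5.4.3)
print(np.allclose(gram, E * np.eye(M)))  # -> True: E on the diagonal, 0 elsewhere
```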
Such a set of orthogonal waveforms can be represented as a set of M-dimensional orthogonal vectors, i.e.,

s0 = (sqrt(E), 0, 0, ..., 0)
s1 = (0, sqrt(E), 0, ..., 0)
...
s_{M-1} = (0, 0, ..., 0, sqrt(E))    (5.4.4)
Figure 5.27 illustrates the signal points (signal constellations) corresponding to M = 2 and M = 3 orthogonal signals.
Figure 5.27: Signal constellation for M = 2 and M = 3 orthogonal signals.

Let us assume that these orthogonal signal waveforms are used to transmit information through an AWGN channel. Consequently, if the transmitted waveform is s_i(t), the received waveform is
r(t) = s_i(t) + n(t),   0 <= t <= T

Pc = P(r0 > r1, r0 > r2, ..., r0 > r_{M-1})    (5.4.11)
and the probability of a symbol error is simply

PM = 1 - Pc = 1 - P(r0 > r1, r0 > r2, ..., r0 > r_{M-1})    (5.4.12)

It can be shown that PM can be expressed in integral form as

PM = (1/sqrt(2 pi)) integral from -inf to inf of [1 - (1 - Q(y))^{M-1}] e^{-(y - sqrt(2E/N0))^2 / 2} dy    (5.4.13)

For the special case M = 2, the expression in (5.4.13) reduces to

P2 = Q(sqrt(Eb/N0))
which is the result we obtained in Section 5.2 for binary orthogonal signals. The same expression for the probability of error is obtained when any one of the other M - 1 signals is transmitted. Since all the M signals are equally likely, the expression for PM given by (5.4.13) is the average probability of a symbol error. This integral can be evaluated numerically.

Sometimes, it is desirable to convert the probability of a symbol error into an equivalent probability of a binary digit error. For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability

PM / (M - 1) = PM / (2^k - 1)    (5.4.14)

Furthermore, there are (k choose n) ways in which n bits out of k may be in error. Hence, the average number of bit errors per k-bit symbol is

sum_{n=1}^{k} n (k choose n) PM / (2^k - 1) = k 2^{k-1} PM / (2^k - 1)    (5.4.15)

and the average bit error probability is just the result in (5.4.15) divided by k, the number of bits per symbol. Thus

Pb = 2^{k-1} PM / (2^k - 1)    (5.4.16)
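Both (5.4.13) and (5.4.16) are straightforward to evaluate numerically. The Python/NumPy sketch below approximates the integral with a trapezoidal rule over a truncated range (the truncation limits are our assumption) and checks the M = 2 special case.

```python
import math
import numpy as np

def pm_orthogonal(snr, M):
    """Numerically evaluate the symbol-error integral (5.4.13); snr = E/N0."""
    y = np.linspace(-20.0, 20.0, 20001)
    h = y[1] - y[0]
    Qy = 0.5 * np.vectorize(math.erfc)(y / math.sqrt(2))    # Q(y)
    integrand = (1 - (1 - Qy)**(M - 1)) * np.exp(-(y - math.sqrt(2*snr))**2 / 2)
    trap = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return trap / math.sqrt(2 * math.pi)

def pb_from_pm(pm, k):
    """Bit-error probability from (5.4.16): Pb = 2^(k-1) PM / (2^k - 1)."""
    return 2**(k - 1) * pm / (2**k - 1)

# sanity check: for M = 2 the integral reduces to Q(sqrt(E/N0))
snr = 4.0
pm2 = pm_orthogonal(snr, 2)
q2 = 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))
```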
The graphs of the probability of a binary digit error as a function of the SNR per bit, Eb/N0, are shown in Figure 5.29 for M = 2, 4, 8, 16, 32, 64, where Eb = E/k is the energy per bit. This figure illustrates that by increasing the number M of waveforms, one
Figure 5.29: Bit-error probability for orthogonal signals. (The horizontal axis is 10 log10(Eb/N0).)
can reduce the SNR per bit required to achieve a given probability of a bit error. The MATLAB script for the computation of the error probability in (5.4.13) is given below.
% MATLAB script that generates the probability of error versus the signal-to-noise ratio
initial_snr=0; final_snr=15; snr_step=1;
tolerance=1e-7;                   % tolerance used for the integration
minus_inf=-20;                    % this is practically -infinity
plus_inf=20;                      % this is practically infinity
snr_in_dB=initial_snr:snr_step:final_snr;
for i=1:length(snr_in_dB),
  snr=10^(snr_in_dB(i)/10);
  Pe_2(i)=Qfunct(sqrt(snr));
  Pe_4(i)=(2/3)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,4);
  Pe_8(i)=(4/7)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,8);
  Pe_16(i)=(8/15)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,16);
  Pe_32(i)=(16/31)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,32);
  Pe_64(i)=(32/63)*quad8('bdt_int',minus_inf,plus_inf,tolerance,[],snr,64);
end;
% plotting commands follow
Illustrative Problem 5.10 Perform a Monte Carlo simulation of a digital communication system that employs M = 4 orthogonal signals. The model of the system to be simulated is illustrated in Figure 5.30.
Figure 5.30: Block diagram of the system with M = 4 orthogonal signals for Monte Carlo simulation.
As shown, we simulate the generation of the random variables r0, r1, r2, r3, which constitute the input to the detector. We may first generate a binary sequence of 0's and 1's that occur with equal probability and are mutually statistically independent, as in Illustrative Problem 5.4. The binary sequence is grouped into pairs of bits, which are mapped into the corresponding signal components. An alternative to generating the individual bits is to generate the pairs of bits, as in Illustrative Problem 5.8. In any case, we have the mapping of the four symbols into the signal points:
00 -> s0 = (sqrt(E), 0, 0, 0)
01 -> s1 = (0, sqrt(E), 0, 0)
10 -> s2 = (0, 0, sqrt(E), 0)
11 -> s3 = (0, 0, 0, sqrt(E))    (5.4.17)
The additive noise components n0, n1, n2, n3 are generated by means of four Gaussian noise generators, each having mean zero and variance sigma^2 = E N0 / 2. For convenience, we may normalize the symbol energy to E = 1 and vary sigma^2. Since E = 2 Eb, it follows that Eb = E/2. The detector output is compared with the transmitted sequence of bits, and an error counter is used to count the number of bit errors. Figure 5.31 illustrates the results of this simulation for the transmission of 20,000 bits at several different values of the SNR Eb/N0. Note the agreement between the simulation results and the theoretical value of Pb given by (5.4.16). The MATLAB scripts for this problem are given below.
% MATLAB script for Illustrative Problem 5.10, Chapter 5.
echo on
SNRindB=0:2:10;
for i=1:length(SNRindB),
  smld_err_prb(i)=smldPe59(SNRindB(i));   % simulated error rate
end;
% plotting commands follow
semilogy(SNRindB,smld_err_prb,'*');
function [p]=smldPe59(snr_in_dB)
% [p] = smldPe59(snr_in_dB)
% SMLDPE59 simulates the probability of error for the given
% snr_in_dB, signal-to-noise ratio in dB.
M=4;                              % quaternary orthogonal signaling
E=1;
SNR=exp(snr_in_dB*log(10)/10);    % signal-to-noise ratio per bit
sgma=sqrt(E^2/(4*SNR));
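The function is cut off at this point in the excerpt. For completeness, a Python/NumPy sketch of the full simulation follows; the vectorized structure and the bit-count bookkeeping are our assumptions, while the noise standard deviation matches the sgma above.

```python
import math
import numpy as np

def smld_pe59(snr_per_bit_db, n_sym=10_000, seed=3):
    """Monte Carlo bit-error rate for M = 4 orthogonal signals in AWGN.
    snr_per_bit_db is Eb/N0 in dB; the symbol energy is normalized to E = 1."""
    rng = np.random.default_rng(seed)
    M, E = 4, 1.0
    snr = 10 ** (snr_per_bit_db / 10)          # Eb/N0
    sgma = math.sqrt(E**2 / (4 * snr))         # sigma as in smldPe59
    sym = rng.integers(0, M, n_sym)            # equiprobable 2-bit symbols
    r = rng.normal(0, sgma, (n_sym, M))        # noise at all four correlators
    r[np.arange(n_sym), sym] += math.sqrt(E)   # add sqrt(E) on the transmitted branch
    decis = np.argmax(r, axis=1)               # largest correlator output wins
    diff = decis ^ sym                         # XOR exposes which of the 2 bits differ
    bit_errors = ((diff >> 1) & 1) + (diff & 1)
    return bit_errors.sum() / (2 * n_sym)

p_2dB, p_8dB = smld_pe59(2.0), smld_pe59(8.0, seed=4)
```

As expected, the estimated bit-error rate falls as Eb/N0 increases.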