SOLUTIONS MANUAL FOR COMPUTER AND COMMUNICATION NETWORKS
Nader F. Mir
Upper Saddle River, NJ • Boston • Indianapolis • San Francisco • New York • Toronto • Montreal • London • Munich • Paris • Madrid • Cape Town • Sydney • Tokyo • Singapore • Mexico City
The author and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

Visit us on the Web: www.prenhallprofessional.com

Copyright © 2007 by Pearson Education, Inc. This work is protected by United States copyright laws and is provided solely for the use of instructors in teaching their courses and assessing student learning. Dissemination or sale of any part of this work (including on the World Wide Web) will destroy the integrity of the work and is not permitted. The work and materials from it should never be made available to students except by instructors using the accompanying text in their classes. All recipients of this work are expected to abide by these restrictions and to honor the intended pedagogical purposes and the needs of other instructors who rely on these materials.

ISBN 0-13-234570-6
First release, March 2007
Contents

Preface  ii
0.1 How to Obtain Errata of Text-Book  iii
0.2 Errors in This Solution Manual  iii
0.3 How to Obtain an Updated Solution Manual  iii
0.4 How to Contact Author  iv
About the Author  v

Part I: Fundamental Concepts  1
1 Packet-Switched Networks  3
2 Foundation of Networking Protocols  9
3 Networking Devices  15
4 Data Links and Transmission  21
5 Local-Area Networks and Networks of LANs  29
6 Wireless Networks and Mobile IP  35
7 Routing and Inter-Networking  41
8 Transport and End-to-End Protocols  51
9 Applications and Network Management  57
10 Network Security  67

Part II: Advanced Concepts  73
11 Packet Queues and Delay Analysis  75
12 Quality-of-Service and Resource Allocation  89
13 Networks in Switch Fabrics  101
14 Optical Switches and Networks, and WDM  121
15 Multicasting Techniques and Protocols  123
16 VPNs, Tunneling, and Overlay Networks  133
17 Compression of Digital Voice and Video  135
18 VoIP and Multimedia Networking  149
19 Mobile Ad-Hoc Networks  155
20 Wireless Sensor Networks  157
Preface

An updated version of the solution manual is hereby provided to instructors. For any problem marked with "N/A," the solution will be provided in upcoming versions. Please check with the publisher or the author from time to time to obtain the latest version of the solution manual. For effective educational purposes, instructors are kindly requested not to allow any student to access this solution manual.
0.1 How to Obtain Errata of Text-Book

The errata of the text-book, Edition 1, is now available. Please contact the author directly at [email protected] for a copy.
0.2 Errors in This Solution Manual

If you find any error in this solution manual, please directly inform the author at [email protected].
0.3 How to Obtain an Updated Solution Manual

Please check the web page of the text-book on the Prentice Hall site from time to time and click on the "Instructors" link to obtain the latest version of this solution manual.
0.4 How to Contact Author

Please feel free to send any feedback on the textbook to the Department of Electrical Engineering, San Jose State University, San Jose, CA 95192, U.S.A., or via e-mail at [email protected].

The preparation of this version of the solution manual took years, and the manual may contain some errors. Please also feel free to send me any feedback on this solution manual. I would love to hear from you, especially if you have suggestions for improving this book further for its next editions. I will carefully read all review comments. You can find out more about me at http://www.engr.sjsu.edu/nmir. I hope that you enjoy the text and that you receive a little of my liking for computer communications from it.
Nader F. Mir San Jose, California
About the Author

Nader F. Mir received the B.Sc. degree (with honors) in electrical and computer engineering in 1985 and the M.Sc. and Ph.D. degrees, both in electrical engineering, from Washington University in St. Louis, MO, in 1990 and 1994, respectively.

He is currently a professor and department associate chairman of Electrical Engineering at San Jose State University, California. He is also the director of the MSE Program in Optical Sensors and Networks for Lockheed Martin Space Systems. Previously, he was an associate professor at this school and an assistant professor at the University of Kentucky in Lexington. From 1994 to 1996, he was a research scientist at the Advanced Telecommunications Institute, Stevens Institute of Technology, in New Jersey, working on the design of advanced telecommunication networks. From 1990 to 1994, he was with the Computer and Communications Research Center at Washington University in St. Louis and worked as a research assistant on design and analysis of high-speed switching-systems projects.

His research interests are analysis of computer communication networks, design and analysis of switching systems, network design for wireless ad-hoc and sensor systems, and applications of digital integrated circuits in computer communications. He is a senior member of the IEEE and has served as a member of the Technical Program Committee and Steering Committee of a number of major IEEE networking conferences, such as WCNC, GLOBECOM, and ICC. Dr. Mir has published numerous refereed technical journal and conference papers, all in the field of communications and networking. He has published a book on video communication engineering and another textbook, published by Prentice Hall, entitled "Computer & Communication Networks, Design and Analysis." Dr. Mir has received a number of prestigious national and university awards, including the university teaching recognition award and research excellence award. He is also the recipient of the 2004 IASTED Outstanding Tutorial Presentation award. Currently, he holds several journal editorial positions: Editorial Board Member of the International Journal of Internet Technology and Secured Transactions, Editor of the Journal of Computing and Information Technology, and Associate Editor of IEEE Communications Magazine.
Part I
Fundamental Concepts
Chapter 1
Packet-Switched Networks
1. Total distance = 2 sqrt(3,000^2 + 10,000^2) = 20,880.61 km. Speed = c = 2.3 × 10^8 m/s.
(a) Propagation delay = t_p = distance/c = 20,880.61 km / (2.3 × 10^8 m/s) = 90.8 ms
(b) Number of bits in transit during the propagation delay = (90.8 ms) × (100 × 10^6 b/s) = 9.08 Mb
(c) 10 bytes = 80 bits and 2.5 bytes = 20 bits, so the total length = 80 + 20 = 100 bits.
Transmission time T = 100 b / (100 × 10^6 b/s) = 1 μs
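For instructors who want to verify these figures numerically, the short Python sketch below recomputes the Problem 1 quantities. The constants (the round-trip distance 2·sqrt(3,000^2 + 10,000^2) km, the 100 Mb/s link rate, and the 2.3 × 10^8 m/s propagation speed) are taken from the solution above; everything else is only an illustrative helper.

```python
import math

C = 2.3e8          # propagation speed in the medium, m/s
RATE = 100e6       # link rate, b/s

# Round-trip distance used in the solution: 2 * sqrt(3,000^2 + 10,000^2) km
distance_m = 2 * math.sqrt(3_000**2 + 10_000**2) * 1_000

t_p = distance_m / C                    # (a) propagation delay, seconds
bits_in_transit = t_p * RATE            # (b) bits "in flight" during t_p
frame_bits = 10 * 8 + 2.5 * 8           # (c) 10-byte data + 2.5-byte ack
t_f = frame_bits / RATE                 # transmission time, seconds

print(f"t_p = {t_p * 1e3:.1f} ms")                       # ~90.8 ms
print(f"bits in transit = {bits_in_transit / 1e6:.2f} Mb")  # ~9.08 Mb
print(f"t_f = {t_f * 1e6:.1f} us")                        # ~1 us
```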
2. Total distance = 2 sqrt((30/1000)^2 + 10,000^2) ≈ 20,000 km. Speed = c = 2.3 × 10^8 m/s.
(a) t_p = distance/c = 20,000 km / (2.3 × 10^8 m/s) = 87 ms
(b) 100 Mb/s × 0.087 s = 8.7 Mb
(c) Data: (10 B × 8 b/B)/(100 Mb/s) + t_p = 0.79 μs + 0.087 s
Ack: (2.5 B × 8 b/B)/(100 Mb/s) + t_p = 0.19 μs + 0.087 s
Total time ≈ 1 μs (transfer) + 0.173 s (propagation)
3. Assuming the speed of transmission is 2.3 × 10^8 m/s:
(a) Total delay: D = [n_p + (n_h − 2)] t_f + (n_h − 1) t_p + n_h t_r
(b) t_p1 = (50 miles × 1,600 m/mile)/(2.3 × 10^8 m/s) = 0.35 ms
t_p2 = (400 miles × 1,600 m/mile)/(2.3 × 10^8 m/s) = 2.8 ms
Number of packets = n_p = 200 MB / 10 KB = 20,000
t_f = (10,040 B/packet × 8 b/B)/(100 Mb/s) = 0.8 ms/packet
D = [20,000 + (5 − 2)](0.8 ms) + [(3 − 1)(0.35 ms) + (3 − 1)(2.8 ms)] + 5 × (0.2 ms) ≈ 16.6 s
4. D_p = [n_p + (n_h − 2)] t_f1 + n_h t_r1 + (n_h − 1) t_p
D_c = 3([1 + (n_h − 2)] t_f2 + n_h t_r2 + (n_h − 1) t_p)
D_t = D_p + D_c = (n_p + n_h − 2) t_f1 + 3(n_h − 1) t_f2 + n_h (t_r1 + 3 t_r2) + 4(n_h − 1) t_p
5. Number of packets = n_p = 200 MB / 10 KB = 20,000
D_t = D_p + D_c, where D_c = d_conn-req + d_conn-accept + d_conn-release
(a) Here, the problem asks that t_r be defined as the processing time for each packet. Therefore, t_r = 20,000 × 0.2 = 4,000 s.
d_conn-req = d_conn-accept = d_conn-release = [n_p + (n_h − 2)] t_f + n_h t_r + (n_h − 1) t_p
= [1 + (5 − 2)](500 b / 100 Mb/s) + 3 × 4,000 s + 4.84 ms ≈ 12,000 s
(b) Same as Part (a).
(c) D_t = D_p + D_c = 17 + 3 × 12,000 = 36,017 s
6. s = 10^9 b/s, n_h = 10 nodes, t_r1 = t_r2 = 0.1 s = t_r
Data forms two packets: (9,960 + 40) bytes for packet 1 and (2,040 + 40) bytes for packet 2.
t_f1-packet1 = (10,000 B × 8 b/B)/(10^9 b/s) = 8 × 10^−5 s
t_f1-packet2 = (2,080 B × 8 b/B)/(10^9 b/s) = 16.64 × 10^−6 s
t_f2 = transfer time for control packets = 500 b / (10^9 b/s) = 5 × 10^−7 s
t_p = (500 miles × 1.61 × 10^3 m/mile)/(2.3 × 10^8 m/s) = 3.5 × 10^−3 s
(a) Request + accept time: t_1 + t_2 = 2([n_p + (n_h − 1)] t_f2 + (n_h − 1) t_p + n_h t_r) = 2.06 s
(b) t_3 = (1/2)(t_1 + t_2) = 1.03 s
(c) D_t = D_p + D_c
D_p = D_p-packet1 + D_p-packet2 = [n_p + (n_h − 2)] t_f1-packet1 + n_h t_r1 + (n_h − 1) t_p + [n_p + (n_h − 2)] t_f1-packet2 + n_h t_r1 + (n_h − 1) t_p
D_c = t_1 + t_2 + t_3
D_t = 4.1 s
7. d + h = 10,000 b, ρ_d = 72%, h/d = 0.04, s = 100 Mb/s
(a) h = 0.04d, so d/(d + h) = 0.96. Since ρ_d = ρ d/(h + d), then 0.72 = 0.96ρ ⇒ ρ = 0.74
(b) μ = s/(h + d) = (100 × 10^6 b/s)/(10 × 10^3 b) = 10,000 packets/s
(c) λ = ρμ = 0.74 × 10,000 = 7,400 packets/s
D̄ = 1/(μ − λ) = 1/(10,000 − 7,400) = 0.38 ms
(d) D̄_opt = (h/s)(√ρ_d/(1 − √ρ_d))^2
With d + h = 10,000 and h/d = 0.04 ⇒ h = 384 b
D̄_opt ≈ 0.12 ms
8. s = 100 Mb/s, ρ = 80%
(a) ρ = λ/μ ⇒ 0.8 = 8,000/μ ⇒ μ = 10,000 packets/s
μ = s/(h + d) ⇒ 10,000 = (100 × 10^6)/(h + d) ⇒ h + d = 10,000 b
(b) ρ_h = 0.008 and ρ_h = ρ h/(d + h) ⇒ d + h = 100h ⇒ h = 100 b, and d = 9,900 b
(c) ρ_h = 0.008, ρ_d = ρ − ρ_h = 0.8 − 0.008 ⇒ ρ_d = 0.792
d_opt = h √ρ_d/(1 − √ρ_d) = 809 bits
(d + h)_opt = h + d_opt = 100 + 809 ⇒ (d + h)_opt = 909 b
(d) D̄_opt = (h/s)(1/(1 − √ρ_d))^2 ⇒ D̄_opt = 8.2 × 10^−5 s

Figure 1.1: Signaling delay in a connection-oriented packet-switched environment, showing the connection request, connection accept, data, and connection release exchanges among Nodes A, B, C, and D over time.

9. See Figure 1.1.
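As a numerical check on Problem 8, the short Python sketch below recomputes the optimal data-field size and delay from the formulas used above; the inputs (link speed, utilization, arrival rate, header utilization) are those given in the problem, and the variable names are only illustrative.

```python
import math

s = 100e6        # link speed, b/s
rho = 0.80       # total utilization
lam = 8_000      # arrival rate, packets/s

mu = lam / rho                    # (a) service rate -> 10,000 packets/s
frame_bits = s / mu               # h + d = 10,000 b

rho_h = 0.008                     # utilization due to headers
h = frame_bits * rho_h / rho      # (b) header size -> 100 b
d = frame_bits - h                # data size -> 9,900 b

rho_d = rho - rho_h               # (c) utilization due to data -> 0.792
d_opt = h * math.sqrt(rho_d) / (1 - math.sqrt(rho_d))    # ~809 b
D_opt = (h / s) * (1 / (1 - math.sqrt(rho_d)))**2        # (d) ~8.2e-5 s

print(round(d_opt), f"{D_opt:.2e}")
```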
10. D = (d + h) / (s[1 − (ρ_d/d)(d + h)])
(a) Setting ∂D/∂h = 0 gives h_opt = d(1 − 2ρ_d)/(2ρ_d)
(b) Queueing delay
Chapter 2
Foundation of Networking Protocols

1. (a) Address: 127.156.28.31 = 0111 1111 . 1001 1100 . 0001 1100 . 0001 1111
Mask: 255.255.255.0 = 1111 1111 . 1111 1111 . 1111 1111 . 0000 0000
Class A
Subnet ID: 1001 1100 0001 1100 = 39964
(b) Address: 150.156.23.14 = 1001 0110 . 1001 1100 . 0001 0111 . 0000 1110
Mask: 255.255.255.128 = 1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID: 000101110 = 46
(c) Address: 150.18.23.101 = 1001 0110 . 0001 0010 . 0001 0111 . 0110 0101
Mask: 255.255.255.128 = 1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID: 000101110 = 46
2. (a) IP: 1010 1101 . 1010 1000 . 0001 1100 . 0010 1101
Mask: 1111 1111 . 1111 1111 . 1111 1111 . 0000 0000
Class B
Subnet ID = 00011100 = 28
(b) A packet with IP address 188.145.23.1 using mask pattern 255.255.255.128:
IP: 1011 1100 . 1001 0001 . 0001 0111 . 0000 0001
Mask: 1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID = 000101110 = 46
(c) A packet with IP address 139.189.91.190 using mask pattern 255.255.255.128:
IP: 1000 1011 . 1011 1101 . 0101 1011 . 1011 1110
Mask: 1111 1111 . 1111 1111 . 1111 1111 . 1000 0000
Class B
Subnet ID = 010110111 = 183
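The subnet-ID extractions in Problems 1 and 2 can be checked with a small Python helper like the sketch below. It assumes the book's convention that the subnet ID is made of the mask bits beyond the classful (A/B/C) network prefix; the function name and structure are illustrative only.

```python
import ipaddress

def subnet_id(addr: str, mask: str) -> int:
    """Return the subnet-ID bits: network bits beyond the classful prefix."""
    a = int(ipaddress.IPv4Address(addr))
    m = int(ipaddress.IPv4Address(mask))
    first_octet = a >> 24
    # classful prefix length: class A = 8, class B = 16, class C = 24 bits
    class_len = 8 if first_octet < 128 else 16 if first_octet < 192 else 24
    mask_len = bin(m).count("1")
    # keep only the bits between the classful prefix and the subnet mask
    subnet_bits = (a & m) & ((1 << (32 - class_len)) - 1)
    return subnet_bits >> (32 - mask_len)

print(subnet_id("150.156.23.14", "255.255.255.128"))   # 46
print(subnet_id("173.168.28.45", "255.255.255.0"))     # 28
```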
3. IP1: 1001 0110 . 0110 0001 . 0001 1100 . 0000 0000
IP2: 1001 0110 . 0110 0001 . 0001 1101 . 0000 0000
IP3: 1001 0110 . 0110 0001 . 0001 1110 . 0000 0000
New IP (CIDR): 150.97.28.0/22
4. Address: 141.33.11.0/22 = 1000 1101 . 0010 0001 . 0000 1011 . 0000 0000
141.33.12.0/22 = 1000 1101 . 0010 0001 . 0000 1100 . 0000 0000
141.33.13.0/22 = 1000 1101 . 0010 0001 . 0000 1101 . 0000 0000
Aggregate: 141.33.8.0/21
5. (a) 191.168.6.0
1011 1111 . 1010 1000 . 0000 0110 . 0000 0000
1111 1111 . 1111 1111 . 1111 1110 . 0000 0000
Result: 1011 1111 . 1010 1000 . 0000 0110 . 0000 0000 = 191.168.6.0/23
(b) 173.168.28.45
1010 1101 . 1010 1000 . 0001 1100 . 0010 1101
1111 1111 . 1111 1111 . 1111 1110 . 0000 0000
Result: 1010 1101 . 1010 1000 . 0001 1100 . 0000 0000 = 173.168.28.0/23
(c) 139.189.91.190
1000 1011 . 1011 1101 . 0101 1011 . 1011 1110
1111 1111 . 1111 1111 . 1111 1110 . 0000 0000
Result: 1000 1011 . 1011 1101 . 0101 1010 . 0000 0000 = 139.189.90.0/23
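The kind of route aggregation done by hand in Problems 3-5 can also be reproduced with Python's ipaddress module. In the sketch below the three blocks of Problem 4 are treated as /24 networks for simplicity (an assumption made only so that each string is a valid network address); the loop shrinks the prefix until one covering block remains.

```python
import ipaddress

nets = [ipaddress.ip_network(n) for n in
        ("141.33.11.0/24", "141.33.12.0/24", "141.33.13.0/24")]

# Shrink the prefix of the first block until it covers all three networks.
prefix = nets[0].prefixlen
candidate = nets[0]
while not all(n.subnet_of(candidate) for n in nets):
    prefix -= 1
    candidate = nets[0].supernet(new_prefix=prefix)

print(candidate)   # 141.33.8.0/21
```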
6. 180.19.18.3: 1011 0100 . 0001 0011 . 0001 0010 . 0000 0011
(a) 180.19.0.0/18: 1011 0100 . 0001 0011 . 0000 0000 . 0000 0000
180.19.3.0/22: 1011 0100 . 0001 0011 . 0000 0011 . 0000 0000
180.19.16.0/20: 1011 0100 . 0001 0011 . 0001 0000 . 0000 0000
(b) The longest match is 180.19.16.0/20.
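A quick way to sanity-check the longest-prefix-match answer of Problem 6 is the Python sketch below; the three table entries come from the problem, and strict=False is used only because one of the printed prefixes is not aligned to its own block boundary.

```python
import ipaddress

table = ["180.19.0.0/18", "180.19.3.0/22", "180.19.16.0/20"]
dest = ipaddress.ip_address("180.19.18.3")

matches = [ipaddress.ip_network(p, strict=False) for p in table
           if dest in ipaddress.ip_network(p, strict=False)]
best = max(matches, key=lambda n: n.prefixlen)
print(best)   # 180.19.16.0/20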
7. (a) N1 L11: 1100 0011 . 0001 1001 . 0000 0000 . 0000 0000
N2 L13: 1000 0111 . 0000 1011 . 0000 0010 . 0000 0000
N3 L21: 1100 0011 . 0001 1001 . 0001 1000 . 0000 0000
N4 L23: 1100 0011 . 0001 1001 . 0000 1000 . 0000 0000
N5 L31: 0110 1111 . 0000 0101 . 0000 0000 . 0000 0000
N6 L33: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
(b) Packet: 1100 0011 . 0001 1001 . 0001 0001 . 0000 0011
L11: 1100 0011 . 0001 1001 . 0000 0000 . 0000 0000
L12: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
L12: 1100 0011 . 0001 1001 . 0000 1000 . 0000 0000
L13: 1000 0111 . 0000 1011 . 0000 0010 . 0000 0000
L21: 1100 0011 . 0001 1001 . 0001 1000 . 0000 0000
L22: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
L23: 1100 0011 . 0001 1001 . 0000 1000 . 0000 0000
L31: 0110 1111 . 0000 0101 . 0000 0000 . 0000 0000
L33: 1100 0011 . 0001 1001 . 0001 0000 . 0000 0000
Answer = N6
(c) 32 − 21 = 11 b, so 2^11 = 2,048 users
8. (a) The IPv4 address field has 32 bits. Total number of IP addresses available = 2^32.
Number of IP addresses per person for 620 million people = 2^32/(620 × 10^6) = 6.9 ≈ 7
(b) The number of bits required to serve 620 million people is 30, since 2^29 ≈ 536 million < 620 million < 2^30 ≈ 1,074 million. Thus, CIDR can only have 32 − 30 = 2 bits as the network ID field, as x.x.x.x/2.
(c) The IPv6 address field has 128 bits. Total number of IP addresses available = 2^128.
Number of IP addresses per person for 620 million people = 2^128/(620 × 10^6) = 5.49 × 10^29
9. (a) 1111:2A52:111:73C2:A123:56F4:1B3C in binary form:
0001 0001 0001 0001 : 0010 1010 0101 0010 : 1010 0001 0010 0011 : 0000 0001 0001 0001 : 0111 0011 1100 0010 : 1010 0001 0010 0011 : 0101 0110 0111 0100 : 0001 1011 0011 1100
(b) 2532::::FB58:909A:ABCD:0010 in binary form:
0010 0101 0011 0010 : : : : 1111 1011 0101 1000 : 1001 0000 1001 1010 : 1010 1011 1100 1101 : 0000 0000 0001 0000
(c) 2222:3333:AB01:1010:CD78:290B::1111 in binary form:
0010 0010 0010 0010 : 0011 0011 0011 0011 : 1010 1011 0000 0001 : 0001 0000 0001 0000 : 1100 0000 0111 1000 : 0010 1001 0000 1011 : : 0001 0001 0001 0001
10. N/A
11. Connection setup can be greatly simplified because the VP is already selected. Only the VC has to be chosen, with 2^16 = 64K choices.
12. Retained features:
• connection-oriented network
Lost features:
• shorter processing time
• lower header/data ratio
• harder to multiplex
Chapter 3
Networking Devices

1. (a) Number of channels = 12 × 5 × 10 = 600
(b) Capacity = 600 × 4 kHz = 2.4 MHz
2. (First, please make a correction: 170 Kb/s must change to 160 Kb/s.)
The total output bit-rate of the multiplexer is 160 Kb/s.
(a) Total bit-rate from analog links = (5 kHz + 2 kHz + 1 kHz) × 2 samples/cycle × 5 bits/sample = 80 Kb/s
(b) Total bit-rate from digital links = 160 Kb/s − 80 Kb/s = 80 Kb/s
Total bit-rate per digital line = 80 Kb/s / 4 lines = 20 Kb/s
Pulse stuffing per digital line = 20 Kb/s − 8 Kb/s = 12 Kb/s
(c) The number of channels of the frame dedicated to each line is proportional to the data rate of that line. Let's consider one channel for up to 10 Kb/s of data rate. Using a proportional channel assignment, we need a total of 5 + 2 + 1 = 8 analog channels. As each digital line requires 8 Kb/s, we can also assign one channel per digital line. Therefore we need 4 digital channels and one control channel:
Bits in each frame = (8 + 4 + 1) × 5 b/channel + 1 guard bit = 66 b/frame
Frame rate = (160 × 10^3 b/s)/(66 b/frame) = 2,424 frames/s
3. (a) Pulse stuffing = 4,800 b/s × 0.03 = 144 b/s
Number of characters = 1 (sync) + 99 (data) = 100
Synchronization bit rate = 4,800 b/s / 100 ≈ 50 b/s
Number of 150 b/s terminals = [4,800 − (2 × 600 + 5 × 300 + 50 + 144)] b/s / 150 b/s = 12.7 ≈ 12 terminals
(b) The number of characters for synchronization is proportional to the bit rates. For example, since we need 12 characters for the 150 b/s terminals, we need 3 characters for synchronization. The frame format in terms of bits is:
2 × (12 char × 10 b/char) + 5 × (6 char × 10 b/char) + 12 × (3 char × 10 b/char) + 3 × 10 b/char + 1 × 10 b/char = 940 b/frame
4. (a) Bits/frame (total) = 2 Mb/s × 26 μs/frame = 52 bits/frame
Bits/frame (data) = 52 bits/frame − 10 = 42 bits/frame
Number of channels = n = 42 bits/frame / 6 bits/ch = 7 ch/frame
(b) P[clipping] = Σ_{i=n}^{m−1} C(m−1, i) p^i (1 − p)^{m−1−i}
With m = 10, n = 7, p = 0.9: P[clipping] = 0.947
5. (a) ρ = t_a/(t_a + t_d) = 2/8 = 25%
(b) P_j = C(m, j)(t_a/t_d)^j / Σ_{i=0}^{n} C(m, i)(t_a/t_d)^i, with m = 8, n = 4, and t_a/t_d = 2/6 = 1/3:
P_{j=3} = C(8, 3)(1/3)^3 / Σ_{i=0}^{4} C(8, i)(1/3)^i ≈ 21%
(c) B = P_{j=n=4} = C(8, 4)(1/3)^4 / Σ_{i=0}^{4} C(8, i)(1/3)^i = (70/81)/(1 + 8/3 + 28/9 + 56/27 + 70/81) ≈ 9%
(d) E[C] = Σ_{j=1}^{4} j P_j = 1(0.275) + 2(0.32) + 3(0.213) + 4(0.0889) ≈ 1.94
6. (a) m = 4, n = 2
Prob[clipping] = P_c = Σ_{i=2}^{3} C(3, i) ρ^i (1 − ρ)^{3−i}
= 10.4% for ρ = 0.2
= 35.2% for ρ = 0.4
= 64.8% for ρ = 0.6
= 89.6% for ρ = 0.8
(b) m = 4, n = 3
Prob[clipping] = P_c = Σ_{i=3}^{3} C(3, 3) ρ^3 (1 − ρ)^{3−3} = ρ^3
= 0.8% for ρ = 0.2
= 6.4% for ρ = 0.4
= 21.6% for ρ = 0.6
= 51.2% for ρ = 0.8
(c) n = 4: P_4 = 100%
7. ρ = t_a/(t_a + t_d) = 0.9
For m = 11 and n = 10, the clipping probability is
P_c = Σ_{i=n}^{m−1} C(m−1, i) ρ^i (1 − ρ)^{m−1−i} = C(10, 10) ρ^10 (1 − ρ)^0 = ρ^10 ≈ 0.35
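The binomial clipping probability used in Problems 4, 6, and 7 is easy to tabulate numerically. The Python sketch below evaluates the same sum for the parameter sets above; the function name is illustrative only.

```python
from math import comb

def clipping_probability(m: int, n: int, rho: float) -> float:
    """P[an active source finds all n channels busy] with m sources,
    each of the other m-1 sources active with probability rho."""
    return sum(comb(m - 1, i) * rho**i * (1 - rho)**(m - 1 - i)
               for i in range(n, m))

print(round(clipping_probability(10, 7, 0.9), 3))   # ~0.947 (Problem 4b)
print(round(clipping_probability(4, 2, 0.2), 3))    # ~0.104 (Problem 6a)
print(round(clipping_probability(11, 10, 0.9), 3))  # ~0.349 (Problem 7)
```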
8. (1/μ)(1 − ηρ/m)
9. See Figure 3.1.

Figure 3.1: Line coding of the given bit sequence with natural NRZ, polar NRZ, and Manchester waveforms.
10. See Figure 3.2.

Figure 3.2: Modulation techniques: ASK, FSK, and PSK waveforms.
11. (a) Assume a packet incoming at an input port of the IPP has a length of L bits. Then
T = (d + 50)/r ⇒ ∂²T/∂d∂r = 0 ⇒ d_opt, r_opt
Therefore, ways to optimize the transmission delay T are the following:
• Increase the transmission rate r by reducing the clock cycle time of the CPU.
• Define the value of d to be equal to the highest-probability packet length L.
(b) For example, if the switch fabric has 5 stages of routing in its internal network, the processing delay mostly depends on the AND-gate switching time of the gates in the fabric. Assume CMOS transistors, which are the slowest technology for switching transistors, are used for this switch fabric. Assuming a 50-80 ns switching time for a CMOS AND gate, the total propagation delay in this switch fabric = 80 ns × 5 stages = 0.4 μs. On the other hand, the delay in the IPP (D) mostly depends on packet fragmentation and encapsulation delay. A typical value of this delay is about tens or hundreds of milliseconds for a 512-byte packet. Therefore, the processing delay in the switch fabric is not significant compared to the delay in the IPP.
12. N/A
Chapter 4
Data Links and Transmission

1. T_prop = (5,000 × 10^3 m)/(3 × 10^8 m/s) = 16.7 ms
Total size = (500 pages)(1,000 char/page)(8 bits/char) = 4 Mb
(a) T = 16.7 ms + 4 Mb/(64 Kb/s) = 62.51 s
(b) T = 16.7 ms + 4 Mb/(620 Mb/s) = 23.15 ms
(c) With two million volumes of books: total size = 4 Mb × 2 × 10^6 = 8,000 Gb
i. T = 16.7 ms + 8,000 Gb/(64 Kb/s) = 1.25 × 10^8 s ≈ 4 years
ii. T = 16.7 ms + 8,000 Gb/(620 Mb/s) = 12,903.2 s ≈ 3.6 hours
2. N/A
3. (a) See Figure 4.1 (a). CRC-12: X^12 + X^11 + X^3 + X^2 + X + 1
Rule of hardware: for each existing term except the first term (in this case X^12), assign an EXOR followed by a 1-bit shift register. For each non-existing term, assign a 1-bit shift register. Once all bits of the data (D,0) have moved in completely, the contents of the registers show the remainder of the division process.
(b) See Figure 4.1 (b). CRC-16: X^16 + X^15 + X^2 + 1

Figure 4.1: Shift-register hardware for (a) CRC-12 and (b) CRC-16.
4. (a) Dividend = X^10 + X^8 + X^6 + X^5 + X^4, divisor = X^4 + X
(b) If the dividend = X^10 + X^8 + X^6 + X^5 + X^4 and the divisor = X^4 + X, then the quotient = X^6 + X^4 − X^3 + X^2 + 2 and the remainder = −X^3 − 2X
5. The hardware is shown in Figure 4.2.

Figure 4.2: Contents of the four 1-bit shift registers for G = 10010 (X^4 + X), with the dividend 10101110000 entering from its leftmost bit.

If we shift in D,0 = 1010111,0000 with G = 10010 ⇒ X^4 + X, the final contents of the shift registers from the step-by-step implementation of (D,0)/G in modulo-2 arithmetic show CRC = 0001 (MSB at right):

Bits of D,0 left to shift in    Shift registers' contents
1010111,0000                    0000
010111,0000                     1000
10111,0000                      0100
0111,0000                       1010
111,0000                        0101
11,0000                         1110
1,0000                          1111
0000                            1011
000                             0001
00                              0100
0                               0010
-                               0001

If we shift in D,CRC = 1010111,1000 with G = 10010 ⇒ X^4 + X, the final contents of the shift registers from the step-by-step implementation of (D,CRC)/G in modulo-2 arithmetic show 0000, indicating no errors:

Bits of D,CRC left to shift in    Shift registers' contents
1010111,1000                      0000
010111,1000                       1000
10111,1000                        0100
0111,1000                         1010
111,1000                          0101
11,1000                           1110
1,1000                            1111
1000                              1011
000                               1001
00                                0000
0                                 0000
-                                 0000
6. (a) D = 1010 1101 0101 111, G = 1110 10, g = 6, so g − 1 = 5
D,0 = 1010 1101 0101 111,00000
CRC = remainder of (D,0)/G in modulo-2 arithmetic:
Dividend = 101011010101111,00000
Divisor = 111010
Quotient = 111011111000010
Remainder = 10100
CRC = 10100, so D,CRC = 1010 1101 0101 111,10100
(b) D,CRC = 1010 1101 0101 111,10100, G = 1110 10; dividing (D,CRC)/G in modulo-2 arithmetic:
Dividend = 101011010101111,10100
Divisor = 111010
Quotient = 111011111000010
Remainder = 0
The data is correct.
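The modulo-2 divisions of Problems 5 and 6 can be mechanized with a few lines of Python. The sketch below is a generic bitwise long division (not the book's shift-register hardware, just the same arithmetic); the data and generator strings are those of Problem 6.

```python
def mod2_div_remainder(bits: str, generator: str) -> str:
    """Remainder of a modulo-2 (XOR) division of `bits` by `generator`."""
    g = len(generator)
    work = list(bits)
    for i in range(len(bits) - g + 1):
        if work[i] == "1":
            for j in range(g):
                work[i + j] = str(int(work[i + j]) ^ int(generator[j]))
    return "".join(work[-(g - 1):])

def crc(data: str, generator: str) -> str:
    # transmitter side: append g-1 zeros and keep the remainder as the CRC
    return mod2_div_remainder(data + "0" * (len(generator) - 1), generator)

c = crc("101011010101111", "111010")
print(c)                                                   # 10100, as above
print(mod2_div_remainder("101011010101111" + c, "111010")) # all zeros: no errors
```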
7. v = 3 × 10^8 m/s, f = 80 b/frame, r = 10 × 10^6 b/s, t_p/t_f = 10
(a) E = t_f/t = t_f/(t_f + 2t_p) = 1/(1 + 2 t_p/t_f) = 1/(1 + 2(10)) = 0.04762, or 4.76%
(b) Assuming the speed of light to be 3 × 10^8 m/s in the cable:
t_p/t_f = (d/v)/(f/r) = d(10 × 10^6)/(80 × 3 × 10^8) = 10 ⇒ d = 24 km
(c) See Figure 4.3. t_p = d/v = 24 km/(3 × 10^8 m/s) ⇒ t_p = 8 × 10^−5 s
(d) t_p/t_f = 8: E = 1/(1 + 2(8)) = 0.0588 = 5.88%, t_p = 6.4 × 10^−5 s, d = 19.2 km
t_p/t_f = 6: E = 1/(1 + 2(6)) = 0.0769 = 7.69%, t_p = 4.8 × 10^−5 s, d = 14.4 km
t_p/t_f = 4: E = 1/(1 + 2(4)) = 0.111 = 11.11%, t_p = 3.2 × 10^−5 s, d = 9.6 km
t_p/t_f = 2: E = 1/(1 + 2(2)) = 0.2 = 20%, t_p = 1.6 × 10^−5 s, d = 4.8 km
f r
=
800 2×106
= 4 × 10−4 s
km
26
Chapter 4. Data Links and Transmission
E 0.2
0.1111 0.0769 0.0588 0.0476
2
4
6
8
10
tp tf
Figure 4.3: Answer to exercise. The efficiency trend. (a) Stop-and-Wait protocol: E=
tf t
=
tf tf +2tp
=
4×10−4 4×10−4 +2(.2)
= 0.0010 ≈ 0.1%
(b) Sliding window protocol, w = 6: w
E=
w+2
tp tf
=
6
6+2
0.2 4×10−4
= 0.00596 ≈ 0.6%
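The two efficiency formulas used in Problems 7 and 8 are reproduced in the Python sketch below with the Problem 8 numbers; the min(1, ...) guard is an added convenience for the case where the window already covers the pipe, not something stated in the solution.

```python
def stop_and_wait_efficiency(t_f: float, t_p: float) -> float:
    # E = t_f / (t_f + 2 t_p)
    return t_f / (t_f + 2 * t_p)

def sliding_window_efficiency(w: int, t_f: float, t_p: float) -> float:
    # E = w / (w + 2 t_p / t_f), capped at 1 when the window fills the pipe
    return min(1.0, w / (w + 2 * t_p / t_f))

t_f, t_p = 4e-4, 0.2   # Problem 8: 800 b at 2 Mb/s, t_p = 0.2 s
print(f"{stop_and_wait_efficiency(t_f, t_p):.4f}")      # ~0.0010
print(f"{sliding_window_efficiency(6, t_f, t_p):.5f}")  # ~0.00596
```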
9. Frame = 5,000 b, t_p = 1 μs/km
(a) Rate on R2-R3 = 1 Gb/s. Required condition on R3-R4: link R3-R4 must transfer slower, such that
Rate on R3-R4 = (E_R3-R4/E_R2-R3) × 1 Gb/s
(b) t_p = 1,800 km × 1 μs/km = 1,800 μs
t_f = 5,000 b/frame / (1 Gb/s) = 5 μs
E_R2-R3 = w/(w + 2 t_p/t_f) = 5/(5 + 2 × 1,800 μs/5 μs) = 6.8 × 10^−3
(c) t_p = 800 km × 1 μs/km = 800 μs
t_f = 5,000 b/frame / [(E_R3-R4/E_R2-R3) × 1 Gb/s] = (5 × 6.8 × 10^−3)/E_R3-R4 μs = 0.034/E_R3-R4 μs
E_R3-R4 = 1/(1 + 2 t_p/t_f) = 1/(1 + 2 × 800/(0.034/E_R3-R4)) ⇒ E_R3-R4 = 4.6 × 10^−3
Chapter 5
Local-Area Networks and Networks of LANs

1. (a) 88-bit packet ⇒ data part = 88 − 80. Propagation speed = 200 m/μs.
One cycle time = (transmission time + propagation time) for the data packet + (transmission time + propagation time) for the ack packet
= (88 b)/(10^6 b/s) + (10^3 m)/(200 × 10^6 m/s) + (256 b)/(10^6 b/s) + (10^3 m)/(200 × 10^6 m/s) = 354 × 10^−6 s
Total time = (one cycle time) × (total bits)/(data size per packet) = (354 × 10^−6 s) × (8 b/ch × 10^6 ch)/(176 b/packet) = 16 s
(b) One cycle time = (transmission time + propagation time) for the data packet + (transmission time + propagation time) for the ack packet
= (50 nodes × 1 b/node)/(10^6 b/s) + (10^3 m)/(200 × 10^6 m/s) + [256 b + (100/2) × 1 b]/(10^6 b/s) + (10^3 m)/(200 × 10^6 m/s) = 366 × 10^−6 s
Total time = (one cycle time) × (total bits)/(data size per packet) = (366 × 10^−6 s) × (8 b/ch × 10^6 ch)/(176 b/packet) = 16.54 s
Figure 5.1: The LAN overview of connections in a building, with LANs on the ground, 2nd, 3rd, and 4th floors.

2. Assuming that the computers and phones are placed at the corners of rooms, the overview of the LAN connections in a building is shown in Figure 5.1.
(a) 2nd floor: d = 5 + 3 + 5 = 13 m
3rd floor: d = 5 + 3 + 3 + 5 = 16 m
4th floor: d = 5 + 3 + 3 + 3 + 5 = 19 m
(b) VoIP rate per office = 64 × 2 = 128 Kb/s
Web rate per office = (22 KB/page)(2 pages/min)(1 min/60 s)(8 b/B) = 5.86 Kb/s
LAN rate = (128 + 5.86) Kb/s × 12 offices = 1.6 Mb/s
3. Please make the following correction: 100 m should be 1 km. Also combine Parts (a) and (b) into Part (a), so that Part (c) becomes Part (b).
Data rate = 100 × 10^6 b/s, speed = 200 m/μs, frame = 1,000 b
(a) Mean distance = 0.375 km
Total time per frame = (transmission time) + (propagation time) = (10^3 b/frame)/(100 × 10^6 b/s) + (0.375 km)/(200 × 10^6 m/s) = 11.87 μs
(b) Time in seconds to sense a collision at the midpoint of the two users' distance = total time to send a frame up to the midpoint (leading to a collision) and then sense the collision back = 0.5(11.87 μs) + 0.5(11.87 μs) = 11.87 μs
Time in bits = 11.87 μs × 10^8 b/s = 1,187 b
4. 100 Mb/s, 96 bit times to clear, t_p = 180 b
(a) g = 2: retransmission time = (96 + 512 × 2) × 10^−8 = 1.12 × 10^−5 s
(b) g = 1: retransmission time = (96 + 512) × 10^−8 = 6.08 × 10^−6 s
(c) t_p = 180 bit times / (100 Mb/s) = 1.8 × 10^−6 s
5. (a) α = t_p/T, β = λT
U_n = (λT e^(−λ t_p)) / (λ(T + 2t_p) + e^(−λ t_p)) = (β e^(−αβ)) / (β + 2αβ + e^(−αβ))
(b) R_n is in terms of frames per time slot because the throughput R is normalized; R_n makes it easier to use for estimating the system. β is called the "offered load" since β equals λ (the average arrival rate) multiplied by T (the frame duration in seconds), resulting in the offered load.
(c) α = {0.001, 0.01, 0.1, 1.0}:
U_n1 = β e^(−0.001β) / (β + 2(0.001β) + e^(−0.001β))
U_n2 = β e^(−0.01β) / (β + 2(0.01β) + e^(−0.01β))
U_n3 = β e^(−0.1β) / (β + 2(0.1β) + e^(−0.1β))
U_n4 = β e^(−β) / (β + 2β + e^(−β))
6. N/A
7. n_a = 4, n = 10, f = 1,500 bytes × 8 b/byte = 12,000 b
(a) t_p = 10 m/(3 × 10^8 m/s) = 3.33 × 10^−8 s
(b) t_r = f/r = 12,000 b/(10 × 10^9 b/s) = 1.2 × 10^−6 s
u = t_r/(t_r + t_p) = (1.2 × 10^−6 s)/(1.2 × 10^−6 s + 3.33 × 10^−8 s) = 0.973
(c) p_c = ((n_a − 1)/n_a)^(n_a − 1) = ((4 − 1)/4)^(4−1) = 0.422
(d) n_a = 7, i = 7:
p_c = ((n_a − 1)/n_a)^(n_a − 1) = ((7 − 1)/7)^(7−1) ≈ 0.397
p_i = p_c(1 − p_c)^i = 0.397(1 − 0.397)^7 = 0.0116
(e) n_a = 4:
E[c] = (1 − p_c)/p_c = (1 − 0.397)/0.397 = 1.52
Figure 5.2: Flowcharts for medium access: (a) listen to the medium, transmit data when it is available, and after a collision wait 512g bit times (g a random number) before retrying; (b) listen to the medium and, when it is available, transmit with a fixed probability P for at most t_p, retrying after a collision.

8. See Figure 5.3.

Figure 5.3: A building LAN layout with hubs, a repeater, a bridge, a router R1, and a network analyzer, connecting to other buildings and to the Internet.
Chapter 6
Wireless Networks and Mobile IP

1. N/A
2. N/A
3. N/A
4. N/A
5. N/A
6. (a) The probability of reaching a cell boundary, i.e., the probability of requiring a handoff, as a function of d_b is shown in Figures 6.1 and 6.2. Suppose that a vehicle initiates a call in a cell with a 10-mile radius. The vehicle speed is chosen to be 45 m/h (within a city) and 75 m/h (on highways). In Case 1, since a vehicle is resting all the time with an average speed of 0 m/h, the probability of reaching a cell boundary is clearly 0 percent. In contrast to Case 1, for a vehicle moving with an average speed as in Case 2, the chance of reaching a cell boundary is always 100 percent. Thus, when a vehicle is either at rest or moving with some speed, the probability of requiring a handoff is independent of d_b. From the figures, we see that as α01 increases, the chance of reaching a cell boundary is lower. Also, with a fixed d_b, the handoff probability on a highway is much higher than in the city area. This is because the higher the speed limit, the higher the probability of reaching a cell boundary. Table 6.1 summarizes the results.

Table 6.1: Probability of having a handoff for Case 4.
α01    d_b (miles)    Handoff Probability (%)
                      k = 35 m/h    k = 60 m/h
1      0              50            50
1      5              37            43
1      10             29            36
1      20             17            26
5      0              50            50
5      5              12            22
5      10             4             9
5      20             1             2
10     0              50            50
10     5              4             9
10     10             1             2
10     20             0             0

Assume α01 = 1 and the city area where k = 45 m/h: if d_b is 5 miles, the chance of reaching a cell boundary is 87 percent, or the chance that a handoff occurs for the call is 87 percent. If d_b is 10 miles, the probability of reaching a cell boundary is 76 percent. As d_b increases, the probability of reaching the cell boundary within the call holding time decreases. As the cell size increases, the probability of reaching a boundary decreases in an exponential manner. The only difference between Case 3 and Case 4 is that the change of handoff probability for the latter is between 0 percent and 50 percent while the former has a change of probability between 0 percent and 100 percent. This is mainly due to the difference between the two initial state probabilities for the two cases.

Figure 6.1: The probability of reaching a cell boundary for Case 4: (a) within a city; curves for α01 = 1, 5, and 10 versus d_b (miles).

Figure 6.2: The probability of reaching a cell boundary for Case 4: (b) on a highway; curves for α01 = 1, 5, and 10 versus d_b (miles).

(b) The relationship between a vehicle's speed and the chance of reaching a cell boundary is shown in Figure 6.3. As shown earlier, for Case 1 and Case 2, the probability of requiring a handoff is independent of the call holding time and d_b, and therefore it is also independent of the vehicle's speed. For Case 1, in which a vehicle is resting all the time, the vehicle will never reach a cell boundary. For Case 2, in which a vehicle is moving all the time with some speed, the chance of reaching a cell boundary is always 100 percent. The probability of reaching a cell boundary is proportional to the vehicle's speed, because an increase in the speed of a vehicle increases the chance of reaching a cell boundary. As α01 increases, the probability of requiring a handoff decreases.

Figure 6.3: The probability of reaching a cell boundary in terms of a vehicle's speed for Case 4; curves for α01 = 1, 5, and 10 versus the average speed of a vehicle (mph).

7. N/A
Chapter 7
Routing and Inter-Networking

1. (a) See Figure 7.1 (a). min = 2, max = 2, H = (min + max)/2 = 2
(b) See Figure 7.1 (b). min = 3. For max: n = 4, max = 4; n = 5, max = 9/2; n = 6, max = 5; in general, for n, the max is n/2 + 2.
H = (min + max)/2 = [3 + (n/2 + 2)]/2
(c) See Figure 7.1 (c). H = 3
(d) See Figure 7.1 (d). min = 3, max = 4
H = [2(3) + (n − 4)(4)]/(n − 2)

Figure 7.1: Four different network topologies to connect two users: (a), (b), (c), and (d).
2. Using Dijkstra's algorithm:

Table 7.1: Solution to Exercise 2(a).
k                β_A,C    β_A,D      β_A,F     β_A,E      β_A,B
{A}              AC(5)    ×          AF(9)     ×          ×
{A,F}            AC(5)    AFD(12)    ACF(8)    AFE(10)    AFB(14)
{A,F,C}          AC(5)    ACD(9)     ACF(8)    ACE(7)     AFB(14)
{A,F,C,D}        AC(5)    ACD(9)     ACF(8)    ACE(7)     AFB(14)
{A,F,C,D,E}      AC(5)    ACD(9)     ACF(8)    ACE(7)     ACEB(9)
{A,F,C,D,E,B}    AC(5)    ACD(9)     ACF(8)    ACE(7)     ACEB(9)

(b) See Figure 7.2.
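For instructors who want to regenerate the shortest-path costs in Table 7.1 programmatically, the Python sketch below runs Dijkstra's algorithm on an edge-weighted graph. The edge weights are inferred from the path costs in the table (an assumption; the original figure in the text may contain additional links not visible here).

```python
import heapq

# Edge weights inferred from the path costs in Table 7.1 (assumed).
graph = {
    "A": {"C": 5, "F": 9},
    "C": {"A": 5, "D": 4, "E": 2, "F": 3},
    "D": {"C": 4, "F": 3},
    "E": {"C": 2, "B": 2, "F": 1},
    "F": {"A": 9, "C": 3, "D": 3, "E": 1, "B": 5},
    "B": {"E": 2, "F": 5},
}

def dijkstra(src):
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

dist, _ = dijkstra("A")
print(dist)   # e.g. A:0, C:5, E:7, F:8, D:9, B:9, matching the table
```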
3. Using the Bellman-Ford algorithm:

(a)
k    β_A,C    β_A,D     β_A,F     β_A,E     β_A,B
1    AC(5)    ×         AF(9)     ×         ×
2    AC(5)    ACD(9)    ACF(8)    ACE(7)    AFB(14)
3    AC(5)    ACD(9)    ACF(8)    ACE(7)    ACEB(9)
4    AC(5)    ACD(9)    ACF(8)    ACE(7)    ACEB(9)

(b) See Figure 7.3.
4. Using Dijkstra’s Algorithm (b) See Figure 7.4.
5. Using Dijkstra’s Algorithm
44
Chapter 7. Routing and Inter-Networking
Figure 7.2: Answer to exercise.
Figure 7.3: Answer to exercise.
45
Figure 7.4: Answer to exercise.
Figure 7.5: Answer to exercise.
46
Chapter 7. Routing and Inter-Networking k
(a)
{A} {A,F} {A,F,C} {A,F,C,D} {A,F,C,D,E} {A,F,C,D,E,B}
βA,C
βA,D
βA,F
βA,E
βA,B
AC(5)
×
AF(9)
×
×
AC(5)
AFD(12)
ACF(8)
AFE(10)
AFB(14)
AC(5)
ACD(9)
ACF(8)
ACE(7)
ACFB(13)
AC(5)
ACD(9)
ACF(8)
ACE(7)
ACFB(13)
AC(5)
ACED(8)
ACF(8)
ACE(7)
ACEB(9)
AC(5)
ACED(8)
ACF(8)
ACE(7)
ACEB(9)
(a)
k                β_1,2    β_1,3    β_1,4        β_1,5        β_1,6         β_1,7
{1}              1,2(3)   1,3(3)   ×            ×            1,6(9)        ×
{1,2}            1,2(3)   1,3(3)   1,2,4(11)    1,2,5(15)    1,6(9)        ×
{1,2,3}          1,2(3)   1,3(3)   1,3,4(7)     1,3,5(7)     1,6(9)        ×
{1,2,3,4}        1,2(3)   1,3(3)   1,3,4(7)     1,3,5(7)     1,6(9)        ×
{1,2,3,4,5}      1,2(3)   1,3(3)   1,3,4(7)     1,3,5(7)     1,3,5,6(8)    1,3,5,7(20)
{1,2,3,4,5,6}    1,2(3)   1,3(3)   1,3,4(7)     1,3,5(7)     1,3,5,6(8)    1,3,5,6,7(16)
(b) See Figure 7.5.
6. Using the Bellman-Ford algorithm:

(a)
k    β_1,2    β_1,3    β_1,4       β_1,5       β_1,6         β_1,7
1    1,2(3)   1,3(3)   ×           ×           1,6(9)        ×
2    1,2(3)   1,3(3)   1,3,4(7)    1,3,5(7)    1,6(9)        1,6,7(17)
3    1,2(3)   1,3(3)   1,3,4(7)    1,3,5(7)    1,3,5,6(8)    1,6,7(17)
4    1,2(3)   1,3(3)   1,3,4(7)    1,3,5(7)    1,3,5,6(8)    1,3,5,6,7(16)
(b) See Figure 7.6.
7. From R1 to R4 using Dijkstra’s Algorithm (b) See Figure 7.7.
47
Figure 7.6: Answer to exercise.
R2
R3
R4
R4
R7
R2
R3
R1
R7
R6
R6
R5
R5 R2
R3
R2
R3
R1
R4
R4
R6
R6
R5
R5
R4
R2
R2
R3
R1
R4
R7
R6
R5
R4
R5 R2
R1
R7
R6
R3
R1
R7
R7
R3
R1
R1
R7 R6
R5
Figure 7.7: Answer to exercise.
48
Chapter 7. Routing and Inter-Networking k
(a)
{1} {1,6} {1,6,5} {1,6,5,4} {1,6,5,4} {1,6,5,4,3} {1,6,5,4,3,2} {1,6,5,4,3,2,7}
β1,2
β1,3
β1,4
β1,5
β1,6
β1,7
1,2(2)
×
×
1,6(2)
1,7(8)
1,2(2)
1,6,3(7)
× ×
1,6,5(7)
1,6(2)
1,6,7(3)
1,2(2)
1,6,3(7)
1,6,5,4(11)
1,6,5(7)
1,6(2)
1,6,7(3)
1,2(2)
1,6,3(7)
1,6,5,4(11)
1,6,5(7)
1,6(2)
1,6,7(3)
1,2(2)
1,6,3(7)
1,6,5,4(11)
1,6,5(7)
1,6(2)
1,6,7(3)
1,2(2)
1,6,3(7)
1,6,3,4(9)
1,6,5(7)
1,6(2)
1,6,7(3)
1,2(2)
1,2,3(6)
1,2,3,4(8)
1,6,5(7)
1,6(2)
1,6,7(3)
1,2(2)
1,2,3(6)
1,6,7,4(5)
1,6,5(7)
1,6(2)
1,6,7(3)
8. From R1 to R4 using the Bellman-Ford algorithm:

(a)
k    β_1,2    β_1,3       β_1,4         β_1,5         β_1,6    β_1,7
1    1,2(2)   ×           ×             ×             1,6(2)   1,7(8)
2    1,2(2)   1,2,3(6)    1,7,4(10)     1,6,5(7)      1,6(2)   1,6,7(3)
3    1,2(2)   1,2,3(6)    1,6,7,4(5)    1,6,7,5(4)    1,6(2)   1,6,7(3)
4    1,2(2)   1,2,3(6)    1,6,7,4(5)    1,6,7,5(4)    1,6(2)   1,6,7(3)
(b) See Figure 7.8.
9. P_BC = (0.3)(0.1)(0.7) = 0.021
P_CE = (0.3)(0.6) = 0.18
P_CDF = 1 − (1 − 0.5)(1 − 0.8) = 0.9
P_CEF = 1 − (1 − 0.18)(1 − 0.2) = 0.344
P_CF = (0.9)(0.344) = 0.31
P_BCF = 1 − (1 − 0.021)(1 − 0.31) = 0.324
P_BF = (0.3)(0.324) = 0.097
P_AF = 1 − (1 − 0.4)(1 − 0.097) = 0.458
10. P_AB = 0.4
P_BC = (0.3)(0.1)(0.7) = 0.021
P_CE = (0.3)(0.3)(0.6) = 0.054
P_CDF = 1 − (1 − 0.5)(1 − 0.8) = 0.9
P_EF = 0.2
P_CF = [1 − (1 − P_CE)(1 − P_EF)] P_CDF (0.3)(0.3) = 0.0197
P_AC = 1 − (1 − P_AB)(1 − P_BC) = 0.4126
P_AF = 1 − (1 − P_AC)(1 − P_CF) = 0.424

Figure 7.8: Answer to Exercise 8(b): the Bellman-Ford iterations over the network of routers R1 through R7.
Chapter 8
Transport and End-to-End Protocols

1. Figure 8.1 shows the operation. Each packet size is 1,000 bytes, since that is the MSS of Host B.
For Host A: MSS = 2,000, ISN = 2,000, file size = 200 Kb = 25 KB
For Host B: MSS = 1,000, ISN = 4,000, packet size = 100 Kb = 25 KB, where the data per packet is 960 bytes
There are three stages in the file transfer: connection establishment, segment transfer, and connection termination.
Figure 8.1: Answer to Exercise 1: packet transmission starts after the two-way connection is established between Host A and Host B; a gap in transmission allows Host A to keep transmitting without waiting for an ACK; the last 40 bytes are sent and the two-way connection is terminated.

2. (a) The TCP sequence number field includes 4 B = 32 b. Thus:
• Maximum number of bytes that can be identified in a connection = 2^32
• We consume one sequence number for connection setup, seq(i), and one sequence number for connection termination, seq(k). As each byte of data is identified by a unique sequence number, the maximum number of data bytes that can be identified for a connection, and thus transferred, is f = 2^32 − 2 = 4,294,967,294 B
(b) Total size of each segment = 2,000 B. Also, each segment has the following headers: 20 B link + 20 B IP + 20 B TCP = 60 B. Thus:
• Maximum size of data in each segment = 2,000 B − 60 B = 1,940 B
• Maximum number of segments to be produced in a connection = 4,294,967,294 B / 1,940 B ≈ 2,213,901
• Total size of all segment headers = 2,213,901 × (60 B) = 132,834,060 B
• Total size of all segment headers and data = 4,294,967,294 B + 132,834,060 B = 4,427,801,354 B
• Total time it takes to transfer all segments = (4,427,801,354 B × 8 b/B)/(100 × 10^6 b/s) = 354.21 s ≈ 5.9 minutes
3. (a) Slow-start congestion control: since the number of packets transmitted doubles every round trip, the number of round trips to reach n is log_2(n) − 1.
(b) Additive-increase congestion control: since the number of packets transmitted increases by 1 every round trip, the number of round trips to reach n is n − 1.

4. This problem belongs to Chapter 12; please see Problem 12.16.
5. N/A
6. B = 1.2 Gb/s, RTT = 3.3 ms, file size f = 2 MB, packet size = 1 KB. Hence the total number of packets to be transmitted = 2 MB / 1 KB = 2,000.
(a) With an additive-increase/multiplicative-decrease protocol, the window size increases by one each round trip until a congestion, when the window size is divided by two. Therefore, the window size starting at w_g = 1 KB changes its value as follows:
w_g = 1 KB, 2 KB, 3 KB, 4 KB, ..., n KB
Therefore 1 + 2 + 3 + ... + n = 2,000, thus n(n + 1)/2 = 2,000, from which we obtain n = 62.74 ≈ 63.
Since the congestion window size of 500 KB is never reached, no multiplicative decrease takes place. Thus the total time = 63 × 3.3 ms = 207 ms. (The window size would take a total of 500 × RTT = 500 × 3.3 ms = 1.65 s to reach 500 KB.)
(b) With a slow-start protocol, the window size is doubled each round trip until a congestion. Therefore, the window size starting at w_g = 1 KB changes its value as follows:
w_g = 1 KB, 2 KB, 4 KB, 8 KB, ..., approximately 1,024 KB
Therefore, we will have to make 11 round trips to transmit the file: 1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 + 256 + 512 + 1,024 = 2,047. Thus the total time = 11 × RTT = 11 × 3.3 ms = 36.3 ms. The window size takes a total of 10 × RTT = 33 ms to reach 1,024 KB.
(c) With the additive-increase/multiplicative-decrease protocol, it takes 63 RTTs to transfer the 2 MB file. Therefore, Δ = 63 × 3.3 ms ≈ 208 ms and r = f/Δ = 2 MB/208 ms = 76.9 Mb/s.
(d) With the additive-increase/multiplicative-decrease protocol, ρ_u = r/B = 76.9 Mb/s / 1.2 Gb/s = 64 × 10^−3.
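The round-trip counts in Parts (a) and (b) can be verified with the short simulation below; the loop simply accumulates the window growth pattern described in the solution (no losses, since the 500 KB congestion window is never reached).

```python
RTT_MS = 3.3
TOTAL_PACKETS = 2_000   # 2 MB file, 1 KB packets

# Additive increase: the window grows 1, 2, 3, ... packets per RTT
sent, rounds, window = 0, 0, 0
while sent < TOTAL_PACKETS:
    window += 1
    sent += window
    rounds += 1
print(rounds, round(rounds * RTT_MS, 1))   # 63 rounds, ~207.9 ms

# Slow start: the window doubles every RTT (1, 2, 4, ...)
sent, rounds, window = 0, 0, 1
while sent < TOTAL_PACKETS:
    sent += window
    window *= 2
    rounds += 1
print(rounds, round(rounds * RTT_MS, 1))   # 11 rounds, ~36.3 ms
```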
7. Round-trip time = 0.5 s; a packet is transmitted every 50 ms. Let's assume packet (segment) P-11 is lost. The first acknowledgement, ACK-10, is received when P-21 is about to be transmitted. No ACK is received before P-22, as P-11 is lost. We receive the fourth ACK-10 after P-24 is sent. Segment loss is detected, and P-11 is transmitted instead of P-25. See Figure 8.2.
(a) In this case, P-11 is transmitted and, after 50 ms, P-25 is transmitted, and the cycle continues. Hence, we lose only 50 ms. See Figure 8.2.
(b) In this case, the sender waits for the acknowledgment of the retransmitted P-11. Thus, it has to wait for the complete round trip. Hence, the time lost here is 0.5 s.

Figure 8.2: Answer to Exercise 7: the source transmits a segment every 50 ms with a 0.5 s round-trip time to the destination; the loss of segment P-11 is detected after duplicate ACKs.
Chapter 9
Applications and Network Management

1. The command for this is "nslookup." At the command prompt we put the name of the website in the format www.name.com. When entering this command in the Windows command prompt, we get the server name, IP address, and aliases. The nslookup command works both ways; that is, if we give the name, we get the IP addresses, and vice versa. The snapshot in Figure 9.1 shows the IP addresses of some of the most frequently used websites with their server names.
2. (a) To obtain the file name on a remote machine, the DNS server requests the local DNS server (if it is not itself the local DNS server) to contact the remote machine. These requests are carried out by the server either recursively or iteratively. On the other hand, if the server wants to obtain the file name from another DNS server, then depending on the type of information and the file location, the DNS server requests either the root DNS or another local DNS.

Figure 9.1: Solution to Exercise 1: nslookup output showing server names and IP addresses of frequently used websites.

(b) When we take the domain name from the DNS server, our query gives us a result which includes all the possible aliases of the particular domain name and their corresponding IP addresses. On the other hand, if the query is done using the IP address, then we get only the particular alias that corresponds to that IP address in response. This is illustrated in Figure 9.2 using the example of gmail.com.
(c) As seen in Figure 9.2, all hosts from the same subnet need not be identified by the same DNS server, as we can assign different subnets different IP addresses. This is done for various reasons, such as traffic sharing, having different names (aliases) for the same website, etc.
3. (a) SSH provides far better security of transmission compared with TELNET.
(b) The functionality given by the Rlogin implementation compared with Telnet:
• It passes the terminal type.
• It bypasses the need for a username/password to be entered.
• No newline or similar processing is applied to transferred data.
• It has better out-of-band data handling.
• It has better flow-control handling.
• It has window-size negotiation.
4. (a) No, FTP does not compute any checksum for its file transfer. It relies on the underlying TCP layer for error control. The TCP layer uses a checksum for error control.
(b) If the TCP connection is shut down, the browser tries to set up the connection once. If this attempt fails, then the browser quits the file transfer.
(c) The following is the list of commands that may be issued by FTP clients.
Figure 9.2: Solution to Exercise 2(b): nslookup queries by name and by IP address for gmail.com.

Command  Explanation
ABOR     Abort an active file transfer.
ACCT     Account information.
ALLO     Allocate sufficient disk space to receive a file.
APPE     Append.
CDUP     Change to parent directory.
CLNT     Send FTP client name to server.
CWD      Change working directory.
DELE     Delete file.
EPSV     Enter extended passive mode.
EPRT     Specifies an extended address and port to which the server should connect.
FEAT     Get the feature list implemented by the server.
GET      Used to download a file from the remote host.
HELP     Returns usage documentation on a command if specified; else a general help document is returned.
LIST     Returns information on a file or directory if specified; else information on the current working directory is returned.
LPSV     Enter long passive mode.
LPRT     Specifies a long address and port to which the server should connect.
MDTM     Return the last-modified time of a specified file.
MGET     Used to download multiple files from the remote host.
MKD      Make directory (folder).
MODE     Sets the transfer mode.
MPUT     Used to upload multiple files to the remote host.
NLST     Returns a list of filenames in a specified directory.
NOOP     No operation (dummy packet; used mostly on keepalives).
OPTS     Select options for a feature.
PASS     Authentication password.
PASV     Enter passive mode.
PORT     Specifies an address and port to which the server should connect.
PUT      Used to upload a file to the remote host.
PWD      Print working directory. Returns the current directory of the host.
QUIT     Disconnect.
REIN     Reinitializes the connection.
REST     Restart transfer from the specified point.
RETR     Retrieve a remote file.
RMD      Remove a directory.
RNFR     Rename from.
RNTO     Rename to.
SITE     Sends site-specific commands to the remote server.
SIZE     Return the size of a file.
SMNT     Mount file structure.
STAT     Returns the current status.
STOR     Store a file.
STOU     Store a file uniquely.
STRU     Set file transfer structure.
SYST     Return system type.
TYPE     Sets the transfer mode (ASCII/Binary).
USER     Authentication username.
5. The total file transfer delay is:
(a) In both directions, when the network is in its best state of traffic, the average file transfer delay is 3.5 ms.
(b) In both directions, when the network is in its worst state of traffic, the average file transfer delay is 9 ms.
(c) In one direction, when we try FTP from one computer to itself, the average file transfer delay is 7.5 ms.
6. All characters of the URL must be from the following: A-Z, a-z, 0-9, . \ / ~ % - + & # ? ! = ( ) @
If a URL contains a different character, it should be converted; for example, ^ must be written as %5e, the hexadecimal ASCII value with a percent sign in front. A blank space can also be converted into an underscore.
7. (a) The purpose of the GET command in HTTP is to request a representation of the specified resource. The GET method retrieves whatever information (in the form of an entity) is identified by the Request-URI. If the Request-URI refers to a data-producing process, it is the produced data that is returned as the entity in the response, not the source text of the process, unless that text happens to be the output of the process.
(b) The purpose of the PUT command in HTTP is to request that the enclosed entity be stored under the supplied Request-URI. Thus it basically uploads a representation of the specified resource.
(c) The GET command needs to use the name of the contacted server when it is applied because HTTP is a stateless protocol. This means that it does not keep state information or live connections to remote clients. Thus, we connect to the server, get the information we need, and then disconnect. Therefore, we need to give the name of the contacted server.
8. (a) The role of ASN.1 in the 7-layer OSI model is shown in Figure 9.3. The ASN.1 notation is used in the application layer.
(b) The impact of constructing a grand set of unique global ASN.1 names for MIB systems is that network management can identify an object by a sequence of names or numbers from the root to that object. This enables designers to produce specifications without undue consideration of encoding issues.
(c) A US-based organization must register under the following: Root : ISO : company name : dod : internet : MIB

Figure 9.3: Solution to Exercise 8(a): the position of ASN.1 in the application layer of the OSI model.
9. (a) The SNMP protocol has the function wherein the network manager can use this protocol to find the location of a fault. The task of SNMP is to transport MIB information among all the managing centers and the agents executing on their behalf. The most efficient transport for the above functions is unarguably UDP, as it is faster and more efficient.
(b) The pros of letting all the managing centers access the MIB are better connectivity and better communication. It would greatly help in the development and servicing of the MIB. All these things would result in better utilization of the network and also more efficiency. On the other hand, the price to pay for this kind of flexibility is a huge impact on the security of the network. Also, even if the security aspect is taken care of, letting everyone access the MIB variables increases the complexity of the MIB design and maintenance.
(c) The MIB is the information storage medium that contains managed objects reflecting the current status of the network. If the MIB variables were located in router memory, it would greatly improve the efficiency and the speed of the process. However, it brings with it the problem of updating the MIB. In a scenario where communication is limited to routers A and C, router B, which is not involved in any communication, must also be notified about the change in order to update the MIB. This would create unnecessary overhead and waste network bandwidth, on top of increasing router complexity, buffer size, and a host of other problems. Thus, MIB variables should not be kept in local router memory.
Chapter 10
Network Security

1. L4 = 4de5635d, R4 = 3412a90e, k5 = be11427e6ac2.
L4 = 0100;1110;1111;0101;0110;0011;0101;1110
R4 = 0011;0100;0001;0010;1010;0101;0000;1110
After the expansion stage, the right half becomes:
R4 = 000110;101000;000010;100101;010100;001010;100001;011100
k5 = 101111;110001;000101;000010;011111;110110;101011;000010
R4 XOR k5 gives us: 101001;011001;000111;100111;001011;111100;001010;011110
Now passing it through the S-box:
R4 = 0100;0110;1001;0110;0111;1011;0000;0111
L4 = 0100;1110;1111;0101;0110;0011;0101;1110
XOR with the left half: R4 = 0000;1000;0110;0011;0001;1000;0101;1001
After permutation:
R5 = 1011;1010;0100;1001;0010;1000;0000;0100 = ba492804
L5 = 0011;0100;0001;0010;1010;0101;0000;1110 = 3412a50e
2. Key generation: The key is 0101...01 and is 56 bits long. Thus, the parity bits have already been discarded. The key is first divided into two blocks of 28 bits using the standard permutation block provided by the DES algorithm:
the left block C0 = 0000000;0111111;1100000;0001111
the right block D0 = 0000000;0111111;1100000;0001111
Now, we shift both C0 and D0 left by 1; thus we get C1 and D1 as follows:
C1 = 0000000;1111111;1000000;0011110
D1 = 0000000;1111111;1000000;0011110
k_i(left) = 101100;001001;001011;001010
k_i(right) = 010101;010000;001001;010100
k_i = k_i(left);k_i(right)
Message generation: The message is all ones: 111...111 (64 bits), with left half (32 bits) 11...11 and right half (32 bits) 11...11. (Here the initial permutation has no effect.)
Converting the 32-bit message into 48 bits by passing it through the mangler: 111...11 (32 bits) becomes 111...111 (48 bits).
XORing with the key k_i: 010011;110110;110100;110101;101010;101111;110110;101011
Now passing it through the S-box: 0110;0110;0010;0101;1101;1011;1000;1010
XOR with the left half: 1001;1001;1101;1010;0010;0100;0111;0101
After permutation of the right half we get:
R1 = 0000;0110;1101;1001;0100;1101;1110;1010
L1 = 1111;1111;1111;1111;1111;1111;1111;1111
3. N/A
4. N/A
5. From the textbook: c = m^x mod n and m = c^y mod n. Note that x and y are mod inverses of each other. Thus, c = ((c^y)^x) mod n. Since x and y are inverses of each other, we then get c = c mod n = c.
6. M = 1010. The two four-bit primes are a = 5 and b = 11. Also x = 3. To find the keys, we have
n = ab = (5)(11) = 55
q = (a − 1)(b − 1) = (4)(10) = 40
Thus, xy mod (a − 1)(b − 1) = 1, resulting in 3y mod 40 = 1, which implies that y = 27, since (3)(27) = 81 and 81 mod 40 = 1. Therefore the keys are:
The public key = {3, 55}
The private key = {27, 55}
Thus the ciphertext from the message 1010 (10 in decimal) is 10^3 mod 55 = 1000 mod 55 = 10. Therefore, the ciphertext is 10.
7. m = 13, a = 5, b = 11, x = 7
(a) Encryption: the public key = {7, 55}
C = 13^7 mod 55 = 62748517 mod 55 = [(55)(1140882) + 7] mod 55 = 7. C = 7.
(b) The corresponding y is given as follows:
n = ab = (5)(11) = 55
q = (a − 1)(b − 1) = (4)(10) = 40
Also x = 7. Thus, xy mod (a − 1)(b − 1) = 1, so 7y mod 40 = 1, which implies y = 23 (since (7)(23) = 161 and 161 mod 40 = 1).
The private key = {23, 55}.
(c) The decryption is 7^23 mod 55 = 13.
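The toy RSA computations of Problems 6 and 7 can be reproduced with a few lines of Python; the helper below is only an illustrative sketch (it uses Python's built-in modular inverse and exponentiation, not a production RSA implementation).

```python
from math import gcd

def make_keys(a: int, b: int, x: int):
    """Toy RSA key generation with the small primes from Problems 6 and 7."""
    n = a * b
    q = (a - 1) * (b - 1)
    assert gcd(x, q) == 1          # x must be invertible modulo (a-1)(b-1)
    y = pow(x, -1, q)              # modular inverse (Python 3.8+)
    return (x, n), (y, n)          # public key, private key

public, private = make_keys(5, 11, 7)
print(public, private)                 # (7, 55) (23, 55)

m = 13
c = pow(m, public[0], public[1])       # encryption: 13^7 mod 55
print(c)                               # 7
print(pow(c, private[0], private[1]))  # decryption: 7^23 mod 55 -> 13
```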
8. (a) When encrypting with small values of m, the (non-modular) result of m^e may be strictly less than the modulus n. In this case, ciphertexts may be easily decrypted by taking the eth root of the ciphertext, regardless of the modulus. For systems that conventionally use small values of e, such as 3, a 256-bit AES key encrypted using this scheme would be insecure, since the largest m would have a value of 256^3, and 255^3 is less than any reasonable modulus. Such plaintexts could be recovered by simply taking the cube root of the ciphertext. Thus, the 256-bit AES key, k, chosen by user 1 is too small to encrypt securely with RSA having a public key {x, 5}, since k^e < x. Thus, k^e mod x = k^e, and an intruder recovers k simply by taking the eth root.
(b) The values m = 0 or m = 1 always produce ciphertexts equal to 0 or 1 respectively, due to the properties of exponentiation. Thus, keys consisting of all 0s or all 1s can be easily recovered by the attacker. An example could be {x = 3, y = 7}.
9. To overcome the vulnerability in the above combination, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, encrypts to one of a large number of different possible ciphertexts. The latter property can increase the cost of a dictionary attack beyond the capabilities of a reasonable attacker. Modern constructions use secure techniques such as optimal asymmetric encryption padding (OAEP) to protect messages. The intuitive solution to this problem is that user 1 must select a larger random number for RSA encryption. In this case, both users 1 and 2 use this number to create key, k. A second solution is that user 1 pads k with random bits so that the message has almost the same number bits as x does.
10. Suppose that user 1 chooses a prime number a, a random number x1, and a generator g, and creates y1. We can say:
k1 = y2^x1 mod a = (g^x2 mod a)^x1 mod a = [(g^x2)^x1 mod a] mod a = [(g^x1)^x2 mod a] mod a = (g^x1 mod a)^x2 mod a = y1^x2 mod a = k2
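The equality derived in Problem 10 is the heart of the Diffie-Hellman exchange, and it is easy to demonstrate numerically. The sketch below uses an illustrative small prime and generator (a = 23, g = 5, chosen only for the example, not taken from the problem).

```python
import secrets

# Toy Diffie-Hellman exchange in the notation of Problem 10:
# a is the prime modulus, g the generator (illustrative values).
a, g = 23, 5
x1 = secrets.randbelow(a - 2) + 1      # user 1's secret
x2 = secrets.randbelow(a - 2) + 1      # user 2's secret

y1 = pow(g, x1, a)    # user 1 publishes y1 = g^x1 mod a
y2 = pow(g, x2, a)    # user 2 publishes y2 = g^x2 mod a

k1 = pow(y2, x1, a)   # user 1 computes (g^x2)^x1 mod a
k2 = pow(y1, x2, a)   # user 2 computes (g^x1)^x2 mod a
assert k1 == k2       # both sides derive the same key, as shown above
print(k1)
```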
Part II
Advanced Concepts
Chapter 11
Packet Queues and Delay Analysis

1. Note to instructors: This problem requires a good understanding of queueing theory without any use of formulas. Explaining the objectives of this problem to students in advance will be very helpful. We can summarize the queueing situation as follows:
• Interarrival time = 20 μs
• Service time = 0 μs if there is no packet misordering, and 10 + 30n μs if there are n packet misorderings in a block
(a) Packet block arrival and departure activities are shown in Table 11.1. If we consider one packet block between arrival time 20 μs and departure time 90 μs (a duration of 70 μs), then one packet block between arrival time 40 μs and departure time 170 μs (a duration of 130 μs), and continue this trend, the queueing behaviour is as shown in Figure 11.1.
(b) Mean number of packet blocks = [Σ(service times) × (1 packet block)]/(duration of system processing time) = 1,200/680 μs = 1.76 blocks
Table 11.1: Packet block arrival and departure activities.
Packet Block   Number of      Arrival      Service      Departure
Number         Misorderings   Time (μs)    Time (μs)    Time (μs)
1              2              20           70           90
2              4              40           130          170
3              0              60           10           70
4              0              80           10           90
5              1              100          40           140
6              4              120          130          250
7              3              140          100          280
8              5              160          160          320
9              2              180          70           250
10             4              200          130          330
11             0              220          10           230
12             2              240          70           310
13             5              260          160          420
14             2              280          70           350
15             1              300          40           340
Figure 11.1: Solution to exercise. The trend of packet blocks accumulated in the queue over processing time (μs), from the first arrival at 20 μs until the queue empties at about 680 μs.

(c) The percentage of time that the buffer is not empty can be read from Figure 11.1. If we include all the activities of the queue as described in Part (a), the queue activity stops at around 680 μs. Thus:
(Time that the buffer is empty)/(Duration of system processing time) = 20 μs/680 μs = 0.029
Percentage of time the buffer is not empty = 1 − 0.029 = 0.97
2. (a) E[K_q(t)] = λE[T_q] = ρ^2/(1 − ρ) = 0.9^2/(1 − 0.9) = 8.1
(b) E[T_q] = E[T] − E[T_s], where E[T] = 1/(μ(1 − ρ)) = 1/(μ − λ) and E[T_s] = 1/μ
ρ = λ/μ ⇒ μ = λ/ρ = 44.44 packets/s
E[T_q] = ρ/(μ − λ) = 0.9/(44.44 − 40) ⇒ E[T_q] = 0.2 s
(c) P_0 = 1 − ρ = 1 − 0.9 = 0.1, with λ = 40 packets/s and μ = 44.44 packets/s
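The M/M/1 quantities used in Problem 2 are gathered in the short Python sketch below as a numerical check; it assumes, as the problem does, that ρ = λ/μ < 1.

```python
# M/M/1 quantities for Problem 2 (a sketch; rho = lambda/mu < 1 assumed)
lam = 40.0        # arrival rate, packets/s
rho = 0.9         # utilization
mu = lam / rho    # service rate, packets/s

E_Kq = rho**2 / (1 - rho)   # mean number waiting in queue  -> 8.1
E_T = 1 / (mu - lam)        # mean time in system
E_Ts = 1 / mu               # mean service time
E_Tq = E_T - E_Ts           # mean waiting time             -> ~0.2 s
P0 = 1 - rho                # probability the system is empty

print(E_Kq, round(E_Tq, 3), P0)
```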
3. (a) T = min(T1, T2, ..., Ti)
(b) P[T > t] = P[time until the next packet departure > t] = P[min(T1, T2, ..., Ti) > t] = P[T1 > t, T2 > t, ..., Ti > t] = P[T1 > t]P[T2 > t]···P[Ti > t] = e^(−μt) e^(−μt) ··· e^(−μt) = e^(−iμt)
4. (a) P[K(t) < k] = 1 − P[K(t) ≥ k] = 1 − Σ_{i=k}^{∞} P_i = 1 − Σ_{i=k}^{∞} ρ^i(1 − ρ) = 1 − (1 − ρ)ρ^k/(1 − ρ) = 1 − ρ^k
(b) P[K(t) < 20] = 0.9904 ⇒ k = 20 ⇒ 0.9904 = 1 − ρ^20 ⇒ ρ = 0.7927
ρ = λ/μ ⇒ 0.7927 = 300/μ ⇒ μ = 378.45 packets/s
5. (a) P[K(t) ≥ k] = (1 − ρ) Σ_{j=k}^{∞} ρ^j = (1 − ρ) ρ^k/(1 − ρ) = ρ^k
(b) P[K(t) ≥ 60] = ρ^60 = 0.01 ⇒ ρ ≈ 0.92 ⇒ λ ≈ 0.92μ
6. (a) The Markov chain is similar to a regular M/M/1 chain except that the arrival rate into any state i is λ_i = λ/i.
(b) For state 0: p_0 λ = p_1 μ ⇒ p_1 = (λ/μ) p_0 ⇒ p_1 = ρ p_0
For state 1: p_1(λ/2) + p_1 μ = p_0 λ + p_2 μ ⇒ p_1(λ/2) = p_2 μ ⇒ p_2 = (λ/(2μ)) p_1 ⇒ p_2 = (1/2)ρ p_1 = (1/2)ρ^2 p_0
Continuing this trend for the next states, a generic form can be developed as:
p_{i−1} λ_i = p_i μ ⇒ p_i = (λ/(iμ)) p_{i−1} = (1/i!) ρ^i p_0
Since Σ_{i=0}^{∞} p_i = 1 ⇒ p_0 = e^(−ρ)
⇒ p_i = (ρ^i/i!) e^(−ρ)
(c) When i → ∞ while ρ < 1, we have p_i → 0; thus, the system is in a steady state.
(d) The utilization (of the server) is: ρ_i = λ_i/μ = (1/i)(λ/μ)
(e) E[K(t)] = Σ_{i=0}^{∞} i P[K(t) = i] = Σ_{i=0}^{∞} i (ρ^i/i!) e^(−ρ)
Solving this equation using Σ_{i=0}^{∞} i ρ^i/i! = ρ e^ρ will result in:
E[K(t)] = ρ
(f) The mean system delay considering any state i is obtained using Little's law:
E[T]_i = E[K(t)]/λ_i = ρ/(λ/i) = iρ/λ
However, since the arrival rate is different in each state, we need to compute the mean over all states:
E[T] = Σ_{i=0}^{∞} p_i E[T]_i = Σ_{i=0}^{∞} p_i (iρ/λ) = ρ^2/λ
Since E[T_s] = 1/μ, then E[T_q] = E[T] − E[T_s] = ρ^2/λ − 1/μ
7. a = 2 servers, 1/μ = 100 ms/packet, λ = 18 packets/s
(a) Prob[blocking a packet] = Prob[all servers are busy]
ρ1 = λ/μ = (18)(100 × 10⁻³) = 1.8
ρ = λ/(aμ) = 18/20 = 0.9
P0 = 1/[(1 + ρ1) + (ρ1²/2!)·1/(1 − ρ)] = 1/[(1 + 1.8) + (1.8²/2!)·1/(1 − 0.9)] = 0.0526
Pa = P[K(t) = 2] = (ρ1²/2!)P0 = (1.8²/2)(0.0526) = 0.0853
Prob[Waiting] = P[K(t) ≥ 2] = Σ_{i=a}^{∞} Pi = Pa/(1 − ρ) = 0.0853/(1 − 0.9) = 0.853
(b) E[K(t)] = Pa·ρ/(1 − ρ)² + ρ1 = (0.0853)(0.9)/(1 − 0.9)² + 1.8 = 9.474
(c) E[T] = E[Tq] + E[Ts] = Pa/(aμ(1 − ρ)²) + 1/μ = 0.421 + 0.1 = 0.521 s
(d) P[K(t) > 50] = Σ_{i=51}^{∞} Pi = (Pa/ρ²) Σ_{i=51}^{∞} ρ^i = (Pa/ρ²)·ρ⁵¹/(1 − ρ) = (0.0853/0.9²)(0.9⁵¹/0.1) = 0.00488
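As a rough cross-check, the M/M/2 quantities above can be computed directly from the Erlang-C style formulas used in this solution; the snippet below is a sketch under that assumption (a = 2 servers, λ = 18 packets/s, 1/μ = 100 ms).

```python
from math import factorial

# M/M/a queue metrics used in Problem 7 (assumed values below).
a, lam, mu = 2, 18.0, 10.0             # servers, arrival rate, service rate (packets/s)
rho1 = lam / mu                         # offered load
rho = lam / (a * mu)                    # per-server utilization

# P0 from the normalizing sum, then Pa = P[K = a] and the waiting probability.
P0 = 1.0 / (sum(rho1**i / factorial(i) for i in range(a)) +
            (rho1**a / factorial(a)) * (1 / (1 - rho)))
Pa = (rho1**a / factorial(a)) * P0
P_wait = Pa / (1 - rho)

E_K = Pa * rho / (1 - rho)**2 + rho1             # mean number in system
E_T = Pa / (a * mu * (1 - rho)**2) + 1 / mu      # mean delay (s)

print(f"P0={P0:.4f} Pa={Pa:.4f} P_wait={P_wait:.3f} E[K]={E_K:.3f} E[T]={E_T:.3f} s")
```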
8. (a) Prob[blocking a packet] = Prob[all servers are busy]
λ = 100 packets/s, mean service rate μ = 20 packets/s, ρ1 = 100/20 = 5
Using the Erlang-B formula with a = 6 servers:
Pa = (ρ1^a/a!) / Σ_{i=0}^{a} ρ1^i/i! = (5⁶/6!) / Σ_{i=0}^{6} 5^i/i! = 0.1935
(b) Target blocking probability: 0.1935/2 = 0.0967
By plugging numbers into the Erlang-B equation we obtain P7 = 0.121 and P8 = 0.075. We therefore need eight servers to bring the blocking probability below 0.0967, i.e., two more than the setup in Part (a).
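The Erlang-B values quoted here (P6 ≈ 0.19, P7 ≈ 0.12, P8 ≈ 0.075) are easy to reproduce with the standard recursive form of the formula; the sketch below assumes offered load ρ1 = λ/μ = 5.

```python
def erlang_b(load: float, servers: int) -> float:
    """Blocking probability of an M/M/c/c system via the standard recursion."""
    b = 1.0
    for c in range(1, servers + 1):
        b = load * b / (c + load * b)
    return b

load = 100 / 20            # offered load rho1 = lambda / mu = 5 Erlangs
for c in (6, 7, 8):
    print(f"{c} servers: blocking = {erlang_b(load, c):.4f}")
```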
Figure 11.2: Solution to exercise, Markov chain for the M/M/c/c system.

9. (a) The handoff process is modeled as an M/M/c/c Markovian system in which there is no queueing line, with random call interarrival times and exponential service times (channel holding times), with c servers (c channels) as well as c sources (c handoff calls). The Markov chain of this system is illustrated in Figure 11.2, where in our case:
λi = handoff request rate for traffic type i ∈ {0, 1, ..., k}, following a Poisson process.
1/μi = mean holding time of a channel, or mean channel exchange time, for traffic type i, with an exponential distribution.
Let j be the number of busy channels. Then handoff calls depart at rate jμi.
(b) When the number of requested channels reaches the total number of available channels (ci), i.e., j = ci, all ci channels are in use and the channel exchange rate is ciμi. In this case, any newly arriving handoff call is blocked, since there is no queueing. The global balance equations are:
λi P0 = μi P1            for j = 0            (11.1)
λi Pj−1 = jμi Pj         for 0 < j ≤ ci       (11.2)
where, P0 is the probability that no channel exchange is requested for traffic type i, and Pj is the probability that j channel exchanges
are requested for traffic type i. It then follows that
P1 = ρi P0                       (11.3)
Pj = (ρi/j) Pj−1                 (11.4)
where ρi = λi/μi is the offered load of the system. In Equation (11.4), let j = 2 and 3; then:
P2 = (ρi/2)P1 = ρi²P0/(2 × 1) = (ρi²/2!)P0          (11.5)
P3 = (ρi/3)P2 = ρi³P0/(3 × 2!) = (ρi³/3!)P0          (11.6)
By induction from Equations (11.5) and (11.6),
Pj = (ρi^j/j!)P0                                     (11.7)
Knowing that the sum of the probabilities must be one,
1 = Σ_{j=0}^{ci} (ρi^j/j!)P0  ⇒  P0 = 1 / Σ_{j=0}^{ci} (ρi^j/j!)          (11.8)
Equations (11.7) and (11.8) can be combined as:
Pj = (ρi^j/j!) / Σ_{j=0}^{ci} (ρi^j/j!)              (11.9)
When j = ci, all the channels are busy and any handoff call gets blocked. The handoff blocking probability, denoted Pci, is expressed by
Pci = (ρi^ci/ci!) / Σ_{j=0}^{ci} (ρi^j/j!)           (11.10)
Figure 11.3: Handoff blocking probability (%) versus handoff request rate (calls/sec) for ci = 50 and 1/μi = 10, 20, and 30 ms.
Figure 11.4: Handoff blocking probability (%) versus handoff request rate (calls/sec) for ci = 100 and 1/μi = 10, 20, and 30 ms.
Figure 11.5: Handoff blocking probability (%) versus handoff request rate (calls/sec) for ci = 10, 50, and 100, with 1/μi = 30 ms.
Figure 11.6: Handoff blocking probability (%) versus number of channels (0 to 300) for 1/μi = 10, 20, and 30 ms.
10. (a) The handoff blocking probability of the selected handoffs as a function of the system offered load is shown in Figures 11.3, 11.4, 11.5, and 11.6.
(b) In Figures 11.3 and 11.4 we assume the total number of available channels (ci) to be 50 and 100, respectively. The handoff blocking probabilities for three different mean holding times of 10, 20, and 30 ms are plotted; the times shown in the plots are estimates of the switched (exchanged) channel latencies. The figures show that the blocking probability grows with the mean channel exchange time, so as the mean holding time decreases the performance approaches its ideal value. Figure 11.5 shows that the handoff blocking probability drops as the number of available channels increases. Finally, Figure 11.6 plots the blocking probability versus the number of channels for different values of 1/μi: the handoff call request rate is fixed while the number of channels varies from 0 to 300. For 1/μi = 30 ms, the blocking probability decreases dramatically while the number of channels is below roughly 150; beyond 150 channels the decrease is much less pronounced.
11. (a) λ1 = α + λ2, λ2 = λ3, λ3 = 0.4λ1, λ4 = 0.6λ1
λ1 = α + λ3 = α + 0.4λ1 ⇒ λ1 = α/0.6 = 33.33 packets/ms
λ2 = λ3 = 0.4λ1 = 0.4 × 33.33 = 13.33 packets/ms
λ4 = 0.6 × 33.33 = 20 packets/ms
(b) ρ1 = λ1/μ1 = 0.33,  ρ2 = λ2/μ2 = 0.133,  ρ3 = λ3/μ3 = 0.666,  ρ4 = λ4/μ4 = 0.666
E[K1(t)] = ρ1/(1 − ρ1) = 0.33/(1 − 0.33) = 0.5 packets
E[K2(t)] = ρ2/(1 − ρ2) = 0.133/(1 − 0.133) = 0.15 packets
E[K3(t)] = ρ3/(1 − ρ3) = 0.666/(1 − 0.666) = 2 packets
E[K4(t)] = ρ4/(1 − ρ4) = 0.666/(1 − 0.666) = 2 packets
(c) E[T] = Σ_{i=1}^{4} E[Ki(t)]/α = (0.5 + 0.15 + 2 + 2)/20 = 0.232 ms
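The per-queue M/M/1 bookkeeping in this problem lends itself to a small script. The sketch below assumes the flow rates and utilizations computed above (α = 20 packets/ms and ρ = 0.33, 0.133, 0.666, 0.666) and simply applies E[K] = ρ/(1 − ρ) and Little's law.

```python
# Open network of M/M/1 queues (Problem 11): mean occupancy and end-to-end delay.
alpha = 20.0                                  # external arrival rate (packets/ms)
rhos = [0.33, 0.133, 0.666, 0.666]            # utilizations of queueing units 1..4

occupancies = [r / (1 - r) for r in rhos]     # E[K_i] for each M/M/1 unit
E_T = sum(occupancies) / alpha                # Little's law over the whole network (ms)

print("E[K_i] =", [round(k, 3) for k in occupancies])
print(f"E[T] = {E_T:.3f} ms")
```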
12. N/A
13. (a) Queueing unit 1: λ1 = 3λ
Queueing unit 2: λ2 = 6λ + 6λ + 3λ + λ = 20λ
Queueing unit 3: λ3 = 6λ
Queueing unit 4: λ4 = 6λ
(b) E[K1] = ρ1/(1 − ρ1), with ρ1 = 3λ/μ1
E[K2] = ρ2/(1 − ρ2), with ρ2 = 20λ/μ2
E[K3] = ρ3/(1 − ρ3), with ρ3 = 6λ/μ3
E[K4] = ρ4/(1 − ρ4), with ρ4 = 6λ/μ4
(c) E[T] = E[K1]/(3λ) + E[K2]/(20λ) + E[K3]/(6λ) + E[K4]/(6λ)
14. α = 200 packets/s, μ0 = 100 packets/ms = 10⁵ packets/s, μi = 10 packets/ms = 10⁴ packets/s (for 1 ≤ i ≤ 5)
(a) λ = 0.4λ + α ⇒ λ = α/0.6 = 333.33 packets/s
Thus the arrival rate to each of the five queueing units in parallel is λi = (0.4/5) × 333.33 = 26.67 packets/s.
(b) E[K0(t)] = ρ0/(1 − ρ0). Since ρ0 = λ/μ0,
E[K0(t)] = (λ/μ0)/(1 − λ/μ0) = λ/(μ0 − λ) = 333.33/(100 × 10³ − 333.33) = 3.34 × 10⁻³ packets
E[Ki(t)] = ρi/(1 − ρi) = λi/(μi − λi) = 26.67/(10 × 10³ − 26.67) = 2.67 × 10⁻³ packets (for 1 ≤ i ≤ 5)
(c) E[T0] = E[K0(t)]/λ = 3.34 × 10⁻³/333.33 = 10⁻⁵ s
E[Ti] = E[Ki(t)]/λi = 2.67 × 10⁻³/26.67 = 10⁻⁴ s
E[T] = (E[K0(t)] + Σ_{i=1}^{5} Pi E[Ki(t)])/α = (3.34 × 10⁻³ + 5(0.2 × 2.67 × 10⁻³))/200 ≈ 3 × 10⁻⁵ s
15. (a) λ3 = 0.3λ1, λ2 = α + 0.3λ1, λ1 = λ2 + λ3 + λ4, λ4 = 0.3λ1
λ1 = λ2 + λ3 + λ4 = α + 0.3λ1 + 0.3λ1 + 0.3λ1 = α + 0.9λ1 ⇒ λ1 = 10α = 8 packets/ms
λ2 = α + 0.3λ1 = α + 0.3 × 10α = 4α = 3.2 packets/ms
λ3 = 0.3 × 8 = 2.4 packets/ms
λ4 = 0.3λ1 = 0.3 × 8 = 2.4 packets/ms
(b) ρ1 = 8/10 = 0.8; E[K1(t)] = ρ1/(1 − ρ1) = 0.8/(1 − 0.8) = 4
ρ2 = 3.2/12 = 0.267; E[K2(t)] = ρ2/(1 − ρ2) = 0.267/(1 − 0.267) = 0.364
ρ3 = 2.4/14 = 0.171; E[K3(t)] = ρ3/(1 − ρ3) = 0.171/(1 − 0.171) = 0.206
ρ4 = 2.4/16 = 0.15; E[K4(t)] = ρ4/(1 − ρ4) = 0.15/(1 − 0.15) = 0.176
(c) E[T] = E[K(t)]/α = (4 + 0.364 + 0.206 + 0.176)/0.8 = 5.93 ms
Chapter 12
Quality-of-Service and Resource Allocation

1. (a) P0 = PX(0)P1 + [PX(0) + PX(1)]P0
P1 = PX(2)P0 + PX(1)P1 + PX(0)P2
P2 = PX(3)P0 + PX(2)P1 + PX(1)P2 + PX(0)P3
P3 = PX(4)P0 + PX(3)P1 + PX(2)P2 + PX(1)P3 + PX(0)P4
P4 = PX(5)P0 + PX(4)P1 + PX(3)P2 + PX(2)P3 + PX(1)P4 + PX(0)P5
(b) See Figure 12.1.

Figure 12.1: Markov chain.
2. (a) PX(k) = 1/4 for k = 0, 1, 2, 3, and PX(k) = 0 otherwise.
(b) P0 = [PX(0) + PX(1)]P0 + PX(0)P1 = (1/4 + 1/4)P0 + (1/4)P1 = (1/2)P0 + (1/4)P1
⇒ (1/2)P0 = (1/4)P1 ⇒ P1 = 2P0
P1 = PX(2)P0 + PX(1)P1 + PX(0)P2 = (1/4)P0 + (1/4)P1 + (1/4)P2 = (1/4)(1/2)P1 + (1/4)P1 + (1/4)P2
⇒ P2 = (5/2)P1 = (5/2) × 2P0 = 5P0
P2 = PX(3)P0 + PX(2)P1 + PX(1)P2 + PX(0)P3 = (1/4)(P0 + P1 + P2 + P3) = (1/4)((1/5)P2 + (2/5)P2 + P2 + P3)
⇒ P3 = (12/5)P2 = 12P0
P0 = 0.008, P1 = 2P0 = 0.016
(c) See Figure 12.2.

Figure 12.2: Markov chain.

The transition probabilities are:
P00 = PX(0) + PX(1) = 1/4 + 1/4 = 1/2
P01 = PX(2) = 1/4
P02 = PX(3) = 1/4
P10 = PX(0) = 1/4
P11 = PX(1) = 1/4
P12 = PX(2) = 1/4
P20 = 0
P21 = PX(0) = 1/4
P22 = PX(1) = 1/4
3. (a) For a Poisson distribution:
PX(x) = (λt)^x e^(−λt)/x!
For t = 1/g:
PX(x) = (λ/g)^x e^(−λ/g)/x!
With λ = 20 packets/s and g = 30 packets/s:
PX(k) = (2/3)^k e^(−2/3)/k!
(b) P0 = [PX(0) + PX(1)]P0 + PX(0)P1 ⇒ P1 = [1 − PX(0) − PX(1)]P0/PX(0)
We know P0 = 0.007. From Part (a) we also know PX(0) = e^(−2/3) = 0.513 and PX(1) = (2/3)e^(−2/3) = 0.342, thus:
P1 = 0.00197
P1 = PX(2)P0 + PX(1)P1 + PX(0)P2 ⇒ P2 = [(1 − PX(1))P1 − PX(2)P0]/PX(0)
We know PX(2) = (2/3)² e^(−2/3)/2! = 0.114, thus:
P2 = 0.249
P2 = PX(3)P0 + PX(2)P1 + PX(1)P2 + PX(0)P3 ⇒ P3 = [(1 − PX(1))P2 − PX(3)P0 − PX(2)P1]/PX(0)
We know PX(3) = (2/3)³ e^(−2/3)/3! = 0.025, thus:
P3 = 0.318
(c) Transition probabilities:
P00 = PX(0) + PX(1) = 0.855
P01 = PX(2) = 0.114
P02 = PX(3) = 0.025
P03 = PX(4) = 0.004
P10 = PX(0) = 0.513
P11 = PX(1) = 0.342
P12 = PX(2) = 0.114
P13 = PX(3) = 0.025
P20 = 0
P21 = PX(0) = 0.513
P22 = PX(1) = 0.342
P23 = PX(2) = 0.114
P30 = 0
P31 = 0
P32 = PX(0) = 0.513
P33 = PX(1) = 0.342
(d) The Markov chain is sketched from the transition probabilities of Part (c).
4. (a) b + vTb = zTb ⇒ Tb = b/(z − v)
(b) b = 0.5 Mb, z = 100 Mb/s, v = 10 Mb/s
Tb = 0.5 Mb/(100 Mb/s − 10 Mb/s) = 5.56 ms
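The burst-time formula Tb = b/(z − v) from Part (a) can be checked with a one-liner; the values below are the ones assumed in Part (b).

```python
# Leaky-bucket burst duration (Problem 4): time the output can stay at peak rate z.
b, z, v = 0.5e6, 100e6, 10e6        # bucket size (bits), peak rate, drain rate (b/s)
T_b = b / (z - v)                   # seconds
print(f"T_b = {T_b * 1e3:.2f} ms")  # ~5.56 ms
```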
5. N/A
6. Solving this problem requires a good deal of elaboration on the underlying mathematical background, which we summarize here. In general, for a continuous random variable X ≥ 0 with mean E[X], second moment E[X²], and PDF fX(x), the Laplace-Stieltjes transform (LST) of FX(x) is defined by
f̂X(δ) = ∫₀^∞ e^(−δx) fX(x) dx
Hence, for the residual time distribution Rj(t), the LST is given by
r̂j(δ) = (1 − f̂X(δ))/(E[X]δ)
The mean residual time rj can then be derived as
rj = −lim_{δ→0} dr̂j(δ)/dδ = E[X²]/(2E[X])
7. (a) For the non-preemptive scheduler: E[T1] = 0.37 s, E[T2] = 0.62 s, E[T3] = 0.25 s
(b) For the preemptive scheduler: E[T1] = 0.12 s, E[T2] = 0.66 s, E[T3] = 1.91 s
(c) The delay E[Ti] for a class-i packet is lower under the preemptive scheduler as long as i is low (high priority); the comparison reverses for larger i.
8. We compare the impact of an increased number of inputs on the total delay in a priority scheduler with three flows (n = 3) and with four flows (n = 4), where
λi = λ = 0.2 packets/ms, 1/μi = 1/μ = 1 ms, ri = r = 0.5 ms
(a) For a non-preemptive scheduler, we know:
E[Tq,i] = Wx + Σ_{j=1}^{i} ρj E[Tq,j] + E[Tq,i] Σ_{j=1}^{i−1} ρj,   where Wx = Σ_{j=1}^{n} ρj rj
n = 3, non-preemptive scheduler:
Wx = Σ_{j=1}^{3} ρj rj = (3)(0.2)(0.5) = 0.3
For i = 1: E[Tq,1] = Wx + ρ1E[Tq,1] = 0.3 + 0.2E[Tq,1] ⇒ E[Tq,1] = 0.375 ms
For i = 2: E[Tq,2] = Wx + ρ1E[Tq,1] + ρ2E[Tq,2] + E[Tq,2]ρ1 = 0.3 + 0.2 × 0.375 + 0.2E[Tq,2] + 0.2E[Tq,2] ⇒ E[Tq,2] = 0.625 ms
For i = 3: E[Tq,3] = Wx + ρ1E[Tq,1] + ρ2E[Tq,2] + ρ3E[Tq,3] + (ρ1 + ρ2)E[Tq,3] = 0.3 + 0.2 × 0.375 + 0.2 × 0.625 + 0.6E[Tq,3] ⇒ E[Tq,3] = 1.25 ms
E[Ti] = E[Tq,i] + 1/μi. For i = 3: E[T3] = E[Tq,3] + 1/μ3 = 1.25 + 1 = 2.25 ms
n = 4, non-preemptive scheduler:
Wx = Σ_{j=1}^{4} ρj rj = (4)(0.2)(0.5) = 0.4
For i = 1: E[Tq,1] = 0.4 + 0.2E[Tq,1] ⇒ E[Tq,1] = 0.5 ms
For i = 2: E[Tq,2] = 0.4 + ρ1E[Tq,1] + ρ2E[Tq,2] + E[Tq,2]ρ1 = 0.4 + 0.2 × 0.5 + 0.2E[Tq,2] + 0.2E[Tq,2] ⇒ E[Tq,2] = 0.833 ms
For i = 3: E[Tq,3] = 0.4 + ρ1E[Tq,1] + ρ2E[Tq,2] + ρ3E[Tq,3] + (ρ1 + ρ2)E[Tq,3] = 0.4 + 0.2 × 0.5 + 0.2 × 0.833 + 0.6E[Tq,3] ⇒ E[Tq,3] = 1.667 ms
E[Ti] = E[Tq,i] + 1/μi. For i = 3: E[T3] = E[Tq,3] + 1/μ3 = 1.667 + 1 = 2.667 ms
(b) For a preemptive scheduler, we know E[Ti] = E[Tq,i] + θi, where
θi = (1/μi)/(1 − Σ_{j=1}^{i−1} ρj)
n = 3: from the non-preemptive case, E[Tq,3] = 1.25 ms, and θ3 = 1/(1 − Σ_{j=1}^{2} ρj) = 1/(1 − 0.4) = 1.667 ms, so E[T3] = 1.25 + 1.667 = 2.92 ms
n = 4: from the non-preemptive case, E[Tq,3] = 1.667 ms, and θ3 = 1/(1 − Σ_{j=1}^{3} ρj) = 1/(1 − 0.6) = 2.5 ms, so E[T3] = 1.667 + 2.5 = 4.167 ms
(c) The total delays obtained with the non-preemptive scheduler for n = 3 and n = 4 are close, whereas with the preemptive scheduler the difference between n = 3 and n = 4 is large. This is expected: as the number of flows increases in the preemptive case, a low-priority packet can be interrupted repeatedly and its waiting time grows quickly, while in the non-preemptive case lower-priority packets cannot be interrupted immediately upon the arrival of higher-priority packets, so the extra flow has little impact.
9. (a) 2.1,2.1,4.1,1.1,3.2,4.2,1.2,4.3,1.3,2.2,4.4,1.4,3.3,1.5,2.3,3.4,2.4,3.6,4.5,2.5 (b) 2.1,3.1,4.1,4.2,4.3,4.4,1.1,2.2,2.3,3.2,3.3,3.4,4.5,1.3,2.4,2.5,1.5
10. N/A
11. N/A
Packet No.   Size   Flow   Fi (FQ)   Fi (WQ)
1            110    1      110       1100
2            110    1      220       2200
3            110    1      330       3300
4            100    1      430       4300
5            100    1      530       5300
6            100    1      630       6300
7            100    2      100       500
8            200    2      300       1500
9            200    3      200       666.6
10           240    3      440       1466.6
11           240    3      680       2266.6
12           240    4      240       600
12. Priority queueing:
(a) With 10% of the bandwidth guaranteed, the low-priority flow will at least be able to transmit; without the guaranteed bandwidth, a low-priority flow might never transmit.
(b) The high-priority flows lose 10% of the bandwidth, and this 10% hit is distributed evenly across all the high-priority flows, so the performance hit will hardly be noticeable.
13. See the table above for the following arrangement: Flow 1: 110, 110, 110, 100, 100, 100; Flow 2: 100, 200; Flow 3: 200, 240, 240; Flow 4: 240.
(a) Packet transmission order in fair queueing: 7, 1, 9, 2, 12, 8, 3, 4, 10, 5, 6, 11
(b) Packet transmission order in weighted queueing: 7, 12, 9, 1, 10, 8, 2, 11, 3, 4, 5, 6
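The Fi columns in the table can be reproduced with simple per-flow cumulative finish tags: for fair queueing Fi grows by the packet size, and in the weighted case by size divided by the flow's weight. The weights 0.1, 0.2, 0.3, and 0.4 for flows 1-4 are inferred from the numbers in the table and are used here only as an assumption.

```python
# Reproduce the finish tags Fi used in Problem 13 (weights are an inferred assumption).
packets = [(110, 1), (110, 1), (110, 1), (100, 1), (100, 1), (100, 1),
           (100, 2), (200, 2), (200, 3), (240, 3), (240, 3), (240, 4)]
weights = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}

fq, wq = {}, {}
for no, (size, flow) in enumerate(packets, start=1):
    fq[flow] = fq.get(flow, 0) + size                   # fair-queueing finish tag
    wq[flow] = wq.get(flow, 0) + size / weights[flow]   # weighted finish tag
    print(f"packet {no:2d}: flow {flow}  Fi(FQ)={fq[flow]:6.1f}  Fi(WQ)={wq[flow]:8.1f}")
```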
14. (a) Priority Q.: A1,B1,B2,B3,A2,C1,C2,C3,A3,A4,B4,B5,C4,D1,D2,D3,A5,C5,D4,D5
(b) Fair Q.: A1,B1,D1,B2,C1,D2,A2,B3,C2,D3,A3,B4,C3,D4,A4,B5,C4,D5,A5,C5
15. Please make a correction: in Part (c), flow D is 30 percent.
(a) Priority Q.: B1,–,B2,A1,A2,B3,C1,B4,A3,A4,B5,C2,C3,C4,C5,D1,D2,D3,A5,D4,D5
(b) Fair Q.: B1,–,B2,C1,D1,A1,B3,C2,D2,A2,B4,C3,D3,A3,B5,C4,D4,A4,C5,D5,A5
(c) Weighted: B1,–,B2,C1,C2,C3,C4,D1,D2,A1,A2,A3,B3,C5,D3,D4,A4,B4,D5,A5,B5
16. This problem is the result of moving Problem 8.4 here as Problem 12.16.
(a) The fairness index of B1, B2, B3 is given by
σ = (Σ_{i=1}^{n} fi)²/(n Σ_{i=1}^{n} fi²) = (f1 + f2 + f3)²/(3(f1² + f2² + f3²)) = (1 + 1 + 1)²/(3(1² + 1² + 1²)) = 1
(b) For equal throughput rates f, the fairness index is
σ = (f + f + f)²/(3(f² + f² + f²)) = (3f)²/(3 × 3f²) = 1
We know that 0 corresponds to the worst and 1 to the best resource allocation. Thus, from the result of Part (a), the best resource allocation is obtained when the throughput rates are equal.
(c) The fairness index of B1 through B5 is given by
σ = (f1 + f2 + f3 + f4 + f5)²/(5(f1² + f2² + f3² + f4² + f5²)) = (1 + 1 + 1 + 1.2 + 16)²/(5(1 + 1 + 1 + 1.44 + 256)) = 0.313
(d) The result of Part (c) shows that the resource allocation is not the best when the throughput rates are different: the network cannot offer a fair amount of resources to each flow.
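Jain's fairness index used in this problem is one line of arithmetic; the sketch below simply recomputes Parts (a) and (c).

```python
def fairness_index(throughputs):
    """Jain's fairness index: 1.0 is perfectly fair; it falls toward 1/n as one flow dominates."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(f * f for f in throughputs))

print(fairness_index([1, 1, 1]))              # Part (a): 1.0
print(fairness_index([1, 1, 1, 1.2, 16]))     # Part (c): ~0.313
```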
Chapter 13
Networks in Switch Fabrics

1. XCrossbar = n², XDelta = nd log_d n (with d = 2):

n          Complexity of crossbar    Complexity of Delta network
2² = 4             16                          16
2⁴ = 16            256                         128
2⁵ = 32            1024                        320

The complexity of the crossbar increases dramatically compared with the Delta network as n increases. See Figure 13.1.
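The table is produced by direct substitution into the two complexity formulas; a small sketch (assuming d = 2 as above):

```python
from math import log

d = 2
for n in (4, 16, 32):
    crossbar = n ** 2                 # crosspoints in an n x n crossbar
    stages = round(log(n, d))         # number of stages, log_d(n)
    delta = n * d * stages            # crosspoints in a Delta network of d x d elements
    print(f"n={n:3d}  crossbar={crossbar:5d}  delta={delta:4d}")
```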
2. (a) See Figure 13.2.
(b) Complexity:
X(D16,2) = (n/d) × d² × log_d n = nd log_d n = (16)(2) log₂ 16 = 128
X(D16,4) = (n/d) × d² × log_d n = (4)(16)(2) = 128
They have the same complexity.
Figure 13.1: Comparison of the complexity (crosspoint count) of the crossbar and the Delta network as n grows.
Communication Delay: The delay for D16,2 is higher than D16,4 because it has to go through more stages. However, D16,4 is nonblocking while D16,2 is blocking.
3. (a) See Figures 13.3 and 13.4.
(b) Complexity:
X(Ω16,2) = (n/d) × d² × log_d n = nd log_d n = (16)(2) log₂ 16 = 128
X(Ω16,4) = (n/d) × d² × log_d n = (4)(16)(2) = 128
They have the same complexity.
Communication delay: the delay of Ω16,2 is higher than that of Ω16,4 because a packet has to go through more stages. However, Ω16,4 is nonblocking while Ω16,2 is blocking.
Figure 13.2: (a) D16,2 switch fabric; (b) D16,4 switch fabric.
4. B = 1 − (1 − P )3 B=P
5. (a) See Figures 13.5 and 13.6.
(b) Number of stages: B16,2 → 2 log₂ 16 − 1 = 7; B16,4 → 2 log₄ 16 − 1 = 3
Comparison in terms of complexity: in general, the complexity of Bn,d is nd(2 log_d n − 1).
B16,2: 32 × 7 = 224
B16,4: 64 × 3 = 192
Figure 13.3: Ω16,2 switch fabric.
Figure 13.4: Ω16,4 switch fabric.
Figure 13.5: B16,2 switch fabric.
Figure 13.6: B16,4 switch fabric.
Comparison in terms of communication delay: B16,2 requires 7 × (delay per stage), while B16,4 requires 3 × (delay per stage). Thus, B16,2 has higher complexity and higher delay.
6. See Figures 13.7 and 13.8.
B16,2:
P2 = 1 − (1 − P1)²
P3 = (1 − (1 − P1)²)²
P4 = 1 − (1 − P1)²(1 − P3)
P5 = P4
P6 = 1 − (1 − P1)²(1 − P5)
Pblock = P6² = [1 − (1 − P1)²(1 − P5)]² = [1 − (1 − P1)²(1 − P4)]², with P4 = 1 − (1 − P1)²(1 − P3) and P3 = (1 − (1 − P1)²)².
B16,4:
Pblock = (1 − (1 − P1)²)⁴
Hence Pblock(B16,2) < Pblock(B16,4).
7. (a) See Figures 13.9 and 13.10. (b) The Banyan network is similar to Delta network. Thus, the routing rule of the Delta network can be applied to a Banyan network.
8. (a) See Figure 13.11.
(b) B9,3: XB = nd(2 log_d n − 1) = 9(3)(2 log₃ 9 − 1) = 81 crosspoints
s = 2 log_d n − 1 = 2 log₃ 9 − 1 = 3 stages
Figure 13.7: Lee’s blocking model for B16,2 switch fabric.
Figure 13.8: Lee’s blocking model for B16,4 switch fabric.
Figure 13.9: Y16,2 switch fabric.
Figure 13.10: Y16,4 switch fabric.
Figure 13.11: Comparing two switching networks: (a) B9,3 and (b) Ω9,3.
9. (a) N/A
(b) The complexity of this network is estimated to be
nc/n = d(h + log_d n),   nL/n = 1 + h + log_d n
(c) For the extended Delta network D^E_{n,d,h}, the blocking probability is estimated as follows:
Pp(n, d, 0) = 1 − (1 − p)^(k−1)
Pp(n, d, h) = [1 − (1 − p)²(1 − Pp(n/d, d, h − 1))]^d

10. (a) Architecture 1: choose d = 2; k ≥ 2d − 1 ⇒ k = 3
Complexity = 4(2 × 3) + 3(4 × 4) + 4(3 × 2) = 96
This is optimal in terms of complexity: dopt ≈ √(n/2) = 2 ⇒ k = 3. See Figure 13.12.

Figure 13.12: Clos network, d = 2 and k = 3.

Architecture 2: choose d = 4; k ≥ 2d − 1 ⇒ k = 7
Complexity = 2(4 × 7) + 7(2 × 2) + 2(7 × 4) = 140
See Figure 13.13.
Figure 13.13: Clos network, d = 4 and k = 7.
(b) For reliability and fault tolerance.
11. N/A
12. The Clos network is designed to be non-blocking by providing extra links in the middle stages. Lee's model, on the other hand, refers to the possible congestion on each link; therefore the values calculated with Lee's method refer to the conceptual blocking of each link between nodes. Summary: with Lee's method we consider only one path, but for the non-blocking analysis k ≥ 2d − 1 we consider all paths.
Figure 13.14: Comparing two Clos networks: (a) C6,2,3 and C6,3,5 switching networks; (b) comparing their Lee's models.
13. (a) See Figure 13.14. d = √(6/2) ≈ 2
k ≥ 2d − 1 = 2 × 2 − 1 ⇒ k = 3; for d = 3, k′ = 2 × 3 − 1 = 5
(b) Lee's model, total blocking probability. Assuming the probability of blocking on each link is p:
C6,2,3 ⇒ B = (1 − (1 − p)²)³ = (2p − p²)³
C6,3,5 ⇒ B = (1 − (1 − p)²)⁵ = (2p − p²)⁵
(c) C6,3,5 has a lower probability of blocking, but it has higher complexity and cost because more crossbars are used. So it depends on the needs of the network to decide which one is better.
14. Five-stage Clos network with n = 8, d = 2.
(a) See Figure 13.15.
(b) See Figure 13.16.
(c) Total blocking probability with p = 0.2: the probability of blocking for the middle three stages is (2p − p²)³. Thus
B = (1 − (1 − p)(1 − p)[1 − (2p − p²)³])³ = 0.059
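The numeric value 0.059 follows directly from the expression above; a quick check (p = 0.2 as assumed in the problem):

```python
# Lee-model blocking of the five-stage Clos network in Problem 14 (p = 0.2).
p = 0.2
middle = (2 * p - p**2) ** 3                    # blocking of the inner three-stage Clos
B = (1 - (1 - p) ** 2 * (1 - middle)) ** 3      # overall blocking probability
print(round(B, 3))                              # ~0.059
```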
15. (a) See Figure 13.17. (b) See Figure 13.18. BXY = [1−(1−p)2 (1−(1−p)2 )] = [1−(1−0.2)(1−0.2)(1−0.059)] = 0.0629 BZW = [1 − (1 − 0.2)(1 − 0.0629)(1 − 0.2)]3 = 0.02
16. Consider a five-stage Clos network (similar to Problem 13.14) whose stages use the following crossbar dimensions: 1st stage d × k, 2nd stage e × j, 3rd stage (n/(de)) × (n/(de)), 4th stage j × e, and 5th stage k × d.
(a), (b) We find k, d, j, and e in terms of n. Non-blocking conditions: j ≥ 2e − 1 and k ≥ 2d − 1. The total number of crosspoints is
Xc = (n/d)dk + k[(n/(de))ej + j(n/(de))² + (n/(de))je] + (n/d)dk
Figure 13.15: Five-stage Clos network with n = 8, d = 2.
Figure 13.16: Lee’s model for the five-stage Clos network with n = 8, d = 2.
Simplifying,
Xc = 2kn + k(2jn/d + jn²/(d²e²))
With the non-blocking choices k = 2d − 1 and j = 2e − 1:
Xc,nb = 2n(2d − 1) + (2d − 1)[2n(2e − 1)/d + n²(2e − 1)/(d²e²)]
= 4nd − 6n + 8ne − 4ne/d + 2n/d + 4n²/(de) − 2n²/(de²) − 2n²/(d²e) + n²/(d²e²)
To optimize, set dXc,nb/de = 0:
8n − 4n/d − 4n²/(de²) + 4n²/(de³) + 2n²/(d²e²) − 2n²/(d²e³) = 0
We know dopt ≈ √(n/2); substituting it into this equation and solving for e gives eopt.
(c) Plug dopt and eopt into Xc,nb to obtain the minimum crosspoint count.
Figure 13.17: A Cantor network with three parallel switching planes.
17. (a) m = 512 bytes × 16 = 8192 bytes 8192×8 B/b (b) rows = = 2048 rows 32 C = log2 2048 = 11 bits (c) RAM segment process time: 518 × 8 ×
1 32
× (2 ns + 2 ns + 1 ns) =
640 ns/segment s Transmission time of a segment = 0.4μ 16 = 25 ns bits = 6.15 × 109 = 6.15 Gb/s Bit rate = 512×8 640+25 6.15 Gb/s = 1.5 Msegment/sec/port (d) 512×8
120
Chapter 13. Networks in Switch Fabrics
X
Z
Y
W
Figure 13.18: Lee’s blocking probability model for the Cantor network with three parallel switching planes.
Chapter 14
Optical Switches and Networks, and WDM 1. n ≤ ≤ 2n − 1 8 ≤ ≤ 15 (a) Minimum loss = 8 × 40 = 32 dB Maximum loss = 15 × 40 = 600 dB (b) ave ≈
n+(2n−1) 2
≈
3n−1 2
≈ 12
Average loss = 12 × 40 = 240 dB
2. N/A
3. N/A
4. N/A
5. N/A 121
122
Chapter 14. Optical Switches and Networks, and WDM
6. (a) E[T ] =
1 μi,j −Li,j
(b) E[Ts ] =
1 μi,j −si,j Li,j
(c) E[Tn ] =
1 n(n−1)
7. N/A
8. N/A
9. N/A
10. N/A
si,j i,j μi,j −si,j Λ
Chapter 15
Multicasting Techniques and Protocols 1. Multicast Connection. A multicast protocol such as MOSPF, PIM, and CBT helps keep the traffic down by requiring the source to transmit only one packet. In the down side, a multicast protocol causes some computational delay associated with the implementation of the protocol in a router. Additionally, a special type router is needed to implement the multicast protocol. Another major drwback with the multicast protocol is that if the source packet is lost on its way before being copied, no multicast group members receive a copy of the packet. Multicast Connection. In contrast, sending the packet separately to each group would increase the traffic substantially. But it is more convenient from the network management stand-point.
2. In the sparse-mode algorithm, a shared-tree technique is used, and a relatively low cost path is selected. As a result, the sparse-mode approach intruduces extra delay than the dense-mode approach.
123
124
Chapter 15. Multicasting Techniques and Protocols R1 LAN 5
R2 R3
R4 LAN 4
LAN 1
R5 R7 R6
R8
Multicast Group 2
LAN 2 LAN 3
Figure 15.1: MOSPF protocol. A rendesvous point (RP) router is selected as a shared root of distribution sub-tree. RP router is used to coordinate forwarding packets and prevent initial flooding of datagram. However, RP router can a hot spot for multicast traffic congestion and a possible point of failure in routing. Also, as finding a low cost path is not necessary in the sparse-mode, less hardware complexity meay be needed. However, the sparse-mode approach lowers the efficiancy of MOSPF as this protocol employs OSPF unicast routing that requires that each router in a network be aware of all available links. If the sparse mode is chosen, each router using a shared-tree may cause some involving routers select longers paths than they would normally select in a densemode.
3. (a) See Figure 15.1.
125 k
(b)
{3} {2,3} {2,3,4} {2,3,4,7} {2,3,4,7,8} {2,3,4,5,7,8} {2,3,4,5,6,7,8}
β3,2
β3,4
β3,5
β3,6
β3,7
β3,8
3-2(5)
3-4(7)
×
3-7(10)
3-8(11)
3-2(5)
3-4(7)
3-2-5(12)
3-7(10)
3-8(11)
3-2(5)
3-4(7)
3-2-5(12)
× × ×
3-7(10)
3-8(11)
3-2(5)
3-4(7)
3-2-5(12)
3-7-6(23)
3-7(10)
3-8(11)
3-2(5)
3-4(7)
3-2-5(12)
3-7-6(23)
3-7(10)
3-8(11)
3-2(5)
3-4(7)
3-2-5(12)
3-7-6(23)
3-7(10)
3-8(11)
3-2(5)
3-4(7)
3-2-5(12)
3-7-6(23)
3-7(10)
3-8(11)
The copying router is R5 .
4. N/A
5. (a) We choose R2 as a rendezvous point as this node is at the network edge and has the least cost to multicast group. (b) Form a least-cost tree for the multicast action. For LAN 1: R3 → R2 → R5 → LAN 4, and R3 → R2 → R5 → R6 → LAN 3. The copying router is R5 , and The total cost = 26.
6. N/A
7. (a) In the MOSPF deployment, the multicast tree is: R3 → R8 → LAN2, and R8 → R6 → LAN3. The total cost is 2.
126
Chapter 15. Multicasting Techniques and Protocols (b) In the PIM deployment, we choose R4 as the rendezvous router. The multicast tree is: R3 → R7 → R4 → R7 → R6 → LAN3, and R7 → R8 → LAN2. The total cost is 5.
8. See Figure 15.2. For the copy network with F = 7, d = 2, and k = 7: Divide the stages into two halves. The first half has 72 = 3 stages. For Stages 1, 2, and 3: Initialize: F1 = 7, f1 = 1 ⇒ route the packet randomly then at each stage: The second half of the network has 72 = 4 stages. Stages 4, 5, 6, and 7: 7 = 78 = 1 Stage 4: F4 = 71 = 7 and f4 = 27−4
⇒ make one copy at this stage 7 = 74 = 2 Stage 5: F4 = 71 = 7 and f4 = 27−5
⇒ make two copy at this stage 7 = 42 = 2 Stage 6: F 4 = 72 = 4 and f 4 = 27−6
⇒ make two copy at this stage 7 = 21 = 2 Stage 7: F 4 = 42 = 2 and f 4 = 27−7
⇒ make two copy at this stage At the end of stage 7, we have 8 copies in total, since we only need 7, we’ll discard one of them and send the remaining 7 copies to the interface.
9. (a) We prefer a copy node not to be at the edge of a network:
127
11
21
31
41
31
21
13
23
33
43
33
23
25
35
45
35
25
15
27
37
47
37
27
17
17
11
Figure 15.2: Multicasting with F = 7 copies in a copy network with d = 2 and k = 7.
128
Chapter 15. Multicasting Techniques and Protocols R12 → R13 → R10 , and R12 → R17 → R15 → R16 → LAN 1, and R12 → R17 → R15 → R14 → LAN 2. Total multicast cost is 10. (b) R12 → R13 → R10 → R20 → R28 → R27 → R26 → R24 → R25 → R23 → LAN 4, and R12 → R13 → R10 → R20 → R28 → R27 → R26 → R24 → R25 → R21 → LAN 3. Total multicast cost is 27. (c) R12 → R13 → R10 → R30 → R37 → R36 → R34 → R33 → R32 → LAN 5. Total multicast cost is 31.
10. (a) See Figure 15.3. (b) The complexity of Boolean splitting multicast algorithm is less than the one for tree-based algorthm since it does not need two switch fabrics for routing and copying. But with the Boolean splitting multicast algorithm if the number of packets increases, the congestion can happen because the copying and routing are functioned at the same time. Thus, the performance of the Boolean splitting algorithm is better than the tree-base algorithm only for making low number of packets.
11. (a) See Figure 15.4. (b) See Figure 15.5. s = k = logd n = log2 8 = 3 for 1 ≤ k2 = 3 F = 5, fj = 1
129 j =4 1
5 = 32 = 2 j = 5 : F5 = Ff44 = 52 = 3f5 = dF6−4 6 = 2 = 2 j = 6 : F6 = Ff55 = 32 = 2f6 = dF6−6
12. See Figure 15.6. To construct a 4 × 4 Crossbar switch, we need four 4 × 1 multiplexers. if in0 wants to send packets to Out1 and Out2, it just needs to set the control-bits C4-C6 to accept packets from In0.
130
Chapter 15. Multicasting Techniques and Protocols
3-Stage Routing (All Identical)
3-Stage Copying
0
0 0
1 1
0 1
0 1
1 0
1 0
1 0 0 1
0 00 1 0 0 0 1
0 0 1
0 1 0
0 1 0
1 0 0
1 0 0
1 0 1
1 0 1
1 1 1
1 1 1
1
0 1 0
0 0 1 0 0 1
1 0 0
0 1 0 11
1 0 0
1
1 0 1
1 0 1 1 11 1
1 1 1 1
1
1 1 1
Figure 15.4: Multicasting with F = 5 copies in a Ω8,2 copy network using Boolean splitting algorithm.
000 001
Figure 15.5: Cascading two Omega networks to form a copy network.
Figure 15.6: Internal structure of the multicast switch.
Chapter 16
VPNs, Tunneling, and Overlay Networks

1. (a) The advantage of having egress nodes estimate routing is that the routing computations are distributed throughout the MPLS network and require fewer entries in the forwarding table of each egress node. This could provide faster assignment of labels (and therefore faster transmission) for packets entering the MPLS network, due to the smaller tables.
(b) The advantage of a preassigned router is that convergence to the best routes would be faster and synchronized, since updates to the topology would not need to be exchanged among all routers in the MPLS network (such as when LSRs go up and down, which initiates topology changes).
2. N/A
3. N/A 133
134
4. N/A
5. N/A
6. N/A
Chapter 16. VPNs, Tunneling, and Overlay Networks
Chapter 17
Compression of Digital Voice and Video 1. See Figure 17.1.
g(t) = sin(200πt) s(t) =
+∞
n=−∞
t−nTs [ τ ] = +∞ n=−∞ [2000t − 2n] +2
gs (t) = g(t) × s(t) =
n=−2
[2000t − 2n] sin(200πt)
G(f ) = j[δ(f − 100) + δ(f + 100)] S(f ) =
1 2
+∞
n=−∞ sinc[5
× 10−4 (f − 1000n)]
Gs (f ) = G(f ) × S(f ) =
1 2
+2
n=−2
sinc[5 × 10−4 (f − 1100n)] + sinc[5 × 10−4 (f − 900n)]
2. N/A
3. Mean: E[X] = 0 Rate Capacity: R = 4 b/sample Variance: V [X] = 2 (a) For this source, we know Db = V [X]2−2R . With R = 4, we obtain 135
136
Chapter 17. Compression of Digital Voice and Video g(t)
g(t) s
s(t)
1
-2
-2 -1 0 t
1 2
-2
-1 0
G(f)
-100
1
2
-1 0
t
f
1000
t
G(f) s
S(f)
100
1 2
1000
f
-1100
100 1100
f
Figure 17.1: Solution to exercise. Sampling process in time and frequency domains.

Db = 0.0078
(b) If the tolerable distortion becomes Db = 0.05, using the same formula we obtain
R = (1/2) log₂(V[X]/Db) = 2.66
Thus, the required transmission capacity is 2.66 b/sample.
4. Variance = σ 2 = V [X] = 10 N = 12
137 (a) √ Δ
= 0.4238 √ Δ = 0.4238( 10) = 1.34 V [X]
(b) ai = −aN −i = −( N2 − i)Δ a1 = −a11 = −(6 − 1)1.34 = −6.7 a2 = −a10 = −(6 − 2)1.34 = −5.36 a3 = −a9 = −(6 − 3)1.34 = −4.02 a4 = −a8 = −(6 − 4)1.34 = −2.68 a5 = −a7 = −(6 − 5)1.34 = −1.34 a6 = 0 xN +1−i = −( N2 − i + 12 )Δ (c) x ˆi = −ˆ x12 = −(6 − 1 + 12 )1.34 = −7.37 x ˆ1 = −ˆ x10 = −(6 − 2 + 12 )1.34 = −6.03 x ˆ2 = −ˆ x9 = −(6 − 3 + 12 )1.34 = −4.69 x ˆ3 = −ˆ x8 = −(6 − 4 + 12 )1.34 = −3.35 x ˆ4 = −ˆ x7 = −(6 − 5 + 12 )1.34 = −2.01 x ˆ5 = −ˆ x6 = −(6 − 6 + 12 )1.34 = −0.67 x ˆ6 = −ˆ (d)
D V [X]
= 0.0.1885
D = 0.1885 (e) For the source, we have Db = V [X]2−2R . Here, we are asked to find Db given R. N = 12 R = log2 N ≈ 4 Db = V [X]2−2R = 0.039
5. Note to instructors: this problem needs to be corrected to “· · · repeat Problem 4, this time using a 16-level optimal uniform quantizer · · ·”
138
Chapter 17. Compression of Digital Voice and Video Variance = σ 2 = V [X] = 10 N = 16
(a) √ Δ
= 0.4908 √ Δ = 0.4908( 10) = 1.06 V [X]
(b) ai = −aN −i = −( N2 − i)Δ a1 = −a15 = −(8 − 1)1.06 = −7.42 a2 = −a14 = −(8 − 1)1.06 = −6.36 a3 = −a13 = −(8 − 1)1.06 = −5.3 a4 = −a12 = −(8 − 1)1.06 = −4.24 a5 = −a11 = −(8 − 1)1.06 = −3.18 a6 = −a10 = −(8 − 1)1.06 = −2.12 a7 = −a9 = −(8 − 2)1.06 = −1.06 a8 = 0 xN +1−i = −( N2 − i + 12 )Δ (c) x ˆi = −ˆ x16 = −(8 − 1 + 12 )1.06 = −7.95 x ˆ1 = −ˆ x15 = −(8 − 2 + 12 )1.06 = −6.69 x ˆ2 = −ˆ x14 = −(8 − 3 + 12 )1.06 = −5.83 x ˆ3 = −ˆ x13 = −(8 − 4 + 12 )1.06 = −4.77 x ˆ4 = −ˆ x12 = −(8 − 5 + 12 )1.06 = −3.71 x ˆ5 = −ˆ x11 = −(8 − 6 + 12 )1.06 = −2.65 x ˆ6 = −ˆ x10 = −(8 − 6 + 12 )1.06 = −1.59 x ˆ7 = −ˆ x9 = −(8 − 6 + 12 )1.06 = −0.53 x ˆ8 = −ˆ (d)
D V [X]
= 0.01154
D = 0.1154
6. N/A
139
7. The sampling rate is fs = 80, 000 meaning that we take 80,000 samples per second. Each sample is quantised using 16 bits so the total number of bits per second is 80,000 × 16. For a music piece of duration 60 min=3000 sec the resulting number of bits is 80, 000 × 16 × 3000 = 3.8 × 109
8. We define Λ(x) as follows: ⎧ ⎪ ⎨ x+1
Λ(x) =
−1 ≤ x ≤ 0 0≤x≤1 otherwise
−x + 1 ⎪ ⎩ 0
We define the PDF as follows:
fX (x) = 12 Λ( 12 x) =
⎧ 1 x x+2 ⎪ ⎨ 2 ( 2 + 1) = 4 1 −x 2( 2
⎪ ⎩ 0
+ 1) =
−2 ≤ x ≤ 0 0≤x≤2 otherwise
−x+2 4
˜ in the book. Q(X) is in fact the quantiztion function denoted by X ˜ = Q(X). We define the quantization error by a new random Thus X ˜ variable Y = X − X: For − 2 < x ≤ −1 ⇒ x ˜1 = −1.5 fX (x1 ) =
x1 +2 4
=
(y1 −˜ x1 )+2 4
=
(y1 −1.5)+2 4
=
y1 +0.5 4
=
y2 +1.5 4
For − 1 < x ≤ 0 ⇒ x ˜2 = −0.5 fX (x2 ) =
x2 +2 4
=
(y2 −˜ x2 )+2 4
=
(y2 −0.5)+2 4
For 0 < x ≤ −1 ⇒ x ˜3 = 0.5 fX (x3 ) =
−x3 +2 4
=
−(y3 −˜ x3 )+2 4
=
−(y3 +0.5)+2 4
=
−y3 +1.5 4
140
Chapter 17. Compression of Digital Voice and Video For 1 < x ≤ 2 ⇒ x ˜4 = 1.5 fX (x4 ) =
−x4 +2 4
=
−(y4 −˜ x4 )+2 4
=
−(y4 +1.5)+2 4
=
−y4 +0.5 4
To find fY (y), we use the important property of the PDF and a function of a random variable Y = g(X), as: fY (y) =
n
i=1
We know that fY (y) =
4
fX (xi ) dy dx
dy dx
= 1. Thus:
i=1 fX (xi )
=
y1 +0.5 4
+
y2 +1.5 4
+
−y3 +1.5 4
+
−y4 +0.5 4
=1
9. See Table 17.1 and Figure 17.2. Table 17.1: Encoded words. Input: ABC +7 = 111 +5 = 110 +3 = 101 +1 = 100 -1 = 011 -3 = 010 -5 = 001 -7 = 000
Output: XYZ 110 111 101 100 000 001 011 010
10. (a) See Figure 17.3 (b) 513 Cc − 46, 2 Cc 0, 3 -2 Cc 0, 2 -2 0 1 0 1 0 -1 Cc 0, 4 -1 Cc 0, 43
11. HX (x) = −
0
−1 (x
+ 1) ln(x + 1)dx −
1 0
(−x + 1) ln(−x + 1)dx
141
A
BC 0
0
0
0
1
1
1
1
X=A X
BC A 1
1
0
0
0
0
1
1
A Y B Z
Y= A + B =AB + AB C BC A 0
1
0
1
0
1
0
1
Z=B + C= BC +BC
Figure 17.2: PCM encoder design: Karnaugh maps and the logic circuit. =
1 2
12. Sample space (alphabet) = {a1 , a2 , a3 , a4 , a5 } Corresponding probabilities = {0.23, 0.30, 0.07, 0.28, 0.12}. (a) Entropy H(x) = −
5
i=1 Pi
log Pi
= −(0.23 log 2 0.23 + 0.3 log 2 0.3 + 0.07 log 2 0.07 + 0.28 log 2 0.28 + 0.12 log 2 0.12) = 2.157 b/sample (b) {a1 , a2 , a3 , a4 , a5 } { 15 , 15 , 15 , 15 , 15 } H2 (x) = − = −5 ×
5
i=1 Pi log Pi 1 1 5 log 2 5 = 2.32 b/sample
142
Chapter 17. Compression of Digital Voice and Video
513 -46
0
-2
0
-1
0
0
-46
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
-2
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
-1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
Figure 17.3: Quantization of a still image to produce Matrix Q[i][j], and the order of matrix elements for transmission.
Entropy of a uniformly distributed source is more compared to the above entropy of a non-uniformly distributed suorce.
13. (a) PX (1) = PXY (1, 1) + PXY (1, 2) + PXY (1, 3) = 0.1 + 0.2 + 0.4 = 0.7 PX (2) = PXY (2, 1) + PXY (2, 2) + PXY (2, 3) = 0.1 + 0 + 0.2 = 0.3 PX (3) = PXY (3, 1) + PXY (3, 2) + PXY (3, 3) = 0 H(X) = −(0.7 log 0.7 + 0.3 log 0.3) = 0.881 PY (1) = PY (2) = PY (3) =
3
x=1 PXY
(x, 1) = 0.1 + 0.1 = 0.2
x=1 PXY
(x, 2) = 0.2
x=1 PXY
(x, 3) = 0.4 + 0.2 = 0.6
3 3
H(Y ) = −(0.2 log 0.2 + 0.2 log 0.2 + 0.6 log 0.6) = 1.371
(b) The marginal entropy shows the average information that we recieve from one source if we know the other one. (c) H(X, Y ) = −(0.1 log 0.1 + 0.2 log 0.2 + 0.2 log 0.2 + 0.1 log 0.1 +
143 0.4 log 0.4) = 2.122
(d) Joint entropy shows the average information thet we recieve from combination of two source.
14. H(X|Y ) = H(X, Y ) − H(Y ) H(Y |X) = H(X, Y ) − H(X)
15. (a) Sample output a4 (b) The information content of samples a1 and a5 is: I = I(P1 ) + I(P5 ) = − log2 (0.1) − log2 (0.2) = 5.79 b (c) Least prob. seq. = {a4 , a4 , a4 , a4 , a4 , a4 , a4 , a4 , a4 , a4 } And, its probability = (0.05)10 = 9.76 × 10−14 No, it is not. (d) H(X) = −
7
i=0 Pi
log2 Pi = 2.501 b/symbol
F = 50 HZ × 2 = 100 symbol/sec H(X) = 100 symbol/sec × 2.501 b/symbol = 250 b/s (e) number of typical seq. = 2nH(X) = 225 = 3.4 × 107 number of non-typical seq. = number of seq.-number of typical seq. = N n − 2nH(X) = 28.7 × 107 − 3.4 × 107
16. Sample Space (Alphabet) = {a1 , a2 , a3 , a4 } P ∈ {0.15, 0.20, 0.30, 0.35}
144
Chapter 17. Compression of Digital Voice and Video (a) H(x) = −
N
k=1 Pk
log2 Pk
= −(0.15 log 2 0.15 + 0.2 log2 0.2 + 0.3 log 2 0.3 + 0.35 log 2 0.35) = 1.926 Number of typical sequences = 2nH(x) = 2100(1.926) = 9.6 × 1057 (b) Total number of sequences = N n = 4100 = 1.607 × 1060 Total number of non-typical sequences = 1.607 × 1060 − 9.9 × 1057 ≈ 1.597 × 1060 Number of typical sequences = 9.6×1057 = 0.006 1.597×1060 Number of non-typical sequences ¯ = 2−nH(x) = 2−100(1.926) = 1.04 × 10−58 (c) P [X] (d) Number of bits to represent typical sequence= nH(x) = 100(1.926) = 193 b (e) Most probable sequence is: {a4 , a4 , · · · , a4 } with probability P4n = (0.35)100 = 2.55 × 10−46
17. (a) See Figure 17.4. The compressed codes are: a0 = 0, a1 = 110, a2 = 10100, a3 = 100, a4 = 1011, a5 = 111, a6 = 10101.
(b) The entropy of the source is computed as H(X) = −
6
k=0 Pk
log2 Pk = 2.07
The average code length is: ¯ = 6 Pi i = 1(0.55) + 3(0.1) + 5(0.05) + 3(0.14) + 4(0.06) + R i=0
3(0.08) + 5(0.02) = 2.1 ¯ ≤ H(X) + 1 As it is expected: H(X) ≤ R
145
a0 a3 a1 a5 a4 a2 a6
0.55
0.55
0.55
0.14
0.18
0.55
0 0.14
0.27 0
0.13
0.10
0 1 0.27 0.18
0.10
0.08
0.45 1
0.14
0 0.07
0.06 0.05
0 0.07
0.02
1
0.18 0 0.13
0.13 1
0.08 1
0.06 1
Figure 17.4: Huffman encoder. Code efficiency: η=
H(X) ¯ R
=
2.07 2.1
= 0.98
18. {−3, −2, −1, 0, 2, 3, 5} P ∈ {0.05, 0.1, 0.1, 0.15, 0.05, 0.25, 0.3} (a) H(x) = −
6
k=0 Pk
log2 Pk
= −(0.05 log 2 0.05+0.1 log 2 0.1+0.1 log 2 0.1+0.15 log 2 0.15+0.05 log 2 0.05+ 0.25 log 2 0.25 + 0.3 log 2 0.3) = 2.528 bits/sample (b) fs = 4000 guard = 200 Entropy rate = (2fs + guard)H(x) = (4000 × 2 + 200)H(x) = 20, 731 bits/sec
146
Chapter 17. Compression of Digital Voice and Video (c) See Figure 17.5. The generated codes are: 5(00), 3(01), 0(100), −1(101), −2(110), 2(1110), −3(1111). 0.45
0.3
5
0.55 0
3
0.25
0
0.15
0.3
1
0
0.45 0.55
0.25 1 0.25
0.1
-1
0.15 0
-2
0.1
2
0.05
0.2
0 0.45 1
0.1 1 0.25 0 0.2 0
1 0.1
-3
0.05
1
Figure 17.5: Huffman encoder. ¯ = 6 Pi i = (0.3 + 0.25)(2) + (0.1 + 0.1 + 0.15)(3) + (0.05 + (d) R i=0 0.05)4 = 2.55 ¯ 2.55 R log2 N = 3 H(X) = 2.528 ¯ 2.55 = R
Cr =
= 0.85
η=
0.99
19. Source sequence is 0, 1, 01, 00, 001, 000, 11, 111, 0010, 10, 101, 1111, 010, 0101, 0101 Find the smallest phrases that have not appeared. Phrases are encoded Table 17.2.
20. See Table 17.3.
147
Table 17.2: Lempel-Ziv coding process Parser Output 0 1 01 00 001 000 11 001 010 10 101 111 010 101 1010
Location 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 0110 1011 1100 1110 1111
Encoded Output 00000 00001 00011 00010 01001 01000 00101 01111 01010 00100 10101 10001 00110 11011 11100
148
Chapter 17. Compression of Digital Voice and Video
Table 17.3: Lempel-Ziv coding process Parser Output 1 11 110 0 01 010 10 101 1101 0111 100 0101 01010 00 1111 010100 001
Location 00001 00010 00011 00100 00101 00110 00111 01000 01001 01010 01011 01100 01101 01110 01111 10000 10001
Encoded Output 000001 000011 000100 000000 001001 001010 000010 001111 000111 000101 001110 001101 011000 001000 010101 011010 011101
Chapter 18
VoIP and Multimedia Networking

1. (a) Voice bandwidth for telecommunication = 4 kHz
Number of samples per second = 2 × 4 = 8 Ksamples/s
At 8 b/sample: 8 × 8 = 64 kb/s
(b) RTP header = 12 bytes
(c) RTP encapsulation: RTP (12), UDP (8), IP (≥ 20)
Header = 12 + 8 + 20 = 40 bytes
Data for 0.5 s ⇒ 0.5 × 64 kb/s = 32 kb = 4 kbytes
Packet = 4 kbytes + 40 bytes ≈ 4,040 bytes
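The packet-size bookkeeping in this problem is simple enough to script; the sketch below assumes the same numbers as above (64 kb/s voice, 0.5 s of audio per packet, and 40 bytes of RTP/UDP/IP headers).

```python
# RTP/UDP/IP encapsulation size for Problem 1 (assumed: 64 kb/s voice, 0.5 s per packet).
voice_rate_bps = 2 * 4_000 * 8            # 8 ksamples/s x 8 b/sample = 64 kb/s
payload_bytes = voice_rate_bps * 0.5 / 8  # 0.5 s of voice -> 4,000 bytes
header_bytes = 12 + 8 + 20                # RTP + UDP + IPv4 headers
print(int(payload_bytes + header_bytes), "bytes per packet")   # ~4040
```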
2. (a) Assume packets (segments) of 1,500 byte long including IP, UDP, and RTP headers. The RTP header consists of: • 12 B Common RTP header: 1 B (V+P+X+CSC), 1 B (M+Payload Type), 2 B (Seq. No.), 4 B (Time Stamp), and 4 B (Sync Source ID). • 8 B Contributing Source RTP header: 4 B (Contributing Source1 ID), and 4 B (Contributing Source2 ID). Thus, the total packet payload size = 1,500 B - [20B (IP header) 149
150
Chapter 18. VoIP and Multimedia Networking + 12B (UDP header) + 20B (total RTP headers)]= 1,448 B (b) Total combined two source data rates = 2 × 31 Kb/s ×(1/8) B/b= 7.75 KB/s Time required for one packet =
1,448 B 7.75 KB/s
= 186.8 ms
Number of packets generated in 5 minutes =
5×60 186.8 ms
= 1,605
packets.
3. N/A
4. 1,280×1,024 pixel blocks 1 packet = 0.1 row 1 chunk = 1 pixel cblock (a) Sample sapce ⇒ 77 samples ⇒ each sample (pixel) = 7 b A chunk = 1 pixel block = (8×8 pixels) × 7 b = 448 b A chunk + header = 448 b + (4 B) × 8 (b/B) = 480 b (b) 1 packet = 0.1 row = (0.1)(1,280) = 128 pixel blocks = 128 chunks = 128 chunks × 480 b/chunk = 61,440 b A packet + header = 61,440 b + 12 × 8 (b/B) = 61,536 b (c) Video clip = 4 minutes 1 sec = 30 images Total number of images/min = (4 min) × 60 (sec/min) × 30 = 7,200 images Total number of packets/image = (10 packets/row) × 1,024 (rows) = 10,240 packets/image Total number of packets/4 min = (7,200) × (10,240) = 73,728,000
151 Total number of bits/4 min = (73,728,000) × (61,440 b/packets) = 4.53 Tb Bandwidth =
4.53Tb 4 min
= 18.8 Gb/s
5. Data of Chunk: 1 pixel block × 10 phrases × 5 bits/phrase = 50 b Header of Chunk: = 4 B Chunk = 50 + 4 × 8 = 82 bits (a) Row data = 1280 pixel block × 82 b/pixel block SCTP header = 12 bytes Packet =
1280×82 8
+ 12 = 13, 132 byte ≈ 12.8 kbyte
(b) One frame = 12.8 kbyte/row × 1024 rows = 12.8 Mbyte Required B.W. = 12.8 Mbyte/frame × 30 frames/s × 8 b/B = 3 Gb/s (c) Total data = 3 Gb/s × 2 × 60 × 60 = 2, 700 GB/h
6. (a) FY (t) (y) = P [Y (t) ≤ y] = P [X(t) + 2t ≤ y] = P [X(t) − 2t] = FX(t) (y − 2t) ⇒ fY (t) (y) = FX (y − 2t) =
2 √ 1 e−(y−2t) /2αt 2παt
(b) FY (t)Y (t+1) (y1 , y2 ) = P [X(t) + 2t ≤ y1 , X(t + 1) + 2(t + 1) ≤ y2 ] = FX(t),X(t+1) (y1 − 2t, y2 − 2(t + 1)) ⇒ fY (t)Y (t+s) (y1 , y2 ) = fX(t),X(t+1) (y1 − 2t, y2 − 2(t + 1)) = fX(t) (y1 − 2t)fX(t) (y2 − y1 − 2) =
2
3−(y1√−2t)) /2αt 2παt
7. 70 cycles/min 6 pulses/cycle 1 pulse/chunk
×
−2) e−(y2 −y √1 2πα
2 /2α
152
Chapter 18. VoIP and Multimedia Networking
Q,R,S → 4 samples P,T,U → 1 sample (a) We use variable size chunk. Assume each sample is encoded by 8 bits. A packet consists of: 12B SCTP header + 6 chunks each with 16B chunk header. Total bits of the 3 chunks made by P,T,U pulses = 3(16B (chunk header) × 8 b/B + 1 sample/pulse × 8 b/sample) = 408 b Total bits of the 3 chunks made by Q,R,S pulses = 3(16B (chunk header) × 8 b/B+ 4 samples/pulse × 8 b/sample) = 480 b Total SCTP packet size = (408b + 480b) + [20B (IP header) + 12B (SCTP header)] × 8 b/B = 1144 b/(SCTP packet) 70×1144 b/packet = 1335 b/s Bandwidth = 70 cycles/min = 60s (b) H = Max. Number of Human’s Heartbeat Cycles G = Max. number of Patients L = Max. Link Bandwidth We can choose the above parameters as long as we must can satisfy: L=
H×G×1144 60s
(c) The second option reduces the transmission overhead (packet overhead) but requires a larger bandwidth.
8. N/A
153
9. N/A
10. N/A
154
Chapter 18. VoIP and Multimedia Networking
Chapter 19
Mobile Ad-Hoc Networks 1. N/A
2. N/A
3. N/A
4. N/A
5. N/A
155
156
Chapter 19. Mobile Ad-Hoc Networks
Chapter 20
Wireless Sensor Networks 1. N/A
2. N/A
3. N/A
4. N/A
157