
SOLUTIONS MANUAL

HIGH-SPEED NETWORKS AND INTERNETS PERFORMANCE AND QUALITY OF SERVICE SECOND EDITION

WILLIAM STALLINGS

Copyright 2002: William Stallings

TABLE OF CONTENTS

Chapter 2: Protocols and Architecture .......... 1
Chapter 3: TCP and IP .......... 3
Chapter 4: Frame Relay .......... 8
Chapter 5: Asynchronous Transfer Mode .......... 11
Chapter 6: High-Speed LANs .......... 16
Chapter 7: Overview of Probability and Stochastic Processes .......... 18
Chapter 8: Queuing Analysis .......... 22
Chapter 9: Self-Similar Traffic .......... 27
Chapter 10: Congestion Control in Data Networks and Internets .......... 28
Chapter 11: Link Level Flow and Error Control .......... 30
Chapter 12: TCP Traffic Control .......... 34
Chapter 13: Traffic and Congestion Control in ATM Networks .......... 37
Chapter 14: Overview of Graph Theory and Least-Cost Paths .......... 40
Chapter 15: Interior Routing Protocols .......... 45
Chapter 16: Exterior Routing Protocols and Multicast .......... 47
Chapter 17: Integrated and Differentiated Services .......... 49
Chapter 18: RSVP and MPLS .......... 52
Chapter 19: Overview of Information Theory .......... 54
Chapter 20: Lossless Compression .......... 56
Chapter 21: Lossy Compression .......... 61


NOTICE

This manual contains solutions to all of the review questions and homework problems in High-Speed Networks and Internets. If you spot an error in a solution or in the wording of a problem, I would greatly appreciate it if you would forward the information via email to me at [email protected]. An errata sheet for this manual, if needed, is available at ftp://shell.shore.net/members/w/s/ws/S/

W.S.

CHAPTER 2
PROTOCOLS AND ARCHITECTURE

2.1 The guest effectively places the order with the cook. The host communicates this order to the clerk, who places the order with the cook. The phone system provides the physical means for the order to be transported from host to clerk. The cook gives the pizza to the clerk with the order form (acting as a "header" to the pizza). The clerk boxes the pizza with the delivery address, and the delivery van encloses all of the orders to be delivered. The road provides the physical path for delivery.

2.2 a. The PMs speak as if they are speaking directly to each other. For example, when the French PM speaks, he addresses his remarks directly to the Chinese PM. However, the message is actually passed through two translators via the phone system. The French PM's translator translates his remarks into English and telephones these to the Chinese PM's translator, who translates these remarks into Chinese.
b. An intermediate node serves to translate the message before passing it on.

2.3 Perhaps the major disadvantage is the processing and data overhead. There is processing overhead because as many as seven modules (OSI model) are invoked to move data from the application through the communications software. There is data overhead because of the appending of multiple headers to the data. Another possible disadvantage is that there must be at least one protocol standard per layer. With so many layers, it takes a long time to develop and promulgate the standards.

2.4 No. There is no way to be assured that the last message gets through, except by acknowledging it. Thus, either the acknowledgment process continues forever, or one army has to send the last message and then act with uncertainty.

2.5 A case could be made either way. First, look at the functions performed at the network layer to deal with the communications network (hiding the details from the upper layers). The network layer is responsible for routing data through the network, but with a broadcast network, routing is not needed. Other functions, such as sequencing, flow control, and error control between end systems, can be accomplished at layer 2, because the link layer will be a protocol directly between the two end systems, with no intervening switches. So it would seem that a network layer is not needed. Second, consider the network layer from the point of view of the upper layer using it. The upper layer sees itself attached to an access point into a network supporting communication with multiple devices. The layer for assuring that data sent across a network is delivered to one of a number of other end systems is the network layer. This argues for inclusion of a network layer. In fact, the OSI layer 2 is split into two sublayers. The lower sublayer is concerned with medium access control (MAC), assuring that only one end system at a time transmits; the MAC sublayer is also responsible for addressing other end systems across the LAN. The upper sublayer is called Logical Link Control (LLC). LLC performs traditional link control functions. With the MAC/LLC combination, no network layer is needed (but an internet layer may be needed).

2.6 The internet protocol can be defined as a separate layer. The functions performed by IP are clearly distinct from those performed at a network layer and those performed at a transport layer, so this would make good sense. The session and transport layer both are involved in providing an end-to-end service to the OSI user, and could easily be combined. This has been done in TCP/IP, which provides a direct application interface to TCP.

2.7 a. No. This would violate the principle of separation of layers. To layer (N – 1), the N-level PDU is simply data. The (N – 1) entity does not know about the internal format of the N-level PDU. It breaks that PDU into fragments and reassembles them in the proper order.
b. Each N-level PDU must retain its own header, for the same reason given in (a).

CHAPTER 3
TCP AND IP

3.1 A single control channel implies a single control entity that can manage all the resources associated with connections to a particular remote station. This may allow more powerful resource control mechanisms. On the other hand, this strategy requires a substantial number of permanent connections, which may lead to buffer or state table overhead.

3.2 UDP provides the source and destination port addresses and a checksum that covers the data field. These functions would not normally be performed by protocols above the transport layer. Thus UDP provides a useful, though limited, service.

3.3 The IP entity in the source may need the ID and Don't-Fragment parameters. If the IP source entity needs to fragment, these two parameters are essential. Ideally, the IP source entity should not need to bother looking at the TTL parameter, since it should have been set to some positive value by the source IP user. It can be examined as a reality check. Intermediate systems clearly need to examine the TTL parameter and will need to examine the ID and Don't-Fragment parameters if fragmentation is desired. The destination IP entity needs to examine the ID parameter if reassembly is to be done, and also the TTL parameter if that is used to place a time limit on reassembly. The destination IP entity should not need to look at the Don't-Fragment parameter.

3.4 If intermediate reassembly is not allowed, the datagram must eventually be segmented to the smallest allowable size along the route. Once the datagram has passed through the network that imposes the smallest-size restriction, the segments may be unnecessarily small for later networks, degrading performance. On the other hand, intermediate reassembly requires compute and buffer resources at the intermediate routers. Furthermore, all segments of a given original datagram would have to pass through the same intermediate node for reassembly, which would prohibit dynamic routing.

3.5 The header is a minimum of 20 octets.

3.6 Possible reasons for strict source routing: (1) to test some characteristics of a particular path, such as transit delay or whether or not the path even exists; (2) the source wishes to avoid certain unprotected networks for security reasons; (3) the source does not trust that the routers are routing properly. Possible reasons for loose source routing: (1) allows the source to control some aspects of the route, similar to choosing a long-distance carrier in the telephone network; (2) it may be that not all of the routers recognize all addresses and that for a particular remote destination, the datagram needs to be routed through a "smart" router.

3.7 A buffer is set aside big enough to hold the arriving datagram, which may arrive in fragments. The difficulty is to know when the entire datagram has arrived. Note that we cannot simply count the number of octets (bytes) received so far because duplicate fragments may arrive.

a. Each hole is described by a descriptor consisting of hole.first, the number of the first octet in the hole, relative to the beginning of the buffer, and hole.last, the number of the last octet. Initially, there is a single hole descriptor with hole.first = 1 and hole.last = some maximum value. Each arriving fragment is characterized by fragment.first and fragment.last, which can be calculated from the Length and Offset fields.
1. Select the next hole descriptor from the hole descriptor list. If there are no more entries, go to step eight.
2. If fragment.first is greater than hole.last, go to step one.
3. If fragment.last is less than hole.first, go to step one. (If either step two or step three is true, then the newly arrived fragment does not overlap with the hole in any way, so we need pay no further attention to this hole. We return to the beginning of the algorithm where we select the next hole for examination.)
4. Delete the current entry from the hole descriptor list. (Since neither step two nor step three was true, the newly arrived fragment does interact with this hole in some way. Therefore, the current descriptor will no longer be valid. We will destroy it, and in the next two steps we will determine whether or not it is necessary to create any new hole descriptors.)
5. If fragment.first is greater than hole.first, then create a new hole descriptor "new-hole" with new-hole.first equal to hole.first, and new-hole.last equal to fragment.first minus one. (If the test in step five is true, then the first part of the original hole is not filled by this fragment. We create a new descriptor for this smaller hole.)
6. If fragment.last is less than hole.last and fragment.more-fragments is true, then create a new hole descriptor "new-hole", with new-hole.first equal to fragment.last plus one and new-hole.last equal to hole.last. (This test is the mirror of step five, with one additional feature. Initially, we did not know how long the reassembled datagram would be, and therefore we created a hole reaching from zero to (effectively) infinity. Eventually, we will receive the last fragment of the datagram. At this point, the hole descriptor that reaches from the last octet of the buffer to infinity can be discarded. The fragment that contains the end of the datagram indicates this fact by a flag in the internet header called "more fragments". The test of this bit in this statement prevents us from creating a descriptor for the unneeded hole that describes the space from the end of the datagram to infinity.)
7. Go to step one.
8. If the hole descriptor list is now empty, the datagram is now complete. Pass it on to the higher level protocol processor for further handling. Otherwise, return.
b. The hole descriptor list could be managed as a separate list. A simpler technique is to put each hole descriptor in the first octets of the hole itself. The descriptor must contain hole.last plus pointers to the predecessor and successor holes.

3.8 The original datagram includes a 20-octet header and a data field of 4460 octets. The Ethernet frame can take a payload of 1500 octets, so each frame can carry an IP datagram with a 20-octet header and 1480 data octets. Note that 1480 is divisible by 8, so we can use the maximum-size frame for each fragment except the last. To fit 4460 data octets into frames that carry 1480 data octets we need:

3 datagrams × 1480 octets = 4440 octets, plus 1 datagram that carries 20 data octets (plus 20 IP header octets).

The relevant fields in each IP fragment:

Fragment 1: Total Length = 1500; More Flag = 1; Offset = 0
Fragment 2: Total Length = 1500; More Flag = 1; Offset = 185
Fragment 3: Total Length = 1500; More Flag = 1; Offset = 370
Fragment 4: Total Length = 40; More Flag = 0; Offset = 555
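The fragment fields above can be reproduced programmatically. This is a sketch, not part of the manual; it assumes the same 20-octet IP header and 1500-octet Ethernet payload as the problem.

```python
# Sketch (not from the manual): reproduce the fragment fields of Problem 3.8.

def fragment(data_len, header_len=20, mtu=1500):
    """Return (total_length, more_flag, offset) for each IP fragment."""
    max_data = (mtu - header_len) // 8 * 8   # data per fragment, multiple of 8
    frags, offset = [], 0
    while data_len > 0:
        chunk = min(max_data, data_len)
        data_len -= chunk
        more = 1 if data_len > 0 else 0
        frags.append((chunk + header_len, more, offset // 8))  # offset in 8-octet units
        offset += chunk
    return frags

for f in fragment(4460):
    print(f)
```

Note that the Offset field is expressed in 8-octet units, which is why 1480-octet data fields give offsets of 0, 185, 370, and 555.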

3.9 For computing the IP checksum, the header is treated as a sequence of 16-bit words, and the 16-bit checksum field is set to all zeros. The checksum is then formed by performing the one's complement addition of all words in the header, and then taking the one's complement of the result. We can express this as follows:

A = W(1) +' W(2) +' . . . +' W(M)
C = ~A

where
  +'   = one's complement addition
  C    = 16-bit checksum
  A    = one's complement summation made with the checksum value set to 0
  ~A   = one's complement of A
  M    = length of block in 16-bit words
  W(i) = value of ith word

Suppose that the value in word k is changed by Z = new_value – old_value. Let A' be the value of A after the change and C' be the value of C after the change. Then

A' = A +' Z
C' = ~(A +' Z) = C +' ~Z

3.10 In general, those parameters that influence the progress of the fragments through the internet must be included with each fragment. RFC 791 lists the following options that must be included in each fragment: Security (each fragment must be treated with the same security policy); Loose Source Routing (each fragment must follow the same loose route); Strict Source Routing (each fragment must follow the same strict route). RFC 791 lists the following options that should be included only in the first fragment: Record Route (unnecessary, and too much overhead to do in all fragments); Internet Timestamp (unnecessary, and too much overhead to do in all fragments).
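The one's complement checksum of Problem 3.9 can be sketched as follows. This is not part of the manual, and the sample header words are an assumed example (checksum field zeroed):

```python
# Sketch: one's complement checksum over 16-bit words (Problem 3.9).

def ones_add(a, b):
    """One's complement addition (+' in the solution): end-around carry."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def checksum(words):
    """C = ~(W(1) +' W(2) +' ... +' W(M)), as a 16-bit value."""
    a = 0
    for w in words:
        a = ones_add(a, w)
    return ~a & 0xFFFF

# Assumed example header words, with the checksum field set to zero:
hdr = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011, 0x0000,
       0xC0A8, 0x0001, 0xC0A8, 0x00C7]
c = checksum(hdr)

# Verification property: the one's complement sum over all words,
# including the checksum itself, equals 0xFFFF ("minus zero").
total = 0
for w in hdr + [c]:
    total = ones_add(total, w)
print(hex(c), hex(total))
```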

3.11 Data plus transport header plus internet header equals 1820 bits. This data is delivered in a sequence of packets, each of which contains 24 bits of network header and up to 776 bits of higher-layer headers and/or data. Three network packets are needed. Total bits delivered = 1820 + 3 × 24 = 1892 bits.
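A quick check of this arithmetic (a sketch, not part of the manual):

```python
# Sketch: packet count and total delivered bits for Problem 3.11.
import math

payload_bits = 1820        # data + transport header + internet header
per_packet_payload = 776   # higher-layer bits carried per network packet
network_header = 24        # bits of network header per packet

packets = math.ceil(payload_bits / per_packet_payload)   # 1820/776 -> 3
total_bits = payload_bits + packets * network_header
print(packets, total_bits)
```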

3.12
• Version: Same field appears in the IPv6 header; the value changes from 4 to 6.
• IHL: Eliminated; not needed, since the IPv6 header is fixed length.
• Type of Service: This field is eliminated. Three bits of this field define an 8-level precedence value; this is replaced with the Priority field, which defines an 8-level precedence value for non-congestion-control traffic. The remaining bits of the type-of-service field deal with reliability, delay, and throughput; equivalent functionality may be supplied with the Flow Label field.
• Total Length: Replaced by the Payload Length field.
• Identification: Same field, with more bits, appears in the Fragment header.
• Flags: The More bit appears in the Fragment header. In IPv6, only source fragmenting is allowed; therefore, the Don't Fragment bit is eliminated.
• Fragment Offset: Same field appears in the Fragment header.
• Time to Live: Replaced by the Hop Limit field in the IPv6 header, with in practice the same interpretation.
• Protocol: Replaced by the Next Header field in the IPv6 header, which either specifies the next IPv6 extension header or specifies the protocol that uses IPv6.
• Header Checksum: This field is eliminated. It was felt that the value of this checksum was outweighed by the performance penalty.
• Source Address: The 32-bit IPv4 source address field is replaced by the 128-bit source address in the IPv6 header.
• Destination Address: The 32-bit IPv4 destination address field is replaced by the 128-bit destination address in the IPv6 header.

3.13

The recommended order, with comments:
1. IPv6 header: This is the only mandatory header, and it will indicate if one or more additional headers follow.
2. Hop-by-Hop Options header: After the IPv6 header is processed, a router will need to examine this header immediately afterwards to determine if there are any options to be exercised on this hop.
3. Destination Options header: For options to be processed by the first destination that appears in the IPv6 Destination Address field plus subsequent destinations listed in the Routing header. This header must precede the Routing header, because it contains instructions to the node that receives this datagram and that will subsequently forward this datagram using the Routing header information.
4. Routing header: This header must precede the Fragment header, which is processed at the final destination, whereas the Routing header must be processed by an intermediate node.
5. Fragment header: This header must precede the Authentication header, because authentication is performed based on a calculation on the original datagram; therefore the datagram must be reassembled prior to authentication.
6. Authentication header: The relative order of this header and the Encapsulating Security Payload header depends on the security configuration. Generally, authentication is performed before encryption so that the authentication information can be conveniently stored with the message at the destination for later reference. It is more convenient to do this if the authentication information applies to the unencrypted message; otherwise the message would have to be re-encrypted to verify the authentication information.
7. Encapsulating Security Payload header: See the preceding comment.
8. Destination Options header: For options to be processed only by the final destination of the packet. These options are only to be processed after the datagram has been reassembled (if necessary) and authenticated (if necessary). If the Encapsulating Security Payload header is used, then this header is encrypted and can only be processed after processing that header.

There is no advantage to placing this header before the Encapsulating Security Payload header; one might as well encrypt as much of the datagram as possible for security.
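The recommended order and the pairwise constraints the solution argues for can be captured in a short sketch (not part of the manual; the list labels are descriptive, not protocol field names):

```python
# Sketch: the recommended IPv6 extension header order of Problem 3.13,
# with the solution's ordering constraints checked as assertions.

order = [
    "IPv6",                            # the only mandatory header
    "Hop-by-Hop Options",              # examined by every router on the path
    "Destination Options (en route)",  # for destinations in the Routing header
    "Routing",                         # processed by intermediate nodes
    "Fragment",                        # processed at the final destination
    "Authentication",                  # needs the reassembled datagram
    "Encapsulating Security Payload",
    "Destination Options (final)",     # for the final destination only
]

i = order.index
assert i("Routing") < i("Fragment") < i("Authentication")      # steps 4-6
assert i("Authentication") < i("Encapsulating Security Payload")
assert i("Hop-by-Hop Options") == 1                            # right after IPv6
print("constraints hold")
```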


CHAPTER 4
FRAME RELAY

4.1 The argument ignores the overhead of the initial circuit setup and the circuit teardown.

4.2 a. Circuit Switching

T = C1 + C2, where
C1 = Call Setup Time = S = 0.2
C2 = Message Delivery Time = Propagation Delay + Transmission Time
   = N × D + L/B = 4 × 0.001 + 3200/9600 = 0.337
T = 0.2 + 0.337 = 0.537 sec

Datagram Packet Switching

T = D1 + D2 + D3 + D4, where
D1 = Time to transmit and deliver all packets through the first hop
D2 = Time to deliver the last packet across the second hop
D3 = Time to deliver the last packet across the third hop
D4 = Time to deliver the last packet across the fourth hop

There are P – H = 1024 – 16 = 1008 data bits per packet. A message of 3200 bits requires four packets (3200 bits / 1008 bits per packet = 3.17 packets, which we round up to 4 packets).

D1 = 4 × t + p, where
t = transmission time for one packet
p = propagation delay for one hop
D1 = 4 × (P/B) + D = 4 × (1024/9600) + 0.001 = 0.428
D2 = D3 = D4 = t + p = (P/B) + D = (1024/9600) + 0.001 = 0.108
T = 0.428 + 0.108 + 0.108 + 0.108 = 0.752 sec

Virtual Circuit Packet Switching

T = V1 + V2, where
V1 = Call Setup Time
V2 = Datagram Packet Switching Time
T = S + 0.752 = 0.2 + 0.752 = 0.952 sec
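These three computations can be reproduced in a short script (a sketch, not part of the manual; values are rounded per term, as in the solution's arithmetic):

```python
# Sketch: the three delay computations of Problem 4.2a.

S, D, B = 0.2, 0.001, 9600      # setup time (s), per-hop delay (s), rate (bps)
L, P, H, N = 3200, 1024, 16, 4  # message, packet, header sizes (bits); hops

# Circuit switching: T = S + N*D + L/B
T_circuit = round(S + N * D + L / B, 3)

# Datagram: first hop carries all packets, later hops only the last packet.
packets = -(-L // (P - H))                 # ceiling division: 4 packets
D1 = round(packets * (P / B) + D, 3)       # all packets across the first hop
Dn = round(P / B + D, 3)                   # last packet across each later hop
T_datagram = round(D1 + (N - 1) * Dn, 3)

# Virtual circuit: call setup plus the datagram delivery time.
T_vc = round(S + T_datagram, 3)

print(T_circuit, T_datagram, T_vc)
```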


b. Circuit Switching vs. Datagram Packet Switching

Tc = End-to-End Delay, Circuit Switching
Tc = S + N × D + L/B
Td = End-to-End Delay, Datagram Packet Switching
Np = Number of packets = ⌈L/(P – H)⌉
Td = D1 + (N – 1)D2
D1 = Time to transmit and deliver all packets through the first hop = Np(P/B) + D
D2 = Time to deliver the last packet through a hop = P/B + D
Td = (Np + N – 1)(P/B) + N × D

Setting Tc = Td gives S + L/B = (Np + N – 1)(P/B).

Circuit Switching vs. Virtual Circuit Packet Switching

TV = End-to-End Delay, Virtual Circuit Packet Switching
TV = S + Td

Setting Tc = TV gives L/B = (Np + N – 1)(P/B).

Datagram vs. Virtual Circuit Packet Switching

Td = TV – S

4.3 From Problem 4.2, we have Td = (Np + N – 1)(P/B) + N × D. For maximum efficiency, we assume that Np = L/(P – H) is an integer. Also, it is assumed that D = 0. Thus

Td = (L/(P – H) + N – 1)(P/B)

To minimize as a function of P, take the derivative:

0 = dTd/dP
0 = (1/B)(L/(P – H) + N – 1) – (P/B)L/(P – H)^2
0 = L(P – H) + (N – 1)(P – H)^2 – LP
0 = –LH + (N – 1)(P – H)^2
(P – H)^2 = LH/(N – 1)
P = H + √(LH/(N – 1))
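The closed form for the optimal packet size can be checked against a brute-force scan of Td(P). This is a sketch, not from the manual, and the parameter values are assumed for illustration:

```python
# Sketch: verify P = H + sqrt(LH/(N-1)) of Problem 4.3 numerically.
import math

L, H, N, B = 10000, 16, 5, 9600   # assumed example values

def Td(P):
    # Td with Np treated as the continuous value L/(P - H), and D = 0
    return (L / (P - H) + N - 1) * (P / B)

P_closed = H + math.sqrt(L * H / (N - 1))          # 16 + sqrt(40000) = 216
# Scan a fine grid of P values above H and keep the minimizer.
P_best = min((P / 100 for P in range(H * 100 + 1, 200000)), key=Td)
print(P_closed, P_best)
```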


4.4 The number of hops is one less than the number of nodes visited.
a. The fixed number of hops is 2.
b. The furthest distance from a station is half-way around the loop. On average, a station will send data half this distance. For an N-node network, the average number of hops is (N/4) – 1.
c. 1.

4.5 The mean node-node path is twice the mean node-root path. Number the levels of the tree with the root as 1 and the deepest level as N. The path from the root to level N requires N – 1 hops, and 0.5 of the nodes are at this level. The path from the root to level N – 1 has 0.25 of the nodes and a length of N – 2 hops. Hence the mean path length, L, is given by

L = 0.5 × (N – 1) + 0.25 × (N – 2) + 0.125 × (N – 3) + . . .

L = Σ(i=1 to ∞) N(0.5)^i – Σ(i=1 to ∞) i(0.5)^i = N – 2
Thus the mean node-node path is 2N – 4.

4.6 Yes. Errors are caught at the link level, but this only catches transmission errors. If a packet-switching node fails or corrupts a packet, the packet will not be delivered correctly. A higher-layer end-to-end protocol, such as TCP, must provide end-to-end reliability, if desired.

4.7 Reset collision is like clear collision. Since both sides know that the other wants to reset the circuit, they just reset their variables.

4.8 On each end, a virtual circuit number is chosen from the pool of locally available numbers and has only local significance. Otherwise, there would have to be global management of numbers.
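The series identity used in Problem 4.5, Σ(N – i)(0.5)^i = N – 2, can be checked numerically. This sketch is not part of the manual:

```python
# Sketch: numerical check of the series used in Problem 4.5.

def mean_root_path(N, terms=200):
    """0.5(N-1) + 0.25(N-2) + 0.125(N-3) + ..., truncated after `terms` terms."""
    return sum((N - i) * 0.5 ** i for i in range(1, terms + 1))

for N in (5, 10, 20):
    L = mean_root_path(N)
    print(N, round(L, 6), round(2 * L, 6))  # node-node mean approaches 2N - 4
```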


CHAPTER 5
ASYNCHRONOUS TRANSFER MODE

5.1
Controlling → controlled          Controlled → controlling
0000  NO_HALT, NULL               0000  Terminal is uncontrolled. Cell is assigned or on an uncontrolled ATM connection.
1000  HALT, NULL_A, NULL_B        0001  Terminal is controlled. Cell is unassigned or on an uncontrolled ATM connection.
0100  NO_HALT, SET_A, NULL_B      0101  Terminal is controlled. Cell on a controlled ATM connection, Group A.
1100  HALT, SET_A, NULL_B         0011  Terminal is controlled. Cell on a controlled ATM connection, Group B.
0010  NO_HALT, NULL_A, SET_B
1010  HALT, NULL_A, SET_B
0110  NO_HALT, SET_A, SET_B
1110  HALT, SET_A, SET_B

All other values are ignored.

5.2

[State diagram: HUNT → (correct HEC, checked bit by bit) → PRESYNCH → (δ consecutive correct HEC, checked cell by cell) → SYNCH; incorrect HEC in PRESYNCH, or α consecutive incorrect HEC in SYNCH, returns the process to HUNT.]

The procedure is as follows:

1. In the HUNT state, a cell delineation algorithm is performed bit by bit to determine if the HEC coding law is observed (i.e., a match between the received HEC and the calculated HEC). Once a match is achieved, it is assumed that one header has been found, and the method enters the PRESYNCH state.
2. In the PRESYNCH state, a cell structure is now assumed. The cell delineation algorithm is performed cell by cell until the encoding law has been confirmed δ times consecutively.
3. In the SYNCH state, the HEC is used for error detection and correction (see Figure 17.17). Cell delineation is assumed to be lost if the HEC coding law is recognized as incorrect α times consecutively.

The values of α and δ are design parameters. Greater values of δ result in longer delays in establishing synchronization but in greater robustness against false delineation. Greater values of α result in longer delays in recognizing a misalignment but in greater robustness against false misalignment.
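A minimal sketch of this three-state process (not part of the manual; the HEC comparison is abstracted to a boolean per observation, and the α, δ defaults are assumed example values):

```python
# Sketch: HUNT / PRESYNCH / SYNCH cell delineation (Problem 5.2).

class Delineation:
    def __init__(self, delta=6, alpha=7):   # assumed example design parameters
        self.delta, self.alpha = delta, alpha
        self.state, self.count = "HUNT", 0

    def observe(self, hec_ok):
        """Feed one received-vs-calculated HEC comparison; return the new state."""
        if self.state == "HUNT":
            if hec_ok:                        # candidate header found
                self.state, self.count = "PRESYNCH", 1
        elif self.state == "PRESYNCH":
            if hec_ok:
                self.count += 1
                if self.count >= self.delta:  # delta consecutive correct: lock on
                    self.state, self.count = "SYNCH", 0
            else:
                self.state, self.count = "HUNT", 0
        else:  # SYNCH
            if hec_ok:
                self.count = 0
            else:
                self.count += 1
                if self.count >= self.alpha:  # alpha consecutive incorrect: lost
                    self.state, self.count = "HUNT", 0
        return self.state

d = Delineation()
for ok in [True] * 6:          # the HUNT hit plus five confirmations
    state = d.observe(ok)
print(state)
```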

a. We reason as follows. A total of X octets are to be transmitted. This will require a total of ⌈X/L⌉ cells, where ⌈X/L⌉ is the smallest integer greater than or equal to X/L. Each cell consists of (L + H) octets, where L is the number of data field octets and H is the number of header octets. Thus

N = X / (⌈X/L⌉ × (L + H))

The efficiency is optimal for all values of X which are integer multiples of the cell information size. In the optimal case, the efficiency becomes

Nopt = X / (X + H × (X/L)) = L / (L + H)

For the case of ATM, with L = 48 and H = 5, we have Nopt = 0.91.

b. Assume that the entire X octets to be transmitted can fit into a single variable-length cell. Then

N = X / (X + H + Hv)

c. [Figure: transmission efficiency N versus message size X (0 to 240 octets); N for fixed-size cells is a sawtooth with peaks of Nopt = 0.91 at multiples of 48; N for variable-length cells rises smoothly toward 1.0.]

N for fixed-size cells has a sawtooth shape. For long messages, the optimal achievable efficiency is approached. It is only for very short cells that efficiency is rather low. For variable-length cells, efficiency can be quite high, approaching 100% for large X. However, it does not provide significant gains over fixed-length cells for most values of X.

5.4

a. As we have already seen in Problem 5.3:

N = L / (L + H)

b. D = 8 × L / R

c. [Figure: packetization delay in ms (curves D64 and D32) and transmission efficiency N versus data field size of the cell in octets, from 8 to 128.]

A data field of 48 octets, which is what is used in ATM, seems to provide a reasonably good tradeoff between the requirements of low delay and high efficiency.

5.5 a. The transmission time for one cell through one switch is t = (53 × 8)/(43 × 10⁶) = 9.86 µs.
b. The maximum time from when a typical video cell arrives at the first switch (and possibly waits) until it is finished being transmitted by the 5th and last one is 2 × 5 × 9.86 µs = 98.6 µs.
c. The average time from the input of the first switch to clearing the fifth is (5 + 0.6 × 5 × 0.5) × 9.86 µs = 64.09 µs.
d. The transmission time is always incurred, so the jitter is due only to the waiting for switches to clear. In the first case the maximum jitter is 49.3 µs. In the second case the average jitter is 64.09 − 49.3 = 14.79 µs.
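The efficiency and delay expressions in Problems 5.3 and 5.4 are easy to check numerically. The sketch below assumes nothing beyond the formulas derived above; the function names and the 64-kbps voice rate used for the delay example are illustrative choices.

```python
# Transmission efficiency and packetization delay for fixed-size cells.
# L = data octets per cell, H = header octets (H = 5 for ATM).
import math

def fixed_cell_efficiency(x_octets, l=48, h=5):
    # N = X / (ceil(X/L) * (L + H)), Problem 5.3a
    cells = math.ceil(x_octets / l)
    return x_octets / (cells * (l + h))

def optimal_efficiency(l=48, h=5):
    # N_opt = L / (L + H)
    return l / (l + h)

def packetization_delay(l=48, r_bps=64_000):
    # D = 8L/R (seconds), Problem 5.4b; 64 kbps is an assumed voice rate
    return 8 * l / r_bps

print(round(optimal_efficiency(), 2))          # 0.91 for ATM
print(round(fixed_cell_efficiency(49), 2))     # 0.46: one octet past a cell boundary
print(round(1000 * packetization_delay(), 1))  # 6.0 ms to fill a 48-octet field
```

The second print shows the sawtooth effect: a message one octet longer than a multiple of 48 forces an almost-empty extra cell.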


5.6

a. The reception of a valid BOM is required to enter reassembly mode, so any subsequent COM and EOM cells are rejected. Thus, the loss of a BOM results in the complete loss of the PDU.
b. Incorrect SN progression between SAR-PDUs reveals the loss of a COM, except for the cases covered by (c) and (d) below. The result is that at least the first 40 octets of the AAL-SDU are correctly received, namely the BOM that originally began the reassembly, and any subsequent COMs up to the point where cell loss occurred. Data from the BOM through the last SAR-PDU received with a correct SN can be passed up to the AAL user as a partial CPCS-PDU.
c. The SN wraps around so that there is no incorrect SN progression, but the loss of data is detected by the CPCS-PDU being undersized. In this case, only the BOM may be legitimately retrieved.
d. The same answer as for (c).

5.7

a. If the BOM of the second block arrives before the EOM of the first block, then the partially reassembled first block must be released. The entire partially reassembled CPCS-PDU received to that point is considered valid and passed to the AAL user along with an error indication. This mechanism works when just the EOM is lost or when a cell burst knocks out some COMs followed by the EOM.
b. This might be detected by the SN mechanism. If not, the loss will be detected when the Btag and Etag fields fail to match. The Length indication may fail to pick up this error if the cell burst loses as many cells as are added by concatenating the two CPCS-PDU fragments. In this case only the first BOM may be legitimately retrieved.

5.8

a. Single-bit errors are not picked up until the CPCS-PDU CRC is calculated, and they result in the discarding of the entire CPCS-PDU.
b. Loss of a cell with SDU = 0 is detected by an incorrect CRC. If the CRC fails to catch the error, the Length field mismatch ensures that the CPCS-PDU is discarded.
c. Loss of a cell with SDU = 1 is detectable in three ways. First, the SAR-PDUs of the following CPCS-PDU may be appended to the first, resulting in a CRC error or Length mismatch being flagged when the second CPCS-PDU trailer arrives. Second, the AAL may enforce a length limit which, if exceeded while appending the second CPCS-PDU, can flag an error and cause the assembled data to be discarded. Third, a timer may be attached to the CPCS-PDU reassembly; if it expires before the CPCS-PDU is completely received, the assembled data is discarded.


CHAPTER 6
HIGH-SPEED LANS

6.1 a. Assume a mean distance between stations of 0.375 km. This is an approximation based on the following observation. For a station on one end, the average distance to any other station is 0.5 km. For a station in the center, the average distance is 0.25 km. With this assumption, the time to send equals transmission time plus propagation time.

T = 10³ bits/10⁷ bps + 375 m/(200 × 10⁶ m/sec) ≈ 102 µsec

b. Tinterfere = 375 m/(200 × 10⁶ m/sec) = 1.875 µsec
Tinterfere (bit-times) = 10⁷ × 1.875 × 10⁻⁶ = 18.75 bit-times

6.2 a. Again, assume a mean distance between stations of 0.375 km.

T = 10³ bits/10⁸ bps + 375 m/(200 × 10⁶ m/sec) ≈ 12 µsec

b. Tinterfere = 375 m/(200 × 10⁶ m/sec) = 1.875 µsec
Tinterfere (bit-times) = 10⁸ × 1.875 × 10⁻⁶ = 187.5 bit-times

6.3 The fraction of slots wasted due to multiple transmission attempts is equal to the probability that there will be 2 or more transmission attempts in a slot.
Pr[2 or more attempts] = 1 − Pr[no attempts] − Pr[exactly 1 attempt] = 1 − (1 − p)^N − Np(1 − p)^(N−1)

6.4
Slot time = Propagation Time + Transmission Time
          = 10³ m/(2 × 10⁸ m/sec) + 10² bits/10⁷ bps = 15 × 10⁻⁶ sec
Slot rate = 1/Slot time = 6.67 × 10⁴ slots/sec
Data rate = 100 bits/slot × Slot rate = 6.67 × 10⁶ bps
Data rate per station for N stations = (6.67 × 10⁶)/N

6.5 Define
PFi = Pr[fail on attempt i]
Pi = Pr[fail on first (i − 1) attempts, succeed on ith]

Then, the mean number of retransmission attempts before one station successfully retransmits is given by:

E[retransmissions] = Σ (i = 1 to ∞) i × Pi

For two stations:
PFi = 1/2^K, where K = MIN[i, 10]
Pi = (1 − PFi) × Π (j = 1 to i−1) PFj

PF1 = 0.5;    P1 = 0.5
PF2 = 0.25;   P2 = 0.375
PF3 = 0.125;  P3 = 0.109
PF4 = 0.0625; P4 = 0.015

The remaining terms are negligible. E[retransmissions] = 1.637
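The expectation above can be summed directly. A minimal sketch (the 50-term cutoff is an arbitrary assumption; the series converges long before that):

```python
# Mean number of retransmission attempts for two stations (Problem 6.5),
# with PF_i = 1/2^K, K = MIN[i, 10]. Summing beyond the first four terms
# gives 1.642; the text's 1.637 keeps only the terms through i = 4.
def expected_retransmissions(terms=50):
    total = 0.0
    fail_all_earlier = 1.0                    # product of PF_j for j < i
    for i in range(1, terms + 1):
        pf = 1.0 / 2 ** min(i, 10)
        p_i = (1.0 - pf) * fail_all_earlier   # succeed exactly on attempt i
        total += i * p_i
        fail_all_earlier *= pf
    return total

print(round(expected_retransmissions(), 3))   # 1.642
```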


CHAPTER 7
OVERVIEW OF PROBABILITY AND STOCHASTIC PROCESSES

7.1 You should always switch to the other box. The box you initially chose had a 1/3 probability of containing the banknote. The other two, taken together, have a 2/3 probability. But once I open the one that is empty, the other now has a 2/3 probability all by itself. Therefore by switching you raise your chances from 1/3 to 2/3. Don't feel bad if you got this wrong. Because there are only two boxes to choose from at the end, there is a strong intuition to feel that the odds are 50-50. Marilyn Vos Savant published the correct solution in Parade Magazine, and subsequently (December 2, 1990) published letters from several distinguished mathematicians blasting her (they were wrong, not her). A cognitive scientist at MIT presented this problem to a number of Nobel physicists and they systematically gave the wrong answer (Piattelli-Palmarini, M. "Probability: Neither Rational nor Capricious." Bostonia, March 1991).

7.2 The same Bostonia article referenced above reports on an experiment with this problem. Most subjects gave the answer 87%. The vast majority, including many physicians, gave a number above 50%. We need Bayes' Theorem to get the correct answer:

Pr[Disease | positive] = Pr[positive | Disease]Pr[Disease] / (Pr[positive | Disease]Pr[Disease] + Pr[positive | Well]Pr[Well])
                       = (0.87)(0.01) / ((0.87)(0.01) + (0.13)(0.99)) = 0.063

Many physicians who guessed wrong lamented, "If you are right, there is no point in making clinical tests!"

7.3 Let WB equal the event {witness reports Blue cab}. Then:

Pr[Blue | WB] = Pr[WB | Blue]Pr[Blue] / (Pr[WB | Blue]Pr[Blue] + Pr[WB | Green]Pr[Green])
              = (0.8)(0.15) / ((0.8)(0.15) + (0.2)(0.85)) = 0.41

This example, or something similar, is referred to as "the juror's fallacy."
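Both posteriors come from the same two-hypothesis form of Bayes' theorem, so one helper covers both problems (the function name is an illustrative choice):

```python
# Two-hypothesis Bayes' theorem (Problems 7.2 and 7.3):
# posterior = P(E|H)P(H) / (P(E|H)P(H) + P(E|not H)P(not H))
def posterior(p_e_given_h, p_h, p_e_given_not_h):
    num = p_e_given_h * p_h
    return num / (num + p_e_given_not_h * (1.0 - p_h))

print(round(posterior(0.87, 0.01, 0.13), 3))  # 0.063: diseased given a positive test
print(round(posterior(0.80, 0.15, 0.20), 2))  # 0.41: Blue cab given the witness report
```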


7.4 a. If K > 365, then it is impossible for all values to be different. So, we assume K ≤ 365. Now, consider the number of different ways, N, that we can have K values with no duplicates. We may choose any of the 365 values for the first item, any of the remaining 364 numbers for the second item, and so on. Hence, the number of different ways is:

N = 365 × 364 × … × (365 − K + 1) = 365!/(365 − K)!

If we remove the restriction that there are no duplicates, then each item can be any of 365 values, and the total number of possibilities is 365^K. So the probability of no duplicates is simply the fraction of sets of values that have no duplicates out of all possible sets of values:

Q(K) = [365!/(365 − K)!] / 365^K = 365!/((365 − K)! × 365^K)

b. P(K) = 1 − Q(K) = 1 − 365!/((365 − K)! × 365^K)
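P(K) can be computed directly if Q(K) is built as a running product rather than from the huge factorials; a minimal sketch:

```python
# Birthday problem (Problem 7.4): P(K) = 1 − Q(K),
# Q(K) = 365!/((365−K)! · 365^K), accumulated term by term.
def p_duplicate(k):
    q = 1.0
    for i in range(k):
        q *= (365 - i) / 365.0
    return 1.0 - q

print(round(p_duplicate(23), 4))    # 0.5073
print(round(p_duplicate(100), 7))   # 0.9999997
```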

Many people would guess that to have a probability greater than 0.5 that there is at least one duplicate, the number of people in the group would have to be about 100. In fact, the number is 23, with P(23) = 0.5073. For K = 100, the probability of at least one duplicate is 0.9999997.

7.5 a. The sample space consists of the 36 equally probable ordered pairs of integers from 1 to 6. The distribution of X is given in the following table.

xi    1      2      3      4      5      6
pi    1/36   3/36   5/36   7/36   9/36   11/36

For example, there are 3 pairs whose maximum is 2: (1, 2), (2, 2), (2, 1).

b. E[X] = 1(1/36) + 2(3/36) + 3(5/36) + 4(7/36) + 5(9/36) + 6(11/36) = 161/36 ≈ 4.47

E[X²] = 1(1/36) + 4(3/36) + 9(5/36) + 16(7/36) + 25(9/36) + 36(11/36) = 791/36 ≈ 22.0

Var[X] = E[X²] − µ² = 791/36 − (161/36)² ≈ 1.97; σX ≈ 1.40

7.6

a.
xi    −1    2     3     −4    5     −6
pi    1/6   1/6   1/6   1/6   1/6   1/6

b. E[X] = −1/6. On average the player loses, so the game is unfair.

7.7 a.
xi    −E        E        2E       3E
pi    125/216   75/216   15/216   1/216

b. E[X] = −17E/216 ≈ −0.08E. Here's another game to avoid.

7.8 a. E[X²] = Var[X] + µ² = 2504
b. Var[2X + 3] = 4Var[X] + Var[3] = 16; standard deviation = 4
c. Var[−X] = (−1)² Var[X] = 4; standard deviation = 2

7.9 The probability density function f(r) must be a constant in the interval (900, 1100) and its integral must equal 1; therefore f(r) = 1/200 in the interval (900, 1100). Then

Pr[950 ≤ r ≤ 1050] = (1/200) ∫ (950 to 1050) dr = 0.5

7.10 Let Z = X + Y.
Var[Z] = E[(Z − µZ)²] = E[(X + Y − µX − µY)²]
       = E[(X − µX)²] + E[(Y − µY)²] + 2E[(X − µX)(Y − µY)]
       = Var[X] + Var[Y] + 2r(X, Y)σXσY
The greater the correlation coefficient, the greater the variance of the sum. A similar derivation shows that the greater the correlation coefficient, the smaller the variance of the difference.

7.11 E[X] = Pr[X = 1]; E[Y] = Pr[Y = 1]; E[XY] = Pr[X = 1, Y = 1]
If X and Y are uncorrelated, then r(X, Y) = Cov[X, Y] = 0. By the definition of covariance, we therefore have E[XY] = E[X]E[Y]. Therefore
Pr[X = 1, Y = 1] = Pr[X = 1]Pr[Y = 1]
By similar reasoning, you can show that the other three joint probabilities are also products of the corresponding marginal (single-variable) probabilities. This proves that P(x, y) = P(x)P(y) for all (x, y), which is the definition of independence.

7.12 a. No. X and Y are dependent because the value of X determines the value of Y.
b. E[XY] = (−1)(1)(0.25) + (0)(0)(0.5) + (1)(1)(0.25) = 0
E[X] = (−1)(0.25) + (0)(0.5) + (1)(0.25) = 0
E[Y] = (1)(0.25) + (0)(0.5) + (1)(0.25) = 0.5
Cov(X, Y) = E[XY] − E[X]E[Y] = 0
c. X and Y are uncorrelated because Cov(X, Y) = 0.

7.13 µ(t) = E[g(t)] = g(t); Var[g(t)] = 0; R(t1, t2) = g(t1)g(t2)

7.14 E[Z] = µ(5) = 3; E[W] = µ(8) = 3
E[Z²] = R(5, 5) = 13; E[W²] = R(8, 8) = 13
E[ZW] = R(5, 8) = 9 + 4 exp(−0.6) = 11.195
Var[Z] = E[Z²] − (E[Z])² = 4; Var[W] = 4
Cov(Z, W) = E[ZW] − E[Z]E[W] = 2.195

7.15

We have E[Zi] = 0, Var[Zi] = E[Zi²] = 1, and E[Zi Zj] = 0 for i ≠ j.
The autocovariance:

C(tm, tm+n) = R(tm, tm+n) − E[Zm]E[Zm+n] = E[Zm Zm+n]

E[Zm Zm+n] = E[ (Σ (i = 0 to K) αi Zm−i) × (Σ (j = 0 to K) αj Zm+n−j) ]

Because E[Zi Zj] = 0 for i ≠ j, only the terms with (m − i) = (m + n − j) are nonzero. Using E[Zi²] = 1, we have

C(tm, tm+n) = Σ (i = 0 to K) αi αn+i, with the convention that αj = 0 for j > K

The covariance does not depend on m, and so the sequence is stationary.

7.16

E[A] = E[B] = 0; E[A²] = E[B²] = 1; E[AB] = 0; E[Xn] = 0
C(tm, tm+n) = R(tm, tm+n) − E[Xm]E[Xm+n] = E[Xm Xm+n]
= E[(A cos(mλ) + B sin(mλ))(A cos((m+n)λ) + B sin((m+n)λ))]
= cos(mλ)cos((m+n)λ) + sin(mλ)sin((m+n)λ) = cos(nλ)
The final step uses the identities
sin(α + β) = sin α cos β + cos α sin β; cos(α + β) = cos α cos β − sin α sin β


CHAPTER 8
QUEUING ANALYSIS

8.1 Consider the experience of a single item arriving. When the item arrives, it will find on average r items already in the queue or being served. When the item completes service and leaves the system, it will leave behind on average the same number of items in the queue or being served, namely r. This is in accord with saying that the average number in the system is r. Further, the average time that the item was in the system is Tr. Since items arrive at a rate of λ, we can reason that in the time Tr, a total of λTr items must have arrived. Thus r = λTr.

8.2 a. The height of the shaded portion equals n(t).
b. Select a time period that begins and ends with the system empty. Without loss of generality, set the beginning of the period as t = 0 and the end as t = τ. Then a(0) = d(0) = 0 and a(τ) = d(τ). The shaded area can be computed in two different ways, which must yield the same result:

∫ (0 to τ) [a(t) − d(t)] dt = Σ (k = 1 to a(τ)) tk

If we view the shaded area as a sequence (from left to right) of vertical rectangles, we get the integral on the left. We can also view the shaded area as a sequence (from bottom to top) of horizontal rectangles with unit height and width tk, where tk is the time that the kth item remains in the system; then we get the summation on the right. Therefore:

∫ (0 to τ) n(t) dt = a(τ)T

where T is the average time an item spends in the system. Dividing both sides by τ gives the time average of n(t) on the left and (a(τ)/τ)T = λT on the right, which is Little's formula.
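Little's formula can also be illustrated empirically. The sketch below is an assumption-laden toy: a single FIFO server with Poisson arrivals and exponential service (M/M/1, λ = 2, µ = 4), for which theory gives Tr = 1/(µ − λ) = 0.5 and r = ρ/(1 − ρ) = 1.

```python
# Tiny single-server FIFO simulation to illustrate r = λ·Tr (Problem 8.1).
import random

random.seed(1)
lam, mu, n = 2.0, 4.0, 200_000
t, server_free, total_sojourn = 0.0, 0.0, 0.0
for _ in range(n):
    t += random.expovariate(lam)       # next arrival time
    start = max(t, server_free)        # FIFO: wait until the server is free
    server_free = start + random.expovariate(mu)
    total_sojourn += server_free - t   # this item's time in system
Tr = total_sojourn / n                 # observed mean time in system
lam_obs = n / t                        # observed arrival rate
print(round(Tr, 2), round(lam_obs * Tr, 2))   # ≈ 0.5 and ≈ 1.0
```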

8.3 Tq = q/λ = 8/18 = 0.44 hours

8.4 No. For the reported values to be consistent, Little's formula r = λTr would have to hold, but 12.356 ≠ 25.6 × 7.34.

8.5 If there are N servers, each server expects λT/N customers during time T. The expected time for that server to service all these customers is λTTs/N. Dividing by the length of the period T, we are left with a utilization of ρ = λTs/N.

8.6 ρ = λTs = 2 × 0.25 = 0.5
Average number in system = r = ρ/(1 − ρ) = 1
Average number in service = r − w = ρ = 0.5

8.7 We need to use the solution of the quadratic equation, which says that for ax² + bx + c = 0, we have

x = (−b ± √(b² − 4ac)) / 2a

w = ρ²/(1 − ρ) = 4, so ρ² + 4ρ − 4 = 0, and ρ = −2 + √8 = 0.828

8.8 ρ = λTs = 0.2 × 2 = 0.4
Tw = ρTs/(1 − ρ) = 0.8/0.6 = 4/3 minutes = 80 seconds
Tr = Tw + Ts = 200 seconds
mTw(90) = (Tw/ρ) × ln[100ρ/(100 − 90)] = (1.333/0.4) × ln(4) = 4.62 minutes

8.9 Ts = (1000 octets × 8 bits/octet)/9600 bps = 0.833 secs; ρ = 0.7
Constant-length message: Tw = ρTs/(2 − 2ρ) = 0.972 secs
Exponentially distributed: Tw = ρTs/(1 − ρ) = 1.944 secs
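The M/M/1 and M/D/1 results of Problems 8.8 and 8.9, including the waiting-time percentile formula used in 8.8, can be checked with a few lines (function names are illustrative):

```python
# M/M/1 vs M/D/1 waiting times and the percentile
# mTw(y) = (Tw/ρ)·ln[100ρ/(100−y)] from Problem 8.8.
import math

def mm1_tw(rho, ts):
    return rho * ts / (1.0 - rho)

def md1_tw(rho, ts):
    return rho * ts / (2.0 * (1.0 - rho))

def percentile_tw(tw, rho, y):
    return (tw / rho) * math.log(100.0 * rho / (100.0 - y))

tw = mm1_tw(0.4, 2.0)                          # Problem 8.8, minutes
print(round(tw * 60))                          # 80 seconds
print(round(percentile_tw(tw, 0.4, 90), 2))    # 4.62 minutes
print(round(md1_tw(0.7, 8 * 1000 / 9600), 3))  # 0.972 s (Problem 8.9)
```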


8.10 Ts = (1)(0.7) + (3)(0.2) + (10)(0.1) = 2.3 ms
Define S as the random variable that represents service time. Then

σTs² = Var[S] = E[(S − Ts)²] = (1 − 2.3)²(0.7) + (3 − 2.3)²(0.2) + (10 − 2.3)²(0.1) = 7.21 ms²

Using the equations in Figure 8.6a: A = 0.5 × (1 + 7.21/5.29) = 1.18

a. ρ = λTs = 0.33 × 2.3 = 0.767
Tr = Ts + (ρTsA)/(1 − ρ) = 2.3 + (0.767 × 2.3 × 1.18)/0.233 = 11.23 ms
r = ρ + (ρ²A)/(1 − ρ) = 3.73 messages
b. ρ = λTs = 0.25 × 2.3 = 0.575
Tr = 2.3 + (0.575 × 2.3 × 1.18)/0.425 = 5.97 ms
r = 0.575 + (0.575 × 0.575 × 1.18)/0.425 = 1.49 messages
c. ρ = λTs = 0.2 × 2.3 = 0.46
Tr = 2.3 + (0.46 × 2.3 × 1.18)/0.54 = 4.61 ms
r = 0.46 + (0.46 × 0.46 × 1.18)/0.54 = 0.92 messages

8.11 λ = 0.05 msg/sec; Ts = (14,400 × 8)/9600 = 12 sec; ρ = 0.6
a. Tw = ρTs/(1 − ρ) = 18 sec
b. w = ρ²/(1 − ρ) = 0.9

8.12 a. Mean batch service time = Tsb = M × Ts; batch variance = σTsb² = M × σTs²
To get the waiting time, Twb, use the equation for Tw from Table 8.6a. For single arrivals,

Tw = ρTsA/(1 − ρ) = λ(Ts² + σTs²)/(2(1 − ρ))

Treating each batch as a single customer arriving at rate λ/M then gives

Twb = (λ/M)(Tsb² + σTsb²)/(2(1 − ρ)) = (λ/M)(M²Ts² + MσTs²)/(2(1 − ρ)) = λ(MTs² + σTs²)/(2(1 − ρ))

where ρ = (λ/M)(MTs) = λTs.

b. T1 = (1/M) × 0 + (1/M) × Ts + (1/M) × 2Ts + … + (1/M) × (M − 1)Ts = ((M − 1)/2)Ts

Tw = Twb + T1 = λ(MTs² + σTs²)/(2(1 − ρ)) + ((M − 1)/2)Ts

c. The equation for Tw in (b) reduces to the equation for Tw in (a) for M = 1. For batch sizes of M > 1, the waiting time is greater than the waiting time for Poisson input. This is due largely to the second term, caused by waiting within the batch.

8.13 a. ρ = λTs = 0.2 × 4 = 0.8; r = 2.4; σr = 2.4
b. Tr = 12; Tq = 9.24

8.14 Ts = Ts1 = Ts2 = 1 ms
a. If we assume that the two types are independent, then the combined arrival rate is Poisson. λ = 800; ρ = 0.8; Tr = 5 ms
b. λ1 = 200; λ2 = 600; ρ1 = 0.2, ρ2 = 0.6; Tr1 = 2 ms; Tr2 = 6 ms
c. λ1 = 400; λ2 = 400; ρ1 = 0.4, ρ2 = 0.4; Tr1 = 2.33 ms; Tr2 = 7.65 ms
d. λ1 = 600; λ2 = 200; ρ1 = 0.6, ρ2 = 0.2; Tr1 = 3 ms; Tr2 = 11 ms

8.15 Ts = (100 octets × 8)/(9600 bps) = 0.0833 sec
a. If the load is evenly distributed among the links, then the load for each link is 48/5 = 9.6 packets per second.
ρ = 9.6 × 0.0833 = 0.8; Tr = 0.0833/(1 − 0.8) = 0.42 sec
b. ρ = λTs/N = (48 × 0.0833)/5 = 0.8

Tr = Ts + (C/(N × (1 − ρ))) × Ts = 0.13 sec

where C is the probability that all N servers are busy (the Erlang C probability).

8.16 a. The first character in an M-character packet must wait an average of M − 1 character interarrival times; the second character must wait M − 2 interarrival times, and so on to the last character, which does not wait at all. The expected interarrival time is 1/λ. The average waiting time is therefore Wi = (M − 1)/2λ.
b. The arrival rate is Kλ/M, and the service time is (M + H)/C, giving a utilization of ρ = Kλ(M + H)/(MC). Using the M/D/1 formula,

Wo = ρTs/(2 − 2ρ) = [Kλ(M + H)²/(C²M)] / [2 − 2Kλ(M + H)/(CM)]

c. T = Wi + Wo + Ts = (M − 1)/2λ + Kλ(M + H)²/[2MC² − 2KλC(M + H)] + (M + H)/C

The plot is U-shaped.

8.17 a. System throughput: Λ = λ + PΛ; therefore Λ = λ/(1 − P)
Server utilization: ρ = ΛTs = λTs/(1 − P)
Tq = Ts/(1 − ρ) = (1 − P)Ts/(1 − P − λTs)
b. A single customer may cycle around multiple times before leaving the system. The mean number of passes J is given by:

J = 1(1 − P) + 2(1 − P)P + 3(1 − P)P² + … = (1 − P) Σ (i = 1 to ∞) iP^(i−1) = 1/(1 − P)

Because the number of passes is independent of the queuing time, the total time in the system is given by the product of the two means, J·Tr.


CHAPTER 9
SELF-SIMILAR TRAFFIC

9.1 L0 = 1; L1 = 2/3; L2 = (2/3)(2/3) = (2/3)²; LN = (2/3)^N

9.2 Base 3 uses the digits 0, 1, 2. A fraction in base 3 is represented by X = 0.A1A2A3…, where Ai is 0, 1, or 2. The value represented is:

X = A1/3 + A2/3² + A3/3³ + …

If we imagine the interval [0, 1] is divided into three equal pieces, then the first digit A1 tells us whether X is in the left, middle, or right piece. For example, all numbers with A1 = 0 are in the left piece (value less than 1/3). The second digit A2 tells us whether X is in the left, middle, or right third of the given piece specified by A1. In the Cantor set, we begin by deleting the middle third of [0, 1]; this removes all points, and only those points, whose first digit is 1. The points left over (the only ones with a remaining chance of being in the Cantor set) must have a 0 or 2 in the first digit. Similarly, points whose second digit is 1 are deleted at the next stage in the construction. By repeating this argument, we see that the Cantor set consists of all points whose base-3 expansion contains no 1's.

9.3 Multiplying each member of this sequence by a factor of 2 produces … 1/4, 1/2, 1, 2, 4, 8, 16 …, which is the same sequence. The sequence is self-similar with a scaling factor of 2.

9.4 For the Pareto distribution,

Pr[X ≤ x] = F(x) = 1 − (k/x)^α

Pr[Y ≤ y] = Pr[ln(X/k) ≤ y] = Pr[X ≤ ke^y] = 1 − (k/(ke^y))^α = 1 − e^(−αy)

which is the exponential distribution.

9.5 a. H = 1: E[x(at)] = aE[x(t)]; Var[x(at)] = a²Var[x(t)]
H = 0.5: E[x(at)] = a^0.5 E[x(t)]; Var[x(at)] = aVar[x(t)]
For H = 1, the mean scales linearly with the scaling factor and the variance scales as the square of the scaling factor. Intuitively, one might argue that this is a "strong" manifestation of self-similarity.
b. In general, E[ax(t)] = aE[x(t)]; Var[ax(t)] = a²Var[x(t)]. Therefore
E[x(at)] = a^(H−1) E[ax(t)]; Var[x(at)] = a^(2H−2) Var[ax(t)]
c. H = 1: E[x(at)] = E[ax(t)]; Var[x(at)] = Var[ax(t)]
H = 0.5: E[x(at)] = E[ax(t)]/a^0.5; Var[x(at)] = Var[ax(t)]/a
Again, there is a strong manifestation of self-similarity for H = 1.
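The transform in Problem 9.4 can be checked empirically by inverse-transform sampling of the Pareto distribution; the parameters k = 1, α = 1.5 below are illustrative assumptions:

```python
# Empirical check of Problem 9.4: if X is Pareto (F(x) = 1 − (k/x)^α),
# then Y = ln(X/k) is exponential with parameter α.
# Inverse transform: X = k·(1 − U)^(−1/α) for U uniform on [0, 1).
import math, random

random.seed(7)
k, alpha, n = 1.0, 1.5, 100_000
total = 0.0
for _ in range(n):
    x = k * (1.0 - random.random()) ** (-1.0 / alpha)  # Pareto sample
    total += math.log(x / k)                           # Y = ln(X/k)
mean_y = total / n
print(round(mean_y, 2))   # ≈ 1/α ≈ 0.67, the exponential mean
```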


CHAPTER 10
CONGESTION CONTROL IN DATA NETWORKS AND INTERNETS

10.1 1. It does not guarantee that a particular node will not be swamped with frames.
2. There is no good way of distributing permits where they are most needed.
3. If a permit is accidentally destroyed, the capacity of the network is inadvertently reduced.

10.2 Throughput is as follows:

                     0 ≤ λ ≤ 1    1 ≤ λ ≤ 10               λ = 10
A-A' traffic         0.8          (1.8 × 0.8)/(0.8 + λ)    1.44/10.8 = 0.13
B-B' traffic         λ            1.8λ/(0.8 + λ)           18/10.8 = 1.67
Total throughput     0.8 + λ      1.8                      1.8

[Plot: throughput versus offered load λ from 0 to 10; total throughput rises from 0.8 to 1.8 and then stays flat at 1.8, B-B' throughput rises toward 1.67, and A-A' throughput falls from 0.8 toward 0.13.]

For 1 ≤ λ ≤ 10, the fraction of throughput that is A-A' traffic is 0.8/(0.8 + λ).

10.3 This problem is analyzed in [NAGL87]. The queue length for each outgoing link from a router will grow continually; eventually, the transit time through the queue exceeds the TTL of the incoming packets. A simplistic analysis would say that, under these circumstances, the incoming packets are discarded and not forwarded and no datagrams get through. But we have to take into account the fact that the router should be able to discard a packet faster than it transmits it, and if the router is discarding a lot of packets, queue transit time declines. At this point, the argument gets a bit involved, but [NAGL87] shows that, at best, the router will transmit some datagrams with TTL = 1. These will have to be discarded by the next router that is visited.

10.4 k = 2 + 2 × (Ttd × Ru)/(8 × Ld) = 2 + 2a

where the variable a is the same one defined in Chapter 11. In essence, the upper part of the fraction is the length of the link in bits, and the lower part of the fraction is the length of a frame in bits. So the fraction tells you how many frames can be laid out on the link at one time. Multiplying by 2 gives you the round-trip length of the link. You want your sliding window to accommodate that number of frames so that you can continue to send frames until an acknowledgment is received. Adding 1 to that total takes care of rounding up to the next whole number of frames. Adding 2 instead of 1 is just an additional margin of safety.
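The window-size rule is a one-liner in code. The link parameters in the example below (a 1.544-Mbps link, 10-ms delay, 1000-octet frames) are illustrative assumptions, not values from the problem:

```python
# Sliding-window size from Problem 10.4: k = 2 + 2a,
# with a = (Ttd × Ru)/(8 × Ld).
import math

def window_size(ttd_sec, ru_bps, ld_octets):
    a = (ttd_sec * ru_bps) / (8.0 * ld_octets)  # frames in flight one way
    return math.ceil(2 + 2 * a)

print(window_size(0.010, 1_544_000, 1000))   # 6 frames
```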

10.5 The average queue size over the previous cycle and the current cycle is calculated. This value is the threshold. By averaging over two cycles instead of just monitoring current queue length, the system avoids reacting to temporary surges that would not necessarily produce congestion. The average queue length may be computed by determining the area (product of queue size and time interval) over the two cycles and dividing by the time of the two cycles.


CHAPTER 11
LINK-LEVEL FLOW AND ERROR CONTROL

11.1 a. Because only one frame can be sent at a time, and transmission must stop until an acknowledgment is received, there is little effect in increasing the size of the message if the frame size remains the same. All that this would affect is connect and disconnect time.
b. Increasing the number of frames would decrease frame size (number of bits/frame). This would lower line efficiency, because the propagation time is unchanged but more acknowledgments would be needed.
c. For a given message size, increasing the frame size decreases the number of frames. This is the reverse of (b).

11.2 Let L be the number of bits in a frame. Then, using Equation 11.4:

a = Propagation Delay/Transmission Time = (20 × 10⁻³)/(L/(4 × 10³)) = 80/L

Using Equation 11.2:

S = 1/(1 + 2a) = 1/(1 + 160/L) ≥ 0.5, which requires L ≥ 160

Therefore, an efficiency of at least 50% requires a frame size of at least 160 bits.

11.3 a = Propagation Delay/(L/R) = (270 × 10⁻³)/(10³/10⁶) = 270

a. S = 1/(1 + 2a) = 1/541 = 0.002
b. Using Equation 11.5: S = W/(1 + 2a) = 7/541 = 0.013
c. S = 127/541 = 0.23
d. S = 255/541 = 0.47

11.4
A → B: Propagation time = 4000 × 5 µsec = 20 msec
       Transmission time per frame = 1000/(100 × 10³) = 10 msec
B → C: Propagation time = 1000 × 5 µsec = 5 msec
       Transmission time per frame = x = 1000/R, where R = data rate between B and C (unknown)

A can transmit three frames to B and then must wait for the acknowledgment of the first frame before transmitting additional frames. The first frame takes 10 msec to transmit; the last bit of the first frame arrives at B 20 msec after it was transmitted, and therefore 30 msec after the frame transmission began. It will take

an additional 20 msec for B's acknowledgment to return to A. Thus, A can transmit 3 frames in 50 msec. B can transmit one frame to C at a time. It takes 5 + x msec for the frame to be received at C and an additional 5 msec for C's acknowledgment to return to B. Thus, B can transmit one frame every 10 + x msec, or 3 frames every 30 + 3x msec. Thus:

30 + 3x = 50; x = 6.66 msec; R = 1000/x = 150 kbps

11.5 Round-trip propagation delay of the link = 2 × L × t
Time to transmit a frame = B/R
To reach 100% utilization, the transmitter should be able to transmit frames continuously during a round-trip propagation time. Thus, the total number of frames transmitted without an ACK is:

N = ⌈(2 × L × t)/(B/R)⌉ + 1, where ⌈X⌉ is the smallest integer greater than or equal to X

This number can be accommodated by an M-bit sequence number with:

M = ⌈log₂(N)⌉

11.6 In fact, REJ is not needed at all, since the sender will time out if it fails to receive an ACK. The REJ improves efficiency by informing the sender of a bad frame as early as possible.

11.7 Assume a 2-bit sequence number:
1. Station A sends frames 0, 1, 2 to station B.
2. Station B receives all three frames and cumulatively acknowledges with RR 3.
3. Because of a noise burst, the RR 3 is lost.
4. A times out and retransmits frame 0.
5. B has already advanced its receive window to accept frames 3, 0, 1, 2. Thus it assumes that frame 3 has been lost and that this is a new frame 0, which it accepts.

11.8 Use the following formulas:

            a = 0.1           a = 1            a = 10               a = 100
S&W         (1−P)/1.2         (1−P)/3          (1−P)/21             (1−P)/201
GBN (7)     (1−P)/(1+0.2P)    (1−P)/(1+2P)     7(1−P)/[21(1+6P)]    7(1−P)/[201(1+6P)]
GBN (127)   (1−P)/(1+0.2P)    (1−P)/(1+2P)     (1−P)/(1+20P)        127(1−P)/[201(1+126P)]
SREJ (7)    1−P               1−P              7(1−P)/21            7(1−P)/201
SREJ (127)  1−P               1−P              1−P                  127(1−P)/201

For a given value of a, the utilization values change very little as a function of P over a reasonable range (say 10⁻³ to 10⁻¹²). We have the following approximate values for P = 10⁻⁶:

                a = 0.1   a = 1.0   a = 10   a = 100
Stop-and-wait   0.83      0.33      0.05     0.005
GBN (7)         1.0       1.0       0.33     0.035
GBN (127)       1.0       1.0       1.0      0.63
SREJ (7)        1.0       1.0       0.33     0.035
SREJ (127)      1.0       1.0       1.0      0.63

11.9 a., b., c. [Diagrams: three sliding-window snapshots over the cyclic sequence-number space … 0 1 2 3 4 5 6 7 0 …, one for each of parts a, b, and c.]

11.10 A lost SREJ frame can cause problems. The sender never knows that the frame was not received, unless the receiver times out and retransmits the SREJ.

11.11 From the standard: "A SREJ frame shall not be transmitted if an earlier REJ exception condition has not been cleared (To do so would request retransmission of a data frame that would be retransmitted by the REJ operation.)" In other words, since the REJ requires the station receiving the REJ to retransmit the rejected frame and all subsequent frames, it is redundant to perform a SREJ on a frame that is already scheduled for retransmission. Also from the standard: "Likewise, a REJ frame shall not be transmitted if one or more earlier SREJ exception conditions have not been cleared." The REJ frame indicates the acceptance of all frames prior to the frame rejected by the REJ frame. This would contradict the intent of the SREJ frame or frames.

11.12 Let t1 = time to transmit a single frame:

t1 = 1024 bits/10⁶ bps = 1.024 msec

The transmitting station can send 7 frames without an acknowledgment. From the beginning of the transmission of the first frame, the time to receive the acknowledgment of that frame is:

t2 = 270 + t1 + 270 = 541.024 msec

During the time t2, 7 frames are sent. Data per frame = 1024 − 48 = 976 bits.

Throughput = (7 × 976 bits)/(541.024 × 10⁻³ sec) = 12.6 kbps

11.13 The selective-reject approach would burden the server with the task of managing and maintaining large amounts of information about what has and has not been successfully transmitted to the clients; the go-back-N approach would be less of a burden on the server.
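The throughput computation in Problem 11.12 generalizes to any window/frame/link combination; a sketch (function name is an illustrative choice):

```python
# Effective throughput for a windowed link (Problem 11.12): a window of
# 7 frames of 1024 bits (48 bits overhead) at 1 Mbps, 270-ms one-way delay.
def window_throughput(frame_bits, overhead_bits, window, rate_bps, prop_s):
    t1 = frame_bits / rate_bps          # frame transmission time
    t2 = prop_s + t1 + prop_s           # time until the first ACK arrives
    return window * (frame_bits - overhead_bits) / t2   # bits per second

print(round(window_throughput(1024, 48, 7, 1e6, 0.270) / 1000, 1))  # 12.6 kbps
```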


CHAPTER 12
TCP TRAFFIC CONTROL

12.1

The number of unacknowledged segments in the "pipeline" at any time is 5. Thus, once steady state is reached, the maximum achievable throughput is equal to the normalized theoretical maximum of 1.

12.2

This will depend on whether multiplexing or splitting occurs. If there is a one-to-one relationship between network connections and transport connections, then it will do no good to grant credit at the transport level in excess of the window size at the network level. If one transport connection is split among multiple network connections (each one dedicated to that single transport connection), then a practical upper bound on the transport credit is the sum of the network window sizes. If multiple transport connections are multiplexed on a single network connection, their aggregate credit should not exceed the network window size. Furthermore, the relative amount of credit will result in a form of priority mechanism.

12.3

In TCP, no provision is made. A later segment can provide a new credit allocation. Provision is made for misordered and lost credit allocations in the ISO transport protocol (TP) standard. In ISO TP, ACK/Credit messages (AK) are in separate PDUs, not part of a data PDU. Each AK TPDU contains a YR-TU-NR field, which is the sequence number of the next expected data TPDU, a CDT field, which grants credit, and a "subsequence number," which is used to assure that the credit grants are processed in the correct sequence. Further, each AK contains a "flow control confirmation" value which echoes the parameter values in the last AK received (YR-TU-NR, CDT, subsequence number). This can be used to deal with lost AKs.

12.4

The upper limit ensures that the maximum difference between sender and receiver can be no greater than 2^31. Without such a limit, TCP might not be able to tell when the 32-bit sequence number had rolled over from 2^32 – 1 back to 0.

12.5

a. SRTT(n) = α^n × SRTT(0) + (1 – α) × RTT × (α^(n–1) + α^(n–2) + … + α + 1)
           = α^n × SRTT(0) + (1 – α) × RTT × (1 – α^n)/(1 – α)
           = α^n × SRTT(0) + RTT × (1 – α^n)
   SRTT(19) = 1.1 sec
b. SRTT(19) = 2.9 sec; in both cases, the convergence speed is slow, because in both cases the initial SRTT(0) is improperly chosen.

12.6

When the 50-octet segment arrives at the recipient, it returns a credit of 1000 octets. However, the sender will now compute that there are 950 octets in transit in the network, so that the usable window is now only 50 octets. Thus, the sender will once again send a 50-octet segment, even though there is no longer a natural boundary to force it. In general, whenever the acknowledgment of a small segment comes back, the usable window associated with that acknowledgment will cause another segment of the same small size to be sent, until some abnormality breaks the pattern. Once the condition occurs, there is no natural way for those credit allocations to be recombined; thus the breaking up of the usable window into small pieces will persist.
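The persistence of the small segment can be replayed with a short simulation. This is a sketch, not from the text; it assumes the 1000-octet credit and the 950/50 split of the example, and models the receiver as acknowledging one segment at a time while granting a fresh 1000-octet credit.

```python
from collections import deque

def simulate(initial_in_flight, credit=1000, rounds=8):
    """Replay the credit mechanics of Problem 12.6: each time a segment is
    acknowledged, the receiver grants a fresh credit of `credit` octets,
    and the sender fills exactly the usable window that opens up."""
    in_flight = deque(initial_in_flight)
    sent_sizes = []
    for _ in range(rounds):
        in_flight.popleft()                  # oldest segment consumed and ACKed
        usable = credit - sum(in_flight)     # credit minus octets still in transit
        in_flight.append(usable)             # sender sends a segment of that size
        sent_sizes.append(usable)
    return sent_sizes

# A 50-octet segment was once sent at a natural boundary; the 50-octet
# "hole" in the window now persists indefinitely.
print(simulate([950, 50]))   # → [950, 50, 950, 50, 950, 50, 950, 50]
```

The alternating 950/50 pattern shows that the small allocation is never recombined, exactly as argued above.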

12.7

a. As segments arrive at the receiver, the amount of available buffer space contracts. As data from the buffer is consumed (passed on to an application), the amount of available buffer space expands. If SWS is not taken into account, the following procedure is followed: when a segment is received, the recipient should respond with an acknowledgment that provides credit equal to the available buffer space. The SWS avoidance algorithm introduces the following rule: when a segment is received, the recipient should not provide additional credit unless the following condition is met:

   available buffer space ≥ MIN(buffer size/2, maximum segment size)

The second term is easily explained: if the available buffer space is greater than the largest possible segment, then clearly SWS cannot occur. The first term is a reasonable guideline that states that if at least half of the buffer is free, the sender should be provided the available credit.
b. The suggested strategy is referred to as the Nagle algorithm and can be stated as follows: if there is unacknowledged data, then the sender buffers all data until the outstanding data have been acknowledged or until a maximum-sized segment can be sent. Thus, the sender accumulates data locally to avoid SWS.

12.8

a. E[X] = µX = 1/4 = 0.25
   Var[X] = E[X²] – E²[X] = (1/4) – (1/16) = 0.1875
   σX = (0.1875)^0.5 = 0.433
   MDEV[X] = E[|X – µX|] = (1/4)[(3/4) + (1/4) + (1/4) + (1/4)] = 0.375
   In this case, σX > MDEV[X]
b. E[Y] = µY = 0.7
   Var[Y] = E[Y²] – E²[Y] = 0.7 – 0.49 = 0.21
   σY = (0.21)^0.5 = 0.458
   MDEV[Y] = E[|Y – µY|] = (0.3)(0.7) + (0.7)(0.3) = 0.42
   In this case, σY > MDEV[Y]

12.9

SRTT(K + 1) = (1 – g)SRTT(K) + gRTT(K + 1)
SERR(K + 1) = RTT(K + 1) – SRTT(K)
Substituting for SRTT(K) in the first equation from the second equation:
   SRTT(K + 1) = RTT(K + 1) – (1 – g)SERR(K + 1) = SRTT(K) + gSERR(K + 1)
Think of RTT(K + 1) as a prediction of the next measurement and SERR(K + 1) as the error in the last prediction. The above expression says we make a new prediction based on the old prediction plus some fraction of the prediction error.
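A minimal sketch of this smoothing update. The variable names follow the 12.9 derivation; the sample numbers are illustrative only (they assume a badly chosen SRTT(0), which is the point made in 12.5 about slow convergence).

```python
def srtt_updates(rtts, srtt0, g=0.125):
    """Exponentially weighted RTT smoothing: SRTT(K+1) = SRTT(K) + g*SERR(K+1),
    which is algebraically the same as (1-g)*SRTT(K) + g*RTT(K+1)."""
    srtt = srtt0
    history = []
    for rtt in rtts:
        serr = rtt - srtt          # SERR(K+1) = RTT(K+1) - SRTT(K)
        srtt = srtt + g * serr     # error-feedback form of the update
        history.append(srtt)
    return history

# With SRTT(0) = 3 and a true RTT of 1, the estimate closes only a
# fraction g of the remaining error per measurement.
print(srtt_updates([1.0, 1.0, 1.0], srtt0=3.0, g=0.5))   # → [2.0, 1.5, 1.25]
```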

12.10 TCP initializes the congestion window to 1, sends an initial segment, and waits. When the ACK arrives, it increases the congestion window to 2, sends 2 segments, and waits. When the 2 ACKs arrive, they each increase the congestion window by one, so that it can send 4 segments. In general, it takes log2 N round trips before TCP can send N segments.

12.11
a. W = (10^9 × 0.06)/(576 × 8) ≈ 13,000 segments
   If the window size grows linearly from 1, it will take about 13,000 round trips, or about 13 minutes, to reach the correct window size.
b. W = (10^9 × 0.06)/(16,000 × 8) ≈ 460 segments
   In this case, it takes about 460 round trips, which is less than 30 seconds.

12.12 Recall from our discussion in Section 12.1 that a receiver may adopt a conservative flow control strategy by issuing credit only up to the limit of currently available buffer space, or an optimistic strategy by issuing credit for space that is currently unavailable but which the receiver anticipates will soon become available. In the latter case, buffer overflow is possible.

12.13 AAL5 accepts a stream of cells beginning with the first SDU=0 after an SDU=1 and continuing until a cell with SDU=1 is received, and assembles all of these cells, including the final SDU=1 cell, as a single segment. With PPD, some initial portion of an SDU=0 sequence may get through with the remainder of the SDU=0 sequence discarded. If the final SDU=1 cell is also discarded, then the receiving AAL5 will combine the first part of the partially discarded segment with the stream of cells that make up the next segment.
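The 12.10 and 12.11 arithmetic can be checked in a few lines. This is a sketch using the numbers from the solutions above (1-Gbps link, 60-ms round-trip time).

```python
import math

def slow_start_round_trips(n_segments):
    """Round trips needed before the congestion window reaches N segments,
    doubling each RTT as in Problem 12.10."""
    return math.ceil(math.log2(n_segments))

def window_segments(rate_bps, rtt_s, segment_octets):
    """Window (in segments) needed to keep the pipe full: rate x RTT
    divided by the segment size in bits (Problem 12.11)."""
    return rate_bps * rtt_s / (segment_octets * 8)

print(slow_start_round_trips(8))                 # → 3
print(round(window_segments(1e9, 0.06, 576)))    # ≈ 13,021 segments
print(round(window_segments(1e9, 0.06, 16000)))  # ≈ 469 segments
```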


CHAPTER 13 TRAFFIC AND CONGESTION CONTROL IN ATM NETWORKS

13.1

Yes, but ATM does not include such sliding-window mechanisms. In some cases, a higher-level protocol above ATM will provide such mechanisms, but not in all cases.

13.2
a. We can demonstrate this by induction. We want to show that the maximum number of conforming back-to-back cells, N, satisfies:

   N = 1 + ⌊τ/(T – δ)⌋     (Equation 13.1)

First, suppose τ < T – δ. Then N = 1 and back-to-back cells are not allowed. To see this, suppose that the first cell arrives at ta(1) = 0. Since the cell insertion time is δ, a second cell back to back would arrive at ta(2) = δ. But using the virtual scheduling algorithm, we see that the second cell must arrive no earlier than T – τ. Therefore, for the second cell to conform we would need:
   ta(2) ≥ T – τ
   δ ≥ T – τ
   τ ≥ T – δ
which is not true. Therefore, N = 1.
Now suppose T – δ ≤ τ < 2(T – δ). Then N = 2 and two consecutive cells are allowed, but not three or more. We can see that two consecutive cells are allowed since T – δ ≤ τ satisfies the condition just derived above. To see that three consecutive cells are not allowed, let us assume that three consecutive cells do arrive. Then we have ta(1) = 0, ta(2) = δ, and ta(3) = 2δ. Looking at the virtual scheduling algorithm (Figure 13.6a), we see that the theoretical arrival time for the third cell is TAT = 2T. Therefore the third cell must arrive no earlier than 2T – τ, and we would need:
   ta(3) ≥ 2T – τ
   2δ ≥ 2T – τ
   τ ≥ 2(T – δ)
which is not true. Therefore N = 2.
This line of reasoning can be extended to longer strings of back-to-back cells, and by induction, Equation (13.1) is correct.
b. We want to show that the maximum number of cells, MBS, that may be transmitted at the peak cell rate satisfies:

   MBS = 1 + ⌊τS/(TS – T)⌋     (Equation 13.2)

The line of reasoning is the same as for part (a) of this problem. In this case, we assume that cells can arrive at a maximum rate of one per interarrival time T. Therefore, an MBS of 2 means that two cells can arrive at a spacing of T; an MBS of 3 means that 3 cells can arrive with successive spacings of T, and so on. Here is an example that shows the relationship among the relevant parameters: MBS = 4, TS = 5δ, T = 2δ, and τS = 9δ. [The original solution includes a timing diagram plotting the arrival times ta(i) and the virtual scheduling value X + LCT (TAT) against time, with TS and T marked.]
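Equations (13.1) and (13.2) can be evaluated directly. A small sketch (function names are illustrative, not from the text) that also reproduces the MBS = 4 worked example above:

```python
import math

def max_back_to_back(tau, T, delta):
    """Equation (13.1): maximum number of conforming back-to-back cells
    for a GCRA with increment T, limit tau, and cell insertion time delta."""
    return 1 + math.floor(tau / (T - delta))

def max_burst_size(tau_s, T_s, T):
    """Equation (13.2): maximum burst size at the peak cell rate
    (sustainable-rate increment TS, limit tauS, peak spacing T)."""
    return 1 + math.floor(tau_s / (T_s - T))

# The example above: MBS = 4 with TS = 5δ, T = 2δ, τS = 9δ (take δ = 1).
print(max_burst_size(9, 5, 2))   # → 4
```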

c. Suppose that τS = (MBS – 1)(TS – T) exactly. Then:

   1 + ⌊τS/(TS – T)⌋ = 1 + ⌊(MBS – 1)(TS – T)/(TS – T)⌋ = 1 + (MBS – 1) = MBS

which satisfies Equation (13.2). Now suppose that we have:

   (MBS – 1)(TS – T) < τS < MBS(TS – T)

Then the value τS/(TS – T) is a number greater than the integer (MBS – 1) and less than the integer MBS. Therefore:

   1 + ⌊τS/(TS – T)⌋ = 1 + (MBS – 1) = MBS

which still satisfies Equation (13.2). However, if τS ≥ MBS(TS – T), then the equation is not satisfied.

13.3 Observe that if t ≤ (MBS × T), then the first term of the inequality applies; otherwise, the second term applies.

13.4 We have ER = max[Fairshare, VCshare], where
   Fairshare = Target_rate/#_connections
   VCshare = CCR/LF = (CCR × Target_rate)/Input_rate
By the first equation, we know that ER ≥ Fairshare and ER ≥ VCshare.
Case I: LF < 1
   In this case VCshare > CCR. Therefore, ER > CCR.
Case II: LF > 1
   IIa. Fairshare > VCshare. This condition holds if: Input_rate > CCR × #_connections. With this condition, ER = Fairshare. Then ER > CCR if (Target_rate/#_connections) > CCR.
   IIb. VCshare > Fairshare. This condition holds if: Input_rate < CCR × #_connections. With this condition, ER = VCshare. Then ER > CCR if Target_rate > Input_rate.
Case III: LF = 1
   In this case, VCshare = CCR.
   IIIa. Fairshare > VCshare. This condition holds if: Input_rate > CCR × #_connections. With this condition, ER = Fairshare. Then ER > CCR if (Target_rate/#_connections) > CCR.
   IIIb. VCshare > Fairshare. This condition holds if: Input_rate < CCR × #_connections. With this condition, ER = VCshare = CCR.
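A sketch of the ER computation in 13.4. The function and parameter names are illustrative only (they follow the symbols in the solution, not any ATM switch API); the sample numbers demonstrate the underload case LF < 1, in which ER > CCR.

```python
def explicit_rate(ccr, target_rate, input_rate, n_connections):
    """ER = max[Fairshare, VCshare] as in Problem 13.4; LF is the load
    factor Input_rate/Target_rate."""
    fairshare = target_rate / n_connections
    lf = input_rate / target_rate
    vcshare = ccr / lf            # = (CCR x Target_rate) / Input_rate
    return max(fairshare, vcshare)

# LF = 0.5 < 1 (underload): VCshare = CCR/LF = 20 > CCR = 10, so ER > CCR.
print(explicit_rate(ccr=10.0, target_rate=100.0, input_rate=50.0,
                    n_connections=20))   # → 20.0
```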


CHAPTER 14 OVERVIEW OF GRAPH THEORY AND LEAST-COST PATHS

14.1 n(n – 1)/2

14.2 a.
b. (a, b, c, f); (a, b, c, e, f); (a, b, e, f); (a, b, e, c, f); (a, d, e, f); (a, d, e, b, c, f); (a, d, e, c, f)
c. 3

14.3

14.4 If G is a connected graph, then the removal of that edge makes the graph disconnected.

14.5 First, show that the graph T constructed is a tree. Then use induction on the level of T to show that T contains all vertices of G.

14.6 The input is a graph with vertices ordered V1, V2, …, VN.
1. [Initialization] Set S to {V1} and set T to the graph consisting of V1 and no edges. Designate V1 as the root of the tree.
2. [Add edges] Process the vertices in S in order. For each vertex x in S, process each adjacent vertex y in G in order; add edge (x, y) and vertex y to T provided that this does not produce a cycle in T. If no edges are added in this step, go to step 4.
3. [Update S] Replace the contents of S with the children in T of S, ordered consistent with the original ordering. Go to step 2.
4. [Return result] If the number of vertices in T is N, return CONNECTED; otherwise, return NOT_CONNECTED. Halt.


14.7 Start with V1. Iteration 1: add V5, V8. Iteration 2: add V2, V6, V4. Iteration 3: add V3, V7. Spanning tree:

14.8
   for n := 1 to N do
   begin
      d[n] := ∞; p[n] := –1
   end;
   d[srce] := 0;
   {initialize Q to contain srce only}
   insert srce at the head of Q;
   {initialization over}
   while Q is not empty do
   begin
      delete the head node j from Q;
      for each link jk that starts at j do
      begin
         newdist := d[j] + c[j, k];
         if newdist < d[k] then
         begin
            d[k] := newdist;
            p[k] := j;
            if k ∉ Q then insert k at the tail of Q
         end
      end
   end;

14.9 This proof is based on one in [BERT92]. Let us claim that:
(1) L(i) ≤ L(j) for all i ∈ T and j ∉ T.
(2) For each node j, L(j) is the shortest distance from s to j using paths with all nodes in T except possibly j.
Condition (1) is satisfied initially, and because w(i, j) ≥ 0 and L(i) = min j∉T L(j), it is preserved by the formula in step 3 of the algorithm. Condition (2) can then be shown by induction. It holds initially. Suppose that condition (2) holds at the beginning of some iteration. Let i be the node added to T at that iteration, and let L(k) be the label of each node k at the beginning of the iteration. Condition (2) holds for j = i by the induction hypothesis, and it holds for all j ∈ T by condition (1) and the induction hypothesis. Finally, for a node j ∉ T ∪ {i}, consider a path from s to j which is shortest among all those in which all nodes of the path belong to T ∪ {i}, and let L'(j) be its length. Let k be the last node of this path before node j. Since k is in T ∪ {i}, the length of this path from s to k is L(k). So we have:
   L'(j) = min k∈T∪{i} [w(k, j) + L(k)] = min[ min k∈T [w(k, j) + L(k)], w(i, j) + L(i) ]
The induction hypothesis implies that L(j) = min k∈T [w(k, j) + L(k)], so we have:
   L'(j) = min[L(j), w(i, j) + L(i)]
Thus in step 3, L(j) is set to the shortest distance from s to j using paths with all nodes except j belonging to T ∪ {i}.

14.10 Consider the node i which has path length K+1, with the immediately preceding node on the path being j. The distance to node i is w(j, i) plus the distance to reach node j. This latter distance must be L(j), the distance to node j along the optimal route, because otherwise there would be a route with shorter distance found by going to j along the optimal route and then directly to i.

14.11 Not possible. A node will not be added to T until its least-cost route is found. As long as the least-cost route has not been found, the last node on that route will be eligible for entry into T before the node in question.

14.12 We show the results for starting from node 2.

M   T                   L(1)  Path  L(3)  Path  L(4)  Path  L(5)  Path    L(6)  Path
1   {2}                 3     2-1   3     2-3   2     2-4   ∞     —       ∞     —
2   {2, 4}              3     2-1   3     2-3   2     2-4   3     2-4-5   ∞     —
3   {2, 4, 1}           3     2-1   3     2-3   2     2-4   3     2-4-5   ∞     —
4   {2, 4, 1, 3}        3     2-1   3     2-3   2     2-4   3     2-4-5   8     2-3-6
5   {2, 4, 1, 3, 5}     3     2-1   3     2-3   2     2-4   3     2-4-5   5     2-4-5-6
6   {2, 4, 1, 3, 5, 6}  3     2-1   3     2-3   2     2-4   3     2-4-5   5     2-4-5-6
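The least-cost values in the table can be checked mechanically. The sketch below runs Dijkstra's algorithm over an edge list inferred from the results themselves (the figure is not reproduced here, so the edge costs are an assumption consistent with the paths above).

```python
import heapq

# Edge costs inferred from the results for node 2 (assumed; the graph
# figure is not reproduced here). The graph is undirected.
EDGES = {(2, 1): 3, (2, 3): 3, (2, 4): 2, (4, 5): 1, (5, 6): 2, (3, 6): 5}

def dijkstra(edges, source):
    """Plain Dijkstra's algorithm: repeatedly move the closest labeled
    node into the permanent set and relax its neighbors."""
    adj = {}
    for (a, b), w in edges.items():
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        for nbr, w in adj[node]:
            if d + w < dist.get(nbr, float("inf")):
                dist[nbr] = d + w
                heapq.heappush(heap, (d + w, nbr))
    return dist

print(dijkstra(EDGES, 2))   # → {2: 0, 1: 3, 3: 3, 4: 2, 5: 3, 6: 5}
```

The distances match the final row of the table.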

14.13 We show the results for starting from node 2.

h   Lh(1)  Path  Lh(3)  Path  Lh(4)  Path  Lh(5)  Path   Lh(6)  Path
0   ∞      —     ∞      —     ∞      —     ∞      —      ∞      —
1   3      2-1   3      2-3   2      2-4   ∞      —      ∞      —
2   3      2-1   3      2-3   2      2-4   3      2-4-5  8      2-3-6
3   3      2-1   3      2-3   2      2-4   3      2-4-5  5      2-4-5-6
4   3      2-1   3      2-3   2      2-4   3      2-4-5  5      2-4-5-6
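The Lh(n) columns can be reproduced by a hop-bounded Bellman-Ford pass. As before, the edge costs are inferred from the tables (an assumption, since the figure is not reproduced here).

```python
# Edge costs inferred from the results for node 2 (assumed).
EDGES = {(2, 1): 3, (2, 3): 3, (2, 4): 2, (4, 5): 1, (5, 6): 2, (3, 6): 5}
INF = float("inf")

def bellman_ford_by_hops(edges, source, max_hops):
    """Bellman-Ford organized by hop count h, mirroring the Lh(n) columns:
    after round h, dist[n] is the least cost using at most h+1 links."""
    links = []
    for (a, b), w in edges.items():
        links += [(a, b, w), (b, a, w)]     # undirected graph
    nodes = {n for e in edges for n in e}
    dist = {n: (0 if n == source else INF) for n in nodes}
    history = []
    for _ in range(max_hops):
        prev = dict(dist)                   # synchronous update per round
        for a, b, w in links:
            if prev[a] + w < dist[b]:
                dist[b] = prev[a] + w
        history.append(dict(dist))
    return history

h = bellman_ford_by_hops(EDGES, 2, 4)
print(h[1][6], h[2][6])   # → 8 5   (L2(6) = 8 via 2-3-6; L3(6) = 5 via 2-4-5-6)
```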



14.14 a. We provide a table for node 1 of network a; the figure is easily generated.

M   T                L(2)  Path  L(3)  Path     L(4)  Path     L(5)  Path   L(6)  Path
1   {1}              1     1-2   ∞     —        4     1-4      ∞     —      ∞     —
2   {1,2}            1     1-2   4     1-2-3    4     1-4      2     1-2-5  ∞     —
3   {1,2,5}          1     1-2   3     1-2-5-3  3     1-2-5-4  2     1-2-5  6     1-2-5-6
4   {1,2,5,3}        1     1-2   3     1-2-5-3  3     1-2-5-4  2     1-2-5  5     1-2-5-3-6
5   {1,2,5,3,4}      1     1-2   3     1-2-5-3  3     1-2-5-4  2     1-2-5  5     1-2-5-3-6
6   {1,2,5,3,4,6}    1     1-2   3     1-2-5-3  3     1-2-5-4  2     1-2-5  5     1-2-5-3-6

b. The table for network b is similar in construction but much larger. Here are the results for node A:

A to B: A-B          A to E: A-E          A to H: A-E-G-H
A to C: A-B-C        A to F: A-B-C-F      A to J: A-B-C-J
A to D: A-E-G-H-D    A to G: A-E-G        A to K: A-E-G-H-D-K

14.15

h   Lh(2)  Path  Lh(3)  Path     Lh(4)  Path     Lh(5)  Path   Lh(6)  Path
0   ∞      —     ∞      —        ∞      —        ∞      —      ∞      —
1   1      1-2   ∞      —        4      1-4      ∞      —      ∞      —
2   1      1-2   4      1-2-3    4      1-4      2      1-2-5  ∞      —
3   1      1-2   3      1-2-5-3  3      1-2-5-4  2      1-2-5  6      1-2-3-6
4   1      1-2   3      1-2-5-3  3      1-2-5-4  2      1-2-5  5      1-2-5-3-6

14.16 If there is a unique least-cost path, the two algorithms will yield the same result because they are both guaranteed to find the least-cost path. If there are two or more equal least-cost paths, the two algorithms may find different least-cost paths, depending on the order in which alternatives are explored.

14.17 This explanation is taken from [BERT92]. The Floyd-Warshall algorithm iterates on the set of nodes that are allowed as intermediate nodes on the paths. It starts like both Dijkstra's algorithm and the Bellman-Ford algorithm with single-arc distances (i.e., no intermediate nodes) as starting estimates of shortest path lengths. It then calculates shortest paths under the constraint that only node 1 can be used as an intermediate node, and then with the constraint that only nodes 1 and 2 can be used, and so forth. For n = 0, the initialization clearly gives the shortest path lengths subject to the constraint of no intermediate nodes on paths. Now, suppose for a given n, Ln(i, j) in the above algorithm gives the shortest path lengths using nodes 1 to n as intermediate nodes. Then the shortest path from i to j, allowing nodes 1 to n+1 as possible intermediate nodes, either contains node n+1 on the shortest path or doesn't contain node n+1. For the first case, the constrained shortest path from i to j goes from i to n+1 and then from n+1 to j, giving the length in the final term of the equation in step 2 of the problem. For the second case, the constrained shortest path is the same as the one using nodes 1 to n as possible intermediate nodes, yielding the length of the first term in the equation in step 2 of the problem.


CHAPTER 15 INTERIOR ROUTING PROTOCOLS

15.1 No. That would violate the "separation of layers" concept. There is no need for IP to know how data is routed within a network. For the next hop, IP specifies another system on the same network and hands that address down to the network-layer protocol, which then initiates the network-layer routing function.

15.2 When the unicast routing adapts to the shorter path, the delay-bandwidth product between the sender and receiver decreases (compared to the case of the longer path), since shorter paths experience less delay. The new path does not have the capacity to contain all the packets with the window size filling the longer path, and so some packets are dropped and TCP cuts back its window size.

15.3 A digraph could be drawn in a number of ways, depending on what are defined to be nodes and what are defined to be edges. As Figure 14.10 illustrates, each host and each router can be depicted as a node. The following digraph is a simpler representation taken from the point of view of Host X:

15.4

B:  3: N1;  1: N2;  4: N2, G, N3;  3: N2, D, N4;  4: N2, D, N4, F, N5
C:  8: N1;  8: N3, E, N4, D, N2;  5: N3;  6: N3, E, N4;  6: N3, H, N5
A:  6: N4, D, N2, B, N1;  3: N4, D, N2;  2: N4, E, N3;  1: N4;  2: N4, F, N5

15.5

A's routing table prior to update:

Destination Network   Next Router   Metric L(A, j)
1                     D             6
2                     D             3
3                     D             6
4                     —             1
5                     F             5

A's routing table after update:

Destination Network   Next Router   Metric L(A, j)
1                     D             6
2                     D             3
3                     E             2
4                     —             1
5                     F             2
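The update in 15.5 is an application of the Bellman equation: for each destination, keep the neighbor that minimizes (link cost to neighbor + neighbor's advertised cost). A small sketch; the neighbor link costs and advertised vectors below are hypothetical, since the actual numbers come from the figure, which is not reproduced here.

```python
def update_table(neighbor_costs, advertised):
    """RIP-style distance-vector update: for each destination, pick the
    neighbor minimizing link cost + advertised cost (Bellman equation).
    All numbers passed in are hypothetical, for illustration only."""
    dests = {d for vec in advertised.values() for d in vec}
    table = {}
    for dest in sorted(dests):
        cost, nxt = min((neighbor_costs[n] + vec[dest], n)
                        for n, vec in advertised.items() if dest in vec)
        table[dest] = {"next": nxt, "metric": cost}
    return table

# Hypothetical link costs to neighbors D, E, F and their advertised vectors.
table = update_table(
    neighbor_costs={"D": 1, "E": 1, "F": 1},
    advertised={"D": {1: 5, 2: 2, 3: 5},
                "E": {1: 6, 3: 1},
                "F": {3: 4, 5: 1}})
print(table[3])   # → {'next': 'E', 'metric': 2}
```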

15.6

First, a mechanism is needed for getting all nodes to agree to start the algorithm. Second, a mechanism is needed to abort the algorithm and start a new version if a link status or value changes as the algorithm is running.

15.7

a.
   Router   Distance L(x, 5)   Next Hop R(x, 5)
   A        3                  B
   B        2                  D
   C        3                  B
   D        1                  —

b. (U = unreachable)

   Router   Iter 1    Iter 2    Iter 3    …   Iter 9    Iter 10
            L    R    L    R    L    R        L    R    L    R
   A        3    B    4    C    5    C    …   11   C    12   C
   B        U    U    4    C    5    C    …   11   C    12   C
   C        3    B    4    A    5    A    …   11   A    11   D
   D        1    —    1    —    1    —    …   1    —    1    —

15.8

Suppose A has a route to D through C, so A sends a poisoned reverse message to C that indicates that D is unreachable via A (metric set to infinity). B will receive this message and think that D is unreachable via A. But B can get to C itself and does not need to go through A to do so. If C is the best first hop for A to get to D, then C is also the best first hop for B to get to D across this network.

15.9 RIP runs on top of UDP, which provides an optional checksum for the data portion of the UDP datagram. OSPF runs on top of IP. The IP checksum only covers the IP header, so OSPF must add its own checksum. [STEV94]

15.10 Load balancing increases the chances of packets being delivered out of order, and possibly distorts the round-trip times calculated by TCP.


CHAPTER 16 EXTERIOR ROUTING PROTOCOLS AND MULTICAST

16.1 So that the queries are not propagated outside of the local network.

16.2 For convenience, flip the table on its side:

N1  N2  N3  N4  N5  N6  L1  L2  L3  L4  L5  Total
1   1   1   2   1   1   1   2   2   1   2   15

This is the least efficient method, but it is very robust: a packet will get through if there is at least one path from source to destination. Also, no prior routing information needs to be exchanged.

16.3

Root to the left:

16.4 a. In this table, we show the hop cost for each network or link:

N1  N3  N4  N5  N6  L3  L4  Total
0   1   12  1   1   3   4   22

b.

N1  N3  N4  N5  N6  L3  L5  Total
0   1   12  1   1   3   2   20

16.5

MOSPF works well in an environment where group members are relatively densely packed. In such an environment, bandwidth is likely to be plentiful, with most group members on shared LANs with high-speed links between the LANs. However, MOSPF does not scale well to large, sparsely packed multicast groups. Routing information is periodically flooded to all other routers in an area. This can consume considerable resources if the routers are widely dispersed; also the routers must maintain a lot of state information about group membership and location. The use of areas can cut down on this problem, but the fundamental scaling issue remains.

16.6

PIM allows a destination router to replace the group-shared tree with a shortest-path tree to any source. This is the shortest unicast path from destination to source, which is not necessarily the reverse of the shortest unicast path from source to destination.

16.7

First, suppose that all traffic must go through the RP-based tree. If the RP is placed badly with respect to the topology and distribution of the group receivers and senders, then the delay is likely to be large. However, for applications for which delay is important and the data rate is high enough, receivers' last-hop routers may join directly to the source and receive packets over the shortest path. So the RP placement in this case is not so crucial for good performance.


CHAPTER 17 INTEGRATED AND DIFFERENTIATED SERVICES

17.1 These answers are taken from RFC 1809, Using the Flow Label Field in IPv6 (June 1995).
a. The IPv6 specification allows routers to ignore Flow Labels and also allows for the possibility that IPv6 datagrams may carry flow setup information in their options. Unknown Flow Labels may also occur if a router crashes and loses its state. During a recovery period, the router will receive datagrams with Flow Labels it does not know, but this is arguably not an error, but rather a part of the recovery period. Finally, if the controversial suggestion that each TCP connection be assigned a separate Flow Label is adopted, it may be necessary to manage Flow Labels using an LRU cache (to avoid Flow Label cache overflow in routers), in which case an active but infrequently used flow's state may have been intentionally discarded. In any case, it is clear that treating this situation as an error and, say, dropping the datagram and sending an ICMP message, is inappropriate. Indeed, it seems likely that in most cases, simply forwarding the datagram as one would a datagram with a zero Flow Label would give better service to the flow than dropping the datagram.
b. An example is a router which has two paths to the datagram's destination, one via a high-bandwidth satellite link and the other via a low-bandwidth terrestrial link. A high-bandwidth flow obviously should be routed via the high-bandwidth link, but if the router loses the flow state, the router may route the traffic via the low-bandwidth link, with the potential for the flow's traffic to swamp the low-bandwidth link. It seems likely, however, that these situations will be exceptions rather than the rule.

17.2 These answers are taken from RFC 1809.
a. An internet may have partitioned since the flow was created. Or the deletion message may be lost before reaching all routers. Furthermore, the source may crash before it can send out a Flow Label deletion message.
b. The obvious mechanism is to use a timer. Routers should discard Flow Labels whose state has not been refreshed within some period of time. At the same time, a source that crashes must observe a quiet time, during which it creates no flows, until it knows that all Flow Labels from its previous life must have expired. (Sources can avoid quiet-time restrictions by keeping information about active Flow Labels in stable storage that survives crashes.) This is precisely how TCP initial sequence numbers are managed, and it seems the same mechanism should work well for Flow Labels.

17.3 Again, RFC 1809: The argument in favor of using Flow Labels on individual TCP connections is that even if the source does not request special service, a network provider's routers may be able to recognize a large amount of traffic and use the Flow Label field to establish a special route that gives the TCP connection better service (e.g., lower delay or bigger bandwidth). Another argument is to assist in efficient demux at the receiver (i.e., IP and TCP demuxing could be done once). An argument against using Flow Labels in individual TCP connections is that it changes how we handle route caches in routers. Currently one can cache a route for a destination host, regardless of how many different sources are sending to that destination host. Thus, if five sources each have two TCP connections sending data to a server, one cache entry containing the route to the server handles all ten TCPs' traffic. Putting Flow Labels in each datagram changes the cache into a Flow Label cache, in which there is a cache entry for every TCP connection. So there's a potential for cache explosion. There are ways to alleviate this problem, such as managing the Flow Label cache as an LRU cache, in which infrequently used Flow Labels get discarded (and then recovered later). It is not clear, however, whether this will cause cache thrashing. Observe that there is no easy compromise between these positions. One cannot, for instance, let the application decide whether to use a Flow Label. Those who want different Flow Labels for every TCP connection assume that they may optimize a route without the application's knowledge. Forcing all applications to use Flow Labels will force routing vendors to deal with the cache explosion issue, even if we later discover that we don't want to optimize individual TCP connections.

17.4 From RFC 1809: During its discussions, the End-to-End group realized this meant that if a router forwarded a datagram with an unknown Flow Label, it had to ignore the Priority field, because the priority values might have been redefined. (For instance, the priorities might have been inverted.) The IPv6 community concluded this behavior was undesirable. Indeed, it seems likely that when the Flow Label is unknown, the router will be able to give much better service if it uses the Priority field to make a more informed routing decision.

17.5
a. λ = λ1 + λ2 = 0.5. Using the M/M/1 equations in Table 8.6:
   Tr = Ts/(1 – λTs) = 1/(1 – 0.5) = 2
   Then V = (4 – 2×2) + (4 – 2) = 2
b. Using the M/M/1 priority equations from Table 8.9:
   ρ1 = λ1Ts1 = 0.25 = ρ2;  ρ = ρ1 + ρ2 = 0.5
   Tr1 = Ts1 + (ρ1Ts1 + ρ2Ts2)/(1 – ρ1) = 1.67
   Tr2 = Ts2 + (Tr1 – Ts1)/(1 – ρ) = 2.33
   V = (4 – 2×1.67) + (4 – 2.33) = 2.33
Therefore, the strict priority service is more efficient in the sense that it delivers a higher utility for the same throughput.

17.6
a. During a burst of S seconds, a total of MS octets are transmitted. A burst empties the bucket (b octets) and, during the burst, tokens for an additional rS octets are generated, for a total burst size of (b + rS). Thus:
   b + rS = MS
   S = b/(M – r)
b. S = (250 × 10^3)/(23 × 10^6) ≈ 11 msec
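The arithmetic in 17.5 and 17.6 can be checked numerically. A sketch: the utility expression V = (4 – 2Tr1) + (4 – Tr2) follows the solution above, and in 17.6 only the difference M – r = 23 × 10^6 is given, so the individual M and r values below are assumptions that preserve that difference.

```python
def mm1_response_time(ts, lam):
    """M/M/1 residence time: Tr = Ts / (1 - rho)."""
    return ts / (1 - lam * ts)

def priority_response_times(ts1, ts2, lam1, lam2):
    """Two-class strict-priority M/M/1 (the Table 8.9 formulas used above)."""
    rho1, rho2 = lam1 * ts1, lam2 * ts2
    tr1 = ts1 + (rho1 * ts1 + rho2 * ts2) / (1 - rho1)
    tr2 = ts2 + (tr1 - ts1) / (1 - rho1 - rho2)
    return tr1, tr2

# 17.5: utility V = (4 - 2*Tr1) + (4 - Tr2)
tr = mm1_response_time(1.0, 0.5)
print((4 - 2 * tr) + (4 - tr))                        # → 2.0
tr1, tr2 = priority_response_times(1.0, 1.0, 0.25, 0.25)
print(round((4 - 2 * tr1) + (4 - tr2), 2))            # → 2.33

# 17.6: S = b/(M - r); M and r here are hypothetical with M - r = 23e6.
def burst_duration(b, peak_rate, token_rate):
    return b / (peak_rate - token_rate)

print(round(burst_duration(250e3, 25e6, 2e6) * 1000, 1))  # → 10.9 (msec)
```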

17.7

Sum the GPS guarantee over all sessions j. For a session i that is continuously backlogged in the interval (τ, t], GPS guarantees, for every session j:

   Si(τ, t)/φi ≥ Sj(τ, t)/φj

Multiply both sides by φj and sum over all sessions j:

   Si(τ, t) × Σj φj ≥ φi × Σj Sj(τ, t) = φi × (t – τ)C

where the last step holds because the server is work-conserving and busy throughout (τ, t], so the total service delivered is (t – τ)C. Rearranging:

   Si(τ, t) ≥ [φi / Σj φj] × (t – τ)C

The last inequality demonstrates that session i is guaranteed a rate of:

   gi = [φi / Σj φj] × C

17.8

a. If the traffic is fairly bursty, then THmin should be sufficiently large to allow the link utilization to be maintained at an acceptably high level.
b. The difference between the two thresholds should be larger than the typical increase in the calculated average queue length in one RTT.

17.9

60 Mbits


CHAPTER 18 PROTOCOLS FOR QOS SUPPORT

18.1 a. Problem: IPv6 inserts a variable number of variable-length Internet-layer headers before the transport header, increasing the difficulty and cost of packet classification for QoS. Solution: Efficient classification of IPv6 data packets could be obtained using the Flow Label field of the IPv6 header.
b. Problem: IP-level security, under either IPv4 or IPv6, may encrypt the entire transport header, hiding the port numbers of data packets from intermediate routers. Solution: There must be some means of identifying source and destination IP users (equivalent to the TCP or UDP port numbers). With IP-level security, there is a parameter, called the Security Parameter Index (SPI), carried in the security header. While SPIs are allocated based on destination address, they will typically be associated with a particular sender. As a result, two senders to the same unicast destination will usually have different SPIs. In order to support the control of multiple independent flows between source and destination IP addresses, the SPI will be included as part of the FILTER_SPEC.

18.2

The diagram is a simplification. Recall that the text states that "Transmissions from all sources are forwarded to all destinations through this router."

18.3 a. The purpose of the label is to get the packet to the final router. Once the next-to-last router has decided to send the packet to the final router, the label no longer has any function, and need no longer be carried.
b. It reduces the processing required at the last router: it need not pop the label. [RFC 3031]

18.4 a. When the last label is popped from a packet's label stack (resulting in the stack being emptied), further processing of the packet is based on the packet's network layer header. The LSR which pops the last label off the stack must therefore be able to identify the packet's network layer protocol.
b. The identity of the network layer protocol must be inferable from the value of the label which is popped from the bottom of the stack, possibly along with the contents of the network layer header itself. Therefore, when the first label is pushed onto a network layer packet, either the label must be one which is used ONLY for packets of a particular network layer, or the label must be one which is used ONLY for a specified set of network layer protocols, where packets of the specified network layers can be distinguished by inspection of the network layer header. Furthermore, whenever that label is replaced by another label value during a packet's transit, the new value must also be one which meets the same criteria. If these conditions are not met, the LSR which pops the last label off a packet will not be able to identify the packet's network layer protocol.
c. The restrictions only apply to the bottom (last) label in the stack. [RFC 3032]

18.5

RTP uses periodic status reporting among all group participants. When the number of participants becomes large, the reporting period is scaled up to reduce the overall bandwidth consumption of control messages. Since the reporting period may be large (on the order of tens of seconds), it conveys only coarse-grained information about network congestion. Hence this information cannot be used to relieve network congestion, which needs finer-grained feedback. This information, however, can be used as a means for session-level (as opposed to packet-level) flow control, where a source may adjust its transmission rate, and receivers may adjust their playback points, according to the coarse-grained information.

18.6

Define SSRC_r = receiver issuing this receiver report; SSRC_n = source receiving this receiver report; A = arrival time of report; LSR = value in "Time of last sender report" field; DLSR = value in "Delay since last sender report" field. Then

Total round-trip time = A – LSR
Round-trip propagation delay = A – LSR – DLSR
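As a sanity check, the two formulas can be evaluated directly. This is an illustrative sketch with made-up timestamp values in seconds; actual RTCP carries LSR and DLSR as 32-bit fields in units of 1/65536 s.

```python
# Sketch of the round-trip computation above (illustrative values only;
# real RTCP receiver reports carry LSR and DLSR as 32-bit fixed-point
# fields in units of 1/65536 second).

def rtcp_round_trip(arrival, lsr, dlsr):
    """Return (total round-trip time, propagation delay) in seconds."""
    total = arrival - lsr          # A - LSR
    propagation = total - dlsr     # A - LSR - DLSR
    return total, propagation

# Hypothetical: report sent at t=100.000 s, held 50 ms, arrives at t=100.250 s.
total, prop = rtcp_round_trip(arrival=100.250, lsr=100.000, dlsr=0.050)
print(round(total, 3), round(prop, 3))   # 0.25 0.2
```

Subtracting DLSR removes the time the report was held at the receiver, leaving only the two-way propagation delay.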


CHAPTER 19 OVERVIEW OF INFORMATION THEORY

19.1

Let X = H(P1 + P2, P3) and Y = (P1 + P2) H[P1/(P1 + P2), P2/(P1 + P2)]. Then

X = –(P1 + P2)log(P1 + P2) – P3 log(P3)
Y = –(P1 + P2)[(P1/(P1 + P2))log(P1/(P1 + P2)) + (P2/(P1 + P2))log(P2/(P1 + P2))]
  = –P1 log(P1) + P1 log(P1 + P2) – P2 log(P2) + P2 log(P1 + P2)
X + Y = –P1 log(P1) – P2 log(P2) – P3 log(P3) = H(P1, P2, P3)
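The grouping identity derived above can be spot-checked numerically; the particular distribution below is an arbitrary choice for illustration, and base-2 logs are assumed.

```python
# Numeric check of the entropy grouping identity:
# H(P1, P2, P3) = H(P1+P2, P3) + (P1+P2) H(P1/(P1+P2), P2/(P1+P2))
from math import log2

def H(*probs):
    """Entropy (bits) of a distribution given as positional probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

P1, P2, P3 = 0.5, 0.3, 0.2          # arbitrary illustrative distribution
lhs = H(P1, P2, P3)
rhs = H(P1 + P2, P3) + (P1 + P2) * H(P1 / (P1 + P2), P2 / (P1 + P2))
print(abs(lhs - rhs) < 1e-12)       # True
```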

19.2

No, because the code for x2 = 0001 is a prefix of the code for x5 = 00011. The sequence 000110101111000110 can be deciphered as either:
0001 101011 1100 0110 = x2 x8 x4 x3
00011 010 11110 00110 = x5 x1 x7 x6
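The prefix test can be mechanized. The codewords below are recovered from the two parses shown above:

```python
# Check the prefix property for the code in 19.2. Codewords are taken
# from the two decodings of 000110101111000110 shown above.
codes = {"x1": "010", "x2": "0001", "x3": "0110", "x4": "1100",
         "x5": "00011", "x6": "00110", "x7": "11110", "x8": "101011"}

def prefix_violations(codebook):
    """Return all pairs (a, b) where a's codeword is a prefix of b's."""
    words = list(codebook.items())
    return [(a, b) for a, wa in words for b, wb in words
            if a != b and wb.startswith(wa)]

print(prefix_violations(codes))   # [('x2', 'x5')] -- x2 is a prefix of x5
```

A single violation is enough to make decoding ambiguous, as the two parses demonstrate.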

19.3 a.
x1 00
x2 11
x3 010
x4 011
x5 100
x6 101

b.
x1 1
x2 00
x3 010
x4 0111
x5 01100
x6 01101

c.
x1 00
x2 10
x3 010
x4 110
x5 0001
x6 1111
x7 1110
x8 01101
x9 01100

d.
x1 10
x2 000
x3 011
x4 110
x5 111
x6 0101
x7 00100
x8 00110
x9 00111
x10 01000
x11 01001
x12 001010
x13 001011

19.4

Let N be the average code-word length of C and N' that of C'. Then
N' – N = (Pi Lk + Pk Li) – (Pi Li + Pk Lk) = (Pi – Pk)(Lk – Li) > 0
(in C the more probable symbol carries the shorter codeword, so Pi > Pk implies Lk > Li). Therefore C' is less optimal than C.

19.5

Use Equation 6.1: P(X) = Σi P(X|Ei)P(Ei), and assume PA = 0.8 and PB = 0.2. Then

PA = Pr(A|A)PA + Pr(A|B)PB = 0.9 × 0.8 + 0.4 × 0.2 = 0.72 + 0.08 = 0.8
PB = Pr(B|A)PA + Pr(B|B)PB = 0.1 × 0.8 + 0.6 × 0.2 = 0.08 + 0.12 = 0.2
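A quick check that (0.8, 0.2) is indeed stationary for this two-state chain, using the transition probabilities above:

```python
# Stationarity check for the chain in 19.5:
# Pr(A|A)=0.9, Pr(A|B)=0.4, Pr(B|A)=0.1, Pr(B|B)=0.6
PA, PB = 0.8, 0.2
next_PA = 0.9 * PA + 0.4 * PB   # probability of being in A one step later
next_PB = 0.1 * PA + 0.6 * PB   # probability of being in B one step later
print(round(next_PA, 10), round(next_PB, 10))   # 0.8 0.2
```

The distribution reproduces itself after one step, so it is the stationary distribution.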


19.6 Scheme 1: This coding scheme could not have been generated by a Huffman algorithm, since it is not optimal: x4 could have been represented by 1 instead of 10, and x3 by 00 instead of 001.
Scheme 2: It is a Huffman code because it is a prefix code, it is optimal, and it forms a full binary tree. P(x1) = 0.55; P(x2) = 0.25; P(x3) = 0.1; P(x4) = 0.1
Scheme 3: This coding scheme could not have been generated by a Huffman algorithm, since it is not a prefix scheme: x1 is a prefix of x3.
Scheme 4: It is a Huffman code because it is a prefix code, it is optimal, and it forms a full binary tree. P(x1) = 0.3; P(x2) = 0.2; P(x3) = 0.1; P(x4) = 0.4
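The "prefix code plus full binary tree" test used above can be expressed as: the code is prefix-free and its Kraft sum equals exactly 1. The bit patterns below are a hypothetical assignment consistent with Scheme 2's probabilities (0.55, 0.25, 0.1, 0.1); the exercise's actual codewords appear in the problem statement, not in this solution.

```python
# Huffman-candidate test: prefix-free and Kraft sum exactly 1
# (a full binary tree uses up all the code space).
from fractions import Fraction

def is_prefix_free(words):
    return not any(a != b and b.startswith(a) for a in words for b in words)

def kraft_sum(words):
    """Exact Kraft sum: sum of 2^(-len(w)) over all codewords."""
    return sum(Fraction(1, 2 ** len(w)) for w in words)

# Hypothetical codewords with the Huffman lengths for (0.55, 0.25, 0.1, 0.1)
scheme2 = ["0", "10", "110", "111"]
print(is_prefix_free(scheme2), kraft_sum(scheme2) == 1)   # True True
```

Exact rational arithmetic avoids the floating-point rounding that could make a Kraft sum of 1 appear as 0.9999….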


CHAPTER 20 LOSSLESS COMPRESSION

20.1

Advantage: MNP eliminates the necessity of using a special compression-indicating character, because the three repeating characters both indicate that compression has occurred and identify the character that was compressed.
Disadvantages: (1) MNP results in data expansion when a sequence of only three repeating characters is encountered in the original data stream. (2) MNP encoding requires four characters, compared to three characters for ordinary run-length encoding.
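An illustrative sketch of the MNP-style scheme described above; the count width and the absence of escaping are simplifying assumptions of mine, not the actual protocol encoding.

```python
# MNP-style run-length sketch: three literal copies of a character signal
# a run; a count of the remaining repetitions follows. Simplified model,
# not the actual MNP5 wire format.
def mnp_compress(s):
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        run = j - i
        if run >= 3:
            # three literal copies + count of characters beyond the three
            out.append(s[i] * 3 + str(run - 3))
            i = j
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

print(mnp_compress("aaaaaabcd"))   # aaa3bcd
print(mnp_compress("aaab"))        # aaa0b -- expansion for a run of exactly 3
```

The second call shows disadvantage (1): a run of exactly three characters grows from three characters to four.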

20.2 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0


0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0

20.3

Decoding of the sequence 0 0 1 3 5 2. Initial dictionary: 0 = a, 1 = b.

Code 0: output a. Decoding so far: a
Code 0: output a; add dictionary entry 2 = aa. Decoding so far: aa
Code 1: output b; add entry 3 = ab. Decoding so far: aab
Code 3: output ab; add entry 4 = ba. Decoding so far: aabab
Code 5: this code is not yet in the dictionary; output the previous string plus its own first character, aba, and add entry 5 = aba. Decoding so far: aabababa
Code 2: output aa; add entry 6 = abaa. Decoding so far: aabababaaa

Final dictionary: 0 a, 1 b, 2 aa, 3 ab, 4 ba, 5 aba, 6 abaa

0 0 1 3 5 2 => Final decoding: aabababaaa
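The trace above can be reproduced with a small LZW decoder; code 5 exercises the special case of a code that refers to the dictionary entry still being built.

```python
# LZW decoding for 20.3. Initial dictionary maps 0 -> "a", 1 -> "b".
def lzw_decode(codes, dictionary):
    dictionary = list(dictionary)
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code < len(dictionary):
            entry = dictionary[code]
        else:
            # code refers to the entry being built: previous string
            # plus its own first character
            entry = prev + prev[0]
        dictionary.append(prev + entry[0])   # complete the pending entry
        out.append(entry)
        prev = entry
    return "".join(out)

print(lzw_decode([0, 0, 1, 3, 5, 2], ["a", "b"]))   # aabababaaa
```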

20.4

a. In LZ77, compression takes longer because the encoder has to search the buffer to find the best match, whereas the decoder's job is only to retrieve the symbols from the buffer, given an index and a length.
b. Similarly, for LZ78, the encoding process requires the dictionary to be searched for the best entry matching the input stream, whereas the decoder simply retrieves an entry given by an index. However, the difference might not be as significant as in LZ77, since both the encoder and decoder need to build a dictionary.

20.5

LZ77 is a better technique for encoding aaaa…aaa, assuming the length of the buffer is sufficiently large compared to k. In the best case, if the buffer size is k, the sequence can be encoded with just two triplets: one emitting the first a as a literal, and one self-referencing copy that covers the remaining a's. LZ78 will need to generate a dictionary with entries like a, aa, aaa, etc., thus generating a longer bitstream, besides the overhead of using a dictionary. On the other hand, if the size of the LZ77 buffer is small compared to k, LZ78 would be slightly more efficient, because each LZ77 triplet can encode at most a fixed number of symbols (bounded by the buffer size), while LZ78 will encode one additional symbol with each encoding.

20.6 a.

symbol       g     f     e     d     space  c     b     a
probability  8/40  7/40  6/40  5/40  5/40   4/40  3/40  2/40
code         00    110   111   010   101    011   1000  1001

b. The parsed phrases, numbered in order:

1 a
2 1space
3 b
4 3b
5 space
6 c
7 6c
8 6space
9 d
10 9d
11 10space
12 e
13 12e
14 13e
15 5f
16 f
17 16f
18 17f
19 g
20 19g
21 20g

c.

Symbol  Pi     Cumulative   Interval        Binary Rep. of  Code
               Probability                  Lower Bound
a       0.05   0.05         [0, 0.05)       0.00000         00000
b       0.075  0.125        [0.05, 0.125)   0.00001         00001
c       0.1    0.225        [0.125, 0.225)  0.00100         0010
d       0.125  0.35         [0.225, 0.35)   0.00111         0011
e       0.15   0.5          [0.35, 0.5)     0.01011         01
f       0.175  0.675        [0.5, 0.675)    0.10000         100
g       0.2    0.875        [0.675, 0.875)  0.10101         101
space   0.125  1.0          [0.875, 1.0)    0.11100         11
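The "Binary Representation of Lower Bound" column is obtained by repeated doubling, reading off the integer part at each step. A minimal sketch (5 bits, as in the table):

```python
# Binary expansion of a fraction in [0, 1): double repeatedly and read
# off the integer parts. Reproduces the "Lower Bound" column above.
def binary_fraction(x, bits=5):
    digits = []
    for _ in range(bits):
        x *= 2
        digits.append(str(int(x)))
        x -= int(x)
    return "0." + "".join(digits)

for sym, low in [("c", 0.125), ("d", 0.225), ("e", 0.35), ("g", 0.675)]:
    print(sym, binary_fraction(low))
# c 0.00100 / d 0.00111 / e 0.01011 / g 0.10101
```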

20.7 a. We need to exploit the following fact: each row of the sorted matrix is a circular buffer that contains the original sequence in proper order, but rotated some number of positions. The following procedure will do the job. Label the final column in the sorted matrix L (the output string), the first column F, and the output integer I. Then perform these steps.
1. Given L, reconstruct F. This is easy: F contains the characters in L in alphabetical order.
2. Start with the character in row I, column L. By definition, this is the first character of the original string.
3. Go to the character in F in the current row. This is the next character in the string. This is so because each row is a circular buffer.
4. Go to the row that has the same character found in (3) in the Lth column. Repeat steps (3) and (4) until the entire original string is recovered.
The following figure is an example. [Figure not reproduced.]

In step 4, there may be more than one character in the Lth column matching the current character in the F column. In our example, the first time that an A is encountered in F, there are two choices for A in L. The ambiguity is resolved by observing that multiple instances of a character must appear in the same order in both F and L, although in different rows.
b. BWT alters the distribution of characters in a way that enhances compression. Each character in L is a prefix to the character string beginning with the character in F in the same row (except for the single case where L contains the last character in the string). BWT sorts the rows so that identical suffixes will be together. Often the same prefix character will occur for multiple instances of the same suffix. For example, if the sequence "hat" appears multiple times in a long string, all of these instances will be contiguous in rows beginning with "hat". Typically, the prefix character for almost all of these strings is "t". Thus, there could be a number of consecutive instances of "t" in L, providing an opportunity for compression.
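Steps 1–4 of part (a) translate directly into code. A stable sort of L yields F and, as a by-product, the rank-matched row mapping that resolves the ambiguity discussed above; I is taken as the row whose last-column character is the first character of the original string, per step 2. The "banana" test values are my own illustration, not from the exercise.

```python
# Inverse BWT following the procedure in 20.7a. The stable sort
# rank-matches repeated characters between F and L automatically.
def inverse_bwt(L, I):
    n = len(L)
    # perm[k] = row whose L-column character supplies F[k], i.e. the
    # "go to the row with that character in column L" mapping of step 4
    perm = sorted(range(n), key=lambda k: L[k])   # stable sort
    F = [L[k] for k in perm]                      # step 1
    out, row = [L[I]], I                          # step 2
    for _ in range(n - 1):
        out.append(F[row])                        # step 3
        row = perm[row]                           # step 4
    return "".join(out)

# For "banana": last column of the sorted rotation matrix is "nnbaaa",
# and row 2 holds the rotation whose last character is the string's first.
print(inverse_bwt("nnbaaa", 2))   # banana
```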


CHAPTER 21 LOSSY COMPRESSION

21.1

The only change is to the dc component.

21.2

a.

S(0)   S(1)   S(2)  S(3)  S(4)  S(5)  S(6)  S(7)
484    –129   –95   26    –73   4     –31   –11

b.

T1 =
  8   2   2   0
  2   0   2   0
  4  –2   0   0
  2   0   2   2

P1 = (W')^-1 T1 W^-1, with

(W')^-1 =
  1   1   1   0
  1   1  –1   0
  1  –1   0   1
  1  –1   0  –1

W^-1 =
  1   1   1   1
  1   1  –1  –1
  1  –1   0   0
  0   0   1  –1

(W')^-1 T1 =
 14   0   4   0
  6   4   4   0
  8   2   2   2
  4   2  –2  –2

P1 = [(W')^-1 T1] W^-1 =
 18  10  14  14
 14   6   2   2
 12   8   8   4
  4   8   0   4

We use a gray scale with 21 levels, from 0 = white to 20 = black.

[Gray-scale renderings of P and P1 not reproduced.] The two matrices are very similar.

c.

T2 =
  8   0   0   0
  0   0   0   0
  4   0   0   0
  0   0   2   0

P2 = (W')^-1 T2 W^-1

