CCNA Study Guide Vol2
Welcome, and thanks for purchasing my ICND2 Study Guide! You’re about to benefit from the same clear, comprehensive CCENT and CCNA instruction that thousands of students around the world have used to earn their certifications. They’ve done it, and you’re about to do the same!
On the next page or two, I've listed some additional free resources that will definitely help you on your way to the CCENT, the CCNA, and to real-world networking success. Use them to their fullest, and let's get started on your exam pass! Chris Bryant, "The Computer Certification Bulldog":
Udemy: https://www.udemy.com/u/chrisb Over 38,000 happy students have made me the #1 individual instructor on Udemy, and that link shows you a full list of my free and almost-free Video Boot Camps! (Use the discount code BULLDOG60 to join my 27-hour CCNA Video Boot Camp for just $44!)
YouTube : http://www.youtube.com/user/cc (Over 325 free training videos!)
Website: http://www.thebryantadvantage. (New look and easier-to-find tutorials in Dec. 2013!) Facebook: http://on.fb.me/nlT8SD Twitter: https://twitter.com/ccie12933 See you there!
Chris B.
Copyright © 2013 The Bryant Advantage, Inc. All rights reserved. This book or any portion thereof may not be reproduced or used in any manner whatsoever without the express written permission of the publisher except for the use of brief quotations in a book review. No part of this publication may be stored in a retrieval system,
transmitted, or reproduced in any way, including but not limited to photocopy, photograph, magnetic, or other record, without the prior agreement and written permission of the publisher. The Bryant Advantage, Inc., has attempted throughout this book to distinguish proprietary trademarks from descriptive terms by following the capitalization style used by the manufacturer. Copyrights and trademarks of all products and services listed or described herein are property of their respective owners and companies. All rules and laws pertaining to said copyrights and trademarks are
inferred. Printed in the United States of America First Printing, 2013 The Bryant Advantage, Inc. 9975 Revolutionary Place Mechanicsville, VA 23116
Contents
The Spanning Tree Protocol
HDLC, PPP, and Frame Relay (Plus A Few Cables!)
Routing And IP Addressing Fundamentals
The Wildcard Mask
OSPF and Link-State Protocols
EIGRP
Intro To Network Management and Licensing
Intro To VPNs and Tunnels
1st-Hop Redundancy Protocols
IP Version 6
Mastering Binary Math and Subnetting
The Spanning Tree Protocol I’ve said it before and I’ll say it again — in networking, as in life, we’ll take all the backup plans we can get! In our networks, that “Plan B” takes the form of redundancy, and in switching, that redundancy takes the form of having multiple paths available between any two given
endpoints in the network. That helps us avoid the single point of failure, which in today’s networks is totally unacceptable. (A single point of failure is a point in the network where if something goes down, the entire network comes to a standstill.) The benefit of those additional paths does carry some danger. If all the paths in the following diagram were available at all times, switching loops could form.
What we need is for one path between any two endpoints to be available, while stopping the other paths from being used unless the primary path goes
down. Then, of course, we want that backup path to become available ASAP. The Spanning Tree Protocol (STP), defined by IEEE 802.1d, does this for us by placing ports along the most desirable path into forwarding mode, while ports along less-desirable paths are placed into blocking mode. Once STP converges, every port on these paths is in either forwarding or blocking mode. At that point, only one path is available between any two
destinations, and a switching loop literally cannot occur. Note: You’re going to hear about routing loops later in your studies. Those happen at Layer 3. STP has nothing to do with routing loops. STP is strictly a Layer 2 protocol and is used to prevent switching loops. Watch that on your exam. If a problem arises with the open path, STP will run the spanning-tree algorithm to recalculate the available paths and determine the best path.
Ports along the new best path will be brought out of blocking mode and into forwarding mode, while ports along lessdesirable paths will remain in blocking mode. Once again, only one path will be available between any two endpoints. Ports do not transition from blocking to forwarding mode immediately. These built-in delays help guard against switching loops during the transition. More about those timers later in this section. Let’s say STP has decided the
best path from SW1 to SW3 is the most direct path. (This is not always the case, as you’ll see.) Logically, SW1 sees only one way to get to SW3.
If that path becomes unavailable, STP will recalculate its available paths. When that recalculation ends, STP will begin to bring the appropriate ports out of blocking mode and into forwarding mode.
Switching loops cause several problems:

- Frames can't reach their intended destination, either totally or in part, due to MAC address table entries that continually change.
- Unnecessary strain is put on switch CPUs.
- Continually flooded frames end up causing a broadcast storm.
- Unnecessary use of bandwidth.
Luckily for us, switching loops just don't occur that often, because STP does a great job of preventing them before they can occur. The benefits of STP begin with the exchange of BPDUs and the root bridge election.
The Root Bridge Election STP must first determine a root bridge for every Virtual LAN (VLAN). And yes, your root bridges will be switches. The term “root bridge” goes back to STP’s pre-switch days, and the term stuck even after the move away from bridges to switches. Just one of those things! Speaking of “one of those things”, the root bridge election is one of those things that can be confusing at first, since you’re reading about the theory
and you may not have seen these terms before. Don’t worry about it. Following the description of the process, I have two fully-illustrated examples for you that are both packed with readouts from live Cisco switches. So hang in there and you’ll knock this stuff out like a champ on exam day! Now on to the election…. When people are born, they act like they are the center of the universe. They yell, they scream, they expect to have their every desire carried out
immediately. (Some grow out of this; some do not.) In a similar fashion, when a switch is first powered on, it believes it is the root bridge for every VLAN on your network. There must be a selection process to determine the true root bridge for each VLAN, and our selection process is an election process. The election process is carried out by the exchange of BPDUs (Bridge Protocol Data Units). Switches are continually sending or forwarding BPDUs,
but hubs, repeaters, routers, and servers do not send BPDUs.
Real-world note: There are different types of BPDUs, and the one we talk about 99% of the time is technically called a Hello BPDU. This BPDU type is
often simply referred to as “BPDU”, and that’s the way I refer to it as well. The Hello BPDU contains a lot of important info… The root bridge’s Bridge ID (BID). The BID is a combination of the bridge’s priority and MAC address. The format of the BID puts the priority in front of the MAC address, so the only way the MAC address comes into play during the election is when the contending switches’ priority is exactly the same.
The bridge with the lowest BID will be elected root bridge. The default priority value is 32768 + the Sys-Id-Ext, which just happens to be the VLAN number. For example, here's SW1's priority for VLAN 1:

Bridge ID   Priority   32769   (priority 32768 sys-id-ext 1)

SW1's priority for VLAN 100:

Bridge ID   Priority   32868   (priority 32768 sys-id-ext 100)

I know you see the pattern.
Since the lowest BID wins, the switch with the lowest MAC address will become the root bridge for all VLANs in your network unless the priority is changed. Cost To Reach Root From This Bridge: The path with the lowest overall cost to the root is the best path. Every port is assigned a cost relative to its speed. The higher the speed, the lower the port cost. BID Of The BPDU’s Sender: This simply identifies which switch sent the BPDU.
The election proceeds as the BPDUs make their way amongst the switches…. When a switch receives a BPDU, the switch compares the root bridge BID contained in the BPDU against its own BID. If the incoming root bridge BID is lower than that of the switch receiving it, the switch starts announcing that device as the root bridge. The BPDU carrying this winning BID is called a superior BPDU, a term we’ll revisit later in this section. If the incoming BID is higher
than that of the receiver, the receiver continues to announce itself as the root. A BPDU that carries a non-winning BID is an inferior BPDU. This process continues until every switch has agreed on the root bridge. At that point, STP has reached a state of convergence. “Convergence” is just a fancy way of saying “everybody’s agreed on something.” Once all switches agree on the root bridge, every port on every path will be in blocking or
forwarding mode. Here are the STP port states you should be aware of:

BLOCKING: Frames are not forwarded, but BPDUs are accepted.

LISTENING: Frames are not forwarded, and we're doing some spring cleaning on the MAC address table, as entries that aren't heard from during this time are cleared out.

LEARNING: Frames are not forwarded, but fresh MAC entries are being placed into the MAC table as frames enter the switch.

FORWARDING: Frames are forwarded, and MAC addresses are still learned.

There is a fifth STP state, disabled, and it's just what it sounds like. The port is actually disabled, and disabled ports cannot accept BPDUs. We're going to take two
illustrated looks at STP in action, the first with two switches and the second with three. In the first example, there are two separate crossover cables connecting the switches. It’s important to note that once STP has converged in this network, one port — and only one port — will be in blocking mode, with the other three in forwarding mode.
I haven’t configured anything on these switches beyond a hostname and the usual lab commands, so what VLANs, if any, will be running on these switches? We have five default VLANs, and only one is populated. You may never use those bottom four VLANs, but I’d have those
numbers memorized for the exam.

SW1#show vlan brief

VLAN  Name                 Status    Ports
----  -------------------  --------  ----------------------------
1     default              active    Fa0/5, Fa0/6, Fa0/7, Fa0/8,
                                      Fa0/9, Fa0/10
1002  fddi-default
1003  token-ring-default
1004  fddinet-default
1005  trnet-default
I’ll edit those four bottom VLANs out for the rest of this section, so note them now. All ports belong to VLAN 1 by
default. There’s something missing, though… notice the ports used to connect the switches, Fa0/11 and Fa0/12, don’t show up in show vlan brief? That’s because they’re trunk ports, ports connected directly to other switches. You can see what ports are trunking with the show interface trunk command.
SW1#show interface trunk

Port      Mode         Encapsulation
Fa0/11    desirable    802.1q
Fa0/12    desirable    802.1q

Port      Vlans allowed on trunk
Fa0/11    1-4094
Fa0/12    1-4094

Port      Vlans allowed and active
Fa0/11    1
Fa0/12    1

Port      Vlans in spanning tree forwarding state
Fa0/11    1
Fa0/12    none
Running both show vlan brief and show interface trunk is a great way to start the L2 troubleshooting process. Now back to our network….
To see each switch’s STP values for VLAN 1, we’ll run show spanning-tree vlan 1. First, we’ll take a look at SW1’s output for that command. (By the way, we’re running PVST, or “Per-VLAN Spanning Tree”, which is why we have to put the VLAN number in. With PVST, each VLAN will run an
independent instance of STP.) SW1#show spanning-tree vlan 1
VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     000b.be2c.5180
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     000f.90e2.25c0
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300

Interface    Role Sts Cost    Prio.Nbr
Fa0/11       Root FWD 19      128.11
Fa0/12       Altn BLK 19      128.12
The Root ID is the BID info for the root bridge, and the Bridge ID is the BID info for the local switch. Since the addresses are different for the Root and Bridge ID, this switch is definitely not the root switch. If they’re the same, you’re on the root switch! The BID of any switch is the priority followed by the MAC address, so let’s compare the
two values: Root ID BID: 32769:000b-be-2c-51-80 Bridge ID BID: 32769:000f-90-e2-25-c0 The device with the lowest BID will be elected root. Since both devices have the exact same priority, the switch with the lowest MAC address is named the root switch, and that’s exactly what happened here. On SW1, Fa0/11 is in FWD status, short for forwarding. This port is marked Root,
meaning this port will be used by SW1 to reach the root switch. Fa0/11 is SW1’s root port for VLAN 1. Fa0/12 is in BLK status, short for blocking. How did the switch decide to put Fa0/11 into forwarding mode while 0/12 goes into blocking? The switch first looked at the path cost, but that’s the same for both ports (19). The tiebreaker is the port priority, found under the “prio.nbr” field. Fa0/11’s port priority is lower, so it’s chosen as the root port.
Let’s mark that on our exhibit and then move on to SW2.
Here's the output of show spanning-tree vlan 1 on SW2.

SW2#show spanning-tree vlan 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     000b.be2c.5180
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     000b.be2c.5180
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  15

Interface    Role Sts Cost    Prio.Nbr
Fa0/11       Desg FWD 19      128.11
Fa0/12       Desg FWD 19      128.12
We have two really big hints that SW2 is the root switch for
VLAN 1. The first is really, really big — the phrase “This bridge is the root”! The next isn’t quite as obvious. Both Fa0/11 and Fa0/12 are in FWD status. A root bridge will have all of its ports in forwarding mode. It would be easy to look at this simple network and say that two ports need to be blocked to prevent switching loops, but blocking one is actually enough to do the job. Here’s how our switches look now:
It’s a common misconception that the Fa0/12 port on both switches would be blocked in this situation, but now we know that just isn’t the case. Now we’ll take a look at a three-switch example from a live Cisco switching network
and bring another port type into the discussion. We have a three-switch full mesh topology. I’ll post the MAC addresses and BIDs of the switches below the diagram. We’ll follow that with a look at the election from each switch’s point of view and decide what we think should have happened in the root bridge election. Then we’ll see what happened in the root bridge election! This is an excellent practice exam question. You must be able to look at a diagram such
as this, along with the addresses, and be able to answer the following questions:

- Which bridge is the root?
- Which ports will the non-root bridges select as their root port?
- Where are the designated ports?
- How many ports will STP block once convergence is reached?

All questions we're about to answer with configs from live
Cisco switches! The switch MAC addresses: SW1: 000f.90e2.2540 SW2: 0022.91bf.5c80 SW3: 0022.91bf.bd80
The priorities and port speeds have all been left at the default, so each switch shows Priority 32769 (priority 32768 sys-id-ext 1) for VLAN 1.

The resulting BIDs:

SW1: 32769:000f.90e2.2540
SW2: 32769:0022.91bf.5c80
SW3: 32769:0022.91bf.bd80
Here’s what happened during the election, assuming all three switches were turned on at the same time. SW1 sees BPDUs from SW2 and
SW3, both announcing they're the root. From SW1's point of view, these are inferior BPDUs; they contain BIDs that are higher than SW1's. For that reason, SW1 continues to announce via BPDUs that it is the root. SW2 sees BPDUs from SW1 and SW3, both announcing they're the root. SW2 sees the BIDs in them, and while SW3's BPDU is an inferior BPDU, SW1's is a superior BPDU, since SW1's BID is lower than that of SW2. SW2 will now forward BPDUs it
receives announcing SW1 as the root.
SW3 is about to start developing a massive inferiority complex, since the BPDUs coming at it from SW1 and SW2 are both superior BPDUs. Since the BPDU from SW1 has the lowest BID of those two BPDUs, SW3 recognizes SW1 as the root and will forward BPDUs announcing that information. As the root switch, SW1 will have both ports placed into
Forwarding mode, as verified by the edited output of show spanning vlan 1. Note that both of these ports are designated ports.

SW1#show spanning vlan 1

Interface    Role Sts Cost
Fa0/11       Desg FWD 19
Fa0/12       Desg FWD 19
SW2 and SW3 now need to select their root port. Each nonroot bridge has two different ports that it can use to reach the root bridge, but the cost is lower for the port that is physically closer to the root bridge (we’re assuming all port
speeds are the same). Those ports will now be selected as the root port on their respective switches, verified by show spanning vlan 1.

SW2#show spanning vlan 1

Interface    Role Sts Cost
Fa0/11       Root FWD 19

SW3#show spanning vlan 1

Interface    Role Sts Cost
Fa0/11       Root FWD 19
We’re almost done! Either SW2 or SW3 must be elected the designated bridge of their common segment. The switch that advertises the lowest cost to the root bridge will be the designated bridge, and that switch’s port on the shared
segment will be the designated port (DP). In this network, SW2 and SW3 will advertise the same cost to each other over the shared segment. In that case, the switch with the lowest BID will be the designated bridge, and we know that's SW2. SW2's Fa0/12 port will be put into forwarding mode and named the DP for that segment. SW3's Fa0/12 port will be put into blocking mode and will be that segment's non-designated port (NDP). The DP is always in
forwarding mode, and the NDP will always be in blocking mode. All forwarding ports on the root switch are considered DPs. A root switch will not have root ports. It doesn't have a specific port to use to reach the root, it is the root! We'll verify the DP and NDP port selection with show spanning vlan 1.

SW2#show spanning vlan 1

Interface    Role Sts Cost
Fa0/11       Root FWD 19
Fa0/12       Desg FWD 19

SW3#show spanning vlan 1

Interface    Role Sts Cost
Fa0/11       Root FWD 19
Fa0/12       Altn BLK 19
Now that STP has converged and all switches agree on the root, only the root will originate BPDUs. The other switches receive them, read them, update the port costs, and then forward them. Nonroot switches do not originate BPDUs. The amazing thing about that topology is that only one port ended up being put into blocking mode and five ports in forwarding mode! In the previous examples, the speed of both links between switches was the same. What if
the speeds were different?
In our earlier two-switch example, fast0/11 was chosen as the root port on SW1. The port cost was the same (19), so the port priority was the tiebreaker. In this scenario, the speeds of the links are not the same. The faster the port, the
lower the port cost, so now fast0/12 would be chosen as the RP on SW1. Here are some common port speeds and their associated STP port costs:

10 Mbps:   100
100 Mbps:  19
1 Gbps:    4
10 Gbps:   2

You must keep those costs in mind when examining a network diagram to determine
root ports, because it’s our nature to think the physically shortest path is the fastest path. STP does not see things that way. Consider:
At first glance, you'd think that SW B would select Fa0/1 as its root port. Would it? The BPDU carries the Root Cost, and this cost increments as the BPDU is forwarded throughout the network. An individual port's STP cost is locally significant only and is unknown by downstream switches. The root bridge will originate a BPDU with the Root Cost set to zero. When a neighboring switch receives this BPDU, that switch adds the cost of the port the BPDU was received on to the incoming Root Cost. Root Cost increments as BPDUs are received, not sent. That new value will be reflected in the outgoing BPDU that switch forwards. Let's look at the network again, with the port costs listed. 100 Mbps ports have a port cost of 19, and 1000 Mbps ports have a port cost of 4.
Reviewing two very important points regarding port cost:

- The root switch originates the BPDU with a cost of zero.
- The root path cost increments as BPDUs are received.

When SW A sends a BPDU directly to SW B, the root path cost is zero. That will increment to 19 as it's received by SW B. When SW A sends a BPDU to SW C, the root path cost is zero. That will increment to 4 as it's received by SW C. That BPDU is then forwarded to SW B, which then adds 4 to that cost as it's received on Fa0/2. That results in an overall root path cost of 8, which will result
in SW B naming Fa 0/2 as the root port.
The moral of the story: The physically shortest path is not always the logically shortest
path. Watch for that any time you see different link speeds in a network diagram! You Might Be A Root Switch If…. I’m going to quickly list four ways you can tell if you’re on the root, and four ways you can tell if you’re NOT on the root. I recommend you check out my free videos on my YouTube channel on this subject. The videos are free and on exam day, you’ll be VERY glad you watched them!
http://www.youtube.com/watch?v=9Db_5o_eXKE
http://www.youtube.com/watch?v=Hxf8f5U3eKU

Four tip-offs you're NOT on the root bridge:

- No "this bridge is the root" message
- The MAC address of the Root ID and Bridge ID are different
- The bridge has a root port
- There's a port in blocking mode

Four hints you ARE on the root bridge:

- There's a "this bridge is the root" message
- The MAC of the Root ID and Bridge ID are the same
- There are no root ports
- No ports in blocking mode

Changing The Root Bridge Election Results (How and Why)

If STP was left to its own
devices, a single switch is going to be the root bridge for every single VLAN in your network. That single switch is going to be selected because it has a lower MAC address than every other switch, which isn’t exactly the criteria you want to use to select a single root bridge. The time will definitely come when you want to determine a particular switch to be the root bridge for your VLANs, or when you will want to spread the root bridge workload. You can make this happen with the spanning-
tree vlan root command. In our previous two-switch example, SW 1 is the root bridge of VLAN 1. We can create 3 more VLANs, and SW 1 will always be the root bridge for every VLAN. Why? Because its BID will always be lower than SW 2's. For this demo, I've created VLANs 10, 20, and 30. The edited output of show spanning-tree vlan shows that SW 1 is the root bridge for all these new VLANs.
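(The VLAN creation itself isn't shown in the captures; it's just the usual global commands, sketched here rather than copied from the lab:

SW1(config)#vlan 10
SW1(config)#vlan 20
SW1(config)#vlan 30

Nothing more is needed before checking the per-VLAN spanning tree output.)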
SW1#show spanning-tree vlan 10

VLAN0010
  Spanning tree enabled protocol ieee
  Root ID    Priority    32778
             Address     000f.90e1.c240
             This bridge is the root

SW1#show spanning-tree vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     000f.90e1.c240
             This bridge is the root

SW1#show spanning-tree vlan 30

VLAN0030
  Spanning tree enabled protocol ieee
  Root ID    Priority    32798
             Address     000f.90e1.c240
             This bridge is the root
We’d like SW 2 to act as the
root bridge for VLANs 20 and 30 while leaving SW 1 as the root for VLANs 1 and 10. To make this happen, we’ll go to SW 2 and use the spanning-tree vlan root primary command.
SW2(config)#spanning-tree vlan 20 root primary
SW2(config)#spanning-tree vlan 30 root primary
SW2#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     000f.90e2.1300
             This bridge is the root

SW2#show spanning vlan 30

VLAN0030
  Spanning tree enabled protocol ieee
  Root ID    Priority    24606
             Address     000f.90e2.1300
             This bridge is the root
SW 2 is now the root bridge for both VLAN 20 and 30. Note the priority value, which we did not configure manually. More on that in a moment! This command has another great option:
SW2(config)#spanning-tree vlan 20 root ?
  primary     Configure this switch as primary root
  secondary   Configure switch as secondary root
You can configure a switch to be the standby root bridge with
the secondary option. This will change the priority just enough so the secondary root doesn’t become the primary immediately, but will become the primary if the current primary goes down. Let’s take a look at root secondary in action. We have a three-switch topology for this example. We’ll use the root primary command to make SW3 the root of VLAN 20. Which switch would become the root if SW3 went down?
SW3(config)#spanning vlan 20 root primary
SW3#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     0011.9375.de00
             This bridge is the root
  Bridge ID  Priority    24596  (priority 24576 sys-id-ext 20)
             Address     0011.9375.de00

SW2#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     0011.9375.de00
  Bridge ID  Priority    32788  (priority 32768 sys-id-ext 20)
             Address     0018.19c7.2700

SW1#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     0011.9375.de00
  Bridge ID  Priority    32788  (priority 32768 sys-id-ext 20)
             Address     0019.557d.8880
SW2 and SW1 have the same default priority, so the switch with the lowest MAC address will be the secondary root, and that’s SW2. Let’s use the root secondary command to make SW1 the secondary root switch for VLAN 20.
SW1(config)#spanning vlan 20 root secondary

SW1#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     0011.9375.de00
  Bridge ID  Priority    28692  (priority 28672 sys-id-ext 20)
             Address     0019.557d.8880
SW1 now has a priority of 28672, making SW1 the root if SW3 goes down. A priority value of 28672 is an excellent tipoff the root secondary command is in use. The config shows this as well:
spanning-tree mode pvst
spanning-tree vlan 20 priority 28672
The big question at this point:
Where is STP coming up with these priority settings? We’re getting the desired effect, but it would be nice to know where the numbers are coming from. And by a strange coincidence, here’s where they’re coming from! If the current root bridge’s priority is greater than 24,576, the switch sets its priority to 24576 in order to become the root. You saw that in the previous example. If the current root bridge’s
priority is less than 24,576, the switch subtracts 4096 from the root bridge’s priority in order to become the root. If that’s not enough to get the job done, another 4096 will be subtracted. If you don’t like those rules or you’ve just gotta set the values manually, the spanning-tree vlan priority command will do the trick. I personally prefer the spanning-tree vlan root command, since that command ensures that the priority on the
local switch is lowered sufficiently for it to become the root. With the spanning-tree vlan priority command, you have to make sure the new priority is low enough for the local switch to become the root switch. As you’ll see, you also have to enter the new priority in multiples of 4096.
SW2(config)#spanning-tree vlan 20 priority ?
  <0-61440>  bridge priority in increments of 4096
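Just as an illustration (the VLAN and value here are examples, not taken from the lab above), manually forcing a switch to win the election might look like this:

SW2(config)#spanning-tree vlan 20 priority 4096

Since 4096 is lower than any default priority or any value the root primary command would pick, SW2 would take over as root for VLAN 20, provided no other switch is set even lower.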
The STP Timers Once these elections have taken place, the root bridge will begin sending a Hello BPDU out all its ports every two seconds. This Hello BPDU serves as the heartbeat of STP. As long as the non-root bridges receive it, they know the path to the root is unchanged and stable. Once that heartbeat disappears, it’s an indication of a failure somewhere along the path. STP will run the spanningtree algorithm to determine the
best available path, and ports will be brought out of blocking mode as needed to build this path.

The Hello BPDUs carry values for three timers:

Hello Time: Time between Hello BPDUs. Default: 2 seconds.

Max Age: The bridge should wait this amount of time after not hearing a Hello BPDU before running the STP algorithm. Default: 20 seconds.

Forward Delay: The amount of time a port should stay in the listening and learning stages as it changes from blocking to forwarding mode. Default: 15 seconds.

Two important notes regarding changing these timers:

- These timer values weren't pulled out of the sky. Cisco has them set at these values to prevent switching loops during STP recalculations. Change them at your peril.
- To change these timers, do so only on the root. You can change them on a non-root, but the changes will not be advertised to the other switches!

You can change these timers with the spanning-tree vlan command, but if you have any funny ideas about disabling them by setting them to zero,
forget it! (I already tried.) Here are the acceptable values according to IOS Help, along with a look at the commands used to change these timers:
Switch(config)#spanning vlan ?
  WORD  vlan range, example: 1,3-5,7,9-11

Switch(config)#spanning vlan 1 ?
  forward-time  Set the forward delay for the spanning tree
  hello-time    Set the hello interval for the spanning tree
  max-age       Set the max age interval for the spanning tree
  priority      Set the bridge priority for the spanning tree
  root          Configure switch as root

Switch(config)#spanning vlan 1 forward-time ?
  <4-30>  number of seconds for the forward delay timer

Switch(config)#spanning vlan 1 hello-time ?
  <1-10>  number of seconds between generation of config BPDUs

Switch(config)#spanning vlan 1 max-age ?
  <6-40>  maximum number of seconds the information in a BPDU is valid
Even if you try to sneak a zero past the router — forget it, the router sees that fastball coming!
Switch(config)#spanning vlan 1
% Invalid input detected at '^' marker.
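For completeness, here's what legitimate timer changes might look like, with values chosen arbitrarily from within the ranges shown above rather than taken from any lab in this section:

Switch(config)#spanning-tree vlan 1 hello-time 4
Switch(config)#spanning-tree vlan 1 forward-time 20
Switch(config)#spanning-tree vlan 1 max-age 30

Again, make changes like these on the root only, and only when you have a very good reason to stray from the defaults.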
The STP Interface States

The transition from blocking to forwarding is not instantaneous. STP has interfaces go through two intermediate states between blocking and forwarding: listening and learning. A port coming out of blocking first goes into listening. The port is listening for Hello BPDUs from other possible root switches, and also takes this opportunity to do some spring cleaning on its MAC table. (If a MAC entry isn't heard from in this time frame, it's thrown out of the table.) This state's length is defined by the Forward Delay timer, 15 seconds by default. The port will then go into learning state. During this state, the switch learns the new location of switches and puts fresh-baked entries into its MAC table. Ports in learning state do not forward frames. Learning state also lasts the duration of the Forward Delay timer.
To review the order and timers involved:

- Switch waits 20 seconds without a Hello before beginning the transition process.
- Port comes out of blocking, goes into listening for 15 seconds.
- Port transitions from listening to learning, stays in learning for 15 seconds.
- Port transitions from learning to forwarding.

The one STP state not mentioned here is disabled. Some non-Cisco documentation does not consider this an official STP state, but since the CCNA is a Cisco exam, we certainly should! Ports in disabled mode are not learning MAC addresses, and they're not accepting or sending BPDUs. They're not doing anything! Those timers are there for a reason, but they're still a pain in the butt on occasion. Let's talk about one of those times
and what we can do about it!
Portfast

Consider the amount of time a port ordinarily takes to go from blocking to forwarding when it stops receiving Hello BPDUs:

- Port stays in blocking mode for 20 seconds before beginning the transition to listening (as defined by the Max Age value)
- Port stays in listening mode for 15 seconds before transitioning to learning (as defined by the Forward Delay value)
- Port stays in learning mode for 15 seconds before transitioning to forwarding mode (also as defined by Forward Delay)

That's 50 seconds, or what seems like 50 hours in networking time. In certain circumstances, we can avoid these delays with Portfast. Portfast allows a port to bypass the listening and learning
stages of this process, but is only appropriate to use on switch ports that connect directly to an end-user device, such as a PC. Using portfast on a port leading to another networking device can lead to switching loops. That threat is so serious that Cisco even warns you about it on the router when you configure Portfast.
SW2(config)#int fast 0/6
SW2(config-if)#spanning portfast
%Warning: portfast should only be enabled on ports connected to a single
 host. Connecting hubs, concentrators, switches, bridges, etc... to this
 interface when portfast is enabled, can cause temporary bridging loops.
 Use with CAUTION

%Portfast has been configured on FastEthernet0/6 but will only
 have effect when the interface is in a non-trunking mode.
That’s a pretty serious warning! I love the mention of “temporary bridging loops”. All pain is temporary, but that doesn’t make it feel good at the time! Portfast can be a real help in the right circumstances….
… and a real hazard in the wrong circumstances.
Make sure you know which is which! One excellent real-world application for portfast is configuring it on end-user ports that are having a little trouble getting IP addresses via DHCP. Those built-in delays can on
occasion interfere with DHCP. I’ve used it to speed up the IP address acquisition process more than once, and it works like a charm.
Per-VLAN Load Balancing And Etherchannels

STP brings a lot of good to our network, but on occasion, it gives us a bit of a kick in the butt. The kick here is that STP will leave only one trunk open between any two given switches, even if we have multiple crossover cables connecting them. While we obviously need STP to help us out with switching loop prevention, we'd really like to
use all of our available paths and bandwidth. Two ways to make that happen are per-VLAN load balancing and Etherchannels. Per-VLAN Spanning Tree (PVST) makes the load balancing option possible. Waaaay back in this section, I mentioned that every VLAN is running its own instance of STP in PVST. Now we’re going to see that in action! Let’s say we have VLANs 1
through 50 in our production network. We know that whether we have two switches or ten, by default one single switch will be the root for all VLANs.
We know we’ll have one root bridge selected; we’ll assume it’s the one on the right. We also know that the non-root bridge will select one root port,
and the other port leading to the root bridge will go into blocking mode. If we have 50 VLANs in this network, traffic for all 50 VLANs will go over one of the two available links while the other remains totally idle.
That's not an efficient use of available resources! With PVST load balancing, we can fine-tune the port costs on a per-VLAN basis to enable one port to be selected as the root port for half of the VLANs, and the other port to be selected as the root port for the other half. That's per-VLAN load balancing!
I want you to see this feature in action, and I want you to see a classic "gotcha" in this config, so let's head for the live equipment. We're working with VLANs 1 and 100 in this lab, with SW1 the root of both VLANs, as well as
any future VLANs. For clarity, I'm going to edit the Root ID and Bridge ID info from the output of show spanning vlan in this section, since we're primarily concerned with the port role, status, and cost. We'll run show spanning vlan 1 and show spanning vlan 100 on both switches.

SW1#show spanning vlan 1

Interface    Role Sts Cost    Prio.Nbr
Fa0/11       Desg FWD 19      128.11
Fa0/12       Desg FWD 19      128.12

SW1#show spanning vlan 100

Interface    Role Sts Cost    Prio.Nbr
Fa0/11       Desg FWD 19      128.11
Fa0/12       Desg FWD 19      128.12

SW2#show spanning vlan 1

Interface    Role Sts Cost    Prio.Nbr
Fa0/11       Root FWD 19      128.11
Fa0/12       Altn BLK 19      128.12

SW2#show spanning vlan 100

Interface    Role Sts Cost    Prio.Nbr
Fa0/11       Root FWD 19      128.11
Fa0/12       Altn BLK 19      128.12
With SW1 as the root of both VLANs, both ports on that switch are forwarding. There’s a blocked port on SW2 courtesy of STP, which is preventing switching loops AND preventing us from using that second trunk. It’s just sitting there! With per-VLAN load balancing, we can bring VLAN 100’s traffic over the currently unused link. It’s as simple as lowering the blocked port’s cost for VLAN 100 below that of the currently forwarding port!
They’re both Fast Ethernet interfaces, so they each have a cost of 19. Let’s lower the cost on fast 0/12 for VLAN 100 to 12 and have a look around with IOS Help!
SW2(config)#int fast 0/12
SW2(config-if)#spanning ?
  bpdufilter     Don't send or receive BPDUs on this interface
  bpduguard      Don't accept BPDUs on this interface
  cost           Change an interface's spanning tree port path cost
  guard          Change an interface's spanning tree guard mode
  link-type      Specify a link type for spanning tree protocol use
  mst            Multiple spanning tree
  port-priority  Change an interface's spanning tree port priority
  portfast       Enable an interface to move directly to forwarding on link up
  stack-port     Enable stack port
  vlan           VLAN Switch Spanning Tree

SW2(config-if)#spanning cost ?
  <1-200000000>  port path cost

SW2(config-if)#spanning cost 12
The result is immediate. When I ran show spanning vlan 100 just seconds later...

SW2#show spanning vlan 100

Interface    Role Sts Cost
Fa0/11       Altn BLK 19
Fa0/12       Root LIS 12

... and shortly after, fast 0/12 is now the forwarding port for VLAN 100.

SW2#show spanning vlan 100

Interface    Role Sts Cost
Fa0/11       Altn BLK 19
Fa0/12       Root FWD 12

VLAN 100 traffic will now go over fast 0/12 instead of fast 0/11. Pretty cool! To verify our load sharing, let's run show spanning vlan 1 and be sure the traffic for that vlan is still going over fast 0/11.

SW2#show spanning vlan 1

Interface    Role Sts Cost
Fa0/11       Altn BLK 19
Fa0/12       Root FWD 12
Hmm. All traffic for VLAN 1 is also going over fast 0/12. We’re not load balancing — we just changed the link all of the traffic is now using. Why? Here’s that gotcha I hinted about earlier. This particular command looks like the one you want, but the spanning cost command changes the port
cost for all VLANs. We need to remove that command and use the VLAN-specific version:
SW2(config)#int fast 0/12
SW2(config-if)#spanning ?
  bpdufilter     Don't send or receive BPDUs on this interface
  bpduguard      Don't accept BPDUs on this interface
  cost           Change an interface's spanning tree port path cost
  guard          Change an interface's spanning tree guard mode
  link-type      Specify a link type for spanning tree protocol use
  mst            Multiple spanning tree
  port-priority  Change an interface's spanning tree port priority
  portfast       Enable an interface to move directly to forwarding on link up
  stack-port     Enable stack port
  vlan           VLAN Switch Spanning Tree

SW2(config-if)#spanning vlan ?
  WORD  vlan range, example: 1,3-5,7,9-11

SW2(config-if)#spanning vlan 100 ?
  cost           Change an interface's per VLAN spanning tree path cost
  port-priority  Change an interface's spanning tree port priority

SW2(config-if)#spanning vlan 100 cost ?
  <1-200000000>  Change an interface's per VLAN spanning tree path cost

SW2(config-if)#spanning vlan 100 cost 12
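To summarize the fix in one place (the removal of the interface-wide cost isn't visible in the capture above, so take this as a sketch of the cleanup rather than a copy of the actual session):

SW2(config)#int fast 0/12
SW2(config-if)#no spanning-tree cost 12
SW2(config-if)#spanning-tree vlan 100 cost 12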
That's what we needed! A minute or so later, I ran show spanning vlan 1 and show spanning vlan 100 on SW2. Notice the port blocked in each VLAN as well as the port costs.

SW2#show spanning vlan 100

Interface    Role Sts Cost
Fa0/11       Altn BLK 19
Fa0/12       Root FWD 12

SW2#show spanning vlan 1

Interface    Role Sts Cost
Fa0/11       Root FWD 19
Fa0/12       Altn BLK 19
It’s business as usual for VLAN 1 on fast 0/11, but VLAN 100 traffic is now using the fast 0/12 link. Just watch your commands and per-VLAN load balancing is easy! Per-VLAN load balancing is one
great solution for those unused links, and here’s another one!
Etherchannels

An Etherchannel is the logical bundling (aggregation) of two to eight parallel Ethernet trunks. This provides greater throughput, and is another effective way to avoid the 50-second wait between blocking and forwarding states in case of a link failure. How do we avoid the delay entirely? STP considers an Etherchannel to be one physical link. If one of the physical links making up the logical
Etherchannel should fail, there’s no process of opening another port and the timers don’t come into play. STP sees only the Etherchannel as a whole. In this example, we have two switches connected by three separate crossover cables.
We’ll verify the connections with show interface trunk and then run show spanning-tree
vlan 1.

SW1#show interface trunk

Port      Mode         Encapsulation
Fa0/10    desirable    802.1q
Fa0/11    desirable    802.1q
Fa0/12    desirable    802.1q

SW1#show spanning-tree vlan 1

Interface    Role Sts Cost
Fa0/10       Root FWD 19
Fa0/11       Altn BLK 19
Fa0/12       Altn BLK 19
We know this is not the root switch, because...

- there's no "this bridge is the root" message
- there is a root port, which is forwarding

We have three physical connections between the two switches, and only one of them is in use. That's a waste of bandwidth! Additionally, if the root port on SW1 goes down, we're in for a delay while one of the other two ports comes out of blocking mode and through listening and learning mode on the way to forwarding.
That's a long time for a trunk to be down (50 seconds). Both of these issues can be addressed by configuring an Etherchannel. By combining the three physical ports into a single logical link, not only is the bandwidth of the three links combined, but the failure of a single link will not force the STP timers to kick in. Ports are placed into an Etherchannel with the channel-group command. The channel-group number doesn't have to match across the trunk, but it does have to match between interfaces on the same switch that will be part of the same Etherchannel. Here's the configuration, and this is a great chance to practice our interface range command! Nothing wrong with configuring each port individually, but this command saves time, on the job and in the exam room! To verify that the channel-group number doesn't have to match between switches, I'll use group 1 to bundle the ports
on SW1 and group 5 to bundle the ports on SW2.
SW1(config)#interface range fa
SW1(config-if-range)#channel-g
Creating a port-channel interf
00:33:57: %LINK-3-UPDOWN: Inte
00:33:58: %LINEPROTO-5-UPDOWN: changed state to up

SW2(config)#int range fast 0/1
SW2(config-if-range)#channel-g
Creating a port-channel interf
00:47:36: %LINK-3-UPDOWN: Inte
00:47:37: %LINEPROTO-5-UPDOWN:
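Since that capture is badly truncated, here's a sketch of what the full commands would look like. The interface range and group numbers come from the lab description above; the channel mode keyword isn't visible in the capture, so "on" is an assumption here (desirable/auto for PAgP, or active/passive for LACP, would also be valid choices):

SW1(config)#interface range fast 0/10 - 12
SW1(config-if-range)#channel-group 1 mode on

SW2(config)#interface range fast 0/10 - 12
SW2(config-if-range)#channel-group 5 mode on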
After configuring an Etherchannel on each switch with the interface-level command channel-group, the output of show interface trunk and show spanning vlan 1 verifies that STP now sees the three physical links as one logical link: the virtual interface port-channel 1 ("Po1"). Note the Etherchannel's cost is 9 instead of 19. This lower cost reflects the increased bandwidth of the Etherchannel as compared to a single FastEthernet physical connection.

SW1#show interface trunk

Port    Mode         Encapsulation
Po1     desirable    802.1q

SW1#show spanning vlan 1

Interface    Role Sts Cost    Prio.Nbr
Po1          Root FWD 9       128.6
We’ll go to SW2 to use some other Etherchannel verification tools. You can use show interface port-channel to see the same info you’d see on a physical port. I’ll show you only the first two lines of output:
SW2#show int port-channel 5
Port-channel5 is up, line protocol is up
Hardware is EtherChannel, addr
With all this talk of channelgroups and port-channels, you may wonder if the word “Etherchannel” ever makes an appearance on the switch. Believe it or not, there is a show etherchannel command!
SW2#show etherchannel ?
  <1-6>          Channel group number
  detail         Detail information
  load-balance   Load-balance/frame-distribution scheme among ports in port-channel
  port           Port information
  port-channel   Port-channel information
  protocol       protocol enabled
  summary        One-line summary per channel-group
  |              Output modifiers
Frankly, these aren’t commands you’re going to run often. show etherchannel summary gives you some good info to get started with troubleshooting:
SW2#show etherchannel summary
Flags:  D - down          P - bundled in port-channel
        I - stand-alone   s - suspended
        H - Hot-standby (LACP only)
        R - Layer3        S - Layer2
        U - in use        f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol
------+-------------+---------
5      Po5(SU)       -
I also like show etherchannel port, since it shows you how long each port in the Etherchannel has been in that state. Here's the info I received on all three ports (I'm showing you only port 0/10):

SW2#show etherchannel port

Channel-group listing:
----------------------
Group: 5
----------
Ports in the group:
-------------------
Port: Fa0/10
------------

Port state    = Up Mstr In-Bndl
Channel group = 5
Port-channel  = Po5
Port index    = 0
Mode          =
GC            =
Load          =

Age of the port in the current state:
Let’s see how STP reacts to losing one of the channels in our Etherchannel. Before configuring the Etherchannel, closing fast0/10 would have resulted in an STP recalculation and a temporary loss of connectivity between the switches. Now that the channels are bundled, I’ll close that port and immediately run
show spanning vlan 1.

SW1(config)#int fast 0/10
SW1(config-if)#shut

SW1#show spanning vlan 1

Interface    Role Sts Cost
Po1          Root FWD 12
STP does recalculate the cost of the Port-Channel interface. The cost is now higher since there are only two physical channels bundled instead of three, but the truly important point is that STP does not consider the Etherchannel to be down and
there’s no loss of connectivity between our switches.
BPDU Guard Remember that warning from the router when configuring PortFast?
SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single
 host. Connecting hubs, concentrators, switches, bridges, etc... to this
 interface when portfast is enabled, can cause temporary bridging loops.
 Use with CAUTION

%Portfast has been configured on FastEthernet0/5 but will only
 have effect when the interface is in a non-trunking mode.
You’d think that would be enough of a warning, but there is a chance that someone is going to manage to connect a switch to a port running Portfast, which in turn creates the possibility of a switching loop.
BPDU Guard protects against
this possibility. If any BPDU, superior or inferior, comes in on a port that’s running BPDU Guard, the port will be shut down and placed into error disabled state, shown on the switch as err-disabled. To configure BPDU Guard on a specific port only:
SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree bpduguard
% Incomplete command.

SW1(config-if)#spanning-tree bpduguard ?
  disable  Disable BPDU guard for this interface
  enable   Enable BPDU guard for this interface

SW1(config-if)#spanning-tree bpduguard enable
To configure BPDU Guard on all ports running portfast on the switch:

SW1(config)#spanning-tree portfast bpduguard default
Note this command is a variation of the portfast command. There’s another guard, Root Guard, that is not on the CCNA exam but is perilously close in operation to BPDU guard. I want to clarify the difference:
Root Guard will bring a port down if a superior BPDU is received on that particular port. You’re guarding the local switch’s role as the root, since a superior BPDU would mean another switch would become the root. BPDU Guard brings a port down if any BPDU is received on that port. This helps prevent switching loops, and can also be used as a security feature by enabling it on unused switch ports. Let’s see BPDU Guard in action!
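Before we do, here's a minimal sketch of what a Root Guard config would look like, purely for contrast; Root Guard isn't configured anywhere in this lab, and the interface is just an example:

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree guard root

With that in place, a superior BPDU arriving on fast 0/5 would put the port into a root-inconsistent state rather than letting the neighboring switch take over as root.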
In this lab, SW2 is receiving BPDUs from SW1 on fast 0/10, 11, and 12. Let’s see what happens when we enable BPDU Guard on fast 0/10.
SW2(config)#int fast 0/10
SW2(config-if)#spanning bpduguard enable
*Mar 1 02:19:26.604: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU
 on port Fa0/10 with BPDU Guard enabled. Disabling port.
*Mar 1 02:19:26.604: %PM-4-ERR_DISABLE: bpduguard error detected
 on Fa0/10, putting Fa0/10 in err-disable state
show int fast 0/10 verifies the port is in err-disabled state:
SW2#show int fast 0/10
FastEthernet0/10 is down, line protocol is down (err-disabled)
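As a side note, this command isn't run anywhere in the lab, but on most Catalyst switches you can also list every error-disabled port in one shot:

SW2#show interfaces status err-disabled

That's a quick way to spot everything BPDU Guard (or any other err-disable trigger) has shut down.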
To put things right, we’ll remove BPDU Guard from port 0/10, and then reset it as required by the err-disabled message. After that, all is well!
SW2(config)#int fast 0/10
SW2(config-if)#no spanning bpduguard enable

(I could have used the spanning bpduguard disable command for the same end result.)

SW2(config)#int fast 0/10
SW2(config-if)#shut
SW2(config-if)#no shut

SW2#show int fast 0/10
FastEthernet0/10 is up, line protocol is up (connected)
That’s enough switching and Etherchanneling for now, but we’re not done at Layer 2. Next up, L2 WAN work, including Frame Relay!
HDLC, PPP, and Frame Relay (Plus A Few Cables) Here’s the deal with this section…. I’m going to discuss some Layer 1 WAN topics with you at the end of this section. That’ll include how I simulate a WAN in the labs you’ll see here, and some info on how I created a
frame relay cloud in my practice lab for us to use. Before we get to that info, I’d like you to see the actual labs, and there are plenty of them in this section! This is just a note not to skip the Physical layer info at the end of the discussion and labs involving HDLC, PPP, and Frame Relay — there’s some VERY important information regarding Layer 1 at the end of this section. With no further ado (whatever that is), let’s hit HDLC and PPP!
HDLC And PPP With a point-to-point WAN link, we have two options for encapsulation: HDLC and PPP. During our discussion of these protocols, we’ll be running a couple of labs with the following PTP link.
Cisco actually has its own HDLC
variation, known technically as cHDLC, which sounds more like a chemical element than a protocol. I strongly doubt you'll see the term "cHDLC" on your exams, as Cisco's own books and webpages refer to this protocol as "HDLC". Why did Cisco develop their own HDLC? The original HDLC didn't have the capabilities for multiprotocol support. A couple of notes about Cisco HDLC:

- Cisco added the TYPE field to allow that multiprotocol support.
- Cisco's version of HDLC is not Cisco-proprietary.
- This is the default encapsulation on Cisco router serial interfaces.

Let's get started with some lab work! We'll assign IP addresses, open the interfaces, wait 30 seconds, and verify our config with show interface serial.
R1(config)#int s1
R1(config-if)#ip address 172.1
R1(config-if)#no shut

R3(config)#int s1
R3(config-if)#ip address 172.1
R3(config-if)#no shut

R1#show int s1
Serial1 is up, line protocol is down

R3#show int s1
Serial1 is up, line protocol is down
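The ip address lines above are cut off in the capture. Here's a sketch of the full addressing, assuming a /24 mask; only the 172.12.13.1 and 172.12.13.3 addresses are confirmed by the pings later in this lab, so the mask is an assumption:

R1(config)#int s1
R1(config-if)#ip address 172.12.13.1 255.255.255.0
R1(config-if)#no shut

R3(config)#int s1
R3(config-if)#ip address 172.12.13.3 255.255.255.0
R3(config-if)#no shut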
The combination “serial1 is up, line protocol is down” means everything’s fine physically, but there’s a logical issue. As we saw earlier in this section, a PTP link in a lab is going to have a DTE on one end and a DCE on the other, and the DCE
must supply clockrate to the DTE. To see which is which, just run show controller serial on one router.
R1#show controller serial 1
HD unit 1, idb = 0x1DBFEC, dri
buffer size 1524  HD unit 1, V.35 DTE cable

If you see DTE on R1, you know R3 has to be the DCE end!

R3#show controller serial 1
HD unit 1, idb = 0x11B4DC, dri
buffer size 1524  HD unit 1, V.35 DCE cable
Put the clockrate on the DCE end and the line protocol
comes up in half a minute or so. We’ll again verify with show interface serial, and now I’ll show you where you can see the encapsulation that’s running on the interface — in this case, the default, which is HDLC.
R3(config)#int s1
R3(config-if)#clockrate 56000
19:13:42: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to up

R1#show int s1
Serial1 is up, line protocol is up

R3#show int s1
Serial1 is up, line protocol is up
  Hardware is HD64570
  Internet address is 172.12.13.3
  MTU 1500 bytes, BW 1544 Kbit, reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation HDLC, loopback not set
At this point, each partner in the PTP link can ping the other.
R1#ping 172.12.13.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.13.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

R3#ping 172.12.13.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.13.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)
The endpoints of a PTP link must agree on the encapsulation type. If one end is running HDLC, the other end must run HDLC as well or the line protocol will go down. If one of the routers is running another encapsulation type, the physical interfaces will still be up, but the line protocol will go
down and IP connectivity will be lost. To illustrate, I'll change the encapsulation type on R3's Serial1 interface to the Point-to-Point Protocol (PPP). I'll use IOS Help to illustrate the three encap types we'll work with in this section. I've edited out other, less popular choices.
R3(config)#int s1
R3(config-if)#encapsulation ?
  frame-relay  Frame Relay networks
  hdlc         Serial HDLC synchronous
  ppp          Point-to-Point protocol

R3(config-if)#encapsulation ppp
A few seconds later, the line protocol goes down on R3.
19:18:11: %SYS-5-CONFIG_I: Configured from console by console
19:18:12: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to down
The encapsulation mismatch has brought the line protocol down, and to bring it back up, we simply need to make the encapsulation type match
again. Before doing so, let’s take a detailed look at PPP.
PPP Features

The default setting of a Cisco serial interface is to use HDLC encapsulation, but you're generally going to change that encap type to PPP. Why, you ask? Because PPP offers many features that HDLC does not, including:

- Authentication through the use of the Password Authentication Protocol (PAP) and the Challenge Handshake Authentication Protocol (CHAP)
- Support for error detection and error recovery features
- Multiprotocol support (which Cisco's HDLC does offer, but the original HDLC does not)

We can authenticate over PPP with either PAP or CHAP, and when you have two choices for the same task, you just know you're going to see a lot of those two choices on your exams. Let's discuss both of
them while seeing both in action on live Cisco routers! But before that… just a quick word! The authentications and labs you’ll see in this section are two-way authentications, where each router is actively authenticating the other. This gives us plenty of practice with our commands, including show and debug commands, but authentication isn’t required to be two-way. Each of the authentications are separate operations — they’re
not tied in to each other. For example, if we wanted R1 to authenticate R3 in any of the following labs, but not have R3 authenticate R1, that’s no problem.
PAP And / Or / Vs. CHAP First things first — we need to have PPP running over our PTP link before we can even start examining PAP and CHAP. When last we left our routers, R3 was running PPP and R1 was running HDLC, so let’s config R1 for PPP and then verify both interfaces.
R1(config)#int s1
R1(config-if)#encap ppp
19:37:20: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to up

R1#show int s1
Serial1 is up, line protocol is up
  Encapsulation PPP, loopback not set

R3#show int s1
Serial1 is up, line protocol is up
  Encapsulation PPP, loopback not set
There’s a lot going on behind the scenes with CHAP and PAP, so we’ll run some debugs during these labs to see exactly how these protocols operate. One major difference between the two -- CHAP is much more aggressive than PAP. Assume R1 is authenticating R3. With PAP, R1’s just going to sit there and wait for R3 to present a
password.
With CHAP, R1 challenges R3 to prove its identity. (To use the dreaded Buzzword Bingo word, CHAP is much more proactive than PAP.)
We’ll start our CHAP config by creating a username / password database. If you haven’t done that at this point in the course, you skipped something. ; ) No worries, it’s easy! On R3, we’ll create a database with R1’s name and the password CCNA, and on R1 we’ll create an entry with R3’s name and the same password.
R3(config)#username R1 password CCNA
R1(config)#username R3 password CCNA
Now we'll apply CHAP with the ppp authentication chap command on both R1 and R3's serial interfaces. To watch the authentication process, we'll run debug ppp authentication on R3 before finishing the config.

R1(config)#int s1
R1(config-if)#ppp authen chap
R3#debug ppp authentication
PPP authentication debugging is on

R3(config)#int s1
R3(config-if)#ppp authentication chap
20:21:06: Se1 CHAP: O CHALLENGE
20:21:06: Se1 CHAP: I CHALLENGE
20:21:06: Se1 CHAP: O RESPONSE
20:21:06: Se1 CHAP: I RESPONSE
20:21:06: Se1 CHAP: O SUCCESS
20:21:06: Se1 CHAP: I SUCCESS
Success! When all is well with CHAP authentication, this is the debug output. First, a set of challenges from each router, then a set of responses from each, and then two success messages. Now that we know what the debug output is when things
are great, let’s see what happens when the authentication’s off a bit. I’ll remove the database entry from R1 and replace it with one using ccna for the password instead of the upper-case CCNA. I’ll then reset the interface to trigger authentication.
R1(config)#no username R3 password CCNA
R1(config)#username R3 password ccna
R1(config)#int s1
R1(config-if)#shut
20:30:40: %LINK-5-CHANGED: Interface Serial1, changed state to administratively down
20:30:41: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to down
R1(config-if)#no shut
20:30:49: %LINK-3-UPDOWN: Interface Serial1, changed state to up
20:30:49: Se1 CHAP: O CHALLENGE
20:30:49: Se1 CHAP: I CHALLENGE
20:30:49: Se1 CHAP: O RESPONSE
20:30:49: Se1 CHAP: I RESPONSE
20:30:49: Se1 CHAP: O FAILURE
The phrase “MD/DES compare failed” is a huge tipoff there’s an issue with the password. You’re going to see a full set of these messages every 2 seconds with that debug, so while you troubleshoot, you might want to turn the debug off. You may also see the physical state of the interface begin to flap — that is, go up
and down every few seconds.

20:31:43: %LINK-3-UPDOWN: Interface Serial1, changed state to down
20:31:45: %LINK-3-UPDOWN: Interface Serial1, changed state to up
20:31:57: %LINK-3-UPDOWN: Interface Serial1, changed state to down
20:31:59: %LINK-3-UPDOWN: Interface Serial1, changed state to up

If you see that, I would shut
the interface down completely while you fix the config. This debug illustrates an important point. Your CHAP and PAP passwords are case-sensitive, so "ccna" and "CCNA" are not the same password. After replacing the new database entry with the original and reopening the interface, the debug shows our link is again working properly.
R1(config)#no username R3 password ccna
R1(config)#username R3 password CCNA
R1(config)#int s1
R1(config-if)#no shut
20:38:09: %LINK-3-UPDOWN: Interface Serial1, changed state to up
20:38:09: Se1 CHAP: O CHALLENGE
20:38:09: Se1 CHAP: I CHALLENGE
20:38:09: Se1 CHAP: O RESPONSE
20:38:09: Se1 CHAP: I RESPONSE
20:38:09: Se1 CHAP: O SUCCESS
20:38:09: Se1 CHAP: I SUCCESS
20:38:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to up
Success! That’s why you want to practice with debugs in a lab environment when things are working properly. You see exactly what’s going on “behind the command” and it gives you a HUGE leg up when real-world troubleshooting time comes
around. If you get the username wrong, the output of that debug will be slightly different. I’ll remove the working username/password entry and replace it with one that has the right password but a mistyped username.
R1(config)#no username R3 password CCNA
R1(config)#username R33 password CCNA
After resetting the interface, this is the output of debug ppp authentication.
20:41:35: Se1 CHAP: O CHALLENGE
20:41:35: Se1 CHAP: I CHALLENGE
20:41:35: Se1 CHAP: Username R3 not found
20:41:35: Se1 CHAP: Unable to validate Response
That output is doing everything except fixing the problem for you! If the username isn’t found, that means there’s no entry for that username in the username/password database. Put one there and the problem is solved.
R1(config)#no username R33 password CCNA
R1(config)#username R3 password CCNA
20:47:52: Se1 CHAP: O CHALLENGE
20:47:52: Se1 CHAP: I CHALLENGE
20:47:52: Se1 CHAP: O RESPONSE
20:47:52: Se1 CHAP: I RESPONSE
20:47:53: Se1 CHAP: O SUCCESS
20:47:53: Se1 CHAP: I SUCCESS
The commands for PAP are much the same. PAP requires a username/password database exactly like the one we’ve already built, so we’ll continue to use that one. We’ll remove the CHAP configuration with no ppp authentication chap on both routers’ Serial1 interfaces. (There are exceptions, but you can usually negate a Cisco command simply by repeating the command with the word no in front of it.)
R1(config)#int s1
R1(config-if)#no ppp authentication chap
R3(config)#int s1
R3(config-if)#no ppp authentication chap
Now we’ll put PAP into action on R1 first, and then run debug ppp authentication while configuring PAP on R3.
R1(config)#int s1
R1(config-if)#ppp authentication pap
R3(config)#int s1
R3(config-if)#ppp authentication pap

Here’s the result of the debug:

2d05h: Se1 PAP: I AUTH-REQ id
2d05h: Se1 PAP: O AUTH-REQ id
2d05h: Se1 PAP: Authenticating
2d05h: Se1 PAP: O AUTH-ACK id
2d05h: Se1 PAP: I AUTH-ACK id
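One hedged side note while we’re here, since behavior varies by IOS version: for a router to actually send its PAP username and password in that AUTH-REQ, many IOS versions also require the ppp pap sent-username command on the interface. If you build this in your own lab and the debug never shows an outgoing AUTH-REQ, try adding something like this sketch, which reuses the names and password from the database we built earlier:

R1(config-if)#ppp pap sent-username R1 password CCNA
R3(config-if)#ppp pap sent-username R3 password CCNA

Now, back to that debug output.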
With PAP, there is no series of challenges. I’m always reminding you to use IOS Help even when you don’t need to, just to see what other options a given command has. I used it at the end of ppp authentication pap, and here are the results:
R3(config-if)#ppp authentication pap ?
  callback   Authenticate remote
  callin     Authenticate remote on
  callout    Authenticate remote o
  chap       Challenge Handshake Auth
  ms-chap    Microsoft Challenge H
  optional   Allow peer to refuse
According to IOS Help, we can still enter CHAP in this command, even though we’ve already specified PAP as the authentication protocol to use. Now that’s interesting! Both of the following commands are actually legal:
R1(config-if)#ppp authentication chap pap

R3(config-if)#ppp authentication pap chap
This option allows the local router to attempt a secondary authentication protocol if the
primary one (the first one listed) is not in use by the remote router. This does not mean the second protocol will be used if authentication fails via the first protocol. For example, if we configured the following on R3….
R3(config-if)#ppp authentication pap chap
… here are the possible results. If R3’s remote partner is not using PAP, R3 will then send CHAP messages.
If R1 does respond to the PAP messages and the result is failed authentication, R3 will *not* try CHAP.
Why CHAP Over PAP? The drawback with PAP: The username and password are sent over the WAN link in clear text. If a potential network intruder intercepts that information, they’re going to become an actual network intruder in no time, since they can easily read the username and password.
Both routers have to know the password in CHAP, but neither will ever send the actual password over the link. Earlier, we saw a CHAP router challenge the other router to prove its identity.
This challenge takes the form of a three-way handshake, but it’s not the TCP three-way handshake! Here’s the overall process: The authenticating router challenges the peer via a CHALLENGE packet, as discussed previously. Contained in that
challenge is a random number. The challenged router runs a hash algorithm against its password, using that random number as part of the process. The challenged router passes that value back to the authenticating router in a RESPONSE packet. The authenticating router looks at the algorithm result, and if it matches the answer the
authenticating router came up with using the same algorithm and the same random number, authentication has succeeded! The authenticating router sends an ack to the challenged router in the form of a SUCCESS message. In earlier labs, we had R3 authenticating R1 and R1 authenticating R3. When authentication was properly configured, we saw the
CHALLENGE and RESPONSE packets, followed by SUCCESS!

22:11:22: Se1 CHAP: O CHALLENGE
22:11:22: Se1 CHAP: I CHALLENGE
22:11:22: Se1 CHAP: O RESPONSE
22:11:22: Se1 CHAP: I RESPONSE
22:11:22: Se1 CHAP: O SUCCESS
22:11:22: Se1 CHAP: I SUCCESS
“Who’s Causin’ All This?” A better way to ask this question is “Who’s handling all of these PPP capabilities?” The answer — the Link Control Protocol (LCP). Just as the Session layer is the “manager” of the entire OSI model, LCP is really the manager of PPP — the “control protocol”, technically. LCP handles the configuration, maintenance, and eventual teardown of any PPP
connection. All the features that make PPP so attractive to network admins — looped link detection, PAP and CHAP authentication, PPP multilink (load balancing), and error detection — are negotiated and handled by LCP. When a PPP link is up and running, both physically and logically, you’ll see “LCP Open” in the output of show interface serial.
R3#show int serial 1 Serial1 is up, line protocol i Hardware is HD64570
Internet address is 172.12.13 MTU 1500 bytes, BW 1544 Kbit, reliability 255/255, txload 1/ Encapsulation PPP, loopback n Keepalive set (10 sec) LCP Open
Just to cause trouble, I configured ppp authentication chap on R3’s S1 interface without doing so on R1. Note the “LCP TERMsent” message. When you see LCP TERMsent or LCP Closed there, you’ve got a problem. Of course, line protocol is down tells us there’s a problem as well!
R3(config)#int s1 R3(config-if)#ppp authenticati R3(config-if)#^Z R3# 1w0d: %LINEPROTO-5-UPDOWN: Lin 1w0d: %SYS-5-CONFIG_I: Configu R3#show int s1 Serial1 is up, line protocol i Hardware is HD64570 Internet address is 172.12.13 MTU 1500 bytes, BW 1544 Kbit, Encapsulation PPP, loopback n Keepalive set (10 sec) LCP TERMsent
Let me introduce you to debug ppp authentication’s talkative relative, debug ppp negotiation. You’ll still
see the authentication output, it’ll just be in the middle of the entire negotiation output. I’m showing you this large debug output primarily so you can see how busy LCP is during the entire PPP negotiation process, starting with the 3rd line after the debug is turned on.
R3#debug ppp negotiation PPP protocol negotiation debug 22:11:22: Se1 PPP: Phase is ES 22:11:22: Se1 LCP: O CONFREQ [ 22:11:22: Se1 LCP: AuthProto C
22:11:22: Se1 LCP: MagicNumber 22:11:22: Se1 LCP: I CONFREQ [ 22:11:22: Se1 LCP: AuthProto C 22:11:22: Se1 LCP: MagicNumber 22:11:22: Se1 LCP: O CONFACK [ 22:11:22: Se1 LCP: AuthProto C 22:11:22: Se1 LCP: MagicNumber 22:11:22: Se1 LCP: I CONFACK [ 22:11:22: Se1 LCP: AuthProto C 22:11:22: Se1 LCP: MagicNumber 22:11:22: Se1 LCP: State is Op 22:11:22: Se1 PPP: Phase is AU < CHAP authentication is then
There’s even more output after the authentication, but you get the point. LCP’s a busy protocol!
Keep the Link Control Protocol separate in your mind from another set of protocols that run over PPP, the Network Control Protocols (NCPs). While both run at Layer 2, the NCPs do the legwork of negotiating options for our L3 protocols to run over the PPP link. For example, IP’s options are negotiated by the IP Control Protocol (IPCP). Now on to Frame Relay!
Frame Relay Point-to-point networks are nice, but there’s a limit to scalability. It’s just not practical to build a dedicated PTP link between every single router in our network, nor is it costeffective. It would be a lot easier (and cheaper) to share a network that’s already in place, and that’s where Frame Relay comes in! A frame relay network is a nonbroadcast multi-access (NBMA) network.
“nonbroadcast” means that broadcasts are not transmitted over frame relay by default, not that they cannot be sent. “multiaccess” means the frame relay network will be shared by multiple devices. The frame provider’s collection of frame relay switches has a curious name — frame relay cloud. You’ll often see the frame provider’s switches represented with a cloud drawing in network diagrams, much like this:
We have two kinds of equipment in this network: The Frame Relay switches, AKA the Data Communications Equipment (DCE). These belong to the frame relay provider, and we don’t have anything to do with their configuration. The routers, AKA the Data
Terminal Equipment. We have a lot to do with their configuration! Each router will be connected to a Frame Relay switch via a Serial interface connected to a leased line, and the DCE must send a clockrate to that DTE. If the clockrate isn’t there, the line protocol will go down.
Those two frame switches are not going to be the only switches in that cloud. Quite the contrary, there can be hundreds of them! For simplicity’s sake, the following diagram will have less than that.
You and I, the network admins, don’t need to list or even know
every possible path in that cloud. Frankly, we don’t care. The key here is to know that not only will there be multiple paths through that cloud from Router A to Router B, but data probably will take different paths through that cloud. That’s why we call this connection between the routers a virtual circuit. We can send data over it anytime we get ready, but data will not necessarily take the same path through the provider’s switches every time.
Frame relay is a packetswitching protocol. The packets may take different physical paths to the remote destination, at which point they will be reassembled and will take the form of the original message. In contrast, circuitswitching protocols have dedicated paths for data to travel from one point to another. There are two types of virtual circuits, one much more popular than the other. A permanent virtual circuit (PVC)
is available at all times, where a switched virtual circuit (SVC) is up only when certain criteria are met. You’re going to see PVCs in most of today’s networks, and that’s the kind of virtual circuit we’ll work with throughout this section. An SVC can be appropriate when data is rarely exchanged between two routers. For example, if you have a remote site that only needs to send data for 5 minutes every week, an SVC may be more costeffective than a PVC. An SVC is
really an “on-demand” VC, as it’s built when it’s needed and torn down when that need ends. A PVC can be used to build a full-mesh or partial-mesh network. A full mesh describes a topology where every router has a logical connection to every other router in the frame relay network.
The problem with full-mesh networks is that they’re simply not scalable. As the network grows, it becomes less and less feasible to maintain a full mesh. If we added just a single
router to the above network, we’d have to configure each router to have a VC to the new router. Stepping back to dedicated leased lines for a moment — if full-mesh networks aren’t terribly scalable, dedicated lines are even worse! Can you imagine putting in a dedicated line between every router in a 20-router network? Forget it! More common is the partialmesh topology, where a single router (the hub) has a logical connection to every other
router (the spokes). The spokes do not have a logical connection to each other. Communication between spokes will go through the hub.
You can see where this would beat the heck out of dedicated lines, especially as your network grows. Imagine the cost if you add seven more routers to that network and then try to connect them all to each other with dedicated lines! With PVCs, particularly in a hub-and-spoke network, you could quickly have that network up and running in minutes once your Frame Relay provider gives you the information you need to create your mappings.
We’ll get to that info and those mappings soon. Right now, let’s talk about the keepalive of our Frame Relay network!
The LMI: The Heartbeat Of Frame Relay Local Management Interface (LMI) messages are sent between the DCE and the DTE. The “management” part of the message refers to PVC management, and information regarding multicasts, addressing, and VC status is contained in the LMI. A particular kind of LMI message, the LMI Status message, serves as a keepalive for the logical connection
between the DTE and DCE. If these keepalives are not continually received by both the DCE and DTE, the line protocol will drop. The LMI also indicates the PVC status to the router, reflected as either active or inactive. The LMI types must match on the DTE and DCE for the PVC to be established. There are three types of LMI:

Cisco (the default, AKA the “Gang Of Four” LMI)
ansi
q933a

The “Gang Of Four” refers to the four vendors involved in its development (Cisco, StrataCom, DEC, Nortel). The LMI type can be changed with the frame lmi-type command. Before doing anything with the frame relay commands, we have to enable frame relay on the interface with the encapsulation frame-relay command. Remember, the default encapsulation type on a Cisco Serial interface is HDLC.
R1(config)#interface serial0 R1(config-if)#encapsulation ?
atm-dxi ATM-DXI encapsulation frame-relay Frame Relay networ hdlc Serial HDLC synchronous lapb LAPB (X.25 Level 2) ppp Point-to-Point protocol smds Switched Megabit Data Ser x25 X.25
R1(config-if)#encapsulation frame-relay
R1(config-if)#frame-relay lmi-type ?
  cisco
  ansi
  q933a
LMI Autosense will take effect when you don’t specify an LMI
type manually. When you open that interface, LMI Autosense has the router send out an LMI Status message for all three LMI types.
The router then waits for a response for one of those LMI types from the DCE. When the router sees the response to its
LMI Autosense messages, the router will then send only the same LMI type it received from the DCE.
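In config terms, that gives you two simple options: hard-code the LMI type to match your provider, or remove the setting and let LMI Autosense do the work. A quick sketch (the interface and LMI type here are just examples):

R1(config)#interface serial0
R1(config-if)#encapsulation frame-relay
R1(config-if)#frame-relay lmi-type ansi
R1(config-if)#no frame-relay lmi-type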
The Frame Relay LMI isn’t exactly something we change on a regular basis, so once it’s up and running, mismatches
between the DTE and DCE are rare. To be sure we can spot one, and to be fully prepared for exam success, we’ll create an LMI mismatch between the DTE and DCE in our lab, and follow that with some debugging and troubleshooting. We’ll go through several full Frame Relay labs in this section, including some topics we haven’t covered here yet, but I want you to see the LMI info now. To that end, I’ve configured a working Frame
Relay network, which we’ll soon make not work. Our router is R1, and show frame lmi verifies it’s running Cisco LMI. The top line of output tells us both the interface and the LMI running on that interface.
R1#show frame lmi LMI Statistics for interface S Invalid Unnumbered info 0 In Invalid dummy Call Ref 0 Inv Invalid Status Message 0 Inv Invalid Information ID 0 Inv Invalid Report Request 0 Inv Num Status Enq. Sent 1390 Num Num Update Status Rcvd 0 Num
The fields we’re most interested in are “Num Status Enq. Sent”, “Num Status msgs Rcvd”, and the “Num Status Timeouts” value. As the LMIs continue to be exchanged, the “Enq. Sent” and “msgs Rcvd” counters should continue to increment and the Timeouts value should remain where it is. Let’s take another look at this output just a few minutes later. (From this point forward, I’ll cut the “invalid” fields out of this output.)
R1#show frame lmi LMI Statistics for interface S
Num Status Enq. Sent 64 Num S Num Update Status Rcvd 0 Num
show interface serial 0 verifies the interface is physically up and the line protocol (the logical state of the interface) is up as well. The keepalive for Frame Relay is set to 10 seconds — that’s how often LMI messages are going out.
R1#show int s0 Serial0 is up, line protocol i Internet address is 172.12.12 MTU 1500 bytes, BW 1544 Kbit, reliability 255/255, txload 1/ Encapsulation FRAME-RELAY, lo Keepalive set (10 sec)
Now that we know how things look when the LMI matches, let’s set the LMI type on the router to ansi and see what happens.
R1(config)#int serial0
R1(config-if)#frame lmi-type ansi
R1(config-if)#

About 30 seconds later, the line protocol went down.

3d04h: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to down
R1#show int s0
Serial0 is up, line protocol is down
You and I know why the line protocol is down, since we did it deliberately. But what if you had just walked into a client site and their Frame Relay link is down? The first step in Frame troubleshooting is show interface serial, which we just ran. We see the line protocol is down and the interface is running Frame Relay. The “Serial0 is up” part of the
show int s0 output tells us that everything is fine physically, but there is a logical problem. Let’s run show frame lmi twice, a few minutes apart, and see what we can see.
R1#show frame lmi
LMI Statistics for interface S
Num Status Enq. Sent 121 Num S
Num Update Status Rcvd 0 Num

R1#show frame lmi
LMI Statistics for interface S
Num Status Enq. Sent 134 Num S
Num Update Status Rcvd 0 Num S
LMI messages are still going out, so that’s good. The bad
part is the timeout counter incrementing while the msgs rcvd counter stands still. Let’s dig a little deeper and run debug frame lmi.
R1#debug frame lmi Frame Relay LMI debugging is o Displaying all Frame Relay LMI 3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE329E 3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 02 3d04h: 3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE2444 3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 02 3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE2457
3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 02
R1#undebug all All possible debugging has bee
When myseq continues to increment but yourseen does not, that’s another indicator of an LMI mismatch. I’ll turn the debug back on, change the LMI type back to Cisco, and we’ll see the result. Warning: A lot of info ahead!
R1#debug frame lmi Frame Relay LMI debugging is o Displaying all Frame Relay LMI R1#conf t
Enter configuration commands, R1(config)#int s0 R1(config-if)#frame lmi-type c R1(config-if)# 3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE0183 3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 02 3d04h: R1(config-if)# 3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE01A9 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 00 03 02 40 3d04h: 3d04h: Serial0(in): Status, my 3d04h: RT IE 1, length 1, type 3d04h: KA IE 3, length 2, your 3d04h: PVC IE 0x7 , length 0x6 3d04h: PVC IE 0x7 , length 0x6 R1(config-if)#
3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE01CF 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 01 03 02 41 3d04h: 3d04h: Serial0(in): Status, my 3d04h: RT IE 1, length 1, type 3d04h: KA IE 3, length 2, your R1(config-if)# 3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE23BD 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 01 03 02 42 3d04h: 3d04h: Serial0(in): Status, my 3d04h: RT IE 1, length 1, type 3d04h: KA IE 3, length 2, your 3d04h: PVC IE 0x7 , length 0x6 3d04h: PVC IE 0x7 , length 0x6 3d04h: %LINEPROTO-5-UPDOWN: Li R1(config-if)#^Z
R1# 3d04h: Serial0(out): StEnq, my 3d04h: datagramstart = 0xE23D0 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 01 03 02 43 3d04h: 3d04h: Serial0(in): Status, my 3d04h: RT IE 1, length 1, type 3d04h: KA IE 3, length 2, your R1#undebug all All possible debugging has bee
As yourseq and yourseen begin to increment, the line protocol comes back up. Once you see that, you should be fine, but always stick around for a minute or so and make sure the line protocol stays up.
Verify the line protocol with show interface serial. Note you can see other information relating to the LMI in this output.
R1#show int s0 Serial0 is up, line protocol i Internet address is 172.12.12 Encapsulation FRAME-RELAY, lo Keepalive set (10 sec) LMI enq sent 180, LMI stat re LMI enqrecvd 0, LMI stat sent LMI DLCI 1023 LMI type is CISC
Before you leave the client site, turn off your debugs, either individually or with the
undebug all command.
All possible debugging has been turned off
The LMI must match in order for our line protocol to stay up, but so must the Frame encapsulation type. The encapsulation type must be agreed upon by the DTEs at each end of the connection; the DCE does not care which Frame encap type is used.
We have two Frame encapsulation choices:

Cisco (the default)
IETF (the industry standard)

Interestingly enough, IOS Help does not mention the Cisco default, only the option to change the Frame encap to IETF.
R1(config)#int s0
R1(config-if)#encap frame ?
  ietf  Use RFC1490/RFC2427 encapsulation
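If the far end of a PVC needs IETF encapsulation (say, a non-Cisco router), you can set it for the entire interface or for a single VC on the map statement itself. A quick sketch, where the address and DLCI are placeholders rather than part of our lab:

R1(config-if)#encapsulation frame-relay ietf
R1(config-if)#frame-relay map ip 172.12.123.9 119 broadcast ietf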
DLCIs, Frame Maps, and Inverse ARP Frame Relay VCs use Data-Link Connection Identifiers (DLCIs) as their addresses. A DLCI is simply a Frame Relay Layer 2 address, but it’s a bit different from other addresses in that they can be reused from one
router to another in the same network. The reason DLCIs have local significance only is that DLCIs are not advertised to other routers. I know this sounds odd, but it will become clearer after we work through some examples of Frame Relay mapping, both dynamic and static. On that topic, stick with me while I tell you a short story. Years ago, my girlfriend-nowwife and I decided to take in a
movie. This being the 80s, we had to refer to an ancient information-gathering document called a newspaper to see what time the movie started. We saw the time the next show started, figured we had just enough time to make it, and hit the road. (This was very unusual for me. I’m one of those people who feels he’s late for something if he’s not at least 15 minutes early.) We walk in, there’s hardly anyone in the lobby, and I walk
up to the box office and ask for two tickets for that show. The fellow behind the counter tells me the movie started 20 minutes ago, which was 20 minutes earlier than the newspaper said it would start. I informed him of this. He looked me dead in the eye and said, “The paper ain’t always right.” Hmpf. What does this have to do with frame relay mapping, you ask? Just as the paper ain’t always
right, the theory ain’t always right. You know I’m all for the letting the routers and switches do their work dynamically whenever possible. Not only does that save our valuable time, but using dynamic address learning methods is usually much more effective than static methods. Without the right frame map statements, the rest of our frame relay work is useless, and we have two choices when it comes to Frame mapping:
Inverse ARP, the protocol that enables dynamic mapping Static frame map statements, which you and I have to write We’re going to continue this discussion as we build our first frame relay network. This network will be a hub-andspoke setup. The hub router, R1, has two DLCIs. DLCI 122 will be used for mapping a PVC to R2, and DLCI 123 will be used for
mapping a PVC to R3. The subnet used by all routers is 172.12.123.0 /24, with the router number as the last octet. This lab contains no subinterfaces and all routers are using their Serial0 interfaces. We have to get this L2 network up and running, because it’s the same network we’ll use as a foundation for our static routing, OSPF, and EIGRP labs, and you can’t have a successful L3 lab if L2 isn’t working perfectly!
Inverse ARP Inverse ARP is enabled by default on a Cisco interface running Frame Relay. When you enter the encapsulation framerelay command and then open the interface, you’re running Inverse ARP. It’s that easy! What’s supposed to happen next: The routers each send an Inverse ARP packet announcing its IP address. The receiving router opens the packet, sees the IP address and a DLCI, which will be one of the local
DLCIs on the receiving router. The receiving router then maps that remote IP address to the local DLCI, and puts that entry in its Frame Relay mapping table. That entry will be marked “dynamic”. That’s great if it works, but Inverse ARP can be quirky and tough to work with. Many network admins chose a long time ago to put static frame relay map statements in their networks, and once those static entries go in, they tend to stay
there. Again, nothing against Inverse ARP or the admins who use it. Theoretically, it’s great. In the real world, it doesn’t always work so well and you’ll wish you knew how to use static map statements. And after this next section, you will! I’ve removed all earlier configurations from the routers, so let’s configure R1 for frame encapsulation and then open the interface.
R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#int s0
R1(config-if)#ip address 172.12.123.1 255.255.255.0
R1(config-if)#encapsulation frame-relay
R1(config-if)#no shutdown
R1(config-if)#
00:10:43: %SYS-5-CONFIG_I: Configured from console by console
00:10:45: %LINK-3-UPDOWN: Interface Serial0, changed state to up
00:10:56: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to up
The line protocol’s up, so we’re looking good. Let’s see if Inverse ARP has done anything by running show frame map. (This command displays both static and dynamic mappings.)

R1#show frame map
Serial0 (up): ip 0.0.0.0 dlci 122,
              broadcast, CISCO, status defined, inactive
Serial0 (up): ip 0.0.0.0 dlci 123,
              broadcast, CISCO, status defined, inactive
This mapping to “0.0.0.0” occasionally happens with Inverse ARP. These mappings don’t really hurt anything (except in the CCIE lab, of course), so if you want to leave them there, leave ’em. The only way I’ve ever seen to get rid of them is to disable Inverse ARP and reload the router. You can turn Inverse ARP off
with the no frame-relay inverse-arp command.
R1(config)#int s0
R1(config-if)#no frame-relay inverse-arp
If you decide to turn it back on, use the frame-relay inverse-arp command.
R1(config)#int s0
R1(config-if)#frame-relay inverse-arp
It won’t surprise you to learn that we’ll use the frame map command to create frame maps, but you must be careful with the syntax of this
command. That goes for the exam room and working with the real thing! Let’s take another look at the network.
The key to writing successful frame map statements is simple and straightforward: Always map the local DLCI to the remote IP address. When you follow that simple rule, you’ll always write correct frame map statements in the field and nail every Frame Relay question in the exam room. There are a few more details you need to learn about these statements, but the above rule is the key to success with the frame map command.
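For example, using our lab’s addressing: R1 reaches R2 (172.12.123.2) over R1’s own local DLCI 122, so the map on R1 is built from R1’s DLCI and R2’s IP address, never from R2’s DLCI.

R1(config-if)#frame-relay map ip 172.12.123.2 122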
Now let’s write some static frame maps! I’ve removed all previous configurations, so we’re starting totally from scratch. We’ll start on R1 and use IOS Help to continually view our options with the frame map command. I have not opened this interface, and all Cisco router interfaces are closed by default.
R1(config)#int s0
R1(config-if)#ip address 172.12.123.1 255.255.255.0
R1(config-if)#encap frame
R1(config-if)#no frame inverse-arp
R1(config-if)#frame map ?
  appletalk  AppleTalk
  bridge     Bridging
  decnet     DECnet
  ip         IP
  ipx        Novell IPX
  llc2       llc2
The first option is to enter the protocol we’re using, and that’s IP. Simple enough!
R1(config-if)#frame map ip ? A.B.C.D Protocol specific add
“protocol specific address” isn’t much of a hint, so we better know that we need to enter the remote IP address we’re mapping to. We’ll create this
map to R2’s IP address, 172.12.123.2.
The next value needed is the DLCI:

R1(config-if)#frame map ip 172.12.123.2 ?
  <16-1007>  DLCI
… and we’re not given much of a hint as to which DLCI we’re supposed to enter — the one on R1 or on R2! Following our simple DLCI rule, we know to enter a local DLCI here. Never enter the remote router’s DLCI. The router will
accept the command, but the mapping will not work.
R1(config-if)#frame map ip 172.12.123.2 122 ?
  broadcast            Broadcasts should be forwarded to this network address
  cisco                Use CISCO Encapsulation
  compress             Enable TCP/IP and RTP/IP header compression
  ietf                 Use RFC1490/RFC2427 Encapsulation
  nocompress           Do not compress TCP/IP headers
  payload-compression  Use payload compression
  rtp                  RTP header compression parameters
  tcp                  TCP header compression parameters
  <cr>
We’re getting somewhere, since we see a <cr> at the bottom, telling us what we’ve entered to this point is a legal command. Let’s go with this command as it is, and write a similar map to R3 using DLCI 123.
R1(config-if)#frame map ip 172.12.123.2 122
R1(config-if)#frame map ip 172.12.123.3 123
R1(config-if)#no shut
00:14:32: %SYS-5-CONFIG_I: Configured from console by console
00:14:33: %LINK-3-UPDOWN: Interface Serial0, changed state to up
00:14:44: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to up
After opening the interface, we’ll check our mappings with show frame map.

R1#show frame map
Serial0 (up): ip 172.12.123.2 dlci 122, static,
              CISCO, status deleted
Serial0 (up): ip 172.12.123.3 dlci 123, static,
              CISCO, status deleted
Note static in this output. Mappings created with the frame map command will be denoted as static in the output of show frame map. If these mappings had been created by Inverse ARP, we’d see the word dynamic there. We also see status deleted, and that doesn’t sound good! In this case, we’re seeing that because we haven’t configured the spokes yet. IP addresses haven’t even been assigned to those routers yet, so let’s do
that and configure the appropriate mappings at the same time.
R2(config)#int s0
R2(config-if)#ip address 172.12.123.2 255.255.255.0
R2(config-if)#encap frame
R2(config-if)#no frame inverse-arp
R2(config-if)#frame map ip 172.12.123.1 221
R2(config-if)#frame map ip 172.12.123.3 221
R2(config-if)#no shutdown
00:21:27: %SYS-5-CONFIG_I: Configured from console by console
00:21:28: %LINK-3-UPDOWN: Interface Serial0, changed state to up
00:21:38: %FR-5-DLCICHANGE: Interface Serial0 - DLCI 221 state changed to ACTIVE
00:21:39: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to up
There’s a message about DLCI 221 changing to ACTIVE, so let’s run show frame map and
see what’s going on.
R2#show frame map
Serial0 (up): ip 172.12.123.1 dlci 221, static,
              CISCO, status defined, active
Serial0 (up): ip 172.12.123.3 dlci 221, static,
              CISCO, status defined, active
Looks good! Let’s configure R3 and then see where things stand.
R3(config)#int serial0 R3(config-if)#ip address 172.1 R3(config-if)#encap frame R3(config-if)#no frame inver R3(config-if)#frame map ip 172 R3(config-if)#frame map ip 172 R3(config-if)#no shutdown
00:24:38: %LINEPROTO-5-UPDOWN: R3#show frame map Serial0 (up): ip 172.12.123.1 CISCO, status defined, activ Serial0 (up): ip 172.12.123.2
The mappings on both spokes are showing as active. Let’s check the hub! R1#show frame map Serial0 (up): ip 172.12.123.2 Serial0 (up): ip 172.12.123.3
Each router can now ping the other, and we have IP connectivity. I’m showing only the pings from the hub to both
spokes, but I did go to each router and make sure I could ping the other two routers.
R1#ping 172.12.123.2 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5 R1#ping 172.12.123.3 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5
If I have 100% connectivity, why did I make kind of a big deal of leaving the broadcast option off the frame map
statements? Let’s configure OSPF on this network and find out. If you don’t know anything about OSPF yet, that’s fine -you will by the end of this course. All you need to know for now is that OSPF-enabled interfaces will send Hello packets in an attempt to create neighbor relationships with downstream routers, and those Hello packets are multicast to 224.0.0.5.
The key word there is “multicast”. Frame Relay treats a multicast just like a broadcast — these traffic types can only be forwarded if the broadcast option is configured on the frame map statements. Pings
went through because they’re unicasts, but routing protocol traffic can’t operate over Frame Relay if the broadcast option is left off the map statements.
R1(config-if)#frame map ip 172 broadcast Broadcasts should b R3(config-if)#frame map ip 172
If you’re having trouble with routing protocol Hellos or other multicasts and broadcasts not being received by routers on a Frame Relay network, I can practically guarantee you the problem is a missing broadcast
statement. You’ll usually see the broadcast statement on the end of all frame map statements. It’s so common that many admins think it’s required! You don’t have to put the broadcast option on spoke-tospoke mappings, since all spoke-to-spoke traffic goes through the hub, and the hub will not forward those broadcasts. In our lab, R2’s mapping to R3 doesn’t require broadcast, and vice versa. It doesn’t hurt anything, but it’s
not a requirement.
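To recap the hub side in one place, R1’s finished map statements for this lab look like this. This is a sketch using our lab’s addressing, with the broadcast option added so routing protocol traffic can cross the PVCs:

R1(config-if)#frame-relay map ip 172.12.123.2 122 broadcast
R1(config-if)#frame-relay map ip 172.12.123.3 123 broadcast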
Subinterfaces And Frame Relay Up to now, we’ve used physical Serial interfaces for our Frame Relay networks. Using a physical Serial interface can lead to some routing complications, particularly on the hub router. One of those complications is split horizon. If we’re running OSPF on our network, there’s no problem. On EIGRP networks, split horizon can be a problem, as illustrated by the following
network topology. (I know we haven’t hit EIGRP in this course yet. No advance knowledge of EIGRP is needed to understand this lab.)
The three routers are using
their physical interfaces for Frame Relay, and each router is running EIGRP on that same physical interface. R2 is advertising its loopback address via EIGRP. Does R1 have the route?
R1#show ip route eigrp 2.0.0.0/32 is subnetted, 1 su D 2.2.2.2 [90/2297856] via 1
Yes! R3 is receiving EIGRP packets from R1 — does R3 have the route? R3#show ip route eigrp R3#
As I often say, “When a show command doesn’t show you anything, it has nothing to show you!” R3 has no EIGRP routes. The reason R3 doesn’t have that route is split horizon. This routing loop prevention feature prevents a router from advertising a route back out the same interface that will be used by that same router as an exit interface to reach that route. Or as I’ve always put it, “A
router can’t advertise a route out the same interface that it used to learn about the route in the first place.” Since R1 will send packets out Serial0 to reach the next-hop address for 2.2.2.2, it can’t send advertisements for that route out Serial0.
R1#show ip route eigrp 2.0.0.0/32 is subnetted, 1 su D 2.2.2.2 [90/2297856] via 1
We have three solutions to this problem:

Create a logical full mesh between all routers
Use the interface-level command no ip split-horizon
Use multipoint and/or point-to-point subinterfaces

With three solutions, you just know there have to be at least two with some shortcomings! A logical full mesh wouldn’t be so bad between three routers, but not many production networks are made up of three routers. As you add dozens and/or hundreds of routers to this, you quickly understand that a logical full mesh is simply not a scalable solution.
The second solution, disabling split horizon, is simple enough in theory. We do that at the interface level in an EIGRP config with the no ip split-horizon eigrp command.

R1(config)#int s0
R1(config-if)#no ip split-horizon eigrp 1
As a result, R1 advertises the missing route to R3, and it appears in R3’s route table.
R3#show ip route eigrp 2.0.0.0/32 is subnetted, 1 su D 2.2.2.2 [90/2809856] via 1
Simple enough, right? Welll…. Split horizon is enabled by default for a reason, and even though you may get the route advertisement that you do want after disabling it, you may quickly find routing loops that you don’t want. Should you ever disable SH in a production network, be ready for unexpected routing issues to pop up. Using subinterfaces is a better solution, since those subinterfaces are seen by split horizon as totally separate
interfaces. It also gives us a chance to practice using subinterfaces for our exam success, and I’ll also use this lab to introduce you to the frame interface-dlci command. We’ll start by re-enabling SH with the ip split-horizon eigrp command.

R1(config)#int s0
R1(config-if)#ip split-horizon eigrp 1
We’re going to assign a different subnet to each of our subinterfaces on R1, and change the addressing on R2
and R3 accordingly.
We have two choices for Frame subinterfaces, multipoint and point-to-point. Since both of our subinterfaces on R1 are going to communicate with one and
only one other router, we’ll make these point-to-point links. You must define the interface type when you create the interface.
R1(config)#int s0.12 ?
  multipoint      Treat as a multipoint link
  point-to-point  Treat as a point-to-point link
Here’s the configuration for R1. All frame relay commands from earlier labs have been removed. Note encapsulation frame-relay is still configured on R1’s Serial0 physical interface and the frame
interface-dlci command is used on point-to-point links. I’m disabling Inverse ARP at the interface level, so it’ll be disabled on all subinterfaces as well. R1:
R1(config)#int s0 R1(config-if)#encap frame R1(config-if)#no frame inverse
R1(config)#int s0.12 point-toR1(config-subif)#ip address 17 R1(config-subif)#frame-relay i
R1(config)#int s0.13 point-toR1(config-subif)#ip address 17
R1(config-subif)#frame-relay i
Don’t try to use the frame map command on a point-to-point interface — the router will not accept the command. The router will even tell you the right command to use on a PTP interface, but it’s a safe bet the exam isn’t gonna tell you!
R1(config)#int s0.12
R1(config-subif)#frame map ip 172.12.123.2 122
FRAME-RELAY INTERFACE-DLCI command should be used on point-to-point interfaces
The configurations on R2 and R3 are not using subinterfaces, so we’ll use frame map
statements. R2:
R2(config)#int s0 R2(config-if)#ip address 172.1 R2(config-if)#encap frame R2(config-if)#no frame inverse R2(config-if)#frame map ip 172
R3: R3(config)#int s0
R3(config-if)#ip address 172.1 R3(config-if)#encap frame
R3(config-if)#no frame inverse
R3(config-if)#frame map ip 172
Off screen, I’ve configured all three routers with EIGRP, including a loopback of 2.2.2.2 /32 on R2. R1 has the route and can ping 2.2.2.2, as verified by show ip route eigrp and ping.
R1#show ip route eigrp 2.0.0.0/32 is subnetted, 1 su D 2.2.2.2 [90/2297856] via 172 R1#ping 2.2.2.2 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5
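If you want to reproduce that off-screen EIGRP setup in your own lab, here’s a minimal sketch. It assumes AS 1 (to match the split-horizon commands we used a moment ago) and relies on classful network statements, so it covers the loopback and whatever 172.12.x.x subnets you chose for the Frame link; adjust it to your own addressing. The no auto-summary on R2 simply keeps that 2.2.2.2 loopback from being summarized on older IOS defaults (more on that in the EIGRP chapter).

R2(config)#interface loopback0
R2(config-if)#ip address 2.2.2.2 255.255.255.255
R2(config)#router eigrp 1
R2(config-router)#network 2.0.0.0
R2(config-router)#network 172.12.0.0
R2(config-router)#no auto-summary

R1(config)#router eigrp 1
R1(config-router)#network 172.12.0.0

R3(config)#router eigrp 1
R3(config-router)#network 172.12.0.0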
What about R3? Let’s check R3’s EIGRP table and find out!
R3#show ip route eigrp 2.0.0.0/32 is subnetted, 1 su D 2.2.2.2 [90/2809856] via 172 172.12.0.0/30 is subnetted, 2 D 172.12.123.0 [90/2681856] vi
R3#ping 2.2.2.2
Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5
R3 has the route and can ping 2.2.2.2. R1 has no problem
advertising the route to R3, because split horizon never comes into play. The route came in on R1’s s0.12 subinterface and then left on s0.13. Split horizon considers subinterfaces on the same physical interface to be totally separate interfaces, so there’s no reason for split horizon to prevent R1 from receiving a route on one subinterface and then advertising it back out another subinterface.
Whew! To recap, we have three ways to circumvent the rule of Split Horizon:

Create a logical full mesh.
Disable split horizon at the interface level with no ip split-horizon.
Use subinterfaces, either point-to-point or multipoint.

Generally, you’ll use the last method, but it’s always a good idea to know more than one way to do things in CiscoLand!
Configuring Multipoint Subinterfaces Had I chosen to configure multipoint subinterfaces in that lab, I would have configured them with the same command I use with physical interfaces — frame map. I’ll create an additional subinterface to illustrate:
R1(config)#int s0.14 multipoin R1(config-subif)#ip address 17 R1(config-subif)#frame map ip
When it comes to deciding
whether a subinterface should be point-to-point or multipoint, it really depends on the network topology and the number of remote routers a subinterface will be communicating with. There’s no “one size fits all” answer to that question, but for both exam room and server room success, it’s vital to know:

Subinterfaces are often used to work around split horizon.
You have to define subinterfaces as multipoint or point-to-point.
Always, always, always use the frame interface-dlci command with ptp subinterfaces.

The sketch below pulls that point-to-point pattern together in one place.
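The DLCIs here are our lab’s 122 and 123; the subnet addressing is hypothetical (I’ve used /30s purely as an example), so substitute your own:

R1(config)#interface serial0
R1(config-if)#encapsulation frame-relay
R1(config-if)#no frame-relay inverse-arp
R1(config)#interface serial0.12 point-to-point
R1(config-subif)#ip address 172.12.12.1 255.255.255.252
R1(config-subif)#frame-relay interface-dlci 122
R1(config)#interface serial0.13 point-to-point
R1(config-subif)#ip address 172.12.13.1 255.255.255.252
R1(config-subif)#frame-relay interface-dlci 123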
Frame Relay Congestion Notification Techniques (With Bonus Acronyms!)
Frame Relay uses two different values to indicate congestion:

FECN — Forward Explicit Congestion Notification
BECN — Backward Explicit Congestion Notification

As I’m sure you can guess by the names, the main difference between the two is the direction! But what direction?
Glad you asked!
The frame relay cloud shown consists of multiple Frame Switches, but for clarity’s sake, I’ll only illustrate one. If that switch encounters transmission delays due to network congestion, the switch will set the FECN bit on the frames heading for Router B, since
that’s the direction in which the frames are traveling. The BECN bit will be set on frames being sent back to Router A.
When a frame arrives at a router with the FECN bit set, that means congestion was encountered in the direction in which the frame was traveling.
When a frame arrives at a router with the BECN bit set, congestion was encountered in the opposite direction in which the frame was traveling. The Discard Eligible bit is considered a Frame Relay congestion notification bit, but the purpose is a bit different from the BECN and FECN. Frames are sometimes dropped as a result of congestion, and frames with the DE bit set will be dropped before frames without that bit set. Basically, setting the DE bit on a frame
indicates data that’s considered less important than data without the DE bit set. The FECN, BECN, and DE values can be seen with show frame pvc.
R1#show frame pvc

PVC Statistics for interface Serial0 (Frame Relay DTE)

             Active   Inactive
  Local         2         0
  Switched      0         0
  Unused        0         0
DLCI = 122, DLCI USAGE = LOCAL
input pkts 30 output pkts out bytes 0 dropped pkt in BECN pkts 0 out FECN pk in DE pkts 0out DE pkts 0 ou bytes 0 pvc create time 00:07:45, last
And speaking of PVC Status messages….
It’s Your Fault (Or Possibly Yours, But It Sure Ain’t Mine)
When you check PVCs with show frame-relay pvc, you’ll see one of three status messages for each PVC:

active
inactive
deleted

Active is what we’re after, and that’s what we saw in the previous example. But what’s
the difference between inactive and deleted? I’ll close R3’s Serial0 interface and see the result on R1. For clarity, I’m removing the information regarding the DLCI to R2.

R3(config)#int s0
R3(config-if)#shut

R1#show frame pvc

PVC Statistics for interface Serial0 (Frame Relay DTE)

             Active   Inactive
  Local         1         1
  Switched      0         0
  Unused        0         0
DLCI = 123, DLCI USAGE = LOCAL input pkts 159 output pkts out bytes 0 dropped pk in BECN pkts 0 out FECN pkt in DE pkts 0 out DE pkts out bcast bytes 0
pvc create time 00:38:46, last
The DLCI to R3 has gone inactive because there’s a problem on R3 — in this case, the Serial interface is administratively down. On the other hand, deleted means the PVC isn’t locally present. Personally, I’ve always kept those two straight like this:
inactive means it’s the other guy’s fault (the problem is remote)
deleted means it’s your fault (the problem is local)

And You Thought I Had Forgotten
Now about those cables…. I mentioned “leased lines” early in this section, and this is one of those terms that has about
47 other names. I usually call them “serial lines”, “serial links”, or if I’m tired and can’t spare the extra word, “serial”. Others call them T1s, T1 links, or just plain WAN links. One name or the other, they’re still leased lines. To get those leased lines to work in a production network, we need a device to send clocking to our router (the DTE). That device is going to
be the CSU/DSU, which is generally referred to as “the CSU”. Collectively, the DTE and CSU make up the Customer Premise Equipment (CPE). Your network may not have an external CSU. Many of today’s Cisco routers use WAN Interface Cards with an embedded CSU/DSU, which means you don’t need the external CSU. Believe me, that’s a good thing — it’s one less external device that could go down.
Here’s where acronym confusion comes in on occasion: The CSU/DSU acts as a DCE (Data Circuit-terminating Equipment; also called Data Communications Equipment on occasion). The DCE supplies clocking to the DTE, and in doing so tells the DTE — our Cisco router — when to send data and how fast it can do so. The DCE basically says “When I say JUMP, you’re gonna say HOW HIGH?”
In this case, it’s really “HOW FAST?”, and that depends on how much money we’re giving the provider. There are three Digital Speed values you should know, since they might show up on your exams and will show up during conversations with your provider: Digital Signal Zero (DS0) channels are 64Kbps each. According to Wikipedia, that’s enough for one digitized phone call, the purpose for which this channel size was originally
designed. Digital Signal One (DS1) channels run at 1.544 Mbps, and if that sounds familiar, that’s because we usually refer to DS1 lines as T1 lines. Digital Signal Three (DS3) channels run at 44.736 Mbps (sometimes rounded up to 45 Mbps in sales materials). T3 lines can carry 28 DS1 channels or 672 DS0 channels.
We’re not locked into these three speeds. If we need more than DS0 but less than DS1, we can buy speed in additional units of 64Kbps. Since we’re buying a fraction of T1 speed, this is called fractional T1. If we need more speed than a T1 line offers, but don’t need or want to pay for T3 speed, we can purchase additional speed in units of 1.536 Mbps. It’s no surprise this is called fractional T3.
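A quick bit of arithmetic ties those numbers together: a DS1 carries 24 DS0 channels, and 24 × 64 Kbps = 1536 Kbps; add 8 Kbps of framing overhead and you’ve got the familiar 1.544 Mbps T1 rate. A fractional T1 of, say, six DS0s gives you 6 × 64 Kbps = 384 Kbps.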
The info you’ll find on the Wikipedia link below is beyond the scope of the CCENT and CCNA exams, but it does have important information on other Tx options (including T2) and the international differences between the channels and their speeds. You’ll also find excellent info on the overhead involved and much more detail on how these lines work. http://en.wikipedia.org/wiki/T-carrier
It’s worth a read! I Like EWANs. I Hate EWOKs. That’s strictly an editorial comment. If you like both, that’s fine with me. What’s that? You’ve never heard of an EWAN? That’s an Ethernet WAN, and according to Cisco, it’s a pretty sweet deal!
“Ethernet has evolved from just a LAN technology to a scalable, cost-effective and manageable WAN solution for businesses of all sizes. Ethernet offers numerous cost and operational advantages over conventional WAN solutions. An EWAN offers robust and extremely scalable high-quality services that are superior to any traditional WAN technology.” Try getting an Ewok to do that!
The connection to an EWAN is similar to connecting to our Ethernet LAN, really. We’ll use an Ethernet interface to connect rather than the Serial interface we used in this section for our HDLC, PPP, and Frame Relay WANs. Here’s the source of that quote from earlier in this section, and an excellent guide on choosing the right router and/or switch for your EWAN:
http://www.cisco.com/en/US/pro 564978.html Those router choices include the popular Integrated Services Router (ISR):
http://www.cisco.com/en/US/pro Neither of those links are required reading for the CCENT or CCNA exams, but it’s good material to have handy when you’re the one making these
choices! A (Very) Little About MPLS Multiprotocol Label Switching (MPLS) is a complex topic, and we’re not going to go very far into it here. I do want to point out that where Frame Relay and EWANs run at Layer 2, MPLS VPNs can run at Layer 2 or 3, but when you hear someone mention “MPLS VPN”, they mean the Layer 3 variety.
Our MPLS VPN endpoints and midpoints consist of Customer Edge, Provider Edge, and Provider devices, all sending and forwarding IP packets to their proper destination. (We hope!)
There’s just a wee bit more to
this process, but we’ll save that for your future studies. By the way, I receive messages regularly from students telling me how popular MPLS is getting in their networks, so this is a topic well worth studying on your own when you’re done with your CCENT and CCNA! Next up, we’ll review important IP addressing and routing concepts from your ICND1
studies before tackling OSPF and EIGRP!
Routing And IP Addressing Fundamentals: A Review Before we head into our OSPF and EIGRP studies, spend some time with this chapter from my ICND1 Study Guide. When you’re comfortable with the routing fundamentals in this section, charge forward!
For one host to successfully send data to another, the sending host needs two destination addresses:

destination MAC address (Layer 2)
destination IP address (Layer 3)
In this section, we’re going to concentrate on Internet Protocol (IP) addressing. IP addresses are often referred to as “Network addresses” or “Layer 3 addresses”, since that is the OSI layer at which these addresses are used.
The IP address format you’re familiar with — addresses such as “192.168.1.1” — are IP version 4 addresses. That address type is the focus of this section. IP version 6 addresses are now in use, and they’re radically different from IPv4 addresses. I’ll introduce you to IPv6 later in this course, but unless I mention IPv6 specifically, every address you’ll see in this course is IPv4. The routing process and IP both operate at the Network layer of the OSI model, and the routing
process uses IP addresses to move packets across the network in the most effective manner possible. In this section, we’re going to first take a look at IP addresses in general, and then examine how routers make a decision on how to get packet from source to destination. The routing examples in this section are not complex, but they illustrate important fundamentals that you must have a firm grasp on before moving on to more complex
examples. To do any routing, we’ve got to understand IP addressing, so let’s start there! IP Addressing And An Introduction To Binary Conversions If you’ve worked as a network admin for any length of time, you’re already familiar with IP addresses. Every PC on a network will have one, as will other devices such as printers. The term for a network device with an IP address is host, and I’ll try to use that term as often as possible to get you used to
it! The PC…err, the host I’m creating this document on has an IP address, shown here with the Microsoft command ipconfig.
C:\>ipconfig

Windows IP Configuration

Ethernet adapter Local Area Connection:

   IP Address: 192.168.1.100
   Subnet Mask: 255.255.255.0
   Default Gateway: 192.168.1.1
All three values are important, but we’re going to concentrate on the IP address and subnet mask for now. We’re going to
compare those two values, because that will allow us to see what network this particular host belongs to. To perform this comparison, we’re going to convert both the IP address and the subnet mask to binary strings. You’ll find this to be an easy conversion with practice. First we’ll convert the IP address 192.168.1.100 to a binary string. The format that we’re used to seeing IP addresses take, like the 192.168.1.100 shown here, is a
dotted decimal address. Each one of those numbers in the address are decimal representations of a binary string, and a binary string is simply a string of ones and zeroes. Remember — “it’s all ones and zeroes”! We’ll convert the decimal 192 to binary first. All we need to do is use the following series of numbers and write the decimal that requires conversion on the left side:
       128  64  32  16  8  4  2  1
192
All you have to do now is work from left to right and ask yourself one question: “Can I subtract this number from the current remainder?” Let’s walk through this example and you’ll see how easy it is! Looking at that chart, ask yourself “Can I subtract 128 from 192?” Certainly we can. That means we put a “1” under “128”.

       128  64  32  16  8  4  2  1
192     1
Subtract 128 from 192 and the remainder is 64. Now we ask ourselves “Can I subtract 64 from 64?” Certainly we can! Let’s put a “1” under “64”.

       128  64  32  16  8  4  2  1
192     1   1

Subtract 64 from 64, and you have zero. You’re practically done with your first binary conversion. Once you reach zero, just put a zero under every other remaining value, and you have your binary string!

       128  64  32  16  8  4  2  1
192     1   1   0   0   0  0  0  0
The resulting binary string for the decimal 192 is 11000000. That’s all there is to it! If you know the basics of binary and decimal conversions, AND practice these skills diligently, you can answer any subnetting question Cisco asks you. I’ll go ahead and show you the entire binary string for 192.168.1.100, and the subnet mask is expressed in binary directly below it.

192.168.1.100 = 11000000 10101000 00000001 01100100
255.255.255.0 = 11111111 11111111 11111111 00000000

The subnet mask indicates where the network bits and host bits are. The network bits of the IP address are indicated by a “1” in the subnet mask, and the host bits are where the subnet mask has a “0”. This address has 24 network bits,
and the network portion of this address is 192.168.1 in decimal. Any IP addresses that have the exact same network portion are on the same subnet. If the network is configured correctly, hosts on the same subnet should be found on one “side” of the router, as shown below.
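(One quick aside before we get to that diagram: for extra practice, run the same left-to-right drill on the last octet of our host address, the decimal 100, and you’ll see where that fourth group of bits came from. 128 doesn’t fit, so write a 0. 64 fits, leaving a remainder of 36; 32 fits, leaving 4; 16 and 8 don’t fit; 4 fits, leaving 0; and the rest get zeroes.

       128  64  32  16  8  4  2  1
100     0    1   1   0  0  1  0  0

That’s 01100100, exactly the fourth octet in the binary string above.)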
Assuming a subnet mask of 255.255.255.0 for all hosts, we have two separate subnets, 192.168.1.x and 192.168.4.x. What you don’t want is the following:
This could lead to a problem, since hosts in the same subnet are separated by a router. We’ll see why this could be a problem when we examine the routing process later in this section, but for now keep in mind that having hosts in the same subnet separated by a
router is not a good idea! The IP Address Classes Way back in the ancient times of technology — September 1981, to be exact — IP address classes were defined in RFC 791. RFCs are Requests For Comments, which are technical proposals and/or documentation. Not always the most exciting reading in the world, but it’s well worth reading the RFC that deals with the subject you’re studying. Technical exams occasionally
refer to RFC numbers for a particular protocol or network service. To earn your CCENT and CCNA certifications, you must know these address classes and be able to quickly identify what class an IP address belongs to. Here are the three ranges of addresses that can be assigned to hosts:

Class A: 1 — 126
Class B: 128 — 191
Class C: 192 — 223

The following classes are reserved and cannot be assigned to hosts:

Class D: 224 — 239. Reserved for multicasting, a topic not covered on the CCENT or CCNA exams, although you will need to know a few reserved addresses from that range. You’ll find those throughout the course.
Class E: 240 — 255. Reserved for future use, also called “experimental addresses”.
Any address with a first octet of 127 is reserved for loopback interfaces. This range is *not* for Cisco router loopback interfaces. For your exams, I strongly recommend that you know which ranges can be assigned to hosts and which ones cannot. Be able to identify which class a given IP address belongs to. It’s straightforward, but I guarantee those skills will serve you well on exam day!
The rest of this section concentrates on Class A, B, and C networks. Each class has its own default network mask, default number of network bits, and default number of host bits. We’ll manipulate these bits in the subnetting section, and you must know the following values in order to answer subnetting questions successfully — in the exam room or on the job!

Class A:
  Network mask: 255.0.0.0
  Number of network bits: 8
  Number of host bits: 24

Class B:
  Network mask: 255.255.0.0
  Number of network bits: 16
  Number of host bits: 16

Class C:
  Network mask: 255.255.255.0
  Number of network bits: 24
  Number of host bits: 8

The RFC 1918 Private Address Classes
If you’ve worked on different production networks, you may have noticed that the hosts at different sites use similar IP addresses. That’s because certain IP address ranges have been reserved for internal networks — that is, networks with hosts that do not need to communicate with other hosts outside their own internal
network. Address classes A, B, and C all have their own reserved range of addresses. You should be able to recognize an address from any of these ranges immediately.

Class A: 10.0.0.0 — 10.255.255.255
Class B: 172.16.0.0 — 172.31.255.255
Class C: 192.168.0.0 — 192.168.255.255

You should be ready to identify those ranges in that format, with the dotted decimal masks, or with prefix notation. (More about prefix notation later in this section.)

Class A: 10.0.0.0 255.0.0.0, or 10.0.0.0 /8
Class B: 172.16.0.0 255.240.0.0, or 172.16.0.0 /12
Class C: 192.168.0.0 255.255.0.0, or 192.168.0.0 /16

You may already be thinking
“Hey, we use some of those addresses on our network hosts and they get out to the Internet with no problem at all.” (It’s a rare network that bans hosts from the Internet today — that approach just isn’t practical.) The network services NAT and PAT (Network Address Translation and Port Address Translation) make that possible, but these are not default behaviors. We have to configure NAT and PAT manually. We’re going to do
just that later in this course, but for now, make sure you know those three address ranges cold!
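If that /12 on the Class B private range looks odd next to the /8 and /16, here’s the quick math behind it: the mask 255.240.0.0 is 11111111 11110000 00000000 00000000 in binary, which is twelve 1s, so only the top four bits of the second octet are locked down. That leaves the second octet free to range from 16 (00010000) through 31 (00011111), which is exactly the 172.16.0.0 through 172.31.255.255 range listed above.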
Introduction To The Routing Process Before we start working with routing protocols, we need to understand the very basics of the routing process and how routers decide where to send packets. We’ll take a look at a basic network and follow the decision-making process from the point of view of the host, then the router. We’ll then examine the previous example in this section to see why it’s a
bad idea to have hosts from the same subnet separated by a router. Let’s take another look at a PC’s ipconfig output.
C:\>ipconfig

Windows IP Configuration

Ethernet adapter Local Area Connection:

   IP Address: 192.168.1.100
   Subnet Mask: 255.255.255.0
   Default Gateway: 192.168.1.1
When this host is ready to send packets, there are two and only two possibilities regarding the destination address:
It’s on the 192.168.1.0 255.255.255.0 network. It’s not. If the destination is on the same subnet as the host, the packet’s destination IP address will be that of the destination host. In the following example, this PC is sending packets to 192.168.1.15, a host on the same subnet, so there is no need for the router to get involved. In effect, those packets go straight to 192.168.1.15.
192.168.1.100 now wants to send packets to the host at 10.1.1.5, and 192.168.1.100 knows it’s not on the same subnet as 10.1.1.5. In that case, the host will send the packets to its default gateway − in this case, the router’s
ethernet0 interface. The transmitting host is basically saying “I have no idea where this address is, so I’ll send it to my default gateway and let that device figure it out. In Cisco Router I trust!”
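Here's that decision in a nutshell, using the addresses from the ipconfig output above (a quick sketch, not router output):
Host: 192.168.1.100 with mask 255.255.255.0, so the local network is 192.168.1.0
Destination 192.168.1.15 -> the 192.168.1 portion matches the local network -> send the packet directly to that host
Destination 10.1.1.5 -> 10.1.1.x does not match 192.168.1.0 -> send the packet to the default gateway, 192.168.1.1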
When a router receives a packet, there are three possibilities regarding its destination: Destined for a directly
connected network. Destined for a nondirectly connected network that the router has an entry for in its routing table. Destined for a nondirectly connected network that the router does not have an entry for. Let’s take an illustrated look at each of these three possibilities.
How A Router Handles A Packet Destined For A Directly Connected Network We’ll use the following network in this section:
The router has two Ethernet interfaces, referred to in the rest of this example as “E0” and “E1”. The switch ports will
not have IP addresses, but the router’s Ethernet interfaces will — E0 is 10.1.1.2, E1 is 20.1.1.2. Host A sends a packet destined for Host B at 20.1.1.1. The router will receive that packet on its E0 interface and see the destination IP address of 20.1.1.1.
The router will then check its routing table to see if there’s an entry for the 20.0.0.0 255.0.0.0 network. Assuming no static routes or dynamic routing protocols have been configured, the router’s IP routing table will look like this:
R1#show ip route
Codes: C - connected, S - static
Gateway of last resort is not set
C    20.0.0.0/8 is directly connected, Ethernet1
C    10.0.0.0/8 is directly connected, Ethernet0
See the “C” and the “S” next to
the word “codes”? You’ll see anywhere from 15 — 20 different types of routes listed there, and I’ve removed those for clarity’s sake. You don’t see the mask expressed as “255.0.0.0” — you see it as “/8”. This is prefix notation, and the number simply represents the number of 1s at the beginning of the network mask when expressed in binary. That “/8” is pronounced “slash eight”. 255.0.0.0 = binary string 11111111 00000000 00000000
00000000 = /8 The “C” indicates a directly connected network, and there is an entry for 20.0.0.0. The router will then send the packet out its E1 interface and Host B will receive it.
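By the way, those two connected routes exist only because the router's Ethernet interfaces have IP addresses and are up. Here's a minimal sketch of the configuration that would create them, using the interface names and addresses from this example (the /8 masks match the routing table entries above):
R1(config)#interface ethernet0
R1(config-if)#ip address 10.1.1.2 255.0.0.0
R1(config-if)#no shutdown
R1(config-if)#interface ethernet1
R1(config-if)#ip address 20.1.1.2 255.0.0.0
R1(config-if)#no shutdown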
Simple enough, right? Of course, the destination network
will not always be directly connected. We’re not getting off that easy! How The Router Handles A Packet Destined For A Remote Network That Is Present — Or Not — In The Routing Table Here’s the topology for this example:
If Host A wants to transmit packets to Host B, there’s a problem. The first router that packet hits will not have an entry for the 30.0.0.0 /8 network, will have no idea how to route the packets, and the packets will be dropped. There are no static routes or dynamic routing protocols in action on a Cisco router by default. Once we apply those IP addresses and then open the interfaces, there will be a connected route entry for each
of those interfaces with IP addresses, but that’s it. When R1 receives the packet destined for 30.1.1.2, R1 will perform a routing table lookup to see if there’s a route for 30.0.0.0. The problem is that there is no such route, since R1 only knows about the directly connected networks 10.0.0.0 and 20.0.0.0.
R1#show ip route
Codes: C - connected, S - static
Gateway of last resort is not set
C    20.0.0.0/8 is directly connected, Ethernet1
C    10.0.0.0/8 is directly connected, Ethernet0
Without some kind of route to 30.0.0.0, the packet will simply be dropped by R1.
We can use a static route or a dynamic routing protocol to resolve this. Let’s go with static
routes, which are created with the ip route command. The interface named at the end of the command is the local router’s exit interface. (Plenty more on this command coming in a later section!)
R1(config)#ip route 30.0.0.0 2
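That command is clipped at the page margin. Written out in full, it would look something like this (the /8 mask matches the routing table entry below, and E1 is the exit interface this example uses):
R1(config)#ip route 30.0.0.0 255.0.0.0 ethernet1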
The routing table now displays a route for the 30.0.0.0 /8 network. The letter “S” indicates a static route.
R1#show ip route
Codes: C - connected, S - static
C    20.0.0.0/8 is directly connected, Ethernet1
C    10.0.0.0/8 is directly connected, Ethernet0
S    30.0.0.0/8 is directly connected, Ethernet1
R1 now has an entry for the 30.0.0.0 network, and sends the packet out its E1 interface. R2 will have no problem forwarding the packet destined for 30.1.1.2, since R2 is directly connected to that network.
If Host B wants to respond to Host A’s packet, there would be a problem at R2, since the incoming destination address of the reply packet would be 10.1.1.1, and R2 has no entry for that network. A static route or dynamic routing protocol would be needed to get such a route into R2’s routing table. The moral of the story: Just because “Point A” can get packets to “Point B”, it doesn’t mean B can get packets back to A!
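For completeness, here's a sketch of the kind of return route R2 would need. The exit interface name is an assumption, since the diagram isn't reproduced here; the idea is simply a mirror image of the route we just built on R1:
R2(config)#ip route 10.0.0.0 255.0.0.0 ethernet0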
Why We Want To Keep Hosts In One Subnet On One Side Of The Router Earlier in this section, the following topology served as an example of how not to configure a network.
Now that we’ve gone through some routing process examples, we can see why this is a bad setup. Let’s say a packet destined for 192.168.1.17 is coming in on another router interface.
The router receives that packet and performs a routing table lookup for 192.168.1.0 255.255.255.0, and sees that network is directly connected via interface E0. The router will then send the packet out the E0 interface, even though the destination IP address is actually found off the E1 interface!
In future studies, you’ll learn ways to get the packets to 192.168.1.17. For your CCENT and CCNA exams, keep in mind that it’s a good practice to keep all members of a given subnet on one side of a router. It’s
good practice for production networks, too! Now that we have a firm grasp on IP addressing and the overall routing process, let’s move forward and tackle wildcard masking and OSPF!
The Wildcard Mask ACLs use wildcard masks to determine what part of a network number should and should not be examined for matches against the ACL. Wildcard masks are written in binary, and then converted to dotted decimal for router configuration. Zeroes indicate to the router that this particular bit must match, and ones are
used as “I don’t care” bits — the ACL does not care if there is a match or not. In this example, all packets that have a source IP address on the 196.17.100.0 /24 network should be allowed to enter the router’s Ethernet0 interface. No other packets should be allowed to do so. We need to write an ACL that allows packets in if the first 24 bits match 196.17.100.0 exactly, and does not allow any other packets regardless of source IP address.
1st Octet — All bits must match.
2nd Octet — All bits must match.
3rd Octet — All bits must match.
4th Octet — "I don't care"
Resulting Wildcard Mask: 00000000 00000000 00000000 11111111
Use this binary math chart to convert from binary to dotted decimal:
            128  64  32  16   8   4   2   1
1st Octet:    0   0   0   0   0   0   0   0
2nd Octet:    0   0   0   0   0   0   0   0
3rd Octet:    0   0   0   0   0   0   0   0
4th Octet:    1   1   1   1   1   1   1   1
Converted to dotted decimal, the wildcard mask is 0.0.0.255. Watch that on your exam. Don't choose the network mask 255.255.255.0 for an ACL when you mean to use the wildcard mask 0.0.0.255. I grant you that this is an easy wildcard mask to determine without writing everything out. You're going to run into plenty of wildcard masks that aren't as obvious, so practice this method until you're totally comfortable with this process. We also use wildcard masks in
EIGRP and OSPF configurations. Consider a router with the following interfaces: serial0: 172.12.12.12 /28 (or in dotted decimal, 255.255.255.240) serial1: 172.12.12.17 /28 The two interfaces are on different subnetworks. Serial0 is on the 172.12.12.0 /28 subnet, where Serial1 is on the 172.12.12.16 /28 subnet. If we wanted to run OSPF on serial0 but not serial1, using a wildcard mask makes this possible.
The wildcard mask will require the first 28 bits to match 172.12.12.0; the mask doesn't care what the last 4 bits are.
1st Octet: All bits must match.
2nd Octet: All bits must match.
3rd Octet: All bits must match.
4th Octet: First four bits must match.
Resulting Wildcard Mask: 00000000 00000000 00000000 00001111
Converted to dotted decimal, the wildcard mask is 0.0.0.15. Before we dive in, here's a quick look at both of these wildcard masks in actual configurations, and then let's tackle and conquer OSPF!
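This is a minimal sketch of both uses. The router name, ACL number, and the OSPF process and area numbers are my own picks for illustration; they aren't taken from the earlier examples:
! Standard ACL matching any source address on 196.17.100.0 /24, applied inbound on Ethernet0
R1(config)#access-list 5 permit 196.17.100.0 0.0.0.255
R1(config)#interface ethernet0
R1(config-if)#ip access-group 5 in
R1(config-if)#exit
! OSPF enabled only on the 172.12.12.0 /28 subnet (serial0), thanks to the 0.0.0.15 wildcard mask
R1(config)#router ospf 1
R1(config-router)#network 172.12.12.0 0.0.0.15 area 0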
OSPF And Link-State Protocols Link-State Protocol Concepts A major drawback of distance vector protocols is their transmission of full routing tables far too often. When a RIP router sends a routing update packet, that packet contains every single RIP route that router has!
This takes up valuable bandwidth and puts an unnecessary drain on the receiving router’s CPU. Sending full routing updates on a regular basis is unnecessary. You’ll see very few networks that have a change in their topology every 30 seconds, but that’s how often a RIP-enabled interface will send a full routing update. Another major difference is the form of a distance vector protocol update as opposed to a link state update.
At the end of the Static Routing section, a RIP debug showed us that routes and metrics themselves are in the RIP routing updates. Link state protocols do not exchange routes and metrics. Link-state protocols exchange just that — the state of their links, and the cost associated with those links. (OSPF refers to its metric as cost, a term we’ll revisit later in this section.) As these Link State Advertisements (LSAs) arrive
from OSPF neighbors, the router places them into a Link State Database (LSDB). Later, the router performs a series of computations against the LSDB contents, giving the router a complete picture of the network. This series of computations is known as the Shortest Path First (SPF) algorithm, also referred to as the Dijkstra algorithm. You can see the LSDB with show ip ospf database. This is a very small database for OSPF!
Technically, everything the router needs to build a routing table is sitting in that database, but I wouldn't want to figure out the routes by hand. Luckily, the SPF algorithm will do the dirty work for us and leave us with a routing table that's much easier on the eyes. This exchange of LSAs between neighbors helps bring about one major advantage of link state protocols — all routers in the network will have a similar view of the overall network. In comparison to RIP updates (every 30 seconds!), OSPF LSAs
aren’t sent out all that often. They’re flooded when there’s an actual change in the network, and each LSA is refreshed every 30 minutes. Before any LSA exchange can begin, a neighbor relationship must be formed. Neighbors must be discovered and then an adjacency is formed with that neighbor, after which LSAs will be exchanged. More about that after we discuss the DR, BDR, and a few things that can mess up our adjacencies! The Designated Router And
Backup Designated Router If all routers in an OSPF network had to form adjacencies with every other router, and continued to exchange LSAs with every other router, a large amount of bandwidth would be used any time a router flooded a network topology change. In short, that would be an inefficient design and a real waste of network resources. Most OSPF segments will elect a designated router and a backup designated router to
handle network change notifications. The designated router is the router that will receive the LSA regarding the network change from the router that detected the change. The DR will then flood the LSA indicating the network change to all non-DR and non-BDR routers. Routers that are neither the DR nor the BDR for a given network segment are indicated in show ip ospf neighbor as DROTHERS, as you’ll see shortly. Instead of having every router
flooding the network with LSAs after a network change, the change notification is sent straight to the DR and BDR, and the DR then floods the network with the change. The update is sent by the router detecting the change to the DR and BDR via 224.0.0.6…..
.. and the DR then floods the change to 224.0.0.5, the same address to which Hello packets are sent.
If the DR fails, the backup designated router (BDR) takes its place. The BDR is promoted to DR and another election is held, this one to elect a new BDR. That’s why those network changes are sent to both the
DR and BDR — that way, the BDR is ready to step into the DR role at a moment’s notice. How The Dijkstra Algorithm Assists With Loop Prevention The Dijkstra Algorithm (also known as the SPF algorithm) recalculates network changes so quickly that routing loops literally have no time to form. The routers run the SPF Algorithm immediately after learning of any network change, and new routes are determined almost immediately.
Hello Packets: The “Heartbeat” Of OSPF Hello packets perform two main tasks in OSPF, both vital: OSPF Hellos allow neighbors to dynamically discover each other OSPF Hellos allow the neighbors to remind each other that they are still there, which means they’re still neighbors! OSPF-enabled interfaces send hello packets at regularly
scheduled intervals. The default intervals are 10 seconds on a broadcast segment such as Ethernet and 30 seconds for non-broadcast links such as Serial links. OSPF Hellos have a destination IP address of 224.0.0.5, an address from the reserved Class D range of multicast addresses (224.0.0.0 — 239.255.255.255).
OSPF neighbor relationships are just like neighbor relationships between people. As human beings, we know that just because someone moves in next door and says “Hello!”, it doesn’t mean that we’re going to be true neighbors with that person. Maybe they play their music too loud, have noisy parties, or don’t mow their lawn.
OSPF routers don’t care how loud the potential neighbor is, but potential OSPF neighbors must agree on some important values before they actually become neighbors. I’m going to show you those values now and I’d definitely have them down cold for the exam and the real world. Mismatches regarding the following values between potential neighbors are the #1 reason OSPF adjacencies do not form as expected. Troubleshooting OSPF adjacencies is usually simple —
you just have to know where to look. We’ll assume an Ethernet link between the routers in question, but potential OSPF neighbors must agree on the following values regardless of the link type. Ordinarily there will be a switch between these two routers, but for clarity’s sake I have left that out. Neighbor Value #1 & 2: Subnet Number And Mask Simple enough — if the routers are not on the same subnet
and using the same mask, they will not become neighbors.
There’s no problem with these routers pinging each other: R2#ping 172.12.23.3
Type escape sequence to abort. Sending 5, 100-byte ICMP Echos Success rate is 100 percent (5 R3#ping 172.12.23.2
Type escape sequence to abort. Sending 5, 100-byte ICMP Echos Success rate is 100 percent (5
But will they become OSPF neighbors? We’ll examine the network command in more detail throughout this section, and here’s what the network statements will look like. OSPF network statements use wildcard masks, not subnet or network masks.
R2(config)#router ospf 1 R2(config-router)#network 172. R3(config)#router ospf 1 R3(config-router)#network 172.
A few minutes after entering that configuration, I ran show ip ospf neighbor on R2 and saw… nothing. R2#show ip ospf neighbor R2#
When you run a show command and you’re shown nothing, there’s nothing to show you! There’s no OSPF adjacency between R2 and R3. To find out why, run debug ip ospf adj.
R2#debug ip ospf adj OSPF adjacency events debuggin
R2# 00:22:29: OSPF: Rcv hello from 172.12.23.3 00:22:29: OSPF: Mismatched hel 00:22:29: Dead R 40 C 40, Hell
I love this debug! It shows us immediately that the problem is “mismatched hello parameters from 172.12.23.3”, and then lists the parameters in question. “Dead” and “Hello” match up, but the mask is different. That’s the problem right there. Since we’re on R2, we’ll change the E0 mask to 255.255.255.128, change the OSPF network command, and
see what happens. I’ll remove the previous network command by repeating it with the word no in front of the entire command.
R2(config)#int e0 R2(config-if)#ip address 172.1
R2(config)#router ospf 1 R2(config-router)#no network 1
R2(config-router)#network 172.
Let’s run show ip ospf neighbor to see if we have an adjacency: R2#show ip ospf nei
Neighbor ID      Pri   State
172.12.23.3        1   FULL/DR
We do! Let’s now switch focus to the other two values you saw in that debug command — the Hello and Dead timers. Neighbor Value #3 & 4: The Hello And Dead Timers These timers have vastly different roles, but they are bound together in one very important way. The Hello timer defines how often OSPF Hello packets will
be multicast to 224.0.0.5, while the Dead timer is how long an OSPF router will wait to hear a Hello from an existing neighbor. When the Dead timer expires, the adjacency is dropped! Note in the previous example that show ip ospf neighbor shows the dead time for each neighbor. The default dead time for OSPF is four times the hello time, which makes it 40 seconds for Ethernet links and 120 seconds for non-broadcast links. The OSPF dead time adjusts dynamically if the hello time is
changed. If you change the hello time to 15 seconds on an Ethernet interface, the dead time will then be 60 seconds. Let’s see that in action. The command show ip ospf interface will show us a wealth of information, including the Hello and Dead timer values for a given interface. Given the defaults mentioned earlier, what timers should we expect to see on the Ethernet interface?
R2#show ip ospf interface Ethernet0 is up, line protocol
Internet Address 172.12.23.2/ Process ID 1, Router ID 172.1 Transmit Delay is 1 sec, Stat Designated Router (ID) 172.12 Backup Designated router (ID) Timer intervals configured, H Hello due in 00:00:05 Neighbor Count is 1, Adjacent Adjacent with neighbor 172.12 Suppress hello for 0 neighbor
OSPF broadcast interfaces have defaults of 10 seconds for the Hello timer and 40 for the Dead timer (four times the Hello timer). What happens if we change the Hello timer to 15 seconds with the interface-level command ip ospf hello-interval?
R2(config)#interface ethernet0
R2(config-if)#ip ospf hello-interval ?
  Seconds
R2(config-if)#ip ospf hello-interval 15
R2#show ip ospf interface ethe Ethernet0 is up, line protocol Internet Address 172.12.23.2/ Process ID 1, Router ID 172.1 Designated Router (ID) 172.12 No backup designated router o Timer intervals configured, H Neighbor Count is 0, Adjacent Suppress hello for 0 neighbor
Two things have happened, one that we knew about and another we should have suspected:
The Hello and Dead timers both changed We lost the adjacency to R3, indicated by the adjacent neighbor count falling to zero show ip ospf neighbor verifies no OSPF neighbors on R2. What happened? R2#show ip ospf neighbor R2#
I’m sure you already know, but let’s run debug ip ospf adj and find out for sure!
R2#debug ip ospf adj OSPF adjacency events debuggin R2# 00:54:19: OSPF: Rcv hello from 172.12.23.3 00:54:19: OSPF: Mismatched hel 00:54:19: Dead R 40 C 60, Hell 255.255.255.128
We again have mismatched hello parameters, but this time it’s the Hello and Dead timer mismatch that brought the adjacency down. We’ll change the Hello timer on R2 back to its default of 10 seconds by negating the previous command, and see if the
adjacency comes back. Be ready -- we’re going to get quite a bit of output here. I’m showing you all of the output so you can see the DR/BDR election proceed.
R2(config)#int e0
R2(config-if)#no ip ospf hello-interval
R2(config-if)#^Z
R2#
00:56:19: %SYS-5-CONFIG_I: Con
00:56:19: OSPF: Rcv hello from
00:56:19: OSPF: End of hello p
R2#
00:56:27: OSPF: Rcv DBD from 1
00:56:27: OSPF: 2 Way Communic
00:56:27: OSPF: Neighbor chang
00:56:27: OSPF: DR/BDR electio
00:56:27: OSPF: Elect BDR 0.0.
00:56:27: OSPF: Elect DR 172.1
00:56:27: OSPF: Elect BDR 172.
00:56:27: OSPF: Elect DR 172.1
00:56:27: DR: 172.12.23.3 (Id)
R2#00:56:27: OSPF: Send DBD to
00:56:27: OSPF: Set Ethernet0
00:56:27: OSPF: Remember old D
00:56:27: OSPF: NBR Negotiatio
00:56:27: OSPF: Send DBD to 17
00:56:27: OSPF: Rcv DBD from 1
00:56:27: OSPF: Send DBD to 17
00:56:27: OSPF: Database reque
00:56:27: OSPF: sent LS REQ pa
00:56:27: OSPF: Rcv DBD from 1
00:56:27: OSPF: Exchange Done
00:56:27: OSPF: Send DBD to 17
00:56:27: OSPF: Synchronized w
00:56:27: OSPF: Reset old DR o
00:56:27: OSPF: Build router LSA 0x80000009
00:56:29: OSPF: Rcv hello from 172.12.23.3
00:56:29: OSPF: End of hello p
R2#
00:56:39: OSPF: Rcv hello from 172.12.23.3
00:56:39: OSPF: Neighbor chang
00:56:39: OSPF: DR/BDR electio
00:56:39: OSPF: Elect BDR 172.
00:56:39: OSPF: Elect DR 172.1
00:56:39: DR: 172.12.23.3 (Id)
00:56:39: OSPF: End of hello p
Since the Hello and Dead timers again match, the OSPF adjacency comes back up. There’s no need to reset an interface or the router.
Two more things… always verify, and always turn your debugs off! We'll verify the adjacency with show ip ospf neighbor….
R2#show ip ospf neighbor
Neighbor ID      Pri   State
172.12.23.3        1   FULL/DR
… and turn off all debugs with undebug all.
R2#undebug all All possible debugging has bee
I would know those Hello and Dead timers like the back of my hand for both the exam room and working with production networks. Before we start our first OSPF network (and I have a feeling we’ll be practicing some troubleshooting, too!), let’s take a closer look at the Link State Advertisements.
LSA vs. LSU? That might sound like the 2017 SEC Championship Game, but the LSU (Link State Update) actually carries the LSAs we mentioned earlier. Those LSAs are first exchanged between OSPF routers when the adjacency reaches the 2-Way state. We saw the huge output earlier in this section when an adjacency first came up, and the 2-Way state was mentioned:
00:56:27: OSPF: 2 Way Communic
Once the adjacency reaches that state, you can start breathing! Your adjacency is just about finished, and the routers have begun exchanging LSAs. Let's take a detailed look at the LSA and adjacency-related readout from that earlier content. The DBDs continually being sent and received are the aptly-named Database Description packets (also called Database
Descriptor packets on Cisco’s website), which describe the contents of the sending router’s LSDB. These DBD packets do not contain the full LSA, just their headers. Note “state INIT” in the first line, which is the INITial stage of the adjacency.
00:56:27: OSPF: Rcv DBD from 1
When an OSPF router receives a Database Description packet, that takes the adjacency all the way to 2-way, shown on the next line.
00:56:27: OSPF: 2 Way Communic
R2#00:56:27: OSPF: Send DBD to
00:56:27: OSPF: Send DBD to 17
The LSA headers in the Database Description packet allow the receiving router to send a database request for the needed LSAs via LS Request packets.
00:56:27: OSPF: Database reque
00:56:27: OSPF: sent LS REQ pa
00:56:27: OSPF: Rcv DBD from 1
00:56:27: OSPF: Exchange Done
After the exchange of LSAs is done and the LSDBs on the routers are synchronized, the adjacency reaches Full status!
00:56:27: OSPF: Synchronized w
That debug didn't show us every OSPF adjacency state, so here's a full list of states along with a quick description of each. You'll see some of these in show ip ospf neighbor, so we need to
know what's going on in each state.
DOWN: The first OSPF adjacency state. This doesn't mean the interface is down, though. It just means that no Hello packet has been received from that router.
ATTEMPT: You'll only see this in NBMA networks. ATTEMPT means unicast Hello packets have been sent to the neighbor, but no reply has been received.
INIT: A Hello has been received from the remote router, but it didn't contain the local router's OSPF RID. That means this Hello is not serving as an acknowledgement of any Hello packets the local router sent.
2-WAY: This is the "we're almost there!" state. There are two separate actions that can bring an adjacency to 2-way:
The local router receives a Hello with its own RID in there
The local router is in INIT stage of an adjacency and then receives a Database Descriptor packet from that potential neighbor.
EXSTART: The routers negotiate which one will control the exchange of DBD packets. It's the EXchange START. (Get it?)
EXCHANGE: The DBD packets themselves are exchanged.
LOADING: LSAs are requested, received, and loaded into the LSDB.
FULL: Finally, the adjacencies are in place and databases are synched.
The LSA Types
Our Type 1 LSA is also known as the Router LSA, and every router on our OSPF network will generate these. Type 1 LSAs contain pretty much what you'd think — general info about the router, including IP address / mask. This LSA type also contains the router's OSPF RID, and the "Link ID" you see associated with this LSA type in the LSDB will be that very RID.
Type 1 LSAs are flooded across their own area, but never leave that area. The database entries came from the previous lab’s network, except that the 172.23.23.0 /24 network has been removed. As expected, we have three LSAs for Area 0, and the Link IDs are the RIDs for those routers.
R1#show ip ospf database
OSPF Router with ID (1.1.1.1)
Router Link States (Area 0)
Link ID        ADV Router     Age
1.1.1.1        1.1.1.1        319
2.2.2.2        2.2.2.2        319
3.3.3.3        3.3.3.3        79
Type 2 LSAs are Network LSAs, and they identify the DRs and BDRs on our segments. Interestingly, the Link ID for this LSA is the IP address of the DR. Just like Type 1 LSAs, Network LSAs do not leave their area.
Net Link States (Area 0)
Link ID        ADV Router
172.12.123.1   1.1.1.1
There's only one link listed for our NBMA network, since no BDR was elected. Type 3 LSAs, our Summary LSAs, are generated only by Area Border Routers. That's because Type 3 LSAs contain info about other areas that router is connected to, so a non-ABR literally wouldn't have anything to say in a Type 3 LSA. There's a Type 3 LSA from each of the three routers in this LSDB, since each router in the network is an ABR.
Summary Net Link States (Area 0)
Link ID        ADV Router     Age
1.1.1.1        1.1.1.1        67
2.2.2.2        2.2.2.2        1
3.3.3.3        3.3.3.3        186
There are two more LSA types I want you to know, even though they deal with a topic not on the CCNA exam. When a router takes routes from one source (say, a separate routing protocol) and injects them into another routing protocol, that’s route redistribution.
There are two LSA types that deal with route redistribution. Any OSPF router injecting routes into OSPF via route redistribution is an Autonomous System Border Router, and the LSA Type 4 lets other routers know where that ASBR is. LSA Type 5s contain the actual info
regarding routes injected into OSPF.
OSPF Areas And Hub-And-Spoke Networks
OSPF is commonly configured on hub-and-spoke networks, and that's exactly the one we're going to use here. R2 and R3 are connected via an Ethernet segment as well; we'll configure that after taking care of the hub-and-spoke network. Here are the network numbers, with each router's number acting as the last octet for all subnets on that router.
Frame Relay network: 172.12.123.0 /24 Ethernet segment : 172.23.23.0 /24 In turn, each router is using a different kind of interface on the Frame Relay network. Please note that this is not a typical real-world network. I’m using this config to illustrate several important OSPF concepts on live equipment. R1 is using Serial0, the physical interface
R2 is using Serial0.123, a multipoint subinterface R3 is using Serial0.31, a point-to-point subinterface Each router has a loopback with its own number for each octet. Each loopback has a subnet mask of 255.255.255.255 (a host mask).
Area 0 is the backbone area of OSPF. Every non-backbone area must contain at least one router that also has an interface in Area 0 and this topology meets that
requirement. Before we dive into the lab, let’s chat about OSPF areas. As you go through OSPF in your CCNA and CCNP studies, and you’re introduced to the different area types we have available, and the operation and rules for each one, you’re going to wonder why we don’t just chuck all our routers into one big Area 0 and just be done with it!
Excellent question, and there are several excellent reasons for using areas. This is where I usually say “It helps limit the impact of network changes”, which sounds great, but what exactly does that mean? Glad you asked! If we just leave all of our routers in one area, the databases on each router are going to be pretty darn large. A relative term, certainly, but this
leads to several problems. Any network change ends with every single router having to run the SPF algorithm. Might not sound like much, but it’s generally unnecessary, and it’s a hit to the CPU. Pair that with the bigger-than-it-needs-to-be database sucking up valuable memory, and the helpfulness of areas becomes clear. In turn, our OSPF areas define OSPF router types. These types
are not all mutually exclusive — a router can be more than one. Be ready to identify these on your exam. Internal router: All interfaces in one, nonbackbone area. Backbone router: All interfaces in the backbone area. Area Border Router
(ABR): A router with at least one interface in Area 0 and at least one other interface in another area. Autonomous System Border Router (ASBR): A router that is performing route redistribution. Redistribution into OSPF, that is! Route redistribution itself is not on your CCNA exam, but as a bonus I’ll perform some quick redistribution at the end of our
lab and you can see how to verify that a router's acting as an ASBR. Now let's hit the lab work! You'll see that the OSPF configuration on the Ethernet segment is very straightforward — it'll literally take only one command on each router on the segment — but a hub-and-spoke OSPF deployment must take several factors into account. The first is that the spoke routers must be prevented from ever becoming the DR or BDR. We'll do that
with the ip ospf priority command on R2 and R3. The default priority of an OSPFenabled interface is 1. The interface with the highest priority becomes the DR, and the interface with the secondhighest priority will become the BDR. With this topology, it’s not enough here to make R1 the DR. We want to prevent R2 or R3 from ever becoming the DR or BDR on the hub-and-spoke segment, even if R1 is reloaded. We’ll do so by setting the appropriate priorities to
zero.
R2(config)#int s0.123 R2(config-subif)#ip ospf prior R3(config)#int s0.31 R3(config-subif)#ip ospf prior
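Those commands are clipped at the margin; written out in full, each one simply sets the OSPF priority to zero on the spoke's subinterface:
R2(config)#interface serial0.123
R2(config-subif)#ip ospf priority 0
R3(config)#interface serial0.31
R3(config-subif)#ip ospf priority 0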
Now we’ll go to R1 and begin the configuration with the router ospf command. Note that the number following “router ospf” in the command is the OSPF process number. OSPF can run multiple processes on one router, and the links are not advertised from one process to another
unless we specifically configure OSPF to do so. OSPF process numbers are locally significant only and do not have to be agreed upon by potential neighbors. Most networks keep it simple by using the same process number, so that’s what we’ll do here. Keep in mind that this is done for consistency’s sake and is not a necessary part of a successful OSPF deployment. R1(config)#router ospf ? Process ID
R1(config)#router ospf 1
On R1, we want to enable OSPF on the serial interface (172.12.123.1) and the loopback (1.1.1.1). Where EIGRP configurations consider the wildcard mask optional, OSPF requires it; you cannot simply enter the network number with OSPF. After the wildcard mask, you must enter the Area number of the interface as well. Since we gave the loopback interface a host mask of /32,
we’ll assign it a wildcard mask of 0.0.0.0. We’ll use a wildcard mask of 0.0.0.255 for the other network. Once we’re done on R1, we’ll go to R2 and R3 and enter the appropriate network statements.
R1(config)#router ospf 1 R1(config-router)#network 172. R1(config-router)#network 1.1.
R2(config)#router ospf 1 R2(config-router)#network 172. R2(config-router)#network 2.2.
R3(config)#router ospf 1 R3(config-router)#network 172.
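Those network statements are also clipped at the margin. Based on the addressing and area design described in this section, they would look something like the sketch below. The loopback area numbers on R1 and R2 are my assumption (matching each router's number, as the lab's later output suggests for R3):
R1(config)#router ospf 1
R1(config-router)#network 172.12.123.0 0.0.0.255 area 0
R1(config-router)#network 1.1.1.1 0.0.0.0 area 1
R2(config)#router ospf 1
R2(config-router)#network 172.12.123.0 0.0.0.255 area 0
R2(config-router)#network 2.2.2.2 0.0.0.0 area 2
R3(config)#router ospf 1
R3(config-router)#network 172.12.123.0 0.0.0.255 area 0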
Let’s check the adjacencies on R1. R1#show ip ospf nei R1#
No neighbors = big problem. To get OSPF adjacencies up and running on a hub-and-spoke, you’ve got to use the neighbor command on the hub to indicate the IP addresses of the remote neighbors-to-be.
R1(config)#router ospf 1 R1(config-router)#neighbor 172 R1(config-router)#neighbor 172
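In full, those neighbor statements simply point at the spokes' Frame Relay addresses (following the "router number as the last octet" scheme described earlier):
R1(config)#router ospf 1
R1(config-router)#neighbor 172.12.123.2
R1(config-router)#neighbor 172.12.123.3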
About 30 seconds later, we get this message from the console:
00:05:35: %OSPF-5-ADJCHG: Proc
One down, one to go! Problem is, there is no second message. We’ve got another problem preventing the adjacency between R1 and R3 from forming. I’ll take this opportunity to introduce you to the show ip ospf interface command.
R1#show ip ospf interface seri Serial0 is up, line protocol i Internet Address 172.12.123.1
Process ID 1, Router ID 1.1.1 Transmit Delay is 1 sec, Stat Designated Router (ID) 1.1.1. No backup designated router o Timer intervals configured, Hello due in 00:00:23 Index 1/1, flood queue length Next 0x0(0)/0x0(0) Last flood scan length is 2, Last flood scan time is 0 mse Neighbor Count is 1, Adjacent Adjacent with neighbor 2.2.2. Suppress hello for 0 neighbor
R3#show ip ospf interface seri Serial0.31 is up, line protoco Internet Address 172.12.123.3 Process ID 1, Router ID 3.3.3 Transmit Delay is 1 sec, Stat Timer intervals configured, Hello due in 00:00:04
Index 1/1, flood queue length Next 0x0(0)/0x0(0) Last flood scan length is 0, Last flood scan time is 0 mse Neighbor Count is 0, Adjacent Suppress hello for 0 neighbor
See the problem? The Hello and Dead timers don’t match. When you configure OSPF on a point-to-point link, the interface naturally defaults to an OSPF point-to-point network… and the timers on that network type are 10 and 40, respectively. We’ve got to fix that before an adjacency can form. We have two options:
Use the ip ospf hello-interval command to change the hello timer on R3 (if we change it on R1, we'll lose the adjacency we already have with R2)
Use the ip ospf network command to change R3's OSPF network type on that subinterface to non-broadcast, which will make it match R1's hello and dead timers
In this case, we'll use the ip ospf network command.
R3(config)#int s0.31 R3(config-subif)#ip ospf netwo broadcast Specify OSPF broadca point-to-multipoint Specify OS
point-to-point Specify OSPF po
R3(config-subif)#ip ospf netwo
We’ll verify the changes with show ip ospf interface….
R3#show ip ospf interface seri Serial0.31 is up, line protoco Internet Address 172.12.123.3 Process ID 1, Router ID 3.3.3 Transmit Delay is 1 sec, Stat No designated router on this No backup designated router o Timer intervals configured, H
Hello due in 00:00:25
…. and shortly after that change, the OSPF adjacency between R1 and R3 comes up. R3 sees R1 as the DR, and R1 sees R2 and R3 as neither a DR nor a BDR — in other words, they're "DROTHERS".
R3#show ip ospf nei
Neighbor ID      Pri   State
1.1.1.1            1   FULL/DR
R1#show ip ospf neighbor
Neighbor ID      Pri   State
2.2.2.2            0   FULL/DROTHER
3.3.3.3            0   FULL/DROTHER
Let’s take a closer look at each value from show ip ospf neighbor. Neighbor ID: By default, a router’s OSPF ID is the highest IP address configured on a LOOPBACK interface. This can also be manually configured with the command router-id in OSPF configuration mode, and is usually set with that command instead of leaving the RID selection up to the router.
Pri: Short for “Priority”, this is the OSPF priority of the interface on the remote end of the adjacency. The spoke interfaces were manually set to 0 in the initial configuration to prevent them from becoming the DR or BDR. We could have raised R1’s priority if we wanted to — the maximum value is 255 — but we still have to set the spoke priorities to zero. State: FULL refers to the state of the adjacency. DROTHER means that particular router is neither the DR nor the BDR for
that particular segment. Dead Time: A decrementing timer that resets when a HELLO packet is received from the neighbor. Address: The IP address of the neighbor. Interface: The adjacency was created via this local interface. Now let’s review the original network diagram.
We’ve got the loopbacks in their respective areas, and we know all is well with Area 0. Let’s put the Ethernet interfaces on R2 and R3 into Area 23.
Configuring OSPF On Broadcast Networks After the NBMA lab, you’ll be relieved to know that configuring OSPF on a broadcast segment is pretty much a one-command deal. We’ll use the network command to add that network to the existing OSPF deployment.
R2(config)#router ospf 1 R2(config-router)#network 172.
R3(config)#router ospf 1 R3(config-router)#network 172.
Here's the result:
R2#show ip ospf nei
Neighbor ID      Pri   State
3.3.3.3            1   FULL/DR
1.1.1.1            1   FULL/DR
R3#show ip ospf neighbor
Neighbor ID      Pri   State
2.2.2.2            1   FULL/BDR
1.1.1.1            1   FULL/DR
The adjacency is complete. Let’s take a look at R1’s OSPF routing table. R1#show ip route ospf
2.0.0.0/32 is subnetted, 1 O IA 2.2.2.2 [110/65] via 172. 3.0.0.0/32 is subnetted, 1 O IA 3.3.3.3 [110/65] via 172. 172.23.0.0/24 is subnetted O IA 172.23.23.0 [110/74] via [110/74] via
R1 has two paths to the Ethernet segment connecting R2 and R3. They’re both there because the cost is exactly the same — 74. (The first number in the brackets in the OSPF table is the Administrative Distance; the second number is the OSPF cost to that network.) OSPF assigns a cost to every
OSPF-enabled interface. The interface cost is based on the interface’s speed. In this case, each path goes through an Ethernet interface with a cost of 10 and a Serial interface with a cost of 64, resulting in an overall path cost of 74. Most default OSPF costs are just fine, but there will be times that you’ll need to tweak them. More on that later in this section. Right now, let’s hit the RID! Configuring the OSPF Router ID By default, the OSPF Router ID
(RID) will be the numerically highest IP address of all loopback interfaces configured on the router. In the previous lab, the RID for each router was the IP address on the router’s loopback interface. That’s easy enough to remember, but why use a loopback address for the OSPF RID instead of the physical interfaces? A physical interface can become unavailable in a number of ways — the actual hardware can go bad, the cable attached to the interface can
come loose — but the only way for a loopback interface to be unavailable is for it to be manually deleted or for the entire router to go down. In turn, a loopback interface’s higher level of stability and availability results in fewer SPF recalculations, which results in a more stable network overall. Oddly enough, an interface does not have to be OSPFenabled to have its IP address used as the OSPF RID — it just has to be “up” if it’s a loopback, and physically “up” if it’s a
physical interface. It’s rare to have a router running OSPF that doesn’t have at least one loopback interface, but if there is no loopback, the highest IP address on the router’s physical interfaces will be the RID. You can hardcode the RID with the router-id command.
R1(config-router)#exit
R1(config)#router ospf 1
R1(config-router)#router-id ?
  A.B.C.D  OSPF router-id in IP address format
R1(config-router)#router-id 11.11.11.11
Reload or use "clear ip ospf process" command, for this to take effect
Here’s a rarity, at least with Cisco. For the new RID to take effect, you must either reload the router or clear the OSPF processes. That’s a fancy way of saying “All existing OSPF adjacencies will be torn down.” The router will warn you of this when you run that command.
R1#clear ip ospf process Reset ALL OSPF processes? [no]
Remember — whenever the router’s prompt says “no”, you should think twice before saying yes!
Okay, I’ve thought twice. Let’s say yes!
R1#clear ip ospf process Reset ALL OSPF processes? [no] R1# 00:28:20: OSPF: Interface Loop 00:28:20: OSPF: 1.1.1.1 addres 00:28:20: OSPF: Interface Seri 00:28:20: OSPF: 1.1.1.1 addres 00:28:20: OSPF: Neighbor chang 00:28:20: OSPF: DR/BDR electio 00:28:20: OSPF: Elect BDR 0.0. 00:28:20: OSPF: Elect BDR 0.0. 00:28:20: OSPF: Elect DR 0.0.0
I won’t show you all the output here, since I was still running debug ip ospf adj. Take my
word for it, the existing adjacencies to R2 and R3 were torn down. They came right back up, but this is a command you should definitely think twice about before issuing it in a production network. On R1, we’ll verify the change with show ip ospf.
R1#show ip ospf Routing Process “ospf 1” with
R2 and R3 both now see R1 as having a RID of 11.11.11.11, as verified by show ip ospf neighbor.
R2#show ip ospf neighbor
Neighbor ID      Pri   State
11.11.11.11        1   FULL/DR
R3#show ip ospf neighbor
Neighbor ID      Pri   State
11.11.11.11        1   FULL/DR
And what if OSPF can’t find any IP address to use? Let’s find out on this router with no IP addresses configured: R1(config)#router ospf 1 R1(config)#
03:10:09: %OSPF-4-NORTRID: OSP
Sounds like you better have an IP address ready to go! Default-Information Originate (Always?) One of the benefits of running OSPF is that all of our routers have a similar view of the network. There are times, though, that you may not want all of your routers to have a full routing table. This involves the use of stub and total stub areas, and while the configuration of those areas
is beyond the scope of the CCENT exam, I do want to show you an example of when we might configure such an area. This also helps to illustrate a command that you just might see on your exam!
There’s no reason for the three routers completely in Area 100 to have a full OSPF routing table. For those routers, a default route will do, since
there is only one possible next-hop IP address for any data sent by those three routers. If that central router has a default route that it can advertise to the stub routers, the default-information originate command configured on the hub router will get the job done.
R1(config)#router ospf 1 R1(config-router)#default-info
That’s great, but what if the central router doesn’t have a default route to advertise?
Let’s use IOS Help to look at our options for this command — there’s a very important one here.
R1(config)#router ospf 1 R1(config-router)#default-info always Always advertise d metric OSPF default metr metric-type OSPF metric type route-map Route-map referen
The always option allows the router to advertise a default route without actually having one in its routing table. Without that option, the router must have a default route in its table
in order to advertise one.
R3(config)#router ospf 1 R3(config-router)#default-info always Always advertise d metric OSPF default met metric-type OSPF metric type route-map Route-map refere
R3(config-router)#default-info
You’ll learn much more about the different types of stub areas and their restrictions and requirements in your CCNA and CCNP studies. For now, know the difference between using default-information originate
with and without the always option.
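Since the commands above are clipped at the margin, here's a sketch of the full form of both variations, using the process number from the earlier examples:
R1(config)#router ospf 1
R1(config-router)#default-information originate
! or, if the router has no default route of its own to advertise:
R1(config-router)#default-information originate always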
Tweaking The OSPF Cost
In the past, there was rarely a reason to change the OSPF cost, since this little formula worked just fine:
100,000 / Interface speed in Kbps
The 100,000 in that formula is the default reference bandwidth (100 Mbps, expressed here in Kbps), which we'll be working with throughout this section. Our Ethernet interface runs at 10 Mbps, or 10,000 Kbps. Plug
that 10,000 into the first formula… 100,000 / 10,000 = 10 … and it appears the OSPF cost of an Ethernet interface should be 10. Is it? R2#show ip ospf int e0
Ethernet0 is up, line protocol
Internet Address 172.23.23.2/ Process ID 1, Router ID 2.2.2
Yep!
This formula needed no real tweaking until we started getting interfaces on our routers that were faster than Fast Ethernet. Why was more speed a bad thing for this formula? Fast Ethernet’s bandwidth is exactly 100,000 Kbps, so when OSPF ran the formula… 100,000 / 100,000 = 1 … Fast Ethernet interfaces were assigned an OSPF cost of one. No problem?
Yes problem! Gig Ethernet came along, and 10 Gig Ethernet followed that! Since OSPF doesn’t allow cost to be expressed with a fraction or a number less than one, our Fast Ethernet, Gig Ethernet, 10 Gig Ethernet, and then 100 Gig Ethernet ended up with the exact same OSPF cost. That means that OSPF would have recognized both of these paths between R1 and R2 as having the same cost while the speed of one path is far greater than the other.
Not a good situation for a protocol that considers itself to be an intellectual. In such a situation, you can change the reference bandwidth in that formula with the auto-cost reference-bandwidth command. Note that this command is expressed in Mbps.
R1(config)#router ospf 1
R1(config-router)#? Router configuration commands: area OSPF area parameter auto-cost Calculate OSPF inte (The rest of the OSPF commands R1(config-router)#auto-cost ? reference-bandwidth Use refere R1(config-router)#auto-cost re The reference bandw
Recommended settings:
Highest port speed is 1 Gig Ethernet = Ref. bandwidth 1000 Mbps
Highest port speed is 10 Gig Ethernet = Ref. bandwidth 10000 Mbps
Highest port speed is 100 Gig Ethernet = Ref. bandwidth 100000 Mbps
Each of those scenarios would give your fastest ports an OSPF cost of 1. I don't have to reload the router or clear the OSPF processes to make this command take effect, but I do get an interesting message from the router after entering this command:
R1(config-router)#auto-cost re % OSPF: Reference bandwidth is Please ensure reference bandwi
A darn good idea! By doing so, you'll keep the interface speeds across your network accurate and consistent. Note the immediate change to the cost of our OSPF-enabled Serial interface after changing the reference bandwidth to 10000:
R1#show ip ospf int s0 Serial0 is up, line protocol i Internet Address 172.12.123.1 Process ID 1, Router ID 1.1.1
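The cost line is clipped in that output, but you can work it out yourself. With the reference bandwidth at 10000 Mbps (10,000,000 Kbps) and the serial interface still at its default bandwidth of 1544 Kbps, the new cost is 10,000,000 / 1544 = 6476 (OSPF drops the fraction). Compare that to the default serial cost of 64 we saw earlier, which comes from 100,000 / 1544 = 64.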
Be sure to do your homework and more than a little lab work before breaking this command out in a production environment, and be sure to keep this value exactly the same on all routers in your network. There are two other ways to change OSPF interface costs, and one of them is at the interface level:
R1(config)#int s0
R1(config-if)#ip ospf cost ?
  Cost
R1(config-if)#ip ospf cost 64
R1#show ip ospf int s0 Serial0 is up, line protocol i Internet Address 172.12.123.1 Process ID 1, Router ID 1.1.1
The other method is also an interface-level command, but it’s not OSPF-specific:
R1(config)#int s0 R1(config-if)#? Interface configuration comman access-expression Bu appletalk Ap arp Se autodetect Au backup Mo bandwidth Se
While it would be great if the bandwidth command allowed us to add additional bandwidth at the press of a button, that’s not what this command does. It’s really the interface equivalent of the auto-cost reference-bandwidth command, except the bandwidth command is used by protocols and features other than OSPF. Note the bandwidth command is expressed in Kbps, not Mbps. R1(config-if)#bandwidth ?
Bandwidth in ki
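For example, if the physical circuit really is a T1, you'd tell the router so like this (a quick sketch on the same serial interface):
R1(config-if)#bandwidth 1544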
I'm not a huge fan of the bandwidth option, since it can take more than a few minutes to get the costs right where you want them. The bandwidth command is not OSPF-specific, so you do run the risk of affecting other protocols and services without realizing it. If you have an overriding reason to change one particular interface cost, the ip ospf cost command is the way to go. If you're dealing with the issue of Gig Ethernet, 10 Gig Ethernet, and 100 Gig Ethernet all having the same OSPF cost of 1 with the default reference bandwidth, the auto-cost reference-bandwidth command is the way to go.
Using and Verifying the Passive-Interface Command On rare occasion, you may need to advertise a network while NOT forming adjacencies over that same network. In this lab, R3 is connected to 172.23.23.0 /24 and wants to advertise that route to R1. At the same time, R3 does not want to form an adjacency with any routers on that segment, including R4.
The OSPF passive-interface feature makes this possible! As we start this lab, R1 is learning about that segment from R3, but there is an adjacency between R3 and R4, and that’s what we want to get rid of.
R1#show ip route ospf
     172.23.0.0/24 is subnetted, 1 subnets
O IA    172.23.23.0 [110/74] via
R3#show ip ospf nei
Neighbor ID      Pri   State
4.4.4.4            1   FULL/BDR
1.1.1.1            1   FULL/DR
Let’s make Ethernet0 passive and see what happens. As odd as this will sound, don’t try to make an interface passive at the interface level, because the OSPF passive-interface command is configured at the protocol level:
R3(config)#router ospf 1 R3(config-router)#passive-inte Ethernet IEEE 802.3 Loopback Loopback interface Null Null interface Serial Serial
default Suppress routing upda
R3(config-router)#passive-inte 22:26:21: %OSPF-5-ADJCHG: Proc
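That command is clipped too; written out in full, it's simply the protocol-level command naming the interface from this lab:
R3(config-router)#passive-interface ethernet0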
The adjacency between R3 and R4 goes down in a matter of seconds. Is the route still on R1?
R1#show ip route ospf 172.23.0.0/24 is subnetted, 1 O IA 172.23.23.0 [110/74] via
Yes, it is! Note the time on that entry. The route has only been there for 31 seconds, which
means a new LSA was generated as a result of running passive-interface on R3. The most important thing is that the route is still on R1! You can verify the passiveinterface settings (and a lot of other settings!) with show ip protocols.
R3#show ip protocols Routing Protocol is “ospf 1” Outgoing update filter list f Incoming update filter list f Router ID 3.3.3.3 It is an area border router Number of areas in this route Maximum path: 4
Routing for Networks: 3.3.3.3 0.0.0.0 area 3 172.12.123.0 0.0.0.255 area 0 172.23.23.0 0.0.0.255 area 23 Passive Interface(s): Ethernet0 Routing Information Sources: Gateway Distance Last 4.4.4.4 110 00:0 2.2.2.2 110 00:3 1.1.1.1 110 00:0 3.3.3.3 110 00:0 172.23.23.3 110 13:4 Distance: (default is 110)
Speaking of IP protocols, it’s time for us to master EIGRP!
EIGRP Over the years, EIGRP has been called each of the following: A hybrid of distance vector and link state protocols A super-duper advanced distance vector protocol (okay, maybe just “advanced”) None or both of the above
I personally think the “hybrid” term is the most accurate, since EIGRP does act a little like a distance vector protocol and a little like a link state protocol, and in this section you’ll see those DV and LS behaviors demonstrated. EIGRP also used to be called “Cisco-proprietary”, since Cisco kept EIGRP to itself — other vendors’ routers couldn’t run it. That’s no longer the case, and that’s a big change from the last version of the CCNA exam! Cisco-proprietary or not, EIGRP
brings a lot to the table, as well as major advantages over RIP and IGRP. (IGRP was EIGRP's predecessor, and it's now obsolete and unsupported on current Cisco IOS releases.)
Rapid convergence upon a change in the network, because backup routes ("Feasible Successors") are calculated before they're actually needed due to the loss of a primary route ("Successor")
EIGRP considers the
bandwidth of a link when computing metrics, rather than the less-than-accurate “hop count” metric of RIP. More on that in a few minutes. Offers multiprotocol support (supports IP, IPX, and AppleTalk) Supports Variable-Length Subnet Masking (VLSM) and Classless Inter-Domain Routing (CIDR), where RIPv1 and IGRP did not. Full EIGRP routing tables are exchanged only after
an adjacency is formed. After that, EIGRP updates contain only the routes that have changed, and these updates are sent only when that change occurs. Hello Packets and RTP: The Heartbeat Of EIGRP EIGRP uses Hello packets (sent to multicast address 224.0.0.10) to establish and maintain neighbor relationships. The Reliable Transport Protocol (RTP) is used to handle the transport of
messages between EIGRPenabled routers. RTP allows for the use of sequencing numbers and acknowledgements. Unlike TCP, RTP doesn’t always use them and doesn’t always ask for acks. For example, a multicast Hello packet on a broadcast segment will not require acks, where routing updates will require them. EIGRP uses autonomous systems to identify routers that will belong to the same logical group. EIGRP routers that exist
in separate autonomous systems will not exchange routes. They won’t even become neighbors to begin with! For an EIGRP neighbor relationship to be established, routers must receive Hello packets from each other, be on the same subnet as the potential neighbor, and the Autonomous System number must match. EIGRP authentication is not part of the CCNA course, but naturally, if you have that in
place, the password must be agreed upon. Otherwise, what’s the point? As with OSPF, once the neighbor relationship is present, it is the Hello packets that keep it alive. If the Hellos are no longer received by a router, the neighbor relationship will eventually be terminated. Like OSPF, EIGRP has fixed times for sending Hello packets:
Broadcast, point-to-point serial, and high-bandwidth links send EIGRP Hellos every 5 seconds. (Anything over T1 speed is considered a highbandwidth link.) Multipoint links running at T1 speed or less will send Hellos every 60 seconds. There are major differences here between OSPF and EIGRP, though: EIGRP refers to its dead time as “Hold Time”
The EIGRP Hold Time is three times the Hello Interval by default (OSPF’s Dead Time is 4x the Hello Interval by default) EIGRP neighbors do not have to agree on the Hello and Hold Timers I strongly recommend that if you change the EIGRP timers on one router in an AS, change those timers to match on all other routers. You may inadvertently lose adjacencies if you just change them in one
place. The Successor and Feasible Successor EIGRP keeps three tables: the route table, containing the best route(s) to destinations the topology table, where those best routes are also kept, along with valid but less-desirable routes to those same destinations the neighbor table, where info about the neighbors
is kept As an EIGRP-enabled router learns about the network, the router will put the best route to a given destination in its routing table. EIGRP keeps the best routes along with all loop free, valid routes in the topology table. EIGRP actually calculates these backup routes before a failure occurs, making convergence after a failure pretty darn quick. The EIGRP term for the best route is the Successor. Any valid alternate route is referred
to as the Feasible Successor. We’ll see both route types and all three tables in action during our lab work, but first, we need to see how a route becomes a Feasible Successor. What exactly do we mean by a route being “valid but less desirable?” To get the right answer, we have to ask the right question — and in this case, that’s the EIGRP Feasible Successor Question, or Feasible Successor Condition. The EIGRP Feasible Successor Condition:
The router asks itself, “Is the Reported Distance (RD) for this route lower than the Feasible Distance (FD)?” Hmm. Sounds like our question has led to more questions! What the heck is a “Feasible Distance” and a “Reported Distance”? Some of the most convoluted explanations in the history of history have been given for these two terms, and I’m happy to cut through all of that and tell you….
The local router's metric for a path is the Feasible Distance
The next-hop router's metric for the same path is the Reported Distance
Let's take our first look at the EIGRP topology table and use it to see what's going on with the FD and RD.
P 172.23.0.0/16, 2 successors, FD is 2195456
        via 172.12.123.2 (2195456/281600)
        via 172.12.123.3 (2195456/281600)
The first number, 2195456, is
the route’s Feasible Distance. This is the metric of the route from the local router to the destination network. The second number, 281600, is the route’s Reported Distance. This is the metric from the next-hop router to the destination network. In this particular case, the FD of both routes is exactly the same. When that happens, both routes are marked as Successors, and the load to that network will be balanced over those two links. (And yes,
I put it that way for a reason. More on that later!) These distances are also used by EIGRP to determine what routes can be feasible successors. Let’s look at two routes to 3.0.0.0 /8: P 3.0.0.0/8, 1 successors, FD
via 172.12.123.3 (2297856/1282
via 172.12.123.2 (2323456/4096
We only have one successor here, and we know it’s the top route since the FD of that route
is the FD named in the top line (“FD is 2297856”). Can the route through 172.12.123.2 be a Feasible Successor? The RD of that route is 128256, which is less than the 2297856 FD of the Successor, so the route through 172.12.123.2 is indeed a Feasible Successor. By the way, the EIGRP topology table holds only Successor and Feasible Successor routes, so this was a little bit of a cheat. I’d be more than ready to determine Successor and
Feasible Successor routes on your CCNA exam by just being given the metrics of the paths. Let’s use some slightly smaller numbers to walk through an example without using the EIGRP topology table. We’ll assume a successor route and three possible feasible successors.
Successor: FD 5, RD 4
Possible Feasible Successor #1: RD greater than 5
Possible Feasible Successor #2: RD greater than 5
Possible Feasible Successor #3: RD 4
To decide if any of these three
routes could be a feasible successor, just compare the RD of the feasible successor candidate to the FD of the successor. Routes #1 and #2 could not be feasible successors, because their RDs are larger than the FD of the successor. Route #3’s RD of 4 is less than the successor’s FD of 5. As a result, Route #3 will be placed into the EIGRP topology table and marked as a feasible successor. If the successor route goes down, Route #3 will
then be named the successor. It's really just that simple! The EIGRP metrics you'll see in your prep and your exam will obviously be larger than single-digit numbers, but the rules are the same, no matter how large or small the metric. Let's take another look at feasible distance, reported distance, and the feasibility condition in action. You really have to watch these values, or what you think should happen with your network when a successor goes down might not
actually be what will happen.
The Feasibility Condition In Action
R1 has three potential paths to R3. The feasible distances and reported distances for the paths from R1's point of view are:
R1 - R4 - R3: FD 40, RD 20
R1 - R2 - R3: FD 70, RD 20
R1 - R5 - R3: FD 115, RD 75
R1 will place the path through R4 into its routing table. Since that route has the lowest FD, it is the successor. The successor is
also placed into the topology table. R1 will consider the other two routes as potential feasible successors. The Feasibility Condition states that if a potential feasible successor’s RD is less than the successor’s FD, the route is an FS. The RD of the path through R2 is 20; the FD of the successor is 40. The path through R2 is a feasible successor and will be placed into the EIGRP topology table. What about the third path? The
RD of the path through R5 is 75, while the FD of the successor is 40. This indicates to EIGRP that the potential for a routing loop exists, so this route will not be made a feasible successor. If the path through R4 went down, the router would immediately begin using the path through R2, which has already been tagged as a feasible successor. The path through R2 would then be named the successor route, but the path through R5
still would not be a feasible successor, as the path through R5 has an RD of 75 and the new successor (the path through R2) has an FD of 70.
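Here's the same check written out compactly; this is just a restatement of the numbers above, nothing new:

Path through R2: RD 20 < successor FD 40?  Yes -> Feasible Successor
Path through R5: RD 75 < successor FD 40?  No  -> not a Feasible Successor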
What if there is no Feasible Successor?
We love the fact that EIGRP calculates backup routes before they're actually needed. But what if there is no feasible successor in the EIGRP topology table when we need one? If a successor route is lost and there is no feasible successor, the router takes two actions:
The route is put into Active state, making the route unusable.
The router sends DUAL Query packets to EIGRP neighbors, asking them if they have a loop-free path to the destination in question. The neighbors will answer this query with an EIGRP reply packet, letting the querying router know about the valid path, as long as they have one! If the queried neighbors do not have a path to that network, they'll ask their neighbors if they have a path to the
network. This query process continues until a router returns a path to that network, or no router can do so and the query process finally ends.
EIGRP's Major Advantage Over RIP
Consider the following:
If you or I were asked what the optimal path(s) are between R1 and R2, we wouldn’t hesitate — T1 lines run at 1544 kbps, almost thirty times faster than a 56 kbps line, so the extra
"hop" over the T1 paths will hardly matter. EIGRP would agree with us, but RIPv2 would not. RIPv2 only considers hop count as a metric. Therefore, RIPv2 would consider the path R1-R5-R2 the best path, and it's nowhere near the best path! Since both EIGRP and OSPF consider the speed of a link in their calculations, we're almost always better off using one of those two protocols for our WANs.
Configuring EIGRP
To enable EIGRP on a particular interface, we’ll use the network command. The use of wildcard masks with the EIGRP network command is optional, but you’ll see them in 99% of real-world EIGRP deployments. Just watch that on the exam — EIGRP and OSPF both use wildcard masks in their network statements, not subnet masks.
R1(config)#router eigrp 100
R1(config-router)#no auto-summary
R1(config-router)#network 172.12.123.0 0.0.0.255

R2(config)#router eigrp 100
R2(config-router)#no auto-summary
R2(config-router)#network 172.12.123.0 0.0.0.255

R3(config)#router eigrp 100
R3(config-router)#no auto-summary
R3(config-router)#network 172.12.123.0 0.0.0.255
Note that I disabled auto-summarization on all three routers. EIGRP has auto-summarization running by default, and usually you're going to disable it even before you enter your network statements. We'll discuss that command in another lab later in this section. You can enter the no auto-summary command after you enter the network statements if you like. With the above wildcard masks, any interfaces in the network 172.12.123.0 /24 will run EIGRP.
A Quick Review of Wildcard Masks
They're really just "reverse subnet masks". For instance, the network and mask 172.12.123.0 255.255.255.0 means that all hosts that begin with 172.12.123 are part of
that network. When you write out the network number and the mask in binary and compare the two, the ones in the subnet mask are "care" bits and the zeroes are "I don't care" bits.
172.12.123.0  = 10101100 00001100 01111011 00000000
255.255.255.0 = 11111111 11111111 11111111 00000000
What do I mean by “care” and “I don’t care”? For a host to be on the 172.12.123.0 /24 network, the host’s address must match every bit where
there is a 1 in the network mask. After that, I don't care! Wildcard masks take the opposite approach. The zeroes are "I care", and the ones are "I don't care". In this example, we want to enable EIGRP on all interfaces whose first three octets are 172.12.123, and after that, we don't care!
172.12.123.0 = 10101100 00001100 01111011 00000000
0.0.0.255    = 00000000 00000000 00000000 11111111
An even quicker comparison of the two mask types:
Subnet masks begin with strings of consecutive 1s
Wildcard masks begin with strings of consecutive 0s
Now let's get back to our EIGRP deployment! A few seconds after configuring the three routers with EIGRP, this console message appears on R1:
R1# 04:09:16: %DUAL-5-NBRCHANGE: I 04:09:19: %DUAL-5-NBRCHANGE: I
172.12.123.2 and 172.12.123.3, have formed adjacencies with R1. Show ip eigrp neighbors gives us the details, and I’ve removed some of the fields so we can pay attention to the really important stuff. R1#show ip eigrp neighbor
IP-EIGRP neighbors for process 100
H   Address         Interface
1   172.12.123.2    Se0
0   172.12.123.3    Se0
(Hold time, Uptime, and other columns removed)
The key values are the IP
addresses of the EIGRP AS 100 neighbors, the interface on which they were discovered, and the Uptime, indicating how long the neighbor relationship has existed. So far, so good. Let’s add some networks! The loopbacks on each router will now be added to EIGRP 100, as well as the Ethernet subnet between R2 and R3. The ethernet segment’s network number is 172.23.23.0 /27, so we get a little more practice with our wildcard
masks! The loopbacks all have their router number for each octet, and each loopback has been configured with a host mask (255.255.255.255 or /32).
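Before we look at those configs, here's a quick worked example of the wildcard masks involved. The statements below are my own illustration of the idea rather than the lab's literal config; the conversion is just 255 minus each octet of the subnet mask:

/27 subnet mask: 255.255.255.224  ->  wildcard 0.0.0.31  (255 - 224 = 31 in the last octet)
/32 host mask:   255.255.255.255  ->  wildcard 0.0.0.0

R2(config)#router eigrp 100
R2(config-router)#network 172.23.23.0 0.0.0.31
R2(config-router)#network 2.2.2.2 0.0.0.0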
The additional configurations: R1(config)#router eigrp 100
R1(config-router)#network 1.1.
R2(config)#router eigrp 100 R2(config-router)#network 172. R2(config-router)#network 2.2.
R3(config)#router eigrp 100 R3(config-router)#network 172. R3(config-router)#network 3.3.
We’ll run show ip route eigrp 100 at each router to ensure that each is seeing the other routers’ loopbacks, and that R1 is seeing the Ethernet segment via EIGRP. R2 and R3 are both directly connected to the 172.23.23.0 /27 network, so there will be no EIGRP route to
that network in their EIGRP tables. The Successor routes appear in two of our three EIGRP tables. The EIGRP Route table, seen with show ip route eigrp, contains only the Successor routes. R1 has two Successor routes for 172.23.23.0 /27.
R1#show ip route eigrp
     2.0.0.0/32 is subnetted, 1 subnets
D       2.2.2.2 [90/2297856] via 172.12.123.2
     3.0.0.0/32 is subnetted, 1 subnets
D       3.3.3.3 [90/2297856] via 172.12.123.3
     172.23.0.0/27 is subnetted, 1 subnets
D       172.23.23.0 [90/2195456] via 172.12.123.3
                    [90/2195456] via 172.12.123.2

R2#show ip route eigrp
     1.0.0.0/32 is subnetted, 1 subnets
D       1.1.1.1 [90/2297856] via 172.12.123.1
     3.0.0.0/32 is subnetted, 1 subnets
D       3.3.3.3 [90/409600] via 172.23.23.3

R3#show ip route eigrp
     1.0.0.0/32 is subnetted, 1 subnets
D       1.1.1.1 [90/2297856] via 172.12.123.1
     2.0.0.0/32 is subnetted, 1 subnets
D       2.2.2.2 [90/409600] via 172.23.23.2
As always, the first number in the brackets is the protocol’s Administrative Distance. The second number is the EIGRP metric for that route. Each router sees the other routers’ loopbacks, and can
ping them (ping results not shown). R1 can not only ping the Ethernet interfaces of R2 and R3, but has two routes to that subnet in its routing table. Here, EIGRP is performing equal-cost load balancing. The metric for the route is 2195456 for both routes, so data flows going from R1 to the 172.23.23.0 /27 network will be balanced over the two links. To see the Successor and Feasible Successor routes in EIGRP, run show ip eigrp topology. On R1, two
successors for the route 172.23.23.0/27 exist, so both are placed into the routing table as seen previously. There are also two routes for destinations 2.2.2.2/32 and 3.3.3.3/32, but those have not been placed into the EIGRP routing table. Why?
R1#show ip eigrp topology
IP-EIGRP Topology Table for AS 100
Codes: P - Passive, A - Active, U - Update, Q - Query, R - Reply

P 3.3.3.3/32, 1 successors, FD is 2297856
        via 172.12.123.3 (2297856/128256)
        via 172.12.123.2 (2323456/409600)
P 2.2.2.2/32, 1 successors, FD is 2297856
        via 172.12.123.2 (2297856/128256)
        via 172.12.123.3 (2323456/409600)
P 1.1.1.1/32, 1 successors, FD is 128256
        via Connected, Loopback0
P 172.23.23.0/27, 2 successors, FD is 2195456
        via 172.12.123.3 (2195456/281600)
        via 172.12.123.2 (2195456/281600)
P 172.12.123.0/24, 1 successors, FD is 2169856
        via Connected, Serial0
R1 has two routes to 2.2.2.2/32 and 3.3.3.3/32 in its Topology table…
P 3.3.3.3/32, 1 successors, FD is 2297856
        via 172.12.123.3 (2297856/128256)
        via 172.12.123.2 (2323456/409600)
P 2.2.2.2/32, 1 successors, FD is 2297856
        via 172.12.123.2 (2297856/128256)
        via 172.12.123.3 (2323456/409600)
…. but the metrics are unequal, so only the best path (the Successor) is placed into the EIGRP Route table.
R1#show ip route eigrp
     2.0.0.0/32 is subnetted, 1 subnets
D       2.2.2.2 [90/2297856] via 172.12.123.2
     3.0.0.0/32 is subnetted, 1 subnets
D       3.3.3.3 [90/2297856] via 172.12.123.3
     172.23.0.0/27 is subnetted, 1 subnets
D       172.23.23.0 [90/2195456] via 172.12.123.3
                    [90/2195456] via 172.12.123.2
The metrics for those routes are very close, so close that it's a good idea for us to use both of them for load balancing. We can use the variance command here to configure unequal-cost load balancing.
Equal-cost and Unequal-cost Load Balancing
EIGRP performs equal-cost load balancing over a maximum of four paths by default, as verified by show ip protocols. I've removed the non-EIGRP fields from this output.
R1#show ip protocols
Routing Protocol is "eigrp 100"
  EIGRP maximum hopcount 100
  EIGRP maximum metric variance 1
  Redistributing: eigrp 100
  Automatic network summarization is not in effect
  Maximum path: 4
  Distance: internal 90 external 170
If I hadn't mentioned how many paths we can use for equal-cost load balancing by default, you may have missed it, and it's easy to do so! The number next to "maximum path" is the maximum number of paths.
You can change that value with the maximum-paths command. I’ve seen different router models and different IOSes give different ranges for this command, anywhere from 6 to 32 paths. This particular router gives us 8:
R1(config)#router eigrp 100
R1(config-router)#maximum-paths ?
  <1-8>   Number of paths
Another router in my rack offers a different range:
R7(config-router)#maximum-paths ?
  <1-32>  Number of paths
Don’t worry about which router models give a certain number
of maximum links - just know how to change the default and how to verify the change ("show ip protocols"). There's a very important value in that show ip protocols output that not only enables unequal-cost load balancing, but determines the degree to which that balancing will be enabled:
EIGRP maximum metric variance 1
The variance command is simply a multiplier. The router will multiply the Feasible
Distance by this value. Any feasible successor with a metric less than that new value will be entered into the routing table. This is one of those things that sounds ridiculously complicated when you read it, but when you see it in action, it makes a lot of sense. Consider the path from R1 to R2’s loopback in the previous tables. The primary route has a metric of 2297856; the other route has a metric of 2323456. By default, the second route will serve only as a backup and
will not carry packets unless the primary goes down. By configuring variance 2 in R1’s EIGRP process, the process multiplies the metric of the best route (2297856) by the variance value: 2297856 x 2 = 4595712 Any feasible successor with a metric less than 4595712 will now participate in unequal-cost load sharing. R1’s feasible successor to
2.2.2.2 has a metric of 2323456, so it qualifies! After changing the variance value to 2 (by default, it’s 1) and clearing the routing table, show ip route eigrp 100 verifies that two valid routes to both R2’s and R3’s loopbacks appear in the EIGRP routing table.
R1(config)#router eigrp 100
R1(config-router)#variance ?
  <1-128>  Metric variance multiplier
R1(config-router)#variance 2
R1#clear ip route *
(clears the routing table of all routes)
R1#show ip route eigrp
     2.0.0.0/32 is subnetted, 1 subnets
D       2.2.2.2 [90/2297856] via 172.12.123.2
                [90/2323456] via 172.12.123.3
     3.0.0.0/32 is subnetted, 1 subnets
D       3.3.3.3 [90/2297856] via 172.12.123.3
                [90/2323456] via 172.12.123.2
     172.23.0.0/27 is subnetted, 1 subnets
D       172.23.23.0 [90/2195456] via 172.12.123.3
                    [90/2195456] via 172.12.123.2
The variance command does not actually change the metrics; it makes a higher metric acceptable for load sharing.
When you saw that variance range of 1-128, you likely had the same thought I once did: "Why not just always set the variance to 128 on every router every time? That way you can use ALL of your possible routes!"
Why You Don't Set The Variance To 128 Every Time
Catchy section title, eh? Seriously, there are two very good reasons not to just set the variance command to its max every time: The CCNA exam is likely going to want you to use the lowest variance command possible. In both lab and production environments, you’re likely to bring in routes that you really
don’t want to use for load balancing if you use a ridiculously high variance. To illustrate, let’s add a 64 Kbps link between R1 and R3 (172.12.13.0 /24), and then add it to our EIGRP network. The variance is still 2.
R1 now has three paths to Router 2’s loopback: Directly to R2 over the 172.12.123.0 network Through R3 via the 172.12.123.0 network,
then over the Ethernet segment Through R3 via the 172.12.13.0 network, then over the Ethernet segment (the new route) All three routes appear in the topology table:
P 2.2.2.2/32, 1 successors, FD is 2297856
        via 172.12.123.2 (2297856/128256)
        via 172.12.13.3 (40665600/409600)
        via 172.12.123.3 (2323456/409600)
That metric for the new route is
a LOT bigger than the other two. We could bring that third path in for unequal-cost load balancing, but the variance command would have to be raised to 18. After doing so and clearing the route table, all three routes now appear in the EIGRP routing table.
R1(config)#router eigrp 100
R1(config-router)#variance 18
R1#clear ip route *
R1#show ip route eigrp
     2.0.0.0/32 is subnetted, 1 subnets
D       2.2.2.2 [90/2297856] via 172.12.123.2
                [90/40665600] via 172.12.13.3
                [90/2323456] via 172.12.123.3
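Why 18 and not something smaller? Run the same multiplication we did earlier against the new route's metric:

2297856 x 17 = 39,063,552  (less than 40,665,600, so the slow path still wouldn't qualify)
2297856 x 18 = 41,361,408  (greater than 40,665,600, so now it does)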
Bringing a link in for load sharing that is 18 times slower than the other links may lead to some routing issues. EIGRP unequal-cost load balancing is proportional to the metrics by default, so the slower route will be handling a lot less traffic than the fast links, but I'd still keep my eye on it after configuring this. Remember, just because you can do something doesn't mean you should!
Autosummarization - One Default You'll Want To Change
EIGRP and RIP version 2 perform autosummarization by default, which is the act of summarizing network routes when those routes are sent across a network boundary - that is, when they are advertised via an interface that is not part of the network being summarized. In the earlier lab, I disabled autosummarization immediately, but I will not do so here.
To illustrate, we’ll use a huband-spoke network where both spokes have subnets of 20.0.0.0/8. The Serial interfaces are all on the 172.12.123.0 /24 network, with the router number serving as the final octet. All interfaces will be placed into EIGRP AS 100.
Here are the current configurations. I did not configure the auto-summary command -- it’s on by default and will appear in the router configuration. R1:
router eigrp 100
 network 172.12.123.0 0.0.0.255
 auto-summary

R2:
router eigrp 100
 network 20.1.0.0 0.0.255.255
 network 20.2.0.0 0.0.255.255
 network 172.12.0.0
 auto-summary

R3:
router eigrp 100
 network 20.3.0.0 0.0.255.255
 network 20.4.0.0 0.0.255.255
 network 172.12.0.0
 auto-summary
Network 20.0.0.0 is discontiguous — there is no single path to all subnets of the major network number. That’s a problem for routing protocols such as RIPv1 that do not carry subnet mask information. EIGRP and RIPv2 do carry subnet mask information, but the default autosummarization causes trouble with this network. R1 is now receiving the exact same update from both R2 and R3, and it’s for the classful network 20.0.0.0 /8.
Here’s R1’s EIGRP route table. None of the subnets are present in the routing table.
R1#show ip route eigrp
D    20.0.0.0/8 [90/2297856] via 172.12.123.3
                [90/2297856] via 172.12.123.2
Since the metrics for both paths
are exactly the same, equal-cost load balancing for the classful network 20.0.0.0 will be performed, ensuring that at least half of the packets destined for any particular subnet of 20.0.0.0 will be going to the wrong router. If the metric were unequal, a single route for the classful network 20.0.0.0 would be placed into the routing table. All packets for the four subnets will go to the same router, and two of the four subnets will never receive any packets that
were originally intended for them. I’ll ping each loopback IP address from R1 — as you’d guess from that routing table, we’re going to get some really interesting results.
R1#ping 20.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 20.1.1.1, timeout is 2 seconds:
!U!.!
Success rate is 60 percent (3/5)

R1#ping 20.2.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 20.2.2.2, timeout is 2 seconds:
U!.!U
Success rate is 40 percent (2/5)

R1#ping 20.3.3.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 20.3.3.3, timeout is 2 seconds:
U!.!U
Success rate is 40 percent (2/5)

R1#ping 20.4.4.4
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 20.4.4.4, timeout is 2 seconds:
!U!.!
Success rate is 60 percent (3/5)
That is one ugly combination of successful pings, timeouts, and Unreachables — and an ugly success rate as well. This default behavior is easily removed with the no auto-
summary command. When both of the routers sending updates add this command to their EIGRP configuration, the routes will no longer be summarized at the network boundary. One often-ignored side effect of adding no auto-summary to an existing EIGRP configuration — the adjacencies will drop.
R3(config)#router eigrp 100
R3(config-router)#no auto-summary
R3(config-router)#^Z
00:26:09: %DUAL-5-NBRCHANGE: I
After configuring no auto-summary on both R2 and R3 and waiting for the adjacencies to reform, R1 now has a much more accurate routing table.
R1#show ip route eigrp
     20.0.0.0/16 is subnetted, 4 subnets
D       20.4.0.0 [90/2297856] via 172.12.123.3
D       20.1.0.0 [90/2297856] via 172.12.123.2
D       20.2.0.0 [90/2297856] via 172.12.123.2
D       20.3.0.0 [90/2297856] via 172.12.123.3
Bottom line: If you're running EIGRP and you're not seeing the subnets or routes you expect, the first thing I'd check is to see if the no auto-summary command is in the configuration. If it's not, I'd put it there.
Changing the EIGRP Hello Interval and Hold Timers
Let's cool down your Bulldog Brain with a less intensive but important command! Changing the hello intervals and hold times in EIGRP is easy, but the command's a little odd. First, it goes on the interface, not in the general EIGRP config. Next, there are two numbers in the command - be sure you know which number is which! The commands begin with "ip", not "eigrp".
Here we go!
R1(config)#int s0 R1(config-if)#ip hello-interva eigrp Enhanced Interior Gatew
R1(config-if)#ip hello-interva Autonomous system nu
R1(config-if)#ip hello-interva Seconds between hell
R1(config-if)#ip hello-interva
R1(config-if)#ip hello-interva
The first number is the EIGRP AS the interface is part of; the
next number is the new hello interval, entered in seconds. The hold-time command has a similar syntax:
R1(config)#int s0 R1(config-if)#ip hold-time ? eigrp Enhanced Interior Gatew
R1(config-if)#ip hold-time eig Autonomous system nu
R1(config-if)#ip hold-time eig Seconds before neighb R1(config-if)#ip hold-time eig
R1(config-if)#ip hold-time eig
You're not seeing double! The first "100" is the AS number, and the second number is the hold time, again in seconds. EIGRP is a bit of an egomaniac in that it REALLY likes to see its name and AS number in every command. Three important points regarding these values:
Unlike OSPF, EIGRP neighbors do not have to agree on the hello and hold timers
The EIGRP hold time defaults to three times the hello time
If you change the defaults on one router in an AS, you should change them on all the routers, as you may otherwise end up with flapping adjacencies.
Right now, both R1 and R3 have the default broadcast EIGRP interval of 5 seconds between hellos, as verified with show ip eigrp interface static and show ip eigrp interface detail.
R1#show ip eigrp int static
IP-EIGRP interfaces for process 100
  Hello interval is 5 sec
R3#show ip eigrp int detail
IP-EIGRP interfaces for process 100
  Hello interval is 5 sec
If we change the default hello-interval on R1 to 20 seconds, we don't lose the adjacency immediately, since EIGRP neighbors don't have to agree on this value.
R1(config)#router eigrp 100
R1(config)#int fast 0/0
R1(config-if)#ip hello-interval eigrp 100 ?
  <1-65535>  Seconds between hello transmissions
R1(config-if)#ip hello-interval eigrp 100 20
The problem is that R3 still has a hold time of 15 seconds, so it’s going to drop the adjacency eventually…. …and then 5 seconds later when the hello arrives from R1, the adjacency goes back up. And then down again. And then up again — hello, flapping adjacency!
*Aug 5 02:46:51.558: %DUAL-5-N
*Aug 5 02:46:55.156: %DUAL-5-N
*Aug 5 02:47:10.249: %DUAL-5-N
*Aug 5 02:47:14.620: %DUAL-5-N
*Aug 5 02:47:29.657: %DUAL-5-N
*Aug 5 02:47:31.780: %DUAL-5-N
*Aug 5 02:47:46.813: %DUAL-5-N
*Aug 5 02:47:51.264: %DUAL-5-N
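One way to avoid the flapping, sticking with the usual 3:1 ratio, is to set matching values on both ends of the link. The exact numbers here are just an example following that rule:

R1(config)#int fast 0/0
R1(config-if)#ip hello-interval eigrp 100 20
R1(config-if)#ip hold-time eigrp 100 60
! Configure the same two commands on R3's fast 0/0 as well.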
Fine-Tuning EIGRP With The Bandwidth Command
By default, EIGRP will assume a Serial interface is running at 1544 kbps:
R1#show int s0
Serial0 is up, line protocol is up
  Hardware is HD64570
  Internet address is 172.12.123.1/24
  MTU 1500 bytes, BW 1544 Kbit
For many serial interfaces, that’s exactly the case. But when that’s not the case, this assumption can lead to suboptimal routing. In the following network, there are three paths R1 can use to get data to the 172.23.23.0 /24 network.
With equal-cost load balancing in effect by default in EIGRP, R1’s routing table shows all three of these routes in the routing table. R1#show ip route eigrp 100
     172.23.0.0/24 is subnetted, 1 subnets
D       172.23.23.0 [90/2195456] via 172.12.123.2
                    [90/2195456] via 172.12.123.3
                    [90/2195456] via 172.12.13.3
That’s fine IF all three links are actually running at 1544 kbps. What if the direct link between R1 and R3 is only a 56k line, to be used only in case the other two routes go down?
In that case, we’ll use the bandwidth command on both R1 and R3 to allow the routers to calculate the EIGRP metrics
using the more accurate 56 kbps setting for that link.
R1(config)#int s1
R1(config-if)#bandwidth ?
  <1-10000000>  Bandwidth in kilobits
R1(config-if)#bandwidth 56
R3(config)#int s1
R3(config-if)#bandwidth 56
WATCH THAT ENTRY! If you put “56000” behind bandwidth instead of “56”, you’ll make the slow link the most desirable link! No reload or reset is necessary
for the change to take effect, as we see just a few seconds later when we check the EIGRP route table. The route using the S1 interfaces on R1 and R3 is gone.
R1#show ip route eigrp
     172.23.0.0/24 is subnetted, 1 subnets
D       172.23.23.0 [90/2195456] via 172.12.123.2
                    [90/2195456] via 172.12.123.3
If those two routes disappear from the routing table, the R1-R3 direct link route will reappear in the table. To prove it, we'll shut down Serial0 on R1 and
then check the table.
R1#show ip route eigrp
     172.23.0.0/24 is subnetted, 1 subnets
D       172.23.23.0 [90/46251776] via 172.12.13.3
Once the problem with R1’s Serial0 interface is resolved and the EIGRP adjacencies formed over that interface reform, the better routes reappear in the table. The only issue with using this method to change EIGRP route metrics is that the bandwidth command is not an EIGRPspecific command. Other
important routing functions, particularly Quality Of Service (QOS), use this value. Just watch your other processes after changing the bandwidth value, and if anything odd pops up, it’s likely due to that bandwidth change. Using this method sure beats trying to tweak delay in order to get the desired results. Let’s chat a bit about EIGRP metric calculation and more about the delay option.
EIGRP Route Metric Calculation
"What's your prediction for using the delay metric in order to change an EIGRP route metric?"
"PAIN!!!!"
OSPF costs are easy to predict and relatively easy to work with. EIGRP? Now that's another story. All of a sudden we're working with seven- and eight-digit metrics, and that's just in a lab environment. How does EIGRP arrive at such big numbers? There are five values that either can, do, or don't figure into the EIGRP metric. There's been some differing information regarding these over the years, so let's clear up any confusion:
Bandwidth: Used by default in EIGRP route calculation.
Delay: Used by default in EIGRP route calculation.
Load: Not used by default, but can be used in calculation.
Reliability: Not used by default, but can be used in calculation.
MTU: Advertised, but not used in EIGRP route calculation.
You've seen how to use the bandwidth command to change EIGRP route metrics, and it's possible to change them using the delay command. It's ridiculously complicated, though, and that's coming from a guy who likes numbers and complex operations. Here's why I really dislike this
method:
R1(config)#int s0
R1(config-if)#delay ?
  <1-16777215>  Throughput delay (tens of microseconds)
At the interface level, you're dealing with tens of microseconds. Tens of microseconds. I'm not saying it can't be done. I am saying you should strongly consider using the bandwidth command to tweak your EIGRP route metrics.
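For the curious, here's where those seven-digit numbers come from. With the default K values, the EIGRP metric boils down to 256 x (10,000,000 divided by the lowest bandwidth along the path in kbps, plus the sum of the interface delays along the path in tens of microseconds). Assuming the standard default delays (20,000 microseconds for a Serial interface, 1,000 for Ethernet, 5,000 for a loopback), which is my assumption for this lab's interfaces, you can reproduce the metrics from our earlier outputs:

To a neighbor's loopback across one T1 serial link:
  256 x (10,000,000 / 1544 + (20,000 + 5,000) / 10)
  = 256 x (6476 + 2500)      (10,000,000 / 1544 rounds down to 6476)
  = 256 x 8976 = 2,297,856

To the Ethernet segment across one T1 serial link:
  256 x (10,000,000 / 1544 + (20,000 + 1,000) / 10)
  = 256 x (6476 + 2100)
  = 256 x 8576 = 2,195,456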
EIGRP Timers and the EIGRP RID
Like OSPF, we can change the EIGRP hello-time and hold-time (dead time). Unlike OSPF, the commands are a bit long-winded, and the syntax is a little different than many EIGRP commands.
R1(config)#int s0 R1(config-if)#ip eigrp ? % Unrecognized command R1(config-if)#ip hello-interva eigrp Enhanced Interior Gatew
R1(config-if)#ip hello-interva Autonomous system nu
R1(config-if)#ip hello-interva Seconds between hell
R1(config-if)#ip hello-interva R1(config-if)#ip hold-time eig Seconds before neigh
Let’s wrap up our EIGRP discussion with this simple formula for determining the EIGRP RID: Highest IP address configured on a loopback. If no loopbacks, the highest IP address on a physical interface wins. You can hardcode the EIGRP
RID with the eigrp router-id command. Those rules should sound vaguely familiar!
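Here's a quick sketch of hardcoding that RID; the 1.1.1.1 value is just an example matching R1's loopback:

R1(config)#router eigrp 100
R1(config-router)#eigrp router-id 1.1.1.1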
Intro To Network Management and Licensing
There's a lot of "intro" in this section, for two reasons:
The current CCNA exam requires some fundamental knowledge of these topics
You could write an entire book on some of these topics (and some people already have!)
You don't need a full book's worth of knowledge on NetFlow or Cisco IOS Licensing for your CCNA exam, but a solid foundation in these topics will help you get CCNA certified and prepare you for future successes. Let's jump right in!
The Simple Network Management Protocol
Yes, "simple" is a relative term. You'll find SNMP easy to understand, and since there are three different versions of SNMP out there, you just know those version differences will rear their heads on the CCNA exam. You also know that knowing the differences lets you pick those points up, so let's dive in! A description of SNMP's purpose
from Wikipedia: “It is used mostly in network management systems to monitor networkattached devices for conditions that warrant administrative attention.” Translation: When bad (expletive deleted) happens, SNMP will let you know! These components of SNMP combine to make this happen:
The Network Management System (NMS), the actual software that runs on the Manager. The Manager is the device that's been assigned the task of managing a certain group of hardware, the managed devices.
The managed devices can be anything from a printer to a
router, and each of those devices will have a software agent running on it. That agent sends info back to the Manager, either in answer to a request ("poll") from the Manager, or on its own. The agent has access to variables contained in the Management Information Base (MIB). In some cases, the agent can both access the MIB variables and write them (read/write access), and in other cases the agent can only
access them (read-only). SNMP isn't just for notifying you and me of immediate network issues. SNMP is a great way to collect network performance data over time, and using that data, it's possible to spot issues before they become major issues. It's difficult if not impossible for you and me to spot slow increases in CPU usage, but our Network Management System will notice it and bring it to our attention.
When we’ve used debugs in this course, I’ve stressed how important it is to run debugs when things are going well so you can quickly spot issues with debug output when things aren’t going well. It’s just as important to have a picture of your network performance when things are going smoothly, since that picture helps you spot problems in your network when things aren’t going well. In network management, that
picture is a baseline, and SNMP can help you create that baseline. This baseline is vital to spotting usage anomalies early, before they really start reducing network performance.
Ready, Steady, Trap!
SNMP uses some odd terms to describe actions. Perhaps not odd, but we haven't seen them with any other protocol, so let's take a quick and close look!
GET: Sent by Manager to Agent, this is a request for information (“polling”), telling the Agent to send the value of a variable or set of variables.
SET: Sent by Manager to Agent, this is a request to actively change the value of a variable or set of variables.
TRAP: Sent by Agent to Manager, this is basically a message saying “I can’t wait for you to ask me, I’ve got to tell you about this NOW!” As with most urgent messages in networking and in life, it’s a notification that something bad has happened. That can be anything from a link going down to reporting an unsuccessful authentication attempt.
This Isn't A History Lesson
Even though SNMPv1 has really been gone from production networks for quite a while, it still turns up in just about any SNMP discussion. The CCNA concerns itself only with SNMP versions 2c and 3, so we'll (thankfully) do the same!
A quick historical note: There was an SNMP v2, and it used a security setup that was actually considered by many as way too complex. Without going into details, it must have been way too complex, because SNMP v2C overcompensates for that! The biggest problem with SNMP v2C: Security is poor to the point of being non-existent. This version uses community strings, a fancy way of saying “clear-text passwords”. The strings could be set to allow
two kinds of access to the MIB variables, Read-Only (RO) and Read-Write (RW). Here’s the beginning of a typical SNMP v2C configuration, with the other 24 SNMP options edited out of IOS Help (we’ll save those for future studies!):
R1(config)#snmp-server ?
  chassis-id  String to uniquely identify this chassis
  community   Enable SNMP; set community string and access privs

R1(config)#snmp-server community ?
  WORD  SNMP community string

R1(config)#snmp-server community <string> ?
  <1-99>       Std IP accesslist allowing access with this community string
  <1300-1999>  Expanded IP access-list allowing access with this community string
  WORD         Access-list name
  ipv6         Specify IPv6 Named Access-List
  ro           Read-only access with this community string
  rw           Read-write access with this community string
  view         Restrict this community to a named MIB view
About the only security we have with that version is the ability to limit SNMP access by ACL. The clear-text password is a major risk. Thankfully, Version 3 is a huge step forward security-wise,
offering several features we'll see discussed in the VPN section:
Encryption: Scrambling the packet contents so they're protected from unauthorized eyes.
Message Integrity: Using MD5 (Message-Digest 5) or Secure Hash Algorithm (SHA) hashes to make sure the message wasn't altered in any fashion during transmission.
Authentication: Making sure the source of the message is a trusted, valid source.
Origin Authentication: Making sure that the source of the data is who they say they are. SNMP v2C offers one security model and level, so we’re pretty much stuck with that one unless we go with SNMP V3! V2C: The level is noAuthNoPriv, using the community string for authentication. V3: Lowest level is also noAuthNoPriv, which also uses a community string for authentication.
Next level up is authNoPriv, which uses MD5 or SHA for authentication. There is no encryption at this level. The highest V3 level is authPriv, which offers a choice between MD5 and SHA for authentication and uses the Data Encryption Standard (DES) for encryption. While you may see questions on both of these SNMP versions on your exam, SNMP v3 is recognized as the only current standard version of the protocol by the IETF. Previous versions
are seen as “Historic” by the IETF, a really nice way of saying “outdated”.
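To make those SNMPv3 levels a little more concrete, here's a minimal authPriv sketch. The group name, user name, and passwords are my own examples, and the exact priv keywords (des, 3des, aes) vary a bit between IOS releases:

R1(config)#snmp-server group BULLDOG-GROUP v3 priv
R1(config)#snmp-server user BULLDOG-USER BULLDOG-GROUP v3 auth sha AuthPass123 priv des PrivPass123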
Syslog
Unlike some other vendor products, Cisco routers and switches speak to us in pretty clear terms when something's going on. We just have to know where that conversation is happening, and in many cases it's in the system logging messages, or Syslogs. Let's take a detailed look at a message we've seen quite a bit of in this course:
2d03h: %LINEPROTO-5-UPDOWN: Li
Almost everything there is self-explanatory, but that's an odd timestamp in the front. Two days and three hours since what? More about that when we're done with Syslog! The number in the middle of the message (in this case, the "5" in "LINEPROTO-5-UPDOWN") is the severity level of the message. We can use the severity number or the severity level name to filter the messages we see at the console or have sent to another device. Here's a full
list of the numbers, the corresponding level names, and the IOS Help description of each level. 7: Debugging (“Debugging Messages” — can’t argue with that.) 6: Informational (“Informational Messages” — ditto.) 5: Notification (“Normal but significant conditions”. Probably the most common of the levels, we’ve seen this on events from line protocols going up and down to EIGRP adjacencies
doing the same.) 4: Warning (“Warning Condition”) 3: Error (“Error Conditions”) 2: Critical (“Critical Conditions”) 1: Alert (“Immediate Action Needed” — uh oh) 0: Emergency (“System Is Unusable”) Use show logging to see the current syslog settings for Console, Monitor, Buffer, and Trap logging, as well as the
contents of the log buffer, starting with the most recent events.
R1#show logging
Syslog logging: enabled (0 messages dropped, 0 overruns)
    Console logging: level debugging
    Monitor logging: level debugging
    Buffer logging: level debugging
    Logging Exception size (4096 bytes)
    Trap logging: level informational
Log Buffer (4096 bytes): 1d02h: %DUAL-5-NBRCHANGE: IP-E 1d02h: %SYS-5-CONFIG_I: Config 1d02h: %DUAL-5-NBRCHANGE: IP-E 1d02h: %DUAL-5-NBRCHANGE: IP-E
When you see a level mentioned in the output of this command, it means all events at that numeric level and below - that is, at that severity and anything more severe - will be logged. For example, "level debugging" in the Console, Monitor, and Buffer descriptions means that syslog messages of all levels are sent to those logs, since debugging is the highest-numbered (least severe) level. To change the logging levels, use the logging command followed by the log whose severity level you want to change - "logging buffered",
"logging monitor", etc.
R1(config)#logging ?
  Hostname or A.B.C.D  IP address of the logging host
  buffered             Set buffered logging parameters
  console              Set console logging parameters
  exception            Limit size of exception flush output
  facility             Facility parameter for syslog messages
  history              Configure syslog history table
  host                 Set syslog server IP address and parameters
  monitor              Set terminal line (monitor) logging parameters
  on                   Enable logging to all enabled destinations
  rate-limit           Set messages per second limit
  source-interface     Specify interface for source address in logging transactions
  trap                 Set syslog server logging level

R1(config)#logging buffered ?
  <0-7>           Logging severity level
  <buffer size>   Logging buffer size
  alerts          Immediate action needed
  critical        Critical conditions
  debugging       Debugging messages
  emergencies     System is unusable
  errors          Error conditions
  informational   Informational messages
  notifications   Normal but significant conditions
  warnings        Warning conditions
R1(config)#logging buffered 5
Two notes on that IOS Help readout: You can use the severity level number or name when setting logging levels. With “logging buffered”, you can change the size of the
buffer log. Another important option in that readout:
R1(config)#logging ?
  Hostname or A.B.C.D  IP address of the logging host
This allows you to set the IP address of a syslog server, which is one of mankind’s greatest inventions. I like having the immediate access of the local router’s log contents, but there’s no way to filter or search the content. Setting up
a syslog server allows you to choose a program that will let you view the syslog messages, filter them, and search them for a particular event or type of event.
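A minimal sketch of pointing a router at a syslog server and limiting what gets sent; the 172.16.1.100 address is just an example:

R1(config)#logging 172.16.1.100
R1(config)#logging trap warnings

With logging trap warnings, only messages at severity 4 and more severe (3, 2, 1, 0) are sent to that server.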
Timestamps and Sequence Numbers In the Syslog section, we saw this message:
2d03h: %LINEPROTO-5-UPDOWN: Li
We’re used to seeing a timestamp at the beginning of these messages, but that’s a new one. Two days and three hours what? Two days and three hours since that router was last reloaded, that’s what! Our timestamps
can be set to reflect the current time or the overall uptime. Personally, I'm a big fan of the time over the uptime, so let's change that value. The timestamp format can be set separately for the debug timestamps and the log timestamps.
R1(config)#service timestamps ?
  debug  Timestamp debug messages
  log    Timestamp log messages

R1(config)#service timestamps log ?
  datetime  Timestamp with date and time
  uptime    Timestamp with system uptime

R1(config)#service timestamps log datetime ?
  localtime      Use local time zone for timestamps
  msec           Include milliseconds in timestamp
  show-timezone  Add time zone information to timestamp
R1(config)#service timestamps
To me, using milliseconds in your timestamps is overkill, but the option is there. There’s one related service you should know about:
R1(config)#service ?
  compress-config        Compress the configuration file
  config                 TFTP load config files
  dhcp                   Enable DHCP server and relay agent
  disable-ip-fast-frag   Disable IP particle-based fast fragmentation
  exec-callback          Enable exec callback
  exec-wait              Delay EXEC startup on noisy lines
  finger                 Allow responses to finger requests
  hide-telnet-addresses  Hide destination addresses in telnet command
  linenumber             enable line number banner for each exec
  nagle                  Enable Nagle's congestion control algorithm
  old-slip-prompts       Allow old scripts to operate with slip/ppp
  pad                    Enable PAD commands
  password-encryption    Encrypt system passwords
  prompt                 Enable mode specific prompt
  pt-vty-logging         Log significant VTY-Async events
  sequence-numbers       Stamp logger messages with a sequence number
  slave-log              Enable log capability of slave IPs
  tcp-keepalives-in      Generate keepalives on idle incoming network connections
  tcp-keepalives-out     Generate keepalives on idle outgoing network connections
  tcp-small-servers      Enable small TCP servers (e.g., ECHO)
The sequence-numbers service does exactly what you’d think it does. You can use it with timestamps…
R1(config)#service sequence-numbers
R1(config)#^Z
000156: Sep  8 12:05:58: %SYS-5-CONFIG_I: Configured from console by console
… or without.
R1(config)#no service timestamps
000157: %SYS-5-CONFIG_I: Configured from console by console
To disable either timestamps or sequence numbers, just say “no” in front of the original commands. I’ll turn them both off here, and as a result the syslog message has no time or sequence information!
R1(config)#no service sequence-numbers
R1(config)#no service timestamps
%SYS-5-CONFIG_I: Configured from console by console
What Puts The "Flow" In "NetFlow"?
Cisco's NetFlow has a singular purpose: to collect IP traffic statistics. Those statistics can be used for anything from creating that network baseline we talked about earlier to helping us improve network security. While Cisco developed NetFlow, it's definitely not Cisco-proprietary. An online search
will quickly reveal that other vendors have happily developed their own NetFlow analysis and reporting software products. Best of all, NetFlow is transparent to our network devices! We do need to heed this warning from Cisco’s NetFlow configuration guide page: “NetFlow does consume additional memory and CPU resources; therefore, it is important to understand the resources required on your
router before enabling NetFlow.” Sound advice! Before we look at a NetFlow config, we need to answer the question, “What exactly is a ’flow’?” While advanced versions of NetFlow allow you and I as the network admins to create user-defined flows, it’s generally agreed between NetFlow versions and apps that any traffic that shares the following seven attributes is part of the same flow:
Ingress interface (input interface, that is)
Source and destination IP address
IP Protocol
Source and destination port
IP ToS (Type of Service)
To start your NetFlow config from the CLI, you'll need to define what flows to capture via the ip flow interface-level command….
R1(config)#int fast 0/0
R1(config-if)#ip flow ?
  egress   Enable outbound NetFlow
  ingress  Enable inbound NetFlow
… followed by getting that information to the Collector device with the ip flow-export command. Note the ip flowexport command is a global command, which you’re guaranteed to try at the interface level sooner or later. I certainly have / do!
R1(config-if)#ip flow-export ?
% Unrecognized command - whoops!
R1(config-if)#exit
R1(config)#ip flow-export ?
  destination      Specify the destination IP address
  interface-names  Export interface names
  source           Specify the source interface
  template         Specify the template options
  version          Specify the version number
A config using a Collector at 172.16.1.1 and the usual UDP port of 2055, NetFlow version 5, and using the router’s loopback1 interface as the source of the NetFlow info sent to the Collector would look like this:
R1(config)#ip flow-export destination 172.16.1.1 ?
  <1-65535>  UDP/SCTP port number
R1(config)#ip flow-export destination 172.16.1.1 2055
R1(config)#ip flow-export version 5
R1(config)#ip flow-export source loopback1
Note the port number in this config is a requirement, not an option. Verify your config with show ip flow interface and show ip flow export. At the very beginning of the second command output, you’ll see the NetFlow version number and the source and destination IP addresses. R1#show ip flow interface
FastEthernet0/0
  ip route-cache flow
  ip flow ingress
  ip flow egress
R1#show ip flow export
Flow export v5 is enabled for main cache
  Export source and destination details :
  VRF ID : Default
    Source(1)       10.1.1.1 (Loopback1)
    Destination(1)  172.16.1.1 (2055)
To monitor the NetFlow info at the CLI, use show ip cache flow. In the show ip cache flow info, you can see I recently sent a string of pings through that interface.
R1#show ip cache flow
IP packet size distribution:
   1-32    64    96   128   160   192  ...  480   512   544   576  1024  1536
   .000  .000  .000  .029  .000  .000  ... .000  .000  .000  .000  .970  .000

IP Flow Switching Cache, 278544 bytes
  2 active, 4094 inactive, 3 added
  377 ager polls, 0 flow alloc failures
  Active flows timeout in 30 minutes
  Inactive flows timeout in 15 seconds
IP Sub Flow Cache, 25800 bytes
  2 active, 1022 inactive, 3 added
  0 alloc failures, 0 force free
  1 chunk, 1 chunk added
  last clearing of statistics never

Protocol     Total Flows   Flows/Sec   ...   Idle(Sec)/Flow
ICMP                   1         0.0                   15.2
Total:                 1         0.0                   15.2

SrcIf   SrcIPaddress   DstIf   DstIPaddress
Fa0/0   0.0.0.0        Null    255.255.255.255
There’s a lot more to learn about NetFlow, both in the Cisco and non-Cisco implementations. This section gives you a great head start on future studies, as well as on the CCNA exam!
Intro To Modern Cisco Licensing
Once upon a time, when you needed a Cisco IOS image, you could just download one anytime you felt like it. That "once upon a time" was a long time ago! It's a little more complicated in some ways to get the IOS image you want, but it's gotten better in some ways as well. In the past, if you wanted the IP fundamentals along with
Voice capability, that was one feature set; if you wanted Security features added to that, that was another feature set. Naturally, the greater the capabilities of the feature set you purchased, the more moolah you had to cough up, and you didn’t want to pay for features you weren’t going to use. The new way of doing things is the Cisco Universal Image. You get one image, and then you pay for the features you want to use. Cisco then gives you a
key that will unlock the features you paid for while keeping the others secure (from you, that is). As mentioned earlier, downloading an IOS image today isn’t just a matter of going to Cisco’s website and picking the one you want. Your company must have a service agreement with Cisco, and even that’s not enough — the agreement must cover the download for the file you want. (Some nerve of them, eh?) Of the technology package
licenses available, you must start with the IP Base license, ipbasek9. That license is the foundation of your router’s processes — without that license, the other packages are useless. License suites are also available. Many Integrated Service Routers (ISRs) can run the Network Essential Suite, which contains the following: Securityk9 — Security, naturally, including IOS Firewall and IPS support Uck9 — Unified
Communications, including Voice over IP Datak9 -- Includes the moreimportant-by-the-moment MPLS Appxk9 -- Application Experience features Licensing Types and Processes It’s not always necessary to go through online software activation with a new Cisco router. If you buy a permanent license via Cisco’s online sales tool while choosing your router and IOS, the key and code are preinstalled. When you pop
that baby out of the box, it’s ready to go. If you choose to add a feature set later, you have three options: 1. The Cisco License Manager (CLM) 2. The Cisco Product License Registration Portal 3. Cisco Call Home (not to be confused with ET Phone Home, this allows the router to communicate
directly with the Cisco Product License Registration portal.) It’s likely you’ll use the Cisco License Manager (CLM) to handle the licensing. It’s an outstanding GUI that helps you keep track of your licensing. That might not sound like much, but in an enterprise network, keeping up with your licensing can be quite the enterprise. Once you go CLM, you’ll likely never go back to the CLI for activating licenses, but license
activation can be handled manually with a little help from the Cisco Product License Registration Portal. Courtesy of Cisco’s website, here’s the four-step process for manually activating a license. Note the first word in the entire process is “purchase”. 1. Purchase the Product Authorization Key for the license you need. 2. Get the Unique Device Identifier (UDI) from your
router with the show license udi command.
3. Head to Cisco’s Product License Registration Portal and enter the PAK and UDI information as prompted. 4. The Portal will send you a license file, which you can install via the CLI with the install license command. The Unique Device Identifier mentioned in Step 2 consists of
the Product ID (PID) and the Serial Number (SN). If you can’t get the info you need via the show license udi command, check the back of the hardware for a panel displaying that information. Your particular model may have that info somewhere else, so if you don’t see it there, check your model’s documentation. Here’s a quick look at the show license udi command. Router#show license udi
Device#   PID                SN
*0        C3900-AAAAAA/K9    <serial number>
I hate to mention this, since I know you know, but during Step 4 you need to make the file available to the router via your favorite method (TFTP server, HTTP server, etc.)
Rehosting > Regifting
We've all been through this, probably with a laptop:
1. You purchase software.
2. You install it and start
using it. 3. You buy a new laptop. 4. You realllllllly want to move that software license to the new laptop without buying another license. Realizing how often this happens, some software vendors have made this a much friendlier process, and that includes Cisco! You can actually move a license from one router to another via the Cisco Product License Registration online
portal via the process known as rehosting. Naturally, this process involves revoking the license on the router it’s on now. From Cisco’s site, here’s the 6step process: 1. Get the UDI from the source and destination devices with show license udi. 2. Enter that data as prompted on the Product License Registration page
on Cisco’s website. Use the License Transfer Portal Tool. 3. After choosing the license to be transferred, you’ll be issued a permissions ticket by the Portal. 4. Use the license revoke command to do just that to the source device, which in turn will give you a rehost ticket. 5. Enter the rehost ticket into the License Transfer
Portal, along with the destination router info as prompted. 6. After you get the new license key via email, install it! As of this writing, Cisco waits 60 days to totally revoke the source router’s license, which gives you plenty of time to complete the transfer. There is no guarantee that policy will continue, so please check Cisco’s website if you’re really
depending on that 60-day grace.
Right-To-Use Licensing
Cisco has two licensing formats, one of which we've discussed. The permanent license only requires you to accept the End-User License Agreement, reboot the router, and you're off to the races. (You do read the entire EULA, don't you? Suuuuure you do!) Cisco also allows the use of evaluation licenses, where you can work with the software for
a given period of time, usually 60 or 90 days. As you'd expect, you cannot rehost an evaluation license. The commands to install a permanent license and a temporary license at the CLI differ. For a permanent license, use the license install command. To activate an evaluation license, use the license boot module <module name> technology-package <package name> command. Congratulations for hanging in
there! I know reading about licensing isn’t exactly the most exciting thing in the world, and it’s not exactly lab-friendly. Let’s move on to the fundamentals of VPNs and tunnels!
Intro To VPNs And Tunnels
VPNs are often referred to as tunnels, and we can apply security rules and policies to this tunnel without applying them to other WAN communications. For example, when we configure commands directly on the Serial0 interface, all
communications using that interface are subject to those commands. When we create a VPN, it’s actually seen as a separate interface, and we can apply rules to the VPN that are not applied to other communications using Serial0. In the following exhibit, a VPN has been created between two routers. Security policies can be enforced on the VPN between those two routers without affecting any WAN communications involving the
bottom router. Packets sent through the tunnel are encrypted before transmission, and an additional VPN header is tacked on to the packet as well.
If the two routers connected by the VPN belonged to the same
organization, we’d have an intranet; if they belonged to different organizations, we’d have an extranet. We see only routers in that example, but PCs and laptops can also serve as the endpoint of a VPN. This remote access VPN is usually initiated by the PC end via a VPN client, hopefully one the end user can just click on and connect with minimum effort / input on their part. Other Cisco devices, such as the very popular Adaptive
Security Appliances (ASA), can also serve as a VPN endpoint. The ASA does a lot more than create and maintain VPNs — so much so that the ASA actually has its own certification!
http://www.cisco.com/web/learn
Defining Our Terms
Data origin authentication allows the receiver to guarantee the source of the packet.
Encryption is just that — the sender encrypts the packets before sending them. If an intruder picks them off the
wire, they will have no meaning.
Integrity is the receiver’s ability to ensure that the data was not affected or altered in any fashion as it traveled across the VPN.
Anti-replay protection (sometimes just called “replay protection”) protects against replay attacks, a malicious repeat and/or delay of a valid transmission. Replay attacks begin innocently enough. In this example, Router C requests proof of identity from Router A. Router
A responds with proof of identity.
The problem here is the intruder listening to the conversation and copying Router A’s proof of identity.
After A and C are done with their conversation, the Intruder starts a conversation with C, pretending to be A. When C asks for proof of identity, the Intruder submits A’s ID, and C will accept it.
Anti-replay protection can use several different methods of defeating such an attack, including the one-time use of tokens for the proof of identity or by using sequence numbers; a repeated sequence number will be rejected.
GRE and IPSec A GRE tunnel allows encapsulation of packets via a 24-byte header. 20 bytes of that is the new IP address header, and that header will contain the source and destination IP addresses of the tunnel endpoints. The rest is a 4-byte GRE header. Easy enough, right? Well, yeah, but there’s a problem. GRE’s drawback is that there’s no strong encryption scheme, and that’s a pretty big drawback.
This giant flaw is corrected by IP Security, generally referred to as IPSec. IPSec does offer encryption along with authentication, and that’s why you’ll see more IPSec in today’s networks than GRE.
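Since GRE keeps coming up, here's what a bare-bones GRE tunnel might look like between two routers; all of the addresses and interface names here are my own examples rather than anything from a lab in this book:

R1(config)#interface tunnel0
R1(config-if)#ip address 10.1.1.1 255.255.255.0
R1(config-if)#tunnel source serial0
R1(config-if)#tunnel destination 203.0.113.2

The other endpoint would mirror this, using its own tunnel source and R1's serial address as the tunnel destination.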
Data Encryption Technologies
For data to be encrypted, it follows that something's got to perform this encryption! One such encryption tool is the Data Encryption Standard (DES). DES was developed in 1976, and just a few security issues with networking have popped up since then! The main issue is that the key used by DES to encrypt data is only 56 bits in size. (A key is a random string of binary digits.)
Thirty years ago, that was fine, but it doesn’t meet today’s demands. Depending on which documentation you read and what tool you’re using, DES keys can be broken in any time frame from 24 hours to ten minutes. Triple DES (3DES) is just what it sounds like — the DES encryption procedure is run three times, with three different 56-bit DES keys. Advanced Encryption Standard (AES) can run on any Cisco router that has IPSec DES/3DES
capability.
The IPSec Architecture
IPSec is a combination of three protocols:
Authentication Header (AH), which defines a method for authentication and securing data
Encapsulating Security Payload (ESP), which defines a method for authenticating, securing, and encrypting data
Internet Key Exchange (IKE), which negotiates the security parameters and authentication keys
The IPSec Packet Format
Authentication Header (AH) offers solid security -- it provides data origin authentication as well as offering optional anti-replay
protection. The drawback with AH is that the authentication it provides for the IP Header is not complete. That’s because some of the IP fields can’t be correctly predicted by the receiver — these are mutable fields which may change during transmission. AH will successfully protect the IP packet’s payload, though, which is really what we’re interested in. AH does not offer data confidentiality.
The Encapsulating Security Payload (ESP) does just that - as you can see from the IPSec packet illustration, there is an ESP Header and ESP Trailer surrounding, or encapsulating, the data. ESP offers all of the following:
Data origin authentication
Anti-replay protection
Data confidentiality
Comparing AH and ESP, you might be wondering why you'd ever choose AH over ESP. Here are a few things to consider:
ESP is more processor-intensive than AH. If your data does not require data confidentiality, AH may meet all your requirements.
ESP requires strong cryptography, which isn't available and/or allowed everywhere. AH has no such requirement.
There are a lot more details to an IPSec VPN than you'll see
here — you’ll see them in your CCNA Security studies. I am not going to hit you with all of those details here, but I do want to give you an illustrated look at the overall process of building an IPSec VPN. Something has to trigger the building of a VPN, and that something is “interesting traffic”. We usually use an ACL to define that traffic.
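For instance, an ACL like the one below (addresses made up purely for illustration) could mark traffic between two LANs as "interesting," so only that traffic triggers and rides the tunnel:

R1(config)#access-list 101 permit ip 192.168.1.0 0.0.0.255 192.168.2.0 0.0.0.255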
The endpoints then enter into a negotiation of certain VPN values, such as the encryption and authentication methods to be used.
There’s an exchange of DiffieHellman public keys, followed by the initiator and recipient authenticating each other….
… and that's almost it! The initiator then proposes values for a Security Association, the recipient responds with the parameters it considers acceptable, the initiator confirms receipt of that info….
.. and we have our tunnel!
The initiator will then send encrypted packets across the tunnel, and the recipient will decrypt them with the same algorithm used to encrypt them (decided upon in the negotiation) along with the shared session key. Once the data exchange is complete, the tunnel can be
torn down. This tunnel termination can be configured to occur after a certain number of bytes have passed through the tunnel, or after the tunnel has been up for a certain number of seconds.
The Return Of GRE
Generic Routing Encapsulation (GRE) tunneling has actually made a comeback, since GRE can do things that IPSec can't do, and vice versa. We used to love GRE's multiprotocol capabilities, but that's not as important to us in today's networks as it once was. Combined with a lack of strong security features, GRE was pretty much dead for quite
a while. IPSec is very secure, but it does have drawbacks. Multicast traffic generated by OSPF and EIGRP can’t be carried by basic IPSec — we’ve got to run a combination of IPSec and GRE, commonly called GRE over IPSec. (As of IOS 12.4(4), IPSec supports multicast traffic but not dynamic routing protocols.) By combining GRE and IPSec, each protocol helps to compensate for the other’s limitation:
IPSec adds data integrity and confidentiality that GRE does not offer GRE offers the ability to carry routing protocol traffic, which IPSec does not offer Why call it “GRE over IPSec” rather than “IPSec over GRE”? Because the GRE encapsulation happens first, and then that encapsulation is encapsulated again, by IPSec. In effect, we have a GRE tunnel inside an IPSec tunnel. (You can call it
either, as those two terms are interchangeable.) With the fundamentals of VPNs down, let’s take another look at redundancy — at Layer 3 this time!
1st-Hop Redundancy Protocols
You’ve heard this before, and you’re hearing it again -- we’ll take as much redundancy as we can get in our networks, and that’s particularly true of our routers! If a router goes down, we have real problems. Hosts are relying on that router as a gateway to send packets to remote
networks. In networking, it’s vital to avoid the “single point of failure”, which is a quick way of saying “if this thing goes down, we’re really in a lot of trouble”. R3 in the following illustration is definitely a single point of failure!
For true router redundancy, we need two things:
A secondary router to handle the load immediately if the primary goes down.
A protocol to have the network use that secondary router quickly and transparently.
Time is definitely of the essence here. We need a protocol to quickly detect the fact that the primary router’s down, and then we need a fast cutover to the secondary router. We also need this cutover to be transparent to the hosts, and that includes not moving them to a new default gateway. If
you’re wondering how we’re going to pull off that little trick, stick around! You’ll actually see HSRP on the CCNP SWITCH exam as well, when you might assume it would be the ROUTE exam. That’s because L3 switches have gotten so popular in today’s networks, and all of our router redundancy protocols can be configured on L3 switches as well as routers. Running first-hop redundancy protocols on L3 switches actually makes the cutover to a
backup device a little faster than configuring them on routers, since our end users are directly attached to the L3 switches, making this true first-hop redundancy (or “1-hop redundancy” in some documentation). We have several different methods that allow us to achieve the goal of router redundancy, and a very popular choice is HSRP — the Hot Standby Router Protocol. Please note: In the following section, I’m going to refer to
routers rather than L3 switches, since the HSRP terminology itself refers to “Active routers”, “Standby routers”, and so forth. The commands and theory for all of the following protocols will be the same on an L3 switch as they are on a router.
Hot Standby Router Protocol
Defined in RFC 2281, HSRP is a Cisco-proprietary protocol in which routers are put into an HSRP router group. One of the routers in the HSRP router group will be selected as the Active Router, and that router will handle the routing while the other routers in the group are in standby, ready to handle the load if the primary router becomes unavailable.
The terms “active” and “standby” do not refer to the actual operational status of the routers, only to their status in the HSRP group. The hosts don’t know the actual IP or MAC addresses of the physical routers in the group. They’re set up to use a pseudorouter as their default gateway, a virtual router created by the HSRP configuration. This virtual router will have a MAC and IP address, just like a physical router.
Here’s the best part! The hosts will be configured to use the virtual router’s IP address as a default gateway, and if a physical router goes down and another steps in to take over the load, the hosts don’t need any reconfiguration. They’re sending their packets to the IP address of the virtual router, not the physical one. Here’s the network for our first HSRP lab:
R2 and R3 will both be configured to be in HSRP group 5. The virtual router will have an IP address of 172.12.23.10 /24, the address all hosts will
be using as their default gateway.
R2(config)#interface ethernet0
R2(config-if)#standby 5 ip 172.12.23.10
R3(config)#interface ethernet0
R3(config-if)#standby 5 ip 172.12.23.10
The main show command for HSRP is show standby, and it’s the first command you should run while verifying and troubleshooting HSRP. Let’s run it on both routers and compare results. R2#show standby
Ethernet0 — Group 5 Local state is Standby, prior Hellotime 3 sec, holdtime 10 Next hello sent in 0.776 Virtual IP address is 172.12. Active router is 172.12.23.3, Standby router is local 1 state changes, last state ch R3#show standby Ethernet0 — Group 5 Local state is Active, priori Hellotime 3 sec, holdtime 10 Next hello sent in 2.592 Virtual IP address is 172.12. Active router is local Standby router is 172.12.23.2 Virtual mac address is 0000.0 2 state changes, last state ch
R3 is in Active state, R2 is in Standby. When you see “Active
router is local” in this command, you’re on the Active router! The hosts are using 172.12.23.10 as their gateway, but R3 is actually handling the workload. R2 will take over if R3 becomes unavailable, and that cutover will be transparent to the hosts. Most importantly, no reconfig of the hosts’ default gateway setting is necessary — it stays at 172.12.23.10. An IP address was assigned to the virtual router during the
config, but not a MAC address. However, there is a MAC address under the show standby output on R3, the active router. How did the HSRP process arrive at a MAC of 00-00-0c-07-ac-05 for a router that doesn’t physically exist? The MAC address 00-00-0c-07-ac-xx is HSRP’s well-known virtual MAC address, with xx being the HSRP group number in hex. The group number is 5, which is expressed as 05 in two hex digits. If the group
number had been 17, we’d see 11 at the end of the MAC address (one unit of 16, one unit of 1). The output of the show standby command tells us the HSRP speakers are sending Hellos every 3 seconds, with a 10second holdtime. These values can be changed with the standby command, but HSRP speakers in the same group should have the same timers. You can even tie down the hello time to the millisecond, but it’s realllly doubtful you’ll
ever need to do that.
R3(config-if)#standby 5 timers ?
  Hello interval in seconds
  msec  Specify hello interval in milliseconds
R3(config-if)#standby 5 timers 4 ?
  Hold time in seconds
R3(config-if)#standby 5 timers 4 12
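If you ever do need those sub-second timers, the msec keyword handles it; the 200/750 millisecond values below are just an illustration:
R3(config-if)#standby 5 timers msec 200 msec 750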
A key value in the show standby command is the priority. The selection of the Active Router is tied directly into the HSRP priority, and I expect you to see this topic pop up on your CCNA exam. Best of all, when you learn a few very simple rules regarding
this value, you will destroy any questions they give you on HSRP! The default HSRP priority is 100, as shown in both of the above show standby outputs. The router with the highest priority will be the Active Router. If there’s a tie in priority, the router with the highest IP address on an HSRP-enabled interface is the Active router. That was R3, so R3 is the Active router and R2 is the standby.
Let’s say R2 is a much more powerful router model than R3, and you want R2 to take over as the Active router. We’ll raise the default priority on R2 right now and see the results.
R2(config)#interface ethernet0 R2(config-if)#standby 5 priori R2#show standby Ethernet0 — Group 5 Local state is Standby, prior Hellotime 4 sec, holdtime 12 Next hello sent in 0.896 Virtual IP address is 172.12. Active router is 172.12.23.3, Standby router is local 1 state changes, last state c
R2 now has a higher priority, but R3 is still the Active Router.
Why? The current Active router does not lose that role unless one of these two things happens:
The current Active router goes down, with another Active router chosen in its absence
Another router has its priority set to a higher value than the Active router, AND the preempt option is used while doing so
Here’s the command we need to get the job done, which we’ll verify with show standby.
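On most IOS versions, that means two interface-level commands; the priority value of 150 below is just an assumption, since any value higher than the current Active router’s priority will do:
R2(config-if)#standby 5 priority 150
R2(config-if)#standby 5 preempt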
R2(config-if)#standby 5 priori 1d11h: %STANDBY-6-STATECHANGE:
R2#show standby Ethernet0 — Group 5 Local state is Active, priori Hellotime 4 sec, holdtime 12 Next hello sent in 1.844 Virtual IP address is 172.12. Active router is local
Standby router is 172.12.23.3 Virtual mac address is 0000.0 2 state changes, last state c
In just a few seconds, a message appears that the local state has changed from standby to active. Show standby confirms that R2, the local router, is now the Active Router. R3 is now the standby. On rare occasions, you may have to change the MAC address assigned to the virtual router. This is done with the standby mac-address command. Just make sure
you’re not duplicating a MAC address that’s already on your network!
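The full command takes the new MAC in dotted-hex format; the address below is purely an example:
R2(config-if)#standby 5 mac-address 0000.1111.1111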
R2(config-if)#standby 5 mac-ad 1d12h: %STANDBY-6-STATECHANGE: R2#show standby Ethernet0 — Group 5 Local state is Active, priori Hellotime 4 sec, holdtime 12 Next hello sent in 3.476 Virtual IP address is 172.12. Active router is local Standby router is 172.12.23.3 Virtual mac address is 0000.1 4 state changes, last state c 1d12h: %STANDBY-6-STATECHANGE:
The MAC address will take a
few seconds to change, and the HSRP routers will go into Learn state for that time period. A real-world HSRP troubleshooting note: If you see constant state changes with your HSRP configuration, do what you should always do when troubleshooting — check the physical layer first.
Sharing The Load With HSRP (Multigroup HSRP, That Is!)
One problem with HSRP is the Active router taking on the entire workload while the other routers in the group just sit around, doing nothing! We can fix that with Multigroup HSRP (MHSRP), usually referred to by the informal name HSRP load balancing. Here’s a network that lends itself nicely to HSRP load
balancing. In this case, we’d create two HSRP groups, and use the priority command to make sure R2 becomes the Active router for one group, and R3 the Active router for the other.
The configs:
R2:
int e0
 ip address 172.12.23.2 255.255.255.0
 standby 11 ip 172.12.23.11
 standby 11 preempt
 standby 11 priority 99
 standby 22 ip 172.12.23.22
 standby 22 preempt
R3:
int e0
 ip address 172.12.23.3 255.255.255.0
 standby 11 ip 172.12.23.11
 standby 11 preempt
 standby 22 ip 172.12.23.22
 standby 22 preempt
 standby 22 priority 99
This config will make R2 the Active router for Group 22 and R3 the Active router for Group 11. Configure half of our hosts to use 172.12.23.11 as their default gateway, and the other half to use 172.12.23.22, and you’re all set.
You could also base your load balancing on VLAN memberships and subnets. This is not 50/50 load balancing, and if the hosts using .11 as their gateway are sending much more traffic than the hosts using .22, HSRP has no dynamic method of adapting. It’s still better than one router doing all the work and the other just sitting around!
Troubleshooting HSRP
The show standby command is great for HSRP troubleshooting and verification. I’ve deliberately misconfigured HSRP on this router to illustrate a few things to watch out for.
R1#show standby FastEthernet0/0 — Group 1 State is Active 2 state changes, last state c Virtual IP address is 172.12. Active virtual MAC address is Local virtual MAC address is Hello time 3 sec, hold time 1 Next hello sent in 2.872 secs
Preemption disabled Active router is local Standby router is unknown Priority 100 (default 100) IP redundancy name is “hsrp-F
FastEthernet0/0 — Group 5 State is Init (virtual IP in Virtual IP address is 172.12. Active virtual MAC address is Local virtual MAC address is Hello time 3 sec, hold time 1 Preemption disabled Active router is unknown Standby router is unknown Priority 75 (default 100) IP redundancy name is “hsrp-F
We’ve got all sorts of problems here! In the Group 5 readout,
we see a message that the subnet is incorrect. Naturally, both the active and standby routers are going to be unknown. In the Group 1 readout, the Active router is local but the Standby is unknown. This is most likely a misconfiguration on our part as well, but along with checking the HSRP config, always remember -- “Troubleshooting starts at the Physical layer!” Check your cabling, as a loose cable can cause some real issues.
Then again, when can’t loose cables cause problems? Frankly, most HSRP issues you run into fall into these categories:
The secondary router didn’t become the Active router when it should have.
The former Active router didn’t take back over when it came back online.
If either of those happens to you, check these values:
Is the preempt command properly configured? (I put this first in the list for a reason.)
What are the priority values of each HSRP speaker?
Whew! That’s a lot of detail — and only one of our redundancy choices. Let’s check out another one. You’ll be glad to know that in a way, you’ve already been studying this one!
Virtual Router Redundancy Protocol
Defined in RFC 2338, VRRP is the open-standard equivalent of the Cisco-proprietary HSRP. The operation of the two is so similar that you basically learned VRRP while going through the HSRP section! There are some differences, a few of which are:
VRRP’s equivalent to HSRP’s Active router is the Master router. (Some VRRP documentation refers to this router as the IP Address Owner.) This is the router that has the virtual router’s IP address as a real IP address on the interface it will receive packets on. The physical routers in a VRRP Group combine to form a Virtual Router.
The VRRP Virtual Router uses an IP address already configured on a router in its group, as opposed to how the HSRP virtual router is assigned a separate IP address.
VRRP Advertisements are multicast to 224.0.0.18.
VRRP’s equivalent to HSRP’s Standby router state is the Backup state.
The MAC address of VRRP virtual routers is 00-00-5e-00-01-xx, and “xx” is the group number in hexadecimal.
“preempt” is a default setting for VRRP routers.
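The configuration mirrors HSRP almost keyword for keyword, with vrrp in place of standby. A minimal sketch, reusing the group number and addressing from our HSRP lab purely for illustration:
R2(config)#interface ethernet0
R2(config-if)#vrrp 5 ip 172.12.23.10
R2(config-if)#vrrp 5 priority 150
show vrrp and show vrrp brief are the verification commands to reach for.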
Now on to our third option for router redundancy!
Gateway Load Balancing Protocol (GLBP)
HSRP and its open-standard cousin VRRP have some great features, but accurate load balancing is not among them. While both allow a form of load sharing, it’s not true load balancing. The primary purpose of the Gateway Load Balancing Protocol (GLBP) is just that — load balancing! It’s also suitable for use only on Cisco routers, because GLBP is Cisco-proprietary.
As with HSRP and VRRP, GLBP routers will be placed into a router group. However, GLBP allows every router in the group to handle some of the load in a round-robin format, rather than having a primary router handle all of it while the standby routers remain idle. With GLBP, the hosts think they’re sending all of their data to a single gateway, but multiple gateways are actually in use simultaneously. GLBP allows standard configuration of the hosts, who
will all have their gateway address set to the virtual router’s address — none of this “some hosts point to gateway A, some hosts point to gateway B” business we had with HSRP load balancing. The key to GLBP: When a host sends an ARP request for the MAC of the virtual router, one of the physical routers will answer with its own MAC address. The host will then have the IP address of the GLBP virtual router and the MAC address of a physical router in the group.
Let’s take an illustrated look at GLBP’s operation. Here, six hosts are sending an ARP request for the MAC of the virtual router (10.1.1.10).
The Active Virtual Gateway (AVG), the router with the highest GLBP priority, will answer with ARP responses containing different virtual MAC
addresses.
The hosts will all have 10.1.1.10 as their default gateway, but some will have that address mapped to R2’s MAC in their ARP cache, some will have R3’s MAC, and some
will have R4’s MAC, which gives us the load balancing we want while keeping the same default gateway IP address on all hosts. The routers actually forwarding the traffic are the Active Virtual Forwarders (AVFs). If the AVG fails, the router serving as the standby AVG will take over. If any of the AVFs fails, another router will handle the load destined for a MAC on the downed router. GLBP routers use Hellos to detect whether other routers in their group are available or not. GLBP groups can have up to
four members. GLBP’s load balancing also offers the opportunity to fine-tune it to your network’s needs. GLBP offers three different forms of MAC address assignment, the default being round-robin. With round-robin assignments, a host that sends an ARP request will receive a response containing the next virtual MAC address in line. If a host or hosts need the same MAC gateway address every time they send an ARP request, host-dependent load
balancing is the way to go. Weighted MAC assignments affect the percentage of traffic that will be sent to a given AVF. The higher the assigned weight, the more often that particular router’s virtual MAC will be sent to a requesting host. GLBP is enabled just as VRRP and HSRP are — by assigning an IP address to the virtual router. The following command will assign the address 172.1.1.10 to GLBP group 5.
MLS(config-if)#glbp 5 ip 172.1.1.10
To change the interface priority, use the glbp priority command. To allow the local router to preempt the current AVG, use the glbp preempt command.
MLS(config-if)# glbp 5 priorit MLS(config-if)# glbp 5 preempt
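If the default round-robin behavior isn’t what you want, the assignment method is also set per group. A minimal sketch, where the weighting value of 110 is simply an assumed number for illustration:
MLS(config-if)#glbp 5 load-balancing weighted
MLS(config-if)#glbp 5 weighting 110
show glbp brief is a quick way to verify which router is the AVG and which virtual MACs are in play.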
With our router redundancy taken care of, let’s take a deep breath and jump into IPv6! Before we start, just a quick word…. Cisco split IP Version 6 up
between ICND 1 and ICND2, and about 95% of it is in ICND1. Accordingly, there’s a ton of IPv6 material in my ICND1 Study Guide. I know most of you bought that book along with this one, but I really want to make sure everyone is covered and ready for the ICND2 exam. For that reason, I’ve included the entire IPv6 section from the ICND1 book here in the ICND2 book. There is some ICND2-specific material dealing with OSPF and EIGRP for IPv6 at the end of
this section, so even if you read ICND1 yesterday, read this entire section today! With that, let’s jump right in and start mastering IPv6!
IP Version 6
IP Version 6 is all around us today, and even if you’re not working directly with it today, you will be one day! Well, you will be if you’ve taken the initiative to learn IPv6. A lot of network admins have put off learning IPv6, which is a huge mistake. Even if it doesn’t impact your current career, you’re definitely limiting your
future prospects if you aren’t strong with IPv6 — and you’re strengthening your prospects when you are! By studying the material in this section, you’ll have a strong foundation in IPv6, and your future success is all about the foundation you build today. The IPv6 addresses themselves are the scariest part of IPv6 for many admins, and we’ll dive right into addresses — and you’re going to master them!
The IPv6 Address Format
Typical IPv4 address: 129.14.12.200
Typical IPv6 address: 1029:9183:81AE:0000:0000:0AC1 As you can see, IPv6 isn’t exactly just tacking two more octets onto an IPv4 address! I haven’t met too many networkers who really like typing, particularly numbers. You’ll be happy to know there are some rules that will shorten those addresses a bit, and it’s a very good idea to be fluent with these rules for your CCNA exam.
You’ll also need the skill of re-expanding the addresses from their compressed state to their full 128-bit glory, and you’ll develop that skill in this section as well. Be sure to have something to write with and on when studying this section.
Zero Compression And Leading Zero Compression
When you have consecutive blocks of zeroes in an IPv6 address, you can represent all of them with a single set of colons. It doesn’t matter if you have two fields or eight, you
can simply type two colons and that will represent all of them. The key is that you can only perform this zero compression once in an IPv6 address. Here’s an example:
Original format: 1234:1234:0000:0000:0000:0000:3456:3434
Using zero compression: 1234:1234::3456:3434
Since blocks of numbers are separated by a single colon in the first place, be careful when scanning IPv6 addresses for legality. If you see two sets of colons in the same address, it’s
an illegal address — period, no exceptions. (Hooray!) We can also drop leading zeroes in any block, but each block must have at least one number remaining. You can perform leading zero compression in any address as many times as you like. By the way, I refer to each individual set of numbers in an IPv6 address as “blocks” and occasionally “fields” ; you can call them whatever you like, since there’s no one official term.
Let’s look at an example of leading zero compression. Taking the address 1234:0000:1234:0000:1234:0000:0123:1234, we have four different blocks that have leading zeroes. The address could be written out as it is, or we can drop the leading zeroes.
Original format: 1234:0000:1234:0000:1234:0000:0123:1234
With leading zero compression: 1234:0:1234:0:1234:0:123:1234
For your exam and for the real world, both of those expressions are correct. It’s just
that one uses leading zero compression and the other does not. Watch that on your exam! Using zero compression and leading zero compression in the same address is perfectly legal:
Original format: 1111:0000:0000:1234:0011:0022:0033:0044
With zero and leading zero compression: 1111::1234:11:22:33:44
Zero compression uses the double colon to replace the second and third block of numbers, which were all zeroes. Leading zero compression replaced the “00” at the beginning of each of the last four blocks. Just be careful and take your time with both zero compression and leading zero compression and you’ll do well on the exam and in the real world.
Why Can’t You Use Zero Compression More Than Once?
As soon as you tell me I can’t do something, I want to know why — and then I’ll probably try it anyway. (Mom always said I was a strong-willed child.) So when I was checking out IPv6 for the first time and ran into that zero compression limitation, I thought “Why can’t you use that more than once?”
Let’s check out this example to see why:
1111:0000:0000:2222:0000:0000:0000:3333
If we were able to use zero compression more than once, we could compress that address thusly: 1111::2222::3333
Great! But what happens when the full address is needed? We know there are eight blocks of numbers in an IPv6 address, but how would we know the number of blocks represented by each set of colons?
That full address could be this:
1111:0000:2222:0000:0000:0000:0000:3333 Or this:
1111:0000:0000:0000:0000:2222:0000:3333 Or this!
1111:0000:0000:0000:2222:0000:0000:3333 If multiple uses of zero compression were legal, every one of those addresses could be represented by 1111::2222::3333 — and none of them would actually be the original address! That’s why using zero
compression more than once in an IPv6 address is illegal — there would be no way to know exactly what the original address was, which would kind of defeat the purpose of compression!
“The Trailing Zero Kaboom”
Watch this one — it can explode points right off your score. When you’re working with zero compression, at first it’s easy to knock off some trailing zeroes along with the full blocks of zeroes, like this:
1111:2222:3300:0000:0000:0000:0044:5555
… does NOT compress to…
1111:2222:33::44:5555
The correct compression: 1111:2222:3300::44:5555
You can’t compress trailing zeroes. That’s another way to identify illegal IPv6 addresses -- if you see multiple colon sets or zeroes at the end of a block being compressed, the address expression is illegal.
Decompressing While Avoiding The Bends
Decompressing an IPv6 address is pretty darn simple. Example: 2222:23:a::bbcc:dddd:342
First, insert zeroes at the beginning of each block that has at least one value in it. The result:
2222:0023:000a::bbcc:dddd:0342 Next, insert fields of zeroes where you see the set of colons.
How many fields, you ask? Easy! Just count how many blocks you see now and subtract it from eight. In this case, we see six blocks, so we know we need two blocks of zeroes to fill out the address.
2222:0023:000a:0000:0000:bbcc:dddd:0342
Done and done! This is also an easy skill to practice whenever you have a few minutes, and you don’t even need a practice exam to do so. Just take a piece of paper, and without putting a lot of thought into it, just write out
some compressed IPv6 addresses and then practice decompressing them. (You should put thought into that part.)
The Global Routing Prefix: It’s Not Exactly A Prefix
While the address formats of IPv4 and v6 are wildly different, the purpose of many of the IPv6 addresses we’ll now discuss will seem familiar to you — and they should! These v6 addresses have some huge advantages over v4 addresses, particularly when it comes to subnetting and summarization. The IPv4 address scheme really wasn’t developed with subnetting or summarization in mind, whereas IPv6 was developed with those helpful features specifically in mind. In short, v6 addresses were born to be subnetted and summarized! I mention that here because our first address type was once often referred to as an “aggregateable global unicast address”. Thankfully, that first word’s been dropped, but the global unicast address was designed for easier summarization and subnetting.
Basically, when your company gets a block of IPv6 addresses from an ISP, it’s already been subnetted a bit. At the top of the “IPv6 address subnet food chain” is the IANA, the Internet Assigned Numbers Authority (http://www.iana.org/). The IANA has the largest block of addresses, and assigns subnets from those blocks to Regional Internet Registries (RIRs) in accordance with very strict rules. In turn, the Registries
assign subnets of their address blocks to ISPs. (The IANA never assigns addresses directly to ISPs!) These RIRs are located in different regions of the world. The ISPs then subnet their address blocks, and those subnets go to their customers. I strongly recommend you visit http://www.iana.org/numbers for more information on this process. It’s beyond the scope of the CCENT exam, but it’s cool to see where the
Registries are, along with charts showing how the IANA keeps highly detailed information on where the IPv6 global unicast addresses have been assigned.
Now here’s the weird part — these blocks of addresses are actually referred to as “global
routing prefixes”. When you think of a “prefix” at this point, you likely think of prefix notation (/24, for example). It’s just one of those IPv6 things we have to get used to. Here’s something else we need to get used to — you and I are now the network admins of Some Company With No Name (SCWNN). And our first task awaits!
“Now What Do I Do?” We’ve requested a block of addresses from our ISP (a “global routing prefix”, in IPv6speak), and we’ve got ’em. Now what do we do? We subnet them! Hey, come back! It’s not that bad. Personally, I believe you’ll find IPv6 subnetting to be easier than IPv4 subnetting — after you get some practice in, of course! When we get the global routing
prefix from our ISP, that comes with a prefix length, and in our example we’ll use a /48 prefix length. The prefix length in IPv6 is similar to the network mask in IPv4. (The /48 prefix length is so common that prefixes with that length are sometimes referred to as simply “forty-eights”.) You might think that leaves us a lot of bits to subnet with, but there’s also an Interface Identifier to work with, and it’s almost always 64 bits in length. This ID is found at the end of
an IPv6 address, and it identifies the host. We’ll go with that length in this exercise. So far we have a 48-bit prefix and a 64-bit identifier. That’s 112 bits, and since our addresses are 128 bits in length, that leaves us 16 bits for --- subnetting!
Global Routing Prefix: 2001:1111:2222 (48 bits)
Subnet ID: 16-bit value found right after the GRP
Interface ID: 64-bit value
that concludes the address Can we really create as many subnets as we’ll ever need in our company with just 16 bits? Let’s find out. We use the same formula for calculating the number of valid subnets here as we did with v4 — it’s 2 to the Nth power, with “N” being the number of subnet bits. 2 to the 16th power is 65,536. That should cover us for a while!
Now we need to come up with the subnet IDs themselves.
Determining The Subnet ID
Nothing to it, really. In our example of 2001:1111:2222 as the global routing prefix, we know that the next block will represent the subnets. You can just start writing them out (or entering them in a spreadsheet — highly recommended) and go from there. Your first 11 subnets are 0001, 0002, 0003, 0004, 0005, 0006, 0007, 0008, 0009, 000A, and 000B. I listed that many as a gentle reminder that we’re dealing with hex here! Our first full subnet is 2001:1111:2222:0001::/64, the next is 2001:1111:2222:0002::/64, and so forth. That’s it! Just be sure to keep careful records as to where each of your subnets are placed in your network, and I strongly recommend you issue them sequentially rather than just pulling values at random. Now we’re going to start assigning IPv6 addresses to router interfaces. We have options with IPv6 that are
similar to IPv4’s static assignment and DHCP, but there are important differences we must be aware of in order to pass the exams — and just as importantly, to be ready to work with IPv6 in the field. Let’s get to work!
First Things First: Enable IPv6 Routing — Twice?
We don’t think twice about using IPv4 routing on a Cisco router, since it’s on by default. However, when using IPv6 routing, you need to enable it twice:
Enable IPv6 routing globally with the ipv6 unicast-routing command
Enable IPv6 routing on an interface level with ipv6 address, followed by the
IPv6 address itself.
V6ROUTER1(config)#ipv6 unicast-routing
V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 address 2001:1111:2222:1:1::/64
You won’t get a message that IPv6 routing has been enabled after you run ipv6 unicast-routing, nor will pigeons be let loose, so you better verify with show ipv6 interface and show ipv6 interface brief. Note: It’s really easy to leave the “ipv6” part of those commands out, since we’re
used to running those commands without it. Another note: I’m going to truncate the output of both of these commands for now — you’ll see the full output later.
V6ROUTER1#show ipv6 interface FastEthernet0/0 is up, line pr IPv6 is enabled, link-local ad No Virtual link-local address Global unicast address(es): 2001:1111:2222:1:1::, subnet Joined group address(es): FF02::1 FF02::2 FF02::1:FF00:0 FF02::1:FFEF:D240
A little of this output is familiar, particularly that first line. Just as with IPv4, we need our IPv6 interface to show as “up” and “up” — and if they’re not, we go through the exact same troubleshooting checklist as we would if this were an IPv4 interface. Always start troubleshooting at the physical layer, no matter what version of IP you’re running! Since we’re good on the physical and logical state of the interface, we can look at the rest of the config — and
everything’s different here! We see the global unicast address we configured on the interface, and the subnet is right next to that. After that, we seem to have joined some groups, and we’ve also got something called a “link-local address”. Before we delve into those topics, let’s have a look at show ipv6 interface brief.
V6ROUTER1#show ipv6 interface FastEthernet0/0 [up/up] FE80::20C:31FF:FEEF:D240 2001:1111:2222:1:1:: Serial0/0 [administratively d FastEthernet0/1 [administrativ
Serial0/1
[administratively d
Brief, eh? All we get here is the state of each interface on the router, and the IPv6 addresses on the IPv6-enabled interfaces. Note the output doesn’t even tell you what those two addresses are, so we better know the top one is the link-local address and the bottom one is the global unicast address. We know what the global unicast address is, so let’s spend a little time talking about that link-local address — tis an
important IPv6 concept!
The Link-Local Address
Another “name is the recipe” topic! Packets sent to a link-local address never leave the local link — they’re intended only for hosts on the local link, and routers will not forward messages with a link-local address as a destination. (Since these are unicast messages, the only host that will process it is the one it’s unicast to.) Fun fact: IPv4 actually has link-local addresses, but they rarely come into play. In IPv6, a link-
local address is assigned to any and every IPv6-enabled interface. We didn’t configure a link-local address on our Fast 0/0 interface, but when we ran our show ipv6 interface commands, we certainly saw one!
V6ROUTER1#show ipv6 interface FastEthernet0/0 is up, line pr IPv6 is enabled, link-local ad
Soooo… if we didn’t configure it, where did it come from? Is our router haunted by a g-g-gghoooooooost?
Nothing as fun as that. The router simply created the link-local address on its own, in accordance with a few simple rules. I’m sure you noticed that address was expressed using zero and leading zero compression, so let’s decompress it and examine the address in all its 128-bit glory. Compressed: FE80::20C:31FF:FEEF:D240
Uncompressed: FE80:0000:0000:0000:020C:31FF:FEEF:D240
According to the official IPv6 address standards, the link-
local reserved address block is FE80::/10. That means only the first ten bits of the address are fixed, and breaking that first block down into binary….
(using 8, 4, 2, 1 for each hex digit, FE80 = 1111 1110 1000 0000)
… we see that by setting the last two bits of that third hex digit to all possible values, we end up with 1000, 1001, 1010, and 1011. That means link-local addresses should be able to begin with FE8, FE9,
FEA, and FEB. However, RFC 4291 states that the 54 bits following the 10-bit prefix should all be set to zero, and the only first block that satisfies that is FE80. Following that standard — which is exactly what you should do on exam day and in the field — link-local addresses should begin with FE80, followed by three blocks of zeroes. So far, our link-local address is FE80:0000:0000:0000. We’re 64 bits short, and the Cisco
router’s going to take care of that by creating its own interface ID via EUI-64 rules. And while the router will figure out its own interface identifier in the field, you may just be asked to determine a couple of these on your exam or job interview. With that said, let’s take a close look at the process and compare it to what we’re seeing on our live equipment!
How Cisco Routers Create Their Own Interface Identifier
It’s easy, and I’d be ready to perform this little operation on exam day. The router just takes the MAC address on the interface, chops it in half, sticks FFFE in the middle, and then performs one little bit inversion. Done! In our example, we’ll use 11-22-33-aa-bb-cc. Chop it in half and put the FFFE in the middle…
1122:33FF:FEAA:BBCC
… and you’re almost done. Write out the binary value of the first two hex digits, “11” in this case, and invert the 7th bit. “Invert the bit” is a fancy way of saying “If it’s a zero, make it a one, and if it’s a one, make it a zero.”
11 = 0001 0001
Invert the 7th bit:
0001 0011, which is hex 13
Replace the first two characters with the ones you just calculated, and you’re done! The interface identifier is 1322:33FF:FEAA:BBCC. Let’s practice this skill using the MAC address of FastEthernet 0/0 on our live IPv6 router.
V6ROUTER1#show int fast 0/0 FastEthernet0/0 is up, line pr
Hardware is AmdFE, address is 000c.31ef.d240
The MAC address is 000c.31ef.d240, so we’ll split that right in half and put FFFE in the middle: 000c:31FF:FEEF:D240 Now for that bit inversion! We know 00 = 0000 0000, so invert the 7th bit to a 1, and we have 0000 0010, which equals 02. Put the “02” in the address in place of the “00” at the beginning of the identifier, and we have…. 020c:31FF:FEEF:D240
… and after a (very) little leading zero compression, we’re left with 20C:31FF:FEEF:D240. Is that correct? Let’s check out that link-local address….
V6ROUTER1#show ipv6 interface FastEthernet0/0 is up, line pr IPv6 is enabled, link-local a
We’re right! The full link-local address is shown, and after the zero compression of the prefix FE80:0000:0000:0000, the interface identifier is listed — and it matches our calculations
exactly! While this is an important process to know about, you can also configure an interface’s link-local address with the ipv6 address command:
V6ROUTER1(config-if)#ipv6 address ?
  WORD                General prefix name
  X:X:X:X::X          IPv6 link-local address
  X:X:X:X::X/<0-128>  IPv6 prefix
  autoconfig          Obtain address using autoconfiguration
Naturally, you have to abide by the link-local address rules we talked about earlier.
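As a quick sketch, a hand-picked link-local address goes on with the link-local keyword; FE80::1 here is just an arbitrary (but legal) example:
V6ROUTER1(config-if)#ipv6 address FE80::1 link-local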
Using The EUI-64 Process With The ipv6 Address Command
Earlier, we statically applied the full IPv6 address to the FastEthernet 0/0 interface, and that’s one way to get that address on the interface. However, if you just want the address to be unique and you don’t need to assign a certain specific address to the interface, you can use the eui-64 option with the ipv6 address command to come up with a
unique address. I’ll use that option on the live equipment, after first removing the full address we applied earlier.
V6ROUTER1(config)#int fast 0/0 V6ROUTER1(config-if)#no ipv6 a
Enter the prefix and prefix length, followed by eui-64.
V6ROUTER1(config-if)#ipv6 address 2001:1111:2222:1::/64 ?
  anycast  Configure as an anycast
  eui-64   Use eui-64 interface identifier
V6ROUTER1(config-if)#ipv6 address 2001:1111:2222:1::/64 eui-64
Verify the global unicast address creation with show ipv6 interface.
V6ROUTER1#show ipv6 int fast 0 FastEthernet0/0 is up, line pr IPv6 is enabled, link-local a No Virtual link-local address Global unicast address(es): 2001:1111:2222:1:20C:31FF:FEE
Note the global unicast address is now the prefix followed by the same EUI-64 interface identifier we saw in the link-local address. The result is a unique address that was calculated in part by the router, and not totally configured by us.
Would you believe there’s a third way for that interface to get its address? Since the first two methods have been static configurations, I bet you think this one’s dynamic. Let’s use IOS Help to see that one…
V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 address ?
  WORD                General prefix name
  X:X:X:X::X          IPv6 link-local address
  X:X:X:X::X/<0-128>  IPv6 prefix
  autoconfig          Obtain address using autoconfiguration
Sounds kinda dynamic! More about autoconfiguration later — right now, let’s talk about the
IPv6 equivalent of IPv4’s Address Resolution Protocol!
The Neighbor Discovery Protocol
NDP allows an IPv6 host to discover its neighbors — but you already knew that just by reading the protocol name. The “neighbors” we’re talking about here are other hosts and routers, and the process for discovering routers is different from the host-discovery process. Let’s start with finding our routers! To start the router discovery process, the host sends a
Router Solicitation multicast onto its local link. The destination is FF02::2, the “All IPv6 Routers” address. The primary value the host wants is the router’s link-local address.
Any router on the link that hears that message will respond with a Router
Advertisement packet. That advertisement can have one of two destination addresses. If the querying host already has an IPv6 address, that address would have been the source of the RS message, and the router will unicast its RA back to that address. If the querying host does not yet have an IPv6 address, the source address of the RS will be all zeroes, and in that case the router will multicast the RA to
FF02::1, the “All IPv6 Nodes” address.
IPv6 routers don’t just sit around and wait to be asked for that info; on occasion, they’ll multicast it onto the link without receiving an RS. By default, the RA is multicast to FF02::1 every 200 seconds.
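If you ever need to tune that timer, it’s an interface-level command; 200 below simply restates the default, and on older IOS versions the keyword is written ra-interval instead:
V6ROUTER1(config-if)#ipv6 nd ra interval 200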
Now that we’re successfully discovering routers, let’s start discovering neighbors, with the aptly-named Neighbor Solicitation and Neighbor Advertisement messages! The Neighbor Solicitation message is the rough equivalent of IPv4’s ARP Request. The main difference is
that an ARP Request asked for the MAC address of the device at a particular IPv4 address….
… and a Neighbor Solicitation message asks neighbors found in the solicited-node multicast address range of the destination IPv6 address to reply with their link-layer addresses.
This leads us to the musical question “What the $&%*)%*)*$ is a solicitednode multicast address?” Welllll, this isn’t exactly one of those “the name is the recipe” protocols we’ve seen in this course, so let’s take a few minutes to examine this
address and figure out exactly what the “range” is.
The Solicited-Node Multicast Address
“Dying is easy. Comedy is hard.” -- Edmund Kean
“Determining the solicited-node multicast address for a given IPv6 address is easy. Figuring out what the heck a ’solicited-node multicast address’ is — now THAT’S hard.” -- Chris Bryant
I doubt my quote goes down in
posterity, but it really does apply to this little section of our studies. Here’s the deal with this address. It is a multicast that goes to other hosts on the local link, but not to all hosts on the local link -- just the ones that have the same last six hex values as the destination IPv6 address of the message. I kid you not — that’s what it is! This wasn’t developed just to be funny or to help create tricky exam questions. There are IPv6 services that rely on this
address, and you’ll see those in future studies. For right now, we need to know what this address is (covered) and how to determine the solicited-node multicast address for a given IPv6 address (coming right up!) This address is actually in the output of show ipv6 interface, but we better know where and how it was calculated, since neither is very obvious. I’ve left in a little more info in this command output than I have in the past — there’s a big hint as to where to find the solicited-
node multicast address.
V6ROUTER1#show ipv6 interface FastEthernet0/0 is up, line pr IPv6 is enabled, link-local a No Virtual link-local address Global unicast address(es): 2001:1111:2222:1:20C:31FF:FEE Joined group address(es): FF02::1 FF02::2 FF02::1:FFEF:D240
Under “joined group address(es)”, you see three different addresses. The first two, FF02::1 and FF02::2, we saw earlier in this section. The
third, FF02::1:FFEF:D240, is the solicited-node multicast address for the local host. Solicited-node addresses always begin with FF02::1:FF. To get the rest, just grab the last six hex digits of the global unicast address and tack them right on the end of the multicast address.
V6ROUTER1#show ipv6 interface FastEthernet0/0 is up, line pr IPv6 is enabled, link-local a No Virtual link-local address Global unicast address(es): 2001:1111:2222:1:20C:31FF:FE Joined group address(es): FF02::1
FF02::2 FF02::1:FFEF:D240
That’s it! Now back to our Neighbor Solicitations and Advertisements! When last we left our IPv6 host, now named “Host A”, it was sending a Neighbor Solicitation to the solicited-node multicast address that corresponds with the IPv6 address of the destination host, “Host B”.
You can see how this cuts down on overhead when compared to IPv4’s ARP. This initial request for information is a multicast that’s going to be processed by a very few hosts on the link, where an IPv4 ARP Request was a broadcast that every host on the link had to stop and take a look at.
After all that, it’s time for a Neighbor Advertisement! Host B answers the NS with an NA, and that NA contains Host B’s link-layer (MAC) address. Host A pops that address into its Neighbor Discovery Protocol neighbor table (the equivalent of IPv4’s ARP cache), and we’re done!
DHCP In IPv6
DHCP is one of the most useful protocols we’ll ever use, so IPv6 certainly wasn’t going to eliminate it — but just as we can always get better, so can protocols. Let’s jump into DHCP for IPv6, starting with a comparison of Stateful DHCP and Stateless DHCP. Stateful DHCP works a lot like the DHCP we’ve come to know and love in our IPv4 networks. See if this story sounds familiar:
A host sends a DHCP message, hoping to hear back from a DHCP server. The server will give the host a little initial information, and after another exchange of packets, the host is good to go with the IP address it accepted from the server. That address is good for the duration of the lease, as defined by the server. There are four overall messages in the entire DHCP process, two sent by the client and two by the server. The location of the DNS servers
is also given to the client. The server keeps a database of information on clients that accept the IP addresses that it offers. A problem comes in when there’s a router in between our host and DHCP server. In that case, we need the router to act as a relay agent. Those paragraphs describe both DHCPv4 and Stateful DHCPv6. There are some differences, of course:
The DHCPv6 messages Solicit, Advertise, Request, and Reply take the place of DHCPv4’s Discovery, Offer, Request, Acknowledgement messages. Note that while DHCPv6 lets the client know where the DNS servers are, just like DHCPv4 does, DHCPv6 does not include default router information as DHCPv4 does. The host will get that information from NDP. Overall, the DHCPv6 Relay Agent operation is just like that of DHCPv4. There are obviously some different messages and
addresses involved, but this illustration of a typical Relay Agent operation will show you how similar the two are.
That Solicit message is link-local in scope, so if there’s a router between the host and the DHCP server, we have to configure the router as a relay
agent. We do that by configuring the ipv6 dhcp relay command on the interface that will be receiving the DHCP packets that need to be relayed.
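Here’s the shape of that config as a minimal sketch; the server address uses the 2001:DB8::/32 documentation prefix and is purely an assumption:
V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 dhcp relay destination 2001:DB8:1::5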
V6ROUTER1(config)#int fast 0/0 V6ROUTER1(config-if)#ipv6 dhcp client Act as an IPv6 DHCP relay Act as an IPv6 DHCP server Act as an IPv6 DHCP
V6ROUTER1(config-if)#ipv6 dhcp destination Configure relay d
V6ROUTER1(config-if)#ipv6 dhcp X:X:X:X::X IPv6 address V6ROUTER1(config-if)#$elay des
The dollar sign appears at the far left of the input, since this command is too long for the screen. As a result of this command, the router will relay the DHCP Solicit to the destination we specify. When the router sees return messages from the DHCP server, the router will relay those messages to Host A. Verify the router is a now a member of the “All DHCP Servers and Agents” multicast group with the show ipv6
interface command. The interface with the relay agent config will show FF02::1:2 under “Joined Group Address(es)”.
V6ROUTER1#show ipv6 int fast 0 FastEthernet0/0 is up, line pr IPv6 is enabled, link-local a No Virtual link-local address Global unicast address(es): 2001:1111:2222:1:20C:31FF:FEE Joined group address(es): FF02::1 FF02::2 FF02::1:2 FF02::1:FFEF:D240
Now let’s have a look at
Stateless Autoconfiguration! Where Stateful DHCPv6 has a lot in common with DHCPv4, Stateless Autoconfiguration is a whole new world. We have hosts that create their own IPv6 addresses! That process starts with some info the host received from the router way back during those Router Solicitation and Router Advertisement messages. We discussed a little of that info at that time, but here’s some more detail on what the RA contains — and one important
value it does NOT contain.
Among the information contained in that RA sent to the host is the link’s prefix and prefix length, and that info allows the host to get started on creating its own IP address. All the host has to do is tack its
64-bit interface identifier onto the back of the 64-bit prefix, and voila …. A 128-bit IPv6 address! There’s a very good chance this address will be unique on the local link, but we don’t want to leave that kind of thing to chance. Instead, that local host will perform the Duplicate Address Detection procedure before using this newly created IPv6 address.
A True DAD Lecture
When I give a quick reminder about acting responsibly in the field — using the remark option with your ACLs, running undebug all before you leave a client site, that kind of thing — I usually refer to it as a “dad lecture”. What follows here is a real DAD lecture — the Duplicate Address Detection procedure, that is! It’s also a quick lecture,
because DAD is a very quick process. Basically, DAD is the host attempting to talk to itself, and if the host succeeds in doing so, there’s a duplicate address problem. To perform DAD, the host just sends a Neighbor Solicitation message to its own address.
Then one of two things will
happen:
The host that sent the NS receives a Neighbor Advertisement (NA), which means another host on the link is already using that address, and the host that wanted to use it can’t do so.
The host that sent the NS doesn’t hear anything back, so it’s okay for that host to use its new address.
And that’s it! DAD is just a
quick, handy little check the interface runs when it’s about to use an IPv6 unicast address for the first time, or when an interface that already had an IPv6 address in use is brought down and then back up for any reason. This little double-check can spare you some big headaches!
So What About DNS?
In short, we’ve got to have a DHCP server to get the DNS server info to the hosts. Even though Stateless Autoconfiguration doesn’t eliminate the need for a DHCP server, it comes very close, and there’s a lot less to configure, verify, and maintain when the only thing our DHCP servers are responsible for is getting out the word about the DNS server locations. RFC 6106 lists RA options for
DNS information. That doc is beyond the scope of the CCENT and CCNA exams, but it is worth noting that they’re working on ways to get DNS information to the hosts without using a DHCP server.
http://tools.ietf.org/html/rfc6106
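Until then, a stateless DHCPv6 pool is the usual answer. A minimal sketch, with the pool name, DNS server address, and domain name all being assumptions for illustration:
V6ROUTER1(config)#ipv6 dhcp pool STATELESS-INFO
V6ROUTER1(config-dhcpv6)#dns-server 2001:DB8::53
V6ROUTER1(config-dhcpv6)#domain-name example.com
V6ROUTER1(config-dhcpv6)#exit
V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 dhcp server STATELESS-INFO
V6ROUTER1(config-if)#ipv6 nd other-config-flag
The other-config-flag sets the O flag in the RAs, telling hosts to build their own addresses via autoconfiguration but come back to DHCPv6 for the DNS details.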
Pining for Pinging
Pings and traceroutes work much the same in IPv6 and IPv4. We just have to be aware of a small difference or two. Here are the current addresses of R1 and R3, along with a handy little reminder of a handy little command:
R1:
V6ROUTER1#show ipv6 interface FastEthernet0/0 is up, line pr IPv6 is enabled, link-local a No Virtual link-local address
Global unicast address(es): 2001:1111:2222:1:20C:31FF:FEE
R3:
V6ROUTER3#show ipv6 interface FastEthernet0/0 is up, line pr IPv6 is enabled, link-local a No Virtual link-local address Global unicast address(es): 2001:1111:2222:1:20E:D7FF:FEA
Let’s send a ping between R1 and R3. We can use the good ol’ fashioned ping command….
V6ROUTER1#ping 2001:1111:2222:1:20E:D7FF:FEA4:F4A0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2001:1111:2222:1:20E:D7FF:FEA4:F4A0, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)
… or the extended ping command, where we answer the prompts one at a time:
V6ROUTER1#ping
Protocol [ip]: ipv6
Target IPv6 address: 2001:1111:2222:1:20E:D7FF:FEA4:F4A0
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands? [no]:
Sweep range of sizes? [no]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2001:1111:2222:1:20E:D7FF:FEA4:F4A0, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)
Traceroute works just as it did for v4. Granted, there’s not
much of a path with this setup, but as your v6 networks grow, so will your traceroute output. The escape sequence is the same, too — the only thing that changes is the format of the address you enter. Believe me, you’ll be using ping a lot more than traceroute as you learn IPv6!
V6ROUTER1#traceroute 2001:1111
Type escape sequence to abort. Tracing the route to 2001:1111
1 2001:1111:2222:1:20E:D7FF:F
I don’t want to overwhelm you with show ipv6 commands, since there are quite a few in the IOS (about 40 of them when I looked today), but there is one more I want to introduce you to in this course — show ipv6 neighbors. You can look at all of your router’s neighbors, or you can identify the local router’s interface to filter the output.
V6ROUTER1#show ipv6 neighbors IPv6 Address FE80::20E:D7FF:FEA4:F4A0 2001:1111:2222:1:20E:D7FF:FEA4
Going from left to right:
The IPv6 Address field is certainly self-explanatory.
Age refers to the time, in minutes, since the router last heard from that neighbor.
Link-layer is the MAC address of the neighbor.
State is way beyond the scope of the CCNA exam, but Cisco documents the possible states here:
http://www.cisco.com/en/US/doc xml/ios/ipv6/command/ipv6s4.html#wp1680937550
Interface refers to the local interface through which the neighbor is reached. Speaking of “local”, let’s spend a little time with our IPv6 route types and protocols. With both IPv4 and v6, there
are no routes in the routing table by default. With IPv4, after we put IP addresses on the interfaces and then open them, we expect to see only connected routes. With IPv6, we’re going to see connected routes and a new route type, the local route. For clarity, I’m going to delete the route codes from the table unless we’re actually talking about that route type.
V6ROUTER1#show ipv6 route IPv6 Routing Table — 3 entries Codes: C — Connected, L — Loca
C
2001:1111:2222:1::/64 [0/ via ::, FastEthernet0/0 L 2001:1111:2222:1:20C:31FF via ::, FastEthernet0/0 L FF00::/8 [0/0] via ::, Null0
We expect to see the connected route, but that local route’s a new one on us. The IPv6 router will not only put a connected route into the table in accordance with the subnet configured on the local interfaces, but will also put a /128 host route into the table for the interface’s own address. In this case, that’s R1’s own address on that same Fast
Ethernet segment.
Static and Default Routing with IPv6
Just as with ping and traceroute, both static and default static routing work under the same basic principles in IPv6 as they did in IPv4. We just have to get used to a slightly different syntax! In this lab, we’ll set up connectivity between R1 and a loopback on R3 with a regular static route, then with a default static route.
It won’t surprise you to learn that we create both of these route types with the ipv6 route command, followed by some old friends as options!
V6ROUTER1(config)#ipv6 route 2 Dialer Dialer interf FastEthernet FastEthernet Loopback Loopback inte MFR Multilink Fra Multilink Multilink-gro Null Null interfac Port-channel Ethernet Chan Serial Serial X:X:X:X::X IPv6 address
I removed some of the
available interface types for clarity, but yes, we have much the same choices with IPv6 as we did with IPv4 — the local exit interface or the IP address of the next hop! I personally like to use the next-hop address, since it’s easier to troubleshoot in case of trouble, but you can use either. Just as with IPv4, if you name an interface, make sure it’s the local router’s exit interface, not the far-end router’s. Here, I used R3’s fastethernet0/0 IP address as
the next-hop address, and that command is so long that it brought up the dollar sign in the prompt. Hint: You can always run show ipv6 neighbors to grab the next-hop address via copy and paste rather than typing it in.
V6ROUTER1#show ipv6 neighbors IPv6 Address FE80::20E:D7FF:FEA4:F4A0 2001:1111:2222:1:20E:D7FF:FEA4:F4A0 V6ROUTER1(config)#$2001:2222:3333:1::
Full command from config: ipv6 route 2001:2222:3333:1::/64
2001:1111:2222:1:20E:D7FF:FEA4:F4A0
Let’s send a ping from R1 to R3’s loopback….
V6ROUTER1#ping 2001:2222:3333: Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5
Success, indeed! Let’s run the exact same lab but with a default static route. First, we’ll remove the previous route by using our up arrow and then ctrl-a to go to front of the lonnnng command, and
enter the word “no”:
V6ROUTER1(config)#no ipv6 rout 2001:1111:2222:1:20E:D7F$
Then we’ll enter a default route, IPv6 style:
V6ROUTER1(config)#ipv6 route ::/0 2001:1111:2222:1:20E:D7FF:FEA4:F4A0
That’s right -- ::/0 plus the local router exit interface or nexthop IPv6 address is all you need! We’ll verify with that ping:
V6ROUTER1#ping 2001:2222:3333:
Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5
Ta da! When checking your V6 routing table, be sure to give it a twiceover — it’s really easy to scan right past the routing table entry for the default static route.
V6ROUTER1#show ipv6 route IPv6 Routing Table —; 4 entrie Codes: C — Connected, L — Loca S ::/0 [1/0] via 2001:1111:2222:1:20E:D7FF
C
2001:1111:2222:1::/64 [0/ via ::, FastEthernet0/0 L 2001:1111:2222:1:20C:31FF via ::, FastEthernet0/0 L FF00::/8 [0/0] via ::, Null0
OSPF For IPv6 (AKA the confusingly-named “OSPF Version 3”)
First things first: OSPF for IPv6 is the same thing as “OSPF Version 3”. The OSPF for IPv4 we’ve all come to know and love is “OSPF Version 2”. You rarely see “OSPFv2” used anywhere, so if you see the simple letters “OSPF”, we’re talking about the version for IPv4. Let’s take a look at some basic OSPFv3 commands and
compare OSPF v3 to IPv4’s OSPF v2. In IPv6, you’re not going to start an OSPF configuration with router ospf. One major difference between the OSPF v2 and OSPF v3 is that while OSPF v2 is enabled globally, OSPF v3 is enabled on a per-interface basis. This will automatically create a routing process.
R1(config-if)#ipv6 ospf 1 area 0
One similarity between the two versions is their use of the OSPF RID. OSPF v3 is going to
use the exact same set of rules to determine the local router’s RID — and OSPF v3 is going to use an IPv4 address as the RID! If there is no IPv4 address configured on the router, you’ll need to use our old friend router-id to create the RID. The RID must be entered in IPv4 format, even if you’re only running IPv6 on the router.
R1(config-router)#router-id 1.1.1.1
Other similarities and differences between OSPFv2 and OSPFv3:

Similarities: they both use the same overall terms and concepts when it comes to areas, LSAs, and the OSPF cost metric. Values such as the hello and dead timers must be agreed upon for an adjacency to form, and for that adjacency to remain in place. The SPF algorithm is used by both versions, and dynamic neighbor discovery is supported by both. And just as in OSPFv2, OSPFv3 point-to-point and point-to-multipoint configurations do not elect DRs and BDRs.

Differences: OSPFv3 routers do not have to agree on the prefix length. OSPFv3 headers are smaller than v2 headers, since v3 headers have no authentication fields. The OSPFv2 reserved address 224.0.0.5 is represented in OSPFv3 by FF02::5, and the OSPFv2 reserved address 224.0.0.6 is represented in OSPFv3 by FF02::6.
A Sample OSPFv3 Configuration

As always, we need the ipv6 unicast-routing command to do anything IPv6-related. We also need the ipv6 router ospf 1 command enabled globally.

V6ROUTER1(config)#ipv6 unicast-routing
V6ROUTER1(config)#ipv6 router ?
  eigrp  Enhanced Interior Gateway Routing Protocol
  ospf   Open Shortest Path First
  rip    IPv6 Routing Information Protocol

V6ROUTER1(config)#ipv6 router ospf ?
  Process ID

V6ROUTER1(config)#ipv6 router ospf 1
V6ROUTER1(config-rtr)#
*Nov 5 18:43:56.600: %OSPFv3-4-NORTRID: ...
We never like to start a new config with a notification from the router, but this one's easily resolved. One oddity of OSPFv3 is that the router has to have an IPv4-format dotted decimal value to use as its OSPF RID — and if you have no IPv4 addresses on the router, you must set a RID with the router-id command before you can even start your config!
Crazy, I know, but true, as verified by that console message! Let’s set a RID of 1.1.1.1 on R1 and verify with show ipv6 ospf.
V6ROUTER1(config)#ipv6 router ospf 1
V6ROUTER1(config-rtr)#
*Nov 5 18:43:56.600: %OSPFv3-4-NORTRID: ...
V6ROUTER1(config-rtr)#router-id 1.1.1.1

V6ROUTER1#show ipv6 ospf
 Routing Process "ospfv3 1" with ID 1.1.1.1
Watch that “v6” in all of your “show ospf” commands! Here’s the R3 config:
V6ROUTER3(config)#ipv6 router ospf 1
V6ROUTER3(config-rtr)#
*Nov 5 18:59:45.566: %OSPFv3-4-NORTRID: ...
V6ROUTER3(config-rtr)#router-id 3.3.3.3

V6ROUTER3#show ipv6 ospf
 Routing Process "ospfv3 1" with ID 3.3.3.3
Now we’ll put the Fast 0/0 interfaces on each router into Area 0. I’ll run IOS Help to show you that quite a few options from OSPFv2 are here in OSPFv3:
V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 ospf ?
  Process ID
  authentication
  cost
  database-filter
  dead-interval
  demand-circuit
  encryption
  flood-reduction
  hello-interval
  mtu-ignore
  neighbor
  network
  priority
  retransmit-interval
  transmit-delay

V6ROUTER1(config-if)#ipv6 ospf 1 ?
  area  Set the OSPF area ID

V6ROUTER1(config-if)#ipv6 ospf 1 area ?
  <0-4294967295>  OSPF area ID as a decimal value
  A.B.C.D         OSPF area ID in IP address format

V6ROUTER1(config-if)#ipv6 ospf 1 area 0
R3:
V6ROUTER3(config)#int fast 0/0
V6ROUTER3(config-if)#ipv6 ospf 1 area 0
V6ROUTER3(config-if)#^Z
V6ROUTER3#
*Nov 5 19:03:45.986: %OSPFv3-5-ADJCHG: ...
Seconds after finishing the config on R3, our adjacency is in place! We’ll verify with show ipv6 ospf neighbor, and you’ll see that much of the info from show ip ospf neighbor in IPv4 made the cut to IPv6!
V6ROUTER1#show ipv6 ospf neighbor

Neighbor ID     Pri   State
3.3.3.3          1    FULL/BDR
Now let’s add R3’s loopback interface to the OSPF config by putting it into Area 1, and then check R1’s IPv6 routing table. I’ll leave the OSPF routes in the routing table this time.
V6ROUTER3(config)#int loopback
V6ROUTER3(config-if)#ipv6 ospf 1 area 1
V6ROUTER1#show ipv6 route
IPv6 Routing Table - 4 entries
Codes: C - Connected, L - Local, O - OSPF intra, OI - OSPF inter,
       ON1 - OSPF NSSA ext 1, ...
C   2001:1111:2222:1::/64 [0/0]
     via ::, FastEthernet0/0
L   2001:1111:2222:1:20C:31FF: [0/0]
     via ::, FastEthernet0/0
OI  2001:2222:3333:1:20E:D7FF:
     via FE80::20E:D7FF:FEA4:F4A0, FastEthernet0/0
L   FF00::/8 [0/0]
     via ::, Null0
We have our first inter-area route, and with a familiar pair of values in the brackets for that route!
Let’s ping the loopback from R1….
V6ROUTER1#ping 2001:2222:3333: Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5
… and we’re done! When it comes to verifying and troubleshooting your OSPFv3 configs, you can almost always just put in “ipv6” for “ip” in your OSPFv2 show ip ospf commands and get the same information.
You’ve already seen a few of these, and it can only help to see them again: show ipv6 route ospf will show you only your OSPF-discovered routes, just like show ip route did for OSPFv2.
V6ROUTER1#show ipv6 route ospf
IPv6 Routing Table - 4 entries
Codes: C - Connected, L - Local, U - Per-user Static route,
       I1 - ISIS L1, I2 - ISIS L2, O - OSPF intra, OI - OSPF inter,
       ON1 - OSPF NSSA ext 1, D - EIGRP, EX - EIGRP external, ...
OI  2001:2222:3333:1:20E:D7FF:F
     via FE80::20E:D7FF:FEA4:F4A0, FastEthernet0/0
Here’s another look at show ipv6 ospf neighbor.
V6ROUTER1#show ipv6 ospf neighbor

Neighbor ID     Pri   State      Interface
3.3.3.3          1    FULL/BDR   FastEthernet0/0
One of my favorite troubleshooting commands, show ip protocols, got quite the overhaul with IPv6 as show ipv6 protocols. Here's the output of that command at the end of that last lab.

V6ROUTER1#show ipv6 protocols
IPv6 Routing Protocol is "connected"
IPv6 Routing Protocol is "static"
IPv6 Routing Protocol is "ospf 1"
  Interfaces (Area 0):
    FastEthernet0/0
  Redistribution:
    None
Let’s wrap up with your first OSPFv3 debug! To spot mismatch problems with hello and dead timers, run debug ipv6 ospf hello. I created one before running this debug so you could see the output when there’s a problem — and after our earlier OSPF section,
this output should look familiar!
V6ROUTER1#debug ipv6 ospf hello
OSPFv3 hello events debugging is on
V6ROUTER1#
*Nov 5 19:37:09.454: OSPFv3: R... 7FF:FEA4:F4A0 interface ID 4
*Nov 5 19:37:09.458: OSPFv3: Mismatched hello parameters ...
*Nov 5 19:37:09.458: OSPFv3: Dead ... Hello ...

Let's move forward with more IPv6!
Configuring EIGRP For IPv6

To be frank, once you get used to enabling EIGRPv6 on the interface instead of using the network command, you're gold. Many of the commands we ran in EIGRPv4 work exactly the same as they do in EIGRPv6. There's one oddity I want to introduce you to… and it all started so simply. (With an intro like that you KNOW this had to be bad!) I was setting up a simple little
EIGRP network for this section. It really couldn't be any simpler! It's just one point-to-point link, and I had already sent pings across the link from both sides:
R1#ping 2001:1111:2222:13:3:: Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!!
R3#ping 2001:1111:2222:13:1:: Type escape sequence to abort. Sending 5, 100-byte ICMP Echos !!!!! Success rate is 100 percent (5
All was well. I then configured
the interfaces to join EIGRPv6 AS 100:

R3(config)#int s1/2
R3(config-if)#ipv6 eigrp 100

R1(config)#int s0/1
R1(config-if)#ipv6 eigrp 100
Again, simple! I waited a few seconds for the adjacencies to come up, and when I didn’t see any console messages, I naturally ran show ipv6 eigrp 100 neighbors, and got this:
R1#show ipv6 eigrp 100 neighbors
IPv6-EIGRP neighbors for process 100
% EIGRP 100 is in SHUTDOWN
A routing protocol in shutdown? What's going on here? Never seen a message like that in all my born days! (Wanted to throw in my grandmother's catchphrase.) I'm telling you this for a couple of reasons. First, to remind you that no matter how long you work with this stuff, you're going to run into something you weren't quite ready for sooner or later. Second, as I'm always preaching and teaching, just stay calm when you see a message (usually in debug output) that you haven't seen before. The answer's out there, you just have to find it. That's what we do. Now back to our troubleshooting! After a few seconds, I put that term into Google and the first
match was a Cisco doc outlining a very simple EIGRPv6 network. It wasn't specifically about this issue, but I noticed something odd in the config:

ipv6 router eigrp 100
 no shutdown
Hmm. Then I checked the config on my router, and saw this:

ipv6 router eigrp 100
 shutdown
Wow. Turns out that with newer IOS versions, you have to
actually bring the EIGRPv6 routing process up with no shutdown. Your CCNA exam won't get into IOS version numbers, but it certainly couldn't hurt to know about this little feature! Here's what happened next:
R1(config)#ipv6 router eigrp 100
R1(config-rtr)#no shut

R3(config)#ipv6 router eigrp 100
R3(config-rtr)#no shutdown
R1#show ipv6 eigrp 100 neighbors
IPv6-EIGRP neighbors for process 100
H   Address                     Interface
0   Link-local address:         Se0/1
    FE80::20E:D7FF:FEA4:F4A0
There we go! Now that we’ve taken care of that, let’s look at this fundamental EIGRPv6 config — WITH the no shutdown command!
ipv6 unicast-routing
!
interface Serial0/1
 no ip address
 ipv6 address 2001:1111:2222:1
 ipv6 eigrp 100
!
ipv6 router eigrp 100
 no shutdown
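Since the full IPv6 addresses were trimmed in that output, here's a sketch of what both ends of the link look like when you put it all together. I'm assuming the serial link uses 2001:1111:2222:13::/64 with R1 at :1:: and R3 at :3::, based on the ping targets we used earlier; adjust to your own addressing (and add a router-id if the router has no IPv4 addresses configured, more on that in a moment).

R1:
ipv6 unicast-routing
interface Serial0/1
 ipv6 address 2001:1111:2222:13:1::/64
 ipv6 eigrp 100
ipv6 router eigrp 100
 no shutdown

R3:
ipv6 unicast-routing
interface Serial1/2
 ipv6 address 2001:1111:2222:13:3::/64
 ipv6 eigrp 100
ipv6 router eigrp 100
 no shutdown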
A few notes and troubleshooting comments on this config: IPv6 routing is enabled with the ipv6 unicast-routing command. If you leave that out and jump straight to your IPv6 config, here’s the result:
R1(config)#ipv6 router eigrp 100
% IPv6 routing not enabled
The solution, of course, is to enable it.
R1(config)#ipv6 unicast-routing
If we leave the router ID out and have no IPv4 addresses on the router, we're going to get some attitude from EIGRPv6 -- and not necessarily when you actually configure the routing protocol.

R1(config)#int s0/1
R1(config-if)#ipv6 eigrp 100
R1#show ipv6 eigrp neighbors
IPv6-EIGRP neighbors for process 100
% No router ID for EIGRP 100

R1#show ipv6 eigrp int
IPv6-EIGRP interfaces for process 100
% No router ID for EIGRP 100
The solution, of course, is to configure one. (Seems to be an echo around here!)
R1(config)#ipv6 router eigrp 100
R1(config-rtr)#router-id ?
  A.B.C.D  EIGRP Router-ID in IP address format

R1(config-rtr)#router-id 1.1.1.1
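Putting the two gotchas together (the shutdown default and the missing RID), the safe way to bring up the process looks like this quick sketch; the RID value is simply the one we've been using throughout:

R1(config)#ipv6 router eigrp 100
R1(config-rtr)#router-id 1.1.1.1
R1(config-rtr)#no shutdown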
EIGRPv6 Similarities and Differences with EIGRPv4

Let's start with the similarities: Both use the maximum-paths and variance commands, and they work in exactly the same fashion. They both require a RID. You can use the bandwidth and delay interface-level commands to tweak route metrics. The hello and hold timer concepts are the same.
Naturally, the commands are a little different:
R1(config)#int serial 0/1
R1(config-if)#ipv6 hello-interval eigrp 100 ?
  Seconds between hello transmissions
R1(config-if)#ipv6 hold-time eigrp 100 ?
  Seconds before neighbor is considered down
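As a sketch, with values I picked purely for illustration, setting a 10-second hello and a 30-second hold time on that interface would look like this:

R1(config)#int serial 0/1
R1(config-if)#ipv6 hello-interval eigrp 100 10
R1(config-if)#ipv6 hold-time eigrp 100 30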
The EIGRPv4 commands also go on the interface. Just drop the “v6” part of “ipv6” and the commands are exactly the same. Routes that are redistributed into EIGRPv6 are marked “EX” and have an admin distance of
170. Here, I’ve redistributed two connected routes into EIGRP on R3. On R1, they show as external EIGRP routes and have a higher AD than they would if I had used the ipv6 eigrp 100 command on the interfaces themselves.
R3(config-rtr)#redistribute connected ?
  metric     Metric for redistributed routes
  route-map  Route map reference

Working through IOS Help for the metric option, you supply the usual five EIGRP metric values in order: bandwidth, delay, reliability, effective bandwidth (load), and MTU.

R3(config-rtr)#redistribute connected metric ...
R1#show ipv6 route eigrp
IPv6 Routing Table - 5 entries
Codes: EX - EIGRP external ...
EX  2001:1111:2222:23::/64 [170/...]
     via FE80::20E:D7FF:FEA4:F4A0, Serial0/1
EX  2001:1111:2222:33::/64 [170/...]
     via FE80::20E:D7FF:FEA4:F4A0, Serial0/1
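For reference, here's a sketch of what the full redistribution config on R3 might look like. The five metric values are placeholders I chose for illustration, not the exact values from my lab:

R3(config)#ipv6 router eigrp 100
R3(config-rtr)#redistribute connected metric 10000 100 255 1 1500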
Both versions allow the configuration of passive interfaces. You can see that along with the other commands mentioned in this section here:
R1(config)#ipv6 router eigrp 100
R1(config-rtr)#?
  default
  distance
  distribute-list
  exit
  log-neighbor-changes
  log-neighbor-warnings
  maximum-paths
  metric
  neighbor
  no
  passive-interface
  redistribute
  router-id
  shutdown
  stub
  timers
  variance

Note the shutdown option. Ahem.
Differences between EIGRPv4 and EIGRPv6: There’s no auto-summary command with EIGRPv6! If you’ve been working with EIGRPv4, don’t freak out when you try to use this command with EIGRPv6 and get this message:
R1(config)#ipv6 router eigrp 100
R1(config-rtr)#no auto-summary
                  ^
% Invalid input detected at '^' marker.
Unlike EIGRPv4, EIGRPv6
routers can become neighbors even if they're in different subnets.
A Little More OSPF

We'll just add a bit more to the ICND1 material here, along with a review of a typical OSPFv3 config. (And a gentle reminder that OSPF for IPv6 is technically OSPFv3.) We'll also compare OSPFv2 (OSPF for IPv4) and OSPFv3 and note the similarities and differences. Here we go!
Never hurts to start your config with enabling IPv6…..
R3(config)#ipv6 unicast-routing
… and then starting your OSPF config.
R3(config)#ipv6 router ospf 1
R3(config-rtr)#
%OSPFv3-4-NORTRID: OSPFv3 proc...
The solution, of course, is to configure one. (There’s that echo again!)
R3(config-rtr)#router-id 3.3.3.3
No reload or clearing of OSPF processes is necessary here, since the OSPF process hasn't actually started yet.

R3#show ipv6 ospf
 Routing Process "ospfv3 1" with ID 3.3.3.3

Then place your interfaces into the appropriate OSPF process. Another gentle reminder: the first number in this command is the process ID, which is locally significant only, and the second number is the area number.

R3(config)#int s1/2
R3(config-if)#ipv6 ospf ?
  Process ID
R3(config-if)#ipv6 ospf 1 area 0
R1(config)#ipv6 router ospf 5
R1(config-rtr)#router-id 1.1.1.1
R1(config)#int s0/1
R1(config-if)#ipv6 ospf 5 area 0
Seconds later, the adjacency forms:
%OSPFv3-5-ADJCHG: Process 5, N...

R1#show ipv6 ospf neighbor

Neighbor ID     Pri   State
3.3.3.3          1    FULL/ -
The neighbors are using different OSPF process IDs, but the adjacency forms anyway, since that value is locally significant only and doesn’t affect the adjacency in any way. I know you’re thinking I’m beating this point to death and halfway back to life, but you’ll thank me later. Also note this link doesn’t have a DR or BDR listed under “State”. OSPF point-to-point links won’t elect either. There’s no need to choose a router to
flood news of changes on that link, since by definition a point-to-point link has only two neighbors on the link. When one neighbor tells the other about a change in the network, there's no one left on that segment to tell! Multi-area OSPFv3 networks are configured in the same way you configured OSPFv2 networks earlier in this course. The commands are a little different, but the concepts remain the same. We'll add a point-to-point link between R2 and R3
to our config.

R2:
ipv6 unicast-routing
!
interface Serial0/1
 no ip address
 ipv6 address 2001:1111:2222:2
 ipv6 ospf 1 area 23
!
ipv6 router ospf 1
 router-id 2.2.2.2

R3:
ipv6 router ospf 1
 router-id 3.3.3.3
!
interface Serial1/3
 no ip address
 ipv6 address 2001:1111:2222:2
 ipv6 ospf 1 area 23
 clock rate 56000
R3#show ipv6 ospf neighbor

Neighbor ID     Pri   State      Dead Time
1.1.1.1          1    FULL/ -    00:...
2.2.2.2          1    FULL/ -    00:...
Nothing to it! Just keep your OSPFv2 rules in mind when you’re working with OSPFv3, and you’re gold! In this config, each area contains a router
with a physical or logical connection to the backbone area, so our design is legal. That rule’s not the only similarity between the two OSPFs. Here are some others: Potential neighbors must agree on hello and dead timers, as shown here:
R1(config)#int s0/1
R1(config-if)#ipv6 ospf hello-...
R1#show ipv6 ospf neigh
*Aug 5 07:17:24.504: %OSPFv3-5-ADJCHG: ...
R1(config)#int s0/1
R1(config-if)#no ipv6 ospf hello-interval
%OSPFv3-5-ADJCHG: Process 5, N...
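If you want to recreate that little disaster yourself, here's a sketch with a made-up value: bump the hello interval on one side only, watch the adjacency drop, then remove the command and watch it come back.

R1(config)#int s0/1
R1(config-if)#ipv6 ospf hello-interval 30
R1(config-if)#no ipv6 ospf hello-interval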
The overall OSPF neighbor discovery process via Hello packets is the same. Both versions use the maximum-paths command to control how many paths OSPF uses for equal-cost load balancing. Both versions use the default-information originate command to advertise a default route, and yes, that all-important always option is still there!
R3(config)#ipv6 router ospf 1
R3(config-rtr)#default-information originate ?
  always       Always advertise default route
  metric       OSPF default metric
  metric-type  OSPF metric type
  route-map    Route-map reference
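And a quick sketch of it in use, here with the always option so R3 advertises a default route whether or not it actually has one of its own:

R3(config)#ipv6 router ospf 1
R3(config-rtr)#default-information originate always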
Our DR/BDR elections are carried out in the same fashion in both versions. A router that detects a network change notifies the DR and BDR of that segment directly, and the DR floods news of the change throughout the segment. The adjacency states remain the same and can be viewed with show ipv6 ospf neighbor.
The terms “ABR” and “ASBR” mean the same thing, and in this network, R3 is an Area Border Router. Verify this (and get a lot of additional great information) with show ipv6 ospf.
R3#show ipv6 ospf
 Routing Process "ospfv3 1" with ID 3.3.3.3
 It is an area border router
 SPF schedule delay 5 secs, Ho...
 Minimum LSA interval 5 secs ...
 LSA group pacing timer 240 secs
 Interface flood pacing timer ...
 Retransmission pacing timer 6... msecs
 Number of external LSA 0. Checksum ...
 Number of areas in this router is 2
 Reference bandwidth unit is 100 mbps
    Area BACKBONE(0)
        Number of interfaces in this area is 1
        SPF algorithm executed 10 times
        Number of LSA 7. Checksum Sum ...
        Number of DCbitless LSA 0
        Number of indication LSA 0
        Number of DoNotAge LSA 0
        Flood list length 0
    Area 23
        Number of interfaces in this area is 1
        SPF algorithm executed 9 times
        Number of LSA 8. Checksum Sum ...
        Number of DCbitless LSA 0
        Number of indication LSA 0
        Number of DoNotAge LSA 0
        Flood list length 0
An ASBR is an OSPF router performing route redistribution
into OSPF, and we saw a demo of that in the OSPFv2 section. More similarities…. You can change OSPFv3 costs with the (nearly) same commands we used in OSPFv2: change the interface cost directly (ipv6 ospf cost), use the bandwidth interface-level command, or use the auto-cost reference-bandwidth command (and keep it uniform throughout your network).
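Sketching all three approaches, with values I picked purely for illustration (you'd use one of these, not all three at once):

R3(config)#int serial 1/2
R3(config-if)#ipv6 ospf cost 64
R3(config-if)#bandwidth 512
R3(config)#ipv6 router ospf 1
R3(config-rtr)#auto-cost reference-bandwidth 1000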
The show ip ospf interface command thankfully carried over as show ipv6 ospf interface, and this command gives you a treasure trove of tshooting and verification info.
R3#show ipv6 ospf int serial 1/2
Serial1/2 is up, line protocol is up
  Link Local Address FE80::20E:...
  Area 0, Process ID 1, Instance ...
  Network Type POINT_TO_POINT, ...
  Transmit Delay is 1 sec, State ...
  Timer intervals configured, Hello ...
  Hello due in 00:00:07
  Index 1/1/1, flood queue length ...
  Next 0x0(0)/0x0(0)/0x0(0)
  Last flood scan length is 1, ...
  Last flood scan time is 0 msec ...
  Neighbor Count is 1, Adjacent ...
  Adjacent with neighbor 1.1.1.1
  Suppress hello for 0 neighbor(s)
There’s a brief version of that command, helpfully named show ipv6 ospf interface brief.
R3#show ipv6 ospf int brief
Interface    PID   Area   Intf ID ...
Se1/2        1     0
Se1/3        1     23
Show ip protocols carried over as well, and while the output does look a lot different in IPv6, it’s still a helpful command.
R3#show ipv6 protocols
IPv6 Routing Protocol is "connected"
IPv6 Routing Protocol is "static"
IPv6 Routing Protocol is "ospf 1"
  Interfaces (Area 0):
    Serial1/2
  Interfaces (Area 23):
    Serial1/3
  Redistribution:
    None
You can still use passive interfaces in OSPFv3.
R3(config)#ipv6 router ospf 1
R3(config-rtr)#passive-interface ?
  Async  Async interface
  BVI    Bridge-Group Virtual Interface
Etc….
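For example, this sketch suppresses OSPFv3 on one interface (the interface name here is just an example; pick one that isn't carrying an adjacency you need):

R3(config)#ipv6 router ospf 1
R3(config-rtr)#passive-interface fastethernet0/0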
And now — some differences between the two! Potential OSPF neighbors no longer have to be on the same subnet in order to form an adjacency. There are some LSA differences, including the renaming of Type 3 LSAs from "summary" to "inter-area", but they're beyond the scope of the exam. I think that's enough IPv6 for now! Let's head to the next
section!
Mastering Binary Math and Subnetting

I want to make sure everyone's covered on this vital subject, so you'll find this info in both my ICND1 and ICND2 books. If you worked with my ICND1 book, work with this info again — it's that important and you need as much practice as you can get for your big day!
Converting Binary To Dotted Decimal

It's easy to overlook the importance of this section, or just to say, "Hey, I know how to do that, I'm going to the next section." Don't do that. Success in networking is all about mastering the fundamentals, and that's true more of subnetting than any other single feature on the CCENT and CCNA exams. When you master the fundamentals and then continually practice applying them, you can answer any question Cisco or a job interviewer asks you. That philosophy has worked for thousands of CCENT and CCNA candidates around the world, and it'll work for you. Let's jump right in to a typical binary-to-decimal conversion. Convert 01100010 00111100 11111100 01010101 to dotted decimal. To answer this, we'll use this simple chart:
             128  64  32  16   8   4   2   1
1st octet:
2nd octet:
3rd octet:
4th octet:

Just plug the binary values under the 128, 64, etc., add 'em up, and you're gold! Filling it in from left to right, here's the first octet conversion.

             128  64  32  16   8   4   2   1
1st octet:     0   1   1   0   0   0   1   0
There are ones in the column for 64, 32, and 2. Just add them up, and that is the decimal value for the first octet -- 98. Repeat the process for each octet, and you quickly have the dotted decimal equivalent of the binary string — in this case, 98.60.252.85.
             128  64  32  16   8   4   2   1
1st octet:     0   1   1   0   0   0   1   0   = 98
2nd octet:     0   0   1   1   1   1   0   0   = 60
3rd octet:     1   1   1   1   1   1   0   0   = 252
4th octet:     0   1   0   1   0   1   0   1   = 85
You certainly don’t have to write out “1st ”, “2nd”, etc. I do recommend you still write out “128”, “64”, and so forth. It’s just too easy to skip over a number when you don’t write those out, and we’re not here to give away exam points — we’re here to take them! Let’s get in some more practice
with binary-to-decimal, and then we’ll move on to the next fundamental conversion skill.
Binary-To-Decimal Practice Questions

Convert each binary string to dotted decimal.

The string: 11110000 00110101 00110011 11111110
Octet by octet: 11110000 = 240, 00110101 = 53, 00110011 = 51, 11111110 = 254
Answer: 240.53.51.254

The string: 00001111 01101111 00011100 00110001
Octet by octet: 00001111 = 15, 01101111 = 111, 00011100 = 28, 00110001 = 49
Answer: 15.111.28.49

The string: 11100010 00000001 11001010 01110110
Octet by octet: 11100010 = 226, 00000001 = 1, 11001010 = 202, 01110110 = 118
Answer: 226.1.202.118

The string: 01010101 11111101 11110010 00010101
Octet by octet: 01010101 = 85, 11111101 = 253, 11110010 = 242, 00010101 = 21
Answer: 85.253.242.21

The string: 00000010 11111001 00110111 00111111
Octet by octet: 00000010 = 2, 11111001 = 249, 00110111 = 55, 00111111 = 63
Answer: 2.249.55.63

The string: 11001001 01011111 01111111 11111110
Octet by octet: 11001001 = 201, 01011111 = 95, 01111111 = 127, 11111110 = 254
Answer: 201.95.127.254

The string: 11111000 00000111 11111001 01100110
Octet by octet: 11111000 = 248, 00000111 = 7, 11111001 = 249, 01100110 = 102
Answer: 248.7.249.102

The string: 00111110 11111111 01011010 01111110
Octet by octet: 00111110 = 62, 11111111 = 255, 01011010 = 90, 01111110 = 126
Answer: 62.255.90.126

The string: 11001101 11110000 00001111 10111111
Octet by octet: 11001101 = 205, 11110000 = 240, 00001111 = 15, 10111111 = 191
Answer: 205.240.15.191

The string: 10011001 11110000 01111111 00100101
Octet by octet: 10011001 = 153, 11110000 = 240, 01111111 = 127, 00100101 = 37
Answer: 153.240.127.37

The string: 11011111 01110110 11000011 00111111
Octet by octet: 11011111 = 223, 01110110 = 118, 11000011 = 195, 00111111 = 63
Answer: 223.118.195.63

The string: 00000100 00000111 00001111 00000001
Octet by octet: 00000100 = 4, 00000111 = 7, 00001111 = 15, 00000001 = 1
Answer: 4.7.15.1

The string: 11000000 00000011 11011011 00100101
Octet by octet: 11000000 = 192, 00000011 = 3, 11011011 = 219, 00100101 = 37
Answer: 192.3.219.37

The string: 10000000 01111111 00110011 10000011
Octet by octet: 10000000 = 128, 01111111 = 127, 00110011 = 51, 10000011 = 131
Answer: 128.127.51.131

The string: 11111011 11110111 11111100 11111000
Octet by octet: 11111011 = 251, 11110111 = 247, 11111100 = 252, 11111000 = 248
Answer: 251.247.252.248

Great work! Before we move on, let me share a bonus exam prep tip
with you. The only thing you need to practice this skill is a piece of paper and something to write with, and you don't need to practice for consecutive hours. When you have 10 minutes to yourself at work or home, spend that time jotting down strings of 1s and 0s and then converting them to dotted decimal. That little bit of time spent practicing REALLY adds up in the end!
With that said, let’s move forward!
Converting Decimal To Binary

"Second verse, not quite the same as the first…." We're pretty much doing the same thing that we did in the first section, just in reverse. Makes sense, right? Well, it will once we go through some examples. This is definitely one of those skills that seems REALLY complicated when you read about it, but when you do it, you realize how easy it is!
Let’s practice with the decimal 217. 128 64 32 16 8 4 2 217
You must now determine whether each column should have a “1” or a “0”. Work from left to right, and ask this question: “Can I subtract this column’s value from the current octet value with the result being a positive number or zero?”
If so, perform the subtraction, put a “1” in the column, and go to the next column. If not, place a “0” in the column, and repeat the process for the next column. It takes much longer to explain than to actually do. Let’s look at that chart again: 128 64 32 16 8 4 2 217 Can 128 be subtracted from 217, and result in zero or a positive number? Sure, with the
result being 89. Put a “1” in the 128 column and go to the next column, repeating the operation with the new result. 128 64 32 16 8 4 2 217 1
Can 64 be subtracted from the new result, 89? Yes, with a remainder of 25. Put a “1” in the 64 column and repeat the operation in the next column, using the new result of 25. 128 64 32 16 8 4 2
217 1
1
Can 32 be subtracted from 25, with the remainder being 0 or a positive number? No. Place a “0” in the 32 column, and repeat the operation in the next column with the value of 25. 128 64 32 16 8 4 2 217 1 1 0
Can 16 be subtracted from 25? Yes, with a remainder of 9.
Place a “1” in the 16 column, and go to the next column with the new value of 9. 128 64 32 16 8 4 2 217 1 1 0 1
Can 8 be subtracted from 9? Yes, with a remainder of 1. Place a “1” in the 8 column, and repeat the operation in the next column with a remainder of 1. 128 64 32 16 8 4 2
217 1
1
0
1
1
We can quickly see that neither of the next two columns, 4 or 2, can be subtracted from 1. Place a “0” in both of those columns. 128 64 32 16 8 4 2 217 1 1 0 1 1 0 0
Subtracting 1 from 1 brings us to zero, and also to the end of the columns. Place a “1” in the 1 column, and you have the binary equivalent of the
decimal 217. 128 64 32 16 8 4 2 217 1 1 0 1 1 0 0
The binary equivalent of the decimal 217 is 11011001. Two points of note: You can never have a value greater than “1” in any bit. You should never have a remainder at the end of the line. If you do, you
need to go back and do it again. :) Let’s get in some more work with this vital skill!
Converting Decimal To Binary Questions

The address: 100.10.1.200
Octet by octet: 100 = 01100100, 10 = 00001010, 1 = 00000001, 200 = 11001000
Answer: 01100100 00001010 00000001 11001000

The address: 190.4.89.23
Octet by octet: 190 = 10111110, 4 = 00000100, 89 = 01011001, 23 = 00010111
Answer: 10111110 00000100 01011001 00010111

The address: 10.255.18.244
Octet by octet: 10 = 00001010, 255 = 11111111, 18 = 00010010, 244 = 11110100
Answer: 00001010 11111111 00010010 11110100

The address: 240.17.23.239
Octet by octet: 240 = 11110000, 17 = 00010001, 23 = 00010111, 239 = 11101111
Answer: 11110000 00010001 00010111 11101111

The address: 217.34.39.214
Octet by octet: 217 = 11011001, 34 = 00100010, 39 = 00100111, 214 = 11010110
Answer: 11011001 00100010 00100111 11010110

The address: 20.244.182.69
Octet by octet: 20 = 00010100, 244 = 11110100, 182 = 10110110, 69 = 01000101
Answer: 00010100 11110100 10110110 01000101

The address: 198.3.148.245
Octet by octet: 198 = 11000110, 3 = 00000011, 148 = 10010100, 245 = 11110101
Answer: 11000110 00000011 10010100 11110101

The address: 14.204.71.250
Octet by octet: 14 = 00001110, 204 = 11001100, 71 = 01000111, 250 = 11111010
Answer: 00001110 11001100 01000111 11111010

The address: 7.209.18.47
Octet by octet: 7 = 00000111, 209 = 11010001, 18 = 00010010, 47 = 00101111
Answer: 00000111 11010001 00010010 00101111

The address: 249.74.65.43
Octet by octet: 249 = 11111001, 74 = 01001010, 65 = 01000001, 43 = 00101011
Answer: 11111001 01001010 01000001 00101011

The address: 150.50.5.55
Octet by octet: 150 = 10010110, 50 = 00110010, 5 = 00000101, 55 = 00110111
Answer: 10010110 00110010 00000101 00110111

The address: 19.201.45.194
Octet by octet: 19 = 00010011, 201 = 11001001, 45 = 00101101, 194 = 11000010
Answer: 00010011 11001001 00101101 11000010

The address: 43.251.199.207
Octet by octet: 43 = 00101011, 251 = 11111011, 199 = 11000111, 207 = 11001111
Answer: 00101011 11111011 11000111 11001111

The address: 42.108.93.224
Octet by octet: 42 = 00101010, 108 = 01101100, 93 = 01011101, 224 = 11100000
Answer: 00101010 01101100 01011101 11100000

The address: 180.9.34.238
Octet by octet: 180 = 10110100, 9 = 00001001, 34 = 00100010, 238 = 11101110
Answer: 10110100 00001001 00100010 11101110

The address: 243.79.68.30
Octet by octet: 243 = 11110011, 79 = 01001111, 68 = 01000100, 30 = 00011110
Answer: 11110011 01001111 01000100 00011110

Great work! Now we'll start applying these fundamentals to some real-world scenarios!
Determining The Number Of Valid Subnets

Once the subnetting's been done, it would be a really good idea to know how many subnets you have to go around! Actually, you should calculate that number before you do the actual subnetting. In this question type, the subnetting's already been performed and we have to come up with the number of valid subnets. Here's the best part — with
enough practice, you’ll be able to answer these questions in less than a minute, and without writing much (if anything) down on your exam board! Here’s a typical “number of valid subnets” question: “How many valid subnets exist on the 10.0.0.0 /12 network?” “How many valid subnets exist on the 10.0.0.0 255.240.0.0 network?” These examples are actually
asking the same thing, just in different formats. You're familiar with the standard dotted decimal mask, but what about the number following the slash in the first version of the question? This is prefix notation, and it's the more common way of expressing a subnet mask. The number behind the slash indicates how many consecutive ones there are at the beginning of this mask. The dotted decimal mask 255.240.0.0, converted to binary, is 11111111 11110000 00000000 00000000. (If you're unsure how this value is derived, review Section Three.) There are twelve ones at the beginning of the mask, and that's where the "/12" comes from. Why use this method of expressing a mask? It's easier to write and to say. Try expressing a Class C network mask out loud as "two fifty five, two fifty five, two fifty five, zero" a couple of times, then try saying "slash twenty-four".
See what I mean? You're going to hear the prefix notation version of network masks mentioned more often than someone reading out the entire mask, so familiarize yourself with expressing masks in this fashion. You're likely to see both dotted decimal masks and prefix notation on any Cisco exam. Now let's get in some practice! In print, this seems like a long operation, but once you're doing it, it's not. Before you can determine the
number of valid subnets with a given network number and subnet mask, you must know the network masks for Class A, B, and C networks. They are listed here for review:

                     Class A      Class B        Class C
1st Octet Range      1 - 126      128 - 191      192 - 223
Network Mask         255.0.0.0    255.255.0.0    255.255.255.0
# of Network Bits    8            16             24
# of Host Bits       24           16             8
Subnetting always borrows bits from the host bits — always. To determine the number of valid subnets, you first have to know how many subnet bits there are. Let’s look at the example question again: How many valid subnets exist on the 10.0.0.0 /12 network? There are two ways to determine the number of
subnet bits. The first method is longer, but it shows you exactly what’s going on with the subnets. The second method is much shorter, and you should feel free to use that one when you’re comfortable with the first one. By looking at the network number, we see this is a Class A network. By default, a Class A network mask has 8 network bits and 24 host bits. In this mask, 12 bits are set to 1. This indicates that four host bits
have been borrowed for subnetting. The subnet bits are the first four bits of the second octet, shown here in the SN mask:

             1st Octet   2nd Octet   3rd Octet   4th Octet
Class A
NW Mask      11111111    00000000    00000000    00000000
SN Mask      11111111    11110000    00000000    00000000

Now that you know how many subnet bits there are, place
that number into this formula: The number of valid subnets = (2 raised to the power of the number of subnet bits) We have four subnet bits, so we need to raise 2 to the 4th power. When you multiply 2 by itself four times (2 x 2 x 2 x 2), you get 16, and that’s how many valid subnets we have. That’s all there is to it! Let’s go through another
example, and we won’t draw a chart for this one. All you need is your knowledge of network masks and a little math, and you’re done! “How many valid subnets exist on the 150.10.0.0 /21 network?” This is a Class B network, so we know the network mask is 255.255.0.0, or /16. The subnet mask is /21. Just subtract the number of “1”s in the network mask from the number of 1s in the subnet mask, and you have the
number of subnet bits. 21 − 16 = 5, and 2 to the 5th power equals 32 valid subnets. It’s just that simple! Once you’re done with these practice questions, practice writing your own questions and solving them — that’s the ultimate way to practice this vital skill, and you can’t beat the cost! I’ll list the networks and masks here, and you’ll find the
answers after this list. No peeking! How many valid subnets exist on each of the following networks? 15.0.0.0 /13 222.10.1.0 / 30 145.45.0.0 /25
20.0.0.0 255.192.0.0 130.30.0.0 255.255.224.0 128.10.0.0 /19 99.0.0.0 /17 222.10.8.0 /28 20.0.0.0 255.254.0.0
210.17.90.0 /29 130.45.0.0 /26 200.1.1.0 /26 45.0.0.0 255.240.0.0 222.33.44.0 255.255.255.248 23.0.0.0 255.255.224.0
“Number Of Valid Subnets” Questions and Answers

Note: The NW mask and SN mask are written out for each question. You don't have to write them out if you're comfortable with the quicker method.

15.0.0.0 /13
Class A, 8 network bits. Subnet mask listed is /13. 13 − 8 = 5, and 2 to the 5th power = 32 valid subnets.
NW Mask: 11111111 00000000 00000000 00000000
SN Mask: 11111111 11111000 00000000 00000000

222.10.1.0 /30
Class C, 24 network bits. 30 − 24 = 6, and 2 to the 6th power = 64 valid subnets.
NW Mask: 11111111 11111111 11111111 00000000
SN Mask: 11111111 11111111 11111111 11111100

145.45.0.0 /25
Class B, 16 network bits. 25 − 16 = 9, and 2 to the 9th power = 512 valid subnets.
NW Mask: 11111111 11111111 00000000 00000000
SN Mask: 11111111 11111111 11111111 10000000

20.0.0.0 255.192.0.0
Class A, 8 network bits. Subnet mask converts to /10 in prefix notation. 10 − 8 = 2, and 2 to the 2nd power = 4 valid subnets.
NW Mask: 11111111 00000000 00000000 00000000
SN Mask: 11111111 11000000 00000000 00000000

130.30.0.0 255.255.224.0
Class B, 16 network bits. Subnet mask converts to /19 in prefix notation. 19 − 16 = 3, and 2 to the 3rd power = 8 valid subnets.
NW Mask: 11111111 11111111 00000000 00000000
SN Mask: 11111111 11111111 11100000 00000000

128.10.0.0 /19
Class B, 16 network bits. 19 − 16 = 3, and 2 to the 3rd power = 8 valid subnets.
NW Mask: 11111111 11111111 00000000 00000000
SN Mask: 11111111 11111111 11100000 00000000

99.0.0.0 /17
Class A, 8 network bits. 17 − 8 = 9, and 2 to the 9th power = 512 valid subnets.
NW Mask: 11111111 00000000 00000000 00000000
SN Mask: 11111111 11111111 10000000 00000000

222.10.8.0 /28
Class C, 24 network bits. 28 − 24 = 4, and 2 to the 4th power = 16 valid subnets.
NW Mask: 11111111 11111111 11111111 00000000
SN Mask: 11111111 11111111 11111111 11110000

20.0.0.0 255.254.0.0
Class A, 8 network bits. Mask converts to /15 in prefix notation. 15 − 8 = 7, and 2 to the 7th power = 128 valid subnets.
NW Mask: 11111111 00000000 00000000 00000000
SN Mask: 11111111 11111110 00000000 00000000

210.17.90.0 /29
Class C, 24 network bits. 29 − 24 = 5, and 2 to the 5th power = 32 valid subnets.
NW Mask: 11111111 11111111 11111111 00000000
SN Mask: 11111111 11111111 11111111 11111000

130.45.0.0 /26
Class B, 16 network bits. 26 − 16 = 10, and 2 to the 10th power = 1024 valid subnets.
NW Mask: 11111111 11111111 00000000 00000000
SN Mask: 11111111 11111111 11111111 11000000

200.1.1.0 /26
Class C, 24 network bits. 26 − 24 = 2, and 2 to the 2nd power = 4 valid subnets.
NW Mask: 11111111 11111111 11111111 00000000
SN Mask: 11111111 11111111 11111111 11000000

45.0.0.0 255.240.0.0
Class A, 8 network bits. SN mask converts to /12 in prefix notation. 12 − 8 = 4, and 2 to the 4th power = 16 valid subnets.
NW Mask: 11111111 00000000 00000000 00000000
SN Mask: 11111111 11110000 00000000 00000000

222.33.44.0 255.255.255.248
Class C, 24 network bits. SN mask converts to /29 in prefix notation. 29 − 24 = 5, and 2 to the 5th power = 32 valid subnets.
NW Mask: 11111111 11111111 11111111 00000000
SN Mask: 11111111 11111111 11111111 11111000

23.0.0.0 255.255.224.0
Class A, 8 network bits. SN mask converts to /19. 19 − 8 = 11, and 2 to the 11th power = 2048 valid subnets.
NW Mask: 11111111 00000000 00000000 00000000
SN Mask: 11111111 11111111 11100000 00000000
And that’s it! Once you practice this question type, you’ll nail the questions accurately and quickly — and you’ll see the
same is true of our next question type!

Determining The Number Of Valid Hosts On A Subnet

As in the previous section, the subnetting's been done, and we're now being asked to come up with a value regarding that subnetting. In this case, we need to come up with the number of valid hosts per subnet. We first need to know how many host bits are in the
subnet mask, and there’s a lightning-fast way to figure that out: (32 — the number of 1s in the mask) = # of host bits
That’s all there is to it! Using 200.10.10.0 /26 as an example, all you do is subtract 26 from 32, which gives us 6 host bits. Then plug that number into this simple formula: (2 raised to the power of the number of host bits) — 2 2 to the 6th power is 64, and 6−2 = 62. That’s your number of valid host addresses! With practice, you’ll easily figure this out for any subnet in
well under a minute. A couple of things to watch out for: Note this formula uses the number of host bits, not the number of subnet bits. We subtract 2 from the almost-final answer. What’s going on with that “-2” at the end? That accounts for the two following unusable host addresses:
The first address in the range is the subnet number itself. The last address in the range is the subnet’s broadcast address. Since neither of these addresses should be assigned to hosts, we need to subtract 2 when calculating the number of valid hosts in a subnet. Since practice makes perfect CCENTs and CCNAs, let’s get in some practice with this question type. I’ve broken the
answers down to the bit level, since you need both the right answer and how we arrived at that answer! Feel free not to write the masks out on exam day. To avoid the unbearable pressure of not peeking at the answers, the questions are listed together first, followed by the answers and explanations. Let’s get started! The Questions Determine how many valid host addresses exist in each of the
following subnets:

220.11.10.0 /26
129.15.0.0 /21
222.22.2.0 /30
212.10.3.0 /28
14.0.0.0 /20
221.10.78.0 255.255.255.224
143.34.0.0 255.255.255.192
128.12.0.0 255.255.255.240
125.0.0.0 /24
221.10.89.0 255.255.255.248
134.45.0.0 /22
The answers…. 220.11.10.0 /26 Nothing to this. Subtract the length of the subnet mask from 32 and you have your number of host bits. In this case, that’s 6, and 2 to the 6th power is 64. Subtract 2 and you have 62 valid host addresses. 129.15.0.0 /21 Subtract the mask length from 32. That gives us 11. 2 to the 11th power equals
2048. Subtract 2 from that and 2046 valid host addresses remain. 222.22.2.0 /30 Subtract the mask length from 32. That gives us 2. 2 to the 2nd power equals 4. Subtract 2 from that and 2 valid host addresses remain. 212.10.3.0 /28 Subtract the mask length from 32. That gives us 4.
2 to the 4th power equals 16. Subtract 2 from that and 14 valid host addresses remain. 14.0.0.0 /20 Subtract the mask length from 32, and we have 12. 2 to the 12th power is 4096; subtract 2 from that and 4094 valid host addresses remain. 221.10.78.0 255.255.255.224 Subtract the mask length from 32. That mask has its first 27
bits set to 1, so in prefix notation that’s /27. 32 − 27 = 5. 2 to the 5th power is 32; subtract 2 from that, and 30 valid host addresses remain. 143.34.0.0 255.255.255.192 Subtract the mask length from 32. This mask has its first 26 bits set to 1, so that’s 32 − 26 = 6. 2 to the 6th power is 64; subtract 2 from that, and 62 valid host addresses remain.
128.12.0.0 255.255.255.240 This mask converts to /28. 32 − 28 = 4. 2 to the 4th power is 16. Subtract 2 from that, and 14 valid host addresses remain. 125.0.0.0 /24 32 − 24 = 8. 2 to the 8th power is 256. Subtract 2 from that, and 254 valid host addresses remain. 221.10.89.0 255.255.255.248
In prefix notation, that’s a /29 mask. 32 − 29 = 3. 2 to the 3rd power is 8; subtract 2 from that, and 6 valid host addresses remain. 134.45.0.0 /22 32 − 22 = 10, so we have 10 host bits. 2 to the 10th power is 1024; subtract 2 from that and 1022 valid host addresses remain. All right! We’re now
comfortable with the fundamental conversions as well as determining the number of valid hosts and subnets — all valuable skills to have for your exam and your career! In the next section, we’ll put all of this together to determine three important values with one single math operation — and there’s a great shortcut semi-hidden in the next section, too. Let’s get started!
Determining The Subnet Number Of A Given IP Address

This skill is going to serve you well in both the exam room and in production networks — and I'm going to teach you how to perform this operation in minutes. (Or just one minute, with practice on your part!) Being able to determine what subnet an IP address is on is an invaluable skill for
troubleshooting production networks and labs. You’d be surprised how many issues pop up just because an admin thought a host was on “Subnet A” and the host was actually on “Subnet B”! Let’s tackle an example: “On what subnet is the IP address 10.17.2.14 255.255.192.0 found?”
All you have to do is break the IP address down into binary, add up the network and subnet bits ONLY, and you're done! That address in binary is:

00001010 00010001 00000010 00001110

That subnet mask converts to /18 in prefix notation, so add up the first 18 bits, convert the value back to dotted decimal, and you're done….

…. and the subnet upon which that address is found is 10.17.0.0 255.255.192.0! Let's hit some more practice questions! I'll give you the IP addresses first, and following that you'll find the answers and explanations. Let's get it done!
For each IP address listed here, determine its subnet.

210.17.23.200 /27
24.194.34.12 /10
190.17.69.175 /22
111.11.126.5 255.255.128.0
210.12.23.45 255.255.255.248
222.22.11.199 /28
111.9.100.7 /17
122.240.19.23 /10
184.25.245.89 /20
99.140.23.143 /10
10.191.1.1 /10
222.17.32.244 /28
Answers and explanations:

210.17.23.200 /27
Convert the address to binary, add up the first 27 bits, and you're done!
210.17.23.200 = 11010010 00010001 00010111 11001000
Subnet: 210.17.23.192 /27

24.194.34.12 /10
24.194.34.12 = 00011000 11000010 00100010 00001100
Add up the first 10 bits, and the subnet is 24.192.0.0 /10.

190.17.69.175 /22
190.17.69.175 = 10111110 00010001 01000101 10101111
Add up the first 22 bits, and the subnet is 190.17.68.0 /22.

111.11.126.5 255.255.128.0
111.11.126.5 = 01101111 00001011 01111110 00000101
Add up the first 17 bits, and the subnet is 111.11.0.0 255.255.128.0.

210.12.23.45 255.255.255.248
210.12.23.45 = 11010010 00001100 00010111 00101101
Add up the first 29 bits, and the subnet is 210.12.23.40 255.255.255.248.

222.22.11.199 /28
222.22.11.199 = 11011110 00010110 00001011 11000111
Add up the first 28 bits, and the subnet is 222.22.11.192 /28.

111.9.100.7 /17
111.9.100.7 = 01101111 00001001 01100100 00000111
Add up the first 17 bits, and the subnet is 111.9.0.0 /17.

122.240.19.23 /10
122.240.19.23 = 01111010 11110000 00010011 00010111
Add up the first 10 bits, and the subnet is 122.192.0.0 /10.

184.25.245.89 /20
184.25.245.89 = 10111000 00011001 11110101 01011001
Add up the first 20 bits, and the subnet is 184.25.240.0 /20.

99.140.23.143 /10
99.140.23.143 = 01100011 10001100 00010111 10001111
Add up the first 10 bits, and the subnet is 99.128.0.0 /10.

10.191.1.1 /10
10.191.1.1 = 00001010 10111111 00000001 00000001
Add up the first 10 bits, and the subnet is 10.128.0.0 /10.

222.17.32.244 /28
222.17.32.244 = 11011110 00010001 00100000 11110100
Add up the first 28 bits, and the subnet is 222.17.32.240 /28.
Onward!
Determining Broadcast Addresses & Valid IP Address Ranges For A Given Subnet (With The Same Quick Operation!)

The operation we perform in this section will answer two different questions. Need to determine the broadcast address for a subnet? Got you covered. Need to determine the valid
address range for a subnet? Got it! Best of all, it’s a quick operation. Let’s go through a sample question and you’ll see what I mean. What is the range of valid IP addresses for the subnet 210.210.210.0 /25? We need to convert this address to binary AND identify the host bits, and we know how to do that.
210.210.210.0   11010010 11010010 11010010 00000000
/25             11111111 11111111 11111111 10000000

There are three basic rules to remember when determining the subnet address, broadcast address, and range of valid addresses once you've identified the host bits — and these rules answer three different questions.

1. The address with all 0s for host bits is the subnet address, also referred to
as the “all-zeroes” address. This is not a valid host address. 2. The address with all 1s for host bits is the broadcast address, also referred to as the “all-ones” address. This is not a valid host address. 3. All addresses between the all-zeroes and all-ones addresses are valid host addresses. The “all-zeroes” address is
210.210.210.0. That’s easy enough — and so is the rest of this operation. When you change all the host bits to 1, the result is 210.210.210.127, and that’s our broadcast address for this subnet. Every address in the middle of those two addresses (210.210.210.1 — 126) is a valid IP address. That’s all there is to it! Let’s tackle another example:
150.10.64.0   10010110 00001010 01000000 00000000
/18           11111111 11111111 11000000 00000000

What is the broadcast address of the subnet 150.10.64.0 /18? You don't have to write out the mask on exam day if you don't want to. I'm including it here so you see exactly what we're doing. If all the host bits (the last 14 bits) are zeroes, the address is 150.10.64.0, the subnet
address itself. This is not a valid host address. If all the host bits are ones, the address is 150.10.127.255. That is the broadcast address for this subnet and is also not a valid host address. All bits between the subnet address and broadcast address are considered valid addresses. This gives you the range 150.10.64.1 — 150.10.127.254. Let’s get some more practice! First, I’ll list the subnets, and it’s up to you to determine the range of valid host addresses
and the broadcast address for that subnet. After the list, I’ll show you the answer and explanation for each subnet. 222.23.48.64 /26 140.10.10.0 /23 10.200.0.0 /17 198.27.35.128 /27 132.12.224.0 /27 211.18.39.16 /28 10.1.2.20 /30 144.45.24.0 /21 10.10.128.0 255.255.192.0 221.18.248.224 /28 123.1.0.0 /17 203.12.17.32 /27
Time for answers and explanations! 222.23.48.64 /26
222.23.48.64      11011110 00010111 00110000 01000000
255.255.255.192   11111111 11111111 11111111 11000000
All-Zeroes (Subnet) Address: 222.23.48.64 /26 All-Ones (Broadcast) Address: 222.23.48.127 /26 Valid IP address range: 222.23.48.65 — 222.23.48.126
140.10.10.0 /23
140.10.10.0   10001100 00001010 00001010 00000000
/23           11111111 11111111 11111110 00000000
All-Zeroes (Subnet) Address: 140.10.10.0 /23 All-Ones (Broadcast) Address: 140.10.11.255 /23 Valid IP address range: 140.10.10.1 — 140.10.11.254
10.200.0.0 /17
10.200.0.0   00001010 11001000 00000000 00000000
/17          11111111 11111111 10000000 00000000
All-Zeroes (Subnet) Address: 10.200.0.0 /17 All-Ones (Broadcast) Address: 10.200.127.255 /17 Valid IP address range: 10.200.0.1 — 10.200.127.254
198.27.35.128 /27
198.27.35.128   11000110 00011011 00100011 10000000
/27             11111111 11111111 11111111 11100000
All-Zeroes (Subnet) Address: 198.27.35.128 /27 All-Ones (Broadcast) Address: 198.27.35.159 /27 Valid IP address range: 198.27.35.129 — 198.27.35.158
132.12.224.0 /27
132.12.224.0   10000100 00001100 11100000 00000000
/27            11111111 11111111 11111111 11100000
All-Zeroes (Subnet) Address: 132.12.224.0 /27 All-Ones (Broadcast) Address: 132.12.224.31 /27 Valid IP address range: 132.12.224.1 — 132.12.224.30
211.18.39.16 /28
211.18.39.16   11010011 00010010 00100111 00010000
/28            11111111 11111111 11111111 11110000
All-Zeroes (Subnet) Address: 211.18.39.16 /28 All-Ones (Broadcast) Address: 211.18.39.31 /28 Valid IP address range: 211.18.39.17 — 211.18.39.30
10.1.2.20 /30
10.1.2.20   00001010 00000001 00000010 00010100
/30         11111111 11111111 11111111 11111100
All-Zeroes (Subnet) Address: 10.1.2.20 /30 All-Ones (Broadcast) Address: 10.1.2.23 /30 Valid IP address range: 10.1.2.21 — 10.1.2.22 /30
144.45.24.0 /21
144.45.24.0   10010000 00101101 00011000 00000000
/21           11111111 11111111 11111000 00000000
All-Zeroes (Subnet) Address: 144.45.24.0 /21 All-Ones (Broadcast) Address: 144.45.31.255 /21 Valid IP address range: 144.45.24.1 — 144.45.31.254 /21
10.10.128.0 255.255.192.0
10.10.128.0     00001010 00001010 10000000 00000000
255.255.192.0   11111111 11111111 11000000 00000000
All-Zeroes (Subnet) Address: 10.10.128.0 255.255.192.0 All-Ones (Broadcast) Address: 10.10.191.255 255.255.192.0 Valid IP address range: 10.10.128.1 — 10.10.191.254
221.18.248.224 /28
221.18.248.224   11011101 00010010 11111000 11100000
/28              11111111 11111111 11111111 11110000
All-Zeroes (Subnet) Address: 221.18.248.224 /28 All-Ones (Broadcast) Address: 221.18.248.239 /28 Valid IP address range: 221.18.248.225 — 238 /28
123.1.0.0 /17
123.1.0.0   01111011 00000001 00000000 00000000
/17         11111111 11111111 10000000 00000000
All-Zeroes (Subnet) Address: 123.1.0.0 /17 All-Ones (Broadcast) Address: 123.1.127.255 /17 Valid IP address range: 123.1.0.1 — 123.1.127.254 /17
203.12.17.32 /27
203.12.17.32   11001011 00001100 00010001 00100000
/27            11111111 11111111 11111111 11100000
All-Zeroes (Subnet) Address: 203.12.17.32 /27 All-Ones (Broadcast) Address: 203.12.17.63 /27 Valid IP address range: 203.12.17.33 — 203.12.17.62 Great work!
Now let’s put this ALL together and tackle some real-world subnetting situations that just might be CCENT and CCNA subnetting situations as well! On to the next section!
Meeting Stated Design Requirements (Or "Hey, We're Subnetting!")

Now we're going to put our skills together and answer questions that are asked before the subnetting's done! Actually, we're doing the subnetting (at last!) A typical subnetting question …..
“Using network 150.50.0.0, you must design a subnetting scheme that allows for at least 200 subnets, but no more than 150 hosts per subnet. Which of the following subnet masks is best suited for this task?” (The question could also give you no choices and ask you to come up with the best possible mask, just like my practice questions.) We’re dealing with a Class B network, which means we have 16 network bits and 16 host bits. We’ll borrow subnet bits
from the host bits, so we'll leave the host bits area blank for now.

             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111
SN Bits
Host Bits
The formulas for determining the number of bits needed for a given number of subnets or hosts:

The number of valid subnets = (2 raised to the power of the number of subnet bits)
The number of valid hosts = (2 raised to the power of the number of host bits) − 2

The key to this question is to come up with the minimum number of bits you'll need for the required number of subnets, and make sure the
remaining host bits give you enough hosts, but not too many hosts. We need eight subnet bits to give us at least 200 subnets: 2x2x2x2x2x2x2x2= 256 subnets. Proposed solution: 255.255.255.0
             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111
SN Bits                              11111111
Host Bits                                        00000000

This mask leaves eight host bits, which would result in 254 hosts. This violates the requirement that we have no more than 150 hosts per subnet. What happens if you borrow one more host bit for subnetting, giving you 9 subnet bits and 7 host bits?

9 Subnet Bits: 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 = 512
7 Host Bits: 2 x 2 x 2 x 2 x 2 x 2 x 2 = 128 − 2 = 126

This gives you 512 subnets and 126 hosts, meeting both requirements, for a final mask of 255.255.255.128. The great thing about this question type is that it plays to your strengths. You already know how to work with subnet bits and host bits. What you must watch out for are answers that meet one requirement but do not meet the other. Let's walk through another example:
Using network 220.10.10.0, you must develop a subnetting scheme that allows for a minimum of six hosts and a minimum of 25 subnets. What’s the best mask to use? Watch this question — it’s asking for two minimums. This is a Class C network, so 24 of the bits are already used with the network mask. You have only eight bits to split between the subnet and the host bits. Before subnetting: Class C
network mask 255.255.255.0

             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111    11111111
SN Bits
Host Bits
For at least 25 subnets, 5 subnet bits are needed: 2 x 2 x 2 x 2 x 2 = 32 subnets
This would leave three host bits. Does this meet the other requirement? 2 x 2 x 2 = 8 − 2 = 6 hosts. That meets the second requirement, so a mask of 5 subnet bits and 3 host bits will get the job done.

             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111    11111111
SN Bits                                          11111
Host Bits                                             000
The resulting mask is 255.255.255.248. As you’ve seen, this question type brings into play skills you’ve already developed. Just be careful when analyzing the question’s requirements, and you’ll get the correct answer every time.
Practice makes perfect, so let's practice!

"Meeting Design Requirements" Questions:

Your network has been assigned the address 140.10.0.0. Your network manager requires that you have at least 250 subnets, and that any of these subnets will support at least 250 hosts. What's the best mask to use?
This Class B network has 16 network bits, which we never borrow for subnetting, and 16 host bits, which we always borrow for subnetting. (hint hint) Before subnetting: Class B network mask 255.255.0.0
             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111
SN Bits
Host Bits                            00000000    00000000
You must have at least 250 subnets, and eight subnet bits would give us that (256, to be exact). That leaves eight host bits, giving us 254 hosts, so the resulting mask of 255.255.255.0 meets both requirements.

             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111
SN Bits                              11111111
Host Bits                                        00000000
Your network has been assigned the network number 200.10.10.0. Your network manager has asked you to come up with a subnet mask that will allow for at least 15 subnets. No subnet should ever contain more than 12 hosts, but should contain at least five. What’s the best mask to use?
This Class C network’s mask has 24 network bits. There are only eight host bits to borrow for subnetting. Before subnetting: Class C network mask 255.255.255.0
             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111    11111111
SN Bits
Host Bits

Four subnet bits would give you
16 subnets, meeting the first requirement. The problem is that this would leave 4 host bits, resulting in 14 hosts, which violates the second requirement. The maximum number of host bits you can have in this answer is three, which would result in 6 hosts. You can’t have less, because that would allow only two hosts. That would leave five subnet bits, which meets the first requirement.
             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111    11111111
SN Bits                                          11111
Host Bits                                             000
The only mask that meets both requirements is /29.
Your network has been
assigned 134.100.0.0. Your network manager requests that you come up with a subnet mask that allows for at least 500 subnets, but no subnet should be able to hold more than 120 hosts per subnet. What is the best subnet mask to use? Network 134.100.0.0 is a Class B network with a network mask of 255.255.0.0. Sixteen bits remain to be split between the subnet bits and host bits. Before subnetting: Class B mask 255.255.0.0
             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111
SN Bits
Host Bits                            00000000    00000000

For 500 subnets, a minimum of nine subnet bits will be needed (2 to the 9th power is 512). That would leave 7 host bits. Does this meet the second requirement? No. 2 to the 7th power is 128.
Subtract 2 and 126 host addresses remain, violating the second requirement. A mix of 10 subnet bits and 6 host bits will work. 10 subnet bits result in 1024 valid subnets, meeting the first requirement. That would leave 6 host bits, which yields 62 valid hosts. That meets the second requirement.

             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111
SN Bits                              11111111    11
Host Bits                                           000000
The mask is 255.255.255.192. This is the type of question you really have to watch. It would be easy to say “okay, 9 subnet bits gives me 512 subnets, that’s the right answer”, choose that answer, and move on. You must ensure that your answer meets both requirements!
Your network has been assigned 202.10.40.0. Your network manager requests that you come up with a subnet mask that allows at least 10 subnets, but no subnet should allow more than 10 hosts. What is the best subnet mask to use? Network 202.10.40.0 is a Class C network with a mask of 255.255.255.0. Only eight bits remain to be split between the subnet bits and host bits. Before subnetting: Class C
network mask 255.255.255.0
             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111    11111111
SN Bits
Host Bits

For a minimum of 10 subnets, at least four subnet bits would be needed (2 to the 4th power = 16). This would leave four host bits. Does this meet the second
requirement? No. There would be 14 hosts. Five subnet bits and three host bits will meet the requirements. This would yield 32 subnets and 6 hosts. The resulting mask is 255.255.255.248.

             1st Octet   2nd Octet   3rd Octet   4th Octet
NW Bits      11111111    11111111    11111111
SN Bits                                          11111
Host Bits                                             000
You’re working with 37.0.0.0. Your manager requests that you allow for at least 500 hosts per subnet; however, he wants as many subnets as possible without exceeding 1000 subnets. What is the best subnet mask to use? Network 37.0.0.0 is a Class A network, so we have 24 host bits to work with. Before subnetting: Class A network mask 255.0.0.0
NW Bits: 11111111
SN Bits: (none yet)
Host Bits: 00000000 00000000 00000000
The requirement for 500 hosts is no problem; we only need nine host bits to have 510 valid host addresses (2 to the 9th power − 2 = 510). The problem comes in with the requirement of not having more than 1000 subnets. If we only used nine host bits, that would leave 15 subnet bits, which
would result in over 32,000 subnets! How many subnet bits can we borrow without going over 1000 subnets? Nine subnet bits would give us 512 valid subnets; that's as close as we can come without going over. Doing so would leave us with 15 host bits, which would certainly meet the “minimum number of hosts” requirement. After subnetting with /17 (255.255.128.0):

NW Bits: 11111111
SN Bits: 11111111 1
Host Bits: 0000000 00000000
The best mask to use to meet both requirements is 255.255.128.0. Do not let the “minimum” part of the requirement throw you off. If you’re asked for a minimum of 500 hosts or 500 subnets, as long as you’ve got more than that, it doesn’t matter how many more you have. The requirement is met.
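When the cap is on the number of subnets instead of the number of hosts, the same kind of quick check works. Here's a tiny sketch along those lines (again, only a double-check for your paper-and-pencil work):

# Find the largest number of subnet bits that stays at or under the subnet cap.
MAX_SUBNETS = 1000
subnet_bits = 0
while 2 ** (subnet_bits + 1) <= MAX_SUBNETS:
    subnet_bits += 1

print(subnet_bits)                  # 9 subnet bits, so the mask is /17 (255.255.128.0)
print(2 ** subnet_bits)             # 512 subnets -- as close to 1000 as we can get
print(2 ** (24 - subnet_bits) - 2)  # 32766 hosts per subnet on this Class A network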
The key is to meet both requirements. You’re working with 157.200.0.0. You must develop a subnetting scheme where each subnet will support at least 200 hosts, and you’ll have between 100 and 150 subnets. What is the appropriate subnet mask to use? This network number is Class B, so we have 16 host bits to work with. Before subnetting: Class B
network mask 255.255.0.0
NW Bits: 11111111 11111111
SN Bits: (none yet)
Host Bits: 00000000 00000000
Eight host bits would result in 254 hosts, enough for the first requirement. However, that would also leave eight subnet bits, resulting in 256 valid subnets (2 to the 8th power = 256) and violating the second requirement. The only number of subnet bits that results in between 100 and 150 valid subnets is 7; this yields 128 valid subnets. (Six subnet bits would yield 64 valid subnets.) This means we would have nine host bits left, more than meeting the “at least 200 hosts” requirement.
After subnetting with /23 (255.255.254.0):

NW Bits: 11111111 11111111
SN Bits: 1111111
Host Bits: 0 00000000

The proper mask is 255.255.254.0.
Given network number 130.245.0.0, what subnet mask will result in at least 250 valid hosts per subnet, but between 60 and 70 valid subnets? With this Class B network, there are 16 host bits. How many subnet bits need to be
borrowed to yield between 60 and 70 subnets? The only number of subnet bits that yields this particular number is six, which gives us 64 valid subnets. Five subnet bits yield too few valid subnets (32), while seven subnet bits yield too many (128). If you borrow six subnet bits, how many hosts will be available per subnet? The remaining ten host bits will give you 1022 valid host addresses, more than enough for the first
requirement. Therefore, the appropriate mask is 255.255.252.0 (/22):

NW Bits: 11111111 11111111
SN Bits: 111111
Host Bits: 00 00000000

Time for our final exam! Let's get right to it — in the very next section!
Finals!

Let's put it all together for one big final exam! We'll sharpen our skills for exam success on these questions, and the topics are presented in the same order in which they appeared in this book. If you're a little hesitant on how to answer any of these questions, be sure to go back and get more practice! Let's get started!

Converting Binary To Dotted Decimal
The string: 01010101 11100010 01101010 01001010
Answer: 85.226.106.74
The string: 11110000 00001111 01111111 10000000
Answer: 240.15.127.128.
The string: 11001101 00000011 11110010 00100101
Answer: 205.3.242.37.
The string: 00110010 00100011 11110011 00100111
Answer: 50.35.243.39.
The string: 10000111 00111111 01011111 00110010
Answer: 135.63.95.50
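If you'd like to spot-check conversions like these, a couple of lines of Python will do it. (This is purely a practice checker; keep doing the conversions by hand first.)

# Convert a 32-bit binary string, written as four octets, to dotted decimal.
def binary_to_dotted(binary_string):
    return ".".join(str(int(octet, 2)) for octet in binary_string.split())

print(binary_to_dotted("01010101 11100010 01101010 01001010"))  # 85.226.106.74
print(binary_to_dotted("11110000 00001111 01111111 10000000"))  # 240.15.127.128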
Converting Dotted Decimal Addresses To Binary Strings

The address: 195.29.37.126
Answer: 11000011 00011101 00100101 01111110.
The address: 207.93.49.189
Answer: 11001111 01011101 00110001 10111101.
The address: 21.200.80.245
Answer: 00010101 11001000 01010000 11110101.
The address: 105.83.219.91
Answer: 01101001 01010011 11011011 01011011.
The address: 123.54.217.4
Answer: 01111011 00110110 11011001 00000100.
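Going the other way is just as easy to check. Another small sketch in the same spirit:

# Convert a dotted-decimal address to four 8-bit binary octets.
def dotted_to_binary(address):
    return " ".join(format(int(octet), "08b") for octet in address.split("."))

print(dotted_to_binary("195.29.37.126"))  # 11000011 00011101 00100101 01111110
print(dotted_to_binary("21.200.80.245"))  # 00010101 11001000 01010000 11110101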
Determining The Number Of Valid Subnets

How many valid subnets are on the 222.12.240.0 /27 network? This is a Class C network with a network mask of /24. The subnet mask is /27, indicating three subnet bits. 2 to the 3rd power = 8, so there are 8 valid subnets.

How many valid subnets are on the 10.1.0.0 /17 network? This is a Class A network with a network mask of /8. The subnet mask is /17, indicating nine subnet bits (17 − 8 = 9). 2 to the 9th power = 512, so there are 512 valid subnets.

How many valid subnets are on the 111.0.0.0 /14 network? This is a Class A network with a network mask of /8. The subnet mask is /14, indicating six subnet bits (14 − 8 = 6). 2 to the 6th power = 64, so there are 64 valid subnets.

How many valid subnets are on the 172.12.0.0 /19 network? This is a Class B network with a network mask of /16. The subnet mask is /19, indicating three subnet bits (19 − 16 = 3). 2 to the 3rd power = 8, so there are 8 valid subnets.

How many valid subnets are on the 182.100.0.0 /27 network? This is a Class B network with a network mask of /16. The subnet mask is /27, indicating 11 subnet bits (27 − 16 = 11). 2 to the 11th power = 2048, so there are 2048 valid subnets.

How many valid subnets exist on the 221.23.19.0 /30 network? This is a Class C network with a network mask of /24. The subnet mask is /30, indicating six subnet bits (30 − 24 = 6). 2 to the 6th power = 64, so there are 64 valid subnets.

How many valid subnets exist on the 17.0.0.0 255.240.0.0 network? This is a Class A network with a network mask of 255.0.0.0. The subnet mask here is 255.240.0.0 (/12), indicating four subnet bits (12 − 8 = 4). 2 to the 4th power = 16, so there are 16 valid subnets.

How many valid subnets exist on the 214.12.200.0 255.255.255.248 network? This is a Class C network with a network mask of 255.255.255.0. The subnet mask here is 255.255.255.248 (/29), indicating five subnet bits (29 − 24 = 5). 2 to the 5th power = 32, so there are 32 valid subnets.

How many valid subnets exist on the 155.200.0.0 255.255.255.128 network? This is a Class B network with a network mask of 255.255.0.0. The subnet mask here is 255.255.255.128 (/25), indicating nine subnet bits (25 − 16 = 9). 2 to the 9th power = 512, so there are 512 valid subnets.
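Every one of those answers comes from the same piece of arithmetic: subtract the classful network mask length from the subnet mask length, then raise 2 to that power. If you want to check yourself, a minimal helper (the function name is mine) looks like this:

# Valid subnets = 2 ** (subnet mask length - classful network mask length)
def valid_subnets(classful_prefix, subnet_prefix):
    return 2 ** (subnet_prefix - classful_prefix)

print(valid_subnets(24, 27))  # 222.12.240.0 /27  -> 8
print(valid_subnets(8, 17))   # 10.1.0.0 /17      -> 512
print(valid_subnets(16, 27))  # 182.100.0.0 /27   -> 2048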
Determining The Number Of Valid Hosts

How many valid host addresses exist on the 211.24.12.0 /27 subnet? To determine the number of host bits, just subtract the subnet mask length from 32: 32 − 27 = 5. To then determine the number of host addresses, bring 2 to that power and subtract 2. 2 to the 5th power = 32, and 32 − 2 = 30 valid host addresses.

How many valid host addresses exist on the 178.34.0.0 /28 subnet? Subtract the subnet mask length from 32: 32 − 28 = 4 host bits. 2 to the 4th power = 16, and 16 − 2 = 14 valid host addresses.

How many valid host addresses exist on the 211.12.45.0 /30 subnet? Subtract the subnet mask length from 32: 32 − 30 = 2 host bits. 2 to the 2nd power = 4, and 4 − 2 = 2 valid host addresses on that subnet.

How many valid host addresses exist on the 129.12.0.0 /20 subnet? Subtract the subnet mask length from 32: 32 − 20 = 12 host bits. 2 to the 12th power = 4096, and 4096 − 2 = 4094 valid host addresses on that subnet.

How many valid host addresses exist on the 220.34.24.0 255.255.255.248 subnet? That mask is /29, so 32 − 29 = 3 host bits. 2 to the 3rd power = 8, and 8 − 2 = 6 valid host addresses on this subnet.

How many valid host addresses exist on the 145.100.0.0 255.255.254.0 subnet? That mask is /23, so 32 − 23 = 9 host bits. 2 to the 9th power = 512, and 512 − 2 = 510 valid host addresses on that subnet.

How many valid host addresses exist on the 23.0.0.0 255.255.240.0 subnet? That mask is /20, so 32 − 20 = 12 host bits. 2 to the 12th power = 4096, and 4096 − 2 = 4094 valid host addresses on that subnet.
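The host count is just as mechanical: 2 to the power of the remaining host bits, minus two for the subnet and broadcast addresses. A matching helper for self-checking:

# Valid hosts = 2 ** (32 - subnet mask length) - 2
def valid_hosts(subnet_prefix):
    return 2 ** (32 - subnet_prefix) - 2

print(valid_hosts(27))  # 211.24.12.0 /27  -> 30
print(valid_hosts(20))  # 129.12.0.0 /20   -> 4094
print(valid_hosts(23))  # 145.100.0.0 /23  -> 510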
Determining The Subnet Number Of A Given IP Address

On what subnet can the IP address 142.12.38.189 /25 be found? Start writing out the 142.12.38.189 address in binary and stop once you've converted 25 bits; that result gives you the answer. (You can also write out the entire address for practice and then add up the first 25 bits.)

First 25 bits = 10001110 00001100 00100110 1. Set the remaining host bits to zero: 10001110 00001100 00100110 10000000. Result: 142.12.38.128 /25.

On what subnet can the IP address 170.17.209.36 /19 be found? Convert that IP address to binary, stop once you get to 19 bits, then convert right back to dotted decimal.

First 19 bits = 10101010 00010001 110. Set the remaining host bits to zero: 10101010 00010001 11000000 00000000. The answer: 170.17.192.0 /19.

On what subnet can the IP address 10.178.39.205 /29 be found? Convert the address to binary and stop at the 29-bit mark.

First 29 bits = 00001010 10110010 00100111 11001. Set the remaining host bits to zero: 00001010 10110010 00100111 11001000, which is 10.178.39.200. Tack your /29 on the back and you have the answer: 10.178.39.200 /29.

On what subnet can the IP address 190.34.9.173 /22 be found? Convert the address to binary, stop at 22 bits, and then convert the address right back to decimal.

First 22 bits = 10111110 00100010 000010. Set the remaining host bits to zero: 10111110 00100010 00001000 00000000. The answer: 190.34.8.0 /22.

On what subnet can the IP address 203.23.189.205 255.255.255.240 be found? Write out the address in binary and stop at the 28-bit mark, then convert those 28 bits back to decimal. Done!

First 28 bits = 11001011 00010111 10111101 1100. Set the remaining host bits to zero: 11001011 00010111 10111101 11000000. The answer: 203.23.189.192 /28.

On what subnet can the IP address 49.210.83.201 255.255.255.248 be found? Convert the address to binary up to the 29-bit mark, and convert those 29 bits right back to decimal.

First 29 bits = 00110001 11010010 01010011 11001. Set the remaining host bits to zero: 00110001 11010010 01010011 11001000. The answer: 49.210.83.200 /29.

On what subnet can the IP address 31.189.234.245 /17 be found? Convert the address to binary up to the 17-bit mark, then convert those 17 bits right back to decimal.

31.189.234.245 = 00011111 10111101 11101010 11110101; the first 17 bits with the host bits zeroed give 00011111 10111101 10000000 00000000. The answer: 31.189.128.0 /17.

On what subnet can the IP address 190.98.35.17 /27 be found? Convert the address to binary up to the 27-bit mark, then convert those 27 bits right back to decimal.

190.98.35.17 = 10111110 01100010 00100011 00010001; the first 27 bits with the host bits zeroed give 10111110 01100010 00100011 00000000. The answer: 190.98.35.0 /27.
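On the exam you'll work these out in binary on paper, but Python's ipaddress module makes a handy answer key afterward. With strict=False it accepts a host address and hands back the subnet that address lives on:

import ipaddress

# strict=False lets us pass a host address and get its containing subnet back.
for addr in ("142.12.38.189/25", "170.17.209.36/19", "49.210.83.201/29"):
    print(ipaddress.ip_network(addr, strict=False))

# Output:
# 142.12.38.128/25
# 170.17.192.0/19
# 49.210.83.200/29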
Determining Broadcast Addresses and Valid IP Address Ranges

For each of the following, identify the valid IP address range and the broadcast address for that subnet.

100.100.45.32 /28
208.72.109.8 /29
190.89.192.0 255.255.240.0
101.45.210.52 /30
90.34.128.0 /18
205.186.34.64 /27
175.24.36.0 255.255.252.0
10.10.44.0 /25
120.20.240.0 /21
200.18.198.192 /26
Answer and explanations follow!
The subnet: 100.100.45.32 /28
We know that the last four bits are the host bits. If these are all zeroes, this is the subnet address itself. If they are all ones, this is the broadcast address for this subnet. All addresses between the two are valid. “All-Zeroes” Subnet Address:
100.100.45.32 /28 “All-Ones” Broadcast Address: 100.100.45.47 /28 Valid IP Addresses: 100.100.45.33 — 46 /28 The subnet: 208.72.109.8 /29
“All-Zeroes” Subnet Address: 208.72.109.8 /29 “All-Ones” Broadcast Address:
208.72.109.15 /29 Valid IP Addresses: 208.72.109.9 — 208.72.109.14 /29 The subnet: 190.89.192.0 255.255.240.0
“All-Zeroes” Subnet Address: 190.89.192.0 /20
“All-Ones” Broadcast Address: 190.89.207.255 /20 Valid IP Addresses: 190.89.192.1 — 190.89.207.254 /20 The subnet: 101.45.210.52 /30
“All-Zeroes” Subnet Address: 101.45.210.52 /30 “All-Ones” Broadcast Address:
101.45.210.55 /30 Valid IP Addresses: 101.45.210.53, 101.45.210.54 /30 The subnet 90.34.128.0 /18
“All-Zeroes” Subnet Address: 90.34.128.0 /18 “All-Ones” Broadcast Address: 90.34.191.255 /18
Valid IP Addresses: 90.34.128.1 — 90.34.191.254 /18 The subnet: 205.186.34.64 /27
“All-Zeroes” Subnet Address: 205.186.34.64 /27 “All-Ones” Broadcast Address: 205.186.34.95 /27 Valid IP Addresses: 205.186.34.65 — 94 /27
The subnet: 175.24.36.0 255.255.252.0
“All-Zeroes” Subnet Address: 175.24.36.0 /22 “All-Ones” Broadcast Address: 175.24.39.255 /22 Valid IP Addresses: 175.24.36.1 — 175.24.39.254 /22
The subnet: 10.10.44.0 /25
“All-Zeroes” Subnet Address: 10.10.44.0 /25 “All-Ones” Broadcast Address: 10.10.44.127 /25 Valid IP Addresses: 10.10.44.1 — 10.10.44.126 /25 The subnet: 120.20.240.0 /21
“All-Zeroes” Subnet Address: 120.20.240.0 /21 “All-Ones” Broadcast Address: 120.20.247.255 /21 Valid IP Addresses: 120.20.240.1 — 120.20.247.254 /21 The subnet: 200.18.198.192 /26
“All-Zeroes” Subnet Address: 200.18.198.192 /26 “All-Ones” Broadcast Address: 200.18.198.255 /26 Valid IP Addresses: 200.18.198.193 — 200.18.198.254 /26
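Once again, the ipaddress module can serve as your answer key after you've worked the problems by hand. One of the subnets above, checked in a few lines:

import ipaddress

subnet = ipaddress.ip_network("100.100.45.32/28")
hosts = list(subnet.hosts())  # every usable address between the two "bookends"

print(subnet.network_address)    # 100.100.45.32  (the "all-zeroes" address)
print(subnet.broadcast_address)  # 100.100.45.47  (the "all-ones" address)
print(hosts[0], "-", hosts[-1])  # 100.100.45.33 - 100.100.45.46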
Now let's put it all together for some real-world design requirement questions!

Meeting The Stated Design Requirements

You're working with network 135.13.0.0. You need at least 500 valid subnets, but no more than 100 hosts per subnet. What is the best subnet mask to use? This is a Class B network, with 16 network bits and 16 host bits.
The first requirement is that we have at least 500 subnets. Nine subnet bits would give us 512 valid subnets: 2x2x2x2x2x2x2x2x2 = 512. This would leave seven host bits, resulting in 126 valid host addresses, which violates the second requirement. (2 to the
7th power is 128; subtract two, and 126 valid host addresses remain.) What about six host bits? That would yield 62 valid host addresses, which meets the second requirement. A combination of ten subnet bits and six host bits gives us 1024 valid subnets and 62 valid host addresses, meeting both requirements.
The resulting mask is 255.255.255.192 (/26). You’re working with the network 223.12.23.0. Your network manager has asked you to develop a subnetting scheme that allows at least 30 valid hosts per subnet, but yields no more than five valid subnets. What’s the
best subnet mask to use? This Class C network’s mask is /24, leaving eight host bits to borrow for subnetting.
We know we need five host bits for at least 30 hosts per subnet. (2 to the 5th power, minus two, equals exactly 30.) Does this meet the second requirement?
No. That would leave three subnet bits, which yields eight valid subnets. To meet the second requirement, you can have only two subnet bits, which yields four valid subnets.
This yields a mask of 255.255.255.192 (/26).
You’re working with the network 131.10.0.0. Your network manager has requested that you develop a subnetting scheme that allows at least fifty subnets. No subnet should contain more than 1000 hosts. What is the best subnet mask to use? This Class B network has 16 network bits, and 16 host bits that can be borrowed for subnetting.
You quickly determine that for fifty subnets, you only need six subnet bits. That gives you 64 valid subnets. Does this mask meet the second requirement? No. That would leave 10 host bits, which yields 1022 valid host addresses. (2 to the 10th power equals 1024; subtract two, and 1022 remain.) By borrowing one more bit for
subnetting, giving us seven subnet bits and nine host bits, both requirements are met. Seven subnet bits yield 128 valid subnets, and nine host bits yield 510 valid host addresses. The appropriate mask is 255.255.254.0.
Congratulations! You've completed this final exam. If you had any difficulty with the final section, please review Section Eight. If you nailed all of these final questions — great work! To wrap things up, let's hit Variable Length Subnet Masking!
How To Develop A VLSM Scheme

In the networks we've been working with in the binary and subnetting section, we've cut our IP address space "pie" into nice, neat slices of the same size. We don't always want to do that, though. If we have a point-to-point network, why assign a subnet number to that
network that gives you 200+ addresses when you’ll only need two? That’s where Variable-Length Subnet Masking comes in. VLSM is the process of cutting our address pie into uneven slices. The best way to get used to VLSM is to work with it, so let’s go through a couple of drills where VLSM will come in handy. Our first drill will involve the major network number 210.49.29.0. We’ve been asked to create a
VLSM scheme for the following five networks, and we've also been told that there will be no further hosts added to any of these segments. The requirement is to use no more IP addresses from this range for any subnet than is absolutely necessary. The networks:

NW A: 20 hosts
NW B: 10 hosts
NW C: 7 hosts
NW D: 5 hosts
NW E: 3 hosts

We'll need to use the formula for determining how many valid host addresses are yielded by a given number of host bits: (2 to the nth power) − 2, with n representing the number of host bits. To create our VLSM scheme, we'll ask this simple question over and over: "What is the smallest subnet that can be created with all host bits set to zero?"
NW A requires 20 valid host addresses. Using the above formula, we determine that we will need 5 host bits (2 to the 5th power equals 32; 32 − 2 = 30). What is the smallest subnet that can be created with all host bits set to zero? 210.49.29.0 in binary:
11010010 00110001 00011101 00000000

/27 subnet mask: 11111111 11111111 11111111 11100000
We’ll use a subnet mask of /27 to have five host bits remaining, resulting in a subnet and subnet mask of
210.49.29.0 /27, or 210.49.29.0 255.255.255.224. It's an excellent idea to keep a running chart of your VLSM scheme, so we'll start one here. The network number itself is the value of that binary string with all host bits set to zero; the broadcast address for this subnet is the value of that binary string with all host bits set to one. These two particular addresses cannot be assigned to hosts, but every IP address between the two is a valid host IP address.
Network Number = 11010010 00110001 00011101 00000000 = 210.49.29.0
Broadcast Add. = 11010010 00110001 00011101 00011111 = 210.49.29.31

Network   Subnet & Mask      Broadcast
NW A      210.49.29.0 /27    210.49.29.31
The next subnet will start with the next number up from the broadcast address. In this case, that’s 210.49.29.32. With a need for 10 valid host addresses, what will the subnet mask be?
210.49.29.32 in binary: 11010010 00110001 00011101 00100000

/28 subnet mask: 11111111 11111111 11111111 11110000
Four host bits result in 14 valid IP addresses, since 2 to the 4th power is 16 and 16 − 2 = 14. We use a subnet mask of /28 to have four host bits remaining, resulting in a subnet and mask of 210.49.29.32 /28, or 210.49.29.32 255.255.255.240. Remember, the network number is the value of the binary string with all host bits set to zero and the broadcast address is the value of the binary string with all host bits
set to one.

Network Number = 11010010 00110001 00011101 00100000 = 210.49.29.32
Broadcast Add. = 11010010 00110001 00011101 00101111 = 210.49.29.47

Network   Subnet & Mask      Broadcast
NW A      210.49.29.0 /27    210.49.29.31
NW B      210.49.29.32 /28   210.49.29.47
The next subnet is one value up from that broadcast address, which gives us 210.49.29.48. We need seven valid host addresses. How many host bits do we need?
210.49.29.48 in binary: 11010010 00110001 00011101 00110000

/28 subnet mask: 11111111 11111111 11111111 11110000
We still need four host bits — three would give us only six valid IP addresses. (Don't forget to subtract the two!) The subnet and mask are 210.49.29.48 255.255.255.240, or 210.49.29.48 /28. Calculate the network number and broadcast address as before.

Network Number = 11010010 00110001 00011101 00110000 = 210.49.29.48
Broadcast Add. = 11010010 00110001 00011101 00111111 = 210.49.29.63

Network   Subnet & Mask      Broadcast
NW A      210.49.29.0 /27    210.49.29.31
NW B      210.49.29.32 /28   210.49.29.47
NW C      210.49.29.48 /28   210.49.29.63
The next value up from that broadcast address is 210.49.29.64. We need five valid IP addresses, which three host bits will give us (2 to the 3rd power equals 8, 8 − 2 = 6). 210.49.29.64 in binary:
11010010 00110001 00011101 01000000

/29 subnet mask: 11111111 11111111 11111111 11111000
The subnet and mask are
210.49.29.64 255.255.255.248, or 210.49.29.64 /29. Calculate the network number and broadcast address as before, and bring the VLSM table up to date.

Network Number = 11010010 00110001 00011101 01000000 = 210.49.29.64
Broadcast Add. = 11010010 00110001 00011101 01000111 = 210.49.29.71

Network   Subnet & Mask      Broadcast
NW A      210.49.29.0 /27    210.49.29.31
NW B      210.49.29.32 /28   210.49.29.47
NW C      210.49.29.48 /28   210.49.29.63
NW D      210.49.29.64 /29   210.49.29.71
We’ve got one more subnet to calculate, and that one needs only three valid host addresses. What will the network number and mask be?
210.49.29.72 in binary: 11010010 00110001 00011101 01001000

/29 subnet mask: 11111111 11111111 11111111 11111000
We still need a /29 subnet mask, because a /30 mask would yield only two usable addresses. The subnet and mask are 210.49.29.72 /29, or 210.49.29.72 255.255.255.248. Calculate the network number
and broadcast address, and bring the VLSM table up to date.

Network Number = 11010010 00110001 00011101 01001000 = 210.49.29.72
Broadcast Add. = 11010010 00110001 00011101 01001111 = 210.49.29.79

Network   Subnet & Mask      Broadcast
NW A      210.49.29.0 /27    210.49.29.31
NW B      210.49.29.32 /28   210.49.29.47
NW C      210.49.29.48 /28   210.49.29.63
NW D      210.49.29.64 /29   210.49.29.71
NW E      210.49.29.72 /29   210.49.29.79

And now you're done! The next subnet would be 210.49.29.80, and the mask would of course
be determined by the number of host addresses needed on the segment. A final binary word: You either know how to determine the number of valid subnets, valid hosts, or perform the subnetting from scratch, or you don’t — and how do you learn how to do it? Practice. You don’t need expensive practice exams — the only thing you need is a piece of paper and a pencil. Just come
up with your own scenarios! All you need to do is choose a major network number, write down five or six different requirements for the number of valid host addresses needed on each subnet, and get to work. I can tell you from firsthand experience that this is the best way to get really, really good with VLSM!
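If you'd like a way to generate answer keys for those homemade scenarios, here's a rough Python sketch that automates the "smallest subnet that fits" process we just walked through. The helper and its output format are my own invention, so treat it as a study aid rather than a recipe; it allocates the largest requirements first so every subnet lands on a clean boundary (our drill happened to be in that order already).

import ipaddress

def vlsm_plan(major_network, host_requirements):
    """Carve the major network into the smallest subnet that satisfies each host count."""
    next_address = ipaddress.ip_network(major_network).network_address
    plan = []
    # Largest requirements first keeps every allocation on a valid subnet boundary.
    for name, hosts in sorted(host_requirements.items(), key=lambda item: -item[1]):
        host_bits = 2
        while 2 ** host_bits - 2 < hosts:   # keep adding host bits until enough hosts fit
            host_bits += 1
        subnet = ipaddress.ip_network(f"{next_address}/{32 - host_bits}")
        plan.append((name, subnet, subnet.broadcast_address))
        next_address = subnet.broadcast_address + 1   # next subnet starts one value up
    return plan

for name, subnet, broadcast in vlsm_plan("210.49.29.0/24",
                                         {"NW A": 20, "NW B": 10, "NW C": 7,
                                          "NW D": 5, "NW E": 3}):
    print(f"{name}: {subnet}  broadcast {broadcast}")

# NW A: 210.49.29.0/27   broadcast 210.49.29.31
# NW B: 210.49.29.32/28  broadcast 210.49.29.47
# NW C: 210.49.29.48/28  broadcast 210.49.29.63
# NW D: 210.49.29.64/29  broadcast 210.49.29.71
# NW E: 210.49.29.72/29  broadcast 210.49.29.79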
Chris Bryant “The Computer Certification Bulldog” PS — Use these resources to advance on the path to exam success!
Website: http://www.thebryantadvantage.
YouTube: http://www.youtube.com/user/cc
Video Boot Camps: https://www.udemy.com/u/chrisb (Free and almost-free courses there!)
Blog: http://thebryantadvantage.blogs