The Bryant Advantage CCNP SWITCH Study Guide
Chris Bryant, CCIE #12933
http://www.thebryantadvantage.com
Virtual LANs (VLANs)
Overview
Why We Create VLANs
Static VLANs
Dynamic VLANs
Trunking
ISL and IEEE 802.1q
Troubleshooting Trunks
The Native VLAN
Dynamic Trunking Protocol (DTP)
Trunking Modes
VLAN Database Mode
VLAN Design Guidelines
End-To-End And Local VLANs
Cliche #1: "If you don't use it, you lose it." Cliche #2: "Cliches become cliches because more often than not, they're true."
I can vouch for #1, especially when it comes to certification studies. If you just recently finished your CCNA, a great deal of the information in this section will seem very familiar; if it's been a while, it's likely you've lost some of the details -- the details you need to know to pass the SWITCH exam the first time around.

Even if you did recently get your CCNA and you feel comfortable with VLANs, don't skip this section. There's plenty of information here that wasn't in your CCNA studies, but must be part of your CCNP studies.

We'll start with a review of the VLAN basics, and there's nothing more basic than this question: "If Virtual LANs are so great, what's wrong with our good old physical LANs?"

Good question. Here's a good answer! (You know from previous studies that the symbol in the center of the following diagram is a switch - Cisco might tell you that on exam day, and again they might not.)
One common reason for creating VLANs is to prevent the excess traffic caused by a switch's default behavior when it receives a broadcast. One of the first switching concepts you learned was that a switch that receives a broadcast will forward it out every other port on the switch except the one that it was originally received on.

Here, we have five PCs each connected to their own switch port. One PC is sending a broadcast, and by default all other devices in the diagram will receive it. This network segment is one large broadcast domain. It's also known as a flat network topology.

Now you just might think, "Big deal. There are only five PCs there. How many broadcasts can there be?" It's true that there are only five PCs in this diagram - and it's also true that this is a good example, but it's not a real-world example. What if you had a 48-port Cisco switch and every port had a connected host? We'd have a broadcast being sent to 47 hosts every time a broadcast was received by the switch. Odds are that all those hosts don't need that particular packet, and propagating it to all of those other hosts has two drawbacks:

Unnecessary use of bandwidth
Unnecessary workload on the switch to process and send all of those broadcasts

Just as we broke up one big collision domain by connecting each host to a separate switch port (as opposed to a hub or repeater), we can divide this single large broadcast domain into multiple smaller broadcast domains by using VLANs.

When I first started studying networking, that sounded like something we wouldn't want to do. If we're trying to control broadcasts, why are we creating more broadcast domains? Wouldn't we be better off with just one broadcast domain? Creating multiple broadcast domains helps to limit the scope of the broadcast - in effect, fewer hosts will actually receive a copy of that broadcast. When a switch receives a broadcast packet from a host in one particular VLAN, that switch will forward that broadcast only via ports that are in the same VLAN. Those VLANs are our smaller broadcast domains, and
that's how having multiple broadcast domains is more efficient than having one big one. There is no official restriction on which ports you can put into a VLAN, but Cisco's best practice is to have one VLAN per IP subnet, and this is a best practice that works very well in the real world. The VLAN membership of a host depends on one of two factors:
With static VLANs, it's dependent on the port the host is connected to
With dynamic VLANs, it's dependent on the host's MAC address
Before we take a look at static VLANs, note that the difference between these two VLAN types is how the host is assigned to a VLAN. The terms "static" and "dynamic" do not refer to how the VLAN is actually created. You'll see what I mean in just a second. Let's get to the static VLANs.
By default, each of those hosts is in VLAN 1. VLAN 10 has been created and when one host in VLAN 10 sends a broadcast, the only hosts that will receive a copy of that broadcast are the other hosts in VLAN 10.

In networking, sometimes it seems that when we fix one problem, the fix results in another possible issue. That's certainly true here - not only will broadcasts not be forwarded between VLANs, but no traffic will be forwarded between VLANs. This may be exactly what we wanted, but we're going to have to introduce an OSI Model Layer Three device to perform routing between the two VLANs. We used to just have two "official" methods of allowing inter-VLAN communication, but now we have three:

"router on a stick"
Introducing a router to our network
Running a Layer 3 switch

The reason I say "official" is that in your CCNA studies, you learned about only the first two methods, even though L3 switches have been around for a few years now and are quite commonplace. If you studied for the CCNA with my materials, you know that I mentioned L3 switches to you so you wouldn't look silly in a job interview ("I'm quite sure if there was such a thing as an L3 switch, Mr. Interviewer, Chris Bryant would have said something about them."). I also made it clear that when it came to your CCNA exam, a switch is a layer 2 device and that's it. You now need to leave that thinking behind, and we'll spend plenty of time with L3 switches later in the course.

Every once in a while I run into a student who tells me "we don't use VLANs." If you're using a Cisco switch, you're using VLANs whether you know it or not! Let's run show vlan brief on a Cisco switch straight out of the box.

SW2#show vlan br

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------------
1    default                          active    Fa0/1, Fa0/2, Fa0/3, Fa0/4
                                                Fa0/5, Fa0/6, Fa0/7, Fa0/8
                                                Fa0/9, Fa0/10
1002 fddi-default                     active
1003 token-ring-default               active
1004 fddinet-default                  active
1005 trnet-default                    active
We're already using VLANs, even though we haven't configured one. By default, all Cisco switch ports are placed into VLAN 1. The default VLAN is also known as the native VLAN. The five VLANs shown here - 1, 1002, 1003, 1004, and 1005 - cannot be deleted.

SW2(config)#no vlan 1002
Default VLAN 1002 may not be deleted.
Note: While our Ethernet hosts are placed into VLAN 1 by default, all five of those VLANs just mentioned are default VLANs. You'll probably never use 1002 - 1005, but that's a good range to know for your exam.

There's one more reason that may lead you to create VLANs. If you have a network segment with hosts whose very existence should not be known by the rest of the network, just put these hosts into their own VLAN. (This comes in handy with secret servers, too. Not so much with secret squares or secret squirrels.) Unless you then intentionally make them known to the rest of the network, these hosts will not be known or reachable by hosts in other VLANs.

In the following example, all hosts are on the 172.12.123.0/27 subnet, with their host number as the final octet. Every host can ping every other host. For now. Heh heh heh. Each host is connected to the switch port that matches its host number. These hosts are on the same subnet to illustrate inter-VLAN connectivity issues. As mentioned previously, it's standard practice as well as Cisco's recommendation that each VLAN have its own separate IP subnet.
The problem right now is that every host will receive every broadcast packet sent out by every other host, since all switch ports are placed into VLAN 1 by default. Perhaps we only want Host 2 to receive any broadcast sent by Host 1. We can make this happen by placing them into another VLAN. We'll use VLAN 12 in this case. By placing switch ports 0/1 and 0/2 into VLAN 12, hosts that are not in that VLAN will not have broadcast packets originated in that VLAN forwarded to them by the switch.

SW1(config)#int fast 0/1
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 12
% Access VLAN does not exist. Creating vlan 12

SW1(config-if)#int fast 0/2
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 12
One of the many things I love about Cisco switches and routers is that if you have forgotten to do something, the Cisco device is generally going to remind you or, in this case, actually do it for you. (You'll see an exception to this later in this very section.) I placed port fast 0/1 into a VLAN that did not yet exist, so the switch created it for me.

Note: This VLAN was created dynamically, but that doesn't make it a dynamic VLAN. I'm not playing games with words there - remember, the terms "static" and "dynamic" when it comes to VLANs refer to the method used to assign hosts to VLANs, not the manner in which the VLAN was created.

It's easy to put a port into a static VLAN, but it does take two commands. By default, these ports are running in dynamic desirable trunking mode, meaning that the port is actively attempting to form a trunk. (More on these modes and trunks themselves later in the course.) The problem is that a trunk port belongs to all VLANs by default, and we want to put this port into a single VLAN. To do so, we run the switchport mode access command to make the port an access port, and access ports belong to one and only one VLAN. After doing that, we placed the port into VLAN 12 with the switchport access vlan 12 command.

After configuring VLANs, always run show vlan brief to verify the config. The output shows that ports 0/1 and 0/2 have been placed into VLAN 12.
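A representative show vlan brief after those commands would look something like this (a sketch - the remaining port assignments will vary with your switch model and cabling, and VLANs 1002 - 1005 are omitted here for brevity):

SW1#show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------------
1    default                          active    Fa0/3, Fa0/4, Fa0/5, Fa0/6
                                                Fa0/7, Fa0/8, Fa0/9, Fa0/10
12   VLAN0012                         active    Fa0/1, Fa0/2

Note that the automatically created VLAN 12 picks up the default name VLAN0012 until you name it yourself.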
Host 1 can still ping Host 2, but to ping the other hosts (or send any traffic to those other hosts), we have to get L3 involved in one of the three methods mentioned earlier:

router-on-a-stick
A router itself
Use an L3 switch
Even though Host 3 and Host 4 are on the same IP subnet as Host 1, they're in a different VLAN than Host 1 and therefore can't exchange pings with it.

HOST1#ping 172.12.123.2

Sending 5, 100-byte ICMP Echos to 172.12.123.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/4/8 ms

HOST1#ping 172.12.123.3

Sending 5, 100-byte ICMP Echos to 172.12.123.3, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

HOST1#ping 172.12.123.4

Sending 5, 100-byte ICMP Echos to 172.12.123.4, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
What if Hosts 1 and 2 still couldn't ping each other, even though they're obviously in the same subnet and the same VLAN? There are two places you should look that might not occur to you right away. First, check speed and duplex settings on the switch ports. Second, check the MAC table on the switch and make sure the hosts in question have an entry in the table to begin with. Nothing is perfect, not even a Cisco switch, and every once in a very great while the switch may not have learned a MAC address that it should know.

Throughout this chapter, I've used show vlan brief to check VLAN membership. The full show vlan command adds more detail, including each VLAN's type and MTU.
All the information you need for basic and intermediate VLAN troubleshooting is contained in show vlan brief, so I prefer to use that version of the command. You know that all ports are placed into VLAN 1 by default, and all ports in the above configuration except 0/1 and 0/2 are indeed in VLAN 1.

In the more detailed field at the bottom of the show vlan output, note that the default VLAN type set for VLANs 1 and 12 is "enet", short for ethernet. The other VLANs are designed for use with FDDI and Token Ring, and you can see the defaults follow that designation. The only other default seen here is the MTU size of 1500.

Notice that all the VLAN-related configuration has been placed on the switch - we haven't touched the hosts. With static VLANs, the host devices will assume the VLAN membership of the port they're connected to. The hosts don't even know about the VLANs, nor do they care.

By the way, if you just want to see the ports that belong to a specific VLAN, run the command show vlan id followed by the VLAN number. This command shows you the ports that belong to that VLAN, the status of those ports, the MTU of the VLAN, and more.

SW1#show vlan id ?
  WORD  ISL VLAN IDs 1-1005
SW1#show vlan id 5

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------------
5    VLAN0005                         active    Fa0/5, Fa0/11, Fa0/12

VLAN Type  SAID       MTU   Parent RingNo BridgeNo Stp  BrdgMode Trans1 Trans2
---- ----- ---------- ----- ------ ------ -------- ---- -------- ------ ------
5    enet  100005     1500  -      -      -        -    -        0      0

Remote SPAN VLAN
----------------
Disabled

Primary Secondary Type              Ports
------- --------- ----------------- ------------------------------------------

SW1#
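As a quick sketch of the two "hidden" troubleshooting checks mentioned a moment ago - speed/duplex and the MAC table - these two commands cover them (interface number illustrative; on some older platforms the second command is show mac-address-table):

SW1#show interfaces fast 0/1 status
SW1#show mac address-table dynamic

The first displays the port's speed and duplex settings; the second confirms the switch has actually learned the hosts' MAC addresses on the expected ports.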
The actual configuration of dynamic VLANs is out of the scope of the SWITCH exam, but as a CCNP you need to know the basics of VMPS - a VLAN Membership Policy Server.

When you move a user from one port to another using static VLANs, you have to change the configuration of the switch to reflect these changes. Using VMPS results in these changes being performed dynamically, because the port's VLAN membership is decided by the source MAC address of the device connected to that port. (Yet another reason that the first value a switch looks at on an incoming frame is the source MAC address.)

VMPS uses a TFTP server to help in this dynamic port assignment scheme. A database on the TFTP server that maps source MAC addresses to VLANs is downloaded to the VMPS server, and that downloading occurs every time you power cycle the VMPS server. VMPS uses UDP to listen to client requests.

An interesting default of VMPS is that when a port receives a dynamic VLAN assignment, PortFast is automatically enabled for that port! There's no problem with PortFast being turned off on that port if you feel it necessary, but keep this autoenable in mind.

What if we had to move Host 1's connection to the switch to port 0/6? With static VLANs, we'd have to connect to the switch, configure the port as an access port, and then place the port into VLAN 12. With VMPS, the only thing we'd have to do is reconnect the cable to port 0/6, and the VMPS would dynamically place that port into VLAN 12. When dynamic VLANs are in use, the port number isn't important - the MAC address of the host connected to the port is the deciding factor regarding VLAN membership.

I urge you to do additional reading regarding VMPS. It's a widely used switching service, and it's a good idea to know the basics. Use your favorite search engine for the term configuring vmps and you'll quickly find some official Cisco documentation on this topic.

Some things to watch out for when configuring VMPS:
The VMPS server has to be configured before configuring the ports as dynamic.
PortFast is enabled by default when a port receives a dynamic VLAN assignment.
If a port is configured with port security, that feature must be turned off before configuring a port as dynamic.
Trunking ports cannot be made dynamic ports, since by definition trunking ports must belong to all VLANs. Trunking must be disabled to make a port a dynamic port.
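To give you a feel for the client side (a sketch only - the exact syntax varies by platform, and the server address here is strictly hypothetical), a VMPS client port is configured along these lines:

SW1(config)#vmps server 172.12.123.100 primary
SW1(config)#interface fast 0/6
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan dynamic

With switchport access vlan dynamic configured, the port queries the VMPS for its VLAN assignment when a host's source MAC address is seen on the port.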
Final Time I'll Mention This: With static VLANs, the host's VLAN membership is the VLAN to which its switch port has been assigned. With dynamic VLANs, it is dependent upon the host device's MAC address. As for the relation between VLANs and subnets, it's Cisco's recommendation that every VLAN be a separate subnet.

Trunking
It's highly unlikely that all members of a particular VLAN are all going to be connected to one switch. We're much more likely to see something like this:
To allow the hosts in these two VLANs to communicate with other hosts in the same VLAN, we have to create a trunk on that physical connection between the two switches. A trunk is simply a point-to-point connection between two physically connected switches that allows frames to flow between the switches. By default, a trunk port is a member of all VLANs, so traffic for any and all VLANs can travel across this trunk, and that includes broadcast traffic. The beauty and danger of trunks is that by default, our switch ports are actively attempting to form a trunk. Generally, that's a good thing, but we'll see later in the course that this can lead to trouble.
On these switches, the ports on each end of the connection between the switches are actively attempting to trunk. Therefore, the only action needed from us is to physically connect them with a crossover cable. In just a few seconds, the port light turns green and the trunk is up and running. The command show interface trunk will verify trunking.
From left to right, you can see...

the ports that are actually trunking (if none are, you'll see no output)
the trunking mode each port is using
the encapsulation type
the status of the trunk, either "trunking" or "not trunking"
the "native vlan"

Further down in the output, you can see the VLAN traffic that is allowed to go across the trunk. Just as important is where you will not see trunk ports listed. When we took our first look at the show vlan brief command earlier in this section, there was something a little odd...

SW2#show vlan br

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------------
1    default                          active    Fa0/1, Fa0/2, Fa0/3, Fa0/4
                                                Fa0/5, Fa0/6, Fa0/7, Fa0/8
                                                Fa0/9, Fa0/10
1002 fddi-default                     active
1003 token-ring-default               active
1004 fddinet-default                  active
1005 trnet-default                    active
There are 10 ports shown, but this is a 12-port switch. Note that ports 0/11 and 0/12 do not appear in the output of show vlan brief, and if you ran the full show vlan command, they wouldn't show up there, either. The ports didn't just disappear, thankfully - ports that are trunking will not appear in the output of show vlan or show vlan brief. Always run both show vlan and show interface trunk when verifying your switch configuration or troubleshooting.

Now we know that frames can be sent across a trunk - but how does the receiving switch know the destination VLAN? The frames are tagged by the transmitting switch with a VLAN ID, reflecting the number of the VLAN whose member ports should receive this frame. When the frame arrives at the remote switch, that switch will examine this ID and then forward the frame appropriately.

You may have had a CCNA flashback when I mentioned "dot1q" earlier. There were quite a few differences between the trunking protocols ISL and dot1q, so let's review those before we examine a third trunking protocol that you might not have seen during your CCNA studies. For a trunk to form successfully, the ports must agree on the speed, the duplex setting, and the encapsulation type. Many Cisco switches offer the choice of ISL and IEEE 802.1q - and your exam just might discuss these encap types!

Inter-Switch Link Protocol (ISL), IEEE 802.1q ("dot1q"), and DTP
ISL and dot1q are both point-to-point protocols. That's about it for the similarities. Discussing the differences will take longer. ISL is Cisco-proprietary, making it unsuitable for the dreaded "multivendor environment". That's the least of our worries with ISL, though. ISL will place both a header and trailer onto the frame, encapsulating it.
That doesn't sound like a big deal, but when you think of adding that overhead to every single frame, it then becomes a big deal - especially when we compare ISL's overhead to dot1q. Since ISL places both a header and trailer, ISL is sometimes referred to as "double tagging".

But wait - there's more! The default VLAN is also known as the "native VLAN", and ISL does not use the concept of the native VLAN. Dot1q does use this concept and will not place any additional overhead onto a frame destined for the native VLAN. ISL ignores the native VLAN concept and therefore will encapsulate every frame.

The 26-byte header that is added to the frame by ISL contains the VLAN ID; the 4-byte trailer contains a Cyclic Redundancy Check (CRC) value. The CRC is a frame validity scheme that checks the frame's integrity. In turn, this encapsulation leads to another potential issue. ISL encapsulation adds 30 bytes total to the size of the frame, potentially making frames too large for the switch to handle. The maximum size for an Ethernet frame is 1518 bytes. Frames larger than that are called giants; if the frame is just a few bytes over that limit, it's a baby giant. For that reason, if one trunking switch is using ISL and its remote partner is not, the remote partner will consider the ISL-encapsulated frames giants.

In contrast to ISL, dot1q plays well with others. The only additional overhead dot1q places onto a frame is a 4-byte tag, resulting in less overhead than ISL and a maximum frame size of 1522 bytes. If the frame is destined for the native VLAN, even that small tag isn't added. Since the dot1q tag is only 4 bytes in size, and isn't even placed on every frame, using dot1q lessens the chance of oversized frames. When the remote switch receives an untagged frame, the switch knows that these untagged frames are destined for the native VLAN.

Other dot1q facts you should be familiar with:
Dot1q actually embeds the tagging information into the frame itself. You'll occasionally hear dot1q referred to as internal tagging.
Dot1q is the "open standard" or "industry standard" trunking protocol and is suitable for a multivendor environment.
Since dot1q adds only one tag, it's sometimes called "single tagging".
Note: There's a 4-byte addition in both ISL and dot1q, but they're located in different parts of the frame:

ISL: 4-byte trailer (with CRC value)
dot1q: 4-byte tag inserted into the frame
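For background, here's how the 4-byte dot1q tag itself breaks down - this level of bit detail goes beyond what we need here, but it does explain why VLAN numbers run from 1 through 4094:

TPID     2 bytes   set to 0x8100, marking the frame as 802.1q-tagged
PCP      3 bits    priority bits (Class of Service)
CFI      1 bit
VLAN ID  12 bits   2^12 = 4096 values, with 0 and 4095 reserved, leaving 1 - 4094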
Troubleshooting Trunks

I've created a lot of trunks over the years, and I've bumped into quite a few real-world "gotchas" to share with you.
For trunks to work properly, the port speed and port duplex setting should be the same on the two trunking ports.
ISL switches don't care about the native VLAN setting, because they don't use the native VLAN to begin with.
Giants are frames that are larger than 1518 bytes, and these can occur on ISL since ISL adds 30 overall bytes to the frame. Some Catalyst switches have Cisco-proprietary hardware that allows them to handle the larger frames. Check the documentation for your switch to see if this is the case for your model.
Dot1q does add 4 bytes to the frame, but thanks to IEEE 802.3ac, the maximum frame length can be extended to 1522 bytes. (The opposite of a giant is a runt. While giants are too large to be successfully transmitted, runts are frames less than 64 bytes in size.)
Both switches must be in the same VTP domain for a trunk to form. Watch those domain names, they're case-sensitive. It's also possible to form a trunk between two switches that don't belong to any VTP domain, but they both have to not belong to one.
If you're working on a multilayer switch (also called a "Layer 3 switch"), make sure the port you want to trunk is a Layer 2 port by configuring the interface-level command switchport on that port.
You can configure a 10, 100, or 1000 Mbps interface as a trunk.
Changing the native VLAN on one switch does not dynamically change the native VLAN on a remote trunking partner.
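Tying the multilayer-switch point together: making an L3 switch port trunk means making it a Layer 2 port first. A sketch (interface number illustrative, and the encapsulation command appears only on models that support both ISL and dot1q):

SW1(config)#interface gigabitethernet 0/1
SW1(config-if)#switchport
SW1(config-if)#switchport trunk encapsulation dot1q
SW1(config-if)#switchport mode trunk

The switchport command makes the port a Layer 2 port; only then will the trunking commands behave as they do on a fixed L2 switch.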
Most Cisco switches used to support both ISL and dot1q, but that is no longer the case. For example, the popular 2950 switches don't support ISL. Make sure to check Cisco's online documentation site at www.cisco.com/univercd for a particular switch model if you must have one particular trunking protocol.

How Do Access Ports Handle Encapsulation And Tagging?

Easy -- they don't. Since access ports belong to one and only one VLAN, there's no need to encapsulate their frames or tag them with VLAN ID information.
Changing The Native VLAN

By default, the native VLAN is VLAN 1. The native VLAN is the VLAN the port will belong to when it is not trunking. The native vlan can be changed with the switchport trunk native vlan command, but you should be prepared for an error message very quickly after configuring it on one side of the trunk. We'll change the native vlan setting on fast 0/11 on one side of an existing trunk and see what happens.

SW1(config-if)#switchport trunk native vlan 12

1d21h: %SPANTREE-2-RECV_PVID_ERR: Received BPDU with inconsistent peer vlan id 1 on FastEthernet0/11 VLAN12.
1d21h: %SPANTREE-2-BLOCK_PVID_PEER: Blocking FastEthernet0/11 on VLAN0001. Inconsistent peer vlan.
1d21h: %SPANTREE-2-BLOCK_PVID_LOCAL: Blocking FastEthernet0/11 on VLAN0012. Inconsistent local vlan.
1d21h: %CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch discovered on FastEthernet0/11 (12), with SW2 FastEthernet0/11 (1).

SW1#show int fast 0/11 trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      12
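As a sketch of the fix (assuming the remote port is SW2's fast 0/11, as the CDP message above indicates), make the far side's native VLAN match:

SW2(config)#int fast 0/11
SW2(config-if)#switchport trunk native vlan 12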
The trunk itself doesn't come down, but those error messages will continue until you have configured the native vlan on the remote switch port to be the same as that on the local port it's trunking with. When you're running dot1q, your choice of native VLAN is particularly important, since dot1q doesn't tag frames destined for the native VLAN.

Manually Configuring Trunking Protocols

To manually configure a trunk port to run ISL or dot1q, use the switchport trunk encapsulation command.

Rack1SW1(config-if)#switchport trunk encapsulation ?
  dot1q      Interface uses only 802.1q trunking encapsulation when trunking
  isl        Interface uses only ISL trunking encapsulation when trunking
  negotiate  Device will negotiate trunking encapsulation with peer on interface
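For example, hardcoding dot1q on a local trunk port looks like this (interface number illustrative; the remote port needs the matching encapsulation as well):

SW1(config)#int fast 0/11
SW1(config-if)#switchport trunk encapsulation dot1q
SW1(config-if)#switchport mode trunk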
Notice that there's a third option, negotiate. The trunk ports will then negotiate between ISL and dot1q, and naturally it must be a protocol that both ports support. If the negotiating ports support both protocols, ISL will be selected.

By the way, if you use IOS Help to display your switch's encapsulation choices, and there aren't any, that's a pretty good sign that your switch supports only dot1q!

SW1(config)#interface fast 0/11
SW1(config-if)#switchport trunk encapsulation ?
% Unrecognized command
There's a third trunking protocol we need to be aware of. The Dynamic Trunking Protocol is a Cisco-proprietary point-to-point protocol that actively attempts to negotiate a trunk line with the remote switchport. This sounds great, but there is a cost in overhead - DTP frames are transmitted every 30 seconds.

If you decide to configure a port as a non-negotiable trunk port, there's no need for the port to send DTP frames. Also, if there's a device on the other end of the line that can't trunk at all - a firewall, for example - there's no need to send DTP frames. DTP can be turned off at the interface level with the switchport nonegotiate command, but as you see below, you cannot turn DTP off until the port is no longer in dynamic desirable trunking mode.

SW2(config)#int fast 0/8
SW2(config-if)#switchport nonegotiate
Command rejected: Conflict between 'nonegotiate' and 'dynamic' status.

SW2(config-if)#switchport mode ?
  access   Set trunking mode to ACCESS unconditionally
  dynamic  Set trunking mode to dynamically negotiate access or trunk mode
  trunk    Set trunking mode to TRUNK unconditionally

SW2(config-if)#switchport mode trunk
SW2(config-if)#switchport nonegotiate
You can verify DTP operation (or non-operation) with show dtp.

SW1#show dtp

Global DTP information
        Sending DTP Hello packets every 30 seconds
        Dynamic Trunk timeout is 300 seconds
        4 interfaces using DTP
There is a show dtp interface command as well, but it's extremely verbose. It will show you which interfaces are running DTP, which the basic show dtp command will not do.

While we've got those trunking modes in front of us, let's examine exactly what's going on with each one.

Trunk mode means just that - this port is in unconditional trunk mode and cannot be an access port. Since this port cannot negotiate, it's standard procedure to place the remote trunk port in trunk mode. Turning off DTP when you place a port in trunk mode is a great idea, because there's no use in sending negotiation frames every 30 seconds if no negotiation is necessary.

Dynamic desirable is the default setting for most Cisco switch ports today. If the local switch port is running dynamic desirable and the remote switch port is running in trunk, dynamic desirable, or dynamic auto mode, a trunk will form. This is because a port in dynamic desirable mode is sending and responding to DTP frames. If you connect two 2950s with a crossover cable, a trunk will form in less than 10 seconds with no additional configuration needed.

Dynamic auto is the "oddball" trunking mode. A port configured as dynamic auto (often called simply "auto") will not actively negotiate a trunk, but will accept negotiation begun by the remote switch. As long as the remote trunk port is configured as dynamic desirable or trunk, a trunk will form.

It's important to note that the trunk mode does not have to match between two potential trunk ports. One port could be in dynamic desirable and the other in trunk mode, and the trunk would come up. Is there a chance that two ports that are both in one of these three modes will not successfully form a trunk? Yes - if they're both in dynamic auto mode.

You can expand the show interface trunk command we examined earlier in this section to view the trunking mode of a particular interface. Port 0/11 is running in dynamic desirable mode.
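Here's a representative version of that expanded output for a port in dynamic desirable mode (a sketch - your encapsulation and native VLAN values may differ):

SW2#show interface fast 0/11 trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1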
We can change the mode with the switchport mode command. By changing the port to trunk mode, the mode is "on".

SW2(config)#int fast 0/11
SW2(config-if)#switchport mode trunk

SW2#show interface fast 0/11 trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      on           802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      1-4094

Port        Vlans allowed and active in management domain
Fa0/11      1

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      1
When we looked at the options for switchport mode, did you notice that there is no "off" setting?
SW2(config-if)#switchport mode ?
  access   Set trunking mode to ACCESS unconditionally
  dynamic  Set trunking mode to dynamically negotiate access or trunk mode
  trunk    Set trunking mode to TRUNK unconditionally
When a port is configured as an access port, trunking is unconditionally turned off - switchport mode access is effectively the "off" setting. Here's the show interface trunk command displaying the information for the port leading to HOST 1 after configuring the port as an access port.

SW1#show interface fast 0/1 trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/1       off          802.1q         not-trunking  1

Port        Vlans allowed on trunk
Fa0/1       12

Port        Vlans allowed and active in management domain
Fa0/1       12

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/1       12
Through the various show commands we've used in this section, you might have noticed that trunk ports allow traffic for VLANs 1 - 4094 to cross the trunk line. This is the default, but it can be changed with the switchport trunk allowed vlan command. The various options with this command do take a little getting used to, so let's take a closer look at them.

SW1(config-if)#switchport trunk allowed vlan ?
  WORD    VLAN IDs of the allowed VLANs when this port is in trunking mode
  add     add VLANs to the current list
  all     all VLANs
  except  all VLANs except the following
  none    no VLANs
  remove  remove VLANs from the current list
except - Follow this option with the VLANs whose traffic should not be allowed across the trunk. We'll configure interface fast 0/11 and 0/12 to not trunk for VLAN 1000 and look at the results with show interface trunk.

SW1(config)#interface fast 0/11
SW1(config-if)#switchport trunk allowed vlan except 1000
SW1(config-if)#interface fast 0/12
SW1(config-if)#switchport trunk allowed vlan except 1000

SW1#show interface trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1
Fa0/12      desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      1-999,1001-4094
Fa0/12      1-999,1001-4094

Port        Vlans allowed and active in management domain
Fa0/11      1,12
Fa0/12      1,12

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      1,12
Fa0/12      12
VLAN 1000 is not allowed to trunk through interfaces fast 0/11 and fast 0/12. To allow VLAN 1000 to trunk through these interfaces again, we'll use the add option of this command. (To remove additional VLANs, we would use remove.)

SW1(config)#int fast 0/11
SW1(config-if)#switchport trunk allowed vlan add 1000
SW1(config-if)#int fast 0/12
SW1(config-if)#switchport trunk allowed vlan add 1000

SW1#show interface trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1
Fa0/12      desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      1-4094
Fa0/12      1-4094

Port        Vlans allowed and active in management domain
Fa0/11      1,12
Fa0/12      1,12

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      1,12
Fa0/12      12
VLAN 1000 is again allowed to trunk through these two interfaces. The more drastic choices are all and none. To disable trunking for all VLANs, the none option would be used. To enable trunking for all VLANs again, we'll use the all option.

SW1(config)#int fast 0/11
SW1(config-if)#switchport trunk allowed vlan none
SW1(config-if)#int fast 0/12
SW1(config-if)#switchport trunk allowed vlan none

SW1#show interface trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1
Fa0/12      desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      none
Fa0/12      none

Port        Vlans allowed and active in management domain
Fa0/11      none
Fa0/12      none

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      none
Fa0/12      none
SW1(config)#int fast 0/11
SW1(config-if)#switchport trunk allowed vlan all
SW1(config-if)#int fast 0/12
SW1(config-if)#switchport trunk allowed vlan all

SW1#show interface trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1
Fa0/12      desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      1-4094
Fa0/12      1-4094

Port        Vlans allowed and active in management domain
Fa0/11      1,12
Fa0/12      1,12

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      none
Fa0/12      none
Naming VLANs

You can give your VLAN a more intuitive name with the name command.

SW1(config)#vlan 10
SW1(config-vlan)#name ENGINEERING
Running show vlan brief verifies that the VLAN has been named...

SW1#show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Fa0/1, Fa0/2, Fa0/3, Fa0/4
                                                Fa0/5, Fa0/6, Fa0/7, Fa0/8
                                                Fa0/9, Fa0/10, Fa0/13, Fa0/14
                                                Fa0/15, Fa0/16, Fa0/17, Fa0/18
                                                Fa0/19, Fa0/20, Fa0/21, Fa0/22
                                                Fa0/23, Fa0/24
10   ENGINEERING                      active
... but if you want to further configure the VLAN, you must do so by number, not by name.

SW1(config)#vlan ENGINEERING
Command rejected: Bad VLAN list - character #1 is a non-numeric character ('E').

SW1(config)#vlan 10
SW1(config-vlan)#
VLAN Database Mode

You'll notice that all of the configurations in this study guide use the CLI commands to configure VLANs. There is a second way to do so, and that's using VLAN database mode. I personally don't like using this mode, because it's very easy to save your changes incorrectly - which of course means that your changes aren't saved! It's always a good idea to know how to do something more than one way in Ciscoland, though, so let's take a look at this mode. You enter this mode by typing vlan database at the command prompt.

SW1#vlan database
SW1(vlan)#
The prompt changed appropriately, so let's create VLAN 30.

SW1(vlan)#vlan 30
VLAN 30 added:
    Name: VLAN0030
No problem! Let's exit this mode the way we always do, by using ctrl-z, and then verify the creation of the VLAN. To save some room, I'll show all VLANs except VLAN 1.

SW1(vlan)#^Z
SW1#show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
10   ENGINEERING                      active
1002 fddi-default                     active
1003 token-ring-default               active
1004 fddinet-default                  active
1005 trnet-default                    active
Do you see a VLAN 30 there? I sure don't. And no matter how many times you do what we just did, you'll never see VLAN 30 - because vlan database mode requires you to type APPLY to apply your changes, or to type EXIT to leave this mode and save changes. I'll do both here - notice that when you exit by typing EXIT, the APPLY is, well, applied!

SW1(vlan)#vlan 30
VLAN 30 added:
    Name: VLAN0030
SW1(vlan)#apply
APPLY completed.
SW1(vlan)#exit
APPLY completed.
Exiting....

SW1#show vlan br

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
10   ENGINEERING                      active
30   VLAN0030                         active
1002 fddi-default                     active
1003 token-ring-default               active
1004 fddinet-default                  active
1005 trnet-default                    active
Cisco switches were actually giving a message a few years ago anytime you used the vlan database command that this mode was being phased out. I imagine they got tired of helpdesk calls from people who didn't know about APPLY and EXIT. (Did you notice that when we left this mode with ctrl-z, the switch didn't tell us the changes were not being applied?) Cisco seems to have changed their minds about getting rid of this mode, and while you probably won't see it on your exam, it's a good idea to know these details for dealing with real-world networks.

VLAN Design

Learning to design anything from a class or study guide can be frustrating, because like snowflakes, no two networks are alike. What works well for "Network A" may be inefficient for "Network B". You need to know about the following VLAN design types for both the exam and the real world, but as always you've got to be able to apply your critical thinking skills to your particular real-world network's needs.

In my CCNP ROUTE Study Guide's discussion of Cisco's Three-Layer Hierarchical Networking Model, I mention that it's important to let the Distribution layer handle the "little things" in order to allow the core switches to do what they do best - switch! With VLAN design, we're looking at much the same scenario. If we don't control broadcast and multicast traffic, it can soon affect our network negatively, particularly if we allow it to flow through the core switches. Your VLAN scheme should keep as many broadcasts and multicasts away from the core switches as possible.

There are two major VLAN designs, end-to-end and local. Watch the details here, as one follows the 80/20 rule and the other follows the 20/80 rule.

End-to-End and Local VLANs

With end-to-end VLANs, the name is the recipe, as end-to-end VLANs span the entire network. The physical location of the user does not matter; a user is assigned to a single VLAN, and that VLAN remains the same no matter where the user is. The very nature of an end-to-end VLAN and its spanning of the entire network makes working with this VLAN type a challenge, to say the least. End-to-end VLANs can come in handy as a security tool and/or when the hosts have similar resource requirements - for example, if you had certain hosts across the network that needed access to a particular network resource, but you didn't even want your other hosts to know of the existence of that resource.
End-to-end VLANs should be designed with the 80/20 rule in mind, where
80 percent of the local traffic stays within the local area and the other 20 percent will traverse the network core en route to a remote destination. End-to-end VLANs must be accessible on every access-layer switch to accommodate mobile users. Many of today's networks don't lend themselves well to this kind of configuration. The following network diagram is simplified, but even this network would be difficult to configure with end-to-end VLANs if the hosts need connectivity to the Internet and/or corporate servers located across a WAN. With Internet access becoming more of a requirement than a luxury for today's end users, 80/20 traffic patterns aren't seen as often as they once were.
Local VLANs are designed with the 20/80 rule in mind. Local VLANs
assume that 20 percent of traffic is local in scope, while the other 80 percent will traverse the network core. While physical location is unimportant in end-to-end VLANs, users are grouped by location in local VLANs. More and more networks are using centralized data repositories, such as server farms - and even in the simplified network diagram above, the end user must go across a WAN to reach the server farm, another reason that 80/20 traffic patterns aren't seen as often as they were in the past.

The Mystery And Mayhem Of The VLAN.DAT File

I always say that you should never practice your CCNP skills at work, and that is definitely true with this part of the course. Having said that, I get regular emails from CCNP candidates working with home labs who run into an interesting and/or frustrating situation. Let's take a look at the situation first and then come up with a solution. Let's say you've been practicing on a Cisco switch and decide to erase the config. You were working with three VLANs...

SW1#show vlan brief

VLAN Name
---- ------------------
1    default
100  VLAN0100
200  VLAN0200
300  VLAN0300
... and you want to start your labs over, which means getting rid of those VLANs. You run write erase to erase the switch startup config and then reload the switch...

SW1#write erase
Erasing the nvram filesystem will remove all configuration files! Continue? [confirm]
[OK]
Erase of nvram: complete
SW1#
2d22h: %SYS-7-NV_BLOCK_INIT: Initalized the geometry of nvram
SW1#reload
Proceed with reload? [confirm]
.. and since there's no startup configuration, you're prompted to go into Setup Mode when the switch comes back up.
Would you like to enter the initial configuration dialog? [yes/no]:
After answering no, we're placed at the user exec prompt. We put a few basic commands on the switch...

Switch>enable
Switch#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#hostname SW1
SW1(config)#line con 0
SW1(config-line)#logging synchronous
SW1(config-line)#exec-t 0 0
SW1(config-line)#^Z
SW1#wr
Building configuration...
[OK]
SW1#
... and just to verify, we run show vlan brief...

SW1#show vlan brief

VLAN Name
---- --------------
1    default
100  VLAN0100
200  VLAN0200
300  VLAN0300
... and the VLANs are still there. The first time you run into this, you might think you somehow erased the config incorrectly, so you do it again. At least I did. But I don't care how many times you erase the switch config, those VLANs are still gonna be there. Why? The file that contains the actual VLAN information isn't the startup config file - it's the VLAN.DAT file, contained in Flash. (For clarity's sake, I've removed the date you'll see next to each file name.)

SW1#show flash

Directory of flash:/

  2  -rwx     2980731    c2950-i6q4l2-mz.121-19.EA1c.bin
  3  -rwx         962    config.text
  4  -rwx         317    env_vars
  5  -rwx           5    private-config.text
  6  -rwx         736    vlan.dat
  7  -rwx         110    info
  8  drwx        2432    html
 85  -rwx         110    info.ver
It's actually the VLAN.DAT file you need to erase to get rid of your switch's VLAN information. The command to do so isn't difficult at all, but the prompt is a little tricky. The command is delete vlan.dat:

SW1#delete vlan.dat
Delete filename [vlan.dat]?

As you'd expect, you're asked whether you really want to delete that particular file. Do NOT answer "yes" or "y" - just hit the Enter key to accept the default answer contained in the brackets. After you do that, you're asked one more question:

SW1#delete vlan.dat
Delete filename [vlan.dat]?
Delete flash:vlan.dat? [confirm]

You're then asked to confirm the delete. Again, don't answer "Y" or "yes" - just accept the default answer by hitting the Enter key.

SW1#delete vlan.dat
Delete filename [vlan.dat]?
Delete flash:vlan.dat? [confirm]
SW1#

You won't see a "file deleted" message - you'll just be put back at the prompt. If you don't have a vlan.dat file in Flash, you will see this message:

SW1#delete vlan.dat
Delete filename [vlan.dat]?
Delete flash:vlan.dat? [confirm]
%Error deleting flash:vlan.dat (No such file or directory)
Now, you're probably thinking I made a pretty big deal out of accepting those default answers and not entering "yes". And you're right, I did - and here's why:

SW1#delete vlan.dat
Delete filename [vlan.dat]? y
Delete flash:y? [confirm]
%Error deleting flash:y (No such file or directory)
If you answer "Y" to "Delete filename?", you're telling the IOS to delete a file actually named "y", which is not going to give us the results we want. After deleting the vlan.dat file, reloading the switch, and adding the same commands as we did before...

SW1#show vlan brief

VLAN Name
---- -----------------------
1    default
1002 fddi-default
1003 token-ring-default
1004 fddinet-default
1005 trnet-default

... the VLANs are truly gone!

Here's a link to a video on my YouTube Cisco Certification Video Channel that shows this information on a live Cisco switch: http://bit.ly/c7wx5G

You can also find over 400 free Cisco CCNA and CCNP videos on my YouTube channel: http://www.youtube.com/user/ccie12933

Enjoy!

Copyright © 2010 The Bryant Advantage. All Rights Reserved.
VLAN Trunking Protocol (VTP) Overview The Need For VTP Configuring VTP VTP Modes VTP Advertisement Process Preventing VTP Synchronization Issues VTP Advertisement Types VTP Features VTP Versions VTP Secure Mode
As a CCNP candidate, you know that when it comes to Cisco technologies, there's always something new to learn! You learned about the VLAN Trunking Protocol (VTP) in your CCNA studies, but now we're going to review a bit and then build on your knowledge of this important switching technology.
Why Do We Need VTP?

VLAN Trunking Protocol (VTP) allows each switch in a network to have an overall view of the active VLANs. VTP also allows network administrators to restrict the switches upon which VLANs can be created, deleted, or modified. In our first example, we'll look at a simple two-switch setup and then add to the network to illustrate the importance of VTP.
Here, the only two members of VLAN 10 are found on the same switch. We can create VLAN 10 on SW1, and SW2 really doesn't need to know about this new VLAN.
We know that the chances of all the hosts in a VLAN being on one switch are very remote! More realistic is a scenario like the following, where the center or "core" switch has no ports in a certain VLAN, but traffic destined for that VLAN will be going through that very core switch.
SW2 doesn't have any hosts in VLAN 10, but for VLAN 10 traffic to successfully travel from SW1 to SW3 and vice versa, SW2 has to know about VLAN 10's existence.
SW2 could be configured manually with VLAN 10, but that's going to get very old very fast. Considering that most networks have a lot more than three switches, statically configuring every VLAN on every switch would soon take up a lot of your time, as would troubleshooting the network when you invariably leave a switch out! Luckily, the major feature of VTP is the transmission of VTP advertisements that notify neighboring switches in the same domain of any VLANs in existence on the switch sending the advertisements. The key phrase there is "in the same domain". By default, Cisco switches are not in a VTP domain. Before working with VTP in a home lab or production network, run show vtp status. (The official term for a VTP domain is "management domain", but we'll just call them domains in this section. The only place you'll probably see that full phrase is on the exam.)
There's nothing next to "VTP Domain Name", so a VTP domain has not yet been configured. We'll now change that by placing this switch into a domain called CCNP. Watch this command - it is case sensitive. By far, a mistyped VTP domain name is the #1 reason your network's switches aren't seeing the VLANs you think they should be seeing. What are the other reasons? You'll see later in this section. Let's get that VTP domain set....
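As a sketch, here's roughly what checking and then setting the domain looks like on a typical IOS switch - the readout fields and their exact wording vary by platform and IOS version, so treat this as illustrative rather than exact:

```
SW2#show vtp status
VTP Version                     : 2
Configuration Revision          : 0
VTP Operating Mode              : Server
VTP Domain Name                 :

SW2#conf t
SW2(config)#vtp domain CCNP
Changing VTP domain name from NULL to CCNP
```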
After configuring the VTP domain "CCNP" on SW2, SW1 is also placed into that domain. Each switch can now successfully advertise its VLAN information to the other, and as switches are added to this VTP domain, those switches will receive these advertisements as well. A Cisco switch can belong to one and only one VTP domain.

VTP Modes

In the previous show vtp status readouts, the VTP Operating Mode is set to Server. The more familiar term for VTP Operating Mode is simply VTP Mode, and Server is the default. It's through the usage of VTP modes that we can place limits on which switches can delete and create VLANs.
It's not unusual for edge switches such as SW1 and SW3 to be available to more people than they should be. If SW2 is the only switch that's physically secure, SW2 should be the only VTP Server. Let's review the VTP modes and then configure SW1 and SW3 appropriately. In Server mode, a VTP switch can be used to create, modify, and delete VLANs. This means that a VTP deployment has to have at least one Server, or VLAN creation will not be possible. This is the default setting
for Cisco switches.

Switches running in Client mode cannot create, modify, or delete VLANs. Clients do listen for VTP advertisements and act accordingly when VTP advertisements notify the Client of VLAN changes.

VTP Transparent mode actually means that the switch isn't fully participating in the VTP domain. (Bear with me here.) Transparent VTP switches don't synchronize their VTP databases with other VTP speakers; they don't even advertise their own VLAN information. Therefore, any VLANs created on a Transparent VTP switch will not be advertised to other VTP speakers in the domain, making them locally significant only. For that reason, using Transparent VTP mode is another of the three major reasons I've run into that your VLANs aren't being advertised as thoroughly as you'd like. The first reason was a mistyped domain name - and number three is coming up.

This mode can come in handy in certain situations, but be aware of the differences between Transparent and Server mode. There are two versions of VTP, V1 and V2, and the main difference between the two versions affects how a VTP Transparent switch handles an incoming VTP advertisement.

VTP Version 1: The Transparent switch will forward that advertisement's information only if the VTP version number and domain name on that switch are the same as those of the downstream switches.

VTP Version 2: The Transparent switch will forward VTP advertisements via its trunk port(s) even if the domain name does not match.

To ensure that no one can create VLANs on SW1 and SW3, we'll configure both of them as VTP Clients. SW1's configuration and the resulting output of show vtp status is shown below.
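A hedged sketch of what that client configuration looks like - the confirmation message and status fields vary slightly by IOS version:

```
SW1(config)#vtp mode client
Setting device to VTP CLIENT mode.

SW1#show vtp status
VTP Operating Mode              : Client
VTP Domain Name                 : CCNP
```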
Attempting to create a VLAN on a VTP client results in the following message:
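On most IOS versions the rejection reads roughly like this (the exact wording can vary by release):

```
SW1(config)#vlan 100
VTP VLAN configuration not allowed when device is in CLIENT mode.
```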
This often leads to a situation where only the VTP Clients have ports that belong to a yet-to-be-created VLAN, but the VLAN still has to be created on the VTP Server. VLANs can be created and deleted in Transparent mode, but those changes aren't advertised to other switches in the VTP domain. Also, switches do not advertise their VTP mode.

Which Switches Should Be Servers, Which Should Be Clients?

You have to decide this for yourself in your production network, but I will share a simple method that's always worked for me - if you can physically secure a switch, make it a VTP Server. If multiple admins will have access to the switch, you may consider making that switch a VTP Client in order to minimize the chance of unwanted or unauthorized changes being made to your VLAN scheme.

The VTP Advertisement Process

VTP advertisements are multicasts, but they are not sent out every port on the switch. The only devices that need the VTP advertisements are other switches that are trunking with the local switch, so VTP advertisements are sent out trunk ports only. The hosts in VLAN 10 in the following exhibit would not receive VTP advertisements.
Along with the VTP domain name, VTP advertisements carry a configuration revision number that enables VTP switches to make sure they have the latest VLAN information. VTP advertisements are sent when there has been a change in a switch's VLAN database, and this configuration revision number increments by one before the advertisement is sent. To illustrate, let's look at the revision number on SW1.

The current revision number is 1. We'll now go to SW2 to check the revision number, add a VLAN, and then check the revision number again.
The revision number was 1, then a VLAN was added. The revision number incremented to 2 before the VTP advertisement reflecting this change was sent to this switch's neighbors. Let's check the revision number on SW1 now.
The revision number has incremented to 2, as you'd expect. But what exactly happened? SW1 received a VTP advertisement from SW2. Before accepting the changes reflected in the advertisement, SW1 compares the revision number in the advertisement to its own revision number. In this case, the revision number on the incoming advertisement was 2 and SW1's revision
number was 1. This indicates to SW1 that the information contained in this VTP advertisement is more recent than its own VLAN information, so the advertisement is accepted. If SW1's revision number had been higher than that in the VTP advertisement from SW2, the advertisement would have been ignored. In the following example, SW2 is sending out an advertisement with revision number 300. The three switches are running VLANs 10, 20, 30, 40, and 50, and everything's just fine. The VTP domain is CCNP.
Now, a switch that was at another client site is brought to this client and installed in the CCNP domain. The problem is that the VTP revision number on the newly installed switch is 500, and this switch only knows about the default VLAN, VLAN 1.
The other switches will receive a VTP advertisement with a higher revision number than the one currently in their VTP database, so they'll synchronize their databases in accordance with the new advertisement. The problem is that the new advertisements don't list VLANs 10, 20, 30, 40, or 50, so connectivity for those VLANs is lost. I've seen this happen with switches that were brought in to swap out an out-of-service switch. That revision number has to be reset to zero!

If you ever see VLAN connectivity suddenly lost in your network, but the switches are all functional, you should immediately check to see if a new switch was recently installed. If the answer is yes, I can practically guarantee that the revision number is the issue. Cisco theory holds that there are two ways to reset a switch's revision number to zero:

1. Change the VTP domain name to a nonexistent domain, then change it back to the original name.
2. Change the VTP mode to Transparent, then change it back to Server.
In reality, resetting this number can be more of an art form than a science. The method to use often depends on the model. In the real world, you should use your favorite search engine for a phrase such as reset configuration revision number zero followed by the switch model.
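As a sketch of the first method, using a hypothetical temporary domain name TEMP (the confirmation messages vary by platform):

```
SW1(config)#vtp domain TEMP
Changing VTP domain name from CCNP to TEMP
SW1(config)#vtp domain CCNP
Changing VTP domain name from TEMP to CCNP

SW1#show vtp status
Configuration Revision          : 0
```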
Reloading the switch won't do the job, because the revision number is kept in NVRAM, and the contents of Non-Volatile RAM are kept on a reload. It's a good practice to perform this reset with VTP Clients as well as Servers. In short, every time you introduce a switch to your network and that switch didn't just come out of the box, perform this reset. And if it did come out of the box, check it anyway. ;) To see the number of advertisements that have been sent and received, run show vtp counters.
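A sketch of typical show vtp counters output - the counter values here are made up for illustration, and the exact set of fields varies by IOS version:

```
SW1#show vtp counters
VTP statistics:
Summary advertisements received    : 24
Subset advertisements received     : 3
Request advertisements received    : 0
Summary advertisements transmitted : 7
Subset advertisements transmitted  : 2
Request advertisements transmitted : 0
```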
I'm sure you noticed that there are different types of advertisements! There are three major types of VTP advertisements - here's what they are and what they do. Keep in mind that Cisco switches only accept VTP advertisements from other switches in the same VTP domain.

Summary Advertisements are transmitted by VTP Servers every 5 minutes, or upon a change in the VLAN database. Information included in the summary advertisement:

- VTP domain name and version
- Configuration revision number
- MD5 hash code
- Timestamp
- Number of subset advertisements that will follow this ad

Subset Advertisements are transmitted by VTP Servers upon a VLAN configuration change. Subset ads give specific information regarding the VLAN that's been changed, including:

- Whether the VLAN was created, deleted, activated, or suspended
- The new name of the VLAN
- The new Maximum Transmission Unit (MTU)
- VLAN type (Ethernet, Token Ring, FDDI)

Client Advertisement Requests are just that - a request for VLAN information from the client. Why would a client request this information? Most likely because the VLAN database has been corrupted or deleted. The VTP Server will respond to this request with a series of Summary and Subset advertisements.
Configuring VTP Options

Setting a VTP password is optional, as is a little something called VTP Pruning - and both are considered vital in many of today's real-world networks. Let's take a look at both, starting with the VTP password. Earlier in this section, you saw how to place a switch into a VTP domain:
The VTP mode is changed with the vtp mode command.
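A sketch of the available modes - the exact help text differs slightly across IOS versions:

```
SW1(config)#vtp mode ?
  client       Set the device to client mode.
  server       Set the device to server mode.
  transparent  Set the device to transparent mode.
```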
VTP allows us to set a password as well. Naturally, the same password should be set on all switches in the VTP domain. Although this is referred to as secure VTP, there's nothing secure about it - the command show vtp password displays the password, and this password can't be encrypted with service password-encryption.
And as you've likely already guessed, a mistyped VTP password is the third of the three reasons your VLANs aren't being properly advertised. To recap:

1. Mistyped VTP domain name (by far the most common reason)
2. Not realizing you have a Transparent mode VTP switch in your network (rare)
3. Mistyped VTP password

VTP Pruning

Trunk ports belong to all VLANs, which leads to an issue involving broadcasts and multicasts. A trunk port will forward broadcasts and multicasts for all VLANs it knows about, regardless of whether the remote switch actually has ports in that VLAN or not! In the following example, VTP allows both switches to know about VLANs 2 - 19, even though neither switch has ports in all those VLANs. Since a trunk port belongs to every VLAN, both switches forward broadcasts and multicasts for all those VLANs - transmitting and receiving traffic they do not need.
Configuring VTP pruning allows the switches to send broadcasts and multicasts to a remote switch only if the remote switch actually has ports that belong to that VLAN. This simple configuration will prevent a great deal of unnecessary traffic from crossing the trunk. The vtp pruning command enables pruning for the VTP domain, and all VLANs from 2 - 1001 are eligible to be pruned. The reserved VLANs you see in show vlan brief - VLAN 1 and VLANs 1002 - 1005 - cannot be pruned.
Note that SW1 had to be changed to Server mode in order to enable pruning. Verify that pruning is enabled with show vtp status.
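A hedged sketch of that configuration and verification - the confirmation messages vary by IOS version:

```
SW1(config)#vtp mode server
Setting device to VTP SERVER mode
SW1(config)#vtp pruning
Pruning switched on

SW1#show vtp status
VTP Pruning Mode                : Enabled
```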
Enabling pruning on one VTP Server actually enables pruning for the
entire domain, but I wanted to show you that a switch has to be in Server mode to have pruning enabled. It doesn't hurt anything to enter the command vtp pruning on all Servers in the domain, but it's unnecessary. Stopping unnecessary broadcasts might not seem like such a big deal in a two-switch example, but most of our networks have more than two switches! Consider this example:
If the three hosts shown in VLAN 7 are the only hosts in that VLAN, there's no reason for VLAN 7 broadcasts to reach the middle and bottom two switches. Without VTP pruning, that's exactly what will happen! Using VTP pruning here will save quite a bit of bandwidth.

I'd like to share a real-world troubleshooting tip with you here. If you're having problems with one of your VLANs being able to send data across the trunk, run show interface trunk. Make sure that all VLANs shown under "Vlans allowed and active in management domain" match the ones shown under "Vlans in spanning tree forwarding state and not pruned".

SW2#show interface trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1
Fa0/12      desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      1-4094
Fa0/12      1-4094

Port        Vlans allowed and active in management domain
Fa0/11      1,10,20,30,40
Fa0/12      1,10,20,30,40

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      1,10,20,30
Fa0/12      none
In this example, VLAN 40 is allowed and active, but it's been pruned. That's fine if you don't have hosts on both sides of the trunk in VLAN 40, but I have seen this happen in a production network where there were hosts on both sides of the trunk in a certain VLAN, and that VLAN had been pruned. It's a rarity, but now you know to look out for it!
VTP Versions

By now, you've probably noticed that the first field in the readout of show vtp status is the VTP version. The first version of VTP was VTP Version 1, and that is the default on some older Cisco switches. The next version was Version 2, and that's the default on many newer models, including the 2950. As RIPv2 has advantages over RIPv1, VTPv2 has several advantages over VTPv1:

Version 2 supports Token Ring VLANs and Token Ring switching, where Version 1 does not.

When changes are made to VLANs or the VTP configuration at the command-line interface (CLI), Version 2 will perform a consistency check on VLAN names and numbers. This helps to prevent incorrect or inaccurate names from being propagated throughout the network.

A switch running VTPv2 in Transparent mode will forward VTP advertisements received from VTP Servers in that same domain.

As with RIP, VTP versions don't work well together. Many Cisco switches run Version 1 by default, although most newer switches are V2-capable. If you have a V2-capable switch in a VTP domain with switches running V1, just make sure the newer switch has V2 disabled. The version can be changed with the vtp version command.
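A sketch of that command - note that on some platforms and IOS versions the equivalent is v2-mode in VLAN database mode, and the status field names vary:

```
SW1(config)#vtp version 2

SW1#show vtp status
VTP Version                     : 2
```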
VTP "Secure Mode"

By setting a VTP password, you place the entire VTP domain into Secure Mode. Every switch in the domain must have a matching password.

SW1(config)#vtp domain CCNP
Changing VTP domain name from NULL to CCNP
SW1(config)#vtp password CCIE
Setting device VLAN database password to CCIE
VTP Secure Mode isn't all that secure, though - here's how you discover the password:

SW1#show vtp password
VTP Password: CCIE
Pretty secure, eh? Let's try to encrypt that password...

SW1(config)#service password-encryption

SW1#show vtp password
VTP Password: CCIE
That's something to keep in mind!

VTP Configuration Tips

I've configured VTP many times, and while the following two tips aren't Cisco gospel, they've worked well for me.

Unless you have a very good reason to put a switch into Transparent mode, stick with Server and Client. Not only does this ensure that the VTP databases in your network will be synchronized, but it causes less confusion in the future for other network admins who don't understand Transparent mode as well as you do.

Some campus networks will have switches that can be easily secured - the ones in your network control room, for example - and others that may be more accessible. Your VTP Servers should be the switches that are accessible only by you and a trusted few.
Don't leave every switch in your VTP domain at the default of Server, or you've made it possible to create and delete VLANs on every switch in your network.
Spanning Tree Protocol Basics Overview Basics Of LAN Switching BPDUs The Root Bridge Election Root Port Selection And Cost Default Root Port Costs STP Port States STP Timers Making A Nonroot Switch The Root Bridge Where Should The Root Bridge Be Located? Topology Change Notification BPDU Operation Load Sharing With The port-priority Command The Extended System ID Feature
In today's networks - heck, even in yesterday's networks - we love redundancy. A single point of failure just isn't acceptable today, so we're going to spend quite a bit of time in the SWITCH course learning how to create that redundancy when it doesn't already exist.
With our routing protocols such as EIGRP and OSPF, redundant paths can be used in addition to the primary path to perform equal-cost and/or unequal-cost load balancing. That's a desirable behavior with routing. With switching, those redundant paths need to be ready to be put into action in case the primary path fails, but they won't be used in addition to the primary path. This is the responsibility of the Spanning Tree Protocol (STP, defined by IEEE 802.1d).

If you recently earned your CCNA, much of the first part of this section will seem familiar. If it's been a while since you studied STP, we'll get you back up to speed quickly - and then we'll all dive in to advanced STP features.

LAN Switching Basics

Switches use their MAC address table to switch frames, but when a switch is first added to a network, it has no entries in this table save for a few entries for the CPU. The switch will dynamically build its MAC table by examining the source MAC address of incoming frames. (The source MAC address is the first thing the switch looks at on incoming frames.) As the switch builds the MAC table, it quickly learns the hosts that are located off each port.

But what if the switch doesn't know the correct port to forward a frame to? What if the frame is a broadcast or multicast? Glad you asked!

Unknown unicast frames are frames destined for a particular host, but there is no MAC address table entry for that destination. Unknown unicast frames are forwarded out every port except the one they came in on. Under no circumstances will a switch send a frame back out the same port it came in on.

Broadcast frames are destined for all hosts, while multicast frames are destined for a specific group of hosts. Broadcast and multicast frames are also forwarded out every port except the one they came in on.

Known unicast frames are frames destined for a particular host, and this destination has an entry in the receiving switch's MAC table.
Since we know the right port through which to forward the frame, there's no reason to send it out all ports, so this frame will be unicast via that port listed in
the MAC table.

To review:

- Unknown unicast, broadcast, and multicast frames are forwarded out all ports except the one upon which they were originally received
- Known unicast frames are unicast via the port listed in the MAC address table

That all sounds nice and neat, right? For the most part, it is. But as we all know, production networks are rarely nice and neat. We don't want to have only one known path from "Point A" to "Point B". We want redundancy - that is, if one path between two hosts is unusable, there should be a second path that is almost immediately available. The problem is that with redundant links comes the possibility of a switching loop. The Spanning Tree Protocol (STP) helps to prevent switching loops from forming, but what if STP didn't exist? What if you decide to turn it off? Let's walk through what would happen in a switching network with redundant paths if STP did not exist.
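The forwarding rules above can be sketched in a few lines of Python. This is a rough illustration, not anything running on a real switch - the MAC table is modeled as a simple dict of MAC address to port, and the function names are mine:

```python
def forward_ports(mac_table, dest_mac, in_port, all_ports):
    """Return the set of ports a switch sends a frame out of.

    mac_table: dict mapping known MAC addresses to ports.
    """
    if dest_mac != "ff-ff-ff-ff-ff-ff" and dest_mac in mac_table:
        # Known unicast: send only out the port in the MAC table
        return {mac_table[dest_mac]}
    # Broadcast, multicast, or unknown unicast: flood out every port
    # except the one the frame arrived on
    return set(all_ports) - {in_port}


table = {"aa-aa-aa-aa-aa-aa": 1}
ports = [1, 2, 3, 4, 5]
print(forward_ports(table, "aa-aa-aa-aa-aa-aa", 3, ports))  # {1}
print(forward_ports(table, "bb-bb-bb-bb-bb-bb", 3, ports))  # {1, 2, 4, 5}
```

Note the unknown destination is flooded out every port except port 3, the one it arrived on - exactly the behavior that makes switching loops possible, as the next example shows.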
Now this is redundancy! We've got three separate switches connecting two Ethernet segments, so even if two separate switches become unavailable, these hosts would be able to communicate. But we better have STP on to prevent switching loops. If we didn't, what would happen?

If Host A sends a frame to Host C, all three switches would receive the frame on their port 0/1. Since none of the switches would have an entry for Host A in their MAC tables, each switch would make an entry for that host, listing it as reachable via port 0/1. None of the switches know where Host C is yet, so the switches will follow the default behavior for an unknown unicast address - they will flood the frame out all ports except the one it came in on. That includes port 0/2 on all three switches.

Just this quickly, without STP, we have a switching loop. Each switch will see the frames that the other two switches just forwarded out their port 0/2. The problem is that the source MAC address is still the address of Host A, but now the switches will each be receiving frames with that source MAC address on port 0/2. Since all the switches had port 0/1 as the port for Host A, they'll now change that MAC address table listing to port 0/2 - and again flood the frame. The frames are just going to keep going in circles, and that's why we call it a switching loop, or bridging loop.

Switching loops cause three problems:

- Frames can't reach their intended destination, either totally or in part
- Unnecessary strain put on the CPU
- Unnecessary use of bandwidth

Luckily for us, switching loops just don't occur that often, because STP does a great job of preventing switching loops before they can occur - and STP all begins with the exchange of Bridge Protocol Data Units (BPDUs).

The Role Of BPDUs

BPDUs are transmitted every two seconds to the well-known multicast MAC address 01-80-c2-00-00-00. (It might not have been well-known to
you before, but it is now!) We've actually got two different BPDU types:

- Topology Change Notification (TCN)
- Configuration
We'll talk about TCNs later in this section, but for now it's enough to know that the name is the recipe - a switch sends a TCN when there is a change in the network topology. Configuration BPDUs are used for the actual STP calculations. Once a root bridge is elected, only that root bridge will originate Configuration BPDUs; the non-root bridges will forward copies of that BPDU.

BPDUs also carry out the election to decide which switch will be the Root Bridge. The Root Bridge is the "boss" of the switching network - this is the switch that decides what the STP values and timers will be. Each switch will have a Bridge ID Priority value, more commonly referred to as a BID. This BID is a combination of a 2-byte Priority value and the 6-byte MAC address, with the priority value listed first. For example, if a Cisco switch has the default priority value of 32,768 and a MAC address of 11-22-33-44-55-66, the BID would be 32768:11-22-33-44-55-66. Therefore, if the switch priority is left at the default on all switches, the MAC address is the deciding factor in the root bridge election. Is that a bad thing? Maybe...

Why You Should Care About The Winner Of This Election

Before we take a look at the root bridge election process itself, let's talk a bit about why we care which switch wins. Cisco switches are equal, but some are more equal than others. In any network you're going to have switches that are more powerful than others when it comes to processing power and speed, and your root bridges are going to have a heavier workload than the non-root bridges. Bottom line: Your most powerful switches, which are also hopefully centrally located in your network, should be your root bridges.
Note that I said "root bridges". Not only can we as the network admins determine which of our switches will be the primary root bridge, we can also determine the secondary root bridge - the switch that will become the root bridge if the primary root bridge goes down. After we take a look at the important defaults of the root bridge election, along with several examples, I'll show you exactly how to configure any given Cisco switch in your network as the primary or secondary root bridge.

As the network admins, it's you and I that should decide this election, rather than....

... well, let's see what happens without admin intervention!

The Default Root Bridge Election Process

Switches are a lot like people - when they first arrive, they announce that they are the center of the universe. Unlike some people, the switches will soon get over it.

But seriously, folks, BPDUs will be exchanged between our switches until one switch is elected Root Bridge, and it's the switch with the lowest BID that will end up being the Root Bridge. In this example, we'll look at a three-switch network and the Root Bridge election from each switch's point of view. Each switch is running the default priority of 32768, and the MAC address of each switch is the switch's letter 12 times. Through the magic of technology, all three switches are coming online at the same time, all three believe they are the root bridge, and all three get very busy announcing that fact. Since each of these switches believes it's the root bridge, all six ports in this example will go to the listening state, allowing each switch to hear BPDUs from the others. More about those STP states later in the course - let's focus on the election for now.
SW A has a BID of 32768:aa-aa-aa-aa-aa-aa. That switch will receive BPDUs from both SW B and SW C, both containing their individual BIDs. SW A will see that the BIDs it's getting from both of those switches are higher than its own, so SW A will continue to send BPDUs announcing itself as the Root Bridge.
SW B has a BID of 32768:bb-bb-bb-bb-bb-bb. SW B will receive the BIDs as shown, and since SW A is sending a lower BID than SW B's, SW B will recognize SW A as the true Root Bridge and stop announcing itself as the root.
SW C is in the same situation. SW C will receive the BIDs as shown, and since SW A is sending a lower BID than SW C's, SW C will recognize that SW A is the Root Bridge.

Even though these switches have quickly agreed that SW A is the root, this election really never ends. If a new switch comes online and advertises a BID that is lower than SW A's, that switch would then become the root bridge. In the following example, SW D has come online and has a BID lower than the current Root Bridge, SW A. SW D will advertise this BID via a BPDU to SW B, and SW B will realize that SW D should be the new root bridge. SW B will then announce this to the other switches, and soon SW D is indeed the root bridge. Since BPDUs are sent every two seconds, SW D will be seen as the new root bridge very quickly.
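The election boils down to "lowest BID wins", and since a BID is a priority followed by a MAC address, it compares exactly like a Python tuple. Here's a minimal sketch of the election above - switch names and MACs are the ones from our example:

```python
def elect_root(bids):
    """bids: dict of switch name -> (priority, mac).

    The lowest BID wins; Python compares the priority first, then the
    MAC address, just like a real BID comparison.
    """
    return min(bids, key=lambda sw: bids[sw])


bids = {
    "SW A": (32768, "aa-aa-aa-aa-aa-aa"),
    "SW B": (32768, "bb-bb-bb-bb-bb-bb"),
    "SW C": (32768, "cc-cc-cc-cc-cc-cc"),
}
print(elect_root(bids))  # SW A - lowest MAC at equal priority

# The election never ends: a newcomer with a lower BID takes over
bids["SW D"] = (32768, "00-00-00-00-00-00")
print(elect_root(bids))  # SW D
```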
To see the local switch's BID, as well as that of the current root bridge, run show spanning-tree vlan x. We'll run this command with another network topology, this one a simple two-switch setup with two trunk links connecting the switches.
SW1#show spanning-tree vlan 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     000f.90e1.c240
             This bridge is the root
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     000f.90e1.c240
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 15

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- --------------------------------
Fa0/11           Desg FWD 19        128.11   P2p
Fa0/12           Desg FWD 19        128.12   P2p
There are actually four tip-offs in this readout that you're on the root bridge. The highlighted text is one - what are the other three?

- The MAC address of the Root ID (indicating the root) and the Bridge ID (the info for the local switch) is the same
- There is no root port on this switch. As you'd expect, the root port is the port a switch uses to reach the root bridge. The root bridge doesn't need a root port, and therefore will not have one
- All ports are in Forwarding (FWD) mode. No ports on the root bridge for a given VLAN will be in Blocking (BLK) mode.

What do things look like on the non-root bridge, you ask? Let's take a
look at the same command's output on SW2.

SW2#show spanning-tree vlan 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     000f.90e1.c240
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     000f.90e2.1300
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 15

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- --------------------------------
Fa0/11           Root FWD 19        128.11   P2p
Fa0/12           Altn BLK 19        128.12   P2p
There are four tip-offs that you're not on the root bridge. One is highlighted. What are the other three?

- No "this bridge is the root" message
- The MAC address of the Root ID and Bridge ID fields are different, as shown
- This bridge does have a root port (fast 0/11)
- There is a port in Blocking mode (BLK)

STP works by putting certain ports into Blocking mode, which in turn prevents switching loops from forming. Notice that only one port in our little two-switch network is in blocking mode, and in this case that's enough to leave only one available path between the switches. No ports on the root bridge will be put into blocking mode.
The port that SW2 is using to reach the root bridge is called the root port,
and it wasn't selected at random. Each switch port has an assigned path cost, and this path cost is used to arrive at the root path cost. Yes, I hate it when two different values have practically the same name, too. Here's a very short chart to help you keep them straight:

Path cost: Assigned to an individual port. Strictly a local value that is not advertised to upstream or downstream switches. A port's Path Cost is assigned in accordance with the speed of the port - the faster the port, the lower the Path Cost.

Root path cost: Cumulative value reflecting the overall cost to get to the root. Advertised to downstream switches via BPDUs. The BPDU actually carries the Root Path Cost, and this cost increments as the BPDU is forwarded throughout the network.

The root bridge will transmit a BPDU with the Root Path Cost set to zero. When a neighboring switch receives this BPDU, that switch adds the cost of the port the BPDU was received on to the incoming Root Path Cost - Root Path Cost increments as BPDUs are received, not sent. That new root path cost value will be reflected in the BPDU that switch then sends out.
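The increment-on-receipt behavior is easy to sketch. This is just an illustration of the arithmetic, assuming two 100 Mbps hops (port cost 19 each) downstream of the root:

```python
def receive_bpdu(advertised_root_path_cost, receiving_port_cost):
    """A switch adds its receiving port's Path Cost to the incoming
    Root Path Cost; this new total is what it advertises downstream."""
    return advertised_root_path_cost + receiving_port_cost


# The root bridge advertises Root Path Cost 0
cost_at_sw2 = receive_bpdu(0, 19)            # first hop: 19
cost_at_sw3 = receive_bpdu(cost_at_sw2, 19)  # second hop: 38
print(cost_at_sw2, cost_at_sw3)  # 19 38
```

Notice the receiving port's cost is added on arrival, not on transmission - the root itself always sends 0.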
The Path Cost is locally significant only. In the previous example, SW3 doesn't have any idea what the Path Cost on SW2's receiving interface is, and doesn't particularly care. No switch downstream of SW3 will know of any Path Costs on SW2 or SW3 - the downstream switches will only see the cumulative cost, the Root Path Cost.
Let's go back to our two-switch example...
...the incoming Root Path Cost should be the same for both ports on SW2, since the two links are the same speed. Let's run show spanning-tree vlan 1 again to see what the deciding factor was.

SW2#show spanning-tree vlan 1

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- --------------------------------
Fa0/11           Root FWD 19        128.11   P2p
Fa0/12           Altn BLK 19        128.12   P2p
The costs are indeed the same, 19 for each port. (That's the default cost for a 100 Mbps port. Remember, the port cost is determined by the speed of the port.) SW2 is receiving BPDUs from SW1 on both ports 0/11 and 0/12, and one of those ports has to be chosen as the Root Port by SW2. Here's the process of choosing a Root Port, and how these steps factored into SW2's decision-making process.

1. Choose the port receiving the superior BPDU. By "superior BPDU", we mean the one with the lowest Root BID. The BPDUs are coming from the same switch - SW1 - so this is a tie.
2. Choose the port with the lowest Root Path Cost to the root bridge. That's a tie here, too.
3. Choose the port receiving the BPDU with the lowest Sender BID. Since the same switch is sending both BPDUs, that's a tie here as well.
4. Choose the port receiving the BPDU with the lowest Sender Port ID. That was the tiebreaker here.

Using our three-switch network, we can easily identify the root ports on both SW B and SW C. Both ports on SW A will be in forwarding mode, since this is the root bridge.
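The four-step tiebreaker can be sketched as one tuple comparison, since "lowest wins" applies at every step. This is only an illustration - the field ordering mirrors steps 1-4, and the values are taken from our SW2 example:

```python
def best_root_port(candidates):
    """candidates: list of (root_bid, root_path_cost, sender_bid,
    sender_port_id, local_port) tuples.

    Python compares tuples element by element, which mirrors the
    four steps exactly; the last field names the winning local port.
    """
    return min(candidates)[-1]


root_bid = (32769, "000f.90e1.c240")
# SW2's two ports: same root, same cost 19, same sender - the sender
# Port ID (128.11 vs 128.12) breaks the tie in favor of Fa0/11
candidates = [
    (root_bid, 19, root_bid, (128, 11), "Fa0/11"),
    (root_bid, 19, root_bid, (128, 12), "Fa0/12"),
]
print(best_root_port(candidates))  # Fa0/11
```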
STP isn't quite done, though. A designated port needs to be chosen for the segment connecting SW B and SW C. Let's add a host to that segment to see why a designated port needs to be chosen.
Let's say that host is transmitting frames that need to be forwarded to SW A. There are two switches that can do this, but to prevent switching loops from possibly forming, we only want one switch to forward the frames. That's where the Designated Port (DP) comes in. The switch that has the lowest Root Path Cost will have its port on this shared segment become the Designated Port. Of course, there's a chance that both switches in this example would have the same Root Path Cost. In that case, the port belonging to the switch with the lowest BID will become the Designated Port. Additionally, all ports on the root bridge are considered Designated Ports. Here's a clip from show spanning vlan 1 on our root bridge:

Fa0/11           Desg FWD 19        128.11   P2p
Fa0/12           Desg FWD 19        128.12   P2p

Note that both ports are in "Desg" mode.
Assuming that SW B has a BID of 32768:bb-bb-bb-bb-bb-bb and SW C has a BID of 32768:cc-cc-cc-cc-cc-cc, port 0/2 on SW B would become a Designated Port, and port 0/1 on SW C would be placed into blocking mode. It's interesting to note that of the six ports involved in our example, five are in forwarding mode and only one is blocked - but placing that one particular port into blocking mode prevents switching loops from forming.
Now we know how root bridges are elected - but this knowledge brings up a couple of interesting questions. What if our least powerful switch is elected as the root bridge? What if a switch on the very edge of the network is elected? (That's likely to be one of our least powerful switches, too.) What if we later add a more powerful switch and would now like to make that new switch the root bridge?

The bottom line: The MAC address of the switches in our network should not determine the location of the primary and secondary root switches. We - the network admins - should. We have two separate commands that we can use for this:

spanning-tree vlan priority
spanning-tree vlan root (primary / secondary)

We'll see both of these commands in action later in this section. In the
meantime, let's have a quick Zen lesson...

The Shortest Path Is Not Always The Shortest Path

The default STP Path Costs are determined by the speed of the port.
10 Mbps Port: 100
100 Mbps Port: 19
1 Gbps Port: 4
10 Gbps Port: 1
If you change a port cost, the Spanning-Tree Algorithm (STA) runs and STP port states may change as a result. Whether it's for a job interview, a practice exam, or the CCNP SWITCH exam itself, you have to be careful not to jump to the conclusion that the physically shortest path is the logically shortest path.
If you're asked which port on SW3 will be the root port, it's easy to look at the physical topology and decide that it's fast 0/3 - after all, that port is a physically straight shot to the root. However, the link speeds will tell a different story. A nonroot bridge will always select the path with the lowest cumulative cost - and here, that path is the physically longest path.
SW3 - SW1 Root Path Cost: 100 (one 10 Mbps link)
SW3 - SW2 - SW1 Root Path Cost: 38 (two 100 Mbps links)
Whether it's in the exam room or a production network, make sure to check the port speeds before assuming that the physically shortest path is the optimal path.
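The comparison above can be sketched with the default cost chart. This is just the arithmetic from our SW3 example, using the default 802.1d port costs listed earlier:

```python
# Default STP port costs, keyed by link speed in Mbps
DEFAULT_COST = {10: 100, 100: 19, 1000: 4, 10000: 1}


def root_path_cost(link_speeds_mbps):
    """Cumulative Root Path Cost along a path of links."""
    return sum(DEFAULT_COST[speed] for speed in link_speeds_mbps)


direct = root_path_cost([10])           # SW3 - SW1: one 10 Mbps link
indirect = root_path_cost([100, 100])   # SW3 - SW2 - SW1: two 100 Mbps links
print(direct, indirect)  # 100 38
# 38 < 100: the physically longer path is the logically shorter one
```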
Changing A Port's Path Cost

Like other STP commands and features, this is another command that you should have a very good reason for configuring before using it. Make sure to add up the Root Path Cost for other available paths before changing a port's Path Cost to ensure you're getting the results you want ...

... and avoid results you don't want! In the following example, SW2 shows a Path Cost of 19 for both ports 0/11 and 0/12.
We'll now change the port cost of 0/12 to 9 for all VLANs...

SW2(config)#int fast 0/12
SW2(config-if)#spanning-tree cost 9
... and the results are seen immediately. Note that 0/11 was placed into blocking mode and 0/12 is in Listening mode, soon to be Forwarding mode.
Let's take this one step further. Right now on this switch, we have VLANs 1, 20, and 100. What if we wanted to lower port 0/11's cost to 5 for VLAN 100 only, but leave it at the default of 19 for the other VLANs? We can do this by specifying the VLAN in the cost command.

SW2(config)#int fast 0/11
SW2(config-if)#spanning-tree vlan 100 cost 5
The cost is lowered for this port in VLAN 100....
... but for VLAN 20, the cost remains the same.
Again, be careful when adjusting these costs - but properly used, this can be a powerful command for exercising total control over the path your switches use to transport data for a given VLAN.

The STP Port States

We've discussed the Forwarding and Blocking states briefly, but you should remember from your CCNA studies that there are some intermediate STP states. A port doesn't go from Blocking to Forwarding immediately, and for good reason - to do so would invite switching loops.

The disabled STP port state is a little odd; you're not going to look into the STP table of a VLAN and see "DIS" next to a port. Cisco does officially consider this to be an STP state, though. A disabled port is one that is administratively shut down. A disabled port obviously isn't forwarding frames, but it's not even officially taking part in STP. Once the port is opened, the port will go into the blocking state.

As the name implies, the port can't do much in this state - no frame forwarding, no frame receiving, and therefore no learning of MAC addresses. About the only thing this port can do is accept BPDUs from neighboring switches.

A port will then go from blocking mode into listening mode. The obvious question is "listening for what?" Listening for BPDUs - and this port can now send BPDUs as well, allowing the port to participate in the root bridge election. A port in listening mode still can't forward or receive data frames, and as a result the MAC address table is not yet being updated.

When the port goes into learning mode, it's not yet forwarding frames, but the port is learning MAC addresses by adding them to the switch's MAC address table. The port continues to send and receive BPDUs.

Finally, a port enters forwarding mode. This allows the port to forward and receive data frames, send and receive BPDUs, and place MAC addresses in its MAC table. Note this is the only state in which the port is actually forwarding frames.

To see the STP mode of a given interface, use the show spanning-tree interface command.

SW1#show spanning-tree interface fast 0/11

Vlan             Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------
VLAN0001         Desg FWD 19        128.11   P2p
STP Timers

You may remember these timers from your CCNA studies as well, and you should also remember that these timers should not be changed lightly. What you might not have known is that if you decide to change any of these timers, that change must be configured on the root bridge. The root bridge will inform the nonroot switches of the change via BPDUs. We'll prove that very shortly. Right now, let's review the STP timer basics.

Hello Time defines how often the Root Bridge will originate Configuration BPDUs. By default, this is set to 2 seconds.

Forward Delay is the length of both the listening and learning STP stages, with a default value of 15 seconds for each stage.

Maximum Age, referred to by the switch as MaxAge, is the amount of time a switch will retain the superior BPDU's contents before discarding it. The default is 20 seconds.

The value of these timers can be changed with the spanning-tree vlan command shown below. The timers should always be changed on the root switch, and the current secondary root switch as well. Verify the changes with the show spanning-tree command.
SW1(config)#spanning-tree vlan 1 ?
  forward-time  Set the forward delay for the spanning tree
  hello-time    Set the hello interval for the spanning tree
  max-age       Set the max age interval for the spanning tree
  priority      Set the bridge priority for the spanning tree
  root          Configure switch as root

SW1(config)#spanning-tree vlan 1 hello-time 5
SW1(config)#spanning-tree vlan 1 max-age 30
SW1(config)#spanning-tree vlan 1 forward-time 20

SW1#show spanning-tree vlan 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     000f.90e1.c240
             This bridge is the root
             Hello Time   5 sec  Max Age 30 sec  Forward Delay 20 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     000f.90e1.c240
             Hello Time   5 sec  Max Age 30 sec  Forward Delay 20 sec
             Aging Time 300

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- --------------------------------
Fa0/11           Desg FWD 19        128.11   P2p
Fa0/12           Desg FWD 19        128.12   P2p
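These timers determine how long STP can take to react to a failure. A blocked port waits out Max Age before leaving blocking, then spends one Forward Delay each in listening and learning. A minimal sketch of that worst-case arithmetic, assuming classic 802.1d behavior:

```python
def convergence_seconds(max_age=20, forward_delay=15):
    """Worst-case time for a blocked port to begin forwarding:
    Max Age to age out the old BPDU, plus one Forward Delay each
    for the listening and learning states."""
    return max_age + 2 * forward_delay


print(convergence_seconds())        # 50 with the 802.1d defaults
print(convergence_seconds(30, 20))  # 70 with the timers configured above
```

That 30-to-50-second range is exactly why later sections cover features that speed STP up - and why raising these timers, as we just did, makes convergence even slower.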
Again, these values have to be changed on the root switch in order for the change to be accepted by the rest of the network. In the following example, we'll change the STP timers on a nonroot switch and then run show spanning-tree.

SW2#show spanning vlan 10

VLAN0010
  Spanning tree enabled protocol ieee
  Root ID    Priority    32778
             Address     000f.90e1.c240
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32778  (priority 32768 sys-id-ext 10)
             Address     000f.90e2.1300
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 15

SW2 is not the root switch for VLAN 10 (or any other VLANs at this point). We'll change the STP timers on this switch.

SW2(config)#spanning-tree vlan 10 forward-time 30
SW2(config)#spanning-tree vlan 10 hello-time 5
SW2(config)#spanning-tree vlan 10 max-age 40

SW2#show spanning-tree vlan 10

VLAN0010
  Spanning tree enabled protocol ieee
  Root ID    Priority    32778
             Address     000f.90e1.c240
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32778  (priority 32768 sys-id-ext 10)
             Address     000f.90e2.1300
             Hello Time   5 sec  Max Age 40 sec  Forward Delay 30 sec
             Aging Time 300
The "Bridge ID" timers have changed, but the Root ID STP timers didn't. The timers listed next to Root ID are the STP timers in effect on the network. The nonroot switch will allow you to change the STP timers, but these new settings will not be advertised via BPDUs unless this local switch later becomes the root bridge.

If you feel the need to change STP timers, it's a good idea to change them on both the root and secondary root switches. That allows the secondary root to keep the same timers if the root goes down and the secondary then becomes the primary root.

Deterministic Root Switch Placement

You might have noticed some other options with the spanning-tree vlan command ....

SW1(config)#spanning-tree vlan 1 ?
  forward-time  Set the forward delay for the spanning tree
  hello-time    Set the hello interval for the spanning tree
  max-age       Set the max age interval for the spanning tree
  priority      Set the bridge priority for the spanning tree
  root          Configure switch as root
If STP is left totally alone, a single switch is going to be the root bridge for every single VLAN in your network. Worse, that single switch is going to be selected because it has a lower MAC address than every other switch,
which isn't exactly the criteria you want to use to select a single root bridge. The time will definitely come when you want to determine a particular switch to be the root bridge for your VLANs, or when you will want to spread the root bridge workload. For instance, if you have 50 VLANs and five switches, you may want each switch to act as the root bridge for 10 VLANs each. You can make this happen with the spanning-tree vlan root command.

In our previous two-switch example, SW 1 is the root bridge of VLAN 1. We can create 3 more VLANs, and SW 1 will always be the root bridge for every VLAN. Why? Because its BID will always be lower than SW 2's. I've created three new VLANs, as seen in the output of show vlan brief. The edited output of show spanning-tree vlan shows that SW 1 is the root bridge for all these new VLANs.

SW1#show vlan br

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Fa0/1, Fa0/2, Fa0/3, Fa0/4
                                                Fa0/5, Fa0/6, Fa0/7, Fa0/8
                                                Fa0/9, Fa0/10
10   VLAN0010                         active
20   VLAN0020                         active
30   VLAN0030                         active

SW1#show spanning-tree vlan 10

VLAN0010
  Spanning tree enabled protocol ieee
  Root ID    Priority    32778
             Address     000f.90e1.c240
             This bridge is the root

SW1#show spanning-tree vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     000f.90e1.c240
             This bridge is the root
Let's say we'd like SW 2 to act as the root bridge for VLANs 20 and 30 while leaving SW 1 as the root for VLANs 1 and 10. To make this happen, we'll go to SW 2 and use the spanning-tree vlan root primary command.

SW2(config)#spanning-tree vlan 20 root primary
SW2(config)#spanning-tree vlan 30 root primary

SW2#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     000f.90e2.1300
             This bridge is the root

SW2#show spanning vlan 30

VLAN0030
  Spanning tree enabled protocol ieee
  Root ID    Priority    24606
             Address     000f.90e2.1300
             This bridge is the root
SW 2 is now the root bridge for both VLAN 20 and 30. Notice that the priority value has changed from the default. This command has another option you should be aware of:

SW2(config)#spanning-tree vlan 30 root ?
  primary    Configure this switch as primary root for this spanning tree
  secondary  Configure switch as secondary root
You can also configure a switch to be the secondary, or standby, root bridge. If you want a certain switch to take over as root bridge if the current root bridge goes down, run this command with the secondary option. This will change the priority just enough so that the secondary root doesn't become the primary immediately, but will become the primary if the current primary goes down.

Let's take a look at the root secondary command in action. We have a three-switch topology for this example. We'll use the root primary command to make SW3 the root of VLAN 20. Which switch would become the root if SW3 went down?

SW3(config)#spanning vlan 20 root primary
SW3#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     0011.9375.de00
             This bridge is the root
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    24596  (priority 24576 sys-id-ext 20)
             Address     0011.9375.de00
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 15

SW2#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     0011.9375.de00
             Cost        19
             Port        24 (FastEthernet0/22)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32788  (priority 32768 sys-id-ext 20)
             Address     0018.19c7.2700
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300

SW1#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     0011.9375.de00
             Cost        38
             Port        15 (FastEthernet0/13)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32788  (priority 32768 sys-id-ext 20)
             Address     0019.557d.8880
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300
SW2 and SW1 have the same default priority, so the switch with the lowest MAC address will be the secondary root - and that's SW2. But what if we want SW1 to become the root if SW3 goes down? We use the root secondary command on SW1!
SW1(config)#spanning vlan 20 root secondary

SW1#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     0011.9375.de00
             Cost        38
             Port        15 (FastEthernet0/13)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    28692  (priority 28672 sys-id-ext 20)
             Address     0019.557d.8880
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300
SW1 now has a priority of 28672, which will make SW1 the root if SW3 goes down. A priority value of 28672 is an excellent tipoff that the root secondary command has been used on a switch. The config itself shows this command as well:

spanning-tree mode pvst
spanning-tree extend system-id
spanning-tree vlan 20 priority 28672
Take note of those two default settings - "mode pvst" and "extend system-id". We'll talk about the Extended System ID feature later in this section, and the PVST default mode is discussed in the Advanced STP section.

Ever wondered how the STP process decides what priority should be set when the spanning-tree vlan root command is used? After all, we're not configuring an exact priority with that command. Here's how the STP process handles this:
- If the current root bridge's priority is greater than 24,576, the switch sets its own priority to 24,576 in order to become the root. You saw that in the previous example.
- If the current root bridge's priority is 24,576 or less, the switch subtracts 4096 from the root bridge's priority in order to become the root.
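That two-branch rule is simple enough to sketch in a few lines of Python. This is just an illustration of the rule as described above, not Cisco's actual code; treating "exactly 24,576" as falling into the subtract-4096 branch is my assumption, since a tie at 24,576 wouldn't guarantee the root role.

```python
def root_primary_priority(current_root_priority: int) -> int:
    """Priority chosen by 'spanning-tree vlan X root primary',
    per the rule described above (illustrative sketch only)."""
    if current_root_priority > 24576:
        # Root is still at (or near) the default, so 24576 is low enough.
        return 24576
    # Root already has a low priority: undercut it by one 4096 increment.
    return current_root_priority - 4096

print(root_primary_priority(32768))  # 24576 - the earlier example
print(root_primary_priority(24576))  # 20480
```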
There is another way to make a switch the root bridge, and that's to change its priority with the spanning-tree vlan priority command. I personally prefer the spanning-tree vlan root command, since that command ensures that the priority on the local switch is lowered sufficiently for it to become the root. With the spanning-tree vlan priority command, you have to make sure the new priority is low enough for the local switch to become the root switch. As you'll see, you also have to enter the new priority in multiples of 4096.

SW2(config)#spanning-tree vlan 10 priority ?
  bridge priority in increments of 4096
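IOS only accepts priorities in multiples of 4096. A quick sanity check like this hypothetical helper mirrors what the parser enforces; the 0 - 61440 range is an assumption on my part (16 increments of 4096), since the help output above doesn't show the numeric range.

```python
def valid_bridge_priority(priority: int) -> bool:
    """True if a value would be accepted by 'spanning-tree vlan X priority':
    a multiple of 4096, assumed here to run from 0 through 61440."""
    return 0 <= priority <= 61440 and priority % 4096 == 0

print(valid_bridge_priority(28672))  # True  - the root secondary value
print(valid_bridge_priority(28692))  # False - that's priority + sys-id-ext
```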
Where Should The Root Bridge Be Located?

I'm sure you remember the Cisco Three-Layer Hierarchical Model, which lists the three layers of a switching network - Core, Distribution, and Access. Access switches are those found closest to the end users, and the root bridge should not be an access-layer switch. Ideally, the root bridge should be a core switch, which allows for the highest optimization of STP. What you don't want to do is just blindly select a centrally located switch, particularly if you're visiting a client who has a configuration like this:
Don't be tempted to make SW3 the root switch just because it's got the most connections to other switches. You should never make an access-layer switch the root switch! The best choice here is one of the core layer switches, which generally will be a physically central switch in your network. If for some reason you can't make a core switch the root, make it one of the distribution switches.

Topology Change Notifications (TCNs)

Configuration BPDUs are originated only by the root bridge, but a TCN BPDU will be generated by any switch in the network when one of two things happens:
- A port goes into Forwarding mode
- A port goes from Forwarding or Learning mode into Blocking mode
While the TCN BPDU is important, it doesn't give the other switches a lot of detail. The TCN doesn't say exactly what happened, just that something happened.
As the TCN works its way toward the root bridge, each switch that receives the TCN will send an acknowledgement and forward the TCN.
When the root bridge receives the TCN, the root will also respond with an acknowledgement, but this ack will take the form of a Configuration BPDU with the Topology Change bit set.
This indicates to all receiving switches that the aging time for their MAC tables should be changed from the default of 300 seconds to whatever the Forward Delay value is - by default, that's 15 seconds. That allows the switch to quickly rid itself of now-invalid MAC address table entries while keeping entries for hosts that are currently sending frames to that switch.

A natural question is "How long will the aging time for the MAC table stay at the Forward Delay value?" Here's the quick formula for the answer:

(Forward Delay) + (Max Age)

Assuming the default settings, that's a total of 35 seconds.

TCNs And The Portfast Exception

Cisco switching veterans just know that Portfast has to get involved here somewhere. Portfast-enabled ports cannot result in TCN generation,
which makes perfect sense. The most common usage of Portfast is when a single PC is connected directly to the switch port, and since such a port going into Forwarding mode doesn't impact STP operation, there's no need to alert the entire network about it. And if you're fuzzy on what Portfast is and what it does, that and many other Cisco switch features are covered in the next section!

Load Sharing With The port-priority Command

We can actually change a port's priority for some VLANs and leave it at the default for other VLANs in order to perform load balancing over a trunk. Let's take a look at the default behavior of a trunk between two switches when we have ten VLANs, and then change this behavior just a bit with the port-priority command. I've created ten VLANs, 11 - 20, for this example. SW1 is the root for all ten VLANs.

Before we go forward, using your knowledge of switching: how many ports in this example will be in STP Blocking mode? Which one(s)?
Let's check with show spanning vlan 11 on both switches. If your answer was "one", you're correct!

SW1#show spanning vlan 11

VLAN0011
  Spanning tree enabled protocol ieee
  Root ID    Priority    32779
             Address     000e.d7f5.a040
             This bridge is the root
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32779  (priority 32768 sys-id-ext 11)
             Address     000e.d7f5.a040
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------------------
Fa0/11           Desg FWD 19        128.11   P2p
Fa0/12           Desg FWD 19        128.12   P2p
SW2#show spanning vlan 11

VLAN0011
  Spanning tree enabled protocol ieee
  Root ID    Priority    32779
             Address     000e.d7f5.a040
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32779  (priority 32768 sys-id-ext 11)
             Address     000f.90e2.14c0
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------------------
Fa0/11           Root FWD 19        128.11   P2p
Fa0/12           Altn BLK 19        128.12   P2p
We would see the same result for every other VLAN, so at present the trunk between 0/11 on both switches is carrying the entire load for all VLANs. What if we wanted to use the trunk connecting 0/12 on both switches to carry the data for VLANs 15 - 20, while the trunk connecting 0/11 carries the rest? We can make that happen by lowering the port priority on 0/12 on one of the switches. Let's change the port priority on SW1's fast 0/12. Don't forget to use the VLAN range option with the spanning-tree command - this will save you quite a bit of typing and time on your exam.

SW1(config)#int fast 0/12
SW1(config-if)#spanning-tree vlan ?
  WORD  vlan range, example: 1,3-5,7,9-11
SW1(config-if)#spanning-tree vlan 15-20 ?
  cost           Change an interface's per VLAN spanning tree path cost
  port-priority  Change an interface's spanning tree port priority
SW1(config-if)#spanning-tree vlan 15-20 port-priority ?
  port priority in increments of 16
SW1(config-if)#spanning-tree vlan 15-20 port-priority 16
We didn't change the root switch in any way, so SW1 still shows as the root, and both trunk ports will still be in forwarding mode. Note the change to 0/12's priority.
SW1#show spanning vlan 15

VLAN0015
  Spanning tree enabled protocol ieee
  Root ID    Priority    32783
             Address     000e.d7f5.a040
             This bridge is the root
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32783  (priority 32768 sys-id-ext 15)
             Address     000e.d7f5.a040
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------------------
Fa0/11           Desg FWD 19        128.11   P2p
Fa0/12           Desg FWD 19        16.12    P2p
The true impact of the command is seen on SW2, where 0/12 is now in Forwarding mode for VLAN 15, and 0/11 is in Blocking mode.

SW2#show spanning vlan 15

VLAN0015
  Spanning tree enabled protocol ieee
  Root ID    Priority    32783
             Address     000e.d7f5.a040
             Cost        19
             Port        12 (FastEthernet0/12)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32783  (priority 32768 sys-id-ext 15)
             Address     000f.90e2.14c0
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------------------
Fa0/11           Altn BLK 19        128.11   P2p
Fa0/12           Root FWD 19        128.12   P2p
Let's check VLAN 11 on SW2 - is fast 0/11 still in Forwarding mode for that VLAN?

SW2#show spanning vlan 11

VLAN0011
  Spanning tree enabled protocol ieee
  Root ID    Priority    32779
             Address     000e.d7f5.a040
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32779  (priority 32768 sys-id-ext 11)
             Address     000f.90e2.14c0
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------------------
Fa0/11           Root FWD 19        128.11   P2p
Fa0/12           Altn BLK 19        128.12   P2p
Yes, it is! VLANs 11 - 14 will use the trunk between the switches' fast 0/11 ports, and VLANs 15 - 20 will use the trunk between the switches' fast 0/12 ports. In many instances, you'll configure an Etherchannel here rather than using port priority to load balance over the trunk lines. In Ciscoland, it's always a good idea to know more than one way to do something - especially when you're studying for an exam!

And in this situation, if 0/12 should go down for some reason... say, the shutdown command...

SW2(config)#int fast 0/12
SW2(config-if)#shutdown
... VLANs 15 - 20 would begin using the 0/11 trunk.

SW2#show spanning vlan 15

VLAN0015
  Spanning tree enabled protocol ieee
  Root ID    Priority    32783
             Address     000e.d7f5.a040
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32783  (priority 32768 sys-id-ext 15)
             Address     000f.90e2.14c0
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------------------
Fa0/11           Root FWD 19        128.11   P2p
Get The VLAN Information You Need!

We're all familiar with show interface x, but there's a slight variation on this command when it comes to Cisco switches that will give you a great deal of helpful information when it comes to troubleshooting - show interface x switchport. There's actually a very common issue indicated in this output - can you spot it?

SW1#show interface fast 0/2 switchport
Name: Fa0/2
Switchport: Enabled
Administrative Mode: dynamic desirable
Operational Mode: down
Administrative Trunking Encapsulation: dot1q
Negotiation of Trunking: On
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Voice VLAN: none
Administrative private-vlan host-association: none
Administrative private-vlan mapping: none
Administrative private-vlan trunk native VLAN: none
Administrative private-vlan trunk encapsulation: dot1q
Administrative private-vlan trunk normal VLANs: none
Administrative private-vlan trunk private VLANs: none
Operational private-vlan: none
Trunking VLANs Enabled: ALL
Pruning VLANs Enabled: 2-1001
Capture Mode Disabled
Capture VLANs Allowed: ALL
Protected: false
Appliance trust: none
From top to bottom, you can see whether the switchport is enabled, what the trunking mode is ("administrative mode"), what trunking encapsulation is in use, whether trunking's being negotiated or not, what the native VLAN is, and so forth. This is an excellent VLAN and trunking troubleshooting command. And the problem? I left the interface shut down. :)

Here's what the output looks like when the interface is open.

SW1#show interface fast 0/2 switchport
Name: Fa0/2
Switchport: Enabled
Administrative Mode: dynamic desirable
Operational Mode: static access
Administrative Trunking Encapsulation: dot1q
Operational Trunking Encapsulation: native
Negotiation of Trunking: On
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Voice VLAN: none
Administrative private-vlan host-association: none
Administrative private-vlan mapping: none
Administrative private-vlan trunk native VLAN: none
Administrative private-vlan trunk encapsulation: dot1q
Administrative private-vlan trunk normal VLANs: none
Administrative private-vlan trunk private VLANs: none
Operational private-vlan: none
Trunking VLANs Enabled: ALL
Pruning VLANs Enabled: 2-1001
Capture Mode Disabled
Capture VLANs Allowed: ALL
Protected: false
Appliance trust: none
The reason I'm pointing that out is that with the basic show interface command, you'll see the phrase "administratively down" - and you know from your CCNA studies that this phrase really means "you forgot to open the interface."

SW1#show interface fast 0/2
FastEthernet0/2 is administratively down, line protocol is down (disabled)
When you run show interface switchport, you're not going to see "administratively down", but just "down" - which may lead you to look for a more complex solution. Just remember to always check the interface's open/shut status first, no matter what the router or switch is telling you.

Here's what the output looks like when a trunk port is specified. Note that you can also see which VLANs are allowed across the trunk and which VLANs are being pruned.

SW1#show interface fast 0/11 switchport
Name: Fa0/11
Switchport: Enabled
Administrative Mode: dynamic desirable
Operational Mode: trunk
Administrative Trunking Encapsulation: dot1q
Operational Trunking Encapsulation: dot1q
Negotiation of Trunking: On
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Voice VLAN: none
Administrative private-vlan host-association: none
Administrative private-vlan mapping: none
Administrative private-vlan trunk native VLAN: none
Administrative private-vlan trunk encapsulation: dot1q
Administrative private-vlan trunk normal VLANs: none
Administrative private-vlan trunk private VLANs: none
Operational private-vlan: none
Trunking VLANs Enabled: ALL
Pruning VLANs Enabled: 2-1001
Capture Mode Disabled
Capture VLANs Allowed: ALL
Protected: false
Appliance trust: none
The Extended System ID Feature

Earlier in this section, we took a look at part of a switch's configuration and saw this line:

spanning-tree extend system-id
Defined in IEEE 802.1t, the Extended System ID feature embeds the VLAN ID in the Bridge ID's priority field, which greatly extends the number of STP instances that can be supported by the switch - enough to support the extended VLAN range. The extended-range VLANs are numbered 1006 - 4094. You can't use this feature on all Cisco switches, though. It is enabled by default on 2950 and 3550 switches with an IOS version of 12.1(8)EA or later. Here's how to disable the Extended System ID:

SW2(config)#no spanning extend system-id
You may have noticed something odd about the Bridge ID with the switches used in this section, all of which are running the Extended System ID feature by default:

SW1#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     0011.9375.de00
             Cost        38
             Port        15 (FastEthernet0/13)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32788  (priority 32768 sys-id-ext 20)
             Address     0019.557d.8880
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time 300
The BID priority is the default priority of 32768 plus the System ID Extension value (sys-id-ext). The sys-id-ext value just happens to be the VLAN number, so the BID priority is 32768 + 20, which equals 32788.

Some switches running CatOS can support this feature; with those switches, it's called STP MAC Address Reduction. Disabled by default, it can be enabled with the set spantree macreduction command. (set commands are run on CatOS switches only - IOS-based switches use the CLI commands you see throughout this book.)
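The Bridge ID arithmetic is simple enough to check directly. A minimal sketch, assuming only what's stated above (displayed priority = configured base priority + VLAN number):

```python
DEFAULT_PRIORITY = 32768

def bid_priority(vlan: int, base_priority: int = DEFAULT_PRIORITY) -> int:
    """Displayed BID priority under Extended System ID:
    configured base priority plus the VLAN number (sys-id-ext)."""
    return base_priority + vlan

print(bid_priority(20))         # 32788 - the Bridge ID in the output above
print(bid_priority(20, 24576))  # 24596 - the root's displayed priority
```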
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
Advanced Spanning Tree Overview Portfast and the switchport host command Uplinkfast Backbonefast Root Guard BPDU Guard UDLD Loop Guard BPDU Skew Detection Rapid Spanning Tree Protocol PVST And PVST+ CST And MST Etherchannels Flex Links
With the fundamentals of STP nailed down, we'll dive into more advanced STP features and versions. You won't use all of these in every network you do admin work on, but you will see them out in the field - and on your CCNP SWITCH exam.

Portfast

Suitable only for switch ports connected directly to a single host, Portfast allows a port running STP to go directly from blocking to forwarding mode.
If you have an issue with a host acquiring an IP address via DHCP, configuring Portfast on the switch port in question just might solve the issue. Going through the normal STP stages on that port as the host finishes booting can cause a bit of havoc with the overall DHCP process. A Cisco switch will give you an interesting warning when you configure Portfast:

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single host. Connecting hubs, concentrators, switches, bridges, etc... to this interface when portfast is enabled, can cause temporary bridging loops. Use with CAUTION

%Portfast has been configured on FastEthernet0/5 but will only have effect when the interface is in a non-trunking mode.
SW1(config-if)#
That is one long warning. Not only will the switch warn you about the proper usage of Portfast, but you must put the port into access mode ("non-trunking") before Portfast will take effect.

An excellent real-world usage of Portfast is to allow users to get their IP addresses from a DHCP server. Without Portfast, a workstation connected to a switch port still has to wait 30 seconds for the listening and learning stages of STP to run before it can communicate successfully with the DHCP server. We all know that 30 seconds seems like 30 minutes to end users, especially first thing in the morning! Running Portfast on the appropriate switch ports speeds up that initial network connectivity.

Portfast can also be enabled globally, but we'll get another warning when we do so:

SW2(config)#spanning portfast default
%Warning: this command enables portfast by default on all interfaces. You should now disable portfast explicitly on switched ports leading to hubs, switches and bridges as they may create temporary bridging loops.
Personally, I like to configure it on a per-port basis, but make sure you know both ways to configure Portfast. It never hurts to know more than one way to do things on a Cisco exam. And remember, state changes on a Portfast-enabled port will not result in TCN BPDUs being generated.

There's a command related to portfast that I want to share with you - note the three effects of this command as explained by IOS Help:

SW1(config-if)#switchport host
switchport mode will be set to access
spanning-tree portfast will be enabled
channel group will be disabled
Good stuff to know!

Uplinkfast

When a port goes through the transition from blocking to forwarding, you're looking at a 50-second delay before that port can actually begin forwarding frames. Configuring a port with Portfast is one way to get around that, but again, you can only use it when a single host device is found off the port. What if the device connected to a port is another switch?
SW3 has two paths to the root switch. STP will only allow one path to be available, but if the open path between SW3 and SW1 goes down, there will be approximately a 50-second delay before the currently blocked path becomes available. The delay is there to prevent switching loops, and we can't use Portfast to shorten the delay since these are switches, not host devices. What we can use is Uplinkfast.

The ports that SW3 could potentially use to reach the root switch are collectively referred to as an uplink group. The uplink group includes the ports in forwarding and blocking mode. If the forwarding port in the uplink group sees that the link has gone down, another port in the uplink group will be transitioned from blocking to forwarding immediately. Uplinkfast is pretty much Portfast for wiring closets. Cisco recommends that Uplinkfast not be used on switches in the distribution and core layers.

Some additional details regarding Uplinkfast:
- The actual transition from blocking to forwarding isn't really "immediate" - it actually takes 1 - 3 seconds. Next to a 50-second delay, that certainly seems immediate!
- Uplinkfast cannot be configured on a root switch.
- When Uplinkfast is enabled, it's enabled globally and for all VLANs residing on the switch. You can't run Uplinkfast on some ports or on a per-VLAN basis - it's all or nothing.
The original root port will become the root port again when it detects that its link to the root switch has come back up. This does not take place immediately. The switch uses the following formula to determine how long to wait before transitioning the original root port back to the forwarding state:

(2 x FwdDelay) + 5 seconds

Uplinkfast will take immediate action to ensure that a switch cannot become the root switch - actually, two immediate actions!

First, the switch priority will be set to 49,152, which means that if all other switches are still at their default priority, they'd all have to go down before this switch can possibly become the root switch. Additionally, the STP Port Cost will be increased by 3000, making it highly unlikely that this switch will be used to reach the root switch by any downstream switches.
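The Uplinkfast numbers - the failback delay formula plus the priority and cost adjustments - can be tallied like so. An illustrative sketch of the values named above, with the default Forward Delay and FastEthernet cost assumed as inputs:

```python
def uplinkfast_failback_delay(forward_delay: int = 15) -> int:
    """Seconds before the original root port resumes forwarding:
    (2 x FwdDelay) + 5."""
    return 2 * forward_delay + 5

def uplinkfast_adjustments(port_cost: int = 19) -> tuple[int, int]:
    """Values Uplinkfast forces on the switch so it can't become root:
    priority pinned to 49,152, each port cost raised by 3000."""
    return 49152, port_cost + 3000

print(uplinkfast_failback_delay())  # 35 seconds with default timers
print(uplinkfast_adjustments(19))   # (49152, 3019)
```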
And you just know there's got to be at least one option with this command, right? Let's run IOS Help and see.

SW2(config)#spanning-tree uplinkfast ?
  max-update-rate  Rate at which station address updates are sent

When there is a direct link failure, dummy multicast frames are sent to the MAC destination address 01-00-0c-cd-cd-cd. The max-update-rate value determines how many of these frames will be sent in a 100-millisecond time period.

Where To Apply Uplinkfast

As with all the topics in this section, it's not enough to know the definition of Uplinkfast and what it does - you've got to know where to configure it for best results. Uplinkfast is a wiring-closet switch feature - it's not recommended for core and distribution-layer switches. Uplinkfast should be configured only on access-layer switches. It's a safe bet that the root switches are going to be found in the core layer, and the switches that are farthest away from the root switches will be the access switches. The access switches will be the ones closest to the end users.
Backbonefast

Uplinkfast and Portfast are great, but they've got limitations on when they can and should be run. You definitely can't run either one in a network backbone, but the Cisco-proprietary feature Backbonefast can be used to help recover from indirect link failures.

The key word there is indirect. If a core switch detects an indirect link failure - a failure of a link that is not directly connected to the core switch in question - Backbonefast goes into action. This indirect link failure is detected when an inferior BPDU is received. Now, you may be asking, "What is an inferior BPDU?" Glad you asked! Let's take a look at a three-switch setup where all links are working and STP is running as expected, paying particular attention to the STP states on SW3. All links are assumed to be running at the same speed.
SW1 has been elected the root bridge, and sends BPDUs every two seconds to SW2 and SW3 telling them this. In turn, SW2 takes the BPDU it's receiving from SW1 and relays it to SW3. All is well, until SW2 loses its connection to SW1, as shown below - which means that SW2 will start announcing itself as the root switch. SW3 will now be receiving two separate BPDUs from two separate switches, both claiming to be the root switch.
SW3 looks at the priority of the BPDU coming in from SW2, and compares it to the BDPUs it's getting from SW1. SW3 quickly realizes the BPDU from SW2 is an inferior BPDU, and simply ignores it. Once SW3's MaxAge timer on the port leading to SW2 hits zero, that port will transition to the listening state and will start relaying the information contained in the superior BPDU, the BPDU coming in from SW1.
The key phrase here is "once SW3's MaxAge timer on the port leading to SW2 hits zero". We really don't want to wait that long, and with Backbonefast, we don't have to! When Backbonefast is configured, this process skips the MaxAge stage. While this doesn't eliminate the delay as efficiently as Portfast and Uplinkfast do, the delay is cut from 50 seconds to 30. (MaxAge's default value is 20 seconds, but the 15-second Listening and Learning stages still have to run.)

Backbonefast uses the Root Link Query (RLQ) protocol. RLQ uses a series of requests and responses to detect indirect link outages. RLQ requests are transmitted via the ports that would normally be receiving BPDUs. The purpose of these RLQ requests is to ensure that the local switch still has connectivity to the root switch. The RLQ request identifies the bridge that is considered the root bridge, and the RLQ response will identify the root bridge that can be accessed via that port. If they're one and the same, everything's fine.

Upon receiving an RLQ request, a switch will answer immediately under one of two conditions:
- The receiving switch is indeed the root bridge named in the RLQ request
- The receiving switch has no connectivity to the root bridge named in the RLQ request, because it considers another switch to be the root bridge
The third possibility is that the receiving switch is not the root, but considers the root switch named in the RLQ request to indeed be the root switch. In that case, the RLQ request is relayed toward the root switch by sending it out the root port.

To put Backbonefast into action in our network, we have to know more than the command! We've got to know where to configure it as well. Since all switches in the network have to be able to send, relay, and respond to RLQ requests, and RLQ is enabled by enabling Backbonefast, every switch in the network should be configured for Backbonefast when using this feature.

This feature is enabled globally, and it's simple to configure - and believe it or not, there are no additional timers or options with this command. A true Cisco rarity! The command to verify Backbonefast is just as simple and is shown below.

SW1#show spanning-tree backbonefast
BackboneFast is disabled

SW1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
SW1(config)#spanning-tree backbonefast

SW1#show spanning-tree backbonefast
BackboneFast is enabled
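The convergence arithmetic behind Backbonefast is worth seeing in one place. A sketch using the default timers named earlier - MaxAge of 20 seconds, plus 15 seconds each for Listening and Learning - where Backbonefast's only effect is to skip the MaxAge wait:

```python
MAX_AGE = 20        # seconds, default MaxAge timer
FORWARD_DELAY = 15  # seconds, default; runs once each for Listening and Learning

def reconvergence_delay(backbonefast_enabled: bool) -> int:
    """Indirect-failure recovery time: Backbonefast skips only the
    MaxAge wait; the Listening and Learning stages still have to run."""
    delay = 2 * FORWARD_DELAY  # Listening + Learning
    if not backbonefast_enabled:
        delay += MAX_AGE
    return delay

print(reconvergence_delay(False))  # 50 seconds without Backbonefast
print(reconvergence_delay(True))   # 30 seconds with it
```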
Root Guard

You know that the root switch is the switch with the lowest BID, and that a secondary root is also elected - that's the switch with the next-lowest BID. You also know that you can use the spanning-tree vlan root command to make sure that a given switch becomes the root or the secondary root.

SW1(config)#spanning-tree vlan 23 root ?
  primary    Configure this switch as primary root for this spanning tree
  secondary  Configure switch as secondary root
We've used that command to name the root and secondary root switches in the following network. For clarity's sake, the full BID is not shown - just the switch priority.
Nothing wrong here, everything's fine... until another switch is added to the mix.
The problem here is that SW4 is going to become the root switch, and SW1 is going to become the secondary root. If SW4 is allowed to become the root bridge, here's what the new STP topology will look like.
Depending on the design of your network, this change in root switches can have a negative effect on traffic flow. There's also a delay involved while the switches converge on the new STP topology. Worse yet, there's always the possibility that SW4 isn't even under your administrative control - it belongs to another network!

STP has no default behavior to prevent this from happening; the spanning-tree vlan root command helps you determine which switches become the root and secondary root, but does nothing to disqualify a switch from becoming the root. To prevent SW4 from becoming the root in this network, Root Guard must be configured.

Root Guard is configured at the port level, and disqualifies any switch that is downstream from that port from becoming the root or secondary root. To prevent SW4 from becoming the root or secondary root, SW3's port that will receive BPDUs from SW4 should be configured with Root Guard. When the BPDU comes in from SW4, SW3 will recognize this as a superior BPDU, one that would result in a new root switch being elected.
Root Guard will actually block that superior BPDU, discard it, and put the port into root-inconsistent state. When those superior BPDUs stop coming, SW3 will allow that port to transition normally through the STP port states. Configuring Root Guard is simple:

SW3(config)#int fast 0/24
SW3(config-if)#spanning guard root
SW3(config-if)#
00:10:35: %SPANTREE-2-ROOTGUARD_CONFIG_CHANGE: Root guard enabled on port FastEthernet0/24.
There is no interface reset or reload necessary, but note that Root Guard-enabled ports act as designated ports (until a superior BPDU is received, of course). SW4 now comes online and sends a superior BPDU for VLAN 23 to SW3, which receives the BPDU on port 0/24 - the port running Root Guard. Here's the console message we receive as a result on SW3:

00:26:46: %SPANTREE-2-ROOTGUARD_BLOCK: Root guard blocking port FastEthernet0/24 on VLAN0023.
Additionally, there's a spanning-tree command that will show you a list of ports that have been put into root-inconsistent state, but it's not as obvious as some of the other show spanning-tree commands we've seen:

SW3#show spanning-tree ?
  active             Report on active interfaces only
  backbonefast       Show spanning tree backbonefast status
  blockedports       Show blocked ports
  bridge             Status and configuration of this bridge
  detail             Detailed information
  inconsistentports  Show inconsistent ports
  interface          Spanning Tree interface status and configuration
  pathcost           Show Spanning pathcost options
  root               Status and configuration of the root bridge
  summary            Summary of port states
  uplinkfast         Show spanning tree uplinkfast status
  vlan               VLAN Switch Spanning Trees
  |                  Output modifiers

SW1#show spanning-tree inconsistentports

Name                 Interface              Inconsistency
-------------------- ---------------------- ------------------
Those of you who do not like to type can just enter "inc" for that last word!
This is the resulting topology:
BPDU Guard

Remember that warning we got from the switch when configuring Portfast?

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single host. Connecting hubs, concentrators, switches, bridges, etc... to this interface when portfast is enabled, can cause temporary bridging loops. Use with CAUTION

%Portfast has been configured on FastEthernet0/5 but will only have effect when the interface is in a non-trunking mode.
Now, you'd think that would be enough of a warning, right? But there is a chance - just a chance - that someone is going to manage to connect a switch to a port running Portfast, which in turn creates the possibility of a switching loop.
BPDU Guard protects against this possibility. If any BPDU, superior or inferior, comes in on a port that's running BPDU Guard, the port will be shut down and placed into error disabled state, shown on the switch as err-disabled. To configure BPDU Guard on a specific port only:

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree bpduguard
% Incomplete command.
SW1(config-if)#spanning-tree bpduguard ?
  disable  Disable BPDU guard for this interface
  enable   Enable BPDU guard for this interface
SW1(config-if)#spanning-tree bpduguard enable
To configure BPDU Guard on all ports on the switch:

SW1(config)#spanning-tree portfast bpduguard default
Note that this command is a variation of the portfast command. Naturally, BPDU Guard can only be configured on ports already running Portfast. The same goes for the next feature, BPDU Filtering.

PortFast BPDU Filtering

What if you don't want the port to be put into err-disabled state when it receives a BPDU? You can use BPDU Filtering, but you have to be careful how you configure it - this feature works differently when it's configured globally as opposed to configuring it on a per-interface level.
Globally enabling BPDU Filtering will enable this feature on all
switchports running portfast, and any such port will stop running PortFast if the port receives a BPDU.
Enabling BPDU Filtering on a specific port or ports, rather than enabling it globally, will result in received BPDUs being quietly ignored. Those incoming BPDUs will be dropped, and the port will not send any BPDUs in return.
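To keep those two behaviors straight, here's a tiny Python sketch of the decision. This is purely a study aid - the function name and return strings are my own inventions, not anything from IOS:

```python
# Study-aid model of BPDU Filtering's two behaviors. The function and
# its return strings are illustrative, not Cisco APIs.

def bpdu_filter_action(configured_globally: bool) -> str:
    """What happens when a PortFast port running BPDU Filtering
    receives a BPDU, depending on where the feature was enabled."""
    if configured_globally:
        # Global config: the port stops running PortFast and falls
        # back to normal STP operation.
        return "port loses PortFast and resumes normal STP"
    # Per-interface config: the BPDU is quietly dropped, and the port
    # sends no BPDUs in return.
    return "BPDU dropped silently; no BPDUs sent back"

print(bpdu_filter_action(True))
print(bpdu_filter_action(False))
```

The key point to retain for the exam: global means "fall back to normal STP", per-interface means "ignore BPDUs entirely".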
To enable BPDU Filtering globally on all Portfast-enabled ports:

SW1(config)#spanning portfast bpdufilter ?
  default  Enable bpdu filter by default on all portfast ports
SW1(config)#spanning portfast bpdufilter default
To enable BPDU Filtering on a specific port:

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree bpdufilter enable
To verify global configuration of BPDU Filtering (and quite a few other features!):

SW1#show spanning-tree summary totals
Switch is in rapid-pvst mode
Root bridge for: none
Extended system ID           is enabled
Portfast Default             is disabled
PortFast BPDU Guard Default  is disabled
Portfast BPDU Filter Default is enabled
Loopguard Default            is disabled
EtherChannel misconfig guard is enabled
UplinkFast                   is disabled
BackboneFast                 is disabled
Configured Pathcost method used is short

Name                 Blocking Listening Learning Forwarding STP Active
-------------------- -------- --------- -------- ---------- ----------
13 vlans                   39         0        0         24         63
To verify configuration of BPDU Filtering on a specific port:

SW2#show spanning-tree interface fast0/5 detail
 Port 5 (FastEthernet0/5) of VLAN0001 is forwarding
   Port path cost 19, Port priority 128, Port Identifier 128.5.
   Designated root has priority 32769, address 000e.381f.ee80
   Designated bridge has priority 32769, address 000e.381f.ee80
   Designated port id is 128.5, designated path cost 0
   Timers: message age 0, forward delay 0, hold 0
   Number of transitions to forwarding state: 1
   The port is in the portfast mode
   Link type is point-to-point by default
   Bpdu filter is enabled by default
   BPDU: sent 6837, received 0
I know what you Bulldogs are thinking out there - "So what if you run both BPDU Guard and Filtering on the same port?" In that rare case, the port will only act under the Filtering rules we've gone over here.
Unidirectional Link Detection (UDLD)

Most problems involving the physical link make data transfer impossible in both directions. With fiber-optic links in particular, though, there are situations where a physical layer issue disables data transfer in one direction, but not the other.
UDLD detects these unidirectional links by transmitting a UDLD frame across the link. If a UDLD frame is received in return, that indicates a bidirectional link, and all is well.
If a UDLD frame is not received in return, the link is considered unidirectional. It's really like a Layer 2 ping. If the UDLD "echo" is seen, there's bidirectional communication; if the "echo" is not seen, there isn't!
UDLD has two modes of operation, normal and aggressive. When a unidirectional link is detected in normal mode, UDLD generates a syslog message but does not shut the port down. In aggressive mode, the port will be put into error disabled state (err-disabled) after eight UDLD messages receive no echo from the remote switch. Why is it called "aggressive"? Because the UDLD messages will go out at a rate of one per second when a potential unidirectional link is found.

UDLD can be enabled globally or on a per-port level. To enable UDLD globally, run the udld enable command. In this case, "globally" means that UDLD will run on all fiber optic interfaces. For aggressive mode, run udld aggressive. (There is no udld normal command.)

SW2(config)#udld ?
  aggressive  Enable UDLD protocol in aggressive mode on fiber ports except
              where locally configured
  enable      Enable UDLD protocol on fiber ports except where locally
              configured
  message     Set UDLD message parameters

SW2(config)#udld enable
Here are your options for running UDLD at the interface level:

SW1(config)#int fast 0/11
SW1(config-if)#udld ?
  port  Enable UDLD protocol on this interface
SW1(config-if)#udld port ?
  aggressive  Enable UDLD protocol in aggressive mode on this interface
Another important detail regarding UDLD is that to have it run effectively, you've got to have it configured on both involved ports. For example, in the previous two-switch examples, UDLD would have to be configured on both switches, either on the switch ports or globally. Now, you may be thinking the same thing I did when I first read about aggressive mode... If aggressive mode shuts a port down after failing to receive an echo to eight consecutive UDLD frames, won't the port always shut down when you first configure UDLD? Personally, I type very quickly, but even I can't enter the UDLD command on one switch and then connect to the remote switch and enable UDLD there within eight seconds!
When UDLD's aggressive mode is first configured on the local switch, the port will start sending UDLD frames, but will not shut down the port when it doesn't hear back from a remote switch within 8 seconds.
The remote switch will first have to answer back with a UDLD frame, which makes the local switch aware of the remote switch. Then, if the remote switch stops sending back echo frames, the local switch will shut the port down.
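The aggressive-mode logic can be modeled in a few lines of Python. This is a conceptual sketch under the assumptions just described - one probe per second, eight unanswered probes, and no shutdown until a neighbor has been heard at least once - and the names are mine, not IOS:

```python
# Study-aid model of UDLD aggressive mode. A port is only err-disabled
# after a neighbor has been heard at least once and then eight
# consecutive one-per-second echoes go unanswered.

def udld_aggressive(events):
    """events: a sequence of 'echo' or 'silence', one per second.
    Returns the final port state."""
    neighbor_seen = False
    missed = 0
    for ev in events:
        if ev == "echo":
            neighbor_seen = True
            missed = 0
        elif neighbor_seen:
            missed += 1
            if missed >= 8:
                return "err-disabled"
    return "up"

# Freshly enabled on one side only: the port is never shut down.
print(udld_aggressive(["silence"] * 30))            # up
# Neighbor answered once, then the link went one-way:
print(udld_aggressive(["echo"] + ["silence"] * 8))  # err-disabled
```

This is exactly why you can safely enable aggressive mode on one switch before the other - silence from a neighbor that was never heard doesn't count.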
Duplex Mismatches And Switching Loops A duplex mismatch between two trunking switches isn't quite a unidirectional link, but it can indeed lead to a switching loop. You're not often going to change switch duplex settings, especially on trunk ports, but if you change one switch port's duplex setting, change that of any trunking partner! Believe it or not, the switching loop potential is caused by CSMA/CD! The full-duplex port will not perform CSMA/CD, but the half-duplex port will. The problem comes in when the half-duplex port listens to the segment, hears nothing, and sends frames as it normally would under CSMA/CD rules...
... and then the full-duplex port sends frames without listening to the segment. We all know what happens then!
Under CSMA/CD rules, the half-duplex port will then invoke its random timer and then listen to the segment again before attempting to send frames - and that includes BPDUs. One collision does not a switching loop make, but if the full-duplex port sends enough traffic, it effectively drowns out anything that the half-duplex port tries to send. Depending on the location of the root switch in this network (or if one of these switches is the root switch), a switching loop may well occur. Keep your ports in the same duplex mode and you don't have to worry about this!
Loop Guard We've had BPDU Guard, Root Guard, and now... Loop Guard! You can probably guess that the "loop" being guarded against is a switching loop... but how does Loop Guard prevent switching loops? Let's revisit an earlier example to see how the absence of BPDUs can result in a switching loop.
In this network, only one port will be in blocking mode (BLK). Ports in blocking mode still receive BPDUs, and right now everything's as we would want it to be. SW3 is receiving a BPDU directly from the root as well as a forwarded BPDU from the secondary root. But what happens if a physical issue causes the link between SW2 and SW3 to become a unidirectional link, where SW3 can send BPDUs to SW2 but SW2 cannot send them to SW3?
If SW2 cannot transmit to SW3, the BPDUs will obviously not reach SW3. SW3 will wait for the duration of the MaxAge timer - by default, 20 seconds - and will then begin to transition the port facing SW2 from blocking to forwarding mode. With all six ports in forwarding mode, we've got ourselves a switching loop.

Loop Guard does not allow a port to go from blocking to forwarding in this situation. With Loop Guard enabled, the port will go from blocking to loop-inconsistent, which is basically still blocking mode, and a switching loop will not form.
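As a quick study aid, here's that difference in outcomes sketched in Python (the function name and state strings are mine, not IOS output):

```python
# Study-aid model: what happens to a formerly-blocking port when BPDUs
# stop arriving and the MaxAge timer (20 s by default) expires.

def port_state_after_max_age(loop_guard_enabled: bool) -> str:
    if loop_guard_enabled:
        return "loop-inconsistent"   # still effectively blocking
    return "forwarding"              # classic STP - switching loop risk

print(port_state_after_max_age(False))  # forwarding
print(port_state_after_max_age(True))   # loop-inconsistent
```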
Once the unidirectional link issue is cleared up and SW3 begins to receive BPDUs again, the port will come out of loop-inconsistent state and will be treated as an STP port would normally be. Loop Guard is disabled on all ports by default, and is enabled at the port level:

SW2(config-if)#int fast 0/5
SW2(config-if)#spanning-tree guard loop
You can also enable Loop Guard on a global basis: SW1(config)#spanning-tree loopguard default
Strange But True: You enable Loop Guard on a per-port basis, but it operates on a per-VLAN basis. For example, if you have your trunk configured to carry VLANs 10, 20, and 30, and BPDUs stop coming in for VLAN 10, the port would move to loop-inconsistent state for VLAN 10 only while continuing to perform normally for the other VLANs.

BPDU Skew Detection

You may look at that feature's name and think, "What is a BPDU Skew, and why do I want to detect it?" What we're actually attempting to detect are BPDUs that aren't being relayed as quickly as they should be. After the root bridge election, the root bridge transmits BPDUs, and the non-root switches relay that BPDU down the STP tree. This should happen quickly all around, since the root bridge will be sending a BPDU every two seconds by default ("hello time"), and the switches should relay the BPDUs fast enough so every switch is seeing a BPDU every two
seconds. That's in a perfect world, though, and there are plenty of imperfect networks out there! You may have a busy switch that can't spare the CPU to relay the BPDU quickly, or a BPDU may just simply be lost along the way down the STP tree. That two-second hello time value doesn't give the switches much leeway, but we don't want the STP topology recalculated unnecessarily either.

BPDU Skew Detection is strictly a notification feature. Skew Detection will not take action to prevent STP recalculation when BPDUs are not being relayed quickly enough by the switches, but it will send a syslog message informing the network administrator of the problem. The amount of time between when the BPDU should have arrived and when it did arrive is referred to as "skew time" or "BPDU latency".

A busy CPU could quickly find itself overwhelmed if it had to send a syslog message for every BPDU delivery that's skewed. The syslog messages will be limited to one every 60 seconds, unless the "skew time" is at a critical level. In that case, the syslog message will be sent immediately with no one-per-minute limit. And what is "critical", according to BPDU Skew Detection? Any value greater than 1/2 of the MaxAge value, making the critical skew time level 10 seconds or greater.
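That throttling rule can be expressed as a short Python sketch - my own model of the behavior just described, using the default MaxAge of 20 seconds:

```python
# Study-aid model of BPDU Skew Detection's syslog throttling: one
# message per 60 seconds, unless the skew is "critical" (half of
# MaxAge - 10 seconds or more), in which case it goes out at once.

MAX_AGE = 20  # seconds, STP default

def should_log(skew_time, now, last_log_time):
    """Decide whether a skewed-BPDU syslog message is sent."""
    critical = skew_time >= MAX_AGE / 2   # 10 seconds or greater
    if critical:
        return True                       # immediate, no rate limit
    return now - last_log_time >= 60      # otherwise one per minute

print(should_log(3, 100, 70))    # False - throttled, only 30 s elapsed
print(should_log(3, 140, 70))    # True  - a full minute has passed
print(should_log(12, 100, 99))   # True  - critical skew, sent at once
```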
Rapid Spanning Tree Protocol So you understand STP, and you've got all these STP features down and now here's another kind of STP! Specifically, it's RSTP, or Rapid Spanning Tree Protocol. RSTP is defined by IEEE 802.1w, and is considered an extension of 802.1d. The 30-second delay caused by the listening and learning states was once considered an acceptable delay. Then again, a floppy disk used to be considered all the storage space anyone would ever need, and that theory didn't exactly stand the test of time! Root bridges are still elected with RSTP, but the port roles themselves are different between STP and RSTP. Let's take a look at the RSTP port roles in the following three-switch network, where SW1 is the root. Note that
SW3 has multiple connections to the Ethernet segment.
RSTP uses the root port in the same fashion that STP does. All nonroot switches will select a root port, and this port is the one reflecting the lowest root path cost. Assuming all links in this network are running at the same speed, SW2 and SW3 will both select the port directly connected to SW1 as their root ports. There will be no root port on a root bridge.
An RSTP designated port is the port with the best root path cost. The ports on the root switch will obviously have the lowest root path cost for that network segment, and will be the DP for that segment. We'll assume SW3 has the DP for the segment connected to SW2.
RSTP's answer to a blocked port is an alternate port. In this segment, SW2's port leading to SW3 is an alternate port.
In this network, SW3 has two separate ports on the same physical segment. One port has already been selected as the designated port for that segment, and the other port will become the backup port. This port gives a redundant path to that segment, but doesn't guarantee that the root switch will still be accessible.
The "rapid" in RSTP comes in with the new port states. The STP port states disabled, blocking, and listening are combined into the RSTP port state discarding, which is the initial RSTP port state.
RSTP ports transition from the discarding state to the learning state, where incoming frames are still discarded; however, MAC addresses are now being learned by the switch. Finally, an RSTP port will transition to the forwarding state, which is the same as the STP forwarding state. Let's compare the transition states:

STP:  disabled > blocking > listening > learning > forwarding
RSTP: discarding > learning > forwarding

There are other port types unique to RSTP: edge ports and point-to-point ports. An edge port is just what it sounds like - a port on the edge of the network. In this case, it's a switch port that is connected to a single host, most likely an end user's PC.
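The state consolidation described above boils down to a simple lookup table. Here it is in Python, purely as a study aid:

```python
# Study-aid lookup table: which RSTP state each classic STP state
# maps to. Five STP states collapse into three RSTP states.

RSTP_STATE = {
    "disabled":   "discarding",
    "blocking":   "discarding",
    "listening":  "discarding",
    "learning":   "learning",
    "forwarding": "forwarding",
}

print(sorted(set(RSTP_STATE.values())))
# ['discarding', 'forwarding', 'learning']
```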
So why do we care? RSTP edge ports act like Portfast-enabled STP ports -- they can go straight to the forwarding state, skipping the intermediate states. If a BPDU comes in on an RSTP edge port, it's "demoted" to a regular RSTP port and generates a TCN BPDU. (More about that in just a few seconds.)

A point-to-point port is any port running in full-duplex mode. (Any ports running half-duplex are shared ports.)
Edge Ports And RSTP Topology Changes Edge ports play a role in when RSTP considers a topology change to have taken place. Rather, I should say that they don't play a role, because RSTP considers a topology change to have taken place when a port moves into Forwarding mode - unless that port is an edge port.
When an edge port moves into Forwarding mode, RSTP doesn't consider that a topology change, since only a single host will be connected to that particular port. When a topology change is discovered by a switch running RSTP, that switch sends BPDUs with the Topology Change (TC) bit set.

While the concept of a Portfast-enabled port and an Edge port in RSTP are the same - both go immediately to the Forwarding state and should be connected only to a single host - there is a major difference in their behavior when a BPDU is received on such a port. An RSTP Edge Port will simply be considered a "normal" spanning tree port after receiving a BPDU.

Another major difference between STP and RSTP is the way BPDUs are handled. With STP, only the root bridge is sending BPDUs every two seconds; the non-root bridges simply forward, or relay, that BPDU when they receive it. RSTP-enabled switches generate a BPDU every two seconds, regardless of whether they have received a BPDU from the root switch or not. (The default value of hello time, the interval at which switches send BPDUs, is two seconds in both STP and RSTP.)

This change not only allows all switches in the network to have a role in detecting link failures, but discovery of link failures is faster. Why? Because every switch expects to see a BPDU from its neighbor every two seconds, and if three BPDUs are missed, the link is considered down. The switch then immediately ages out all information concerning that port. This cuts the error detection process from 20 seconds in STP to 6 seconds in RSTP.

Let's compare the two protocols and their link failure detection times. When a switch running STP misses a BPDU, the MaxAge timer begins. This timer dictates how long the switch will retain the last BPDU before timing it out and beginning the STP recalculation process. By default, MaxAge is 20 seconds.
When a switch running RSTP misses three BPDUs, it will immediately age out the superior BPDU's information and begin the STP recalculation process. Since the default hello time is 2 seconds for both STP and RSTP, it takes an RSTP-enabled switch only 6 seconds overall to determine that a link to a neighbor has failed.

The BPDU format is the same for STP and RSTP, but RSTP uses all flag bits available in the BPDU for various purposes, including state negotiation between neighbors, while STP uses only the Topology Change (TC) and Topology Change Ack (TCA) flags. The details of this negotiation are out of the scope of the exam, but can easily be found on the Internet by searching for "RSTP" in your favorite search engine. The RSTP BPDU is also of a totally different type (Type 2, Version 2), which allows an RSTP-enabled switch to detect older switches.

The behaviors of three main STP features we looked at earlier in this section (UplinkFast, PortFast, and BackboneFast) are built into RSTP. No additional config is needed to gain the benefits of all three.

Per-VLAN Spanning Tree Versions (PVST and PVST+)

The ultimate "the name is the recipe" protocol, the Cisco-proprietary PVST, well, runs a separate instance of STP for each VLAN!

The Good: PVST does allow for much better fine-tuning of spanning-tree performance than does regular old STP.

The Bad: Running PVST does mean extra work for your CPU and memory.

The Ugly: PVST is Cisco-proprietary, so it must run over the Cisco-proprietary trunking protocol - ISL. The requirement for PVST to run ISL becomes a major issue in a network like this:
PVST doesn't play well at all with CST, so Cisco came up with PVST+. PVST+ is described by Cisco's website as having the same functionality as PVST, with the + version using dot1q rather than ISL. PVST+ is Cisco-proprietary as well.
PVST+ can serve as an intermediary between groups of PVST switches and switches running CST; otherwise, the groups wouldn't be able to communicate. Using PVST+ along with CST and PVST can be a little difficult to fine-tune at first, but this combination is running in many a network right now - and working fine!
Rapid Per-VLAN Spanning Tree Plus (RPVST+) And PVST+

Now there's a mouthful! Cisco being Cisco, you just know they have to have their own version of STP! Per-VLAN Spanning Tree Plus (PVST+) is just what it sounds like: every VLAN has its own instance of STP running. PVST+ allows per-VLAN load balancing and is also Cisco-proprietary.

If you configure a switch running PVST+ to use RSTP, you end up with RPVST+ - Rapid Per-VLAN Spanning Tree Plus. The good news is that the command is very simple, and we'll use IOS Help to look at some other options:

SW1(config)#spanning-tree mode ?
  mst         Multiple spanning tree mode
  pvst        Per-Vlan spanning tree mode
  rapid-pvst  Per-Vlan rapid spanning tree mode
SW1(config)#spanning-tree mode rapid-pvst
The bad news is that doing so will restart all STP processes, which in turn results in a temporary data flow interruption. If you choose to make this change, it's a good idea to do so when end users aren't around. We'll revisit an old friend, show spanning-tree, to verify that RPVST+ is running on VLAN 1:

SW1#show spanning-tree vlan 1

VLAN0001
  Spanning tree enabled protocol rstp
  Root ID    Priority    32769
             Address     000e.381f.ee80
             Cost        4
             Port        25 (GigabitEthernet0/1)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
Later in the output of that same command, note that ports leading to switches have "P2p Peer(STP)" as the type.

Fa0/18           Altn BLK 19        128.18   P2p Peer(STP)
Fa0/19           Desg FWD 19        128.19   P2p Peer(STP)
CST And MST

When our friend IEEE 802.1Q ("dot1q") is the trunking protocol, Common Spanning Tree is in use. With dot1q, all VLANs are using a single instance of STP.

Defined by IEEE 802.1s, Multiple Spanning Tree gets its name from a scheme that allows multiple VLANs to be mapped to a single instance of STP, rather than having an instance for every VLAN in the network. MST serves as a middle ground between STP and PVST. CST uses a single instance of STP, PVST has an instance for every VLAN, and MST allows you to reduce the number of STP instances without knocking it all the way back to one. MST was designed with enterprise networks in mind, so while it can be very useful in the right environment, it's not for every network.

The configuration of MST involves logically dividing the switches into regions, and the switches in any given region must agree on the following:

1. The MST configuration name
2. The MST instance-to-VLAN mapping table
3. The MST configuration revision number
If any of these three values are not agreed upon by two given switches, they are in different regions. Switches send MST BPDUs that contain the configuration name, revision number, and a digest value derived from the mapping table. MST configurations can become quite complex and a great deal of planning is recommended before implementing it. No matter the size of the network, however, keep the central point in mind - the purpose of MST is to map multiple VLANs to a lesser number of STP instances.
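A little Python makes the region rule concrete. MST genuinely derives a digest from the mapping table, but the function below is my own illustration of the "all three must match" logic, not IOS code:

```python
# Study-aid model of MST region membership: two switches are in the
# same region only if configuration name, revision number, and the
# digest of the instance-to-VLAN mapping table all match.

import hashlib

def region_id(name, revision, vlan_map):
    """vlan_map: {instance_number: set of VLANs}. The MD5 digest here
    is a stand-in for the digest MST computes from the real table."""
    canonical = sorted(
        (inst, tuple(sorted(vlans))) for inst, vlans in vlan_map.items()
    )
    digest = hashlib.md5(repr(canonical).encode()).hexdigest()
    return (name, revision, digest)

sw1 = region_id("REGION1", 1, {1: {10, 13, 14}})
sw2 = region_id("REGION1", 1, {1: {10, 13, 14}})
sw3 = region_id("REGION1", 2, {1: {10, 13, 14}})  # revision differs

print(sw1 == sw2)  # True  - same region
print(sw1 == sw3)  # False - different regions
```

Change any one of the three values on one switch and it silently drops into a region of its own - a classic MST troubleshooting scenario.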
A good way to get a mental picture of the interoperability of MST and CST is that CST will cover the entire network, and MST is a "subset" of the network. CST is going to maintain a loop-free network only with the links connecting the MST network subnets, and it's MST's job to keep a loop-free topology in the MST region. CST doesn't know what's going on inside the region, and it doesn't want to know.
The "IST" in each region stands for Internal Spanning Tree, and it's the IST instance that is responsible for keeping communications in the MST Region loop-free. Up to 16 MST instances (MSTIs) can exist in a region, numbered 0 through 15. MSTI 0 is reserved for the IST instance, and only the IST is going to send MST BPDUs. Occasionally the first ten MST instances are referred to as "00" "09". These are not hex values - they're regular old decimals. Here's the good part -- there's no such thing as "VTP For MST". Each and every switch in your MST deployment must be configured manually. (No, I'm not kidding!) When you create VLAN mappings in MST, you've got to configure every switch in your network with those mappings they're not advertised. A good place to start is to enable MST on the switch: SW2(config)# spanning-tree mode mst
The name and revision number must now be set.
SW2(config)# spanning-tree mst configuration
SW2(config-mst)# name REGION1
SW2(config-mst)# revision 1
To map VLANs to a particular MST instance:

SW2(config-mst)# instance 1 vlan 10,13,14-20
Note that I could use commas to separate individual VLANs or use a hyphen to indicate a range of them. When mapping VLANs, remember that by default all VLANs will be mapped to the IST.

Why Does Anyone Run STP Instead Of PVST?

Like the TCP vs. UDP argument from your CCNA studies, this seems like a bit of a no-brainer.

STP: 100 VLANs results in one STP process

PVST: 100 VLANs results in 100 STP processes, allowing for greater flexibility with trunk usage (per-VLAN load balancing, for example)

However, this goes back to something you must keep in mind when you're learning about all of these great features - everything we do on a Cisco switch has a cost in resources. The more STP processes a switch runs, the bigger the hit against the switch's memory and CPU. This is a decision you have to make in accordance with the switch's available resources and the workload PVST will put on your switch. Since Cisco Catalyst switches run PVST by default, that's a good indicator that PVST is the way to go. Just keep the resource hit in mind as your network grows - and the number of VLANs in that network with it!

Etherchannels

Etherchannels aren't just important for your Cisco studies, they're a vital part of many of today's networks. Knowing how to configure and troubleshoot them is a vital skill that any CCNP must have. Etherchannels are part of the CCNA curriculum, but many CCNA books either leave Etherchannels out entirely or mention them only briefly. You may not have even seen an Etherchannel question on your CCNA exam, so we're going to begin this section with a review of what an Etherchannel is
and why we would configure one. After that review, we'll begin an in-depth examination of how Etherchannels work, and I'll show you some real-world examples of common Etherchannel configuration errors to help you master this skill for the exam and for the real world.

What Is An Etherchannel?

An Etherchannel is the logical bundling of two to eight parallel Ethernet trunks. This bundling of trunks is also referred to as aggregation. This provides greater throughput, and is another effective way to avoid the 50-second wait between blocking and forwarding states in case of a link failure.

Spanning-Tree Protocol (STP) considers an Etherchannel to be one link. If one of the physical links making up the logical Etherchannel should fail, there is no STP reconfiguration, since STP doesn't know the physical link went down. STP sees only the Etherchannel, and a single link failure will not bring an Etherchannel down. In this example, there are three trunks between two switches.
show spanning vlan 10 on the non-root bridge illustrates that STP sees three separate links:
If port 0/19 goes down, port 0/20 will begin the process of going from blocking to learning. In the meantime, communication between the two switches is lost. This temporary lack of a forwarding port can be avoided with an Etherchannel. By combining the three physical ports into a single logical link, not only is the bandwidth of the three links combined, but the failure of a single link will not force STP to recalculate the spanning tree.

Etherchannels use the Exclusive OR (XOR) algorithm to determine which physical link in the EC to use to transmit data to the remote switch. After configuring an Etherchannel on each switch with the interface-level command channel-group, the output of the commands show interface trunk and show spanning vlan 10 shows that STP now sees the three physical links as one logical link.
If one of the three physical links goes down, STP will not recalculate. While some bandwidth is obviously lost, the logical link itself stays up. Data that is traveling over the downed physical link will be rerouted to another physical link in a matter of milliseconds - it will happen so fast that you won't even hear about it from your end users! Negotiating An Etherchannel There are two protocols that can be used to negotiate an etherchannel. The industry standard is the Link Aggregation Control Protocol (LACP), and the Cisco-proprietary option is the Port Aggregation Protocol (PAgP). PAgP packets are sent between Cisco switches via ports that have the capacity to be placed into an etherchannel. First, the PAgP packets will check the capabilities of the remote ports against those of the local switch ports. The remote ports are checked for two important values:
The remote port group number must match the number configured on the local switch
The device ID of all remote ports must be the same - after all, if the remote ports are on separate switches, that would defeat the purpose of configuring an etherchannel!
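Those two checks can be sketched as follows. This is an illustration of the logic only - the function and data layout are my own, not Cisco code:

```python
# Study-aid model of the two PAgP sanity checks described above:
# matching group number, and a single device ID across all remote ports.

def pagp_ports_compatible(local_group, remote_ports):
    """remote_ports: list of (group_number, device_id) tuples."""
    groups_match = all(g == local_group for g, _ in remote_ports)
    one_device = len({dev for _, dev in remote_ports}) == 1
    return groups_match and one_device

# Both remote ports on the same switch, same group number: bundle forms.
print(pagp_ports_compatible(1, [(1, "000e.381f.ee80"),
                                (1, "000e.381f.ee80")]))  # True
# Remote ports on two different switches: no bundle.
print(pagp_ports_compatible(1, [(1, "000e.381f.ee80"),
                                (1, "000e.381f.aaaa")]))  # False
```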
PAgP also has the capability of changing a characteristic of the etherchannel as a whole if one of the ports in the etherchannel is changed. If you change the speed of one of the ports in an etherchannel, PAgP will allow the etherchannel to dynamically adapt to this change.
The industry standard bundling protocol defined in 802.3ad, LACP assigns a priority value to each port that has etherchannel capability. You can actually assign up to 16 ports to belong to an LACP-negotiated etherchannel, but only the eight ports with the lowest port priority will be bundled. The other ports will be bundled only if one or more of the bundled ports fails.

PAgP and LACP use different terminology to express the same modes. PAgP has a desirable mode and an auto mode. A port in desirable mode will initiate bundling with a remote switch, while a port in auto mode waits for the remote switch to do so. LACP uses active and passive modes, where active ports initiate bundling and passive ports wait for the remote switch to do so. There's a third option, on, which means that there is no negotiation at all, and neither LACP nor PAgP is used in the construction of the etherchannel.

Configuring Etherchannels

To select a particular negotiation protocol, use the channel-protocol command.

SW1(config-if)#channel-protocol ?
  lacp  Prepare interface for LACP protocol
  pagp  Prepare interface for PAgP protocol
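LACP's lowest-priority-wins selection from a few paragraphs back can be sketched like this. It's a study aid only - real LACP breaks priority ties on port number, which this simplified version ignores:

```python
# Study-aid model of LACP bundling: up to 16 ports may be assigned,
# but only the 8 with the lowest port priority are active; the rest
# stand by and take over only if an active port fails.

def lacp_active_ports(port_priorities, max_bundle=8):
    """port_priorities: {port_name: priority}. Lower priority wins."""
    ranked = sorted(port_priorities, key=lambda p: port_priorities[p])
    return ranked[:max_bundle]

# Twelve candidate ports, priorities 1-12: the eight lowest bundle,
# and the other four wait in standby.
prios = {f"Fa0/{n}": n for n in range(1, 13)}
print(lacp_active_ports(prios))
```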
The channel-group command is used to place a port into an etherchannel.

SW1(config-if)#channel-group 1 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected
You can see the different terminology LACP and PAgP use for the same results - "active" and "desirable" for the local port to initiate the EC, "auto" and "passive" if the remote port is going to initiate the EC. To enable the etherchannel with no negotiation, use the on option. For an EC to form, LACP must have at least one of the two ports on each physical link set for "active"; if both ports are set to "passive", no EC will be built. The same can be said for PAgP and the settings "auto" and "desirable" - if both ports are set to auto, the link won't join the EC.
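Here's that whole negotiation table collapsed into a short Python sketch. This is my own model, not IOS logic; it also captures the fact that LACP and PAgP can't negotiate with each other:

```python
# Study-aid model of EC negotiation. "active"/"passive" are LACP modes,
# "desirable"/"auto" are PAgP modes, and "on" skips negotiation.

PROTO = {"active": "LACP", "passive": "LACP",
         "desirable": "PAgP", "auto": "PAgP"}

def ec_forms(mode_a: str, mode_b: str) -> bool:
    if "on" in (mode_a, mode_b):
        # No negotiation at all - both sides must be hard-coded "on".
        return mode_a == mode_b == "on"
    if PROTO[mode_a] != PROTO[mode_b]:
        return False  # LACP and PAgP don't negotiate with each other
    # At least one side must initiate the bundle.
    return "active" in (mode_a, mode_b) or "desirable" in (mode_a, mode_b)

print(ec_forms("active", "passive"))    # True
print(ec_forms("passive", "passive"))   # False - nobody initiates
print(ec_forms("desirable", "auto"))    # True
print(ec_forms("auto", "auto"))         # False - nobody initiates
```

The "both passive" and "both auto" rows are favorite exam traps - neither side ever starts the conversation, so no EC forms.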
To verify both PAgP and LACP neighbors, you can use the show pagp neighbor and show lacp neighbor commands. To illustrate, I've created an EC using channel-group 1 and the desirable option, meaning that PAgP is enabled unconditionally. The number you see below in each command is the channel group number. You can see that PAgP is running on ports 0/23 and 0/24, and that LACP is not running at all on that EC.

SW1#show pagp 1 neighbor
Flags:  S - Device is sending Slow hello.  C - Device is in Consistent state.
        A - Device is in Auto mode.        P - Device learns on physical port.

Channel group 1 neighbors
          Partner              Partner         Partner        Partner Group
Port      Name                 Device ID       Port      Age  Flags   Cap.
Fa0/23    SW2                  000e.381f.ee80  Fa0/23     13s SC      10001
Fa0/24    SW2                  000e.381f.ee80  Fa0/24     11s SC      10001
SW1#show lacp 1 neighbor Channel group 1 is not participating in LACP
The ECs we've created up to this point are pure Layer 2 ECs. We can verify this with the command show etherchannel brief.

SW1#show etherchannel brief
                Channel-group listing:
                ----------------------
Group: 1
----------
Group state = L2
Ports: 2   Maxports = 8
Port-channels: 1   Max Port-channels = 1
Protocol:   PAgP
You may be wondering what other kind of EC we might see here! In certain situations, you may want to apply an IP address to the EC itself, which results in a Layer 3 Etherchannel. We're on an L3 switch right now, which gives us the ability to create an L3 EC. We'll create an L3 etherchannel in the Multilayer Switching section, and here's a sneak peek!

With an L2 EC, we bundled the ports by configuring each port with the channel-group command, which automatically created the port-channel interface. When configuring an L3 EC, you must create the port-channel interface first, then put the ports into the EC with the channel-group command. IP routing must be enabled on the L3 switch, and all involved ports must be configured as routed ports with the no switchport command.

SW1(config)#int port-channel 1
SW1(config-if)#no switchport
SW1(config-if)#ip address 172.12.1.1 255.255.255.0
SW1(config-if)#int fast 0/23
SW1(config-if)#channel-group 1 mode desirable
SW1(config-if)#no switchport
SW1(config-if)#int fast 0/24
SW1(config-if)#no switchport
SW1(config-if)#channel-group 1 mode desirable
And now when we run show etherchannel brief...

SW1#show etherchannel brief
                Channel-group listing:
                ----------------------
Group: 1
----------
Group state = L3
Ports: 2   Maxports = 8
Port-channels: 1   Max Port-channels = 1
Protocol:   -
... the L3 EC is verified by the line "Group state = L3".

You can perform load balancing over an Etherchannel - and have that load balancing determined by source and/or destination IP and/or MAC addresses - with the port-channel load-balance command:

SW1(config)#port-channel load-balance ?
  dst-ip       Dst IP Addr
  dst-mac      Dst Mac Addr
  src-dst-ip   Src XOR Dst IP Addr
  src-dst-mac  Src XOR Dst Mac Addr
  src-ip       Src IP Addr
  src-mac      Src Mac Addr
You won't see "XOR" in all IOS versions for this command - sometimes you'll see "and". That load balances on source *and* destination IP or MAC address. Verify with show etherchannel load-balance.
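Here's a quick sketch of putting one of those methods into action. Keep in mind that this is a global command, so it applies to every EC on the switch, and the exact wording of the verification output varies a bit from one platform and IOS version to another - on the switches I've used, it looks something like this:

SW1(config)#port-channel load-balance src-dst-ip
SW1(config)#exit
SW1#show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-ip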
Troubleshooting EtherChannels

Once you get an EC up and running, it generally stays that way - unless a port setting changes. From personal experience, here are a few things to watch out for:

Changing the VLAN assignment mode to dynamic. Ports configured for dynamic VLAN assignment from a VMPS cannot remain or become part of an EC.

The allowed range of VLANs for the EC must match that of the ports. Here's a reenactment of an EC issue I ran into once. The configuration of the channel-group looked just fine...

interface FastEthernet0/11
 switchport trunk allowed vlan 10,20
 no ip address
 channel-group 1 mode on
!
interface FastEthernet0/12
 switchport trunk allowed vlan 100,200
 no ip address
 channel-group 1 mode on
... but notice that the allowed VLANs on these two ports are different. That will prevent an EC from working correctly. Here's the error message that occurs in a scenario like this:

02:46:10: %EC-5-CANNOT_BUNDLE2: Fa0/12 is not compatible with Fa0/11 and will be suspended (vlan mask is different)
Interestingly enough, port fast 0/12 is not going to go into err-disabled mode; instead, you see this:

SW1#show int fast 0/12
FastEthernet0/12 is up, line protocol is down (notconnect)
When I remove the original command, I get the EC error message again, but once I change port 0/12's config to match 0/11's, the EC forms.

SW1(config)#int fast 0/12
SW1(config-if)#no switchport trunk allowed vlan 100,200
02:51:15: %EC-5-CANNOT_BUNDLE2: Fa0/12 is not compatible with Fa0/11 and will be suspended (vlan mask is different)
SW1(config-if)#switchport trunk allowed vlan 10,20
02:51:25: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/12, changed state to up
show interface trunk and show interface port-channel1 verify that the trunk and the EC are both up.

SW1#show int trunk

Port        Mode         Encapsulation  Status        Native vlan
Po1         desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Po1         10,20

Port        Vlans allowed and active in management domain
Po1         none

Port        Vlans in spanning tree forwarding state and not pruned
Po1         none

SW1#show int port-channel1
Port-channel1 is up, line protocol is up (connected)
Changing a port attribute. Ports need to be running the same speed, duplex, native VLAN, and just about any other value you can think of! If you change a port setting and the EC comes down, you know what to do - change the port setting back!

SPAN. Ports in an etherchannel can be source SPAN ports, but not destination SPAN ports.

The IP address. Naturally, this is for L3 etherchannels only - be sure to assign the IP address to the logical representation of the etherchannel, the port-channel interface, not to the physical interfaces bundled in the etherchannel.
Verifying And Troubleshooting Etherchannels

To take a quick look at the ECs running on a switch, run show etherchannel summary. In the following example, we can see that the EC serving as Port-Channel 1 is a Layer 2 EC as indicated by the code "S", and is in use as indicated by the code "U". You can also see the ports in the channel.

SW1#show etherchannel summary
Flags:  D - down        P - in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        u - unsuitable for bundling
        U - in use      f - failed to allocate aggregator
        d - default port

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------
1      Po1(SU)                   Fa0/11(Pd)  Fa0/12(P)
If it's real detail you want, use show etherchannel x detail.

SW1#show etherchannel 1 detail
Group state = L2
Ports: 2   Maxports = 8
Port-channels: 1  Max Port-channels = 1
Protocol:   PAgP

Ports in the group:
-------------------
Port: Fa0/11
------------

Port state    = Up Mstr In-Bndl
Channel group = 1     Mode = On/FEC    Gcchange = -
Port-channel  = Po1   GC   = 0         Pseudo port-channel = Po1
Port index    = 0     Load = 0x00      Protocol = -

Age of the port in the current state: 00d:00h:26m:49s

Port: Fa0/12
------------

Port state    = Up Mstr In-Bndl
Channel group = 1     Mode = On/FEC    Gcchange = -
Port-channel  = Po1   GC   = 0         Pseudo port-channel = Po1
Port index    = 0     Load = 0x00      Protocol = -

Age of the port in the current state: 00d:00h:21m:29s

Port-channels in the group:
---------------------------
Port-channel: Po1
-----------------

Age of the Port-channel = 00d:00h:26m:52s
Logical slot/port = 1/0      Number of ports = 2
GC = 0x00000000              HotStandBy port = null
Port state = Port-channel Ag-Inuse
Protocol = -

Ports in the Port-channel:

Index   Load   Port     EC state     No of bits
------+------+--------+------------+-----------
  0     00     Fa0/11   On/FEC          0
  0     00     Fa0/12   On/FEC          0

Time since last port bundled:    00d:00h:21m:32s    Fa0/12
Time since last port Un-bundled: 00d:00h:21m:32s    Fa0/12

SW1#
And now... an alternative to STP! Flex Links In the very rare case that you don't want to run STP, configuring Flex Links allows you to have a backup link that will be up and running in less than 50 milliseconds. It's doubtful you'd use Flex Links in a typical production network, but it can come in handy in service provider networks where running STP might not be feasible. In the following example, port 0/11 is connected to SW2 and 0/12 to SW3. Using Flex Links, only the active port will be carrying data - in this case, that's 0/11.
All the Flex Links configuration goes on just one port of the pair - you configure the active port and name its backup with the switchport backup interface command. With 0/11 as the active port and 0/12 as its backup:

SW1(config)#int fast 0/11
SW1(config-if)#switchport backup interface fast 0/12
Verify with show interface switchport backup. Plenty o' rules for this feature:

Enabling Flex Links for a set of ports disables STP on those ports.

One backup interface per active interface, please, and an interface can belong to only one set of Flex Links.

Obviously, the active and backup interfaces cannot be one and the same.

Ports inside an Etherchannel cannot be individually assigned as backup or active Flex Ports, BUT an entire Etherchannel can be assigned as either.
An Etherchannel can be assigned as the active port for a physical port, and vice versa. This isn't required reading, but here's some additional info from Cisco regarding this feature: http://bit.ly/buHIUt As I mentioned, this isn't something you configure every day, but it does come in handy if you can't run STP for some reason.
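Since that last rule tends to raise eyebrows, here's a quick sketch of pairing an entire Etherchannel with a physical port as a Flex Links pair. This assumes port-channel 1 already exists and that fast 0/12 is free to serve as the other half of the pair - check your platform's documentation before trying it in production:

SW1(config)#int port-channel 1
SW1(config-if)#switchport backup interface fast 0/12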
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
Securing The Switches Passwords Port Security Dot1x Port-Based Authentication SPAN Basics Local SPAN Remote SPAN SPAN Limitations VLAN Access Control Lists Private VLANs DHCP Snooping Dynamic ARP Inspection IP Source Guard MAC Address Flooding Dealing With Your Clients In The Real World
Bulldogs, I've included a bonus section on AAA in this book - please read that along with this section. Thanks! -- Chris B.

When we think of network security, we tend to focus on protecting our network from attacks originating outside the network. That's half the battle - but it's important to remember that many successful network attacks are launched from the inside, and from some seemingly innocent sources, such as...

DHCP
ARP
Rogue switches (Switches acting as part of our network, but under the administrative control of a potential network attacker)
CDP
Telnet
Unauthorized MAC addresses
Hosts on the same VLAN (!)

So while it's wise to protect our network from the outside, we better take some measures to protect us from... the enemy within. (Cue dramatic music.)

Seriously, we've got some important work to do here - so let's get to it. The first methods of security I'm going to talk about in this chapter aren't fancy, they aren't exciting, and they don't cost an arm and a leg. But the basic security features are the ones to start with, and I use a four-step approach to basic network security:

1. Physical security - lock those servers, routers, and switches up! This is the most basic form of network security, and it's also the most ignored.

2. Passwords - set 'em, change 'em on occasion. If you're relatively new to a particular job site, be ready for a fight on this point from other admins.
3. Different privilege levels - not every user needs the same level of access to potentially destructive commands, because not every user can handle the responsibility.

4. Grant remote access only to those who absolutely, positively need it -- and when users do connect remotely, make that communication as secure as possible.

Physical security is just that. Get the routers and switches locked up! Steps two and three go hand in hand, and much of what follows may be familiar to you. Don't skip this part, though, because we're going to tie in privilege levels when it comes to telnet access. You know how to configure the basic passwords on a switch:

SW2(config)#enable password ccna
SW2(config)#enable secret ccnp
SW2(config)#line con 0
SW2(config-line)#login
% Login disabled on line 0, until 'password' is set
SW2(config-line)#password ccie
SW2(config)#line vty 0 15
SW2(config-line)#password cisco
SW2(config-line)#login
Here's a quick refresher on some basic Cisco password rules and messages...

All passwords appear in the configuration in clear text by default except the enable secret. The command service password-encryption will encrypt the remaining passwords.

The login message shown when the login command is used in the above example simply means that a password needs to be set to enable this feature. As long as you enter both the login and password commands, it does not matter in what order you enter them.

Cisco switches have more VTY lines than routers. Routers allow up to five simultaneous Telnet sessions by default, and obviously switches allow more! The default behavior is the same, however. Any user who telnets in to the switch will be placed into user exec mode, and will then be prompted for the proper enable mode password.
If neither the enable secret nor the enable password has been set, the user will not be able to enter enable mode. To place users coming into the switch via telnet straight into enable mode, use the command privilege level 15 under the VTY lines.

SW2(config-line)#privilege level 15
Note below how the configuration appears on the switch when it comes to the VTY lines. If you want a command to be applied to all 16 lines, you don't have to use "line vty 0 4" and then "line vty 5 15" - just run the command line vty 0 15.

line vty 0 4
 privilege level 15
 password cisco
 login
line vty 5 15
 privilege level 15
 password cisco
 login
The possible issue here is that any user who telnets in will be placed into enable mode. It's easy to configure, but maybe we don't want to give that high level of access so easily. Consider a situation where a tech support person has to telnet into a router. Maybe they know what they're doing, and with all due respect, maybe they don't. Do you want this person making changes to the router without you knowing about it? It may be better to assign privilege level 15 to yourself while leaving others at the default level.

I also don't like having one password for all telnet users. I prefer a scheme where each individual user has their own password. Creating a local database of users and privilege levels allows us to do this, and it's a simple procedure. As a matter of fact, you already did this at least once during your CCNA studies. All you have to do is create a username / password database the same way you create one for PPP authentication.

SW2(config)#username CBRYANT privilege 15 password CCIE
SW2(config)#username WMCDANIEL password CCNP
SW2(config)#username BMULLIGAN password CCNA
SW2(config)#line vty 0 15
SW2(config-line)#login local
The username / password command allows the assignment of privilege levels. If none is specified, level 1 is the default. With the above configuration, the first user would be placed into privileged exec mode when connecting via telnet, while the other two users would be required to enter the enable password before they could enter that mode.

The login local command is required to have the switch look to a local database for authentication information. If a user doesn't know their username/password combination, they can't telnet into this switch.

Port Security

Here's another basic security feature that's regularly overlooked, but is very powerful. Port security uses a host's MAC address as a password...
... and if the port receiving this frame is running port security and expects frames with that particular MAC address only, frames from this host would be accepted. However, if a device with a different MAC address sends frames to the switch on that port, the port will take action - by default, it will shut down and go into error-disabled state. By default, that state requires manual intervention on the part of the network admin to reopen the port. The switchport port-security command enables this feature, and then we have quite a few options to consider.

SW2(config)#int fast 0/5
SW2(config-if)#switchport port-security
Command rejected: Fa0/5 is not an access port.
SW2(config-if)#switchport mode access
SW2(config-if)#switchport access vlan 10
Before we can consider the options, we have to make the port in question
a non-trunking port. Port security can't be configured on a port that even has a possibility of becoming a trunk port. Configuring a port as an access port is the equivalent of turning trunking to "off". Now, let's get back to those options!

SW2(config-if)#switchport port-security ?
  aging        Port-security aging commands
  mac-address  Secure mac address
  maximum      Max secure addresses
  violation    Security violation mode
The first option to consider is the maximum value. This is the maximum number of secure MAC addresses allowed on the port. This number can vary - I've seen Cisco switches that would allow up to 1024, but this 2950 will only allow 132. These addresses can be configured statically with the mac-address option, they can be learned dynamically, or you can allow a port to do both. (More on that in just a moment.)

SW2(config-if)#switchport port-security maximum ?
  <1-132>  Maximum addresses

SW2(config-if)#switchport port-security mac-address ?
  H.H.H  48 bit mac address
Now we need to decide the action the port should take when frames with a non-secure MAC address arrive on the port. The default port security violation mode is shutdown, and it's just what it sounds like - the port is placed into error-disabled state and manual intervention is needed to reopen the port. An SNMP trap message is also generated. (You can also use the errdisable recovery command to specify how long the port should remain in that state before the switch itself resets the port.)

SW2(config-if)#switchport port-security violation ?
  protect   Security violation protect mode
  restrict  Security violation restrict mode
  shutdown  Security violation shutdown mode
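Here's a quick sketch of that errdisable recovery option. These are global commands; the interval is in seconds, and the exact range and default vary by platform, so check IOS Help on your own switch:

SW1(config)#errdisable recovery cause psecure-violation
SW1(config)#errdisable recovery interval 300

Verify with show errdisable recovery, which lists each error cause and whether automatic recovery is enabled for it.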
Protect mode simply drops the offending frames. Restrict mode is our middle ground - this mode drops the offending frames and will generate both an SNMP trap notification and syslog message regarding the violation, but the port does not go into err-disabled state.
Before we continue, a note of caution - throughout this course, you'll see ports shut down for one reason or another, particularly in the Advanced STP section. Note that not all of these features force the port into err-disabled mode. Be sure you're very familiar with the different states these ports are put into. (I'll have a chart at the end of that section listing each port state.)

Let's take a look at the console messages you'll see when running port security in its default mode, shutdown. I configured a port on this switch with port security, one secure MAC address, and made sure it didn't match the host that would be sending frames on that port. Sure enough, within seconds all of this happened:

SW1(config-if)#
05:06:04: %PM-4-ERR_DISABLE: psecure-violation error detected on Fa0/7, putting Fa0/7 in err-disable state
05:06:04: %PORT_SECURITY-2-PSECURE_VIOLATION: Security violation occurred, caused by MAC address 000f.f773.ed20 on port FastEthernet0/7.
05:06:05: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/7, changed state to down
05:06:06: %LINK-3-UPDOWN: Interface FastEthernet0/7, changed state to down
show interface verifies that this interface is in error-disabled state.

SW1#show int fast 0/7
FastEthernet0/7 is down, line protocol is down (err-disabled)
That port must now manually be reopened - of course, after resolving the security issue that brought it down in the first place!

There is a little "gotcha" with port security that you need to be aware of. You can specify the number of secure MAC addresses, and you can specify secure MAC addresses as well. What if you allow for more secure MAC addresses than you actually configure manually, as shown below?

SW1(config-if)#switchport port-security
SW1(config-if)#switchport port-security maximum 3
SW1(config-if)#switchport port-security mac-address aaaa.aaaa.aaaa
SW1(config-if)#switchport port-security mac-address cccc.cccc.cccc
In this situation, the remaining secure MAC address will be dynamically learned - so if a rogue host with the MAC address dddd.dddd.dddd connected to that port right now, port security would allow it. Be careful! In that configuration, these three addresses would be considered secure:
aa-aa-aa-aa-aa-aa
cc-cc-cc-cc-cc-cc
The next dynamically learned MAC address

There is no penalty for hitting the limit of secure addresses - it just means the switch can't learn any more secure addresses. To verify your port security configuration, run show port-security interface.

SW1#show port-security interface fast 0/2
Port Security              : Enabled
Port Status                : Secure-up
Violation Mode             : Shutdown
Aging Time                 : 0 mins
Aging Type                 : Absolute
SecureStatic Address Aging : Disabled
Maximum MAC Addresses      : 3
Total MAC Addresses        : 2
Configured MAC Addresses   : 2
Sticky MAC Addresses       : 0
Last Source Address:Vlan   : 0000.0000.0000:0
Security Violation Count   : 0
The violation mode here is the default, shutdown. In this scenario, the port will be shut down if the number of secure MAC addresses is reached and a host whose MAC address is not among those secure addresses connects to this port.

Note that "aging time" is set to zero - that actually means that secure MAC addresses on this port will never age out, not that they have zero minutes before aging out. You can change this value with the switchport port-security aging command. This particular switch accepts the value set in minutes; many older models want this entered in seconds. Always use IOS Help to double-check a command's unit of measure!

SW1(config-if)#switchport port-security aging time ?
  <1-1440>  Aging time in minutes. Enter a value between 1 and 1440
The aging type value determines whether a secure MAC address will absolutely expire after a certain amount of time, or whether aging should be based on inactivity... as IOS Help shows us!

SW1(config-if)#switchport port-security aging type ?
  absolute    Absolute aging (default)
  inactivity  Aging based on inactivity time period
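Putting the two aging options together, here's a sketch - on this particular platform the time is in minutes, so this would age out a secure address after 10 minutes of inactivity:

SW1(config-if)#switchport port-security aging time 10
SW1(config-if)#switchport port-security aging type inactivity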
Port security is a great feature, but you can't run it on all ports. There are a few port types that you can't configure with port security:
trunk ports
ports placed in an Etherchannel
destination SPAN ports
802.1x ports
Why Make Addresses "Sticky"?

We know a MAC address can be dynamically learned by the switch as secure, and we may want that address marked as secure in the running configuration. To do so, enable sticky learning with this command:

switchport port-security mac-address sticky

Why use sticky addresses? Along with MAC address flooding, network intruders can use a spoofed MAC address to gain access to the network. By configuring sticky learning, the dynamically learned secure MAC addresses are written to the running configuration, which in turn helps to prevent unauthorized network access via MAC spoofing.

Note: To save these dynamically learned sticky addresses to the startup configuration, you'll need to copy the running config over the startup config before reloading the switch.

Dot1x Port-Based Authentication

Port security is good, but we can take it a step further with dot1x port-based authentication. The name refers to IEEE 802.1x, the standard upon which this feature is based. Unusually enough, the Cisco authentication server must be RADIUS - you can't use TACACS or TACACS+.

One major difference between dot1x port-based authentication and port security is that both the host and switch port must be configured for 802.1x EAPOL (Extensible Authentication Protocol over LANs). That's a major departure from many of the switch features we've studied to date, since most other switch features don't require anything of the host. Usually the PC isn't aware of what the switch is doing, and doesn't need to know. Not this time!
Keeping those rules in mind, a typical dot1x deployment involves:

A dot1x-enabled PC, the supplicant
A dot1x-enabled switch, the authenticator
A RADIUS server, the authentication server (You cannot use a TACACS+ server for this purpose.)

But it's not quite as simple as that. (You were waiting for that, right?) The PC has a single physical port connected to the switch, but that physical port is logically divided into two ports by dot1x - the controlled and uncontrolled ports. Unlike the subinterfaces you've studied and created to date, you and I as the network admins do not have to configure the controlled and uncontrolled ports. Dot1x will take care of that - of course, as long as we remember to configure the supplicant for dot1x to begin with!

The controlled port cannot transmit data until authentication actually takes place. The uncontrolled port can transmit without authentication, but only the following protocols can be transmitted:

Extensible Authentication Protocol over LANs (EAPOL)
Spanning Tree Protocol (STP)
Cisco Discovery Protocol (CDP)

By default, once the user authenticates, all traffic can be received and transmitted through this port. To configure dot1x, AAA must first be enabled. As with previous configurations, a method list must be created. And again, as with previous configurations, you should use line as the last choice, just in case something happens regarding your login with the other methods.
SW2(config)#aaa new-model
SW2(config)#aaa authentication dot1x ?
  WORD     Named authentication list.
  default  The default authentication list.
SW2(config)#aaa authentication dot1x default ?
  enable      Use enable password for authentication.
  group       Use Server-group
  line        Use line password for authentication.
  local       Use local username authentication.
  local-case  Use case-sensitive local username authentication.
  none        NO authentication.
To enable dot1x on the switch:

SW2(config)#dot1x ?
  system-auth-control  Enable or Disable SysAuthControl

Dot1x must be configured globally, but every switch port that's going to run dot1x authentication must be configured as well.

SW2(config-if)#dot1x port-control ?
  auto                PortState will be set to AUTO
  force-authorized    PortState set to Authorized
  force-unauthorized  PortState will be set to UnAuthorized
Force-authorized, the default, does just what it sounds like - the port is forced into the authorized state, and no authentication is required of the host. Basically, there is no authentication on this port type. A port in force-unauthorized state literally has the port unable to authorize any client - even clients who could otherwise successfully authenticate!

The auto setting enables dot1x on the port, which will begin the process as unauthorized. Only the necessary EAPOL frames will be sent and received while the port's unauthorized. Once the authentication is complete, normal transmission and receiving can begin. Not surprisingly, this is the most common setting.
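To pull the pieces together, here's a sketch of a complete dot1x configuration. The RADIUS server address and key are made up for illustration, and the exact radius-server syntax varies a bit across IOS versions, so verify with IOS Help on your platform:

SW2(config)#aaa new-model
SW2(config)#radius-server host 10.1.1.50 key RADIUSKEY
SW2(config)#aaa authentication dot1x default group radius
SW2(config)#dot1x system-auth-control
SW2(config)#int fast 0/3
SW2(config-if)#switchport mode access
SW2(config-if)#dot1x port-control auto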
SPAN Operation And Configuration

We've secured the ports, but there will also come a time when we want to connect a network analyzer to a switch port. A common situation is illustrated below, where we want to analyze traffic sourced from the three PCs. To properly analyze the traffic, the network analyzer needs a copy of every frame the hosts are sending - but how are we going to get it there?
SPAN allows the switch to mirror the traffic from the source port(s) to the destination port to which the network analyzer is attached. (In some Cisco documentation, the destination port is referred to as the monitor port.)

SPAN works very well, and the basic operation is simple. Studying SPAN for exams and network usage can seem complicated at first, though, because there are several different versions of SPAN. The versions are much the same, though; the real difference comes in when you define the source ports. It's the location of the source ports that determines the SPAN version that needs to run on the switch.

In the above example, we're running Local SPAN, since the destination and source ports are all on the same switch. If the source was a VLAN rather than a collection of physical ports, VLAN-based SPAN (VSPAN) would be in effect.

The command monitor session starts a SPAN session, along with allowing the configuration of the source and destination. The sessions are totally separate operations, but the number of simultaneous sessions you can run differs from one switch platform to another. Cat 3550s and 2950s support only two, but more powerful switches can run as many as 64 sessions at once.

SW2(config)#monitor session ?
  SPAN session number
SW2(config)#monitor session 1 ?
  destination  SPAN destination interface or VLAN
  source       SPAN source interface, VLAN
SW2(config)#monitor session 1 source ?
  interface  SPAN source interface
  remote     SPAN source Remote
SW2(config)#monitor session 1 source interface ?
  FastEthernet  FastEthernet IEEE 802.3
  Port-channel  Ethernet Channel of interfaces
SW2(config)#monitor session 1 source interface fast 0/1 - 5 ?
  ,     Specify another range of interfaces
  -     Specify a range of interfaces
  both  Monitor received and transmitted traffic
  rx    Monitor received traffic only
  tx    Monitor transmitted traffic only
Here, ports fast 0/1 - 0/5 have been configured as the source. By default, traffic being received and transmitted will be mirrored, but this can be changed to received traffic only or transmitted traffic only as shown above. Using the same session number, the traffic will be mirrored to the destination port 0/10. Verify the SPAN configuration with show monitor.

SW2(config)#monitor session 1 destination interface fast 0/10

SW2#show monitor
Session 1
---------
Type              : Local Session
Source Ports      :
    Both          : Fa0/1-2
Destination Ports : Fa0/10
    Encapsulation : Native
          Ingress : Disabled
SPAN works fine if the source and destination ports are on the same switch, but realistically, that's not always going to happen. What if the traffic to be monitored is on one switch, but the only vacant port available is on another switch?
Remote SPAN (RSPAN) is the solution. Both switches will need to be configured for RSPAN, since the switch connected to the PCs will need to send mirrored frames across the trunk. A separate VLAN will be created that will carry only the mirrored frames. RSPAN configuration is simple, but there are some factors you need to consider when configuring RSPAN:
If there were intermediate switches between the two shown in the above example, they would all need to be RSPAN-capable.

VTP treats the RSPAN VLAN like any other VLAN. It will be propagated throughout the VTP domain if configured on a VTP server. Otherwise, it's got to be manually configured on every switch along the intermediate path.

VTP Pruning will also prune the RSPAN VLAN under the same circumstances that it would prune a "normal" VLAN.

MAC address learning is disabled for the RSPAN VLAN.

The source and destination must be defined on both the switch with the source port and the switch connected to the network analyzer, but the commands are not the same on each.
After all that, the configuration is simple! Create the VLAN first, and identify it as the RSPAN VLAN with the remote-span command.

SW2(config)#vlan 30
SW2(config-vlan)#remote-span
SW2 is the source switch, and the traffic from ports 0/1 - 0/5 will be monitored and frames mirrored to SW1 via RSPAN VLAN 30.

SW2(config)#monitor session 1 source interface fast 0/1 - 5
SW2(config)#monitor session 1 destination remote ?
  vlan  Remote SPAN destination RSPAN VLAN
SW2(config)#monitor session 1 destination remote vlan 30
% Incomplete command.
SW2(config)#monitor session 1 destination remote vlan 30 ?
  reflector-port  Remote SPAN reflector port
As you see, naming the RSPAN VLAN here doesn't finish the job. We now have to define the reflector port, the port that will be copying the SPAN traffic onto the VLAN.

SW2(config)#monitor session 1 desti remote vlan 30 reflector-port fast 0/12
SW1 will receive the mirrored traffic and will send it to a network analyzer on port 0/10.

SW1(config)#monitor session 1 source remote vlan 30
SW1(config)#monitor session 1 destination interface fast 0/10
Run show monitor to verify the configuration.

SW1#show monitor
Session 1
---------
Type              : Remote Destination Session
Source RSPAN VLAN : 30
Destination Ports : Fa0/10
    Encapsulation : Native
          Ingress : Disabled
SPAN Limitations

As I mentioned, SPAN is easy to configure, but it does have a few limitations on what ports can be made source or destination ports:

Source port notes:
A source port can be monitored in multiple, simultaneous SPAN sessions.
A source port can be part of an Etherchannel.
A source port cannot be configured as a destination port.
A source port can be any port type - Ethernet, FastEthernet, etc.
Destination port notes:
A destination port can be any port type.
A destination port can participate in only one SPAN session.
A destination port cannot be a source port.
A destination port cannot be part of an Etherchannel.
A destination port doesn't participate in STP, CDP, VTP, PAgP, LACP, or DTP.
Trunk ports can be configured as source and/or destination SPAN ports; the default behavior will result in the monitoring of all active VLANs on the trunk.

I strongly recommend that you find the SPAN documentation for your switch models before configuring them. SPAN operation is simple, but the command options do change.

Finally, you may see the term "ESPAN" in some SPAN documentation. This is Enhanced SPAN, and some of Cisco's documentation mentions that this term has been used so often to describe different additions that the term has lost meaning. You'll still see it occasionally, but it doesn't refer to any specific addition or change to SPAN.

Filtering Intra-VLAN Traffic

At this point in your Cisco studies, you're very familiar with access lists and their many, many, many uses! Access lists do have their limitations, though. While an ACL can filter traffic traveling between VLANs, it can't do anything about traffic from one host in a VLAN to another host in the same VLAN.

Why not? It relates to how ACLs are applied on a multilayer switch. You know that the CAM (Content Addressable Memory) table holds the MAC addresses that the switch has learned, but the TCAM - Ternary Content Addressable Memory - cuts down on the number of lookups required to compare a packet against an ACL. This filtering of packets by the switch hardware speeds up the process, but it also limits ACL capability. An ACL can be used to filter inter-VLAN traffic, but not intra-VLAN traffic. To filter traffic between hosts in the same VLAN, we've got to use a VLAN Access List (VACL).
Even though a VACL will do the actual filtering, an ACL has to be written as well. The ACL will be used as the match criterion within the VACL. For example, let's say we have the subnet 172.10.10.0 /24's addresses configured on hosts in VLAN 100. The hosts 172.10.10.1 - 3 are not to be allowed to communicate with any other hosts on the VLAN, including each other. An ACL will be written to identify these hosts.

SW2(config)#ip access-list extended NO_123_CONTACT
SW2(config-ext-nacl)#permit ip 172.10.10.0 0.0.0.3 172.10.10.0 0.0.0.255
Notice that even though the three source addresses named in the ACL are the ones that will not be allowed to communicate with other hosts in the VLAN, the ACL statement is permit, not deny. The deny part is coming!
Now the VLAN access-map will be written, with any traffic matching the ACL to be dropped and all other traffic to be forwarded. Note that the second access-map clause has no match clause, meaning that any traffic that isn't affected by clause 10 will be forwarded. That is the VACL equivalent of ending an ACL with "permit any". If you configure a VACL without such a final "action forward" clause, all traffic that does not match a specific clause in the VACL will be dropped.

SW2(config)# vlan access-map NO_123 10
SW2(config-access-map)# match ip address NO_123_CONTACT
SW2(config-access-map)# action drop
SW2(config-access-map)# vlan access-map NO_123 20
SW2(config-access-map)# action forward
Finally, we've got to apply the VACL. We're not applying it to a specific interface - instead, we apply the VACL in global configuration mode. The VLAN to be filtered is specified at the end of the command with the vlan-list option.

SW2(config)# vlan filter NO_123 vlan-list 100
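Putting the three pieces together, here's the entire VACL configuration at a glance - this is just a consolidated sketch of the commands from this section, shown in running-config form:

```
! The ACL identifies the traffic - "permit" here means "match", not "allow"
ip access-list extended NO_123_CONTACT
 permit ip 172.10.10.0 0.0.0.3 172.10.10.0 0.0.0.255
!
! The VACL drops matched traffic; the empty clause 20 forwards everything else
vlan access-map NO_123 10
 match ip address NO_123_CONTACT
 action drop
vlan access-map NO_123 20
 action forward
!
! Applied globally to the VLAN, not to an interface
vlan filter NO_123 vlan-list 100
```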
Some additional notes and tips regarding VACLs:
Bridged traffic, as well as non-IP and non-IPX traffic, should be filtered with VACLs.
VACLs run from top to bottom, and run until a match occurs.
VACLs have an implicit deny at the end. The VACL equivalent of "permit all" is an "action forward" clause with no match criterion, as shown in the previous example. If traffic is not expressly forwarded, it's implicitly dropped!
Only one VACL can be applied to a VLAN.
The sequence numbers allow you to go back and add lines without rewriting the entire VACL. VACLs are still active while being edited.
A routing ACL can be applied to an SVI to filter inbound and/or outbound traffic just as you would apply one to a physical interface, but VACLs are not applied in that way - they're applied in global configuration mode.

On L3 switches, you may run into a situation where there's a VACL configured, and a "normal" ACL affecting incoming traffic is applied to a routed port that belongs to that same VLAN. In this case, packets entering that VLAN will be matched against the VACL first; if the traffic is allowed to proceed, it will then be matched against the inbound ACL on that port.

A Possible Side Effect Of Performing ACL Processing In Hardware

At the beginning of the VACL section, I mentioned that ACL processing in multilayer switches is performed in hardware. There will still be some traffic that is sent to the CPU for software processing, and that forwarding rate is much lower than the rate for the traffic forwarded by the switch hardware. If the hardware hits its storage limit for ACL configs, resulting in even more packets being sent to the CPU, switch performance can degrade. (I've seen that, and it's ugly. Avoid it.) Cisco's website lists two other factors that may result in too many packets being sent to the CPU, and they may surprise you:
Excessive logging Use of ICMP Unreachable messages
Use the log option with care. Logging must be performed by the switch software, not the hardware.
Private VLANs If you want to hide a host from the world - even going as far as hiding a host from other hosts in the same VLAN and subnet - private VLANs are the way to go. Using these is really getting away from it all. This concept can throw you a bit at first, since a private VLAN is truly unlike anything we've looked at with VLANs to date - and the terminology is different, too. So hang in there - it'll be second nature before you know it.
With private VLANs, we have...

three port types - one type that talks to everybody, one that talks to somebody, and one that talks to practically nobody
two kinds of private VLANs, primary and secondary
two kinds of secondary VLANs, community and isolated

Let's break this concept down, starting with the port types. Hosts that need to talk to everyone will be connected to promiscuous ports. This port type can communicate with any host connected to either of the other two port types. When you have a router or multilayer switch that serves as a default gateway, that device must be connected to a promiscuous port for the network to function correctly.

Hosts that just need to talk to some other devices are connected to community ports. These hosts can communicate with other community ports in the same private VLAN as well as any device connected to a promiscuous port.

Hosts that just don't want anything to do with almost anybody are connected to isolated ports. The only devices that these hosts can communicate with are those connected to promiscuous ports. Even if you have two isolated ports in the same private VLAN, those hosts will not be able to communicate with each other.

Those are the port types - now let's take a straightforward look at the private VLAN types. The "parent" private VLAN is the primary private VLAN. The "child" private VLAN is the secondary private VLAN. That's really it. In our configuration, we'll be mapping primary private VLANs to secondary private VLANs. A primary private VLAN can be mapped to multiple secondaries, but a secondary private VLAN can be mapped to only one primary. In turn, we have two secondary VLAN types.
Ports in a community private VLAN can communicate with other ports in the same community as well as promiscuous ports in the primary. Ports in an isolated private VLAN can only communicate with promiscuous ports in the parent primary VLAN. We're limited to one of these per parent primary VLAN, but since the ports in an isolated private VLAN can't intercommunicate, we only need one. Each of these concepts is illustrated in the following diagram:
Host A has been placed into an isolated private VLAN, and will be able to communicate only with the router, which is connected to a promiscuous port. If we placed another host in the same isolated private VLAN that Host A is in now, the two hosts could not communicate with each other. The other hosts are in a community private VLAN, so they can communicate with each other as well as the router. They can't communicate with Host A. In the following configuration, we'll use the following VLANs and VLAN types:
VLAN 100 as a secondary private VLAN (community); ports in this VLAN are fast 0/1 - 5.
VLAN 200 as a secondary private VLAN (isolated); ports in this VLAN are fast 0/6 - 10.
VLAN 300 as the primary private VLAN. The promiscuous port, fast 0/12, leads to a router.

Creating the first VLAN with VLAN config mode is no problem, but look what happens when we try to make it a community private VLAN - or any kind of private VLAN, for that matter....

MLS(config)#vlan 100
MLS(config-vlan)#private-vlan ?
  association       Configure association between private VLANs
  community         Configure the VLAN as a community private VLAN
  isolated          Configure the VLAN as an isolated private VLAN
  primary           Configure the VLAN as a primary private VLAN
  twoway-community  Configure the VLAN as a two way community private VLAN

MLS(config-vlan)#private-vlan community
Private VLANs can only be configured when VTP is in transparent mode
Please note that private VLANs can only be configured when VTP is in transparent mode. (Yeah, I know, like it says right there.) Once we do that, configuring VLAN 100 as a community private VLAN is no problem.

MLS(config-vlan)#exit
MLS(config)#vtp mode transparent
Setting device to VTP TRANSPARENT mode.
MLS(config)#vlan 100
MLS(config-vlan)#private-vlan community
MLS(config)#vlan 200
MLS(config-vlan)#private-vlan isolated
Now we'll configure VLAN 300 as the primary private VLAN, and then associate those two secondary private VLANs with this primary private VLAN. (This association is not the mapping I mentioned earlier.)

MLS(config)#vlan 300
MLS(config-vlan)#private-vlan primary
MLS(config-vlan)#private-vlan association ?
  WORD    VLAN IDs of the private VLANs to be configured
  add     Add a VLAN to private VLAN list
  remove  Remove a VLAN from private VLAN list
MLS(config-vlan)#private-vlan association 100,200
So at this point, we've...

Configured VTP to run in transparent mode
Created our secondary private VLANs, both isolated and community
Created our primary private VLAN
Created an association between the secondary and primary private VLANs

Just one more little thing to do... place the ports into the proper VLAN and get that mapping done! (Okay, that's two things.) The switch port leading to the router is fast 0/12; that port must be made promiscuous.

SW1(config)#int fast 0/12
SW1(config-if)#switchport mode ?
  access        Set trunking mode to ACCESS unconditionally
  dot1q-tunnel  set trunking mode to TUNNEL unconditionally
  dynamic       Set trunking mode to dynamically negotiate access or trunk mode
  private-vlan  Set private-vlan mode
  trunk         Set trunking mode to TRUNK unconditionally

SW1(config-if)#switchport mode private-vlan ?
  host         Set the mode to private-vlan host
  promiscuous  Set the mode to private-vlan promiscuous

SW1(config-if)#switchport mode private-vlan promiscuous
We'll also need the primary vlan mapping command on that interface:

SW1(config-if)#switchport private-vlan ?
  association       Set the private VLAN association
  host-association  Set the private VLAN host association
  mapping           Set the private VLAN promiscuous mapping

SW1(config-if)#switchport private-vlan mapping ?
  Primary extended range VLAN ID of the promiscuous port mapping
  Primary normal range VLAN ID of the promiscuous port mapping

SW1(config-if)#switchport private-vlan mapping 300 ?
  WORD    Secondary VLAN IDs of the private VLAN promiscuous port mapping
  add     Add a VLAN to private VLAN list
  remove  Remove a VLAN from private VLAN list

SW1(config-if)#switchport private-vlan mapping 300 100,200
Ports fast 0/1 - 5 are in VLAN 100. We'll use the interface range command to configure that port range all at once with the private-vlan host and private-vlan host-association commands.

SW1(config)#interface range fast 0/1 - 5
SW1(config-if-range)#switchport mode private-vlan ?
  host         Set the mode to private-vlan host
  promiscuous  Set the mode to private-vlan promiscuous

SW1(config-if-range)#switchport mode private-vlan host
SW1(config-if-range)#switchport private-vlan ?
  association       Set the private VLAN association
  host-association  Set the private VLAN host association
  mapping           Set the private VLAN promiscuous mapping

SW1(config-if-range)#switchport private-vlan host-association ?
  Primary extended range VLAN ID of the private VLAN host port association
  Primary normal range VLAN ID of the private VLAN port association

SW1(config-if-range)#switchport private-vlan host-association 300 100
On ports fast 0/6 - 10 (in VLAN 200), it's about the same story, except the host-association command will end with 200 rather than 100.

SW1(config)#int range fast 0/6 - 10
SW1(config-if-range)#switchport mode private-vlan host
SW1(config-if-range)#switchport private-vlan host-association 300 200
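Before we move on to verification, here's the complete private VLAN configuration from this section in one running-config sketch - a recap of the commands we've already entered, not anything new:

```
vtp mode transparent
!
vlan 100
 private-vlan community
vlan 200
 private-vlan isolated
vlan 300
 private-vlan primary
 private-vlan association 100,200
!
! The router-facing port is promiscuous and maps primary 300 to both secondaries
interface fastethernet 0/12
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 300 100,200
!
! Host ports run in private-vlan host mode, each associated with
! primary 300 and its own secondary VLAN
interface range fastethernet 0/1 - 5
 switchport mode private-vlan host
 switchport private-vlan host-association 300 100
interface range fastethernet 0/6 - 10
 switchport mode private-vlan host
 switchport private-vlan host-association 300 200
```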
You can verify all of your private VLANs with show vlan private-vlan (really!) and on an interface level with the command show interface switchport (output truncated):

SW1#show int fast 0/6 switchport
Name: Fa0/6
Switchport: Enabled
Administrative Mode: private-vlan host
Operational Mode: down
Administrative Trunking Encapsulation: negotiate
Negotiation of Trunking: Off
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Administrative Native VLAN tagging: enabled
Voice VLAN: none
Administrative private-vlan host-association: 300 (Inactive) 200 (Inactive)
Administrative private-vlan mapping: none
Administrative private-vlan trunk native VLAN: none
Administrative private-vlan trunk Native VLAN tagging: enabled
Administrative private-vlan trunk encapsulation: dot1q
Note the trunk options at the end of that output. You can change those and other trunk-related values with the switchport private-vlan trunk commands.

SW1(config-if)#switchport private-vlan trunk native vlan 15
SW1(config-if)#switchport private-vlan trunk allowed vlan 100,200,300
DHCP Snooping

It may be hard to believe, but something as innocent as DHCP can be used for network attacks. The potential for trouble starts when a host sends out a DHCPDiscovery packet: the host then listens for DHCPOffer packets and, as we know, it will accept the first Offer it gets!
Part of that DHCPOffer is the address to which the host should set its default gateway. In this network, there's no problem, because there's only one DHCP Server. The host will receive the DHCPOffer and set its default gateway accordingly. What if a DHCP server that does not belong on our network - a rogue DHCP server - is placed on that subnet?
Now we've got a real problem, because that host is going to use the information in the first DHCPOffer packet it receives - and if the host uses the Offer from the rogue DHCP server, the host will actually set its default gateway to the rogue server's IP address! The rogue server could also have the host set its DNS server address to the rogue server's address as well. This opens the host and the network to several nasty kinds of attacks. DHCP Snooping allows the switch to serve as a firewall between hosts and untrusted DHCP servers. DHCP Snooping classifies interfaces on the switch into one of two categories - trusted and untrusted. DHCP messages received on trusted interfaces will be allowed to pass through the switch. Not only will DHCP messages received on untrusted interfaces be dropped by the switch, the interface itself will be placed into err-disabled state.
Now, you're probably asking "How does the switch determine which ports are trusted and which ports are untrusted?" By default, the switch considers all ports untrusted - which means we better remember to configure the switch to trust some ports when we enable DHCP Snooping! First, we need to enable DHCP Snooping on the entire switch:

SW1(config)#ip dhcp snooping
You must then identify the VLANs that will be using DHCP Snooping. Let's use IOS Help to look at the other options available.

SW1(config)#ip dhcp snooping ?
  database     DHCP snooping database agent
  information  DHCP Snooping information
  verify       DHCP snooping verify
  vlan         DHCP Snooping vlan

SW1(config)#ip dhcp snooping vlan ?
  WORD  DHCP Snooping vlan first number or vlan range, example: 1,3-5,7,9-11
Note that you can use commas and dashes to define a range of VLANs for DHCP Snooping. We'll create three VLANs on this switch and then enable DHCP Snooping only for VLAN 4. SW1(config)#int fast 0/2 SW1(config-if)#switchport mode access SW1(config-if)#switchport access vlan 2 % Access VLAN does not exist. Creating vlan 2
SW1(config-if)#int fast 0/3 SW1(config-if)#switchport mode access SW1(config-if)#switchport access vlan 3 % Access VLAN does not exist. Creating vlan 3
SW1(config-if)#int fast 0/4 SW1(config-if)#switchport mode access SW1(config-if)#switchport access vlan 4 % Access VLAN does not exist. Creating vlan 4 SW1(config)#ip dhcp snooping vlan 4
Assuming we have a trusted DHCP server off port fast 0/10, we would then trust that port with the following interface-level command:

SW1(config)#int fast 0/10
SW1(config-if)#ip dhcp snooping trust
From your previous studies, you're familiar with the DHCP Relay Agent Information option. Usually referred to as Option 82 (we still don't know what happened to the first 81 options), this option can be disabled or enabled with the following command:

SW1(config)#ip dhcp snooping information option
DHCP Snooping is verified with the show ip dhcp snooping command.

SW1#show ip dhcp snooping
Switch DHCP snooping is enabled
DHCP snooping is configured on following VLANs:
4
Insertion of option 82 is enabled
   circuit-id format: vlan-mod-port
   remote-id format: MAC
Option 82 on untrusted port is not allowed
Verification of hwaddr field is enabled

Interface               Trusted    Rate limit (pps)
---------------------   -------    ----------------
FastEthernet0/10        yes        unlimited
The key information here, from top to bottom:
DHCP Snooping is enabled on the switch VLAN 4 is the only VLAN using DHCP Snooping Option 82 is enabled, but not allowed on untrusted ports The only trusted port is fast 0/10
Note the "rate limit" for the trusted port fast 0/10 is set to "unlimited". That rate limit refers to the number of DHCP packets the interface can accept in one second (packets per second).
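If you want to cap the rate of DHCP packets a port will accept, the interface-level ip dhcp snooping limit rate command sets that packets-per-second value. The value 50 below is just for illustration - check your platform's documentation for sensible limits on your hardware.

```
SW1(config)#int fast 0/2
SW1(config-if)#ip dhcp snooping limit rate 50
```

A port that exceeds its configured rate limit is err-disabled, so be conservative on ports that legitimately relay a lot of DHCP traffic.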
Dynamic ARP Inspection Just as we must protect against rogue DHCP servers, we have to be wary of rogue ARP users as well. From your CCNA studies, you know all about Address Resolution Protocol and how it operates. A rogue device can overhear part of the ARP process in action and make itself look like a legitimate part of the network. This happens through ARP Cache Poisoning. (This is also known as ARP Spoofing - be aware of both names for your exam.) ARP Cache Poisoning starts innocently enough - in this case, through the basic ARP process on a switch.
Host A is sending an ARP Request, requesting the host with the IP address 172.12.12.2 to respond with its MAC Address. Host B will receive the request, but before responding, Host B will make an entry in its local ARP cache mapping the IP address 172.12.12.1 to the MAC
address aa-aa-aa-aa-aa-aa. Once Host A receives that ARP Reply, both hosts will have a MAC address - IP address mapping for the remote host.
The problem comes in if a rogue host responds to the original ARP Request with its own MAC address.
Now Host A will make an entry in its ARP cache mapping the IP address 172.12.12.2 to cc-cc-cc-cc-cc-cc. Meanwhile, the rogue host will acquire Host B's true MAC address via ARP, which leads to this process:
1. When Host A transmits data to the IP address 172.12.12.2 with a MAC address of cc-cc-cc-cc-cc-cc, the data is actually being received by the rogue host.

2. The rogue host will read the data and then possibly forward it to Host B, so neither Host A nor Host B immediately notices anything wrong.
The rogue host has effectively placed itself into the middle of the communication, leading to the term man in the middle for this kind of network attack. When the rogue host does the same for an ARP Request being sent from Host B to Host A, all communications between Host A and Host B will actually be going through the rogue host.

Enabling Dynamic ARP Inspection (DAI) prevents this behavior by building a database of trusted MAC-IP address mappings. This database is the same database that is built by the DHCP Snooping process, and static ARP configurations can be used by DAI as well.

DAI uses the concept of trusted and untrusted ports, just as DHCP Snooping does. However, untrusted ports in DAI do not automatically drop ARP Requests and Replies. Once the IP-MAC address database is built, every single ARP Request and ARP Reply received on an untrusted interface is examined. If the ARP message has an approved MAC-IP address mapping, the message is forwarded appropriately; if not, the ARP message is dropped. If the interface has been configured as trusted, DAI allows the ARP message to pass through without checking the database of trusted mappings. DAI is performed as ARP messages are received, not transmitted.

Since DAI uses entries in the DHCP Snooping database to do its job, DHCP Snooping must be enabled before beginning to configure DAI. After that, the first step in configuring DAI is to name the VLAN(s) that will be using DAI.

SW1(config)#ip arp inspection ?
  filter      Specify ARP acl to be applied
  log-buffer  Log Buffer Configuration
  validate    Validate addresses
  vlan        Enable/Disable ARP Inspection on vlans

SW1(config)#ip arp inspection vlan ?
  WORD  vlan range, example: 1,3-5,7,9-11
SW1(config)#ip arp inspection vlan 4
Just as with DHCP Snooping, you can specify a range of VLANs with hyphens and commas. Also just as with DHCP Snooping, all ports are considered untrusted until we tell the switch to trust them, and we do that with the ip arp inspection trust interface-level command.

SW1(config)#int fast 0/4
SW1(config-if)#ip arp inspection trust
You may have noticed a validate option in the ip arp inspection command above. You can use the validate option to go beyond DAI's default inspection. Let's use IOS Help to take a look at our choices:

SW1(config)#ip arp inspection validate ?
  dst-mac  Validate destination MAC address
  ip       Validate IP addresses
  src-mac  Validate source MAC address
You can actually specify validation of more than one of those addresses. Here's what happens with each:

"src-mac" compares the source MAC address in the Ethernet header and the MAC address of the source of the ARP message.
"dst-mac" compares the destination MAC address in the Ethernet header and the MAC destination address of the ARP message.
"ip" compares the IP address of the sender of the ARP Request against the destination address of the ARP Reply.

We'll use the "ip" option and then verify the configuration with show ip arp inspection.

SW1(config)#ip arp inspection validate ip

SW1#show ip arp inspection

Source Mac Validation      : Disabled
Destination Mac Validation : Disabled
IP Address Validation      : Enabled

 Vlan     Configuration    Operation   ACL Match          Static ACL
 ----     -------------    ---------   ---------          ----------
    4     Enabled          Active

 Vlan     ACL Logging      DHCP Logging
 ----     -----------      ------------
    4     Deny             Deny

 Vlan     Forwarded        Dropped     DHCP Drops     ACL Drops
 ----     ---------        -------     ----------     ---------
    4             0              0              0             0

 Vlan     DHCP Permits    ACL Permits   Source MAC Failures
 ----     ------------    -----------   -------------------
    4                0              0                     0

 Vlan     Dest MAC Failures   IP Validation Failures   Invalid Protocol Data
 ----     -----------------   ----------------------   ---------------------
    4                     0                        0                       0
That show command results in a great deal of output, but as you apply DAI in your network, you should run this command regularly to spot potential rogue hosts on your network. A large number of validation failures is one indicator of such a rogue! If you run DAI in your network, most likely you'll run it on all of your switches. Cisco's recommended trusted/untrusted port configuration is to have all ports connected to hosts run as untrusted and all ports connected to switches as trusted. Since DAI runs only on ingress ports, this configuration scheme ensures that every ARP packet is checked once, but no more than that. There is no problem with running DAI on trunk ports or ports bundled into an Etherchannel.
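To pull the DAI configuration together, here's the whole sequence from this section in one running-config sketch - remember that DHCP Snooping is a prerequisite:

```
! Prerequisite - DAI relies on the DHCP Snooping binding database
ip dhcp snooping
ip dhcp snooping vlan 4
!
! Enable DAI on the VLAN and add the optional IP address validation
ip arp inspection vlan 4
ip arp inspection validate ip
!
! Per Cisco's recommendation, trust switch-to-switch links;
! host-facing ports stay untrusted (the default)
interface fastethernet 0/4
 ip arp inspection trust
```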
IP Source Guard

We can use IP Source Guard to prevent a host on the network from using another host's IP address. IP Source Guard works in tandem with DHCP Snooping, and uses the DHCP Snooping database to carry out this operation. As with DAI, DHCP Snooping must be enabled before enabling IP Source Guard. When the host first comes online and connects to an untrusted port on the switch, the only traffic that can reach that host is DHCP traffic. When the client successfully acquires an IP address from the DHCP Server, the switch makes a note of this IP address assignment.

The switch will then dynamically create a VLAN ACL (VACL) that will allow only traffic with the corresponding source IP address to be processed by the switch. This IP address-to-port mapping process is called binding.
If the host attempts to spoof another host's IP address on that subnet -- 172.12.12.100, for example -- the switch will simply filter that traffic, because the source IP address will not match the database's entry for that port.
To enable IP Source Guard, use the ip verify source command on the appropriate interfaces once DHCP snooping has been enabled with ip dhcp snooping. You can specify a VLAN range as we have in the past with commas and dashes, or with a range as shown below, entering the first and last numbers in the range.

SW1(config)#ip dhcp snooping ?
  database     DHCP snooping database agent
  information  DHCP Snooping information
  verify       DHCP snooping verify
  vlan         DHCP Snooping vlan

SW1(config)#ip dhcp snooping vlan ?
  WORD  DHCP Snooping vlan first number or vlan range, example: 1,3-5,7,9-11

SW1(config)#ip dhcp snooping vlan 1 ?
  DHCP Snooping vlan last number

SW1(config)#ip dhcp snooping vlan 1 10
You do have the option of using both the IP and MAC source addresses as match criteria for incoming frames, or you can use the default of IP address only. To use the MAC source address in addition to the IP address, use the port-security option with the ip verify source command. Using that command with no options will use only the IP address in the matching process.

SW1(config)#int fast 0/2
SW1(config-if)#ip verify source ?
  port-security  port security
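A minimal IP Source Guard configuration, assuming DHCP Snooping is already running on the VLAN, would look like this sketch:

```
! Prerequisite - IP Source Guard filters against the DHCP Snooping bindings
ip dhcp snooping
ip dhcp snooping vlan 4
!
! IP-only source filtering on the untrusted host port
interface fastethernet 0/2
 ip verify source
!
! Or filter on both IP and MAC source addresses - note that the
! port-security option also requires port security on the interface
interface fastethernet 0/3
 switchport port-security
 ip verify source port-security
```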
MAC Address Flooding Attacks Since ARP, IP addresses, and DHCP all have potential security issues, we can't leave MAC addresses out - because network attackers sure won't do so! A MAC Address Flooding attack is an attempt by a network intruder to overwhelm the switch memory reserved for maintenance of the MAC address table. The intruder generates a large number of frames with different source MAC addresses - all of them invalid. As the switch's MAC address table capabilities are exhausted, valid entries cannot be made and this results in those valid frames being broadcast instead of unicast. This has three side effects, all unpleasant:
As mentioned, the MAC address table fills to capacity, preventing legitimate entries from being made.
The large number of unnecessary broadcasts quickly consumes bandwidth as well as overall switch resources.
The intruder can easily intercept packets with a packet sniffer, since the unnecessarily broadcast packets will be sent out every port on the switch - including the port the intruder is using.
You can combat MAC Address Flooding with two of the features we addressed earlier in this section - port-based authentication and port security. By making sure our host devices are indeed who we think they are, we reduce the potential for an intruder to unleash a MAC Address Flooding attack on our network. The key isn't to fight the intruder once they're in our network - the key is to keep them out in the first place.

VLAN Hopping

We've seen how intruders can use seemingly innocent ARP and DHCP processes to harm our network, so it shouldn't come as any surprise that Dot1q tagging can be used against us as well!

One form of VLAN Hopping is double tagging, so named because the intruder will transmit frames that are "double tagged" with two separate VLAN IDs. As you'll see in our example, certain circumstances must exist for a double tagging attack to be successful:
The intruder's host device must be attached to an access port.
The VLAN used by that access port must be the native VLAN.
The term "native VLAN" tips us off to the third requirement - dot1q must be the trunking protocol in use, since ISL doesn't use the native VLAN.
When the rogue host transmits a frame, that frame will have two tags. One will indicate native VLAN membership, and the second will be the number of the VLAN under attack. In this example, we'll assume that to be VLAN 100.
The trunk receiving this double-tagged frame will see the outer tag for VLAN 25 - the native VLAN in this example - and since that's the native VLAN, that tag will be removed and the frame transmitted across the trunk. But the inner tag for VLAN 100 is still there!
When the switch on the other side of the trunk gets that frame, it sees the tag for VLAN 100 and forwards the frame to ports in that VLAN. The rogue now has successfully fooled the switches and has hopped from one VLAN to another. VLAN Hopping seems innocent enough, but it's quite the opposite. VLAN Hopping has been used for network attacks ranging from Trojan horse virus propagation to stealing bank account numbers and passwords. That's why you often see the native VLAN of a network such as the one above set to a VLAN that no host on the network is a member of - that stops this version of VLAN Hopping right in its tracks.
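The usual defense is exactly what's just been described - move the native VLAN to an otherwise unused VLAN. Assuming VLAN 999 has no host members (an arbitrary VLAN number chosen for illustration):

```
! Assumes VLAN 999 exists but has no host members (illustrative value)
SW1(config)#int fast 0/12
SW1(config-if)#switchport trunk native vlan 999
```

Some platforms also let you tag frames on the native VLAN with the global vlan dot1q tag native command, which likewise defeats double tagging; check whether your switch model supports it.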
Notice that I said "this version". Switch spoofing is another variation of VLAN Hopping that is even worse than double tagging, because this version allows the rogue to pretend to be a member of *all* VLANs in your network. Many Cisco switch ports now run in dynamic desirable mode by default, which means that a port is sending out Dynamic Trunking Protocol frames in an aggressive effort to form a trunk. A potential problem exists, since the switch doesn't really know what kind of device is receiving the DTP frames.
This leads many well-intentioned network admins to place such a port into Auto mode, which means that port will still trunk but it's not actively seeking to do so. That in turn leads to another major potential problem, because a rogue host connected to a port in Auto trunking mode can pretend it's a switch and send DTP frames of its own - leading to a trunk formed between the switch and the rogue host!
When that trunk forms, the rogue host will have access to all VLANs - after all, this is now a trunk! Luckily, there's a quick defense for this attack. Every port on your switch that does not lead to another known switch should be placed into access mode. That disables the port's ability to create a trunk, and in turn disables the rogue host's ability to spoof being a switch!

Cisco Discovery Protocol (CDP) And Potential Security Issues

Before we talk about how CDP can pose a security risk to your network, we need to review what CDP does in the first place. Some networks have clear, concise network maps that show you every router, every switch, and every physical connection. Some networks do not. Part of troubleshooting is quietly verifying what a client is telling you. Fact is, you can't always take what a client says at face value; just because he says two switches are physically connected, it doesn't mean that they are - but you need to know!

You can check a Cisco device's physical connections with Cisco Discovery Protocol, which runs by default on Cisco routers and switches, both globally and on a per-interface level. For security purposes, many admins choose to disable CDP. Here's the command to see if CDP is indeed running on a router or switch:

Router1#show cdp
Global CDP information:
    Sending CDP packets every 60 seconds
    Sending a holdtime value of 180 seconds
    Sending CDPv2 advertisements is enabled
That output means that CDP is indeed enabled. If you see the following, it's off.

Router1#show cdp
% CDP is not enabled
Router1#
Here's how to enable CDP:

Router1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Router1(config)#cdp run
The most commonly used CDP command is show cdp neighbor. I'll move over to a switch that has three physical connections to other hosts to show you the output of this command.

SW1#show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone

Device ID    Local Intrfce    Holdtme    Capability    Platform      Port ID
SW2          Fas 0/12         170        S I           WS-C2950-1    Fas 0/12
SW2          Fas 0/11         170        S I           WS-C2950-1    Fas 0/11
R2           Fas 0/2          131        R             2520          Eth 0
This command shows us every device this switch is physically connected to, and gives us a wealth of information as well! From left to right... Device ID is the remote device's hostname. Local Interface is the local switch's interface connected to the remote host. Holdtime is the number of seconds the local device will retain the contents of the last CDP Advertisement received from the remote host.
Capability shows you what type of device the remote host is. The first two connections are to a switch, and the third is to a router. Platform is the remote device's hardware platform. The top two connections are to a 2950 switch, and the third is to a 2520 router. Port ID is the remote device's interface on the direct connection.

This is an excellent command to verify what you're seeing on a network map or what a client is telling you. I've been in more than one situation where a client said one thing and CDP directly proved them wrong. It may be best to use it when they're not around, but it can also prove what you're telling the client. Real-world courtesy: If your client has CDP turned off, and you turn it on for troubleshooting, turn it back off before you leave. It's good for the ol' job security, too.

The commands cdp run and no cdp run enable and disable CDP on a global basis. CDP runs globally on a Cisco device by default. You may want to leave CDP on globally, but disable it on a particular interface. To enable or disable CDP on a per-interface basis, use cdp enable and no cdp enable.

SW1(config)#int fast 0/12
SW1(config-if)#no cdp enable
SW1(config-if)#cdp enable
There are some other CDP commands you may find helpful, the first being show cdp neighbors detail. This command gives you a lot of detail about every CDP neighbor, so I won't put it all here, but here's a clip of the output dealing with just one of SW1's neighbors. Note that you can even see the neighbor's IOS version with this command!

SW1#show cdp neighbor detail
-------------------------
Device ID: SW2
Entry address(es):
Platform: cisco WS-C2950-12,  Capabilities: Switch IGMP
Interface: FastEthernet0/12,  Port ID (outgoing port): FastEthernet0/12
Holdtime : 148 sec

Version :
Cisco Internetwork Operating System Software
IOS (tm) C2950 Software (C2950-I6Q4L2-M), Version 12.1(19)EA1c, RELEASE SOFTWARE (fc2)
Copyright (c) 1986-2004 by cisco Systems, Inc.
Compiled Mon 02-Feb-04 23:29 by yenanh
And right before I leave the client site, I'd run show cdp interface to verify that CDP is running on the interfaces that it should be running on - and not running on the others! Here's the partial output of this command on SW1:

SW1#show cdp interface
FastEthernet0/1 is down, line protocol is down
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds
FastEthernet0/2 is up, line protocol is up
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds
FastEthernet0/3 is down, line protocol is down
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds
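Those 60-second and 180-second values are the CDP defaults, and they're tunable globally. A quick sketch of adjusting them - the values shown here are purely illustrative, just keep the holdtime comfortably larger than the timer:

```
SW1(config)#cdp timer 30
SW1(config)#cdp holdtime 90
```

Tightening the timer makes show cdp neighbor information fresher at the cost of a little extra advertisement traffic.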
So if CDP's so great, why do many network admins choose to disable it on their networks? These vulnerabilities should sound familiar:

CDP sends all information in clear text
CDP offers no authentication

What we used to do was disable CDP globally, and that was that - but it's not so simple anymore. Just about every Cisco network management product uses CDP in some form to monitor the network and/or create reports. To minimize the risk of using CDP, determine which interfaces really need to be running it and which do not, and then run CDP only on the interfaces that need it.

In case you run into networks that (gasp!) run non-Cisco devices, you may run into the Link Layer Discovery Protocol (LLDP). This is the industry-standard equivalent of CDP. Cisco devices can run it, but it's disabled by default.

Telnet And SSH

Telnet's a great way to communicate remotely with routers and switches, but there's a problem - all of the data sent to the remote host, including passwords, is transmitted in clear text. Any would-be network intruder who intercepts the password transmission can then easily enter the network via Telnet, and then we're in real trouble!
Secure Shell (SSH) is basically "encrypted Telnet", since the basic operation of SSH is just like Telnet's, but the data (including the password) is encrypted.
For this very simple and very powerful reason, SSH is preferred over Telnet. But I can hear you now - "Then why does my company still use Telnet instead of SSH?" Telnet is very easy to set up, but SSH does take a little more work (and perhaps a little more hardware). To use SSH, we'll have to use one of the following authentication methods:
A local database on the router
Authentication via AAA, enabled with aaa new-model
Telnet allows the use of a password that's configured directly on the VTY lines, but SSH does not. When using a local database for SSH, the first step is to configure login local on the VTY lines, rather than the login command we used for the Telnet configuration. Remove any passwords from the VTY lines as well. The login local command tells the switch to look to a database on the local device for valid username/password combinations.
R1(config)#line vty 0 4
R1(config-line)#login local
R1(config-line)#transport input ssh
Then we'll create the username/password database.

SW2(config)#username mulligan password eaglepass
SW2(config)#username mcdaniel password oklahoma
SW2(config)#username hrace password missouri
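As mentioned above, any old line password left over from a Telnet setup should come off the vty lines once login local is in place. A quick sketch:

```
R1(config)#line vty 0 4
R1(config-line)#no password
R1(config-line)#login local
```

With login local configured, the line password would be ignored anyway - removing it just keeps the config honest.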
When a user attempts to connect, the user must specify a username in this database and supply the password assigned to that username. Getting one or the other right isn't enough! That's much more secure than the one-password-fits-all configuration many Telnet configs use. We could use the username/password command to create a database strictly for Telnet, and the login local command would have the same effect. Where the Telnet and SSH configurations differ is that the SSH config requires the following where Telnet does not:
A domain name must be specified with the ip domain-name command
A crypto key must be created with the crypto key generate rsa command
Create the domain name with the ip domain-name command. Also, if the router has no name, give it one with the hostname command.

R1(config)#ip domain-name bryantadvantage.com
When you generate the key with crypto key generate rsa, you'll get this readout. Note that the key is named after the device's hostname and domain name - here, R1 and bryantadvantage.com.

R1(config)#crypto key generate rsa
The name for the keys will be: R1.bryantadvantage.com
Choose the size of the key modulus in the range of 360 to 2048 for your
  General Purpose Keys. Choosing a key modulus greater than 512 may take
  a few minutes.

How many bits in the modulus [512]: 1024
% Generating 1024 bit RSA keys, keys will be non-exportable...[OK]
If you're getting "unrecognized command" for valid SSH commands, the most likely reason is that you skipped this step.
You may want to accept only SSH on the vty lines and refuse attempted Telnet connections. To do so, enable only SSH with the transport input command on the vty lines.

R1(config)#line vty 0 4
R1(config-line)#transport input ssh
To set the SSH timeout value, use the ip ssh time-out command. Note that IOS Help tells us to enter the value in seconds, not minutes.

R1(config)#ip ssh time-out ?
  SSH time-out interval (secs)

To set the maximum number of SSH authentication retries, use the ip ssh authentication-retries command.

R1(config)#ip ssh authentication-retries ?
  Number of authentication retries
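Putting those two together, a common hardening baseline looks something like the following sketch - the values shown are illustrative, not official recommendations:

```
R1(config)#ip ssh time-out 60
R1(config)#ip ssh authentication-retries 3
```

A short timeout limits how long a half-finished SSH negotiation can hang around, and a low retry count slows down password-guessing attempts.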
Here are some other SSH options - another common one to set is the maxstartups option.

R1(config)#ip ssh ?
  authentication-retries  Specify number of authentication retries
  break-string            break-string
  logging                 Configure logging for SSH
  maxstartups             Maximum concurrent sessions allowed
  port                    Starting (or only) Port number to listen on
  rsa                     Configure RSA keypair name for SSH
  source-interface        Specify interface for source address in SSH connections
  time-out                Specify SSH time-out interval
  version                 Specify protocol version to be supported
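Of those options, version deserves special mention - many shops restrict the device to SSH version 2, assuming the IOS image in use supports it:

```
R1(config)#ip ssh version 2
```

SSHv1 has known weaknesses, so locking the device to version 2 is a common best practice where the image allows it.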
There's one more similarity between Telnet and SSH - you can still use ACLs to determine who should be able to connect via SSH, and you still use the access-class command to apply that ACL to the vty lines. Here, I've created a named ACL that denies a single host address and permits all others. The named ACL is then applied to the vty lines.
R1(config)#ip access-list standard BLOCKNETWORK3
R1(config-std-nacl)#deny host 3.3.3.3
R1(config-std-nacl)#permit any
R1(config-std-nacl)#line vty 0 4
R1(config-line)#access-class BLOCKNETWORK3 in
Review: Creating Banners

For legal reasons, you may want to warn users that unauthorized access to the router or switch is prohibited. You can present this message, or any message you feel appropriate, with the banner command. (Inappropriate messages are best left for home lab practice!)

The banner command has a few options, the most common of which is the Message Of The Day (MOTD) option.

SW2(config)#banner ?
  LINE            c banner-text c, where 'c' is a delimiting character
  exec            Set EXEC process creation banner
  incoming        Set incoming terminal line banner
  login           Set login banner
  motd            Set Message of the Day banner
  prompt-timeout  Set Message for login authentication timeout
  slip-ppp        Set Message for SLIP/PPP
We'll select motd, then use IOS Help to view our options.

SW2(config)#banner motd ?
  LINE  c banner-text c, where 'c' is a delimiting character
That description of the LINE command can be a little confusing, so using a dollar sign as the delimiting character, here's how to configure a MOTD banner message.

SW1(config)#banner motd $
Enter TEXT message. End with the character '$'.
Network down for router IOS upgrade at 10 PM EST tonight!
$
It doesn't matter what symbol or letter you use for the delimiting character, but you have to use the same one to begin and end the message. When I entered a dollar sign as the delimiting character, the switch told me to end my text message with the dollar sign.

I log out of the switch, then come back in, and I'm presented with the MOTD banner message.

SW1 con0 is now available

Press RETURN to get started.

Network down for router IOS upgrade at 10 PM EST tonight!

SW1>
If we want to add a warning to that - say, a message warning against unauthorized access - we can create a login banner. That banner's contents will appear after the MOTD, but before the login prompt.

SW2(config)#banner login %
Enter TEXT message. End with the character '%'.
Unauthorized Access Prohibited By Law. But You Knew That.
%
I've added a console line password of cisco as well:

line con 0
 exec-timeout 0 0
 password cisco
 logging synchronous
 login
When I log out and then log back in, I see the MOTD banner message followed by the login banner message.

SW1 con0 is now available

Press RETURN to get started.

Network down for router IOS upgrade at 10 PM EST tonight!
Unauthorized Access Prohibited By Law. But You Knew That.

User Access Verification

Password:
SW1>
This is how you'll see the banners appear in the config:

banner login ^C
Unauthorized Access Prohibited By Law. But You Knew That.
^C
banner motd ^C
Network down for router IOS upgrade at 10 PM EST tonight!
^C

No matter what delimiting character you use, you'll see it represented as ^C in the config, so don't get thrown off by that. Let's use IOS Help to look at our other options:

SW2(config)#banner ?
  LINE            c banner-text c, where 'c' is a delimiting character
  exec            Set EXEC process creation banner
  incoming        Set incoming terminal line banner
  login           Set login banner
  motd            Set Message of the Day banner
  prompt-timeout  Set Message for login authentication timeout
  slip-ppp        Set Message for SLIP/PPP
You may want to present a banner message to users who have successfully authenticated, and you can do that with the banner exec command. You can use the ENTER key for hard breaks in a banner message, as shown below.

SW1(config)#banner exec *
Enter TEXT message. End with the character '*'.
Welcome to our nice, clean network.  < enter key pressed >
Please keep it that way.
*
After logging out and back in, the exec banner is presented after I successfully authenticate with the password cisco.

Network down for router IOS upgrade at 10 PM EST tonight!
Unauthorized Access Prohibited By Law. But You Knew That.

User Access Verification

Password:
Welcome to our nice, clean network.
Please keep it that way.
SW1>
A Little HTTP Security

With products such as Security Device Manager, you'll need to set up a little something called "HTTP Secure Server". This is required since a plain-HTTP web interface doesn't offer encryption, and that's a poor beginning for a security implementation.
You'll need to enable HTTPS and HTTP local authentication with the ip http secure-server and ip http authentication local commands. The latter command enables the use of a local database for HTTPS authentication; we create that database with the username/password command.
Note the result of the ip http secure-server command.
R1(config)#ip http server        (enables HTTP)
R1(config)#ip http authentication local
R1(config)#ip http secure-server
% Generating 1024 bit RSA keys, keys will be non-exportable...[OK]

R1(config)#
11:44:05: %SSH-5-ENABLED: SSH 1.99 has been enabled
11:44:06: %PKI-4-NOAUTOSAVE: Configuration was modified. Issue "write memory" to save new certificate
You could also use the crypto key generate rsa command to create that certificate.

Real-World Security Plans, Concerns, And Conversations

This section might help you on the CCNP SWITCH exam. It will definitely help you in real-world networking situations - so while this topic isn't quite as exciting as configuring security solutions, it's just about as important. This section is about the importance of planning and having the right conversations before you start implementing your security solution.

Conversations with whom, you ask?

Define Expectations With Your Client

Your average client expects that when you're done implementing a network security solution, he'll be protected against everything and everyone that will ever want to do his network harm. Forever. You and I both know that this is an unrealistic expectation - but the client might not know that, and we need to do two things before we even get started on this job:

You need to define exactly what the limits of the security solution are - what's guaranteed and what's not - and...
... you need to put this into writing.

Why go to this trouble? Let's say that you put in a security solution for a client, and everything's just fine for three years. Then someone somewhere comes up with a SuperVirus that infects his network. That client will say many things, but "Oh well, that's just the way it goes" is not one of them. He may say a few things to his corporate attorney, and then you're really off to the races.
Here's how serious this is - Cisco has a security feature called AutoSecure that has a setting actually called One-Step Lockdown. Every Cisco security best practice you can think of is applied to the router. And before it even starts, you're presented with a warning that while Cisco has done everything they can to make this a secure solution, they can't guarantee that nothing bad will ever happen to that router.

And if Cisco can't guarantee it, you shouldn't be guaranteeing it either. I don't say this to scare you - it's just part of the world we live in. Be sure you clearly define the expectations and limitations of any security solution with your client - and get him to sign off on it.

Don't Just Jump In

I've known a network admin or two in my life who weren't real big on planning - they just liked to put their hands on the routers and start configurin'. I'm not always the most patient fellow in the world, but I never touch a client's network without taking care of some other tasks first. Your company's prerequisites likely differ, but these steps are a good idea no matter what you're implementing.

Test your solution on a pilot network. Nothing's worse than rolling out a security solution and then finding it's incompatible with your OSes, desktops, or anything else.

Run an audit, or have an auditing firm do it. You've gotta know what's out there in the network before you can protect it - and you can't depend on the client or old inventory lists for a complete picture of the network. The audit should include both the hardware and software in use.

Take an incremental approach. If something's going to go wrong, I'd rather find out about it while it's affecting a small part of the network rather than the whole thing. Whether it's email or security, or anything in between, never migrate / install / implement on a network-wide basis if it can possibly be helped. Roll it out incrementally.

Have an emergency plan / rollback plan / parachute / whateveryouwannacallit. If something goes wrong, you cannot say "Hey, I don't know what to do now." You need to be ready to put the network back into its prior, operational state. This goes for anything you install on a network.

And during the process, you should create a few things for your client (and you) as well:

Create a definitive security policy. Put it in writing - what will be allowed, what behaviors will be prohibited, and what actions will be taken when an undesirable behavior takes place?

Create an incident response plan. Look, after you leave, something's going to happen sooner or later. Have a plan in place to handle it and update it as time goes by.
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
Multilayer Switching & High Availability Services And Protocols Overview

What Is Multilayer Switching?
Route Caching
Cisco Express Forwarding
Inter-VLAN Routing
Switched Virtual Interfaces (SVIs)
Fallback Bridging
ICMP Router Discovery Protocol (IRDP)
HSRP Basics
HSRP MAC Address Changing
HSRP Changing The Active Router
HSRP Load Balancing
HSRP Interface Tracking
Virtual Router Redundancy Protocol (VRRP)
Gateway Load Balancing Protocol (GLBP)
Server Load Balancing (SLB)
Syslog And Logging
Cisco SLA
DHCP Server Config On Cisco Routers
IP Helper-Addresses
When you're learning basic routing and switching theory in your CCNA studies, the two processes are taught as separate operations that happen on two separate physical devices -- switches switch at Layer 2, routers route at Layer 3, and never the two shall meet...
... until now. While they are separate operations, devices that can perform both routing and switching are more and more popular today. These devices are Layer 3 switches, or multilayer switches.
What Is Multilayer Switching?

Multilayer switches are devices that switch and route packets in the switch hardware itself. A good phrase to describe a multilayer switch is "pure performance" - these switches can perform packet switching up to ten times as fast as a pure L3 router.
Multilayer switches make it possible to have inter-VLAN communication without having to use a separate L3 device or configure router-on-a-stick. If two hosts in separate VLANs are connected to the same multilayer switch, the correct configuration will allow that communication without the data ever leaving that switch.

When it comes to Cisco Catalyst switches, this hardware switching is performed by a route processor (or L3 engine). This processor must download routing information to the hardware itself. To make this hardware-based packet processing happen, Cat switches will run either the older....um, I mean "legacy" Multilayer Switching (MLS), or the newer Cisco Express Forwarding (CEF).

Application-Specific Integrated Circuits (ASICs) will perform the L2 rewriting operation on these packets. You know from your CCNA studies that while the IP source and destination addresses of a packet will not change during its travels through the network, the L2 source and destination addresses may - and probably will. With multilayer switching, it's the ASICs that perform this L2 address rewrite.

The CAM And TCAM Tables

You learned early in your CCNA studies that we really like having more than one name for something in networking - and that's particularly true of the MAC address table, which is also known as the bridging table, the switching table, and the Content Addressable Memory table - the CAM table.

Multilayer switches still have a CAM table, and it operates just as an L2 switch's CAM table does - but we have a lot more going on with our L3 switches, including routing, ACLs, and QoS. A simple CAM table can't handle all of this, so in addition to the CAM table we have a TCAM table - Ternary Content Addressable Memory. Basically, the TCAM table stores everything the CAM table can't, including info about ACLs and QoS.

Multilayer Switching Methods

The first multilayer switching (MLS) method is route caching. Route caching devices have both a routing processor and a switching engine.
The routing processor routes a flow's first packet, the switching engine snoops in on that packet and the destination, and the switching engine
takes over and forwards the rest of the packets in that flow.

Now, what exactly does a "flow" consist of? A flow is a unidirectional stream of packets from a source to a destination, and packets in the same flow will share the same protocol. That is, if a source is sending both WWW and TFTP packets to the same destination, there are actually two flows of traffic. The MLS cache entries support such unidirectional flows.

Route caching can be effective, but there's one slight drawback - the first packet in any flow will be switched by software. Even though all other packets in the flow will be hardware-switched, it is more effective for us to have all of the packets switched by hardware - and that's what we get with CEF.

Cisco Express Forwarding (CEF) is a highly popular method of multilayer switching. Primarily designed for backbone switches, this topology-based switching method requires special hardware, so it's not available on all L3 switches. CEF can't be configured on 2950 switches, but you will see it on 3550s and several other higher-numbered series. CEF is highly scalable, and is also easier on a switch's CPU than route caching.

CEF has two major components - the Forwarding Information Base and the Adjacency Table. CEF-enabled devices hold the same routing information that a router would, but it's not found in a typical routing table. CEF-enabled switches keep a Forwarding Information Base (FIB) that contains the usual routing information - the destination networks, their masks, the next-hop IP addresses, etc. - and CEF will use the FIB to make L3 prefix-based decisions. The FIB's contents will mirror that of the IP routing table - actually, the FIB is really just the IP routing table in another format. You can view the FIB with the show ip cef command.

SW2#show ip cef
Prefix               Next Hop             Interface
0.0.0.0/32           receive
224.0.0.0/4          drop
224.0.0.0/24         receive
255.255.255.255/32   receive
Not exactly the routing table we've come to know and love! However, running CEF doesn't prevent us from configuring access-lists, QoS, or other "regular" traffic filtering features that routers use every day.

The routing information in the FIB is updated dynamically as change notifications are received from the L3 engine. Since the FIB is prepopulated with the information from the routing table, the MLS can find the routing information quickly. Should the TCAM hit capacity, there's a wildcard entry that will redirect traffic to the routing engine.

The FIB takes care of the L3 routing information, but what of the L2 information we need? That's found in the Adjacency Table (AT). As adjacent hosts are discovered via ARP, that next-hop L2 information is kept in this table for CEF switching. (A host is considered adjacent to another if they're just one hop apart.) Like the TCAM, if the AT hits capacity, there is a wildcard entry pointing to the L3 engine.

To sum it up:

The FIB contains L3 information and is created from the IP routing table.
The AT contains L2 information and is created from the ARP table.

There are some special cases when it comes to adjacencies:

Remember the Null0 route created by route summarization? Basically, it's a route to nowhere. A null adjacency is said to be formed for these packets, and they're dropped.

If we have packets that need some attention from the L3 engine rather than being switched in hardware (or if they can't be switched by hardware for some reason), that's a punt adjacency.

Ingress packets that should be dropped are handled via a drop adjacency or a discard adjacency.

Once the appropriate L3 and L2 next-hop addresses have been found, the MLS is just about ready to forward the packet. The MLS will make the same changes to the packet as a router normally would, and that includes changing the L2 destination MAC address - that's going to be changed to the next-hop device's MAC address, as I'm sure you remember from your CCNA studies. The L3 destination will remain the same. (The L2 source address will change as well, to the MAC address of the MLS switch interface that transmits the packet.)

Enabling CEF is about as simple as it gets. CEF is on by default on any and all CEF-enabled switches, and you can't turn it off. Remember, CEF is hardware-based, not software-based, so it's not a situation where running "no cef" on a switch will disable CEF. There's no such command! A multilayer switch must have IP routing enabled for CEF to run, however. Trying to view the FIB of a switch with IP routing not enabled results in this console readout...

SW2#show ip cef
%IPv4 CEF not running
... and then after enabling IP routing:

SW2(config)#ip routing

SW2#show ip cef
Prefix               Next Hop             Interface
0.0.0.0/32           receive
224.0.0.0/4          drop
224.0.0.0/24         receive
255.255.255.255/32   receive
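The Adjacency Table has its own verification command, show adjacency. The exact entries depend on which neighbors have been resolved via ARP, so the sample output below is illustrative only:

```
SW2#show adjacency
Protocol Interface                 Address
IP       Vlan11                    20.1.1.1(7)
```

If a next hop you expect to see is missing here, checking the ARP table is a sensible next troubleshooting step, since the AT is built from it.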
As with several advanced L3 switching capabilities, not every L3 switch can run CEF. For instance, the 2900XL and 3500XL do not support CEF. Keep in mind that switches that do support CEF do so by default, and CEF can't be turned off on those switches!

CEF does support per-packet and per-destination load balancing, but the capabilities differ between Cisco switch models. The default CEF load balancing mode is per-destination, where packets for any particular destination will take the same path from start to finish, even if other valid paths are available.

The Control Plane And The Data Plane

These are both logical planes found in CEF multilayer switching, and I know you won't be surprised to find they are also referred to by several
different names. These all refer to the control plane:
"CEF control plane" "control plane" "Layer 3 engine" or "Layer 3 forwarding engine"
The control plane's job is to first build the ARP and IP routing tables, from which the AT and FIB will be derived, respectively. In turn, the data plane is also called by several different names:
"data plane" "hardware engine" "ASIC"
The control plane builds the tables necessary for L3 switching, but it's the data plane that does the actual work! It's the data plane that places data in the L3 switch's memory while the FIB and AT tables are consulted, and then performs any necessary encapsulation before forwarding the data to the next hop.

Exceptions To The Rule (Of L3 Switching, That Is)

Exception packets are packets that cannot be hardware switched, which leaves us only one option - software switching! Comparing hardware switching to software switching is much like comparing the hare to the tortoise - but these tortoises are not going to win a race. Here are just a few of the packet types that must be software switched:
Packets with IP header options
Packets that will be fragmented before transmission (because they're exceeding the MTU)
NAT packets
Packets that arrived at the MLS with an invalid encapsulation type
Note that packets with TCP header options are still switched in hardware; it's the IP header options that cause trouble!

Is "Fast Switching" Really That Fast?

With so many switching options available today, it's hard to keep up with which option is fastest, then next-fastest, and so on. According to Cisco's website, here's the order:

1. Distributed CEF (DCEF). The name is the recipe - the CEF workload is distributed over multiple CPUs.
2. CEF
3. Fast Switching
4. Process Switching (sometimes jokingly referred to as "slow switching" - it's quite an involved process and is a real CPU hog)
Inter-VLAN Routing

Now that we have the (important) nuts and bolts out of the way, let's configure an L3 switch! Multilayer switches allow us to create a logical interface, the Switched Virtual Interface (SVI), that represents the VLAN. Remember that the L2 switches you've worked with have an "interface VLAN1" by default?

interface Vlan1
 no ip address
 no ip route-cache
 shutdown

That's actually a switched virtual interface (SVI). An SVI exists for VLAN 1 by default, but that's the only VLAN that has a "pre-created" SVI. As you recall from your CCNA studies, this VLAN 1 SVI is for remote switch administration.

On an MLS, such a logical interface can be configured for any VLAN, and you configure it just as you would any other logical interface, such as a loopback interface - just go into config mode, create the interface, assign it an IP address, and you're on your way.
MLS(config)#interface vlan 10
MLS(config-if)#ip address 10.1.1.1 255.255.255.0
Let's put SVIs to work with a basic interVLAN routing configuration.
To allow these two hosts to communicate, you know that we've got to have an L3 device - and now we have a different kind of L3 device than you've used before. This L3 switch will allow interVLAN communication without involving a router. Before we begin configuring, we'll send pings between the two hosts. (In this example, I'm using routers for hosts, but there are no routes of any kind on them.)

HOST_1#ping 30.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 30.1.1.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

HOST_3#ping 20.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 20.1.1.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
As expected, neither host can ping the other. Let's fix that! To get started, we'll put the port leading to Host 1 into VLAN 11, and the port leading to Host 3 into VLAN 33.

SW1(config)#int fast 0/1
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 11
SW1(config-if)#int fast 0/3
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 33
We're going to create two SVIs on the switch, one representing VLAN 11 and the other representing VLAN 33. Note that both SVIs show as up/up immediately after creation. Some Cisco and non-Cisco documentation mentions that you should open the SVIs after creating them, but that's not necessarily the case in the real world. Couldn't hurt, though. :)

SW1(config)#int vlan11
01:30:04: %LINK-3-UPDOWN: Interface Vlan11, changed state to up
01:30:05: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan11, changed state to up
SW1(config-if)#ip address 20.1.1.11 255.255.255.0

SW1(config-if)#int vlan33
01:30:11: %LINK-3-UPDOWN: Interface Vlan33, changed state to up
01:30:12: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan33, changed state to up
SW1(config-if)#ip address 30.1.1.11 255.255.255.0
Only one VLAN per SVI, please. If you don't see "up" for the interface itself and/or the line protocol, you likely haven't created the VLAN yet or placed a port into that VLAN. Do those two things and you should see the following result with show interface vlan. I'll only show the top three rows of output for each SVI.

SW1#show int vlan11
Vlan11 is up, line protocol is up
  Hardware is EtherSVI, address is 0012.7f02.4b41 (bia 0012.7f02.4b41)
  Internet address is 20.1.1.11/24

SW1#show int vlan33
Vlan33 is up, line protocol is up
  Hardware is EtherSVI, address is 0012.7f02.4b42 (bia 0012.7f02.4b42)
  Internet address is 30.1.1.11/24
Now let's check that routing table...

SW1#show ip route
Default gateway is not set

Host               Gateway           Last Use    Total Uses  Interface
ICMP redirect cache is empty
Hmm, that's not good. We don't have one! There's a simple reason, though - on L3 switches, we need to enable IP routing, because it's off by default!

Step One In L3 Switching Troubleshooting: Make Sure IP Routing Is On!
SW1(config)#ip routing
SW1(config)#^Z
SW1#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     20.0.0.0/24 is subnetted, 1 subnets
C       20.1.1.0 is directly connected, Vlan11
     30.0.0.0/24 is subnetted, 1 subnets
C       30.1.1.0 is directly connected, Vlan33
Now that looks like the routing table we've come to know and love! In this particular case, there's no need to configure a routing protocol. Why? You recall from your CCNA studies that when router-on-a-stick is configured, the IP address assigned to the router's subinterfaces should be the default gateway setting on the hosts. When SVIs are in use, the default gateway set on the hosts should be the IP address assigned to the SVI that represents that host's VLAN. Once this default gateway is set on the hosts, the hosts can successfully communicate. Since we're using routers for hosts, we'll use the ip route command to set the default gateway.

HOST_1(config)#ip route 0.0.0.0 0.0.0.0 20.1.1.11
HOST_3(config)#ip route 0.0.0.0 0.0.0.0 30.1.1.11
Can the hosts now communicate, even though they're in different VLANs? HOST_1#ping 30.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 30.1.1.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms HOST_3#ping 20.1.1.1 Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 20.1.1.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Routed Ports

We also have the option of configuring a physical port on a multilayer switch as a routed port. A few things to note about these ports:

You assign an IP address to a routed port in the same manner in which you would apply one to an SVI or to a port on a Cisco router.

There are some big differences between SVIs and routed ports, though - for one, routed ports are physical L3 switch ports, whereas SVIs are logical interfaces. Another difference - a routed port doesn't represent a particular VLAN as an SVI does.

You configure a routed port with a routing protocol such as OSPF or EIGRP in the exact same manner as you would on a router. That goes for protocol-specific commands as well as interface-level commands.
If we add a router to our network as shown below, that's what we'll need to do.
For many Cisco L3 switches, the ports will all be running in L2 mode by default. To configure a port as a routed port, use the no switchport command, followed by the appropriate IP address. Note that in the following configuration, the line protocol on the switch port goes down and comes back up in just a few seconds.

SW1(config)#interface fast 0/5
SW1(config-if)#no switchport
02:19:27: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/5, changed state to down
02:19:30: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/5, changed state to up
SW1(config-if)#ip address 210.1.1.11 255.255.255.0
We verify the IP address assignment with show int fast 0/5.

SW1#show int fast 0/5
FastEthernet0/5 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0012.7f02.4b43 (bia 0012.7f02.4b43)
  Internet address is 210.1.1.11/24
The switch can now ping 210.1.1.1, the downstream router. SW1#ping 210.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 210.1.1.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
Now let's take this just one step further - what if we wanted the hosts in the VLANs to be able to communicate with the router? They can ping 210.1.1.11, the switch's interface in that subnet, but not 210.1.1.1, the router's interface. HOST_1#ping 210.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 210.1.1.1, timeout is 2 seconds: ..... Success rate is 0 percent (0/5)
The router has no path to either 20.1.1.0/24 or 30.1.1.0/24, so there's no way for the pings to get back to Host 1 or Host 3.

ROUTER_TO_INTERNET#show ip route
< code table removed for clarity >

Gateway of last resort is not set

C    210.1.1.0/24 is directly connected, FastEthernet0/0
To remedy that, we'll now configure a dynamic routing protocol between the L3 switch and the router. We'll use EIGRP in this case.

SW1(config)#router eigrp 100
SW1(config-router)#no auto-summary
SW1(config-router)#network 210.1.1.0 0.0.0.255
SW1(config-router)#network 20.1.1.0 0.0.0.255
SW1(config-router)#network 30.1.1.0 0.0.0.255

ROUTER_TO_INTERNET(config)#router eigrp 100
ROUTER_TO_INTERNET(config-router)#no auto-summary
ROUTER_TO_INTERNET(config-router)#network 210.1.1.0 0.0.0.255
The router now has the VLAN subnets in its routing table...

ROUTER_TO_INTERNET#show ip route
< code table removed for clarity >

Gateway of last resort is not set

     20.0.0.0/24 is subnetted, 1 subnets
D       20.1.1.0 [90/28416] via 210.1.1.11, 00:01:01, FastEthernet0/0
C    210.1.1.0/24 is directly connected, FastEthernet0/0
     30.0.0.0/24 is subnetted, 1 subnets
D       30.1.1.0 [90/28416] via 210.1.1.11, 00:01:01, FastEthernet0/0
... and the hosts now have two-way IP connectivity with the router's 210.1.1.1 interface. HOST_1#ping 210.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 210.1.1.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms HOST_3#ping 210.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 210.1.1.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
It never hurts to make sure the pings can go the other way, too! ROUTER_TO_INTERNET#ping 20.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 20.1.1.1, timeout is 2 seconds:
!!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
ROUTER_TO_INTERNET#ping 30.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 30.1.1.1, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
At first, the details of SVIs and routed ports might make you pine for the good ol' days of ROAS! Once you get a little experience and study in, though, you'll find that SVIs and routed ports really are much more effective ways of getting the job done on your network - and on your exam! Here's a quick SVI checklist:

Create the VLAN before the SVI. The VLAN must be active when the SVI is created - that VLAN will not be dynamically created at that time.

Theoretically, you need to open the SVI with no shutdown just as you would open a physical interface after configuring an IP address.

The SVI and VLAN have an association, but they're not the same thing, and yes, I know you know that by now. Just a friendly reminder - creating one does not dynamically create the other.

The IP address assigned to the SVI should be the default gateway address configured on the VLAN's hosts.

The only SVI on the switch by default is the SVI for VLAN 1, intended to allow remote switch administration and configuration. (Having to drive in at 3 AM because there's no IP address on this interface really stinks.)

SVIs are a great way to allow inter-VLAN communication, but you must have IP routing enabled in addition to the SVIs. (Tshooting note - if this inter-VLAN communication fails, check your SVI addresses and make sure you have IP routing enabled on the switch. More SVI tshooting notes later in this section.)
Fallback Bridging

Odds are that you'll never need to configure fallback bridging, but it falls under the category of "it couldn't hurt to know it". CEF has a limitation in that IPX, SNA, LAT, and AppleTalk are either not supported by CEF or, in the case of SNA and LAT, are nonroutable protocols. If you're running any of these on a CEF-enabled switch, you'll need fallback bridging to get this traffic from one VLAN to another. Fallback bridging involves the creation of bridge groups, and the SVIs will have to be added to these bridge groups. To create a bridge group:

MLS(config)#bridge 1 protocol vlan-bridge
To join a SVI to a bridge group: MLS(config)#interface vlan 10 MLS(config-if)#bridge-group 1
Router-On-A-Stick vs. Switched Virtual Interfaces

Those of you who earned your CCNA working with me know that I love configuring router-on-a-stick, and that ROAS configs are also quite effective and quite stable. So what's the big deal about SVIs? As much as I like 'em, ROAS configs do have a few drawbacks:

Not the most intuitive config in the world. I know that when you look at a completed ROAS config, you'll wonder how you could make a mistake with it - but I'll put it this way: In my CCNA ROAS material, the troubleshooting section is larger than the configuration section.

We're sending a lot of traffic up and down a single trunk line from the L2 switch to the router. And the phrase "single trunk line" brings to mind another phrase that we really hate - "single point of failure". If the port on either side of the trunk goes out, or the cable itself goes down, we're SOL (Sure Out of Luck).

Those drawbacks lead us to SVIs, which have advantages that neatly map to ROAS's deficiencies:

No single point of failure

Faster than ROAS

No need to configure a trunk between the L2 switch and the router

Generally speaking, if you have an L3 switch, you're much better off using SVIs for inter-VLAN communication rather than ROAS.

SVI "Up And Up" Checklist

We saw each of these in action during the lab, and I want you to have a quick t-shooting list for your SVIs... so if you don't see your SVI interface and line protocol up, check these points first:

Make sure you created the VLAN that the SVI represents. You know how the switch creates a VLAN dynamically if you try to put a port into a VLAN that doesn't exist yet? Well, the switch isn't doing that for you here. Creating an SVI for VLAN 11 does not dynamically create VLAN 11 itself.

A valid port must be placed into the VLAN we just talked about - and by "valid", I mean the port is physically up and has been placed into forwarding mode by STP for that same VLAN.

Be sure you created the SVI you meant to create. Everybody mistypes a number once in a while... say, "vlan 12" when you meant "vlan 11".
Routed Port Checklist On the L3 switch itself, be sure to enable ip routing.
On the port, be sure you've run the no switchport command as well as applying an IP address and any protocol-specific commands you need for your particular config (the OSPF commands neighbor or ip ospf priority, for example).
An Etherchannel - SVI Similarity

We're not actually bundling ports with SVIs as we do with Etherchannels, but there is an interesting similarity between the two:

SVIs remain "up/up" as long as at least one port in the VLAN the SVI represents is up.

Etherchannels remain "up/up" as long as at least one port in the Portchannel is up. (The available bandwidth goes down, though. And yes, I know you know that.)

So what? So this: If you have a port in a VLAN that's not actually handling data, but is doing something else - like serving as a SPAN destination port connected to a network monitor, for example - the SVI would stay up even if all of the other ports in the VLAN went down, since this SPAN destination port would stay up. Having an SVI stay up in that situation isn't good - we can end up with a black hole in our routing process. Here's Wikipedia's definition of a black hole in space:

"According to the general theory of relativity, a black hole is a region of space from which nothing, not even light, can escape. It is the result of the deformation of spacetime caused by a very compact mass."

A black hole in routing is the result of an SVI remaining up when there are actually no "up/up" interfaces in that VLAN except for those connected to network monitors or similar devices. To avoid this, we can exclude such ports from the "up/up" calculation with the switchport autostate exclude command. Using that interface-level command on ports like the one previously described will exclude (get it?) that port from the "up/up" determination.
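The autostate determination just described can be modeled with a short Python sketch. This is a simplified illustration of the calculation, not Cisco's actual implementation; the port names and flags are made up for the example.

```python
def svi_is_up(ports):
    """Simplified SVI autostate: the SVI is up/up if at least one port in
    the VLAN is up AND has not been excluded from the calculation."""
    return any(p["up"] and not p["autostate_exclude"] for p in ports)

vlan11_ports = [
    {"name": "Fa0/1", "up": False, "autostate_exclude": False},  # host port, down
    {"name": "Fa0/2", "up": True,  "autostate_exclude": True},   # SPAN destination
]

# With the SPAN destination excluded, the SVI correctly goes down
# when the last real host port drops - no routing black hole.
print(svi_is_up(vlan11_ports))  # -> False
```

Without the exclude flag on Fa0/2, the same function would return True, which is exactly the black-hole scenario the command exists to prevent.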
Router Redundancy Techniques In networking, we'll take as much redundancy as we can get. If a router goes down, we've obviously got real problems. Hosts are relying on that router as a gateway to send packets to remote networks. For true network redundancy, we need two things:
A secondary router to handle the load when the primary goes down A protocol to get the networks using that secondary router as quickly as possible
Time is definitely of the essence here, in more ways than one - we need a protocol to quickly detect the fact that the primary router's down in the first place, and then we need a fast cutover to the secondary router. Now you may be thinking, "Why is Chris talking about router redundancy in a switching course?" With the popularity of L3 switches, you'll often be configuring these protocols on multilayer switches - so often that Cisco now tests your knowledge of these protocols on the CCNP Switch exam rather than Route. Running router redundancy protocols on your multilayer switches actually makes the cutover to a backup device a little faster than configuring them on routers, since our end users are generally attached to the L3 switches themselves, making this truly first-hop redundancy (or "1-hop redundancy" in some documentation). With the importance of these protocols in today's networks, we better be ready for exam questions and real-world situations. We have several different methods that allow us to achieve the goal of redundancy, and a very popular choice is HSRP - the Hot Standby Routing Protocol. Please note: In the following section, I'm going to refer to routers rather
than L3 switches, since the HSRP terminology itself refers to "Active routers", "Standby routers", and so forth. The commands and theory for all of the following protocols will be the same on a multilayer switch as they are on a router. Hot Standby Routing Protocol Defined in RFC 2281, HSRP is a Cisco-proprietary protocol in which routers are put into an HSRP router group. Along with dynamic routing protocols and STP, HSRP is considered a high-availability network service, since all three have an almost immediate cutover to a secondary path when the primary path is unavailable. One of the routers in the HSRP router group will be selected as the Active Router, and that router will handle the routing while the other routers are in standby, ready to handle the load if the primary router becomes unavailable. In this fashion, HSRP ensures a high network uptime, since it routes IP traffic without relying on a single router. The terms "active" and "standby" do not refer to the actual operational status of the routers - just to their status in the HSRP group. The hosts using HSRP as a gateway don't know the actual IP or MAC addresses of the routers in the group. They're communicating with a pseudorouter, a "virtual router" created by the HSRP configuration. This virtual router will have a virtual MAC and IP address as well, just like a physical router. The standby routers aren't just going to be sitting there, though! By configuring multiple HSRP groups on a single interface, HSRP load balancing can be achieved. Before we get to the more advanced HSRP configuration, we better get a basic one started! We'll be using a two-router topology here, and keep in mind that one or both of these routers could be multilayer switches as well. For ease of reading, I'm going to refer to them only as routers.
R2 and R3 will both be configured to be in standby group 5. The virtual router will have an IP address of 172.12.23.10 /24. All hosts in VLAN 100 should use this address as their default gateway.

R2(config)#interface ethernet0
R2(config-if)#standby 5 ip 172.12.23.10

R3(config)#interface ethernet0
R3(config-if)#standby 5 ip 172.12.23.10
The show command for HSRP is show standby, and it's the first command you should run while verifying and troubleshooting HSRP. Let's run it on both routers and compare results. R2#show standby Ethernet0 - Group 5 Local state is Standby, priority 100 Hellotime 3 sec, holdtime 10 sec Next hello sent in 0.776 Virtual IP address is 172.12.23.10 configured Active router is 172.12.23.3, priority 100 expires in 9.568 Standby router is local 1 state changes, last state change 00:00:22 R3#show standby Ethernet0 - Group 5
Local state is Active, priority 100 Hellotime 3 sec, holdtime 10 sec Next hello sent in 2.592 Virtual IP address is 172.12.23.10 configured Active router is local Standby router is 172.12.23.2 expires in 8.020 Virtual mac address is 0000.0c07.ac05 2 state changes, last state change 00:02:08
R3 is in Active state, while R2 is in Standby. The hosts are using the 172.12.23.10 address as their gateway, but R3 is actually handling the workload. R2 will take over if R3 becomes unavailable.

An IP address was assigned to the virtual router, but not a MAC address. However, there is a MAC address under the show standby output on R3, the active router. How did the HSRP process arrive at a MAC of 00-00-0c-07-ac-05? Well, most of the work is already done before the configuration is even begun. The MAC address 00-00-0c-07-ac-xx is HSRP's well-known virtual MAC address, and xx is the group number in hexadecimal. That's a good skill to have for the exam, so make sure you're comfortable with hex conversions. In this example, the group number is 5, which is expressed as the two-digit hex value 05. If the group number had been 17, we'd see 11 at the end of the MAC address - one unit of 16, one unit of 1.

The output of the show standby command also tells us that the HSRP speakers are sending Hellos every 3 seconds, with a 10-second holdtime. These values can be changed with the standby command, but HSRP speakers in the same group should have the same timers. You can even tie the hello time down to the millisecond, but it's doubtful you'll ever need to do that.

R3(config-if)#standby 5 timers ?
        Hello interval in seconds
  msec  Specify hello interval in milliseconds

R3(config-if)#standby 5 timers 4 ?
        Hold time in seconds

R3(config-if)#standby 5 timers 4 12
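That group-number-to-hex conversion is easy to practice with a couple of lines of Python - a study aid for checking your own conversions, not IOS output:

```python
def hsrp_v1_virtual_mac(group):
    """HSRPv1 well-known virtual MAC: 0000.0c07.acXX, where XX is the
    group number rendered as two hex digits."""
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group numbers fit in one byte (0-255)")
    return f"0000.0c07.ac{group:02x}"

print(hsrp_v1_virtual_mac(5))   # -> 0000.0c07.ac05
print(hsrp_v1_virtual_mac(17))  # -> 0000.0c07.ac11  (one 16 plus one 1)
```

Run a few group numbers through it until reading the last octet of a virtual MAC in hex is second nature - that's exactly the conversion the exam expects you to do in your head.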
A key value in the show standby command is the priority. The default is 100, as shown in both of the above show standby outputs. The router with the highest priority will be the primary HSRP router, the Active
Router. The router with the highest IP address on an HSRP-enabled interface becomes the Active Router if there is a tie on priority. We'll raise the default priority on R2 and see the results. R2(config)#interface ethernet0 R2(config-if)#standby 5 priority 150 R2#show standby Ethernet0 - Group 5 Local state is Standby, priority 150 Hellotime 4 sec, holdtime 12 sec Next hello sent in 0.896 Virtual IP address is 172.12.23.10 configured Active router is 172.12.23.3, priority 100 expires in 8.072 Standby router is local 1 state changes, last state change 00:14:24
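The election rule - highest priority wins, highest IP address on an HSRP-enabled interface breaks a tie - can be modeled like this. The helper function is hypothetical, strictly for study purposes:

```python
import ipaddress

def elect_active(routers):
    """Pick the HSRP Active router: highest priority first,
    highest interface IP address as the tiebreaker."""
    return max(routers,
               key=lambda r: (r["priority"], ipaddress.ip_address(r["ip"])))

group5 = [
    {"name": "R2", "ip": "172.12.23.2", "priority": 100},
    {"name": "R3", "ip": "172.12.23.3", "priority": 100},
]
print(elect_active(group5)["name"])  # -> R3 (priority tie, higher IP wins)

group5[0]["priority"] = 150          # raise R2's priority
print(elect_active(group5)["name"])  # -> R2
```

Keep in mind this models only the election math. As the lab shows, a running HSRP group will not hand the Active role to a higher-priority router unless that router is configured with the preempt option.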
R2 now has a higher priority, but R3 is still the Active Router. R2 will not take over as the HSRP primary until R3 goes down - OR the preempt option is configured on R2 with the standby command. In effect, the preempt option resets the HSRP process for that group. R2(config-if)#standby 5 priority 150 preempt 1d11h: %STANDBY-6-STATECHANGE: Ethernet0 Group 5 state Standby -> Active R2#show standby Ethernet0 - Group 5 Local state is Active, priority 150, may preempt Hellotime 4 sec, holdtime 12 sec Next hello sent in 1.844 Virtual IP address is 172.12.23.10 configured Active router is local Standby router is 172.12.23.3 expires in 10.204 Virtual mac address is 0000.0c07.ac05 2 state changes, last state change 00:00:13
In just a few seconds, a message appears that the local state has changed from standby to active. Show standby confirms that R2, the local router, is now the Active Router - the primary. R3 is now the standby. So if anyone tells you that you have to take a router down to change the Active router, they're wrong - you just have to use the preempt option on the standby priority command. What you do not have to do is configure the preempt command if you want the standby to take over as the Active Router if the current Active Router goes down. That's the default behavior of HSRP. The preempt
command is strictly intended to allow a router to take over as the active router without the current active router going down. On rare occasions, you may have to change the MAC address assigned to the virtual router. This is done with the standby mac-address command. Just make sure you're not duplicating a MAC address that's already on your network! R2(config-if)#standby 5 mac-address 0000.1111.2222
1d12h: %STANDBY-6-STATECHANGE: Ethernet0 Group 5 state Active -> Learn R2#show standby Ethernet0 - Group 5 Local state is Active, priority 150, may preempt Hellotime 4 sec, holdtime 12 sec Next hello sent in 3.476 Virtual IP address is 172.12.23.10 configured Active router is local Standby router is 172.12.23.3 expires in 10.204 Virtual mac address is 0000.1111.2222 configured 4 state changes, last state change 00:00:00 1d12h: %STANDBY-6-STATECHANGE: Ethernet0 Group 5 state Listen -> Active
The MAC address will take a few seconds to change, and the HSRP routers will go into Learn state for that time period. A real-world HSRP troubleshooting note: If you see constant state changes with your HSRP configuration, do what you should always do when troubleshooting - check the physical layer first. We can do some load balancing with HSRP, but it's not quite the load balancing you've learned about with some dynamic protocols. Let's say we have six hosts and two separate HSRP devices. For HSRP load balancing, there will be two HSRP groups created for the one VLAN. R2 will be the primary for Group 1 and R3 will be the primary for Group 2. (In production networks, you'll need to check the documentation for your software, because not all hardware platforms support multiple groups.) R2 is the Active for Group 1, which has a Virtual IP address of 172.12.23.11 /24. R3 is the Active for Group 2, which has a Virtual IP address of 172.12.23.12 /24. The key to load balancing with HSRP is to configure half of the hosts to use .11 as their gateway, and the remaining hosts should use .12.
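The static split between the two virtual gateways can be sketched as follows. The host addresses are hypothetical, and in practice the split is done by hand (or via DHCP scopes) when you configure each host's default gateway - there's no command that does this for you:

```python
def assign_gateways(hosts, gateways):
    """Static HSRP-style load sharing: alternate hosts between the
    virtual IPs of the configured groups (no dynamic rebalancing)."""
    return {h: gateways[i % len(gateways)] for i, h in enumerate(hosts)}

hosts = [f"172.12.23.{n}" for n in range(101, 107)]   # six hypothetical hosts
plan = assign_gateways(hosts, ["172.12.23.11", "172.12.23.12"])

print(plan["172.12.23.101"])  # -> 172.12.23.11 (Group 1's virtual IP, R2 active)
print(plan["172.12.23.102"])  # -> 172.12.23.12 (Group 2's virtual IP, R3 active)
```

Note that the split is fixed at configuration time - which is precisely why this isn't true 50/50 load balancing.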
This is not 50/50 load balancing, and if the hosts using .11 as their gateway are sending much more traffic than the hosts using .12, HSRP has no dynamic method of adapting. HSRP was really designed for redundancy, not load balancing, but there's no use in letting the standby router just sit there! Some other HSRP notes:
HSRP updates can be authenticated by using the standby command with the authentication option.
R2(config-if)#standby 5 ?
  authentication  Authentication
  ip              Enable HSRP and set the virtual IP address
  mac-address     Virtual MAC address
  name            Redundancy name string
  preempt         Overthrow lower priority designated routers
  priority        Priority level
  timers          Hello and hold timers
  track           Priority tracking
If you're configuring HSRP on a multilayer switch, you can configure HSRP on routed ports, SVIs, and L3 Etherchannels.
HSRP requires the Enhanced Multilayer Software Image (EMI) to run on an L3 switch. Gig Ethernet switches will have that image, but Fast Ethernet switches will have either the EMI or Standard Multilayer Image (SMI). Check your documentation. The SMI can be upgraded to the EMI. (Hint: It'll cost ya.) HSRP can run on Ethernet, Token Ring, and FDDI LANs. Some HSRP documentation states that Token Ring interfaces can support a maximum of three HSRP groups.
You saw several HSRP states in this example, but not all of them. Here they are, presented in order and with a quick description.

Disabled - Some HSRP documentation lists this as a state, others do not. I don't consider it one, but Cisco may. Disabled means that the interface isn't running HSRP yet.

Initial (Init) - The router goes into this state when an HSRP-enabled interface first comes up. HSRP is not yet running on a router in Initial state.

Learn - At this point, the router has a lot to learn! A router in this state has not yet heard from the active router, does not yet know which router is the active router, and doesn't yet know the virtual IP address, either. Other than that, it's pretty bright. ;)

Listen - The router now knows the virtual IP address, but is not the primary or the standby router. It's listening for hello packets from those routers.

Speak - The router is now sending Hello messages and is active in the election of the primary and standby routers.

Standby - The router is now a candidate to become the active router, and sends Hello messages.

Active - The router is now forwarding packets sent to the group's virtual IP address.

Note that an HSRP router doesn't send Hellos until it reaches the Speak state. It will continue to send Hellos in the Standby and Active states as well.
There's also no problem with configuring an interface to participate in multiple HSRP groups on most Cisco routers. Some 2500, 3000, and 4000 routers do not have this capability. Always verify with show standby, and note that this command indicates that there's a problem with one of the virtual IP addresses! R1#show standby FastEthernet0/0 - Group 1 State is Listen Virtual IP address is 172.12.23.10 Active virtual MAC address is unknown Local virtual MAC address is 0000.0c07.ac01 (v1 default) Hello time 3 sec, hold time 10 sec Preemption disabled Active router is unknown Standby router is unknown Priority 100 (default 100) IP redundancy name is "hsrp-Fa0/0-1" (default) FastEthernet0/0 - Group 5 State is Init (virtual IP in wrong subnet) Virtual IP address is 172.12.34.10 (wrong subnet for this interface) Active virtual MAC address is unknown Local virtual MAC address is 0000.0c07.ac05 (v1 default) Hello time 3 sec, hold time 10 sec Preemption disabled Active router is unknown Standby router is unknown Priority 100 (default 100) IP redundancy name is "hsrp-Fa0/0-5" (default)
HSRP Interface Tracking Using interface tracking can be a little tricky at first, but it's a feature that can really come in handy. Basically, this feature enables the HSRP process to monitor an additional interface; the status of this interface will dynamically change the HSRP priority for a specified group. When that interface's line protocol shows as "down", the HSRP priority of the router is reduced. This can lead to another HSRP router on the network becoming the active router - but that other router must be configured with the preempt option. In the following network, R2 is the primary due to its priority of 105. R3 has the default priority of 100. R2 will therefore be handling all the traffic sent to the virtual router's IP address of 172.12.23.10. That's fine, but there is a potential single point of failure.
If R2's Serial0 interface fails, the hosts will be unable to reach the server farm. HSRP can be configured to drop R2's priority if the line protocol of R2's Serial0 interface goes down, making R3 the primary router. (The default decrement in the priority when the tracked interface goes down is 10.)
R2(config)#interface ethernet0
R2(config-if)#standby 1 priority 105 preempt
R2(config-if)#standby 1 ip 172.12.23.10
R2(config-if)#standby 1 track serial0

R3(config)#interface ethernet0
R3(config-if)#standby 1 priority 100 preempt
R3(config-if)#standby 1 ip 172.12.23.10

R2#show standby
Ethernet0 - Group 1
  Local state is Active, priority 105, may preempt
  Hellotime 3 sec, holdtime 10 sec
  Next hello sent in 1.424
  Virtual IP address is 172.12.23.10 configured
  Active router is local
  Standby router is 172.12.23.3 expires in 9.600
  Virtual mac address is 0000.0c07.ac01
  2 state changes, last state change 00:01:38
  Priority tracking 1 interface, 1 up:
  Interface    Decrement    State
  Serial0      10           Up
R3#show standby Ethernet0 - Group 1 Local state is Standby, priority 100, may preempt Hellotime 3 sec, holdtime 10 sec Next hello sent in 0.624 Virtual IP address is 172.12.23.10 configured Active router is 172.12.23.2, priority 105 expires in 9.452 Standby router is local 1 state changes, last state change 00:01:33
The show standby output on R2 shows the tracked interface, the default decrement of 10, and that the line protocol of the tracked interface is currently up. We'll test the configuration by shutting the interface down manually.

R2(config-if)#int s0
R2(config-if)#shutdown
1d14h: %STANDBY-6-STATECHANGE: Ethernet0 Group 1 state Active -> Speak
1d14h: %LINK-5-CHANGED: Interface Serial0, changed state to administratively down
1d14h: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to down
R2#show standby Ethernet0 - Group 1 Local state is Standby, priority 95 (confgd 105), may preempt Hellotime 3 sec, holdtime 10 sec Next hello sent in 0.446 Virtual IP address is 172.12.23.10 configured Active router is 172.12.23.3, priority 100 expires in 9.148 Standby router is local 4 state changes, last state change 00:00:02 Priority tracking 1 interface, 0 up: Interface Decrement State Serial0 10 Down (administratively down)
Not only does the HSRP tracking work to perfection - R2 is now the standby and R3 the primary - but the show standby command even shows us that the line protocol is administratively down, rather than just "down". Running show standby on R3 verifies that R3 now sees itself as the Active router.
R3#show standby Ethernet0 - Group 1 Local state is Active, priority 100, may preempt Hellotime 3 sec, holdtime 10 sec Next hello sent in 0.706 Virtual IP address is 172.12.23.10 configured Active router is local Standby router is 172.12.23.2 expires in 8.816 Virtual mac address is 0000.0c07.ac01 2 state changes, last state change 00:02:34
We'll now reopen the Serial0 interface on R2. Since we also put the preempt option on that router's HSRP configuration, R2 should take over as the Active router. R2(config)#int s0 R2(config-if)#no shut 1d14h: %STANDBY-6-STATECHANGE: Ethernet0 Group 1 state Standby -> Active 1d14h: %LINK-3-UPDOWN: Interface Serial0, changed state to up 1d14h: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to up
R2#show standby Ethernet0 - Group 1 Local state is Active, priority 105, may preempt Hellotime 3 sec, holdtime 10 sec Next hello sent in 0.852 Virtual IP address is 172.12.23.10 configured Active router is local Standby router is 172.12.23.3 expires in 9.276 Virtual mac address is 0000.0c07.ac01 5 state changes, last state change 00:00:16 Priority tracking 1 interface, 1 up: Interface Decrement State Serial0 10 Up
Just that quickly, R2 is again the Active router. If you're running HSRP interface tracking, it's a very good idea to configure the preempt option on all routers in the HSRP group.

The #1 problem with an HSRP Interface Tracking configuration that is not working properly is a priority / decrement value problem. As I mentioned earlier, the default decrement is 10, and that's fine with the example we just worked through. If R2 had a priority of 120, the decrement of 10 would not be enough to make R3 the Active router.

You can change the default decrement at the end of the standby track command. The following configuration would result in a priority decrement of 25 when the tracked interface goes down.

R1(config)#int ethernet0
R1(config-if)#standby 5 track s0 ?
  Decrement value
R1(config-if)#standby 5 track s0 25
That does not change the decrement value for all interfaces - just the one we're tracking with that particular statement, serial0. If we configure a second interface for tracking and do not supply a decrement value, that interface will have a decrement value of 10. I've configured interface tracking for Serial1 as well and verified with show standby - here's the pertinent information:

Priority 65 (default 100)
Track interface Serial0 state Down decrement 25
Track interface Serial1 state Down decrement 10
Note that this interface's priority is now 65! It's using the HSRP priority default of 100, then has 25 decremented from that because serial0 is down, and then another 10 decremented because serial1 is down.
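That decrement arithmetic is simple enough to express in a couple of lines of Python - an illustration of the calculation from the output above, not an IOS feature:

```python
def effective_priority(configured, tracked):
    """HSRP effective priority: subtract the decrement of every
    tracked interface whose line protocol is down.
    'tracked' is a list of (is_up, decrement) pairs."""
    return configured - sum(dec for is_up, dec in tracked if not is_up)

# From the show standby output above: default priority 100,
# Serial0 down (decrement 25), Serial1 down (decrement 10).
print(effective_priority(100, [(False, 25), (False, 10)]))  # -> 65

# With both tracked interfaces up, the configured priority is untouched.
print(effective_priority(105, [(True, 25), (True, 10)]))    # -> 105
```

When troubleshooting, run this math against every router in the group: the standby takes over only if the degraded priority actually drops below the standby's priority (and preempt is on).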
Troubleshooting HSRP We've discussed several troubleshooting steps throughout the HSRP section, but the show standby command can indicate other HSRP issues as well. I've deliberately misconfigured HSRP on this router to illustrate a few. R1#show standby FastEthernet0/0 - Group 1 State is Active 2 state changes, last state change 01:08:58 Virtual IP address is 172.12.23.10 Active virtual MAC address is 0000.0c07.ac01 Local virtual MAC address is 0000.0c07.ac01 (v1 default) Hello time 3 sec, hold time 10 sec Next hello sent in 2.872 secs Preemption disabled Active router is local Standby router is unknown Priority 100 (default 100) IP redundancy name is "hsrp-Fa0/0-1" (default)
FastEthernet0/0 - Group 5 State is Init (virtual IP in wrong subnet) Virtual IP address is 172.12.34.10 (wrong subnet for this interface) Active virtual MAC address is unknown Local virtual MAC address is 0000.0c07.ac05 (v1 default) Hello time 3 sec, hold time 10 sec Preemption disabled Active router is unknown Standby router is unknown Priority 75 (default 100) Track interface Serial0/0 state Down decrement 25 IP redundancy name is "hsrp-Fa0/0-5" (default)
We've got all sorts of problems here! In the Group 5 readout, we see a message that the subnet is incorrect; naturally, both the active and standby routers are going to be unknown. In the Group 1 readout, the Active router is local but the Standby is unknown. This is most likely a misconfiguration on our part as well, but along with checking the HSRP config, always remember "Troubleshooting starts at the Physical layer!" One Physical layer issue with HSRP I've run into in both practice labs and production networks is an unusual number of state transitions. You can spot this and most other HSRP issues with debug standby.

R1#debug standby
*Apr 9 20:15:10.542: HSRP: Fa0/0 API MAC address update
*Apr 9 20:15:10.546: HSRP: Fa0/0 API Software interface coming up
*Apr 9 20:15:10.550: HSRP: Fa0/0 API Add active HSRP addresses to ARP table
*Apr 9 20:15:10.554: HSRP: Fa0/0 API Add active HSRP addresses to ARP table
R1#
*Apr 9 20:15:11.648: %SYS-5-CONFIG_I: Configured from console by console
*Apr 9 20:15:12.541: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
R1#
*Apr 9 20:15:12.541: HSRP: API Hardware state change
*Apr 9 20:15:12.541: HSRP: Fa0/0 API Software interface coming up
*Apr 9 20:15:12.545: HSRP: Fa0/0 API Add active HSRP addresses to ARP table
*Apr 9 20:15:13.483: HSRP: Fa0/0 Interface up
*Apr 9 20:15:13.483: HSRP: Fa0/0 Starting minimum interface delay (1 secs)
*Apr 9 20:15:13.543: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to up
R1#
*Apr 9 20:15:14.485: HSRP: Fa0/0 Interface min delay expired
*Apr 9 20:15:14.485: HSRP: Fa0/0 Grp 1 Init: a/HSRP enabled
*Apr 9 20:15:14.485: HSRP: Fa0/0 Grp 1 Init -> Listen
*Apr 9 20:15:14.485: HSRP: Fa0/0 Grp 1 Redundancy "hsrp-Fa0/0-1" state Init -> Backup
This is an extremely verbose command, and a very helpful one. If you have the opportunity to run HSRP in a lab environment, run this debug often during your configuration to see the different states and values being passed around the network. (Never practice debugs at work or in any other production environment.) If you see HSRP states transitioning regularly, particularly between Speak and Standby, check your cabling - you'd be surprised how often that's the culprit, especially in labs.

Frankly, most HSRP issues you run into fall into these two categories:

The secondary router didn't become the Active router when it should have.
The former Active router didn't take back over when it came back online.

If either of those happens to you, check these values:

Is the preempt command properly configured? (I put this first in the list for a reason.)
What are the priority values of each HSRP speaker?
Watch your decrement values with HSRP interface tracking. Don't get cute with these - if you're having a problem with interface tracking and you see decrements that don't end in 0 or 5, I can practically guarantee they're misconfigured. (This happens fairly often, especially in lab environments.)

Whew! That's a lot of detail - and that's only one of our redundancy choices. Great news, though - many of the HSRP concepts you're currently mastering are the same as or similar to what we're doing with VRRP - the Virtual Router Redundancy Protocol.
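For those first two checks, the usual fix is a priority and preempt configuration on the router that should win the election - a minimal sketch, with the interface, group number, and priority value assumed for illustration:

R2(config)#int fast 0/0
R2(config-if)#standby 5 priority 150
R2(config-if)#standby 5 preempt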
Virtual Router Redundancy Protocol

Defined in RFC 2338, VRRP is the open-standard equivalent of the Cisco-proprietary HSRP. VRRP works very much like HSRP, and is well suited to a multivendor environment. The operation of the two is so similar that you basically learned VRRP while going through the HSRP section! There are some differences, a few of which are:
VRRP's equivalent to HSRP's Active router is the Master router. (Some VRRP documentation refers to this router as the IP Address Owner.) This is the router that has the virtual router's IP address as a real IP address on the interface it will receive packets on.

The physical routers in a VRRP Group combine to form a Virtual Router. Note that the VRRP Virtual Router uses an IP address already configured on a router in its group, as opposed to how the HSRP virtual router is assigned a separate IP address.

VRRP Advertisements are multicast to 224.0.0.18.

VRRP's equivalent to HSRP's Standby router state is the Backup state.

The MAC address of VRRP virtual routers is 00-00-5e-00-01-xx, and - you guessed it - the xx is the group number in hexadecimal.

preempt is a default setting for VRRP routers.

As of IOS Version 12.3(2)T, VRRP has an Object Tracking feature. Similar to HSRP's Interface Tracking feature, a WAN interface can be tracked and a router's VRRP priority dropped when that interface goes down.
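A minimal VRRP configuration looks almost identical to its HSRP counterpart - the addresses and group number here are assumptions for illustration:

R1(config)#int fast 0/0
R1(config-if)#vrrp 1 ip 172.12.23.1
R1(config-if)#vrrp 1 priority 150

Note the absence of a vrrp 1 preempt command - preemption is on by default in VRRP.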
Gateway Load Balancing Protocol (GLBP)

HSRP and its open-standard relation VRRP have some great features, but accurate load balancing is not among them. While both allow a form of load sharing, it's not true load balancing. The primary purpose of the Gateway Load Balancing Protocol (GLBP) is just that - load balancing! It's also suitable for use only on Cisco routers, because GLBP is Cisco-proprietary. As with HSRP and VRRP, GLBP routers will be placed into a router
group. However, GLBP allows every router in the group to handle some of the load in a round-robin format, rather than having a primary router handle all of it while the standby routers remain idle. With GLBP, the hosts think they're sending all of their data to a single gateway, but actually multiple gateways are in use at one time. That's a major benefit of GLBP over HSRP and VRRP, since the latter two aren't really built for load balancing. They don't perform any balancing by default, and configuring it is both an inexact science and a pain in the behind. GLBP also allows standard configuration of the hosts, who will all have their gateway address set to the virtual router's address - none of this "some hosts point to gateway A, some hosts point to gateway B" business we had with HSRP load balancing. The key to GLBP is that when a host sends an ARP request for the MAC of the virtual router, one of the physical routers will answer with its own MAC address. The host will then have the IP address of the GLBP virtual router and the MAC address of a physical router in the group. In the following illustrations, the three hosts send an ARP request for the MAC of the virtual router.
The Active Virtual Gateway (AVG) will be the router with the highest GLBP priority, and this router will send back ARP responses containing different virtual MAC addresses. The three hosts will have the same Layer 3 address for their gateway, but a different L2 address, accomplishing the desired load balancing while allowing standard configuration on the hosts. (If the routers all have the same GLBP priority, the router with the highest IP address will become the AVG.) In the following illustration, R3 is the AVG and has assigned a virtual MAC of 22-22-22-22-22-22 to R2, 33-33-33-33-33-33 to itself, and 44-44-44-44-44-44 to R4. The routers receiving and forwarding traffic sent to these virtual MAC addresses are Active Virtual Forwarders (AVFs).
If the AVG fails, the router serving as the standby AVG will take over. If any of the AVFs fails, another router will handle the load destined for a MAC on the downed router. GLBP routers use Hellos to detect whether other routers in their group are available or not. A GLBP group can have up to four AVFs forwarding traffic at one time. GLBP's load balancing also offers the opportunity to fine-tune it to your
network's needs. GLBP offers three different forms of MAC address assignment, the default being round-robin. With round-robin assignment, a host that sends an ARP request receives a response containing the next virtual MAC address in line. If a host needs the same gateway MAC address every time it sends an ARP request, host-dependent load balancing is the way to go. Weighted MAC assignments affect the percentage of traffic that will be sent to a given AVF - the higher the assigned weight, the more often that particular router's virtual MAC will be sent to a requesting host.

GLBP is enabled just as VRRP and HSRP are - by assigning an IP address to the virtual router. The following command assigns the address 172.1.1.10 to GLBP group 5.

MLS(config-if)# glbp 5 ip 172.1.1.10
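The load balancing method is set per group with the glbp load-balancing command, with round-robin the default; the three keywords match the three methods just described (the help text shown here is approximate):

MLS(config-if)# glbp 5 load-balancing ?
  host-dependent  Load balance equally, source MAC determines forwarder choice
  round-robin     Load balance equally using each forwarder in turn
  weighted        Load balance in proportion to forwarder weighting
MLS(config-if)# glbp 5 load-balancing weighted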
To change the GLBP priority, use the glbp priority command. To allow the local router to preempt the current AVG, use the glbp preempt command.

MLS(config-if)# glbp 5 priority 150
MLS(config-if)# glbp 5 preempt
GLBP Weighting

A router can be configured to give up its role as AVF if its overall weight drops below a configured value. The default weight of a GLBP AVF is 100. The router is configured with upper and lower weight thresholds; should the router's weight fall below the lower threshold, it gives up the role of AVF. When the router's GLBP weight exceeds the upper threshold, it resumes the role of AVF.

Before configuring the GLBP-specific commands, we configure track statements to number and name the interfaces being tracked. IOS Help shows us our options:

R1(config)#track ?
  <1-500>     Tracked object
  resolution  Tracking resolution parameters
  timer       Polling interval timers
R1(config)#track 1 ?
  interface  Select an interface to track
  ip         IP protocol
  list       Group objects in a list
  rtr        Response Time Reporter (RTR) entry

R1(config)#track 1 interface ?
  Async              Async interface
  BVI                Bridge-Group Virtual Interface
  CDMA-Ix            CDMA Ix interface
  CTunnel            CTunnel interface
  Dialer             Dialer interface
  FastEthernet       FastEthernet IEEE 802.3
  Lex                Lex interface
  Loopback           Loopback interface
  MFR                Multilink Frame Relay bundle interface
  Multilink          Multilink-group interface
  Port-channel       Ethernet Channel of interfaces
  Serial             Serial
  Tunnel             Tunnel interface
  Vif                PGM Multicast Host interface
  Virtual-PPP        Virtual PPP interface
  Virtual-TokenRing  Virtual TokenRing
  XTagATM            Extended Tag ATM interface

R1(config)#track 1 interface serial 0/0 ?
  ip             IP parameters
  line-protocol  Track interface line-protocol

R1(config)#track 1 interface serial 0/0 line-protocol ?
The choices at the end, ip and line-protocol, determine what is being tracked. The line-protocol option does just what you'd think - it tracks the line protocol to see whether it's up or not. The ip option is followed only by routing, as shown in the next track statement.

R1(config)#track 2 interface serial 0/1 ip ?
  routing  Track interface IP routing capability
R1(config)#track 2 interface serial 0/1 ip routing ?
R1(config)#track 2 interface serial 0/1 ip routing
After taking a look at our options with IOS Help, I'll configure a GLBP weight of 105 on the fast 0/0 interface, a lower threshold of 90, and an upper threshold of 100. I'll set a decrement of 10 for both tracking statements created earlier.
R1(config)#int fast 0/0
R1(config-if)#glbp ?
  <0-1023>  Group number
R1(config-if)#glbp 1 ?
  authentication  Authentication method
  forwarder       Forwarder configuration
  ip              Enable group and set virtual IP address
  load-balancing  Load balancing method
  name            Redundancy name
  preempt         Overthrow lower priority designated routers
  priority        Priority level
  timers          Adjust GLBP timers
  weighting       Gateway weighting and tracking
R1(config-if)#glbp 1 weighting ?
  <1-254>  Weighting maximum value
  track    Interface tracking
R1(config-if)#glbp 1 weighting 105 ?
  lower  Weighting lower threshold
  upper  Weighting upper threshold
R1(config-if)#glbp 1 weighting 105 lower 90 upper 100
R1(config-if)#glbp 1 weighting track 1 decrement 10
R1(config-if)#glbp 1 weighting track 2 decrement 10
Server Load Balancing

We've talked at length about how Cisco routers and multilayer switches can work to provide router redundancy - but there's another helpful service, Server Load Balancing (SLB), that does the same for servers. While HSRP, VRRP, and GLBP all represent multiple physical routers to hosts as a single virtual router, SLB represents multiple physical servers to hosts as a single virtual server. In the following illustration, three physical servers have been placed into the SLB group ServFarm. They're represented to the hosts as the virtual server 210.1.1.14.
The hosts will seek to communicate with the server at 210.1.1.14, not knowing that they're actually communicating with the servers in ServFarm. This allows a quick cutover if one of the physical servers goes down, and also serves to hide the actual IP addresses of the servers in ServFarm.

The basic operation of SLB involves creating the server farm, followed by creating the virtual server. We'll first add 210.1.1.11 to the server farm:

MLS(config)# ip slb serverfarm ServFarm
MLS(config-slb-sfarm)# real 210.1.1.11
MLS(config-slb-real)# inservice
The first command creates the server farm, with the real command specifying the IP address of the real server. The inservice command is required before SLB will consider the server ready to handle the server farm's workload. The real and inservice commands should be repeated for each server in the server farm. To create the virtual server:
MLS(config)# ip slb vserver VIRTUAL_SERVER
MLS(config-slb-vserver)# serverfarm ServFarm
MLS(config-slb-vserver)# virtual 210.1.1.14
MLS(config-slb-vserver)# inservice
From the top down, the vserver was named VIRTUAL_SERVER, and it represents the server farm ServFarm. The virtual server is assigned the IP address 210.1.1.14, and connections are allowed once the inservice command is applied.

You may also want to control which of your network hosts can connect to the virtual server. If hosts or subnets are named with the client command, those will be the only clients that can connect to the virtual server. Note that this command uses wildcard masks. The following configuration would allow only the hosts on the subnet 210.1.1.0 /24 to connect to the virtual server.

MLS(config-slb-vserver)# client 210.1.1.0 0.0.0.255
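To verify the SLB configuration, two show commands are worth knowing - one for the physical servers and one for the virtual server (the exact output format varies by platform and IOS version):

MLS# show ip slb reals
MLS# show ip slb vservers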
Network Monitoring Tools

Actively monitoring your network is another part of a high-availability design - and the following tools and protocols play an important role in that monitoring.

SNMP

The Simple Network Management Protocol is used to carry network management information between network devices, and you're going to find it in just about any network that uses anyone's network management tools - particularly Cisco's. An SNMP deployment basically consists of three parts:

A monitoring device, the SNMP Manager
The SNMP instance running on the monitored devices, officially called the SNMP Agents
A database containing all of this info, the Management Information
Bases (MIB)

The Manager will question the Agents at a configurable interval, basically asking if there are any problems the Manager needs to know about ("polling").
This is a proactive approach (sorry for the buzzword), but the only way to get near-immediate notification of critical events through polling alone is to poll the Agents quite often - and that can consume significant bandwidth while also increasing the load on the managed device's CPU. To work around those issues, we can configure SNMP managed devices to send traps when certain events occur.
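Trap generation on the monitored device takes two commands - one to enable the traps and one to name the trap receiver (the manager's IP address and the community string here are assumptions for illustration):

R1(config)#snmp-server enable traps
R1(config)#snmp-server host 172.12.123.1 CCNP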
There are some serious security considerations with SNMP and the different versions available to us. There are three versions of SNMP: v1, v2c, and v3. Version 3 has both authentication and encryption capabilities, where the earlier versions do not. Use version 3 whenever possible; use of the other versions should be restricted to allowing read-only access. How do you do that? When you configure SNMP community strings - a kind of combination of password and authority level - you'll have the option to configure the string as read-only or read-write.
R1(config)#snmp-server community ?
  WORD  SNMP community string
R1(config)#snmp-server community CCNP ?
  <1-99>       Std IP accesslist allowing access with this community string
  <1300-1999>  Expanded IP accesslist allowing access with this community string
  WORD         Access-list name
  ipv6         Specify IPv6 Named Access-List
  ro           Read-only access with this community string
  rw           Read-write access with this community string
  view         Restrict this community to a named MIB view
R1(config)#snmp-server community CCNP ro ?
  <1-99>       Std IP accesslist allowing access with this community string
  <1300-1999>  Expanded IP accesslist allowing access with this community string
  WORD         Access-list name
  ipv6         Specify IPv6 Named Access-List
R1(config)#snmp-server community CCNP ro 15
That command allows hosts identified by access-list 15 to have read-only access to all SNMP objects specified by this community string. Restrict read-write access to your SNMP objects as much as is practical with your network and network personnel.

To tell the monitored device where to send its traps, name the SNMP server with the snmp-server host command:

R1(config)#snmp-server host < IP address of SNMP server > < community string >
Syslog Syslog delivers messages about network events in what a friend once called "kinda readable format". These messages can be really helpful in figuring out what just happened in both your home lab and production network - you just have to remain calm and read the message. That sounds flip, but I've seen and heard plenty of network admins panic because something's gone wrong in their network, and they don't know what it is, and they totally miss the Syslog message on their screen. True, that message can be hidden in a batch of other output, but I bet it's
there - and as we've seen a few times in this course, the message may well spell out exactly what the problem is.

While part of the Syslog message will be in easily understood text, part of it's going to have some numbers and dashes in it. After we take a look at the different severity levels and some sample configurations, we'll "decipher" the "kinda readable" part of a Syslog message.

Logging To A Host

The basic command for sending logging messages to a specific host is straightforward, but I've found the level command that goes with it sometimes trips Cisco network admins up. Let's take a look at the different logging options with IOS Help. The commands we're focusing on are at the very top and very bottom of the IOS Help options.

R1(config)#logging ?
  Hostname or A.B.C.D  IP address of the logging host
  buffered             Set buffered logging parameters
  console              Set console logging level
  exception            Limit size of exception flush output
  facility             Facility parameter for syslog messages
  history              Configure syslog history table
  host                 Set syslog server host name or IP address
  monitor              Set terminal line (monitor) logging level
  on                   Enable logging to all supported destinations
  rate-limit           Set messages per second limit
  source-interface     Specify interface for source address in logging transactions
  trap                 Set syslog server logging level
Identifying the logging host is easy enough - we just need to follow logging with the hostname or IP address of that host. It's the trap command you have to watch, since that sets the logging level itself.

R1(config)#logging 172.12.123.1
R1(config)#logging trap ?
  <0-7>          Logging severity level
  alerts         Immediate action needed            (severity=1)
  critical       Critical conditions                (severity=2)
  debugging      Debugging messages                 (severity=7)
  emergencies    System is unusable                 (severity=0)
  errors         Error conditions                   (severity=3)
  informational  Informational messages             (severity=6)
  notifications  Normal but significant conditions  (severity=5)
  warnings       Warning conditions                 (severity=4)
Selecting a trap level means that all log messages of the severity you configure and all those with a lower numeric value are sent to the logging server. If you want all log messages to be sent, you don't have to enter every number - just 7, the debugging level, which is the highest numeric level.

I've occasionally seen instances where the desired log messages were not being sent to the server. The first thing you should check is the logging trap level - if you want debug-level logs sent to the server, you must specify that level.

R1(config)#logging trap 7
You can use either the severity name or the number behind logging trap - just make sure to set the level high enough to get the desired results!

Let's take a "kinda typical" Syslog message that you've seen quite often by this point in your studies and examine it closely.

5d05h: %SYS-5-CONFIG_I: Configured from console by console
About as commonplace as it gets, right? Those are some odd characters at the beginning, though. The very beginning of that is the timestamp, which you can set to different formats with the service timestamps command. Right now it's set to uptime, showing that this router's been up for five days and five hours. I prefer the datetime option, which I'll show you here along with the syntax of the command via IOS Help:

R3(config)#service timestamps ?
  debug  Timestamp debug messages
  log    Timestamp log messages
R3(config)#service timestamps log ?
  datetime  Timestamp with date and time
  uptime    Timestamp with system uptime
R3(config)#service timestamps log datetime
R3(config)#^Z
R3#
*Mar 6 05:42:35: %SYS-5-CONFIG_I: Configured from console by console
Note the immediate change of the timestamp format. The "SYS" in that message is the facility; "SYS" indicates a System message. When you're configuring routing protocols, you'll see
messages with "OSPF", "EIGRP", or "RIP" there. The "5" is the severity, which in this case is the "Notifications" level. That's followed by the mnemonic ("CONFIG_I" in this case) and the message-text, which is the final part of the Syslog message.

Cisco SLA

In your Frame Relay studies, you were introduced to the Committed Information Rate (CIR). The CIR is basically a guarantee given to the customer by the Frame Relay service provider, where the provider says...

"For X dollars, we guarantee you'll get Y amount of bandwidth. You may get more, but we guarantee you won't get less."

Given that guarantee of minimum performance, the customer can then plan the WAN appropriately. The SLA is much the same, only this agreement (the Service Level Agreement, to be precise) can be between different groups...

... it can be much like the CIR, where a service provider guarantees a certain level of overall network uptime and performance...

... or it can be between the internal clients of a company and the network team at the same company. The SLA can involve bandwidth minimums, but it can involve just about any quality-related value in your network, including acceptable levels of jitter in voice networks.

From Cisco's "IOS IP Service Level Agreements" website:

"With Cisco IOS IP SLAs, users can verify service guarantees, increase network reliability by validating network performance, proactively identify network issues, and increase Return on Investment (ROI) by easing the deployment of new IP services."

"Cisco IOS IP SLAs use active monitoring to generate traffic in a continuous, reliable, and predictable manner, thus enabling the measurement of network performance and health."

Now that's quite an agreement.
According to the same site, a typical SLA contains the following assurances:

Network availability percentage
Network performance (often measured by round-trip delay)
Latency, jitter, packet loss, DNS lookup time
Trouble notification response time
Resolution time
Reimbursement schedule when the above assurances are not met

Cisco IOS IP SLA usage includes:

Performance visibility
SLA monitoring
IP service network health readiness
Edge-to-edge network availability monitoring
Business-critical app performance monitoring
Network operation troubleshooting

There are two parties involved in the overall SLA process, the self-explanatory Source and Responder. Once we configure SLA, the Source kicks off the process...

Control packets are sent to the Responder on UDP port 1967 in an attempt to create a control connection similar to that of FTP - it's basically an agreement on the rules of communication. In this case, the rules sent to the Responder are the protocol and port number it should listen for and a time value indicating how long it should listen. If the Responder agrees to the rules, it will send a message back to the Source indicating its agreement, and will then start listening! (If the Responder doesn't agree, it will indicate that as well, and our story ends here.)
We now go from controlling to probing, as the Source sends some test packets to the Responder. What's the Source testing? The approximate length of time it takes the Responder to - you guessed it - respond! The Responder adds timestamps both as the packets are accepted and as they're returned to the Source. This gives the Source a better idea of the overall time it took the Responder to process the packets.

Some notes regarding the time measurements going on here...

It bears repeating that the Responder places two timestamps on the returned packets - one indicating when the packet arrived and another indicating when it left. This allows the Source to determine how long it took the Responder to process the packets. This dual timestamping also allows the Source to determine whether there were any delays as the packet went from Source to Responder or vice versa. (Similar to our old friends BECN and FECN from Frame Relay.)

All of this time measurement and timestamping only works if the involved devices have the same time set - and that's what NTP is all about. The Network Time Protocol is not a part of your CCNP SWITCH exam studies, but it is important to know for working in real-world networks. I have an idea it might show up on other exams, too!

There's some excellent information on Cisco's SLA white paper site:

http://bit.ly/ahY5dh

Configuring SLA can be a bit tricky - I've seen the IOS commands vary more than usual between IOS versions. While the basic process is the same...

Config the probe and send it
Config the object to be probed

... the commands to get you there vary. Here's a Cisco PDF on the subject:

http://www.cisco.com/en/US/docs/ios/12_4/ip_sla/configuration/guide/hsicmp.html
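Just to give you the flavor of it, here's a sketch of a basic ICMP echo probe using the 12.4 "ip sla monitor" syntax - remember, the commands vary between IOS versions, and the target address and frequency here are assumptions for illustration:

! On the Responder (needed for operations like udp-jitter; a plain
! ICMP echo probe will get answers from any IP device):
R2(config)#ip sla monitor responder

! On the Source - an ICMP echo probe sent every 60 seconds:
R1(config)#ip sla monitor 1
R1(config-sla-monitor)#type echo protocol ipIcmpEcho 172.12.123.2
R1(config-sla-monitor-echo)#frequency 60
R1(config-sla-monitor-echo)#exit
R1(config)#ip sla monitor schedule 1 life forever start-time now
R1#show ip sla monitor statistics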
I certainly wouldn't memorize every single step, but knowing the basic commands ("ip sla monitor" and "show ip sla statistics", for example) certainly couldn't hurt.

Configuring A Cisco Multilayer Switch As A DHCP Server

As a CCNA and future CCNP, you're familiar with the basic purpose of DHCP and the basic process a host goes through in order to get an IP address from a DHCP server... but let's review it anyway!

The initial step has the DHCP client sending a broadcast packet, a DHCPDiscover packet, that allows the host to discover where the DHCP servers are. The DHCP servers that receive that DHCPDiscover packet will respond with a DHCPOffer packet. This packet contains an IP address, the time the host can keep the address (the "lease"), a default gateway, and other information as configured by the DHCP server admin.

If the host receives DHCPOffer packets from multiple DHCP servers, the first DHCPOffer packet received is the one accepted. The host accepts this offer with a DHCPRequest packet, which is also a broadcast packet. The DHCP server whose offered IP address is being accepted sends a unicast DHCPAck (for "acknowledgement") back to the host.

Note that two broadcast packets are involved in the DHCP address assignment process. You remember from your CCNA studies that there is a certain circumstance where broadcasts may not reach their intended destination. I'll remind you what that is and show you how we're going to get around it later in this section.

The commands to configure a Cisco router and a multilayer switch as a DHCP server are the same. There's one "gotcha" involved in using an L3 switch as a DHCP server that I'll point out as we dive into the config. Careful planning is the first step to success when working with Cisco routers, and that's particularly true of a DHCP deployment. We may have a situation where we want to exclude a certain range of addresses from the DHCP pool, and oddly enough, we need to do that in global configuration mode.
Actually, let me point out that "gotcha" right here - when you use an L3 switch as a DHCP server, the switch must have an IP address in any subnet that it's offering addresses from.

Let's say we are going to assign addresses to DHCP clients from the 11.0.0.0 /8 range, but we don't want to assign the addresses 11.1.1.1 - 11.1.1.255. We need to use the ip dhcp excluded-address command to do that, and again, that's a global command. (I mention that twice because it drives everyone crazy - it's very easy to forget!)

R1(config)#ip dhcp excluded-address ?
  A.B.C.D  Low IP address
R1(config)#ip dhcp excluded-address 11.1.1.1 ?
  A.B.C.D  High IP address
R1(config)#ip dhcp excluded-address 11.1.1.1 11.1.1.255 ?
R1(config)#ip dhcp excluded-address 11.1.1.1 11.1.1.255
Note that there are no masks used in this command - just the numerically lowest and highest IP addresses in the excluded range.

Finally, we're ready to create the DHCP pool! We enter DHCP config mode with the ip dhcp pool command, followed by the name we want the pool to have.

R1(config)#ip dhcp pool ?
  WORD  Pool name
R1(config)#ip dhcp pool NETWORK11
R1(dhcp-config)#
The range of addresses to be assigned to the clients is defined with the network command. Note that for the mask, we're given the option of entering the value in either prefix notation or the more familiar dotted decimal.

R1(dhcp-config)#network ?
  A.B.C.D  Network number in dotted-decimal notation
R1(dhcp-config)#network 11.0.0.0 ?
  /nn or A.B.C.D  Network mask or prefix length
R1(dhcp-config)#network 11.0.0.0 /8
We can specify a domain name with the domain-name command, and give the clients the location of DNS servers with the dns-server command. The DNS servers can be referred to by either their hostname or IP address.

R1(dhcp-config)#domain-name ?
  WORD  Domain name
R1(dhcp-config)#domain-name bryantadvantage.com
R1(dhcp-config)#dns-server ?
  Hostname or A.B.C.D  Server's name or IP address
R1(dhcp-config)#dns-server 11.1.1.255
To specify a default router for the clients, use the default-router command.

R1(dhcp-config)#default-router ?
  Hostname or A.B.C.D  Router's name or IP address
R1(dhcp-config)#default-router 11.1.1.100
Not only can you specify the length of the DHCP address lease, you can be really specific about that value - down to the minute! The lease can also be made indefinite with the infinite option.

R1(dhcp-config)#lease ?
  <0-365>   Days
  infinite  Infinite lease
R1(dhcp-config)#lease 30 ?
  <0-23>  Hours
R1(dhcp-config)#lease 30 23 ?
  <0-59>  Minutes
R1(dhcp-config)#lease 30 23 59 ?
R1(dhcp-config)#lease 30 23 59
At the beginning of this section, I mentioned that a Cisco router acting as a DHCP server will check for IP address conflicts before assigning an IP address. This check consists of the router sending two ping packets to an IP address before assigning that address; those pings will time out in 500 milliseconds.
If the ping times out, the address will be assigned. If an echo returns, obviously that address should not and will not be assigned! If you want to change either the number of pings sent or the ping timeout value during this process, use the ip dhcp ping packets and ip dhcp ping timeout commands. Note that these are global commands as well. You can also disable this pinging by entering zero for the ping packets value.

R1(config)#ip dhcp ping packets ?
  <0-10>  Number of ping packets (0 disables ping)
R1(config)#ip dhcp ping packets 5
R1(config)#ip dhcp ping timeout ?
  <100-10000>  Ping timeout in milliseconds
R1(config)#ip dhcp ping timeout 1000
R1(config)#
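Once the server is handing out addresses, two show commands are worth knowing - one for the current leases, and one for any conflicts detected by that ping check:

R1#show ip dhcp binding
R1#show ip dhcp conflict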
Finally, if you need to disable the DHCP service on this router, run the no service dhcp command. It can be reenabled at any time with the service dhcp command. (DHCP capabilities should be enabled by default, but it never hurts to make sure.)

Now, about those broadcasts....

IP Helper Addresses
While routers accept and generate broadcasts, they do not forward them. That can present quite a problem with DHCP requests when a router is between the requesting host and the DHCP server. The initial step in the DHCP process has the host generating a DHCPDiscover packet - and that packet is a broadcast.
If this PC attempts to locate a DHCP server with a broadcast, the broadcast will be stopped by the router and will never get to the DHCP server. By configuring the ip helper-address command on the router, UDP broadcasts such as this will be translated into a unicast by the router, making the communication possible. The command should be configured on the interface that will be receiving the broadcasts - not the interface closest to the destination device.

R1(config)#int e0
R1(config-if)#ip helper-address ?
  A.B.C.D  IP destination address
R1(config-if)#ip helper-address 100.1.1.2
A Cisco router running the ip helper-address command is said to be acting as a DHCP Relay Agent, but DHCP messages are not the only broadcasts being relayed to the correct destination. Nine common UDP service broadcasts are "helped" by default:
TIME, port 37
TACACS, port 49
DNS, port 53
BOOTP/DHCP Server, port 67
BOOTP/DHCP Client, port 68
TFTP, port 69
NetBIOS name service, port 137
NetBIOS datagram service, port 138
IEN-116 name service, port 42
That's going to cover most scenarios where the ip helper-address command will be useful, but what about those situations where the broadcast you need forwarded is not on this list? You can use the ip forward-protocol command to add any UDP port number to the list. To remove protocols from the default list, use the no ip forward-protocol command. In the following example, we'll add the Network Time Protocol port to the forwarding list while removing the NetBIOS ports. Remember, you can use IOS Help to get a list of commonly forwarded ports!

R1(config)#ip forward-protocol udp ?
  Port number
  biff         Biff (mail notification, comsat, 512)
  bootpc       Bootstrap Protocol (BOOTP) client (68)
  bootps       Bootstrap Protocol (BOOTP) server (67)
  discard      Discard (9)
  dnsix        DNSIX security protocol auditing (195)
  domain       Domain Name Service (DNS, 53)
  echo         Echo (7)
  isakmp       Internet Security Association and Key Management Protocol (500)
  mobile-ip    Mobile IP registration (434)
  nameserver   IEN116 name service (obsolete, 42)
  netbios-dgm  NetBios datagram service (138)
  netbios-ns   NetBios name service (137)
  netbios-ss   NetBios session service (139)
  ntp          Network Time Protocol (123)
  pim-auto-rp  PIM Auto-RP (496)
  rip          Routing Information Protocol (router, in.routed, 520)
  snmp         Simple Network Management Protocol (161)
  snmptrap     SNMP Traps (162)
  sunrpc       Sun Remote Procedure Call (111)
  syslog       System Logger (514)
  tacacs       TAC Access Control System (49)
  talk         Talk (517)
  tftp         Trivial File Transfer Protocol (69)
  time         Time (37)
  who          Who service (rwho, 513)
  xdmcp        X Display Manager Control Protocol (177)
R1(config)#ip forward-protocol udp 123
R1(config)#no ip forward-protocol udp 137
R1(config)#no ip forward-protocol udp 138
The DHCP Relay Agent And "Option 82"
In many cases, simply configuring the appropriate ip helper-address command on a Cisco router is enough to get the desired results when it comes to DHCP. In some networks, particularly larger ones, you may find it necessary for the DHCP Relay Agent - in this case, the Cisco router - to insert information about itself in the DHCP packets being forwarded to the server. On Cisco routers, this is done by enabling the Relay Agent Information Option, also known as Option 82. (No word on what happened to the first 81 options.) This is enabled with the ip dhcp relay information option command.

R1(config)#ip dhcp relay information ?
  check   Validate relay information in BOOTREPLY
  option  Insert relay information in BOOTREQUEST
  policy  Define reforwarding policy

R1(config)#ip dhcp relay information option
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
The Bryant Advantage CCNP SWITCH Study Guide Chris Bryant, CCIE #12933
www.thebryantadvantage.com
IP Telephony & Voice VLANs Overview Cisco IP Phone Basics Voice VLANs Voice And Switch QoS DiffServ At Layer 2 DiffServ At Layer 3 Trust Or No Trust? Power Over Ethernet
If you don't have much (or any) experience with Voice Over IP (VoIP) yet, you're okay for now - you'll be able to understand this chapter with no problem. I say "for now" because all of us need to know some basic VoIP. Voice and security are the two fastest-growing sectors of our business. They're not going to slow down anytime soon, either. Once you're done with your CCNP, I urge you to look into a Cisco voice certification. There are plenty of good vendor-independent VoIP books on the market as well. Most Cisco IP phones will have three ports. One will be connected to a Catalyst switch, another to the phone ASIC, and another will be an access port that will connect to a PC.
As is always the case with voice or video traffic, the key here is getting the voice traffic to its destination as quickly as possible in order to avoid jitter and unintelligible voice streams. ("Jitter" occurs when there's a delay in transmitting voice or video traffic, perhaps due to improper queueing.) With Cisco IP Phones, there is no special configuration needed on the PC - as far as the PC's concerned, it is attached directly to the switch. The PC is unaware that it's actually connected to an IP Phone. The link between the switch and the IP Phone can be configured as a trunk or an access link. Configuring this link as a trunk gives us the advantage of creating a voice VLAN that will carry nothing but voice traffic while allowing the highest Quality of Service possible, giving the delay-sensitive voice traffic priority over "regular" data handled by the switch. Configuring the link as an access link results in voice and data traffic being carried in the same VLAN, which can lead to delivery problems with the voice traffic. The problem isn't that the voice traffic will not get to the switch - it simply may take too long. Voice traffic is much more delay-sensitive than data traffic.
The phrase "delay-sensitive" is vague, so let's consider this: The human
ear will only accept 140 - 150 milliseconds of delay before it notices a problem with voice delivery. That's how long we have to get the voice traffic from source to destination before the voice quality is compromised.

Voice VLANs

When it comes to the link between the switch and the IP Phone, we've got four choices:
Configure the link as an access link
Configure the link as a trunk link and use 802.1p
Configure the link as a trunk link and do not tag voice traffic
Configure the link as a trunk link and specify a Voice VLAN
If we configure the link as an access link, the voice and data traffic is transmitted in the same VLAN. It's recommended you make the port a trunk whenever possible. This will allow you to create a Voice VLAN, which will be separate from the regular data VLAN on the same link. The creation of Voice VLANs also makes it much easier to give the delay-sensitive voice traffic priority over "regular" data flows. The command to create a voice VLAN is a simple one - it's the choices that take a little getting used to. The "PVID" shown in these options is the Port VLAN ID, which identifies the data VLAN.

SW2(config-if)#switchport voice vlan ?
  Vlan for voice traffic
  dot1p     Priority tagged on PVID
  none      Don't tell telephone about voice vlan
  untagged  Untagged on PVID
Let's look at these options from top to bottom. The first option - simply supplying a VLAN number - creates a voice VLAN and will create a dot1q trunk between the switch and the IP Phone. As with data VLANs, if the Voice VLAN has not been previously created, the switch will create it for you.

SW2(config-if)#switchport voice vlan 12
% Voice VLAN does not exist. Creating vlan 12
The dot1p option has two effects:
The IP Phone grants voice traffic high priority
Voice traffic is sent through the default voice native VLAN, VLAN 0
Note the console message when the dot1p option is enabled:

SW2(config-if)#switchport voice vlan dot1p
% Voice VLAN does not exist. Creating vlan 0
The none option sets the port back to the default. Finally, the untagged option results in voice packets being put into the native VLAN.

SW2(config-if)#switchport voice vlan untagged
As always, there are just a few details you should be aware of when configuring voice VLANs:
When Voice VLAN is configured on a port, Portfast is automatically enabled -- but if you remove the Voice VLAN, Portfast is NOT automatically disabled.

Cisco recommends that QoS be enabled on the switch and the switch port connected to the IP phone be set to trust incoming CoS values. The commands to perform these tasks are mls qos and the interface-level command mls qos trust cos, respectively.

You can configure voice VLANs on ports running port security or 802.1x authentication. It is recommended that port security be set to allow more than one secure MAC address.

CDP must be running on the port leading to the IP phone. CDP should be globally enabled on all switch ports, but take a few seconds to make sure with show cdp neighbor.

Voice VLAN is supported only on L2 access ports.

Particularly when implementing video conferencing, make sure your total overall traffic doesn't exceed 75% of the overall available bandwidth. That includes video, voice, and data! Cisco also recommends that voice and video combined not exceed 33% of a link's bandwidth. This allows for network control traffic to flow through the network and helps to prevent jitter as well.
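Those two capacity guidelines (total traffic under 75% of the link, voice plus video under 33%) are easy to check with a little arithmetic. A quick sketch, using made-up traffic figures:

```python
# Sketch of the Cisco capacity guidelines quoted above:
# total traffic <= 75% of link bandwidth, voice + video <= 33%.
# Traffic figures in the example are hypothetical.

def within_guidelines(link_bw, voice, video, data):
    """All arguments in the same unit, e.g. Mbps."""
    total_ok = (voice + video + data) <= 0.75 * link_bw
    media_ok = (voice + video) <= 0.33 * link_bw
    return total_ok and media_ok

# A 100 Mbps link carrying 20 Mbps voice, 10 Mbps video, 40 Mbps data:
print(within_guidelines(100, 20, 10, 40))  # True - 70% total, 30% media
```

Push the voice figure to 30 Mbps in that example and the media check fails, since voice plus video would then consume 40% of the link.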
A voice VLAN's dependency on CDP can result in problems. Believe it or
not, there is such a thing as CDP Spoofing, and that can result in an issue with anonymous access to Voice VLANs. Basically, CDP Spoofing allows the attacker to pretend to be the IP Phone! This issue is out of the scope of the exam, but if you've got voice VLANs in your network or are even thinking about using them, you should run a search on "cdp spoofing voice vlan" and start reading!
Voice And Switch QoS

I mentioned jitter earlier, but we've got three main enemies when it comes to successful voice transmission:
jitter
delay
packet loss
To successfully combat these problems, we have to make a decision on what QoS scheme to implement - and this is one situation where making no decision actually is making a decision! Best-effort delivery is the QoS you have when you have no explicit QoS configuration - the packets are simply forwarded in the order in which they came into the router. Best-effort works fine for much of our everyday data traffic, but not for voice. The Integrated Services Model, or IntServ, is far superior to best-effort. I grant you that's a poor excuse for a compliment! IntServ uses the Resource Reservation Protocol (RSVP) to do its job, and that reservation involves creating a high-priority path in advance of the voice traffic's arrival. The device that wants to transmit the traffic does not do so until a reserved path exists from source to destination. The creation of this path is sometimes referred to as Guaranteed Rate Service (GRS), or simply Guaranteed Service. The obvious issue with IntServ is that it's not a scalable solution - as your network handles more and more voice traffic, you're going to have more and more reserved bandwidth, which can in turn "choke out" other traffic.
That issue is addressed with the Differentiated Services Model, or DiffServ. Where IntServ reserves an entire path in advance for the entire voice packet flow to use, DiffServ does not reserve bandwidth for the flow; instead, DiffServ makes its QoS decisions on a per-router basis as the flow traverses the network. If DiffServ sounds like the best choice, that's because it is - and it's so popular that this is the model we'll spend the most time with today. (Besides, it's pretty easy to configure the best-effort model - just don't do anything!)

The DiffServ Model At Layer Two

As mentioned earlier, DiffServ takes a DifferentView (sorry, couldn't resist) of end-to-end transmission than that taken by IntServ. The DiffServ model allows each network device along the way to make a separate decision on how best to forward the packet toward its intended destination, rather than having all forwarding decisions made in advance. This process is known as Per-Hop Behavior (PHB).
The core tasks of Diffserv QoS are marking and classification. (They are two separate operations, but they work very closely together, as you'll see.) Marking is the process of tagging data with a value, and classification is taking the appropriate approach to queueing and transmitting that data according to that value. It's best practice to mark traffic as close to the source as possible to ensure the traffic receives the proper QoS as it travels across the network. This generally means you'll be marking traffic at the Access layer of the Cisco switching model, since that's where our end users can
be found. At Layer 2, tagging occurs only when frames are forwarded from one switch to another. We can't tag frames that are being forwarded by a single switch from one port to another.
You know that the physical link between two switches is a trunk, and you know that the VLAN ID is tagged on the frame before it goes across the trunk. You might not know that another value - a Class of Service (CoS) value - can also be placed on that frame. Where the VLAN ID indicates the VLAN whose hosts should receive the frame, the CoS is used by the switch to make decisions on what QoS, if any, the frame should receive. It certainly won't surprise you to find that our trunking protocols, ISL and IEEE 802.1Q ("dot1q"), handle CoS differently. Hey, with all the differences between these two that you've already mastered, this is easy! The ISL tag includes a 4-bit User field; the last three bits of that field indicate the CoS value. I know I don't have to tell you this, but three binary bits give us a range of decimal values of 0 - 7. The dot1q tag has a User field as well, but this field is built a little differently. Dot1q's User field has three 802.1p priority bits that make up the CoS value, and again that gives us a decimal range of 0 - 7. Of course, there's an exception to the rule! Remember how dot1q handles frames destined for the native VLAN? There is no tag placed on those frames -- so how can there be a CoS value when there's no tag?
The receiving switch can be configured with a CoS to apply to any incoming untagged frames. Naturally, that switch knows that untagged frames are destined for the native VLAN.

The DiffServ Model At Layer Three

Way back in your Introduction To Networking studies, you became familiar with the UDP, TCP, and IP headers. One of the IP header fields is Type Of Service (ToS), and that ToS value is the basis for DiffServ's approach to marking traffic at Layer Three. The IP ToS byte consists of...
an IP Precedence value, generally referred to as IP Prec (3 bits)
a Type Of Service value (4 bits)
a zero (1 bit)
DiffServ uses this 8-bit field as well, but refers to this as the Differentiated Services (DS) field. The DS byte consists of....
a Differentiated Services Code Point value (DSCP, 6 bits, RFC 2474)
an Explicit Congestion Notification value (ECN, 2 bits, RFC 2481)
The 6-bit DSCP value is itself divided into two parts:
a Class Selector value, 3 bits
a Drop Precedence value, 3 bits
These two 3-bit values each have a possible range of 0 - 7 (000 - 111 in binary). Here's a quick description of the Class Selector values and their meanings:

Class 7 (111) - Network Control, and the name is the recipe - this value is reserved for network control traffic (STP, routing protocol traffic, etc.)
Class 6 (110) - Internetwork Control, same purpose as Network Control.
Class 5 (101) - Expedited Forwarding (EF, RFC 2598) - Reserved for voice traffic and other time-critical data. Traffic in this class is practically guaranteed not to be dropped.
Classes 1 - 4 (001 - 100) - Assured Forwarding (AF, RFC 2597) - These classes allow us to define QoS for traffic that is not as time-critical as that in Class 5, but that should not be left to best-effort forwarding, which is....
Class 0 (000) - Best-effort forwarding. This is the default.

We've got four different classes in Assured Forwarding, and RFC 2597 defines three Drop Precedence values for each of those classes:
High - 3
Medium - 2
Low - 1
The given combination of any class and DP value is expressed as follows:

AF(Class Number)(Drop Precedence)

That is, AF Class 2 with a DP of "high" would be expressed as "AF23".
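There's a numeric DSCP value hiding behind every AF name, and the arithmetic is worth seeing once. In the AF codepoints defined by RFC 2597, the drop precedence occupies the upper two of the low-order bits, with the last bit always zero, so the decimal DSCP works out to (class × 8) + (drop precedence × 2). A quick sketch:

```python
# The AF naming arithmetic from RFC 2597:
# DSCP = 3 class-selector bits, then the drop-precedence bits, low bit 0.

def af_to_dscp(af_class, drop_prec):
    """AF(class)(dp) -> decimal DSCP, e.g. AF23 -> 22."""
    return (af_class << 3) | (drop_prec << 1)

print(af_to_dscp(2, 3))                  # AF23 = 22
print(format(af_to_dscp(4, 1), '06b'))   # AF41 as 6 bits = 100010
```

This is why you'll see AF23 written elsewhere as DSCP 22: class 2 (010) in the high bits, drop precedence 3 in the middle, and a trailing zero.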
To Trust Or Not To Trust, That Is The Question Just as you and I have to make a decision on whether to trust something that's told to us, a switch has to make a decision on whether to trust an incoming QoS value.
Once that decision is made, one of two things will happen.
If the incoming value is trusted, that value is used for QoS.
If the incoming value is not trusted, the receiving switch can assign a preconfigured value.
It's a pretty safe bet that if the frame is coming from a switch inside your network, the incoming value should be trusted. It's also better to be safe than sorry, so if the frame is coming from a switch outside your administrative control, it should not be trusted. The point at which one of your switches no longer trusts incoming frames is the trust boundary.
We've also got to decide where to draw the line with a trust boundary when PCs and IP Phones are involved. Let's walk through a basic configuration with an IP Phone attached to the switch. Here's a quick reminder of the physical topology:
The first command we'll use isn't required, but it's a great command for those admins who work on the switch in the future:

SW2(config)#int fast 0/5
SW2(config-if)#description ?
  LINE  Up to 240 characters describing this interface

SW2(config-if)#description IP Phone Port
It never hurts to indicate which port the phone is attached to. Now to the required commands! Before we perform any QoS on this switch, we have to enable it - QoS is disabled globally by default.

SW2(config)#mls qos
QoS: ensure flow-control on all interfaces are OFF for proper operation.
We can trust values on two different levels. First, we can trust the value unconditionally, whether that be CoS, IP Prec, or DSCP. Here, we'll unconditionally trust the incoming CoS value.

SW2(config-if)#mls qos trust ?
  cos            Classify by packet COS
  device         trusted device class
  dscp           Classify by packet DSCP
  ip-precedence  Classify by packet IP precedence

SW2(config-if)#mls qos trust cos
We can also make this trust conditional, and trust the value only if the device on the other end of this link is a Cisco IP phone. IOS Help shows us that the only option for this command is a Cisco IP phone!

SW2(config-if)#mls qos trust device ?
  cisco-phone  Cisco IP Phone

SW2(config-if)#mls qos trust device cisco-phone
If you configure that command and show mls qos interface indicates the port is not trusted, most likely there is no IP Phone connected to that port. Trust me, I've been there. :)

SW2#show mls qos interface fast 0/5
FastEthernet0/5
trust state: not trusted
trust mode: trust cos
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: cisco-phone
There's another interesting QoS command we need to consider:

SW2(config-if)#switchport priority extend ?
  cos    Override 802.1p priority of devices on appliance
  trust  Trust 802.1p priorities of devices on appliance
If an IP Phone is the only device on the end of the link, what "appliance" are we talking about? Why are we discussing extending the trust? To what point are we extending the trust? Let's take another look at our diagram:
Remember, we've got a PC involved here as well. The IP Phone will generate the voice packets sent to the switch, but the PC will be generating data packets. We need to indicate whether QoS values on data received by the phone from the PC should be trusted or overwritten. In other words, should the trust boundary extend to the PC? The best practice is to not trust the QoS values sent by the PC. Some applications have been known to set QoS values giving that application's data a higher priority than other data. (Can you believe such a thing?) That's one reason the default behavior is to not trust the CoS value from the PC, and to set that value to zero.
To overwrite the CoS value sent by the PC and set it to a value we choose, use the switchport priority extend cos command.

SW2(config-if)#switchport priority extend cos ?
  Priority for devices on appliance

SW2(config-if)#switchport priority extend cos 2
Frames received from the PC will now have their CoS overwritten and set to 2 - the PC's own marking is not trusted. If we had chosen to trust those same frames and allow their CoS to remain unchanged after transmission from the PC, we would use the switchport priority extend trust command.

SW2(config-if)#switchport priority extend ?
  cos    Override 802.1p priority of devices on appliance
  trust  Trust 802.1p priorities of devices on appliance

SW2(config-if)#switchport priority extend trust
The word "boundary" doesn't appear in the command, but this command has the effect of extending the trust boundary beyond the phone to the PC.
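The trust-or-override decision just described boils down to a single branch. Here's a hedged Python sketch of that logic, not switch firmware; cos_in stands for the CoS on a frame arriving from the PC behind the phone.

```python
# Hedged sketch of the 'switchport priority extend' decision - not switch code.
# cos_in is the CoS marking on a data frame arriving from the attached PC.

def phone_outgoing_cos(cos_in, mode, override_cos=0):
    """mode mirrors 'switchport priority extend {cos <n> | trust}'."""
    if mode == "trust":       # trust boundary extended to the PC
        return cos_in
    return override_cos       # default behavior: overwrite the PC's CoS

print(phone_outgoing_cos(5, "trust"))    # 5 - the PC's marking survives
print(phone_outgoing_cos(5, "cos", 2))   # 2 - overwritten per our config
print(phone_outgoing_cos(5, "cos"))      # 0 - the untrusting default
```

Note how the last case matches the default behavior described above: untrusted CoS from the PC is simply reset to zero.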
Other QoS Methods To Improve VOIP Speed & Quality We've talked at length about using a priority queue for voice traffic, but there are some other techniques we can use as well. As with any other QoS, the classification and marking of traffic should be performed as close to the traffic source as possible. Access-layer switches should always perform this task, not only to keep the extra workload off the core switches but to ensure the end-to-end QoS you wanted to configure is the QoS you're getting. Another method of improving VOIP quality is to configure RTP Header Compression. This compression takes the IP/UDP/RTP header from its usual 40 bytes down to 2 - 4 bytes.
RTP header compression is configured with the interface-level ip rtp header-compression command, with one option you should know about - passive. If the passive option is configured, outgoing packets are subject to RTP compression only if incoming packets are arriving compressed.

Power Over Ethernet

I don't anticipate you'll see much of POE on your exam, if at all, but it is a handy way to power the phone if there's just no plug available! With POE, the electricity necessary to power the IP Phone is actually transferred from the switch to the phone over the UTP cable that already connects the two devices!
Not every switch is capable of running POE. Check your particular switch's documentation for POE capabilities and details. The IEEE standard for POE is 802.3af. There is also a proposed standard for High-Power POE, 802.3at. To read more than you'd ever want to know about POE, visit http://www.poweroverethernet.com. By default, ports on POE-capable switches do attempt to find a device needing power on the other end of the link. We've got a couple of options for POE as well:

SW4(config)#int fast 1/0/1
SW4(config-if)#power inline ?
  auto         Automatically detect and power inline devices
  consumption  Configure the inline device consumption
  never        Never apply inline power
  static       High priority inline power interface
The auto setting is the default. The consumption option allows you to set the level of power sent to the device:

SW4(config-if)#power inline consumption ?
  milli-watts

SW4(config-if)#power inline consumption
And naturally, the never option disables POE on that port. POE options and capabilities differ from one device to the next, so check your switch's documentation *carefully* before using POE.
Wireless Overview
Wireless Networking Basics The Association Process Roaming Users SSIDs WLAN Authentication Standards, Ranges, and Frequencies Antenna Types CSMA/CA Cisco Compatible Extension Lightweight APs and LWAPP Wireless LAN Controllers (WLC) Wireless Control System & The Location Appliance Wireless LAN Solution Engine (WLSE) Wireless Repeaters Aironet Desktop Utility
Aironet System Tray Utility Introduction To Mesh Networks
Wireless Basics

Hard to believe there was once a time when a laptop or PC had to be connected to an outlet to access the Internet, isn't it? Wireless is becoming a larger and larger part of everyday life, to the point where people expect to be able to access the Net or connect to their network while eating lunch. Wireless networks are generally created by configuring Wireless Access Points (WAP or AP, depending on documentation). If you're connecting to the Internet or your company's network from a hotel or restaurant, you're connected to a lily pad network. Unlike the physical networks we've discussed previously in this course, the WAPs in a lily pad network can be owned by different companies. The WAPs create hotspots where Internet access is available to anyone with a wireless host - and hopefully, a username and password is required as well! WAPs are not required to create a wireless network. In an ad hoc WLAN ("wireless LAN"), the wireless devices communicate with no WAP involved. Ad hoc networks are also called Independent Basic Service Sets (iBSS or IBSS, depending on whose documentation you're reading). There are two kinds of infrastructure WLANs. While a Basic Service Set (BSS) will have a single AP, Extended Service Set (ESS) WLANs have multiple access points. An ESS is essentially a series of interconnected BSSes. Hosts successfully connecting to the WAP in a BSS are said to have formed an association with the WAP. Forming this association usually requires the host to present required authentication and/or the correct Service Set Identifier (SSID). The SSID is the public name of the wireless network. SSIDs are case-sensitive text strings and can be up to 32 characters in length.
Cisco uses the term AP instead of WAP in much of their documentation; just be prepared to see this term expressed either way on your exam and in network documentation. I'll call it an AP for the rest of this section. A BSS operates much like a hub-and-spoke network in that all communication must go through the hub, which in this case is the AP. We just went over three different service set types, so to review:
Independent Basic Service Sets have no APs; the few wireless devices involved interact directly. An IBSS network is also called an ad hoc network.
Basic Service Sets have a single AP.
Extended Service Sets have multiple APs, which allow for a larger coverage area than the other two types and also allow roaming users to fully utilize the WLAN.
Creating An Association

There's quite a bit going on when a client forms an association with an AP, but here's an overview of the entire process. The client is going to transmit Probe Requests, and in turn the AP responds with Probe Responses. Basically, the Probe Request is the client yelling "Anybody out there?" and the Probe Response is the AP saying "I'm over here!"
When the client learns about the AP, the client then begins the process of association. The exact information the client sends depends on the configuration of the client and the AP, but it will include authentication information such as a pre-shared key.
If the client passes the authentication process, the AP then records the client's MAC address and accepts the association with the client.

Roamin', Roamin', Roamin'

APs can also be arranged in such a way that a mobile user, or roaming user, will (theoretically) always be in the provider's coverage area. Those of us who are roaming users understand the "theoretical" part! Roaming is performed by the wireless client. Under certain circumstances that we'll discuss in just a moment, the client will actively search for another AP with the same SSID as the AP it's currently connected to. There are two different methods the client can use to find the next AP - active scanning and passive scanning. With active scanning, the client sends Probe Request frames and then waits to hear Probe Responses. If multiple Probe Responses are heard, the client chooses the most appropriate AP to use in accordance with vendor standards. Passive scanning is just what it sounds like - the client listens for beacon frames from APs. No Probe Request frames are sent. Roaming networks use multiple APs to create overlapping areas of coverage called cells. While your signal may occasionally get weak near the point of overlapping, the ESS allows roaming users to hit the network
at any time. (We hope!)
Roaming is made possible by the Inter-Access Point Protocol (IAPP). For roaming users to remain connected to the same network as they roam, the APs must be configured with the same SSIDs and have knowledge of the same IP subnets and VLANs (assuming VLANs are in use, which they probably are). How does our client decide it's time to move from one AP to another? Any one of the following events can trigger that move, according to Cisco's website:
Client has not received a beacon from an AP for a given amount of time
The maximum data retry count has been reached
A change in the data rate
Why would the data rate change? With wireless, the lower the data rate, the greater the range. The 802.11 standard will automatically reduce the data rate as the association with an AP deteriorates.

L2 Roaming vs. L3 Roaming

The difference between the two is straightforward - L2 roaming is performed when the APs the client is roaming between are on the same IP subnet, while L3 roaming occurs when they are on different IP subnets.

Service Set Identifier (SSID)
When you configure a name for your WLAN, you've just configured an SSID. The SSID theory is simple enough - if the wireless client's SSID matches that of the access point, communication can proceed. The SSID is case-sensitive and it has a maximum length of 32 characters.
A laptop can be configured with a null SSID, resulting in the client basically asking the AP for its SSID; if the AP is configured to broadcast its SSID, it will answer and communication can proceed.
A classic "gotcha" with SSIDs is to configure the AP to not broadcast its SSID. This would seem to be a great move for your WLAN's security ... but is it?
As you've already guessed, this is not an effective security measure, because the SSID sent by the client is not encrypted. It's quite easy to steal, and obviously no decryption is needed!

WLAN Authentication (And Lack of Same)

Of course, you don't want just any wireless client connecting to your WLAN! The 802.11 WLAN standards have two different authentication schemes - open system and shared key. They're both pretty much what they sound like. Open system is basically one station asking the receiving station "Hey, do you recognize me?" Hopefully, shared key is the authentication system you're more familiar with, since open system is a little too open! Shared key uses Wired Equivalent Privacy (WEP) to provide a higher level of security than open system. There's just one little problem with WEP. Okay, a big problem. It can be broken in seconds by software that's readily available on the Web. Another problem is the key itself. It's not just a shared key, it's a static key, and when any key or password remains the same for a long time, the chances of it being successfully hacked increase substantially. These two factors make WEP unacceptable for our network's security. Luckily, we've got options...

A Giant LEAP Forward

The Extensible Authentication Protocol (EAP) was actually developed originally for PPP authentication, but has been successfully adapted for use in wireless networks. RFC 3748 defines EAP. Cisco's proprietary version of EAP is LEAP, the Lightweight Extensible Authentication Protocol. LEAP has several advantages over WEP:
There is two-way authentication between the AP and the client
The AP uses a RADIUS server to authenticate the client
The keys are dynamic, not static, so a different key is generated upon every authentication
Recognizing the weaknesses inherent in WEP, the Wi-Fi Alliance (their home page is http://wi-fi.org) saw the need for stronger security features in the wireless world. Their answer was Wi-Fi Protected Access (WPA), a higher standard for wireless security. Basically, WPA was adopted by many wireless equipment vendors while the IEEE was working on a higher standard as well, 802.11i - but it wasn't adopted by every vendor. As a result, WPA is considered to work
universally with wireless NICs, but not with all early APs. When the IEEE issued 802.11i, the Wi-Fi Alliance improved the original WPA standards and came up with WPA2. As you might expect, not all older wireless cards will work with WPA2. To put it lightly, both WPA and WPA2 are major improvements over WEP. Many wireless devices, particularly those designed for home use, offer WEP as the default protection - so don't just click on all the defaults when you're setting up a home wireless network! The WPA or WPA2 password will be longer as well - they're actually referred to as passphrases. Sadly, many users will prefer WEP simply because the password is shorter.

Wireless Networking Standards, Ranges, and Frequencies

Along with the explosion of wireless is a rapidly-expanding range of wireless standards. Some of these standards play well together; others do not. Let's take a look at the wireless standards you'll need to know to pass the exam and to work with wireless in today's networks. The standards listed here are all part of the 802.11x standards developed by the IEEE.

802.11a has a typical data rate of 25 MBPS, but can reach speeds of 54 MBPS. Indoor range is 100 feet. Operating frequency is 5 GHz.

802.11b has a typical data rate of 6.5 MBPS, but can reach speeds of 11 MBPS. Indoor range is 100 feet. Operating frequency is 2.4 GHz.

802.11g has a typical data rate of 25 MBPS, a peak data rate of 54 MBPS, and an indoor range of 100 feet. Operating frequency is 2.4 GHz. 802.11g is fully backwards-compatible with 802.11b, and many routers and cards that use these standards are referred to as "802.11b/g", or just "b/g". .11g and .11b even have the same number of non-overlapping channels (three).

You can have trouble with 802.11g from an unexpected source - popcorn! Well, not directly, but microwave ovens also share the 2.4 GHz band, and the presence of a microwave in an office can actually cause connectivity issues.
(And you thought they were just annoying when people burn popcorn in the office microwave!) Solid objects such as walls and other buildings can disturb the signal on any band.
802.11n has a typical data rate of 200 Mbps, a peak data rate of 540 Mbps, and an indoor range of 160 feet. Operating frequency is either 2.4 GHz or 5 GHz.

Infrared Data Association (IrDA)

The IrDA is another body that defines specifications, but the IrDA is concerned with standards for transmitting data over infrared light. IrDA 1.0 only allowed for a range of 1 meter and transmitted data at approximately 115 Kbps. The transmission speed was greatly improved with IrDA 1.1, which has a theoretical maximum speed of 4 Mbps. The two standards are compatible. Keep in mind that neither IrDA standard has anything to do with radio frequencies - only infrared light streams. The IrDA notes that to reach that 4 Mbps speed, the hardware must be 1.1 compliant, and even that might not be enough - the software may have to be modified as well. Which doesn't sound like fun.

Antenna Types

A Yagi antenna (technically, the full name is "Yagi-Uda antenna") sends its signal in a single direction, which means it must be aligned correctly and kept that way. Yagi antennas are sometimes called directional antennas, since they send their signal in a particular direction.
In contrast, an Omni ("omnidirectional") antenna sends a signal in all directions on a particular plane. Since this is networking, we can't just call these antennae by one name! Yagis are also known as point-to-point and directional antennas; Omni
antennas are also known as omnidirectional and point-to-multipoint antennas.

Both Yagi and Omni antennas have their place in wireless networks. The unidirectional signal a Yagi antenna sends makes it particularly helpful in bridging the distance between APs. The multidirectional signal sent by an Omni antenna helps connect hosts to APs, including roaming laptop users.

Courtesy of wikipedia.org, here are some "antenna terms" you should be familiar with:

Gain refers to the directionality of an antenna. An antenna with low gain emits radiation at the same power in all directions, whereas a high-gain antenna will focus its power in a particular direction or directions.

dBi stands for decibel (isotropic), and I won't go far into this territory, I promise! dBi is a common value used to measure the gain of a given antenna against a theoretical antenna that distributes energy equally in all directions. And you thought we had it bad with BGP. :)

Bandwidth refers to the range of frequencies over which the antenna is effective. There are several methods of increasing bandwidth, including the use of thicker wires and combining multiple antennas into a single antenna.

Polarization refers to the physical positioning and orientation of the antenna.

CSMA/CA

From your CCNA studies, you know all about how a wired LAN avoids collisions. Through the use of IEEE 802.3 CSMA/CD (Carrier Sense Multiple Access with Collision Detection), only one host can transmit at a time - and even if multiple hosts transmit data onto a shared segment at once, jam signals and random timers help to minimize the damage.

With wireless LANs, life isn't so simple. Wireless LANs can't listen and send at the same time - they're half-duplex - so traditional collision detection techniques cannot work. Instead, wireless LANs use IEEE standard 802.11 CSMA/CA (Carrier Sense Multiple Access with Collision
Avoidance). Let's walk through an example of Wireless LAN access, and you'll see where the "avoidance" part of CSMA/CA comes in. The foundation of CSMA/CA is the Distributed Coordination Function (DCF). The key rule of DCF is that when a station wants to send data, the station must wait for the Distributed Interframe Space (DIFS) time interval to expire before doing so. In our example, Host A finds the wireless channel to be idle, waits for the DIFS timer to expire, and then sends frames.
Host B and Host C now want to send frames, but they find the channel to be busy with Host A's data.
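To see why two waiting stations are a problem - and how DCF's random backoff addresses it - here's a minimal Python sketch. The timer values are invented purely for illustration; the real 802.11 timings and backoff formula are more involved than this.

```python
import random

DIFS = 50e-6   # illustrative DIFS value - not the real 802.11 figure
SLOT = 20e-6   # illustrative slot time

def naive_send_time(idle_at):
    """No backoff: a waiting station transmits exactly DIFS after
    the channel goes idle."""
    return idle_at + DIFS

def dcf_send_time(idle_at, cw=31):
    """DCF: a station that found the channel busy also waits a random
    number of slot times (the Backoff Time) on top of DIFS."""
    return idle_at + DIFS + random.randint(0, cw) * SLOT

# Host A stops sending at t = 0.010s; Hosts B and C both have frames queued.
print(naive_send_time(0.010) == naive_send_time(0.010))  # True - a collision
b, c = dcf_send_time(0.010), dcf_send_time(0.010)
print(b == c)  # almost always False - the random backoff separates B and C
```

With no backoff, both stations fire at exactly the same instant; with a random backoff drawn independently by each station, they almost never do.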
The potential issue here is that Host B and Host C will simultaneously realize Host A is no longer transmitting, so they will then both transmit, which will lead to a collision. To help avoid (there's the magic word!) this, DCF requires stations finding the busy channel to also invoke a random timer before checking to see if the channel is still busy. In DCF-speak, this random amount of time is the Backoff Time. The
formula for computing Backoff Time is beyond the scope of the exam, but the computation does involve a random number, and that random value helps avoid collisions.

The Cisco Compatible Extensions Program

When you're looking to start or add to your wireless network, you may just wonder.... "How The $&!(*% Can I Figure Out Which Equipment Supports Which Features?"

A valid question! Thankfully, Cisco's got a great tool to help you out - the Cisco Compatible Extensions (CCX) website. Cisco certification isn't just for you and me - Cisco also certifies wireless devices that are guaranteed to run a desired wireless feature. The website name is a little long to put here, and it may well change, so I recommend you simply enter "cisco compatible extensions" into your favorite search engine - you'll find the site quickly. Don't just enter "CCX" in there - you'll get the Chicago Climate Exchange. I'm sure they're great at what they do, but don't trust them to verify wireless capabilities!

Lightweight Access Points and LWAPP

Originally, most access points were autonomous - they didn't depend on any other device to do their job. The BSS we looked at earlier in this section was a good example of an autonomous AP.
The problem with autonomous APs is that as your wireless network grows - and it will! - it becomes more difficult to have a uniform set of policies applied to all APs in your network. It's imperative that each AP in your network enforce a consistent policy when it comes to security and Quality of Service - but sometimes this just doesn't happen. Many WLANs start small and end up being not so small! At first, centralizing your security policies doesn't seem like such a big deal, especially when you've only got one access point.
As your network grows larger and more access points are added, having a central policy does become more important. The more WAPs you have, the bigger the chance of security policies differing between them - and the bigger the chance of a security breach. Let's say you add two WAPs to the WLAN network shown above. Maybe they're configured months apart, maybe they're configured by different people - but the result can be a radically different set of security standards.
We've now got three very different WLAN security protocols in place, and the difference between the three is huge, as you'll soon see. Depending on which WAP the laptop uses to authenticate to the WLAN, we could have a secure connection - or a very non-secure connection. This simple example shows us the importance of a standard security policy, and that's made possible through the concept of the Cisco Unified Wireless Network, which has two major components - Lightweight Access Points (LAP or WLAP) and WLAN Controllers (WLC). The WLC brings several benefits to the table:
Centralization, management, and distribution of security policies and authentication
Allows mobile users to receive a consistent level of service and security from any AP in the network
Detection of rogue APs
Configuring the access points as LAPs allows us to configure a central device, the WLAN Controller, to give each of the LAPs the same security policy. The protocol used to do so, the aptly-named Lightweight Wireless Access Point Protocol (LWAPP), detects rogue (fake) access points as well. How does the WLC perform this rogue AP detection? The LAP and WLC actually have digital certificates installed when they're built - X.509
certificates, to be exact. A rogue AP will not have this certificate, and therefore can't authenticate to become part of the network. These certificates are technically referred to as MICs, short for Manufacturing Installed Certificates.

The WLC is basically the manager of the WLAN, with the LAPs serving as the workers. The WLAN Controller will be configured with security procedures, Quality of Service (QoS) policies, mobile user policies, and more. The WLC then informs the LAPs of these policies and procedures, ensuring that each LAP is consistently enforcing the same set of wireless network access rules and regulations.

LAPs cannot function independently, as Autonomous APs can. LAPs are dependent on the presence of a WLC and cannot function properly without one. Conversely, Autonomous APs cannot work with a WLC, since Autonomous APs do not speak LWAPP. (LWAPP is Cisco-proprietary; the industry standard is CAPWAP, the Control and Provisioning of Wireless Access Points protocol.)

LAPs can be configured with static IP addresses, but it's common to have a LAP use DHCP to acquire an IP address in the same fashion a host device would. If the LAP is configured to get its IP address via DHCP and the first attempt to do so fails, the LAP will continue to send DHCP Discovery messages until a DHCP Server replies.

Now the LAP must associate with a WLC. The LAP will use the Lightweight Wireless Access Point Protocol (LWAPP) to do so. We have two modes for LWAPP - L2 mode and L3 mode. If the LAP is running L2 mode, the LAP will send an L2 LWAPP Discovery message in an attempt to find a WLC that is running L2 LWAPP.
If a WLC receives that Discovery message and is running L2 LWAPP, it will respond with a LWAPP L2 Discovery Response.
If the LAP does not receive an L2 LWAPP Discovery Response, or if the LAP doesn't support L2 LWAPP in the first place, it'll send an L3 LWAPP Discovery message.
If that doesn't work, the entire process begins again with the LAP sending a DHCP Discovery message. Now the LAP needs to associate with one of the WLCs it has discovered. To do so, the LAP sends a LWAPP Join Request, and the WLC returns a LWAPP Join Response.
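The discovery fallback just described - try L2 LWAPP, fall back to L3 LWAPP, and start over with DHCP if neither works - can be traced with a tiny Python sketch. The step labels are informal descriptions for this sketch, not official message names.

```python
def discover_wlc(supports_l2, l2_response, l3_response):
    """Trace the LWAPP discovery order described above."""
    steps = []
    if supports_l2:
        steps.append("send L2 LWAPP Discovery")
        if l2_response:
            return steps + ["got L2 Discovery Response - send LWAPP Join Request"]
    # No L2 support, or no L2 response: try L3 LWAPP
    steps.append("send L3 LWAPP Discovery")
    if l3_response:
        return steps + ["got L3 Discovery Response - send LWAPP Join Request"]
    # Still no WLC found - the whole process restarts with DHCP Discovery
    return steps + ["restart: send DHCP Discovery"]

print(discover_wlc(supports_l2=True, l2_response=False, l3_response=True))
```

Whichever branch succeeds, the end result is the same: a Join Request to a discovered WLC, or a restart of the entire process.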
How does the LAP know where to send that LWAPP Join Request? After receiving an IP address of its own via DHCP, the LAP must learn the IP address of the WLC via DHCP or DNS. To use DHCP, the DHCP Server must be configured to use DHCP Option 43. When Option 43 is in effect, the DHCP Server will include the IP addresses of WLCs in the Option 43 field of the DHCP Offer packet. The LAP can then send L3 LWAPP Discovery Request messages to each of the WLCs. The LAP can also broadcast that Join Request to its own IP subnet, but
obviously that's only going to work if the WLC is actually on the subnet local to the LAP. Once this Join has taken place, a comparison is made of the software revision number on both the LAP and WLC. If they have different versions, the LAP will download the version stored on the WLC. There will be two forms of traffic exchanged between the LAP and WLC:
Control traffic
Data traffic
While LWAPP L2 traffic is encapsulated in an Ethernet frame (EtherType 0xBBBB), L3 LWAPP traffic uses UDP source port 1024 and the following destination ports for control and data traffic:
Control traffic: Destination UDP port is 12223
Data traffic: Destination UDP port is 12222
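Those port numbers make L3 LWAPP traffic easy to pick out of a packet capture. A quick classifier using just the two values above:

```python
LWAPP_CONTROL_PORT = 12223   # destination UDP port for control traffic
LWAPP_DATA_PORT = 12222      # destination UDP port for data traffic

def classify_lwapp(dst_port):
    """Classify an L3 LWAPP packet by its destination UDP port."""
    if dst_port == LWAPP_CONTROL_PORT:
        return "control"
    if dst_port == LWAPP_DATA_PORT:
        return "data"
    return "not L3 LWAPP"

print(classify_lwapp(12223))   # control
print(classify_lwapp(12222))   # data
```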
LWAPP uses secure key distribution to ensure the security of the control connection between the two - the control messages will be both encrypted and authenticated. The encryption is performed by the AES-CCM protocol. (The previously mentioned LWAPP Join Request and Response messages are not encrypted.)

The data packets passed between the LAP and WLC will be LWAPP-encapsulated - essentially, LWAPP creates a tunnel through which the data is sent - but no other encryption or security exists by default.

Just as we had L2 and L3 roaming, we also have LWAPP L2 and L3 mode. A lightweight AP will first use LWAPP L2 mode to attempt to locate a WLC; if none is found, the AP will then use LWAPP L3 mode.

Many networks will have more than one WLC, which is great for redundancy, but how does the AP decide which WLC to associate with if it finds more than one? The AP will simply use the WLC with the fewest associated APs. This prevents one WLC from being overloaded with associations while another WLC in the same network remains relatively idle.

Many Cisco Aironet access points can operate autonomously or as a LAP. Here are a few of those models:
1230 AG Series
1240 AG Series
1130 AG Series
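Back to controller selection for a moment: the "fewest associated APs" rule described above boils down to a one-line min(). The controller names and AP counts here are invented purely for illustration.

```python
def pick_wlc(ap_counts):
    """Given {wlc_name: number_of_associated_aps}, return the WLC a
    new LAP should join - the least-loaded one."""
    return min(ap_counts, key=ap_counts.get)

# Hypothetical controllers:
controllers = {"WLC-1": 40, "WLC-2": 12, "WLC-3": 27}
print(pick_wlc(controllers))   # WLC-2
```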
Sounds simple enough, but there are some serious restrictions on APs that have been converted from Autonomous mode to Lightweight mode. Courtesy of Cisco's website, here are the major restrictions:

Roaming users cannot roam between Lightweight and Autonomous APs.
Wireless Domain Services (WDS) cannot support APs converted from Autonomous to Lightweight. Those Lightweight APs will use WLCs, as we discussed earlier.
The console port on a converted Lightweight AP is read-only.
Converted APs do not support L2 LWAPP.
Converted APs must be assigned an IP address and discover the IP address of the WLC via one of three methods:
DNS
DHCP
A broadcast to its own IP subnet
You can telnet into Lightweight APs if the WLC is running software release 5.0 or later.

You can convert the Lightweight AP back to Autonomous mode. Check Cisco's website for directions. If tech forums are any indication, this can be more of an art form than a science. Some other Aironet models have circumstances under which they cannot operate as LAPs - make sure to do your research before purchasing!

The Cisco Wireless Control System and Wireless Location Appliance

The examples in this section have shown only one WLC, but it's common to have more than one in a wireless network, due to the sheer number of LAPs, the desire for redundancy, or both. We don't want our entire wireless network to go down due to a WLC issue and a lack of a backup!
To monitor those WLCs and the LAPs as well, you can use the Cisco Wireless Control System (WCS). There's a little hype in this description, but here's how Cisco's website describes the WCS:

"The Cisco WCS is an optional network component that works in conjunction with Cisco Aironet Lightweight Access Points, Cisco wireless LAN controllers and the Cisco Wireless Location Appliance. With Cisco WCS, network administrators have a single solution for RF prediction, policy provisioning, network optimization, troubleshooting, user tracking, security monitoring, and wireless LAN systems management. Robust graphical interfaces make wireless LAN deployment and operations simple and cost-effective. Detailed trending and analysis reports make Cisco WCS vital to ongoing network operations. Cisco WCS includes tools for wireless LAN planning and design, RF management, location tracking, Intrusion Prevention System (IPS), and wireless LAN systems configuration, monitoring, and management."

The Wireless Location Appliance mentioned in that description actually tracks the physical location of your wireless network users.

The Location Appliance And RF Fingerprinting

Your fingerprints can prove who you are; they can also prove who you are not. In a similar vein, a device's RF Fingerprint can prove that it is a legitimate access point - or prove that it is not! All of the devices in our WLAN have a role in RF Fingerprinting. The APs themselves will collect Received Signal Strength Indicator (RSSI) information, and will send that information to the WLAN Controller (WLC) via LWAPP.
In turn, the WLAN Controller will send the RSSI information it receives from the APs to the Location Appliance. Note that Simple Network Management Protocol is used to do this; make sure not to block SNMP communications between the two devices.
What else can be tracked in the Location Appliance?
Laptop and palm clients
RFID (Radio Frequency Identification) asset tags
VoIP clients
The CiscoWorks Wireless LAN Solution Engine

There is an easier way to manage autonomous networks - the CiscoWorks Wireless LAN Solution Engine (WLSE). Cisco's website defines this product as "a centralized, systems-level application for managing and controlling an entire autonomous Cisco WLAN infrastructure". The CiscoWorks WLSE acts as the manager of the autonomous APs. If there's a need to change the config on the APs, we've got two choices:
Perform the change on each individual AP
Perform the change on the WLSE
Not much of a choice there! CiscoWorks WLSE has quite a few features to help make our WLANs run smoothly:
Proactive monitoring of thresholds, alerting the admin to potential issues before they become critical, which assists with capacity planning and monitoring network performance as new clients are added
Reporting and tracking features to help with problem diagnosis, troubleshooting, and resolution
Centralized AP configs that allow us to change multiple AP configs simultaneously
Execution of multiple firmware upgrades simultaneously
Creation of templates that can be used to quickly configure new APs
Effective detection of rogue APs, with the option to either shut the rogue down or alert the admin and let the admin handle the rogue shutdown
When an AP is lost, WLSE will tell that AP's neighbors to increase their cell coverage ("self-healing network")
There are two versions of WLSE. The full version (generally referred to as simply "WLSE") can manage a maximum of 2500 devices. WLSE Express is for smaller networks that have 100 or fewer devices to manage. If you're using WLSE Express, you'll need to set up an AAA server.

Once the deployment is complete, the infrastructure APs are communicating with the WDS AP, and the WDS AP is in turn sending any necessary information to CiscoWorks WLSE.
The limit on the number of APs is determined by the device in use as the WDS:
If the WDS device is an AP, the limit is 60.
If it's an Integrated Services Router, the limit is 100.
If it's a switch running the WLSM (Wireless LAN Services Module), the limit is 600.
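Those per-device limits fit naturally in a small lookup table. A sketch using the figures above (the key names are informal labels, not Cisco terminology):

```python
# Theoretical AP limits by the device acting as WDS, per the list above.
WDS_AP_LIMITS = {
    "access_point": 60,
    "integrated_services_router": 100,
    "switch_with_wlsm": 600,
}

def max_aps(wds_device_type):
    """Return the theoretical AP limit for a given WDS device type."""
    return WDS_AP_LIMITS[wds_device_type]

print(max_aps("switch_with_wlsm"))   # 600
```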
Remember that all limits are theoretical and your mileage may vary!

Wireless Repeaters

You don't see many "wired" repeaters in today's networks, but wireless repeaters are a common sight in today's wireless networks. From the Linksys website, here's their description / sales pitch for one of their wireless repeaters:

"Unlike adding a traditional access point to your network to expand wireless coverage, the does not need to be connected to the network by a data cable. Just put it within range of your main access point or wireless router, and it "bounces" the signals out to remote wireless devices. This "relay station" or "repeater" approach saves wiring costs and helps to build wireless infrastructure by driving signals into even those distant, reflective corners and hard-to-reach areas where wireless coverage is spotty and cabling is impractical."

We all know that when it comes to range and throughput capabilities, vendors do tend to state maximum values. Having said that, the following values are commonly accepted as true when it comes to wireless repeaters.

The overlap of coverage between a wireless repeater and a wired AP should be much greater than the overlap between two APs. The repeater and AP coverage should overlap by at least 50 percent. From personal experience, I can vouch for the fact that this is a minimum. The repeater must use the same RF channel as the wired AP, and naturally must share the same SSID.
Since the repeater must receive and repeat every frame on the same channel, there is a sharp decrease in overall performance. You should expect the throughput to be cut by about 50%. An Autonomous AP can serve as a wireless repeater, but a Lightweight AP cannot.

The Cisco Aironet Desktop Utility

The ADU is a very popular choice for connecting to APs, so let's take a detailed look at our options with this GUI. As you'll see in the following pages, the ADU allows us to do the following:

Configure an encryption scheme
Establish an association between the client and one or more APs, as well as list the APs in order of preference for that association
Configure authentication methods and passphrases
Enable or disable the local client's radio capabilities

The install process is much like any other software program's, but here's a specific warning I'd like you to see.
After clicking Next, you'll be prompted to decide if you're using the ADU or the Microsoft tool. While the MS tool is okay - you can still see the Tray Utility, which we'll discuss later, and perform some other basic tasks - using the ADU does give you config options and capabilities that the MS tool does not. For example, you can disable the radio capability of the client with the ADU, but not with the Microsoft tool. I've used both and I much prefer the ADU.

Once the install's done, we launch the ADU, which opens to the Current Status tab.

Note: If you print this section, you may see some choices that look lighter than others. That simply means they're grayed out in the application, and it's a good idea to note when certain choices are available and when they're not!
Clicking on the Advanced tab shows more detailed information regarding the APs, including the AP Name, IP address, and MAC address.
The Profile Management tab allows us to create additional profiles as well as edit the Default profile.
One limitation of this particular software is that only one card can be used at a time - but we can create up to 16 profiles! This allows you to create one profile for office use, another for home, another for hot spots, etc. In this example, we'll look at the options for modifying the Default profile. After clicking Modify, we'll see these tabs:
The Security tab is what we're most interested in, since we have quite a few options there. Here's the default setting...None.
In ADU, all drop-down and check boxes are only enabled if they're related to the security option you've chosen. Since None is selected by default, everything else on the screen is disabled.
When I select WPA/WPA2/CCKM, some options become available. (CCKM is Cisco Centralized Key Management, which allows roaming users to roam between APs very quickly - according to their website, in less than 150 milliseconds.)
I clicked on the drop-down box to illustrate that the WPA/WPA2/CCKM EAP choices are now available. You can't see it due to that drop-down box, but the 802.1x choices are still unavailable. After clicking Configure, here are the options we're presented with.
The next available security option is WPA/WPA2 Passphrase. Note that once I choose that option, both EAP drop-down boxes are again disabled.
Clicking Configure presents us with only one option, and it's the one we'd expect.
Let's go back to the main window and select 802.1x.
Note the WPA/WPA2/CCKM EAP selections are still disabled, but the dot1x EAP window is now enabled. If we click Configure, the EAP choices are the same as they were when we selected WPA/WPA2/CCKM EAP - except for Host-Based EAP, which is only available with 802.1x.
The previous methods have the authentication server generate a key and then pass that key to the client, but what if we want to configure the keys
ourselves? We simply use the aptly-named Pre-Shared Key option. Let's take a look at the Pre-Shared Key values. I went back to the main screen, chose Pre-Shared Key, and again both EAP drop-down boxes were disabled. I then clicked Configure and here's the result - simple enough!
Naturally, a WEP key configured here must match that of the AP you want the client to associate with. Ad Hoc networks are fairly rare today, but if you're working without an AP and using WEP keys, the key must be agreed upon by each client in the Ad Hoc network. (This tends to be the trickiest part of configuring an Ad Hoc network!)

A couple of points to remember from the Security Options tab:

The default is None
Drop-down boxes are enabled only if you choose an option related to that box - when we chose WPA/WPA2/CCKM, the dot1x EAP box was disabled, and vice versa

The Advanced tab has some options that you'll generally leave at the defaults, but let's take a look at them anyway!
If you want to list your APs in order of preference, click Preferred APs and then enter their MAC addresses in the following fields.
Configuring preferred APs does not mean that your client is limited to these APs. If your client is unable to form an association with any APs specified here, the client can still form an association with other APs.
The Aironet System Tray Utility

We're all familiar with the generic icon on a laptop or PC that shows us how strong (or weak) our wireless signal is. The Aironet System Tray Utility (ASTU) gives us that information and a lot more. Instead of just indicating how strong the wireless signal is, the icon will change color to indicate signal strength and other important information. At the beginning of the ADU install, we saw this window, followed by a prompt to choose the Cisco tool or a third-party tool:
A reminder - you can still see the ASTU if you're working with the Microsoft utility, but the ADU's overall capabilities are diminished. Naturally, Cisco recommends you use the ADU. Having used both, I agree!

The only problem with the ASTU is that the colors aren't exactly intuitive, so we'd better know what they mean. Here's a list of ASTU icon colors and their meanings.

Red - This does not mean that you don't have a connection to an access point! It means that you do have connectivity to an AP, and you are authenticated via EAP if necessary, but that the signal strength is low.
Yellow - Again, you are connected to an AP and are authenticated if necessary, but signal strength is fair.

Green - Connection to AP is present, EAP authentication is in place if necessary, and signal strength is very good.

Light Gray - Connection to AP is present, but you are *not* EAP-authenticated.

Dark Gray - No connection to AP is present.

White - Client adapter is disabled.

If you're connecting to an ad hoc network, just substitute "remote client" for "AP" in the above list. The key is to know that red, green, and yellow refer to signal strength, light gray indicates a lack of EAP authentication, dark gray means there is no connection to an AP or remote client, and white means the adapter is disabled.

Interpreting The Lights On A Cisco Aironet Adapter Card

We have two lights on a Cisco Aironet card. The green light is the Status LED, and the amber light is the Activity LED. We've got quite a few combinations with those two lights, so let's take a look at what each of the following LED readouts indicates.

Status off, Activity off - Naturally, this means the card isn't getting power!

Status blinking slowly, Activity off - the adapter's in Power Save mode.

Status on, Activity off - the adapter has come out of Power Save mode.

Both lights blinking in an alternating fashion - the adapter is scanning for its network.

Both lights blinking slowly at the same time - the adapter has successfully associated with an AP (or another client if you have an Ad Hoc network).

Both lights blinking quickly at the same time - the adapter is associated and is sending or receiving data.

Tips On Configuring The WLAN Controller

Many Cisco products can now be configured via a GUI or at the CLI, and
WLAN Controllers are no exception. The GUI is actually built into the controller, and allows up to five admins to browse the controller simultaneously.

Real-world note: If you're on a controller with four other admins, make sure you're all talking to each other while you're on there. Nothing is more annoying than configuring something and having someone else remove the config.

The GUI allows you to use HTTP or HTTPS, but Cisco recommends you enable only HTTPS and disable HTTP access. To enable or disable HTTP access, use the config network webmode (enable / disable) command. To enable or disable HTTPS access, use the config network secureweb (enable / disable) command.

Cisco has an excellent online PDF you can use as a guide to get started with a WLAN controller configuration - how to connect, console default settings, etc. Links tend to change so I will not post it here, but to get a copy, just do a quick search on "cisco wireless lan controller configuration guide". It's not required reading for the exam, but to learn more about WLAN controllers, it's an excellent read.

An Introduction To Mesh Networks - And An Age-Old Problem

A wireless mesh network is really just what it sounds like - a collection of access points that are logically connected in a mesh topology, such as the following.
Real-world note: Not all APs can serve as a mesh AP. The most popular mesh AP today is probably the Cisco Aironet 1500 series.

This is obviously a very small mesh network, but several APs have multiple paths to the AP that has a connection to the WLC. From our CCNA studies, we already know that we need a protocol to determine the optimal path - and it's not the Spanning Tree Protocol. The Cisco-proprietary Adaptive Wireless Path Protocol (AWPP) will discover neighboring APs and decide on the best path to the wired network by determining the quality of each path and choosing the highest-quality path.

Much like STP, AWPP will continue to run even after the optimal path (the "root path") to the wired network from a given AP is chosen. AWPP will continually calculate the quality of the available paths, and if another path becomes more attractive, that path will be chosen as the root path. Likewise, if the root path becomes unavailable, AWPP can quickly select another root path.

Avoid A Heap Of Trouble With H-REAP

The almost-ridiculously named Hybrid Remote Edge Access Point can really help a remote location keep its wireless access when its access point loses sight of its Controller. The H-REAP is an atypical controller-based AP. When your average AP can't see its own WLC any longer, it can't offer wireless to its clients.
When an H-REAP encounters that situation, it begins to act like an autonomous AP - an AP that can offer wireless with no help from anyone or anything else. Config of an H-REAP is beyond the scope of the CCNP SWITCH exam, but if you have a need for a wireless solution for remote sites that can't afford to have wireless services unavailable, check this solution out!
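Stepping back to AWPP for a moment: its continual re-election of the root path - always the highest-quality usable path to the wired network - can be sketched like this. The path names and quality scores are invented for illustration; AWPP's real path metric is more involved.

```python
def awpp_root_path(path_quality):
    """Given {path_name: quality_score_or_None}, return the root path:
    the highest-quality path that's still usable (None = unavailable)."""
    usable = {p: q for p, q in path_quality.items() if q is not None}
    return max(usable, key=usable.get) if usable else None

paths = {"via-AP2": 80, "via-AP3": 65}
print(awpp_root_path(paths))     # via-AP2
paths["via-AP2"] = None          # the root path fails...
print(awpp_root_path(paths))     # via-AP3 - a new root path is elected
```

Re-running the selection after every quality update is what makes the protocol self-healing: when the current root path degrades or disappears, the next-best path takes over automatically.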
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
Network Design And Models

Overview
Cisco's Three-Layer Hierarchical Model
The Core Layer
The Distribution Layer
The Access Layer
The Enterprise Composite Network Model
The Server Farm Block
The Network Management Block
The Enterprise Edge & Service Provider Edge Block
PPDIOO
In this section, you're going to be reintroduced to a networking model you first saw in your CCNA studies. No, it's not the OSI model or the TCP/IP model - it's the Cisco Three-Layer Hierarchical Model. About all you had to do for the CCNA was memorize the three layers and the order they were found in that model, but the stakes are raised here in your CCNP studies. You need to know what each layer does, and what each layer should not be doing. This is vital information for your real-world network career as well, so let's get started with a review of the Cisco three-layer model, and then we'll take a look at each layer's tasks. Most of the considerations at each layer are common sense, but we'll go over them anyway!
The Cisco Three-Layer Hierarchical Model
The Core Layer

The term core switches refers to any switches found at the core layer. Switches at the core layer allow switches at the distribution layer to communicate, and this is more than a full-time job. It's vital to keep any extra workload off the core switches and allow them to do what they need to do - switch!

The core layer is the backbone of your entire network, so we're interested in high-speed data transfer and very low latency - optimizing data transport is the core layer's entire job. Today's core switches are generally multilayer switches - switches that can handle both the routing and switching of data. The throughput of core switches must be high, so examine your particular network's requirements and switch documentation thoroughly before making a decision on purchasing core switches. We want our core switches to handle switching, and let distribution-layer switches handle routing.

Core layer switches are usually the most powerful in your network, capable of higher throughput than any other switches in the network. Remember, everything we do on a Cisco router or switch has a cost in
CPU or memory, so we're going to leave most frame manipulation and filtering to other layers. The exception is Cisco QoS, or Quality of Service. Advanced QoS is generally performed at the core layer. We'll go into much more detail regarding QoS in another section, but for now, know that QoS is basically high-speed queuing where special consideration can be given to certain data in certain queues. Leave ACLs and other filters for other parts of the network. We always want redundancy, but we want a great deal of it at the core layer. This is the nerve center of your entire network, so fault tolerance needs to be as high as you can possibly get it. Root bridges should also be located in the core layer whenever possible.
The Distribution Layer The demands on switches at this layer are high. The access-layer switches are all going to have their uplinks connecting to these switches, so not only do the distribution-layer switches have to have high-speed ports and links, they've got to have quite a few to connect to both the access and core switches. That's one reason you'll find powerful multilayer switches at this layer - switches that work at both L2 and L3. Distribution-layer switches must be able to handle redundancy for all links as well. Examine your network topology closely and check vendor documentation before making purchasing decisions on distribution-layer switches. The distribution layer is also where routing should take place when utilizing multilayer switches, since the access layer is busy with end users and we want the core layer to be concerned only with switching, not routing. While QoS is often found operating at the core layer, you'll find it in the distribution layer as well. The distribution layer also serves as the boundary for broadcasts and multicasts, thanks to the L3 devices found here. (Recall from your CCNA studies that Layer 3 devices do not forward broadcasts or multicasts.) The Access Layer End users communicate with the network at this layer.
VLAN membership is handled at this layer, as well as traffic filtering and basic QoS. Redundancy is important at this layer as well - hey, when isn't redundancy important? - so redundant uplinks are vital. The uplinks should also be scalable to allow for future network growth. You also want your access layer switches to have as many ports as possible, and again, plan for future growth. A 12-port switch may be fine one week, but a month from now you might just wish you had bought a 24-port switch. A good rule of thumb for access switches is "low cost, high switchport-to-user ratio". Don't assume that today's sufficient port density will be just as sufficient tomorrow! You can perform MAC address filtering at the access layer, although hopefully there are easier ways for you to perform the filtering you need. (MAC filtering is a real pain to configure.) Collision domains are also formed at the access layer.
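To make these access-layer duties concrete, here's a minimal configuration sketch of a typical access port; the interface number, VLAN number, and port-security settings are invented purely for illustration:

```
! Hypothetical access port - interface and VLAN numbers are examples only
Switch(config)#interface fastethernet0/1
Switch(config-if)#switchport mode access
! VLAN membership is assigned right here at the access layer
Switch(config-if)#switchport access vlan 10
! One form of MAC address filtering: port security
Switch(config-if)#switchport port-security
Switch(config-if)#switchport port-security maximum 1
Switch(config-if)#switchport port-security violation shutdown
```

Port security is only one way to filter on MAC addresses at this layer, but it's the one most commonly seen on access ports.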
The Enterprise Composite Network Model This model is much larger than the Cisco three-layer model, as you'll see in just a moment. I want to remind you that networking models are guidelines, and should be used as such. This is particularly true of the Enterprise Composite Network Model, which is one popular model used to design campus networks. A campus network is basically a series of LANs that are interconnected by a backbone. Before we look at this model, there's some terminology you should be familiar with. Switch blocks are units of access-layer and distribution-layer devices. These layers contain both the traditional L2 switches (found at the access layer) and multilayer switches, which have both L2 and L3 capabilities (found at the distribution layer). Devices in a switch block work together to bring network access to a unit of the network, such as a single building on a college campus or in a business park. Core blocks consist of the high-powered core switches, and these core blocks allow the switch blocks to communicate. This is a tremendous responsibility, and it's the major reason that I'll keep mentioning that we
want the access and distribution layers to handle as many of the "extra" services in our network as possible. We want the core switches to be left alone as much as possible so they can concentrate on what they do best - switch. The design of such a network is going to depend on quite a few factors - the number of LANs involved and the physical layout of the building or buildings being just two of them - so again, remember that these models are guidelines. Helpful guidelines, though! The Enterprise Composite Network Model uses the term block to describe the three layers of switches we just described. The core block is the collection of core switches, which is the backbone mentioned earlier. The access and distribution layer switches are referred to as the switch blocks. Overall, there are three main parts of this model:
The Enterprise Campus The Enterprise Edge The Service Provider Edge
The Enterprise Campus consists of the following modules:
Campus Infrastructure module Server Farm module Network Management module Enterprise Edge (yes, again)
In turn, the Campus Infrastructure module consists of these modules:
Building Access module (Access-layer devices) Building Distribution module (Distribution-layer devices) Campus Backbone (Interconnects multiple Distribution modules)
Let's take a look at a typical campus network and see how these block types all tie in. How The Switch Blocks And Core Blocks Work Together
The smaller switches in the switch block represent the access-layer switches, and these are the switches that connect end users to the network. The distribution-layer switches are also in the switch block, and these are the switches that connect the access switches to the core. All four of the distribution layer switches shown have connections to both switches in the core block, giving us the desired redundancy. The core block serves as the campus backbone, allowing switches in the LAN 1 Switch Block to communicate with switches in the LAN 2 Switch Block. The core design shown here is often referred to as dual core, referring to the redundant fashion in which the switch blocks are connected to the core block. The point at which the switch block ends and the core block begins is very clear. A smaller network may not need switches to serve only as core switches, or frankly, may not be able to afford such a setup. Smaller networks can use a collapsed core, where certain switches will perform both as distribution and core switches.
In a collapsed core, there is no dedicated core switch. The four switches at the bottom of the diagram are serving as both core and distribution layer switches. Note that each of the access switches has redundant uplinks to both distribution / core switches in its switch block. The Server Farm Block As much as we'd like to get rid of them sometimes, we're not going to have much of a network without servers! In a campus network, the server farm block will be a separate switch block, complete with access and distribution layer switches. The combination of access, distribution, and core layers shown here is sometimes referred to as the Campus Infrastructure.
Again, the distribution switches have redundant connections to the core switches. So far we have a relatively small campus network, but you can already get a good idea of the sheer workload the core switches will be under. The Network Management Block Network management tools are no longer a luxury - in today's networks, they're a necessity. AAA servers, syslog servers, network monitoring tools, and intruder detection tools are found in almost every campus network today. All of these devices can be placed in a switch block of their own, the network management block.
Now our core switches have even more to contend with - but we're not quite done yet. We've got our end users located in the first switch blocks, we've got our server farm connected to the rest of the network, we've got our all-important network management and security block set up... what else do we need? Oh yeah.... internet connectivity! (And WAN access!) Two blocks team up to bring our end users those services - the Enterprise Edge Block and the Service Provider Edge Block.
Internet and WAN connectivity for a campus network is a two-block job - one block we have control over, the other we do not. The Enterprise Edge Block is indeed the edge of the campus network, and this block contains the routers and switches needed to provide WAN connectivity to the rest of the campus network. While the Service Provider Edge Block is considered part of the campus network model, we have no control over the actual structure of this block. And frankly, we don't really care! The key here is that this block borders
the Enterprise Edge Block, and is the final piece of the Internet connectivity puzzle for our campus network. Take a look at all the lines leading to those core switches. Now you know why we want to dedicate as much of these switches' capabilities to pure switching - we're going to need it! PPDIOO Now there's an acronym. PPDIOO is a Cisco lifecycle methodology, and it stands for... Prepare. At this stage, we're answering the musical questions "What is our final goal, what hardware do we need to get there, and how much is this going to cost?" The questions here are broad. Plan. You're still asking questions, they're just a bit different. "What does the client have now, can the current network support the network we want to have, and if so, what steps do we need to take to get there?" At this point, the questions are getting more specific. Design. Now we're really getting detailed. "How exactly are we going to build this network?" Implement. The design becomes a reality. Operate. The (hopefully) mundane day-to-day network operation. Optimize. "What current operations could we be doing in a more efficient manner?" Here's a link to a Cisco PDF on this topic. Not required reading for the exam, but it certainly couldn't hurt. Warning: buzzwords ahead. http://bit.ly/beuZBg Copyright © 2010 The Bryant Advantage. All Rights Reserved.
Queueing And Compression Overview First In, First Out (FIFO) Flow-Based Weighted Fair Queueing (WFQ) Class-Based Weighted Fair Queueing (CBWFQ) CBWFQ, Packet Drop, And TCP Global Synchronization Random Early Detect & Weighted Random Early Detect Low Latency Queueing (LLQ) Priority Queueing (PQ) Custom Queueing (CQ) Queueing Summary Choosing A Queueing Strategy Header And Payload Compression
We covered CoS and IP Telephony QoS in another section, but there's a chance that you'll see some more general QoS questions on your CCNP SWITCH exam. With that in mind, here's a bonus chapter on QoS! In today's networks, there's a huge battle for bandwidth. You've got voice traffic, video traffic, multicasts, broadcasts, unicasts.... and they're all fighting to get to the head of the line for transmission! The router's got to
make a decision as to which traffic should be treated with priority, which traffic should be treated normally, and which traffic should be dumped if congestion occurs. Cisco routers offer several options for this queueing procedure, and it won't surprise you to know that you need to know quite a few of them to become a CCNP! Beyond certification, it's truly important to know what's going on with a network's queues - and the only way to learn queueing is to dive right in, so let's get started! Here's a (very) basic overview of the queuing dilemma facing a router:
Three different kinds of traffic, and they all want to be transmitted first by the router. Of course, we could break this down further by specifying the sender and receiver - "if Host A sends data to Host B, send that first". Developing a successful queuing strategy takes time and planning, because not all this data can go first. First In, First Out FIFO is just what it sounds like - there is no priority traffic, no traffic classes, no queueing decision for the router to make. This is the default for all Cisco router interfaces, with the exception of Serial interfaces running at E1 (2.048 Mbps) speed or below. FIFO is fine for many networks, and if you have no problem with network congestion, FIFO may be all you need. If you've got traffic that's especially time-sensitive, such as voice and video, FIFO is not your best choice.
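Before changing anything, it's worth checking what queueing strategy an interface is using right now. A quick sketch, assuming a serial interface named serial0:

```
! The "Queueing strategy:" line of show interfaces reveals the current scheme
R1#show interfaces serial0
! Removing fair queueing returns a serial interface to plain FIFO
R1(config)#interface serial0
R1(config-if)#no fair-queue
```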
wait. These flows are defined by WFQ and require no access list configuration. Flow-based WFQ is the default queueing scheme for Serial interfaces running at E1 speed or below. Flow-Based WFQ takes these packet flows and classifies them into conversations. WFQ gives priority to the interactive, low-bandwidth conversations, and then splits the remaining bandwidth fairly between the non-interactive, high-bandwidth conversations. In the following exhibit, a Telnet flow reaches the router at the same time as two FTP flows. Telnet is low-volume, so the Telnet transmission will be forwarded first. The two remaining file transfers will then be assigned a comparable amount of bandwidth. The packets in the two file transfers will be interleaved - that is, some packets for Flow 1 will be sent, then some for Flow 2, and so on. The key here is that one file transfer flow will not have priority over the other.
Enabling flow-based WFQ is simple enough. We don't even have to configure it on the following Serial interface, since WFQ is enabled by default on all serial interfaces running at or below E1 speed, but let's walk through the steps:

R1(config)#int serial0
R1(config-if)#fair-queue ?
  Congestive Discard Threshold
The Congestive Discard Threshold dictates the number of packets that can be held in a single queue. The default is 64. Let's change it to 200.
R1(config-if)#fair-queue 200
To verify your queuing configuration, run show queue followed by the interface type and number.

R1#show queue serial0
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/200/0 (size/max total/threshold/drops)
     Conversations 0/0/256 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 1158 kilobits/sec

IOS Help shows other WFQ options:

R1(config-if)#fair-queue 200 ?
  Number Dynamic Conversation Queues
The Dynamic Conversation Queues are used for normal, best-effort conversations. We'll change that to 200 as well.

R1(config-if)#fair-queue 200 200
Number of dynamic queues must be a power of 2 (16, 32, 64, 128, 256, 512, 1024)
Then again, maybe we won't. Let's change it to 256 instead and use IOS Help to show any other options.

R1(config-if)#fair-queue 200 256 ?
  Number Reservable Conversation Queues
The final WFQ option is the number of Reservable Conversation Queues. The default here is zero. These queues are used for specialized queueing and Quality of Service features like the Resource Reservation Protocol (RSVP). We'll set this to 100.

R1(config-if)#fair-queue 200 256 100
show queue verifies that all three of these values have been successfully set, as does show queueing fair.

R1#show queue serial 0
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/200/0 (size/max total/threshold/drops)
     Conversations 0/0/256 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 1158 kilobits/sec
What Prevents WFQ From Running? Earlier in this section, I mentioned that serial interfaces running at E1 speed or lower will run WFQ by default. However, if any of the following features are running on the interface, WFQ will not be the default.
Tunnels, Bridges, Virtual Interfaces Dialer interfaces, LAPB, X.25
Class-Based Weighted Fair Queuing The first reaction to WFQ is usually something like this: "That sounds great, but shouldn't the network administrator be deciding which flows should be transmitted first, rather than the router?" Good question! There's an advanced form of WFQ, Class-Based Weighted Fair Queuing (CBWFQ), that allows manual configuration of queuing - and CBWFQ does involve access list configuration. Since the name is the recipe, the first step in configuring CBWFQ is to create the classes themselves. If you've already passed your BCRAN exam, this will all look familiar to you. If not, no problem at all, we'll take a step-by-step approach to CBWFQ. We'll first define two classes, one that will be applied to TCP traffic sourced from 172.10.10.0 /24, and another applied to FTP traffic from 172.20.20.0 /24. The first step is to write two separate ACLs, with one matching the first source and another matching the second. Don't write one ACL matching both.

R1(config)#access-list 100 permit tcp 172.10.10.0 0.0.0.255 any
R1(config)#access-list 110 permit tcp 172.20.20.0 0.0.0.255 any eq ftp
Now two class maps will be written, each calling one of the above ACLs.

R1(config)#class-map 17210100
R1(config-cmap)#match access-group 100
R1(config)#class-map 17220200
R1(config-cmap)#match access-group 110
By the way, we've got quite a few options for the match statement in a class map, and up to 64 classes can be created:

R1(config-cmap)#match ?
  access-group         Access group
  any                  Any packets
  class-map            Class map
  cos                  IEEE 802.1Q/ISL class of service/user priority values
  destination-address  Destination address
  input-interface      Select an input interface to match
  ip                   IP specific values
  mpls                 Multi Protocol Label Switching specific values
  not                  Negate this match result
  protocol             Protocol
  qos-group            Qos-group
  source-address       Source address
At this point, we've created two class maps that aren't really doing anything except matching the access list. The actual values applied to the traffic are contained in our next step, the policy map.

R1(config)#policy-map CBWFQ
R1(config-pmap)#class 17210100
R1(config-pmap-c)#?
QoS policy-map class configuration commands:
  bandwidth       Bandwidth
  exit            Exit from QoS class action configuration mode
  no              Negate or set default values of a command
  priority        Strict Scheduling Priority for this Class
  queue-limit     Queue Max Threshold for Tail Drop
  random-detect   Enable Random Early Detection as drop policy
  service-policy  Configure QoS Service Policy
  shape           Traffic Shaping
  police          Police
The values we'll set for both classes are the bandwidth and queue-limit values. For traffic matching class 17210100, we'll assign bandwidth of 400 and a queue limit of 50 packets; for traffic matching class 17220200, we'll assign bandwidth of 200 and a queue limit of 25 packets. The bandwidth assigned to a class is the value CBWFQ uses to assign weight.
The more bandwidth assigned to a class, the lower the weight The lower the weight, the higher the priority for transmission
R1(config)#policy-map CBWFQ
R1(config-pmap)#class 17210100
R1(config-pmap-c)#bandwidth 400
R1(config-pmap-c)#queue-limit 50
R1(config-pmap-c)#class 17220200
R1(config-pmap-c)#bandwidth 200
R1(config-pmap-c)#queue-limit 25
If no queue limit is configured, the default of 64 is used.
Finally, we need to apply this policy map to the interface! As with ACLs, a Cisco router interface can have one policy map affecting incoming traffic and another affecting outgoing traffic. We'll apply this to traffic leaving Serial0.

R1(config)#int s0
R1(config-if)#service-policy ?
  history  Keep history of QoS metrics
  input    Assign policy-map to the input of an interface
  output   Assign policy-map to the output of an interface
R1(config-if)#service-policy output CBWFQ
Must remove fair-queue configuration first.
Here's a classic "gotcha" - to apply a policy map, you've got to disable WFQ first. The router will be kind enough to tell you that. The exam probably won't be that nice. :) Remove WFQ with the no fair-queue command, then we can apply the policy map.

R1(config-if)#no fair-queue
R1(config-if)#service-policy output CBWFQ
To view the contents of a policy map, run show policy-map.

R1#show policy-map CBWFQ
  Policy Map CBWFQ
    Class 17210100
      Bandwidth 400 (kbps) Max Threshold 50 (packets)
    Class 17220200
      Bandwidth 200 (kbps) Max Threshold 25 (packets)
CBWFQ configuration does have its limits. By default, you can't assign over 75% of an interface's bandwidth via CBWFQ, because 25% is reserved for network control and routing traffic. To illustrate, I've rewritten the previous policy map to double the requested bandwidth settings. When I try to apply this policy map to the serial interface, I get an interesting message:

R1#show policy-map
  Policy Map CBWFQ
    Class 17210100
      Bandwidth 800 (kbps) Max Threshold 50 (packets)
    Class 17220200
      Bandwidth 400 (kbps) Max Threshold 25 (packets)

R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#interface serial0
R1(config-if)#service-policy output CBWFQ
Serial0 class 17220200 requested bandwidth 400 (kbps) Available only 358 (kbps)
Why is 358 kbps all that's available? Start with the bandwidth of a serial interface, 1544 kbps. Only 75% of that bandwidth can be assigned through CBWFQ, and 1544 x .75 = 1158. We can assign only 1158 kbps of a T1 interface's bandwidth in the policy map. We have already assigned 800 kbps to class 17210100, leaving only 358 kbps for other classes. Keep this 75% rule in mind - it's a very common error with CBWFQ configurations. Don't jump to the conclusion that bandwidth 64 is the proper command to use when you've got a 64 kbps link and you want to enable voice traffic to use all of it. Remember that no more than 75% of the available bandwidth can be assigned, and don't forget all the other services that will need bandwidth as well! If you really need to change this reserved percentage - and you should have a very good reason before doing so - use the max-reserved-bandwidth command on the interface. The following configuration changes the reservable bandwidth to 85%.

R1(config-if)#max-reserved-bandwidth ?
  Max. reservable bandwidth as % of interface bandwidth
R1(config-if)#max-reserved-bandwidth 85
The "reservable bandwidth" referenced in this command isn't just the bandwidth assigned in CBWFQ. It also includes bandwidth allocated for the following:
Low Latency Queuing (LLQ) IP Real Time Protocol (RTP) Priority Frame Relay IP RTP Priority Frame Relay PVC Interface Priority Queuing Resource Reservation Protocol (RSVP)
CBWFQ And Packet Drop Earlier in this section, we used the queue-limit command to dictate how many packets a queue could hold before packets would have to be dropped. Below is part of that configuration, and for this particular class the queue is limited to holding 50 packets.

R1(config)#policy-map CBWFQ
R1(config-pmap)#class 17210100
R1(config-pmap-c)#bandwidth 400
R1(config-pmap-c)#queue-limit 50
If the queue is full, what happens? No matter how efficient your queuing strategy, sooner or later, the router is going to drop some packets. The default method of packet drop with CBWFQ is tail drop, and it's just what it sounds like - packets being dropped from the tail end of the queue.
Tail drop may be the default, but there are two major issues with it. First, this isn't a very discriminating way to drop traffic. What if this were voice traffic that needed to go to the head of the line? Tail drop offers no mechanism to look at a packet and decide that a packet already in the queue should be dropped to make room for it. The other issue with tail drop is TCP global synchronization. This is a result of TCP's behavior when packets are lost.
Packets dropped due to tail drop result in the TCP senders reducing their transmission rate. As the transmission slows, the congestion is reduced. All TCP senders will gradually increase their transmission speed as a result of the reduced congestion - which results in congestion occurring all over again.
The result of TCP global synchronization? When the TCP senders simultaneously slow their transmission, the bandwidth is underutilized. When the TCP senders all increase their transmission rate at the same time, the bandwidth is oversubscribed, packets are dropped and must be retransmitted, and the entire process begins all over again. Basically, the senders are either sending too little or too much traffic at any given time.
To avoid the TCP Global Synchronization problems, Random Early Detection (RED) or Weighted Random Early Detection (WRED) can be used in place of Tail Drop. RED will proactively drop packets before the queue gets full, but the decision of which packets will be dropped is still random. WRED uses either a packet's IP Precedence or Differentiated Services Code Point (DSCP) to decide which packets should be dropped. WRED gives the best service it can to packets in a priority queue. If the priority queue becomes full, WRED will drop packets from other queues before dropping any from the priority queue. The random-detect command is used to enable WRED. You know it can't be just that simple, right? You must keep in mind that when WRED is configured as part of a class in a policy map, WRED must not be running on the same interface that the policy is going to be applied to.

R1(config)#policy-map CBWFQ_WRED
R1(config-pmap)#class 17210100
R1(config-pmap-c)#bandwidth 400
R1(config-pmap-c)#random-detect
R1(config-pmap-c)#random-detect ?
  dscp                            parameters for each dscp value
  dscp-based                      Enable dscp-based WRED as drop policy
  exponential-weighting-constant  weight for mean queue depth calculation
  prec-based                      Enable precedence-based WRED as drop policy
  precedence                      parameters for each precedence value
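Building on that help output, here's a brief sketch of choosing DSCP-based WRED for a class; the policy and class names are reused from the earlier CBWFQ example, and the choice of dscp-based is just for illustration:

```
R1(config)#policy-map CBWFQ_WRED
R1(config-pmap)#class 17210100
R1(config-pmap-c)#bandwidth 400
! Base the drop decision on DSCP values rather than the default IP Precedence
R1(config-pmap-c)#random-detect dscp-based
```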
Both RED and WRED are useful only when the traffic in question is TCP-based. Low Latency Queueing CBWFQ is definitely a step in the right direction, but what we're looking for is a guarantee (or something close to it) that data adversely affected by delays is given the highest priority possible. Low Latency Queuing (LLQ) is an "add-on" to CBWFQ that creates such a strict priority queue for such traffic, primarily voice traffic, allowing us to avoid the jitter that comes with voice traffic that is not given the needed priority queuing. (Cisco recommends that you use an LLQ priority queue only to transport Voice Over IP traffic.) Since we're mentioning "priority" so often here, it shouldn't surprise you to learn that the command to enable LLQ is priority. Before we configure LLQ, there are a couple of commands and services
we've mentioned that don't play well with LLQ:
WRED and LLQ can't work together. Why? Because WRED is effective only with TCP-based traffic, and the voice traffic that will use LLQ's priority queue is UDP-based. The random-detect and priority commands can't be used in the same class. By its very nature, LLQ doesn't have strict queue limits, so the queue-limit and priority commands are mutually exclusive. Finally, the bandwidth and priority commands are also mutually exclusive.
In the following example, we'll create an LLQ policy that will place any UDP traffic sourced from 210.1.1.0 /24 and destined for 220.1.1.0 /24 into the priority queue - IF the UDP port falls in the 17000-18000 or 20000-21000 range. The priority queue will be set to a maximum bandwidth of 45 kbps. The class class-default defines what happens to traffic that doesn't match any other classes, and we'll use that class to apply fair queuing to unmatched traffic.

R2#show access-list
Extended IP access list 155
    permit udp 210.1.1.0 0.0.0.255 220.1.1.0 0.0.0.255 range 17000 18000
    permit udp 210.1.1.0 0.0.0.255 220.1.1.0 0.0.0.255 range 20000 21000

R2(config)#class-map VOICE_TRAFFIC_PRIORITY
R2(config-cmap)#match access-group 155
R2(config)#policy-map VOICE
R2(config-pmap)#class VOICE_TRAFFIC_PRIORITY
R2(config-pmap-c)#priority 45
R2(config-pmap-c)#class class-default
R2(config-pmap-c)#fair-queue
R2(config-pmap-c)#interface serial0
R2(config-if)#service-policy output VOICE
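To confirm the priority queue is actually catching traffic once the policy is applied, show policy-map interface displays per-class statistics for the applied policy (output not shown here):

```
R2#show policy-map interface serial0
```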
Priority Queuing (PQ) The "next level" of queuing is Priority Queuing (PQ), where four predefined queues exist: High, Medium, Normal, and Low. Traffic is placed into one of these four queues through the use of access lists and priority lists. The High queue is also called the strict priority queue, making
PQ and LLQ the queueing solutions to use when a priority queue is needed.
These four queues are predefined, as are their limits:
High-Priority Queue: 20 Packets Medium-Priority Queue: 40 Packets Normal-Priority Queue: 60 Packets Low-Priority Queue: 80 Packets
It won't surprise you to learn that these limits can be changed. Before we configure PQ and change these limits, there's one very important concept that you must keep in mind when developing a PQ strategy. PQ is not round-robin; when there are packets in the High queue, they're going to be sent before any packets in the lower queues. If too many traffic types are configured to go into the High and Medium queues, packets in the Normal and Low queues may never be sent! This is sometimes referred to as traffic starvation or packet starvation. (I personally think it's more like queue starvation, but the last thing we need is a third name for it.) The moral of the story: When you're configuring PQ, be very discriminating about how much traffic you place into the upper queues. Configuring PQ is simple. The queues already exist, but we need to define what traffic should go into which queue. We can use the incoming interface or the protocol to decide this, and we can also change the size of the queue with this command.

R3(config)#priority-list 1 ?
  default      Set priority queue for unspecified datagrams
  interface    Establish priorities for packets from a named interface
  protocol     priority queueing by protocol
  queue-limit  Set queue limits for priority queues
If we choose to use protocol to place packets into the priority queues, access lists can be used to further define queuing.

R3(config)#priority-list 1 protocol ?
  aarp              AppleTalk ARP
  appletalk         AppleTalk
  arp               IP ARP
  bridge            Bridging
  cdp               Cisco Discovery Protocol
  compressedtcp     Compressed TCP
  decnet            DECnet
  decnet_node       DECnet Node
  decnet_router-l1  DECnet Router L1
  decnet_router-l2  DECnet Router L2
  ip                IP
  ipx               Novell IPX
  llc2              llc2
  pad               PAD links
  snapshot          Snapshot routing support

R3(config)#priority-list 1 protocol ip ?
  high  medium  normal  low

R3(config)#priority-list 1 protocol ip high ?
  fragments  Prioritize fragmented IP packets
  gt         Prioritize packets greater than a specified size
  list       To specify an access list
  lt         Prioritize packets less than a specified size
  tcp        Prioritize TCP packets 'to' or 'from' the specified port
  udp        Prioritize UDP packets 'to' or 'from' the specified port
Let's say we want IP traffic sourced at 20.1.1.0 /24 and destined for 30.3.3.0 /27 to be placed into the High queue. We'd need to write an ACL defining that traffic and call that ACL from the priority-list command.

R3(config)#access-list 174 permit ip 20.1.1.0 0.0.0.255 30.3.3.0 0.0.0.31
R3(config)#priority-list 1 protocol ip high list 174
To place all TCP DNS traffic into the Medium queue, use the protocol option with the priority-list command. We'll use IOS Help to show us the options after the queue name.

R3(config)#priority-list 1 protocol ip medium ?
  fragments  Prioritize fragmented IP packets
  gt         Prioritize packets greater than a specified size
  list       To specify an access list
  lt         Prioritize packets less than a specified size
  tcp        Prioritize TCP packets 'to' or 'from' the specified port
  udp        Prioritize UDP packets 'to' or 'from' the specified port

R3(config)#priority-list 1 protocol ip medium tcp ?
  Port number
  bgp      Border Gateway Protocol (179)
  chargen  Character generator (19)
  cmd      Remote commands (rcmd, 514)
  daytime  Daytime (13)
  discard  Discard (9)
  < output of command edited here >

R3(config)#priority-list 1 protocol ip medium tcp 53
As you can see, the router will list many of the more common TCP ports. To place packets coming in on the Ethernet0 interface into the Normal queue, use the interface option with the priority-list command.

R3(config)#priority-list 1 interface ethernet0 normal
Finally, the default queue sizes can be changed with the queue-limit command. This is an odd little command in that if you just want to change one queue's packet limit, you still have to list the values for all four queues - and all four values must be entered in the order of high, medium, normal, and low. In the following example, we'll double the capacity of the Normal queue while retaining all other default queue sizes.

R3(config)#priority-list 1 queue-limit ?
  High limit
R3(config)#priority-list 1 queue-limit 20 ?
  Medium limit
R3(config)#priority-list 1 queue-limit 20 40 ?
  Normal limit
R3(config)#priority-list 1 queue-limit 20 40 120 ?
  Lower limit
R3(config)#priority-list 1 queue-limit 20 40 120 80
Priority queuing is applied to the interface with the priority-group command.

R3(config)#int serial0
R3(config-if)#priority-group 1
show queueing verifies that PQ is now in effect on this interface.

R3#show queueing inter serial0
Interface Serial0 queueing strategy: priority
Output queue utilization (queue/count)
   high/0 medium/0 normal/0 low/0
show queueing priority displays the priority lists that have been created, along with the changes to each queue's defaults. Note that the queue limit is only shown under Arguments ("Args") if it's been changed. Also, ACLs and port numbers in use are shown on the right.

R3#show queueing priority
Current DLCI priority queue configuration:
Current priority queue configuration:

List   Queue    Args
1      high     protocol ip
1      high     protocol ip          list 174
1      medium   protocol ip          tcp port domain
1      normal   interface Ethernet0
1      normal   limit 120
Custom Queueing (CQ) Custom Queueing (CQ) takes PQ one step further - CQ actually allows you to define how many bytes will be forwarded from every queue when it's that queue's turn to transmit. CQ doesn't have the same queues that PQ has, though. CQ has 17 queues, with queues 1 - 16 being configurable. Queue Zero carries network control traffic and cannot be configured to carry additional traffic. By default, the packet limit for each configurable queue is 20 packets and each will send 1500 bytes when it's that queue's turn to transmit.
The phrase "network control traffic" in regards to Queue Zero covers a lot of traffic. Traffic that uses Queue Zero includes
Hello packets for EIGRP, OSPF, IGRP, and IS-IS
Syslog messages
STP keepalives
CQ uses a round-robin system to send traffic. When it's a queue's turn to send, that queue will transmit until it's empty or until the configured byte limit is reached. By configuring a byte-limit, CQ allows you to allocate the desired bandwidth for any and all traffic types. Configuring CQ is basically a three-step process:
Define the size of the queues
Define what packets should go in each queue
Define the custom queue list by applying the list to the appropriate interface
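Before walking through the IOS commands, the round-robin byte-count behavior itself is worth seeing in action. Here's a minimal simulation in plain Python - not IOS - where the queue contents and byte allowances are invented purely for illustration. Note that, like real CQ, a packet already being transmitted is sent in full even if it pushes the queue past its byte allowance for that turn.

```python
from collections import deque

def custom_queue_round_robin(queues, byte_counts):
    """Simulate CQ's round-robin service order.

    queues      -- list of deques of packet sizes in bytes, one per queue
    byte_counts -- per-queue byte allowance per turn (CQ's default is 1500)
    Returns the packet sizes in the order they would be transmitted.
    """
    sent = []
    while any(queues):
        for q, allowance in zip(queues, byte_counts):
            sent_bytes = 0
            # A queue transmits until it's empty or its byte allowance is met.
            while q and sent_bytes < allowance:
                pkt = q.popleft()
                sent.append(pkt)
                sent_bytes += pkt
    return sent

# Queue 1 gets 3000 bytes per turn, Queue 2 keeps the default 1500.
q1 = deque([1000, 1000, 1000, 1000])
q2 = deque([1500, 1500])
order = custom_queue_round_robin([q1, q2], [3000, 1500])
```

Running this shows Queue 1 draining three 1000-byte packets before Queue 2 gets its single 1500-byte turn - exactly the bandwidth-allocation effect the byte-count is there to provide.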
Defining The Custom Queue Size

To change the capacity of any queue from the default of 20 packets, use the queue-list x queue y limit command. The following configuration changes Queue 1's queue limit to 100 packets.

R3(config)#queue-list 1 queue 1 limit ?
  number of queue entries
R3(config)#queue-list 1 queue 1 limit 100
Defining The Packets To Be Placed In Each Custom Queue Traffic can be placed into a given queue according to its protocol or incoming interface. If the protocol option is used, an ACL can be used to
further define the traffic. In the following example, traffic sourced from network 100.1.1.0 /25 and destined for 200.2.2.0 /28 will be placed into Queue 2.

R3(config)#access-list 124 permit ip 100.1.1.0 0.0.0.127 200.2.2.0 0.0.0.15
R3(config)#queue-list 1 protocol ip ?
  queue number
R3(config)#queue-list 1 protocol ip 2 ?
  fragments  Prioritize fragmented IP packets
  gt         Classify packets greater than a specified size
  list       To specify an access list
  lt         Classify packets less than a specified size
  tcp        Prioritize TCP packets 'to' or 'from' the specified port
  udp        Prioritize UDP packets 'to' or 'from' the specified port
R3(config)#queue-list 1 protocol ip 2 list 124
To queue traffic according to the incoming interface, use the interface option with the queue-list command. All traffic arriving on ethernet0 will be placed into Queue 4.

R3(config)#queue-list 1 interface ethernet0 4
To change the number of bytes a queue will transmit when the round-robin format allows it to, use the byte-count option. Here, we'll double the default for Queue 3.

R3(config)#queue-list 1 queue 3 byte-count 3000
A default queue can also be created as a "catch-all" for traffic that isn't matched by earlier arguments. Since this example has used queues 1 - 4, Queue 5 will be used as the default queue.

R3(config)#queue-list 1 default 5
There's one more common queue-list configuration you should know about. All traffic using a specific port number can be assigned to a specific queue. The configuration isn't the most intuitive I've seen, so let's go through a queue-list command that places all WWW traffic into Queue 3. We'll start by looking at all the options for the queue-list command.

R1(config)#queue-list 1 ?
  default        Set custom queue for unspecified datagrams
  interface      Establish priorities for packets from a named interface
  lowest-custom  Set lowest number of queue to be treated as custom
  protocol       priority queueing by protocol
  queue          Configure parameters for a particular queue
  stun           Establish priorities for stun packets
We'll use the protocol option and look at the options there.

R1(config)#queue-list 1 protocol ?
  aarp              AppleTalk ARP
  appletalk         AppleTalk
  arp               IP ARP
  bridge            Bridging
  cdp               Cisco Discovery Protocol
  compressedtcp     Compressed TCP
  decnet            DECnet
  decnet_node       DECnet Node
  decnet_router-l1  DECnet Router L1
  decnet_router-l2  DECnet Router L2
  ip                IP
  ipx               Novell IPX
  llc2              llc2
  pad               PAD links
  snapshot          Snapshot routing support
The next step is where the confusion tends to come in. After ip, the next value is the queue number itself; the protocol type comes after that.

R1(config)#queue-list 1 protocol ip ?
  queue number
R1(config)#queue-list 1 protocol ip 3 ?
  fragments  Prioritize fragmented IP packets
  gt         Classify packets greater than a specified size
  list       To specify an access list
  lt         Classify packets less than a specified size
  tcp        Prioritize TCP packets 'to' or 'from' the specified port
  udp        Prioritize UDP packets 'to' or 'from' the specified port
Finally, the port number is configured, which ends the command. I won't show all the port numbers that IOS Help will display, but it's a good idea for test day to know your common port numbers. And I don't mean just the SWITCH exam - I mean any Cisco test. You should know them by heart anyway, but five minutes of review before any exam wouldn't hurt. :)

R1(config)#queue-list 1 protocol ip 3 tcp ?
  <0-65535>  Port number
  bgp        Border Gateway Protocol (179)
  chargen    Character generator (19)
  cmd        Remote commands (rcmd, 514)
R1(config)#queue-list 1 protocol ip 3 tcp 80
Defining The Custom Queue List By Applying It To The Appropriate Interface

To apply the custom queue list to an interface, use the custom-queue-list command. To verify the configuration, run show queueing custom and show queueing interface serial0. Note that the latter command shows all 17 queues, including the control queue, Queue Zero.

R3(config)#interface serial0
R3(config-if)#custom-queue-list 1

R3#show queueing custom
Current custom queue configuration:

List   Queue   Args
1      5       default
1      2       protocol ip          list 124
1      4       interface Ethernet0
1      1       limit 100
1      3       byte-count 3000
R3#show queueing interface serial0
Interface Serial0 queueing strategy: custom
Output queue utilization (queue/count)
   0/0 1/0 2/0 3/0 4/0 5/0 6/0 7/0 8/0
   9/0 10/0 11/0 12/0 13/0 14/0 15/0 16/0
Queueing Summary

I know from experience that keeping all of these queueing strategies straight is tough when you first start studying them. I strongly advise you to get some hands-on experience configuring queueing, and here's a chapter summary to help you keep them straight. This summary is NOT a substitute for studying the entire chapter!

Weighted Fair Queueing (Flow-Based)
No predefined limit on the number of queues
Assigns weights to traffic flows
Low-bandwidth, interactive transmissions are given priority over high-bandwidth transmissions
The default queueing strategy for physical interfaces running at or less than E1 speed, AND that aren't running LAPB, SDLC, Tunnels, Loopbacks, Dialer Profiles, Bridges, Virtual Interfaces, or X.25
Priority Queueing
Four predefined queues
High priority queue traffic is always sent first, sometimes at the expense of lower queues, whose traffic may receive inadequate attention
Not the default under any circumstances - must be manually configured
A maximum of 64 classes can be defined
Custom Queueing
17 overall predefined queues; Queue Zero is used for network control traffic and cannot be configured to carry other traffic, leaving 16 configurable queues
Uses a round-robin transmission approach
A maximum of 64 classes can be defined
Not the default under any circumstances - must be manually configured
Deciding On A Queueing Strategy

The key to a successful queueing rollout is planning. Much like network design, there's no "one size fits all" solution for queueing. This is where your analytical skills come in. You're familiar with the phrase "measure twice, cut once"? You want to measure your queueing strategy at least twice before applying it on your network! This decision often comes down to whether you've got voice traffic on your network. If you do, Priority Queueing is probably your best choice. PQ offers a queue (the High queue) whose traffic will always receive the highest priority - but you must be careful not to choke out traffic in the lower queues in favor of that priority traffic. If there's no delay-sensitive traffic such as voice or video, Custom Queueing works well, since CQ allows you to configure the size of each queue as well as allocate the maximum amount of bandwidth each queue is allowed to use. In comparison to PQ and CQ, Weighted Fair Queueing requires no access-list configuration to determine priority traffic, because there isn't any priority traffic. Both low-volume, interactive traffic and higher-volume traffic such as file transfers get a fair share of bandwidth.
Link, Header, And Payload Compression Techniques

There are two basic compression types we're going to look at in this section. First, there's link compression, which compresses the header and payload of a data stream, and is protocol-independent. The second is TCP/IP header compression, and the name is definitely the recipe. When it comes to link compression, we can choose from Predictor or Stacker (STAC). The actual operation of these compression algorithms is out of the scope of this exam, but in short, the Predictor algorithm uses a compression dictionary to predict the next set of characters in a given data stream. Predictor is easier on a router's CPU than other compression techniques, but uses more memory. In contrast, Stacker is much harder on the CPU than Predictor, but uses less memory. There's a third compression algorithm worth mentioning. Defined in RFC 2118, Microsoft Point-To-Point Compression makes it possible for a Cisco router to send and receive compressed data to and from a Microsoft client. To use any of these compression techniques, use the compress interface-level command followed by the compression you want to use. Your options depend on the interface's encapsulation type. On a Serial interface using HDLC encapsulation, stacker is the only option.

R1(config)#int s0
R1(config-if)#encapsulation hdlc
R1(config-if)#compress ?
  stac  stac compression algorithm
Using PPP encapsulation on the same interface triples our options.

R1(config)#int s0
R1(config-if)#encap ppp
R1(config-if)#compress ?
  mppc       MPPC compression type
  predictor  predictor compression type
  stac       stac compression algorithm
Keep in mind that the endpoints of a connection using link compression must agree on the method being used. Defined in RFC 1144, TCP/IP Header Compression does just what it says - it compresses the TCP/IP header. Just as obviously, it's protocol-dependent. This particular RFC is very detailed, but it's worth reading, particularly the first few paragraphs, where it's noted that TCP/IP HC is truly designed for low-speed serial links. TCP/IP HC is supported on serial interfaces running HDLC, PPP, or Frame Relay. Configuring TCP/IP HC is simple, but it's got one interesting option, shown below with IOS Help.

R1(config-if)#ip tcp header-compression ?
  passive  Compress only for destinations which send compressed headers
If the passive option is configured, the only way the local interface will compress TCP/IP headers before transmission is if compressed headers are already being received from the destination. Finally, if your network requires the headers to remain intact and not compressed, the payload itself can be compressed while leaving the header alone. Frame Relay allows this through the use of Frame Relay Forum 9, referred to on the router as FRF.9. This can be enabled on a per-VC basis at the very end of the frame map command. The following configuration would compress the payload of frames sent to 172.12.1.1, but the header would remain intact.

R1(config-if)#frame map ip 172.12.1.1 110 broad payload-compression ?
  FRF9              FRF9 encapsulation
  data-stream       cisco proprietary encapsulation
  packet-by-packet  cisco proprietary encapsulation
R1(config-if)#frame map ip 172.12.1.1 110 broad payload-compression frf9 ?
  stac  Stac compression algorithm
R1(config-if)#frame map ip 172.12.1.1 110 broad payload-compression frf9 stac
Choosing Between TCP/IP HC And Payload Compression The main deciding factor here is the speed of the link. If the serial link is slow - and I mean running at 32 kbps or less - TCP/IP HC is the best solution of the two. TCP/IP HC was designed especially for such slow links. By contrast, if the link is running above 32 kbps and less than T1 speed, Layer 2 payload compression is the most effective choice. What you don't want to do is run them both. The phrase "unpredictable results" best describes what happens if you do. Troubleshooting that is a lot more trouble than it's ever going to be worth. Choose L2 compression or TCP/IP HC in accordance with the link speed, and leave it at that.
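The decision rule above is simple enough to capture in a few lines. This sketch uses the link-speed thresholds stated in the text; the function name and the behavior at T1 speed and above (where compression's CPU cost generally outweighs its benefit) are my own assumptions.

```python
def choose_compression(link_kbps):
    """Pick a compression approach from the link speed, per the guidelines above.

    <= 32 kbps          : TCP/IP header compression (RFC 1144's target environment)
    32 kbps < x < T1    : Layer 2 payload compression
    >= T1 (assumption)  : skip compression; CPU cost likely outweighs the gain
    Never run both at once - the results are unpredictable.
    """
    T1_KBPS = 1544
    if link_kbps <= 32:
        return "tcp/ip header compression"
    elif link_kbps < T1_KBPS:
        return "l2 payload compression"
    return "no compression"
```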
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
The Bryant Advantage CCNP SWITCH Study Guide Chris Bryant, CCIE #12933 --- www.thebryantadvantage.com
Multicasting Overview What Is Multicasting? Multicast Address Ranges IGMP PIM Dense Mode PIM Sparse Mode PIM Sparse-Dense Mode Rendezvous Point Discovery Methods Configuring Auto-RP Bootstrapping And Multicasting IGMP Snooping CGMP The RPF Check
Ever since you picked up your first CCNA book, you've heard about multicasting, gotten a fair idea of what it is, and you've memorized a couple of reserved multicasting IP addresses. Now as you prepare to become a CCNP, you've got to take that
knowledge to the next level and gain a true understanding of multicasting. Those of you with an eye on the CCIE will truly have to become multicasting experts! Having said that, we're going to briefly review the basics of multicasting first, and then look at the different ways in which multicasting can be configured on Cisco routers and switches. What Is Multicasting? A unicast is data that is sent from one host directly to another, while a broadcast is data sent from a host that is destined for "all" host addresses. By "all", we can mean all hosts on a subnet, or truly all hosts on a network. There's a bit of a middle ground there! A multicast is that middle ground, as a multicast is data that is sent to a logical group of hosts, called a multicast group. Hosts that are not part of the multicast group will not receive the data.
Some other basic multicasting facts:
There's no limit on how many multicast groups a single host can belong to.
The sender is usually unaware of what host devices belong to the multicast group.
Multicast traffic is unidirectional. If the members of the multicast group need to respond, that reply will generally be a unicast.
Expressed in binary, the first four bits of a multicast IP address are 1110.
The range of IP addresses reserved for multicasting is the Class D range, 224.0.0.0 - 239.255.255.255.
The overall range of Class D addresses contains several other reserved address ranges. The 224.0.0.0 - 224.0.0.255 range is reserved for network protocols. Packets in this range will not be forwarded by routers, so these packets cannot leave the local segment. This block of addresses is the local network control block. Just as Class A, Class B, and Class C networks have private address ranges, so does Class D. The Class D private address range is 239.0.0.0 - 239.255.255.255. Like the other private ranges, these addresses can't be routed, so they can be reused from one network to another. This block of addresses is the administratively scoped block. These addresses are also called limited scope addresses. The 224.0.1.0 - 238.255.255.255 range is the globally scoped address range, and these addresses are acceptable to assign to internet-based hosts - with a lot of exceptions. Here are some other individual reserved multicast addresses:
224.0.0.5 - All OSPF Routers
224.0.0.6 - All OSPF DRs
224.0.0.9 - All RIPv2 routers
224.0.0.10 - All EIGRP routers
224.0.0.102 - HSRPv2
224.0.1.1 - Network Time Protocol (NTP)
224.0.0.1 - "All hosts"
224.0.0.2 - "All multicast routers"
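The address ranges described above translate directly into code. Here's a quick sketch of a classifier - the function name and return strings are mine, but the ranges come straight from the text:

```python
import ipaddress

def classify_multicast(addr):
    """Classify an IPv4 multicast address per the reserved Class D ranges."""
    ip = ipaddress.IPv4Address(addr)
    first_octet = int(ip) >> 24
    # A multicast address always begins with binary 1110, i.e. 224 - 239.
    if not 224 <= first_octet <= 239:
        return "not multicast"
    if ip <= ipaddress.IPv4Address("224.0.0.255"):
        return "local network control"    # never forwarded off the local segment
    if first_octet == 239:
        return "administratively scoped"  # private, not routed, reusable
    return "globally scoped"
```

For example, 224.0.0.5 (All OSPF Routers) falls in the local network control block, while 239.1.1.1 is administratively scoped.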
There are some individual addresses in the Class D range that you should not use. Called unusable multicast addresses or unstable multicast
addresses, there are quite a few of these, and you should be aware of them when planning a multicast deployment. The actual addresses are beyond the scope of the CCNP SWITCH exam, but you can find them easily using your favorite search engine.

The RPF Check

A fundamental difference between unicasting and multicasting is that a unicast is routed by sending it toward the destination, while a multicast is routed by sending it away from its source. "Toward the destination" and "away from its source" sound like the same thing, but they're not. A unicast is going to follow a single path from source to destination. The only factor the routers care about is the destination IP address - the source IP address isn't a factor.

With multicast routing, the destination is a multicast IP group address. It's the multicast router's job to decide which paths lead back to the source (upstream) and which paths are downstream from the source. Reverse Path Forwarding refers to the router's behavior of sending multicast packets away from the source rather than toward a specific destination.

The RPF Check is run against any incoming multicast packet. The multicast router examines the interface that the packet arrived on. If the packet comes in on an upstream interface - that is, an interface found on the reverse path that leads back to the source - the packet passes the check and will be forwarded. If the packet comes in on any other interface, the packet is dropped.

Since we have multicast IP addresses and multicast MAC addresses, it follows that we're going to be routing and switching multicast traffic. We'll first take a look at multicasting from the router's perspective. Just as there are different routing protocols, there are different multicasting protocols. The SWITCH exam focuses on Protocol Independent Multicast (PIM), so we'll stick with that one. When a router runs a multicasting protocol, a multicast tree will be created.
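The RPF Check itself is simple logic: look up the packet's *source* in the unicast routing table and compare the expected interface against the arrival interface. Here's a sketch of that decision - the dict-based routing table and the names are invented for illustration:

```python
import ipaddress

def rpf_check(source_ip, arrival_iface, unicast_routes):
    """Return True if a multicast packet passes the RPF check.

    unicast_routes maps a source network (e.g. '10.1.1.0/24') to the
    interface the router would use to reach that network - the reverse
    path back toward the source.
    """
    src = ipaddress.IPv4Address(source_ip)
    for network, iface in unicast_routes.items():
        if src in ipaddress.IPv4Network(network):
            # Pass only if the packet arrived on the reverse-path interface.
            return iface == arrival_iface
    return False  # no route back to the source: drop the packet

routes = {"10.1.1.0/24": "Serial0", "10.2.2.0/24": "Serial1"}
```

A packet sourced from 10.1.1.5 passes only if it arrives on Serial0, the interface leading back to 10.1.1.0 /24; arriving on any other interface, it's dropped.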
The source sits at the top of the tree, sending the multicast stream out to the network. The recipients are on logical branches, and these routers need to know if a downstream router is part of the multicast
group. If there are no downstream routers that need the multicast stream, that router will not forward the traffic. This prevents the network from being overcome with multicast traffic.
In the above illustration, there are three multicast group members, each labeled "MG". A multicasting protocol will prevent a router from sending multicast traffic on a branch where it's not needed. The middle branch of this multicast tree has no member of the multicast group, so multicast traffic shouldn't be sent down that branch. The left branch does have a member at the edge, as does the right branch, so traffic for that multicast group will flow all the way down those two branches. The routers on the multicast tree branches that receive this traffic are referred to as leaf nodes. That's all fine - but how does a device join a multicast group in the first place? That job is performed by IGMP, the Internet Group Management Protocol. There are three versions of IGMP in today's networks, and these three versions have dramatic differences in the way they work. IGMP Version 1 A host running IGMP v1 will send a Membership Report message to its
local router, indicating what multicast group the host wishes to join. This Membership Report's destination IP address will reflect the multicasting group the host wishes to join. (This message is occasionally called a Host Membership Report as well.)
A router on every network segment will be elected IGMPv1 Querier, and that router will send a General Query onto the segment every 60 seconds. If there are multiple routers on the segment, only one router will fill this role, as there's no need for two routers to forward the same multicast traffic onto a segment. (Different protocols elect an IGMPv1 querier in different ways, so there's no one way to make sure a given router becomes the Querier with v1.)
This query is basically asking every host on the segment if they'd like to join a multicast group. These queries are sent to the reserved multicast address 224.0.0.1, the "all-hosts on this subnet" address. A host must respond to this query with a Membership Report under one of two conditions:
The host would like to join a group, OR
The host would like to continue its membership in a multicast group it has already joined!
That second bullet point means that a host, in effect, must renew a multicast group membership every minute. That's a lot of renewing, and a lot of Membership Reports taking up valuable bandwidth on that segment. In effect, IGMPv1 gives a host two ways to join a multicast group:
Send an unsolicited Membership Report
Respond to a General Query with a Membership Report
The process for a host leaving a multicast group isn't much more efficient. There is no explicit "quit" message that a host running IGMP v1 can send to the router. Instead, the host's group membership will time out when the router sees no Membership Report for three minutes.
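The three-minute age-out can be modeled as a simple timestamp check. The 60-second and 180-second values come from the text; the function and variable names are mine:

```python
REPORT_INTERVAL = 60   # the IGMPv1 Querier sends a General Query every 60 seconds
GROUP_TIMEOUT = 180    # membership ages out after three missed Membership Reports

def group_still_active(last_report_time, now):
    """Return True while the router keeps forwarding the group's traffic.

    Times are in seconds. The router only stops forwarding once a full
    three minutes have passed with no Membership Report seen.
    """
    return (now - last_report_time) <= GROUP_TIMEOUT
```

A host that went silent two minutes ago (120 seconds) still counts as a member - exactly the wasted-bandwidth window the text describes - and only drops out after the full 180 seconds elapse.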
In the above scenario, the host was a member of a multicast group, but stopped sending Membership Reports two minutes ago. The problem is
that the router will not age that membership out for a total of three minutes, so not only has the router been unnecessarily sending multicast traffic onto this segment for the last two minutes, but will continue doing so for another minute before finally aging out the multicast group membership. If it occurs to you that IGMPv1 could be a great deal more efficient, you're right. That's why IGMPv2 was developed! A major difference between the two is that IGMPv2 hosts that wish to leave a group do not just stop sending Membership Reports, and there's no three-minute wait to have the membership age out. IGMPv2 hosts send a Leave Group message to the reserved multicast address 224.0.0.2, the "all routers on this segment" address.
In return, the Querier will send a group-specific query, which will be seen by all hosts on the segment. This query specifically asks all hosts on the segment if they would like to receive multicast traffic destined for the group the initial host left. If another host wants to continue to receive that traffic, that host must send a Membership Report back to the Querier.
If the Querier sends that group-specific query and gets no response, the Querier will stop forwarding multicast traffic for that group onto that segment. An IGMPv2 Querier will send out General Queries, just as IGMPv1 Queriers do. Another major difference between IGMPv1 and v2 is that there is a one-step way to make a certain IGMPv2 router become the Querier, and that's to make sure it has the lowest IP address on the shared segment. As you'd expect, there are some issues that arise when you've got some hosts on a segment running IGMPv1 and others running IGMPv2, or one router running IGMPv1 and another router running IGMPv2. The different scenarios are beyond the scope of the exam, but for those of you who'd like to learn more about the interoperability of the IGMP versions (and especially if you'd like to be a CCIE one day), get a copy of RFC 2236 off the Internet and start reading! IGMP Version 3 is also now available on many Cisco devices. The major improvement in IGMPv3 is source filtering, meaning that the host joining a multicast group not only indicates the group it wants to join, but also
chooses the source of the multicast traffic. Multicast group members send IGMPv3 messages to 224.0.0.22. When a host makes that choice regarding the source of the multicast stream, it can take one of two forms:
"I will accept multicast traffic from source x"
"I will accept multicast traffic from any source except source x"
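Those two forms map directly onto IGMPv3's INCLUDE and EXCLUDE filter modes. A sketch of the filtering decision (function and parameter names are mine):

```python
def accept_source(filter_mode, source_list, source):
    """IGMPv3-style source filtering for a joined group.

    INCLUDE: accept traffic only from the listed sources.
    EXCLUDE: accept traffic from any source except those listed.
    """
    if filter_mode == "INCLUDE":
        return source in source_list
    if filter_mode == "EXCLUDE":
        return source not in source_list
    raise ValueError("filter_mode must be INCLUDE or EXCLUDE")
```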
If you'd like to do some additional reading on any IGMP version, here are the RFC numbers:
IGMP v1: RFC 1112 IGMP v2: RFC 2236 IGMP v3: RFC 3376
Now that the hosts are using IGMP to join the desired multicast group, we've got to get that traffic to them. For that, we'll use PIM - Protocol Independent Multicast. There are three modes of PIM you must be fluent with to pass the CCNP exams as well as two different PIM versions. You'll see all these modes and versions in production networks as well, so it's vital to understand the concepts of all of them. PIM Dense Mode Operation The first decision to make when implementing a multicasting protocol is which one to choose. PIM Dense is more suited for the following situations:
The multicast source and recipients are physically close
There will be few senders, but many recipients
All routers in the network have the capability to forward multicast traffic
There will be a great deal of multicast traffic
The multicast streams will be constant
When PIM Dense is first configured, a router will send a Hello message on every PIM-enabled interface to discover its neighbors. Once a multicast source begins transmitting, PIM Dense will use the prune-and-flood technique to build the multicasting tree. Despite the name, the flooding actually comes first. The multicast packets will be flooded throughout the network until the packets reach the leaf routers.
The initial flooding ensures that every router has a chance to continue to receive multicast traffic for that specific group. If a leaf router has no hosts that need this multicast group's traffic, the leaf router will send a Prune message to the upstream router. The Prune message's IP destination address is 224.0.0.13. The routers with hosts who belong to this multicast group are marked with "MG". Since none of the leaf routers know of hosts who need this multicast group's traffic, they will all send a Prune message to 224.0.0.13.
If the upstream router receiving the prune also has no hosts that need this multicast group's traffic, that router will then send a Prune to its upstream neighbor as well. Here, the router in the right-hand column that is receiving a Prune from its downstream neighbor knows of no hosts that need the traffic, so that router will send a Prune upstream. In the other two columns, the routers receiving the Prune do have a need for the multicast traffic, so the pruning in those branches stops there.
The router receiving that Prune also knows of no hosts that are members of this multicast group, so -- you guessed it -- that router will send a Prune to the upstream router.
Logically, the multicast tree now looks like this:
One branch of the tree has been completely pruned, while the leaf routers on the other two branches have been pruned. This group's multicast traffic will now only be seen by these five routers. The other routers were pruned to prevent sending multicast traffic to routers that didn't need that traffic flow, but what if one of the pruned routers later learns of a host that needs to join that group? The pruned router will then send a Join message to its upstream neighbor.
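The pruning logic just described amounts to a recursive walk of the tree: a router stays on the tree only if it has a local group member or at least one downstream branch that does. Here's a sketch - the tree shape and router names are invented, mimicking the three-branch example:

```python
def needs_traffic(router, children, members):
    """Return True if 'router' stays on the multicast tree after pruning.

    children -- dict mapping each router to its downstream neighbors
    members  -- set of routers with directly attached group members
    A router with no local members and no unpruned downstream branches
    sends a Prune upstream, removing itself from the tree.
    """
    if router in members:
        return True
    return any(needs_traffic(c, children, members) for c in children.get(router, []))

# A three-branch tree like the example's: only the left and right leaf
# routers have group members, so the middle branch prunes itself away.
tree = {"root": ["left", "mid", "right"], "left": ["leaf_l"],
        "mid": ["leaf_m"], "right": ["leaf_r"]}
mg = {"leaf_l", "leaf_r"}
```

Walking this tree, the middle branch ("mid") reports no need for the traffic and gets pruned, while the left and right branches, and therefore the root, stay on the tree.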
Where PIM Dense builds the multicasting tree from the root down to the branches, PIM Sparse takes the opposite approach - PIM Sparse builds the multicast tree from the leaf nodes up. PIM Dense creates a source-based multicast tree, since the tree is based around the location of the multicast traffic's source. PIM Sparse creates a shared multicast tree, referring to the fact that multiple sources can share the same tree - "one tree, many groups".
PIM Sparse Mode is best suited for the following situations:
The multicast routers are widely dispersed over the network
There are multiple, simultaneous multicast streams
There are few receivers in each group
The multicast traffic will be intermittent
The root of a PIM Sparse tree is not even necessarily the source of the multicasting traffic. A PIM Sparse tree has a Rendezvous Point (RP) for its root. The RP serves as a kind of "central distribution center", meaning that a shared tree will create a single delivery tree for a multicast group. The routers discover the location of the RP in one of three different fashions.
Statically configuring the RP's location on each router
Using an industry-standard bootstrap protocol to designate an RP and advertise its location
Using the Cisco-proprietary protocol Auto-RP to designate an RP and advertise its location
The shared tree creation begins in the same fashion that a source-based tree does - with a host sending a Membership Report to a router.
If there is already an entry in the router's multicast table for 224.1.1.1, the ethernet interface shown here will be added as an outgoing interface for that group, and that's it. If there is no entry for 224.1.1.1, the router will send a Join message toward the RP. If the upstream router is the RP, it will add the interface
that received the Join to the list of outgoing interfaces for that group; if the upstream router is not the RP, it will send a Join of its own toward the RP. The three routers marked MG have hosts that want to join this particular multicast group, and we're assuming that the multicast group is new, with no prior members. Note that one router in the left column has no hosts that want to join the group, but it's still sending a Join message.
R2 has hosts that want to join the multicast group 224.1.1.4. R2 has no entry in its multicasting table for this group, so it sends a Join toward the RP. R1 receives the Join, checks its multicast table, and sees it has no entries for 224.1.1.4. Even though R1 has no hosts that need to join this group, R1 will send a Join of its own toward the RP. The RP receives the Join message and adds the interface upon which the Join was received to the outgoing multicast list for 224.1.1.4. Sparse Mode uses Join messages as keepalives as well. They are sent every 60 seconds, and the membership will be dropped if three consecutive Joins are missed. To avoid unnecessary transmission of multicast traffic, the multicast routers can send Prune messages to end their membership in a
given multicast group. Using the same network setup we used for the PIM Dense example, we see that while the operation of PIM Sparse is much different - there is no "flood-and-prune" operation - the resulting multicast tree is exactly the same.
PIM Sparse-Dense Mode

Many multicasting networks use a combination of these two methods, Sparse-Dense mode. A more accurate name would be "Sparse-Or-Dense" mode, since each multicast group will be using one or the other. If an RP has been designated for a group, that group will use Sparse Mode. If there's no RP for a group, obviously Sparse Mode is out, so that group will default to Dense Mode.

RP Discovery Methods

It's one thing to decide which router should be the RP, but it's another to make sure all the other routers know where the RP is! The available methods depend on the PIM version in use.

PIM Version 1: Static configuration or Auto-RP
PIM Version 2: Static configuration, Auto-RP, or bootstrapping, the open standard method.

Let's take a closer look at these methods using the following hub-and-spoke network, beginning with the static configuration method. All routers are using their Serial0 interfaces on the 172.12.123.0 /24 network, with their router number as the fourth octet.
Each router will have multicast routing enabled with the ip multicast-routing command. Under the assumption that we will not have many recipients, we'll configure these routers for sparse mode. (If we had many recipients, we'd use dense mode.) All three routers will have R1 configured as the RP.

R1(config)#ip multicast-routing
R1(config)#ip pim rp-address 172.12.123.1
R1(config)#int s0
R1(config-if)#ip pim sparse-mode

R2(config)#ip multicast-routing
R2(config)#ip pim rp-address 172.12.123.1
R2(config)#int s0
R2(config-if)#ip pim sparse-mode

R3(config)#ip multicast-routing
R3(config)#ip pim rp-address 172.12.123.1
R3(config)#int s0
R3(config-if)#ip pim sparse-mode
The command show ip pim neighbor verifies that PIM Version 2 is running on all interfaces.

R1#show ip pim neighbor
PIM Neighbor Table
Neighbor Address  Interface  Uptime    Expires   Ver  Mode
172.12.123.3      Serial0    00:11:08  00:01:37  v2   (DR)
172.12.123.2      Serial0    00:11:37  00:01:38  v2
Both R2 and R3 show R1 as the RP for the multicast group 224.0.1.40, which is the RP-Discovery multicast group.

R2#show ip pim rp
Group: 224.0.1.40, RP: 172.12.123.1, v1, uptime 00:12:51, expires 00:03:11

R3#show ip pim rp
Group: 224.0.1.40, RP: 172.12.123.1, v1, uptime 00:12:43, expires 00:04:20
You don't have to use the same router as the RP for every single multicast group. An access-list can be written to name the specific groups for which a router should serve as the RP, and that access-list can be called in the ip pim rp-address command. You'll see this limitation configured on R1 only, but it would be needed on R2 and R3 as well.

R1(config)#access-list 14 permit 224.0.1.40
R1(config)#ip pim rp-address 172.12.123.1 ?
  <1-99>       Access-list reference for group
  <1300-1999>  Access-list reference for group (expanded range)
  WORD         IP Named Standard Access list
  override     Overrides Auto RP messages

R1(config)#ip pim rp-address 172.12.123.1 14
There's one more option in the ip pim rp-address command you should note. See the override option in the above IOS Help readout? Using that option will allow this static RP configuration to override any announcement made by Auto-RP, Cisco's proprietary method of announcing RPs to multicast routers running Sparse Mode. (Auto-RP is now supported by some non-Cisco vendors.) And how do you configure Auto-RP? Glad you asked!

Configuring Auto-RP

Auto-RP will have one router acting as the mapping agent (MA), and it is the job of this router to listen to the multicast address 224.0.1.39. It's with this address that routers announce themselves as candidate RPs (C-RPs). The MA listens to the candidate announcements, and then decides on an RP for each multicast group. The MA then announces these RPs on 224.0.1.40 via RP-Discovery messages.
We'll first configure R2 and R3 as candidate RPs. PIM Sparse is already running on all three routers' serial interfaces, and multicasting has been enabled on all three routers as well.

R2(config)#ip pim send-rp-announce ?
  BRI       ISDN Basic Rate Interface
  Ethernet  IEEE 802.3
  Null      Null interface
  Serial    Serial

R2(config)#ip pim send-rp-announce serial0 ?
  scope  RP announcement scope

R2(config)#ip pim send-rp-announce serial0 scope 5

R3(config)#ip pim send-rp-announce serial0 scope 5
The scope value sets the TTL of the RP-Announce messages. Now that the candidate RPs are configured, R1 will be configured as the mapping agent. R1(config)#ip pim send-rp-discovery serial 0 scope 5
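With the candidates and the mapping agent in place, the MA's per-group decision can be sketched in Python. The function name and data shapes are assumptions for illustration; the tie-break itself - Auto-RP prefers the candidate with the highest IP address - is a detail not shown in the output above:

```python
# Toy model of the Auto-RP mapping agent's choice: among the Candidate-RP
# announcements heard on 224.0.1.39, the candidate with the highest IP
# address wins each group; the winners are advertised on 224.0.1.40.
import ipaddress

def elect_rps(announcements):
    """announcements: list of (candidate_ip, group) pairs heard on 224.0.1.39.
    Returns a mapping of group -> winning RP address."""
    winners = {}
    for ip, group in announcements:
        best = winners.get(group)
        if best is None or ipaddress.ip_address(ip) > ipaddress.ip_address(best):
            winners[group] = ip
    return winners

heard = [("172.12.123.2", "239.1.1.1"),   # R2's announcement
         ("172.12.123.3", "239.1.1.1")]   # R3's announcement
print(elect_rps(heard))   # {'239.1.1.1': '172.12.123.3'}
```

The ipaddress module is used so addresses compare numerically, octet by octet, rather than as strings.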
As multicast groups are added to the network, both R2 and R3 will contend to be the RP by sending their RP-Announce messages on 224.0.1.39. As the mapping agent, R1 will decide which router is the RP, and will make that known via RP-Discovery messages sent on 224.0.1.40.

Using The Bootstrapping Method

PIM Version 2 offers an open-standard, dynamic method of RP selection. If you're working in a multivendor environment and want to avoid writing a static configuration, you may need to use this method. In the real world, I use Auto-RP every chance I get. It's just a little more straightforward in my opinion, but as I always say, it's a really good idea to know more than one way to get something done! The bootstrap method's operation is much like Auto-RP's, but the terminology is much different.
Candidate Bootstrap Routers (C-BSRs) and Candidate Rendezvous Points (C-RPs) are configured. A Bootstrap Router (BSR) is chosen from the group of C-BSRs. The BSR sends a notification that the C-RPs will hear, and the C-RPs
will send Candidate-RP-Advertisements to the BSR. The BSR takes the information contained in these advertisements and compiles the information into an RP-Set. The RP-Set is then advertised to all PIM speakers. The destination multicast address for bootstrap messages is 224.0.0.13.
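The BSR's compilation step can be sketched as a toy model (the function name and data shapes are assumptions for illustration): the Candidate-RP-Advertisements are merged into a single RP-Set, which is what gets flooded to all PIM speakers.

```python
# Toy model of a BSR building its RP-Set from C-RP advertisements.
def compile_rp_set(advertisements):
    """advertisements: list of (rp_address, [group, ...]) tuples.
    Returns group -> list of candidate RPs for that group."""
    rp_set = {}
    for rp, groups in advertisements:
        for group in groups:
            rp_set.setdefault(group, []).append(rp)
    return rp_set

ads = [("172.12.123.2", ["239.1.1.1"]),
       ("172.12.123.3", ["239.1.1.1", "239.2.2.2"])]
print(compile_rp_set(ads))
```

Unlike the Auto-RP mapping agent, the BSR advertises the whole RP-Set; the routers receiving it run a hash function to pick the RP for a given group.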
To configure R1 as a C-BSR (the command takes the interface whose address will be advertised):

R1(config)#ip pim bsr-candidate serial0

To configure R2 and R3 as C-RPs:

R2(config)#ip pim rp-candidate serial0
Handling Multicast Traffic At Layer 2

Routers and Layer 3 switches have the capability to make intelligent decisions regarding multicast traffic, enabling them to create multicast trees and avoid unnecessary transmission of multicast streams. Layer 2 switches do not. One of the first things you learn about Layer 2 switches in your CCNA studies is that they handle multicast traffic in exactly the same way they handle broadcasts - by flooding that traffic out every single port except the one the traffic came in on. That's a very inefficient way of handling multicasting, so two different methods of helping Layer 2 switches with multicasting have been developed: IGMP Snooping and CGMP, the Cisco Group Membership Protocol. So what is IGMP Snooping "snooping" on? The IGMP reports being sent from a host to a multicast router. The switch listens to these reports and records the multicast group's MAC address and the switch port upon
which the IGMP report was received. The switch thus learns which ports actually need the multicast traffic, and sends it only to those particular ports instead of flooding it.
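That snooping behavior can be modeled in a few lines of Python. The class and port names are assumptions for illustration - this is a sketch, not switch firmware:

```python
# Toy model of IGMP Snooping: the switch records which port each IGMP
# report arrived on, then forwards a group's traffic only to member ports.
# Traffic for an unknown multicast group is still flooded.

class SnoopingSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.members = {}          # group MAC -> set of member ports

    def igmp_report(self, group_mac, port):
        self.members.setdefault(group_mac, set()).add(port)

    def forward(self, group_mac, in_port):
        if group_mac in self.members:
            return self.members[group_mac] - {in_port}
        return self.ports - {in_port}   # unknown group: flood it

sw = SnoopingSwitch(["fa0/1", "fa0/2", "fa0/3", "fa0/4"])
sw.igmp_report("0100.5e01.0101", "fa0/2")
print(sw.forward("0100.5e01.0101", "fa0/1"))   # only fa0/2 gets a copy
print(sw.forward("0100.5e7f.0001", "fa0/1"))   # unknown group is flooded
```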
IGMP Snooping is not supported by all Cisco switch hardware platforms, but it is supported - and enabled by default - on the 2950 and 3550 families, as shown here on a 2950:

SW1#show ip igmp snooping
Global IGMP Snooping configuration:
-----------------------------------
IGMP snooping              : Enabled
IGMPv3 snooping (minimal)  : Enabled
Report suppression         : Enabled
TCN solicit query          : Disabled
TCN flood query count      : 2

Vlan 1:
--------
IGMP snooping                   : Enabled
Immediate leave                 : Disabled
Multicast router learning mode  : pim-dvmrp
Source only learning age timer  : 10
CGMP interoperability mode      : IGMP_ONLY
To turn IGMP snooping off, run the command no ip igmp snooping.
SW1(config)#no ip igmp snooping

SW1#show ip igmp snooping
Global IGMP Snooping configuration:
-----------------------------------
IGMP snooping              : Disabled
IGMPv3 snooping (minimal)  : Enabled
Report suppression         : Enabled
TCN solicit query          : Disabled
TCN flood query count      : 2

Vlan 1:
--------
IGMP snooping                   : Disabled
Immediate leave                 : Disabled
Multicast router learning mode  : pim-dvmrp
Source only learning age timer  : 10
From experience, I can tell you that one deciding factor between IGMP Snooping and CGMP is the switch's processor power. IGMP Snooping is best suited for high-end switches with CPU to spare. If CPU is an issue, consider using CGMP.

Cisco Group Membership Protocol (CGMP)

If a Layer Two switch doesn't have the capability to run IGMP Snooping, it will be able to run CGMP - Cisco Group Membership Protocol. (As long as it's a Cisco switch, that is - CGMP is Cisco-proprietary!) CGMP allows the multicast router to work with the Layer Two switch to eliminate unnecessary multicast forwarding. CGMP will be enabled on both the multicast router and the switch, but the router's going to do all the work. The router will be sending Join and Leave messages to the switch as needed. PIM must be running on the router interface facing the switch before enabling CGMP, as you can see:

R1(config)#int e0
R1(config-if)#ip cgmp
WARNING: CGMP requires PIM enabled on interface
R1(config-if)#ip pim sparse-mode
R1(config-if)#ip cgmp
Let's look at two examples of when CGMP Join and Leave messages will be sent, and to where.
When CGMP is first enabled on both the multicast router and switch, the router will send a CGMP Join message, informing the switch that a multicast router is now connected to it. This particular CGMP Join will contain a Group Destination Address (GDA) of 0000.0000.0000 and the MAC address of the sending interface. The GDA is used to identify the multicast group, so when this is set to all zeroes, the switch knows this is an introductory CGMP Join. This GDA lets the switch know that the multicast router is online. The switch makes an entry in its MAC table that a multicast router can be found off the port that the CGMP Join came in on. The router will send this CGMP Join to the switch every minute to serve as a keepalive.
A workstation connected to port 0/5 now wishes to join multicast group 225.1.1.1. The Join message is sent to the multicast router, but first it will pass through the switch. The switch will do what you'd expect it to do - read the source MAC address and make an entry for it in the MAC
address table as being off port fast 0/5 if there's not an entry already there. (Don't forget that the MAC address table is also referred to as the CAM table or the bridging table.)
The router will then receive the Join request, and send a CGMP Join back to the switch. This CGMP Join will contain both the multicast group's MAC address and the requesting host's MAC address. Now the switch knows about the multicast group 225.1.1.1 and that a member of that group is found off port fast 0/5. In the future, when the switch receives frames destined for that multicast group, the switch will not flood the frame as it would an unknown multicast - the switch will forward a copy of the frame to each port that it knows leads to a member of the multicast group. CGMP Leaves work much the same way, but the router and switch have to allow for the possibility that there are still other members on the switch that still need that multicast group's traffic. In the following example, two hosts that are receiving traffic from the multicast group 225.1.1.1 are connected to the same switch. One of the hosts is sending a CGMP Leave. The multicast router receives this request, and in return sends a group-specific CGMP query back to the switch. The switch will then flood this frame so hosts on every other port receive a copy. Any host that wishes to continue to receive this group's traffic must respond to this query. As shown below, the remaining host will send such a response, and the router in turn will send a CGMP Leave to the switch, telling the switch to delete only the host that originally sent the CGMP Leave from the group.
If no other host responds to the Group-Specific Query, the router will still send a CGMP Leave to the switch. However, that CGMP Leave will tell the switch to remove the entire group listing from the MAC table. You may be wondering how the switch differentiates CGMP Joins and Leaves from all the other frames it processes. The switch recognizes both of those by their destination address of 01-00-0c-dd-dd-dd, a reserved Layer 2 address used only for this purpose.

Enabling CGMP

The additional configuration needed to run CGMP depends on the switch model. On Layer 3 switches, CGMP is disabled by default, and is enabled at the interface level with the following command:

SW1(config)#int fast 0/5
SW1(config-if)#ip cgmp
On Layer 2 switches, CGMP is enabled by default.
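The CGMP state tracking described above can be sketched as a toy model. The class, method, and port names are assumptions for illustration, not switch firmware:

```python
# Toy model of a switch's CGMP state: a Join with an all-zeroes GDA marks
# the router port (introductory Join / keepalive); a normal Join maps the
# group MAC to the reporting host's port; a Leave for one host keeps the
# group alive as long as other members remain.

CGMP_DEST = "01-00-0c-dd-dd-dd"   # reserved destination for CGMP frames

class CgmpSwitch:
    def __init__(self):
        self.router_port = None
        self.groups = {}           # group MAC -> set of member ports

    def join(self, gda, port):
        if gda == "0000.0000.0000":    # introductory Join from the router
            self.router_port = port
        else:
            self.groups.setdefault(gda, set()).add(port)

    def leave(self, gda, port):
        members = self.groups.get(gda, set())
        members.discard(port)
        if not members:                # last member gone: drop whole group
            self.groups.pop(gda, None)

sw = CgmpSwitch()
sw.join("0000.0000.0000", "fa0/24")    # router announces itself
sw.join("0100.5e01.0101", "fa0/5")
sw.join("0100.5e01.0101", "fa0/7")
sw.leave("0100.5e01.0101", "fa0/5")
print(sw.groups)                       # fa0/7 still receives the group
```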
Joining A Multicast Group - Or Not!

After all this talk about dense, sparse, and sparse-dense mode, we need
to get some routers to actually join a multicast group! Before we do, there are two commands that are close in syntax but far apart in meaning, and we need to have these two commands in mind before starting the next configuration. The interface-level command ip igmp join-group allows a router to join a multicast group. The interface-level command ip igmp static-group allows a router to forward packets for a given multicast group, but the router doesn't actually accept the packets. In the following configuration, R1 is the hub router of a hub-and-spoke configuration and the RP for the multicast group 239.1.1.1. R2 and R3 will be made members of this group.
We'll configure R1 as the RP for the group 239.1.1.1. Don't forget to enable multicasting on the router before you begin the interface configuration - the router will tell you to do so, but the exam probably will not!

R1(config)#int s0
R1(config-if)#ip pim sparse-mode
WARNING: "ip multicast-routing" is not configured, IP Multicast packets will not be forwarded
R1(config)#ip multicast-routing
R1(config)#ip pim rp-address 172.12.123.1
R1(config)#int s0
R1(config-if)#ip pim sparse-mode

R2(config)#ip multicast-routing
R2(config)#ip pim rp-address 172.12.123.1
R2(config)#int s0
R2(config-if)#ip pim sparse-mode

R3(config)#ip multicast-routing
R3(config)#ip pim rp-address 172.12.123.1
R3(config)#int s0
R3(config-if)#ip pim sparse-mode
We'll verify the neighbor relationships with show ip pim neighbor.

R1#show ip pim neighbor
PIM Neighbor Table
Neighbor Address  Interface  Uptime    Expires   Ver  Mode
172.12.123.2      Serial0    00:00:37  00:01:37  v2
172.12.123.3      Serial0    00:00:40  00:01:35  v2
We'll also verify that R2 and R3 see R1 as the RP for 239.1.1.1 with show ip pim rp.

R2#show ip pim rp
Group: 224.0.1.40, RP: 172.12.123.1, v2, uptime 00:55:58, expires never
Group: 239.1.1.1, RP: 172.12.123.1, v2, uptime 00:52:34, expires 00:03:01

R3#show ip pim rp
Group: 224.0.1.40, RP: 172.12.123.1, v2, uptime 00:55:39, expires never
Group: 239.1.1.1, RP: 172.12.123.1, v2, uptime 00:52:40, expires 00:04:25
Ever wonder how you can test whether routers have correctly joined a multicast group? Here's an old CCIE lab trick - ping the multicast IP address of the group with the extended ping command.

R1#ping
Protocol [ip]:
Target IP address: 239.1.1.1
Repeat count [1]: 100
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:
............................
R1's pings to 239.1.1.1 are failing because there are no members of that multicast group. Let's fix that by making R2 and R3 members.

R2(config)#int s0
R2(config-if)#ip igmp join-group 239.1.1.1

R3(config)#int s0
R3(config-if)#ip igmp join-group 239.1.1.1
Let's take a look at the multicast routing table on R2 with show ip mroute.

R2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.40), 00:07:40/00:00:00, RP 172.12.123.1, flags: SJPCL
  Incoming interface: Serial0, RPF nbr 172.12.123.1
  Outgoing interface list: Null

(*, 239.1.1.1), 00:04:16/00:00:00, RP 172.12.123.1, flags: SJPCLF
  Incoming interface: Serial0, RPF nbr 172.12.123.1
  Outgoing interface list: Null

(172.12.123.1, 239.1.1.1), 00:04:16/00:01:13, flags: PCLFT
  Incoming interface: Serial0, RPF nbr 0.0.0.0
  Outgoing interface list: Null
This table is quite different from the IP routing table we're used to, so let's take a few minutes to examine it. Note that there are two entries for 239.1.1.1. The first is a "star, group" entry. The star ("*") represents all source addresses, while the "group" indicates the destination multicast group address. This entry is usually referred to as *,G in Cisco documentation. The second entry is a "Source, Group" entry, usually abbreviated as S,G in technical documentation. The Source value is the actual source address of the traffic, while the Group is again the multicast group address itself. When spoken, the *,G entry is called a "star comma G" entry; the S,G entry is called an "S comma G" entry. Note the RPF neighbor entry of 0.0.0.0 for the 172.12.123.1, 239.1.1.1 entry. That will always be 0.0.0.0 when you're running sparse mode, as we are here. Also note that each entry has some flags set. It couldn't hurt to know the
meanings of some of the more often-set flags:

D - Dense Mode entry
S - Sparse Mode entry
C - Connected, referring to a member of the group being on the directly connected network
L - Local Router, meaning this router itself is a member of the group
P - Pruned, indicates the route has been, well, pruned. :)
T - Shortest Path Tree, indicates packets have been received on the tree.

To wrap this up, let's go back to R1 and test this configuration. We'll send pings to 239.1.1.1 and see what the result is.

R1#ping
Protocol [ip]:
Target IP address: 239.1.1.1
Repeat count [1]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:

Reply to request 0 from 172.12.123.3, 116 ms
Reply to request 0 from 172.12.123.2, 128 ms
Both downstream members of the multicast group 239.1.1.1 responded to the ping. There are some other commands you can use to verify and troubleshoot multicasting, one being show ip pim interface.

R1#show ip pim interface
Address       Interface  Version/Mode  Nbr    Query  DR
                                       Count  Intvl
172.12.123.1  Serial0    v2/Sparse     2      30     172.12.123.3
Note the "DR" entry. On multiaccess segments like the one we've got here, a PIM Designated Router will be elected. The router with the highest IP will be the DR. The DR election is really more for ethernet segments with more than one router than for NBMA networks like this
frame relay network. The PIM DR has two major responsibilities:
Send IGMP queries onto the LAN

If sparse mode is running, transmit PIM Join and Register messages to the RP
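The highest-IP rule above can be sketched as a small helper (a hypothetical function, not IOS code). The one subtlety worth showing is that addresses must be compared octet by octet, not as strings:

```python
# Hypothetical DR election helper: the PIM router with the numerically
# highest IP address on the segment becomes the Designated Router.
def elect_dr(neighbor_ips):
    # Compare as tuples of integer octets; plain string comparison would
    # wrongly rank "10.1.1.9" above "10.1.1.10".
    return max(neighbor_ips, key=lambda ip: tuple(int(o) for o in ip.split(".")))

segment = ["172.12.123.1", "172.12.123.2", "172.12.123.3"]
print(elect_dr(segment))   # 172.12.123.3, matching the DR shown earlier
```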
You can verify IGMP group memberships with show ip igmp groups.

R2#show ip igmp groups
IGMP Connected Group Membership
Group Address  Interface  Uptime    Expires   Last Reporter
224.0.1.40     Serial0    00:58:20  00:02:49  172.12.123.1
239.1.1.1      Serial0    00:52:13  00:02:46  172.12.123.2
With video and voice traffic becoming more and more popular in today's networks, multicasting and Quality Of Service (QoS) are going to become more and more important. I urge you to continue your multicasting studies after you earn your CCNP, and for those of you with your eyes on the big prize - the CCIE - you'll truly have to become a multicasting master!
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
The Bryant Advantage CCNA Security Study Guide Chris Bryant, CCIE #12933
www.thebryantadvantage.com
Back To Index
AAA Overview Authentication Don't Lock Yourself Out! Don't Stop Until You're Done... Authorization Privilege Levels And Authorization Accounting Hot Spots And Gotchas
This is a bit of bonus reading for your CCNP SWITCH exam. This section from my CCNA Security Study Guide covers more AAA than you're likely to see on your CCNP SWITCH exam, but I do recommend you spend some time studying it to go along with the Security section in this course. Enjoy! Authentication, Authorization, and Accounting, commonly referred to in the Cisco world as AAA, is a common feature in today's networks. In this section, we'll examine exactly what each "A" does, and then configure AAA at the command-line interface and with Cisco SDM. Each "A" is a separate function, and requires separate configuration. Before we begin to configure AAA, let's take a look at each "A" individually.
Authentication

Authentication is the process of deciding if a given user should be allowed to access the network or a network service. As a CCNA and future CCNP, you've already configured authentication in the form of creating a local database of usernames and passwords for both Telnet access and PPP authentication. This is sometimes called a self-contained AAA deployment, since no external server is involved. It's more than likely that you'll be using a server configured for one of the following security protocols:
TACACS+, a Cisco-proprietary, TCP-based protocol

RADIUS, an open-standard, UDP-based protocol standardized by the IETF
An obvious question is "If there's a TACACS+, what about TACACS?" TACACS was the original version of this protocol and is rarely used today. Before we head into AAA Authentication configuration, there are some other TACACS+ / RADIUS differences you should be aware of:
While TACACS+ encrypts the entire packet, RADIUS encrypts only the password in the initial client-server packet.

RADIUS actually combines the authentication and authorization processes, making it very difficult to run one but not the other. TACACS+ considers Authentication, Authorization, and Accounting to be separate processes. This allows another method of authentication to be used (Kerberos, for example), while still using TACACS+ for authorization and accounting.

RADIUS does not support the Novell Async Services Interface (NASI) protocol, the NetBIOS Frame Protocol Control protocol, X.25 Packet Assembler / Disassembler (PAD), or the AppleTalk Remote Access Protocol (ARA or ARAP). TACACS+ supports all of these.

RADIUS implementations from different vendors may not work well together, or at all.

RADIUS can't control the authorization level of users, but TACACS+ can.

Regardless of which "A" you're configuring, AAA must be enabled with the global command aaa new-model. The location of the TACACS+ and / or RADIUS server must then be configured, along with a shared encryption key that must be agreed upon by the client and server. Since you're on the way to the CCNP, that's what we'll use here.

R1(config)#aaa new-model
R1(config)#tacacs-server host 172.1.1.1 key CCNP
R1(config)#radius-server host 172.1.1.2 key CCNP
The aaa new-model command carries out two tasks:
enables AAA

overrides every previously configured authentication method for the router lines - especially the vty lines!
More on that "especially the vty lines" a little later in this section. Multiple TACACS+ and RADIUS servers can be configured, and the key can either be included at the end of the above commands or configured separately, as shown below.

R1(config)#tacacs-server key CCNP
R1(config)#radius-server key CCNP
Now comes the interesting part! We've got a TACACS+ server at 172.1.1.1, a RADIUS server at 172.1.1.2, and the router is configured as a client of both with a shared key of CCNP for both. Now we need to determine which servers will be used for Authentication, and in what order, with the aaa authentication command. Let's take a look at the options:

R1(config)#aaa authentication login ?
  WORD     Named authentication list.
  default  The default authentication list.
The first choice is whether to configure a named authentication list, or a default list that will be used for all authentications that do not reference a named list. In this example, we'll create a default list.
R1(config)#aaa authentication login default ?
  enable      Use enable password for authentication.
  group       Use Server-group
  line        Use line password for authentication.
  local       Use local username authentication.
  local-case  Use case-sensitive local username authentication.
  none        NO authentication.
Remember our old friend the enable password? We can configure Authentication to use the enable password, and we could also use a line password. More common is local username authentication, which will use a database local to the router. That sounds complicated, but to build a username/password database, just use the username/password command!

R1(config)#username gagne password awa
R1(config)#username afflis password wwa
R1(config)#username thesz password nwa
The username / password command creates a local database that can be used for multiple purposes, including authenticating Telnet users. We could create a local database and use it for AAA Authentication, but in this example we'll use the TACACS+ and RADIUS servers. To do so, we need to drill a little deeper with the aaa authentication command.

R1(config)#aaa authentication login default group ?
  WORD     Server-group name
  radius   Use list of all Radius hosts.
  tacacs+  Use list of all Tacacs+ hosts.

R1(config)#aaa authentication login default group radius ?
  enable      Use enable password for authentication.
  group       Use Server-group
  line        Use line password for authentication.
  local       Use local username authentication.
  local-case  Use case-sensitive local username authentication.
  none        NO authentication.

R1(config)#aaa authentication login default group radius group tacacs+
The group radius and group tacacs+ commands configure the router to use those devices for Authentication - but it's interesting that we were able to configure more than one Authentication source. Actually, we can name a maximum of four methods, and they'll be used in the order listed. In the above command, the default list will check the RADIUS server first. If there's an error or a timeout, the second method listed will be checked. If a user's authentication is refused by the first method, the second method is not used, and the user's authentication attempt will fail. Interestingly enough, "none" is an option with the aaa authentication command.

R1(config)#aaa authentication login default group radius ?
  enable      Use enable password for authentication.
  group       Use Server-group
  line        Use line password for authentication.
  local       Use local username authentication.
  local-case  Use case-sensitive local username authentication.
  none        NO authentication.
If you're concerned that all prior listed methods of authentication may result in an error or timeout, you can configure none at the end of the aaa authentication command. Of course, if none is the only option you select, you've effectively disabled authentication. Here, I've configured a default list on R3 that is using only one authentication option - none! I then apply that list to the vty lines and attempt to telnet to R3 from R1.

R3(config)#aaa new-model
R3(config)#aaa authentication login default none
R3(config)#line vty 0 4
R3(config-line)#login authentication default

R1#telnet 172.12.13.3
Trying 172.12.13.3 ... Open
R3>
Note that I was not prompted for a vty password. Not a good idea! And speaking of bad ideas.... Be VERY Careful When Configuring Authentication - You CAN Lock Yourself Out! Sorry for all the yelling, but believe me - if you put half of the AAA authentication in place, and log out without finishing it, you can end up locked out of your own router!
I'll illustrate on a very basic setup using R1 and R3. These routers are directly connected at their S1 interfaces, and R3 is configured with a vty password of tuco. To allow users to enter privileged EXEC mode (privilege level 15), we'll use an enable secret of CCNP. No username is configured on R3 for vty access, so when we telnet to R3 from R1, we will be prompted only for the vty password. When we run the enable command, we'll be prompted for the enable secret password.

R1#telnet 172.12.13.3
Trying 172.12.13.3 ... Open
User Access Verification

Password:     (vty password of tuco)
R3>en
Password:     (enable secret password of CCNP)
R3#
And all is well! Now we'll start configuring AAA on R3 via the telnet connection. The first step is to run the aaa new-model command.

R3#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#aaa new-model
At this point, we're interrupted for some reason, so we save the config on R3 before logging out.

R3#wr
Building configuration...
[OK]
R3#logout
[Connection to 172.12.13.3 closed by foreign host]
R1#
Once lunch -- I mean, the interruption -- is over, we'll log back in to R3 from R1.

R1#telnet 172.12.13.3
Trying 172.12.13.3 ... Open
User Access Verification
Username:
Hmm. We weren't asked for a username before. Let's try both the vty and enable passwords for that username. R1#telnet 172.12.13.3 Trying 172.12.13.3 ... Open
User Access Verification

Username:
% Username: timeout expired!
Username: trump
Password:
% Access denied

Username: CCNP
Password:
% Access denied

[Connection to 172.12.13.3 closed by foreign host]
A couple of things to note... One authentication attempt timed out in the time it took me to cut and paste that config. When a username/password authentication attempt failed - here, two of them did - we were not told whether it was the username, password, or both that were bad. Finally, we were denied access to a router we could log into before the interruption.

The problem here is that we're being asked for a username that doesn't actually exist! Once you enable AAA, you've got to define the authentication methods immediately afterwards. Right now, no one can successfully telnet to that router, and someone's going to have to connect to it via the console port and finish the configuration. So let's do just that. We've got the aaa new-model command in place, so we'll now define a local username/password database and have it serve as an authentication method. We'll configure a named list called AAA_LIST and have R3's vty lines use that list for authentication.
R3(config)#username chris password bryant
R3(config)#aaa authentication login AAA_LIST local
R3(config)#line vty 0 4
R3(config-line)#login authentication ?
  WORD     Use an authentication list with this name.
  default  Use the default authentication list.
R3(config-line)#login authentication AAA_LIST
R1#telnet 172.12.13.3
Trying 172.12.13.3 ... Open
User Access Verification

Username: chris
Password:     (entered bryant here)
R3>enable
Password:     (entered CCNP here)
R3#
Note that neither the vty line password nor the enable password is shown when entered. No asterisks, no nothing! It's an excellent idea to leave yourself a "back door" into the network by configuring a local database with only one username and password - one known only by you and perhaps another administrator - and ending the aaa authentication command with local. That way, if something happens to the one or two primary methods, you've always got an emergency password to use.

Using AAA For Privileged EXEC Mode And PPP

The most common usage for AAA Authentication is for login authentication, but it can also be used to guard entry to privileged EXEC mode or to authenticate PPP connections. If you want to configure the router to use AAA Authentication for the enable password, use the aaa authentication enable command. Note that you cannot specify a named list for the enable password, only the default list.
R1(config)#aaa authentication enable ?
  default  The default authentication list.
(No option for named list)
R1(config)#aaa authentication enable default ?
  enable  Use enable password for authentication.
  group   Use Server-group
  line    Use line password for authentication.
  none    NO authentication.

R1(config)#aaa authentication enable default group tacacs+ group radius none
The above configuration would first look to the TACACS+ server to authenticate a user attempting to enter privileged EXEC mode, then the RADIUS server, and would finally allow the user to enter with no authentication needed. To use AAA Authentication for PPP connections, use the aaa authentication ppp command.

R1(config)#aaa authentication ppp ?
  WORD     Named authentication list.
  default  The default authentication list.

R1(config)#aaa authentication ppp default ?
  group       Use Server-group
  if-needed   Only authenticate if needed.
  local       Use local username authentication.
  local-case  Use case-sensitive local username authentication.
  none        NO authentication.

R1(config)#aaa authentication ppp default group tacacs+ group radius local
The above command would first look to the TACACS+ server to authenticate PPP connections, then RADIUS, then the router's local database.

Why You Shouldn't Stop Configuring Authentication Until You're Done!

Configuring authentication isn't a long process, but make sure you're not going to be interrupted! (Or as sure as you can be in our business.) If you configure aaa new-model on a router, you can no longer configure a simple vty line password, as shown below.

R1(config)#aaa new-model
R1(config)#line vty 0 4
R1(config-line)#login
% Incomplete command
R1(config-line)#login ? authentication Authentication parameters. R1(config-line)#login authentication ? WORD Use an authentication list with this name. default Use the default authentication list. R1(config-line)#login authentication default AAA: Warning authentication list "default" is not defined for LOGIN.
Now, you'd think this would make the administrator realize that they need to make a default list - but then again, maybe they don't realize it. Maybe they don't know how and don't want to ask. Maybe they headed for lunch. It doesn't matter, because the end result is that no one can telnet in with the router configured like this. A method list must be configured along with the aaa new-model and login authentication commands. Before moving on to Authorization, let's review the steps for an AAA configuration using a TACACS+ server for telnet authentication. First, we have to enable AAA, define the location of the TACACS+ server, and create the case-sensitive key.

R2(config)#aaa new-model
R2(config)#tacacs-server host 172.10.10.100
R2(config)#tacacs-server key PASSISCW
Next, create a default AAA method list that uses TACACS+ and will allow users to connect with no authentication if there's a failure with TACACS+.

R2(config)#aaa authentication login default group tacacs none
Apply the default AAA list to the VTY lines, and we're all set!

R2(config)#line vty 0 4
R2(config-line)#login authentication default
Authorization Authentication decides whether a given user should be allowed into the network; Authorization dictates what users can do once they are in.
The aaa authorization command creates a user profile that is checked when a user attempts to use a particular command or service. As with Authentication, we'll have the option of creating a default list or a named list, and AAA must be globally enabled with the aaa new-model command.

R1(config)#aaa new-model
R1(config)#aaa authorization ?
  auth-proxy       For Authentication Proxy Services
  commands         For exec (shell) commands.
  config-commands  For configuration mode commands.
  configuration    For downloading configurations from AAA server
  exec             For starting an exec (shell).
  network          For network services. (PPP, SLIP, ARAP)
  reverse-access   For reverse access connections

R1(config)#aaa authorization exec ?
  WORD     Named authorization list.
  default  The default authorization list.

R1(config)#aaa authorization exec default ?
  group             Use Server-group
  if-authenticated  Succeed if user has authenticated.
  local             Use local database.
  none              No authorization (always succeeds).
Privilege Levels And AAA Authorization

Privilege levels define what commands a user can actually run on a router. There are three predefined privilege levels on Cisco routers, two of which you've been using since you started your Cisco studies - even if you didn't know it! When you're in user exec mode, you're actually in privilege level 1, as verified with show privilege:

R2>show privilege
Current privilege level is 1

By moving to privileged exec mode with the enable command, you move from level 1 to level 15, the highest level:

R2>show privilege
Current privilege level is 1
R2>enable
R2#show privilege
Current privilege level is 15
There's actually a third predefined privilege level, Level Zero, which allows the user to run only the commands disable, enable, exit, help, and logout. Obviously, a user at Level Zero can't do much. There's a huge gap in network access between levels 1 and 15, and the remaining levels 2 - 14 can be configured to fill that gap by allowing a user assigned a particular privilege level to run some commands, but not all of them. Assume you have a user who should not be allowed to use the ping command, which by default can be run from privilege level 1:

R2>ping 172.1.1.1
(Success of the ping has been edited)
By moving the ping command to privilege level 5, a user must have at least that level of privilege in order to use ping. To change the privilege level of a command, use the privilege command. (IOS Help shows approximately 30 options following privilege, so I won't put all of those here.)

R2(config)#privilege ?
  address-family  Address Family configuration mode
  configure       Global configuration mode
  congestion      Frame Relay congestion configuration mode
  dhcp            DHCP pool configuration mode
  exec            Exec mode

R2(config)#privilege exec ?
  level  Set privilege level of command
  reset  Reset privilege level of command

R2(config)#privilege exec level ?
  <0-15>  Privilege level

R2(config)#privilege exec level 5 ?
  LINE  Initial keywords of the command to modify

R2(config)#privilege exec level 5 ping
A user must now have at least a privilege level of 5 to send a ping. Let's test that from both level 1 and level 15.

R2>ping 172.1.1.1
        ^
% Invalid input detected at '^' marker.

R2#ping 172.1.1.1

(Success of ping edited)
Note that the user is not told they're being denied access to this command because of their privilege level. The ping works successfully from Level 15. There are two options for assigning privilege levels to users, one involving AAA and one not. To enable AAA Authorization to use privilege levels, use the aaa authorization command followed by the appropriate option:

R2(config)#aaa authorization ?
  auth-proxy       For Authentication Proxy Services
  commands         For exec (shell) commands.
  config-commands  For configuration mode commands.
  configuration    For downloading configurations from AAA server
  exec             For starting an exec (shell).
  network          For network services. (PPP, SLIP, ARAP)
  reverse-access   For reverse access connections
The full command to use the TACACS+ server to assign privilege levels, followed by the local database, is as follows: R2(config)#aaa authorization commands 5 default group tacacs+ local
Getting authorization to work exactly the way you want it to does take quite a bit of planning and testing due to the many options. Privilege levels can also be assigned via the router's local database. To do so, use the privilege option in the middle of the username/password command. R2(config)#username chris privilege 5 password bryant
That would assign a privilege level of 5 to that particular user. The Authorization feature of AAA can also assign IP addresses and other network parameters to Mobile IP users. How this occurs is beyond the scope of the ISCW exam, but you can refer to RFC 2905 for more details. Perhaps more details than you'd like to know! Accounting Authentication decides who can get in and who can't; authorization decides what users can do once they get in; accounting tracks the resources used by the authorized user. This tracking can be used for security purposes (detecting users doing things they shouldn't be doing), or for tracking network usage in order to bill other departments in your company.
As with authentication and authorization, accounting requires that AAA be globally enabled. The aaa accounting command is used to define the accounting parameters -- and IOS Help shows us that there are quite a few options! Earlier in this section, we talked about privilege lists, and accounting can be configured to track any given privilege level. Even that seemingly simple task takes a good deal of IOS digging, as shown below. Overall, AAA supports six different accounting formats, as shown below in IOS Help.

R2(config)#aaa accounting ?
  auth-proxy   For authentication proxy events.
  commands     For exec (shell) commands.
  connection   For outbound connections. (telnet, rlogin)
  delay-start  Delay PPP Network start record until peer IP address is known.
  exec         For starting an exec (shell).
  nested       When starting PPP from EXEC, generate NETWORK records before EXEC-STOP record.
  network      For network services. (PPP, SLIP, ARAP)
  resource     For resource events.
  send         Send records to accounting server.
  suppress     Do not generate accounting records for a specific type of user.
  system       For system events.
  update       Enable accounting update records.
Here's a brief look at each category and what accounting information can be recorded. Commands: Information regarding EXEC mode commands issued by a user. Connection: Information regarding all outbound connections made from network access server. Includes Telnet and rlogin. EXEC: Information about user EXEC terminal sessions. Network: Information regarding all PPP, ARAP, and SLIP sessions. Resource: Information regarding start and stop records for calls passing authentication, and stop records for calls that fail authentication. System: Non-user-related system-level events are recorded.
To finish the aaa accounting command, let's assume we want to enable auditing of privileged mode commands. As IOS Help will show you, to do this you have to know the level number of the mode you wish to audit, and privileged exec mode is level 15.

R2(config)#aaa accounting commands ?
  <0-15>  Enable level

R2(config)#aaa accounting commands 15
% Incomplete command.

R2(config)#aaa accounting commands 15 ?
  WORD     Named Accounting list.
  default  The default accounting list.

R2(config)#aaa accounting commands 15 default ?
  none        No accounting.
  start-stop  Record start and stop without waiting
  stop-only   Record stop when service terminates.
  wait-start  Same as start-stop but wait for start-record commit.

R2(config)#aaa accounting commands 15 default start-stop ?
  broadcast  Use Broadcast for Accounting
  group      Use Server-group

R2(config)#aaa accounting commands 15 default start-stop group tacacs
Both authorization and accounting offer so many different options that it's impossible to go into all of them here, and you're not responsible for complex configurations involving either one on your ISCW exam. You should know the basic commands and that AAA must be globally enabled before either can be configured. Also, there are no enable, login, or local options with accounting - we're limited to using TACACS+ and/or RADIUS servers for accounting purposes.

R2(config)#aaa accounting exec default start-stop group ?
  WORD     Server-group name
  radius   Use list of all Radius hosts.
  tacacs+  Use list of all Tacacs+ hosts.
Hot Spots And Gotchas An AAA Authentication statement generally has more than one option listed. They're checked in the order in which they are listed, from left to right. If the first option is unavailable, the next is checked. However, if the first option FAILS the user's authentication attempt, the user is denied
authentication and the process ends. If you enable AAA with the aaa new-model command and then do not complete the Authentication configuration, no one can authenticate. It's also legal to specify none as the only authentication option, but that basically disables authentication! HQ(config)#aaa authentication login default none
You can use a named list with aaa authentication login, but not with aaa authentication enable.

HQ(config)#aaa authentication login ?
  WORD     Named authentication list.
  default  The default authentication list.

HQ(config)#aaa authentication enable ?
  default  The default authentication list.
Real-world note that may come in handy on exam day: Don't get too clever and name your lists "AAA". That tends to confuse others. For example, in the aaa authentication login command, I would not use this command: HQ(config)#aaa authentication login AAA group tacacs+ none
That command uses a list named "AAA" for authentication. Again, it's just not something I like to do, but it is legal. What does each "A" mean? Authentication - Can the user come in? Authorization - What can the user do when they come in? Can they assign privilege levels? IP addresses? Delete configurations? Assign ACLs? Change the username/password database, perhaps? Accounting - What network resources did the user access, and for how long? The Accounting information that can be recorded falls into six main categories:
commands - accounting for all commands at a specified privilege level
exec - accounting for EXEC sessions
system - accounting for non-user system-level events
network - accounting for all network-related service requests (PPP, NCP, SLIP, ARAP)
connection - accounting for outbound connections (Telnet, rlogin)
resource - start and stop records for calls
With accounting, we can save information to RADIUS or TACACS+ servers.

HQ(config)#aaa accounting exec default start-stop group ?
  WORD     Server-group name
  radius   Use list of all Radius hosts.
  tacacs+  Use list of all Tacacs+ hosts.
And finally, a quick RADIUS vs. TACACS+ comparison:

RADIUS:

• Open-standard protocol
• Runs on UDP
• Can't control authorization level of users
• Authentication and authorization are combined, so running a separate authorization protocol is not practical
TACACS+:

• Cisco-proprietary protocol
• Runs on TCP
• Can control authorization level of users
• Authentication and authorization are separate processes, so running a separate authorization protocol is possible

Copyright © 2010 The Bryant Advantage. All Rights Reserved.
CCNP SWITCH Exam Details You Must Know! Chris Bryant, CCIE #12933
www.thebryantadvantage.com
Overview VLANs & Trunking VTP STP Basics STP Advanced Skills Networking Models And Designs Basic Switch Configuration Quality Of Service Multilayer Switching Switch Security And Tunneling Voice VLANs
VLANs: Hosts in static VLANs inherit their VLAN membership from the port's static assignment, and hosts in dynamic VLANs are assigned to a VLAN in accordance with their MAC address. This is performed by a VMPS (VLAN Membership Policy Server). VMPS uses a TFTP server to help in this dynamic port
assignment scheme. A database on the TFTP server that maps source MAC addresses to VLANs is downloaded to the VMPS server, and that downloading occurs every time you power cycle the VMPS server. VMPS uses UDP to listen to client requests. Some things to watch out for when configuring VMPS:

• The VMPS server has to be configured before configuring the ports as dynamic.
• PortFast is enabled by default when a port receives a dynamic VLAN assignment.
• If a port is configured with port security, that feature must be turned off before configuring the port as dynamic.
• Trunking ports cannot be made dynamic ports, since by definition they must belong to all VLANs. Trunking must be disabled to make a port a dynamic port.
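As a sketch of how this fits together, here's what pointing a client switch at a VMPS and making a port dynamic might look like (the server address and interface number are hypothetical, and exact syntax varies by platform and IOS version):

SW1(config)#vmps server 172.16.1.100 primary
SW1(config)#int fast 0/5
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan dynamic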
It takes two commands to configure a port to belong to a single VLAN:

switchport mode access (makes the port an access port)
switchport access vlan x (places the port into VLAN "x")
If two hosts can't ping and they're in the same VLAN, there are two settings you should check right away. First, check the speed and duplex settings on the switch ports. Second, check the MAC table and make sure the hosts in question have an entry in the table to begin with. ISL is Cisco-proprietary and encapsulates every frame before sending it across the trunk. ISL doesn't recognize the native VLAN concept. ISL encapsulation adds 30 bytes total to the size of the frame, potentially making frames too large for the switch to handle. (The maximum size for a standard Ethernet frame is 1518 bytes; frames larger than that are called giants, and frames smaller than 64 bytes are called runts.)
Dot1q does not encapsulate frames. A 4-byte header is added to the frame, resulting in less overhead than ISL. If the frame is destined for hosts residing in the native VLAN, that header isn't added. For trunks to work properly, the port speed and port duplex setting should be the same on the two trunking ports. Dot1q does add 4 bytes, but thanks to IEEE 802.3ac, the maximum frame length can be extended to 1522 bytes. To change the native VLAN: SW1(config-if)#switchport trunk native vlan 12
The Cisco-proprietary Dynamic Trunking Protocol actively attempts to negotiate a trunk line with the remote switch. This sounds great, but there is a cost in overhead - DTP frames are transmitted every 30 seconds. To turn DTP off: SW2(config)#int fast 0/8 SW2(config-if)#switchport nonegotiate Command rejected: Conflict between 'nonegotiate' and 'dynamic' status. SW2(config-if)#switchport mode trunk SW2(config-if)#switchport nonegotiate
Is there a chance that two ports that are both in one of the three trunking modes will not successfully form a trunk? Yes - if they're both in dynamic auto mode. End-to-end VLANs should be designed with the 80/20 rule in mind, where 80 percent of the local traffic stays within the local area and the other 20 percent will traverse the network core en route to a remote destination. Local VLANs are designed with the 20/80 rule in mind. Local VLANs assume that 20 percent of traffic is local in scope, while the other 80 percent will traverse the network core. While physical location is unimportant in end-to-end VLANs, users are grouped by location in Local VLANs.
VTP: Place a switch into a VTP domain with the global command vtp domain. In Server mode, a VTP switch can be used to create, modify, and delete VLANs. This means that a VTP deployment has to have at least one Server, or VLAN creation will not be possible. This is the default setting for Cisco switches. Switches running in Client mode cannot be used to create, modify, or delete VLANs. Clients do listen for VTP advertisements and act accordingly when VTP advertisements notify the Client of VLAN changes. Transparent VTP switches don't synchronize their VTP databases with other VTP speakers; they don't even advertise their own VLAN information! Therefore, any VLANs created on a Transparent VTP switch will not be advertised to other VTP speakers in the domain, making them locally significant only. VTP advertisements carry a configuration revision number that enables VTP switches to make sure they have the latest VLAN information. When you introduce a new switch into a VTP domain, you have to make sure that its revision number is zero.
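As a quick sketch, a basic VTP setup on a server and a client might look like this (the domain name is a hypothetical example):

SW1(config)#vtp domain BRYANT
SW1(config)#vtp mode server

SW2(config)#vtp domain BRYANT
SW2(config)#vtp mode client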
Theory holds that there are two ways to reset a switch's revision number to zero:

1. Change the VTP domain name to a nonexistent domain, then change it back to the original name.
2. Change the VTP mode to Transparent, then change it back to Server.

Reloading the switch won't do the job, because the revision number is kept in NVRAM, and the contents of Non-Volatile RAM are kept on a reload.

Summary Advertisements are transmitted by VTP servers every 5 minutes, or upon a change in the VLAN database. Subset Advertisements are transmitted by VTP servers upon a VLAN configuration change.

Configuring VTP Pruning allows the switches to send broadcasts and multicasts to a remote switch only if the remote switch actually has ports that belong to that VLAN. This simple configuration will prevent a great deal of unnecessary traffic from crossing the trunk. All it takes is the global configuration command vtp pruning.

Real-world troubleshooting tip: If you're having problems with one of your VLANs being able to send data across the trunk, run show interface trunk. Make sure that all VLANs shown under "vlans allowed and active in management domain" match the ones shown under "vlans in spanning tree forwarding state and not pruned". It's a rarity, but now you know to look out for it!

As RIPv2 has advantages over RIPv1, VTPv2 has several advantages over VTPv1. VTPv2 supports Token Ring switching, Token Ring VLANs, and runs a consistency check. VTPv1 does none of these.
Cisco switches run in Version 1 by default, although most newer switches are V2-capable. If you have a V2-capable switch such as a 2950 in a VTP domain with switches running V1, just make sure the newer switch is running V1. The version can be changed with the vtp version command.
Spanning Tree Basics Switches use their MAC address table to switch frames, but when a switch is first added to a network, it has no entries in its table. The switch will dynamically build its MAC table by examining the source MAC address of incoming frames.
BPDUs are transmitted by a switch every two seconds to the multicast MAC address 01-80-c2-00-00-00. We've actually got two different BPDU types:
Topology Change Notification (TCN) Configuration
As you'd expect from their name, TCN BPDUs carry updates to the network topology. Configuration BPDUs are used for the actual STP calculations.
The Root Bridge is the "boss" of the switching network - this is the switch that decides what the STP values and timers will be.
This BID is a combination of a default priority value and the switch's MAC address, with the priority value listed first. For example, if a Cisco switch has the default priority value of 32,768 and a MAC address of 11-22-33-44-55-66, the BID would be 32768:11-22-33-44-55-66. Therefore, if the switch priority is left at the default, the MAC address is the deciding factor.
Each potential root port has a root port cost, the total cost of all links along the path to the root bridge. The BPDU actually carries the root port cost, and this cost increments as the BPDU is forwarded throughout the network.
The default STP path costs are determined by the speed of the port:

• 10 Mbps port: 100
• 100 Mbps port: 19
• 1 Gbps port: 4
• 10 Gbps port: 2
Be careful not to jump to the conclusion that the physically shortest path is the logically shortest path.
Hello Time is the interval between BPDUs, two seconds by default.
Forward Delay is the length of both the listening and learning STP stages, with a default value of 15 seconds.
Maximum Age, referred to by the switch as MaxAge, is the amount of time a switch will retain a BPDU's contents before discarding it. The default is 20 seconds.
The value of these timers can be changed with the spanning-tree vlan command. The timers should always be changed on the root switch, and on the secondary root switch as well.
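For illustration, here's what setting the three timers back to their default values for a single VLAN might look like (VLAN 10 is an arbitrary example):

SW1(config)#spanning-tree vlan 10 hello-time 2
SW1(config)#spanning-tree vlan 10 forward-time 15
SW1(config)#spanning-tree vlan 10 max-age 20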
There are two commands that will make a non-root bridge the root bridge for the specified VLAN. If you use the root primary command, the priority will automatically be lowered sufficiently for the local switch to become the root. If you use the vlan priority command, you must make sure the priority is low enough for the local switch to become the root.

SW2(config)#spanning-tree vlan 20 root primary
SW2(config)#spanning-tree vlan 10 priority 4096

(The priority must be entered in increments of 4096 - the lower the value, the more likely the switch is to become the root.)
Ideally, the root bridge should be a core switch, which allows for the highest optimization of STP.
Advanced Spanning Tree Suitable only for switch ports connected directly to a single host, PortFast allows a port running STP to go directly from blocking to forwarding mode.
What if the device connected to a port is another switch? We can't use PortFast to shorten the delay since these are switches, not host devices. What we can use is Uplinkfast.
Uplinkfast is pretty much PortFast for wiring closets. (Cisco recommends that Uplinkfast not be used on switches in the distribution and core layers.)
Some additional details regarding Uplinkfast:
The actual transition from blocking to forwarding isn't really "immediate" - it actually takes 1 - 3 seconds. Uplinkfast cannot be configured on a root switch. When Uplinkfast is enabled, it's enabled globally and for all VLANs residing on the switch. You can't run Uplinkfast on some ports or on a per-VLAN basis - it's all or nothing.
Uplinkfast will take immediate action to ensure that the switch cannot become the root switch. First, the switch priority will be set to 49,152, which means that if all other switches are still at their default priority, they'd all have to go down before this switch can possibly become the root switch. Additionally, the STP Port Cost will be increased by 3000, making it highly unlikely that this switch will be used to reach the root switch by any downstream switches.
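To put the two features side by side, here's a sketch of PortFast on a host-facing port and Uplinkfast enabled globally on an access-layer switch (the interface number is arbitrary):

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree portfast
SW1(config-if)#exit
SW1(config)#spanning-tree uplinkfast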
The Cisco-proprietary feature BackboneFast can be used to help recover from indirect link failures. BackboneFast uses the Root Link Query (RLQ) protocol.
Since all switches in the network have to be able to send, relay, and respond to RLQ requests, and RLQ is enabled by enabling BackboneFast, every switch in the network should be configured for BackboneFast. This is done with the following command: SW2(config)#spanning-tree backbonefast
Root Guard is configured at the port level, and basically disqualifies any switch that is downstream from that port from becoming the root or secondary root. Configuring Root Guard is simple: SW3(config)#int fast 0/3 SW3(config-if)#spanning-tree guard root
If any BPDU comes in on a port that's running BPDU Guard, the port will be shut down and placed into error disabled state, shown on the switch as err-disabled.
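As a sketch, BPDU Guard can be enabled on a single port, or globally for all PortFast-enabled ports (the interface number is arbitrary):

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree bpduguard enable

SW1(config)#spanning-tree portfast bpduguard default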
UDLD detects unidirectional links by transmitting a UDLD frame across the link. If a UDLD frame is received in return, that indicates a bidirectional link, and all is well. If a UDLD frame is not received in return, the link is considered unidirectional.
UDLD has two modes of operation, normal and aggressive. When a unidirectional link is detected in normal mode, UDLD generates a syslog
message but does not shut the port down. In aggressive mode, the port will be put into error disabled state ("err-disabled") after eight UDLD messages receive no echo from the remote switch.
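A sketch of the two ways UDLD is commonly enabled - globally for the switch's fiber ports, or per interface in aggressive mode (the interface number is arbitrary):

SW1(config)#udld enable
SW1(config)#int gig 0/1
SW1(config-if)#udld port aggressive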
Loop Guard prevents a port from going from blocking to forwarding mode due to a unidirectional link. Once the unidirectional link issue is cleared up, the port will come out of loop-inconsistent state and will be treated as an STP port would normally be.
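Loop Guard can likewise be applied to a single port or enabled globally - a sketch (the interface number is arbitrary):

SW1(config)#int fast 0/12
SW1(config-if)#spanning-tree guard loop
SW1(config-if)#exit
SW1(config)#spanning-tree loopguard default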
BPDU Skew Detection is strictly a notification feature. Skew Detection will not take action to prevent STP recalculation when BPDUs are not being relayed quickly enough by the switches, but it will send a syslog message informing the network administrator of the problem.
Comparison of STP / RSTP port states:

STP: disabled > blocking > listening > learning > forwarding
RSTP: discarding > learning > forwarding
When a switch running RSTP misses three BPDUs, it will immediately begin the STP recalculation process. Since the default hello-time is 2 seconds for both STP and RSTP, it takes an RSTP-enabled switch only 6 seconds overall to determine that a link to a neighbor has failed. That switch will then age out any information regarding the failed switch.
When our old friend IEEE 802.1Q ("dot1q") is the trunking protocol, Common Spanning Tree is in use. With dot1q, all VLANs are using a single instance of STP.
Per-VLAN Spanning Tree (PVST) is just what it sounds like - every VLAN has its own instance of STP running.
Defined by IEEE 802.1s, Multiple Spanning Tree gets its name from a scheme that allows multiple VLANs to be mapped to a single instance of STP, rather than having an instance for every VLAN in the network.
The switches in any MST region must agree on the following:

1. The MST configuration name
2. The MST instance-to-VLAN mapping table
3. The MST configuration revision number

If any of these three values are not agreed upon by two given switches, they are in different regions.
Each and every switch in your MST deployment must be configured manually.
To map VLANs to a particular MST instance:

SW2(config-mst)#instance 1 vlan 10, 13, 14-20
Note that I could use commas to separate individual VLANs or use a hyphen to indicate a range of them.
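Pulling the three region attributes together, a full MST configuration might be sketched like this (the region name and revision number are arbitrary examples):

SW2(config)#spanning-tree mode mst
SW2(config)#spanning-tree mst configuration
SW2(config-mst)#name REGION1
SW2(config-mst)#revision 1
SW2(config-mst)#instance 1 vlan 10, 13, 14-20
SW2(config-mst)#exit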
Networking Models The core layer is the backbone of your entire network, so we're interested in high-speed data transfer and very low latency - that's it!
Today's core switches are generally multilayer switches - switches that can handle both the routing and switching of data.
Advanced QoS is generally performed at the core layer.
Not only do the distribution-layer switches have to have high-speed ports and links, they've got to have quite a few to connect to both the access and core switches. That's one reason you'll find powerful multilayer switches at this layer - switches that work at both L2 and L3.
End users communicate with the network at the Access layer. VLAN membership is handled at this layer, as well as traffic filtering and basic QoS. Collision domains are also formed at the access layer.
A good rule of thumb for access switches is "low cost, high switchport-to-user ratio". Don't assume that today's sufficient port density will be just as sufficient tomorrow!
Switch blocks are units of access-layer and distribution-layer devices.
Core blocks consist of the high-powered core switches, and these core blocks allow the switch blocks to communicate.
Dual core is a network design where the switch blocks have redundant connections to the core block. The point at which the switch block ends and the core block begins is very clear.
In a collapsed core, there is no dedicated core switch. The distribution and core switches are the same.
AAA servers, syslog servers, network monitoring tools, and intruder detection tools are found in almost every campus network today. All of these devices can be placed in a switch block of their own, the network management block.
The Enterprise Edge Block works with the Service Provider Edge Block to bring WAN and Internet access to end users.
To configure port autorecovery from err-disabled state, define the causes of this state that should be recovered from without manual intervention, then enter the duration of the port’s err-disabled state in seconds with the following commands: SW2(config)#errdisable recovery cause all SW2(config)#errdisable recovery interval 300
Etherchannels Spanning-Tree Protocol (STP) considers an Etherchannel to be one link. If one of the physical links making up the logical Etherchannel should fail, there is no STP reconfiguration, since STP doesn’t know the physical link went down.
There are two protocols that can be used to negotiate an etherchannel. The industry standard is the Link Aggregation Control Protocol (LACP), and the Cisco-proprietary option is the Port Aggregation Protocol (PAgP).
PAgP and LACP use different terminology to express the same modes. PAgP has desirable and auto modes, where desirable ports actively initiate bundling and auto ports wait for the remote switch to do so. LACP uses active and passive modes, where active ports initiate bundling and passive ports wait for the remote switch to do so.
To select a particular negotiation protocol, use the channel-protocol command. SW1(config-if)#channel-protocol ? lacp Prepare interface for LACP protocol pagp Prepare interface for PAgP protocol
The channel-group command is used to place a port into an etherchannel. SW1(config-if)#channel-group 1 mode ? active Enable LACP unconditionally auto Enable PAgP only if a PAgP device is detected desirable Enable PAgP unconditionally on Enable Etherchannel only passive Enable LACP only if a LACP device is detected
Ports bundled in an Etherchannel need to be running the same speed, duplex, native VLAN, and just about any other value you can think of! If you change a port setting and the EC comes down, you know what to do - change the port setting back!
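As a sketch, bundling two ports into a PAgP-negotiated Etherchannel might look like this (the interface numbers are arbitrary):

SW1(config)#int range fast 0/11 - 12
SW1(config-if-range)#channel-protocol pagp
SW1(config-if-range)#channel-group 1 mode desirable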
QOS The three basic reasons for configuring QoS are delays in packet delivery, unacceptable levels of packet loss, and jitter in voice and video traffic. Of course, these three basic reasons have about 10,000 basic causes! ;)
Best-effort is just what it sounds like - routers and switches making their "best effort" to deliver data. This is considered QoS, but it's kind of a "default QoS". Best effort is strictly "first in, first out" (FIFO).
Integrated Services is much like the High-Occupancy Vehicle lanes found in many larger cities. If your car has three or more people in it, you're considered a "priority vehicle" and you can drive in a special lane with much less congestion than regular lanes. Integrated Services will create this lane in advance for "priority traffic", and when that traffic comes along, the path already exists.
Integrated Services uses the Resource Reservation Protocol (RSVP) to create these paths. RSVP guarantees a quality rate of service, since this "priority path" is created in advance. With Differentiated Services (DiffServ), there are no advance path reservations and there's no RSVP. The QoS policies are written on the routers and switches, and they take action dynamically as needed. Since each router and switch can have a different QoS policy, DiffServ takes effect on a per-hop basis rather than the per-flow basis of Integrated Services. A packet can be considered "high priority" by one router and "normal priority" by the next.
Layer 2 switches have a Class Of Service field that can be used to tag a frame with a value indicating its priority. The limitation is that a switch can't perform CoS while switching a frame from one port to another port on the same switch. If the source port and destination port are on the same switch, QoS is limited to best-effort delivery. CoS can tag a frame that is about to go across a trunk.
Classification is performed when a switch examines the kind of traffic in question and compares it against a given criterion, such as an ACL.
The point in your network at which you choose not to trust incoming QoS values is the trust boundary. The process of replacing an incoming QoS value with another value is marking.
Placing the traffic into the appropriate egress queue, or outgoing queue, is what scheduling is all about.
Tail Drop is aptly named, because that's what happens when the queue fills up. The frames at the head of the queue will be transmitted, but frames coming in and hitting the end of the line are dropped, because there's no place to put them.
Random Early Detection (RED) does exactly what the name says - it detects high congestion early, and randomly drops packets in response. This will inform a TCP sender to slow down. The packets that are dropped are truly random, so while congestion is avoided, RED isn't a terribly intelligent method of avoiding congestion.
WRED will use the CoS values on a switch and the IP Precedence values on a router to intelligently drop frames or packets. Thresholds are set for values, and when that threshold is met, frames with that matching value will be dropped from the queue.
Both RED and WRED are most effective when the traffic is TCP-based, since both of these QoS strategies take advantage of TCP’s retransmission abilities.
Enabling QoS on a switch is easy enough: SW2(config)#mls qos
Basic steps to creating and applying a QoS policy:
1. Use an ACL to define the traffic to be affected by the policy.
2. Write a class-map that calls the ACL.
3. Write a policy-map that calls the class-map and names the action to be taken against the matching traffic.
4. Apply the policy-map with the service-policy command.
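As a sketch of those four steps (the ACL, class-map name, policy-map name, port, and marking value here are illustrative, not from the guide; exact options vary by platform):

```
SW2(config)#access-list 101 permit udp any any range 16384 32767
SW2(config)#class-map match-all VOICE_TRAFFIC
SW2(config-cmap)#match access-group 101
SW2(config-cmap)#exit
SW2(config)#policy-map MARK_VOICE
SW2(config-pmap)#class VOICE_TRAFFIC
SW2(config-pmap-c)#set ip precedence 5
SW2(config-pmap-c)#exit
SW2(config-pmap)#exit
SW2(config)#interface fast 0/5
SW2(config-if)#service-policy input MARK_VOICE
```

Note how each step calls the previous one by name: the class-map calls the ACL, the policy-map calls the class-map, and service-policy applies the policy-map to the interface.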
Traffic should generally be classified and marked at the Access layer. Low Latency Queuing (LLQ) is an excellent choice for core switches. The name says it all - low latency! Weighted Fair Queuing gives priority to low-volume traffic, and high-volume traffic shares the remaining bandwidth.
Multilayer Switching

Multilayer switches are devices that switch and route packets in the switch hardware itself.
The first multilayer switching (MLS) method is route caching. This method may be more familiar to you as NetFlow switching. The routing processor routes a flow's first packet, the switching engine snoops in on that packet and the destination, and the switching engine takes over and forwards the rest of the packets in that flow.
A flow is a unidirectional stream of packets from a source to a destination.
Cisco Express Forwarding (CEF) is a highly popular method of multilayer switching. Primarily designed for backbone switches, this topology-based switching method requires special hardware, so it's not available on all L3 switches.
CEF-enabled switches keep a Forwarding Information Base (FIB) that contains the usual routing information - the destination networks, their masks, the next-hop IP addresses, etc - and CEF will use the FIB to make L3 prefix-based decisions. The FIB's contents will mirror that of the IP routing table.
The FIB takes care of the L3 routing information, but what of the L2 information we need? That's found in the Adjacency Table (AT).
On an MLS, a logical interface representing a VLAN is configured like this: MLS(config)#interface vlan 10 MLS(config-if)#ip address 10.1.1.1 255.255.255.0
You need to create the VLAN before the SVI, and that VLAN must be active at the time of SVI creation. Hosts in that SVI’s VLAN should use this address as their gateway. Remember that the VLAN and SVI work together, but they're not the same thing. Creating a VLAN doesn't create an SVI, and creating an SVI doesn't create a VLAN.
The ports on multilayer switches are going to be running in L2 mode by default, so to assign an IP address and route on such a port, it must be configured as an L3 port with the no switchport command. MLS(config)#interface fast 0/1 MLS(config-if)# no switchport MLS(config-if)# ip address 172.1.1.1 255.255.255.0
To put a port back into switching mode, use the switchport command. MLS(config)# interface fast 0/1 MLS(config-if)# switchport
CEF has a limitation in that IPX, SNA, LAT, and AppleTalk are either not supported by CEF or, in the case of SNA and LAT, are nonroutable protocols. If you're running any of these on a CEF-enabled switch, you'll need fallback bridging to get this traffic from one VLAN to another.
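A minimal fallback bridging sketch (the bridge group number and VLAN numbers are illustrative): create a bridge group running the VLAN-bridge STP, then assign the SVIs whose VLANs need to exchange the nonroutable traffic to that group.

```
MLS(config)#bridge 1 protocol vlan-bridge
MLS(config)#interface vlan 10
MLS(config-if)#bridge-group 1
MLS(config-if)#interface vlan 20
MLS(config-if)#bridge-group 1
```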
Defined in RFC 2281, HSRP is a Cisco-proprietary protocol in which routers are put into an HSRP router group. One of the routers will be selected as the primary, and that primary will handle the routing while the other routers are in standby, ready to handle the load if the primary router becomes unavailable. The MAC address 00-00-0c-07-ac-xx is reserved for HSRP, and xx is the group number in hexadecimal. On rare occasions, you may have to change the MAC address assigned to the virtual router. This is done with the standby mac-address command. R2(config-if)#standby 5 mac-address 0000.1111.2222
The following configuration configures an HSRP router for interface tracking. The router’s HSRP priority will drop by 10 (the default decrement) if the line protocol on Serial0 goes down. R2(config)#interface ethernet0 R2(config-if)#standby 1 priority 105 preempt R2(config-if)#standby 1 ip 172.12.23.10 R2(config-if)#standby 1 track serial0
Defined in RFC 2338, VRRP is the open-standard equivalent of HSRP. VRRP works very much like HSRP, and is suited to a multivendor environment.
As with HSRP and VRRP, GLBP routers will be placed into a router group. However, GLBP allows every router in the group to handle some of the load, rather than having a primary router handle all of it while the standby routers remain idle.
The Active Virtual Gateway (AVG) in the group will send requesting hosts ARP responses containing virtual MAC addresses. The virtual MAC addresses are assigned by the AVG as well, to the AVFs – Active Virtual Forwarders.
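GLBP configuration closely parallels the HSRP example earlier; a sketch (the interface, group number, virtual address, and load-balancing method are illustrative):

```
R2(config)#interface ethernet0
R2(config-if)#glbp 1 ip 172.12.23.10
R2(config-if)#glbp 1 priority 105
R2(config-if)#glbp 1 preempt
R2(config-if)#glbp 1 load-balancing round-robin
```

The load-balancing option is what sets GLBP apart - with round-robin, the AVG hands out a different AVF's virtual MAC to each successive ARP request.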
To add a server’s IP address to a server farm: MLS(config)# ip slb serverfarm ServFarm MLS(config-slb-sfarm)# real 210.1.1.11 MLS(config-slb-real)# inservice
To create the virtual server for the server farm: MLS(config)# ip slb vserver VIRTUAL_SERVER MLS(config-slb-vserver)# serverfarm ServFarm MLS(config-slb-vserver)# virtual 210.1.1.14 MLS(config-slb-vserver)# inservice
Switch Security

A local database of passwords is just one method of authenticating users. We can also use RADIUS servers (Remote Authentication Dial-In User Service, a UDP service) or TACACS+ servers (Terminal Access Controller Access Control System, a TCP service). To enable AAA on a switch: SW2(config)#aaa new-model
Port security uses a host’s MAC address for authentication. SW2(config)#int fast 0/5 SW2(config-if)#switchport port-security Command rejected: Fa0/5 is not an access port. SW2(config-if)#switchport mode access SW2(config-if)#switchport access vlan 10
The number of secure MAC addresses defined here includes static and dynamically learned addresses.
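The common port-security options can be sketched as follows, continuing on the access port (the maximum and violation mode shown are illustrative values, not from the guide):

```
SW2(config-if)#switchport port-security maximum 2
SW2(config-if)#switchport port-security mac-address sticky
SW2(config-if)#switchport port-security violation shutdown
```

The sticky option dynamically learns addresses and writes them into the running config; the violation mode decides whether an offending frame shuts the port down, is simply dropped (protect), or is dropped and logged (restrict).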
One major difference between dot1x port-based authentication and port security is that both the host and switch port must be configured for 802.1x EAPOL (Extensible Authentication Protocol over LANs). Until the user is authenticated, only the following protocols can travel through the port:
EAPOL
Spanning-Tree Protocol
Cisco Discovery Protocol
By default, once the user authenticates, all traffic can be received and transmitted through this port. To configure dot1x, AAA must first be enabled.
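A typical dot1x configuration sketch (commands vary somewhat by platform and IOS version; the interface and RADIUS method list here are illustrative):

```
SW2(config)#aaa new-model
SW2(config)#aaa authentication dot1x default group radius
SW2(config)#dot1x system-auth-control
SW2(config)#interface fast 0/5
SW2(config-if)#switchport mode access
SW2(config-if)#dot1x port-control auto
```

The port-control auto setting is what actually makes the port challenge the host; force-authorized (the default) and force-unauthorized skip EAPOL entirely.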
SPAN allows the switch to mirror the traffic from the source port(s) to the destination port, the destination port being the port to which the network analyzer is attached. Local SPAN occurs when the destination and source ports are all on the same switch. If the source was a VLAN rather than a collection of physical ports, VLAN-based SPAN (VSPAN) would be in effect. RSPAN (Remote SPAN) is configured when source and destination ports are found on different switches. The command monitor session will start a SPAN session, along with allowing the configuration of the source and destination. SPAN Source port notes:
A source port can be monitored in multiple SPAN sessions.
A source port can be part of an Etherchannel.
A source port cannot also be configured as a destination port.
A source port can be any port type - Ethernet, FastEthernet, etc.
SPAN Destination port notes:
A destination port can be any port type.
A destination port can participate in only one SPAN session.
A destination port cannot be a source port.
A destination port cannot be part of an Etherchannel.
A destination port doesn't participate in STP, CDP, VTP, PAgP, LACP, or DTP.
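A minimal local SPAN session sketch (session number and ports are illustrative): the source is the port being watched, the destination is the port holding the analyzer.

```
SW2(config)#monitor session 1 source interface fast 0/1
SW2(config)#monitor session 1 destination interface fast 0/10
```

For VSPAN, the source line would name a VLAN instead, e.g. monitor session 1 source vlan 100.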
To filter traffic between hosts in the same VLAN, we've got to use a VLAN Access List (VACL). A sample configuration follows: SW2(config)#ip access-list extended NO_123_CONTACT SW2(config-ext-nacl)#permit ip 171.10.10.0 0.0.0.3 172.10.10.0 0.0.0.255 SW2(config)# vlan access-map NO_123 10 SW2(config-access-map)# match ip address NO_123_CONTACT SW2(config-access-map)# action drop SW2(config-access-map)# vlan access-map NO_123 20 SW2(config-access-map)# action forward SW2(config)# vlan filter NO_123 vlan-list 100
Dot1q tunneling allows a service provider to transport frames from different customers over the same tunnel - even if they're using the same VLAN numbers. This technique also keeps customer VLAN traffic segregated from the service provider's own VLAN traffic. The configuration is very simple, and needs to be configured only on the service provider switch ports that are receiving traffic from and sending traffic to the customer switches.
MLS_1(config)#int fast 0/12
MLS_1(config-if)#switchport access vlan 100
MLS_1(config-if)#switchport mode dot1q-tunnel
MLS_1(config-if)#exit
MLS_1(config)#vlan dot1q tag native
The service provider switches will accept CDP frames from the customer switches, but will not send them through the tunnel to the remote customer site. Worse, STP and VTP frames will not be accepted at all, giving the customer a partial (and inaccurate) picture of its network. To tunnel STP, VTP, and CDP frames across the service provider network, a Layer 2 Protocol Tunnel must be built.
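A Layer 2 Protocol Tunnel sketch, applied to the same customer-facing tunnel port (port number illustrative):

```
MLS_1(config)#interface fast 0/12
MLS_1(config-if)#l2protocol-tunnel cdp
MLS_1(config-if)#l2protocol-tunnel stp
MLS_1(config-if)#l2protocol-tunnel vtp
```

With these in place, the provider switch encapsulates the customer's CDP, STP, and VTP frames and carries them across the provider cloud to the remote customer site.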
Voice VLANs

As is always the case with voice or video traffic, the key here is getting the voice traffic to its destination as quickly as possible in order to avoid jitter and unintelligible voice streams. The human ear will only accept 140 - 150 milliseconds of delay before it notices a problem with voice delivery. That means we've got that amount of time to get the voice traffic from Point A to Point B.
802.1p is a priority tagging scheme that grants voice traffic a high priority. All voice traffic will go through the native voice VLAN, VLAN 0. 802.1q will carry traffic in a VLAN configured especially for the voice traffic. By default, the phone marks this traffic with a CoS value of 5.
Some Voice VLAN commands and their options: MLS(config)# mls qos
(globally enables QoS on the switch)
MLS(config)# interface fast 0/5
(port leading to IP phone)
MLS(config-if)# mls qos trust cos
(trust incoming CoS values)
MLS(config-if)# switchport voice vlan ( x / dot1p / none / untagged)
To configure the phone to accept the CoS values coming from the PC: MLS(config)# interface fast 0/5
(port leading to IP phone)
MLS(config-if)# switchport priority extend trust
To configure the phone not to trust the incoming CoS value: MLS(config)# interface fast 0/5
(port leading to IP phone)
MLS(config-if)# switchport priority extend cos 0
We can also make this trust conditional, and trust the value only if the device on the other end of this line is a Cisco IP phone. SW2(config-if)#mls qos trust device ? cisco-phone Cisco IP Phone SW2(config-if)#mls qos trust device cisco-phone
If you configure that command and show mls qos interface indicates the port is not trusted, most likely there is no IP Phone connected to that port. Trust me, I've been there. :) SW2#show mls qos interface fast 0/5 FastEthernet0/5 trust state: not trusted trust mode: trust cos COS override: dis default COS: 0 DSCP Mutation Map: Default DSCP Mutation Map Trust device: cisco-phone Copyright © 2010 The Bryant Advantage. All Rights Reserved.
How To Perform Hexadecimal Conversions Chris Bryant, CCIE #12933
www.thebryantadvantage.com
Performing Hexadecimal Conversions

Cisco certification candidates, from the CCNA to the CCIE, must master binary math. This includes basic conversions, such as binary-to-decimal and decimal-to-binary, as well as more advanced scenarios involving subnetting and VLSM.

Newcomers to hexadecimal numbering are often confused as to how a letter of the alphabet can possibly represent a number. Worse, they may be intimidated – after all, there must be some incredibly complicated formula involved with representing the decimal 11 with the letter “b”, right? Wrong.

The numbering system we use every day, decimal, concerns itself with units of ten. Although we rarely stop to think of it this way, if you read a decimal number from right to left, the number indicates how many units of one, ten, and one hundred we have. That is, the number “15” is five units of one and one unit of ten. The number “289” is nine units of one, eight units of ten, and two units of one hundred. Simple enough!
                    Units Of 100   Units Of 10   Units Of 1
The decimal “15”         0              1             5
The decimal “289”        2              8             9
Hex numbers are read much the same way, except the units here are units of 16. The number “15” in hex is read as having five units of one and one unit of sixteen. The number “289” in hex is nine units of one, eight units of sixteen, and two units of 256 (16 x 16).
                      Units Of 256   Units Of 16   Units Of 1
The hex numeral “15”       0              1             5
The hex numeral “289”      2              8             9
Since hex uses units of sixteen, how can we possibly represent a value of 10, 11, 12, 13, 14, or 15? We do so with letters. The decimal “10” is represented in hex with the letter “a”; the decimal 11 with “b”; the decimal “12” with “c”, “13” with “d”, “14” with “e”, and finally, “15” with “f”. (Remember that a MAC address of “ffff.ffff.ffff” is a Layer 2 broadcast.)

Practice Your Conversions For Exam Success

Now that you know where the letters fall into place in the hexadecimal numbering world, you’ll have little trouble converting hex to decimal and decimal to hex – if you practice. How would you convert the decimal 27 to hex? You can see that there is one unit of 16 in this decimal; that leaves 11 units of one. This is represented in hex with “1b” – one unit of sixteen, 11 units of one.
Work From Left To Right To Perform Decimal – Hexadecimal Conversions.

Decimal Number “27”

Units of 256   Units of 16   Units of 1   Hexadecimal Value
     0              1          b (11)           1b
Converting the decimal 322 to hex is no problem. There is one unit of 256; that leaves 66. There are four units of 16 in 66; that leaves 2, or two units of one. The hex equivalent of the decimal 322 is the hex figure 142 – one unit of 256, four units of 16, and two units of one.
Decimal Number “322”

Units of 256   Units of 16   Units of 1   Hexadecimal Value
     1              4             2             142
Hex-to-decimal conversions are even simpler. Given the hex number 144, what is the decimal equivalent? We have one unit of 256, four units of 16, and four units of one. This gives us the decimal figure 324.
Hexadecimal Number “144”

Units of 256   Units of 16   Units of 1   Decimal Value
     1              4             4       256 + 64 + 4 = 324
What about the hex figure c2? We now know that the letter “c” represents the decimal number “12”. This means we have 12 units of 16, and two units of one. This gives us the decimal figure 194.
Hexadecimal Number “c2”

Units of 256   Units of 16   Units of 1   Decimal Value
     0             12             2       192 + 2 = 194
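If you'd like to double-check these worked answers, a short Python sketch (not part of the original guide) performs the same digit-by-digit arithmetic:

```python
def hex_to_dec(h):
    """Read hex digits left to right: each step multiplies the total by 16."""
    digits = "0123456789abcdef"
    value = 0
    for ch in h.lower():
        value = value * 16 + digits.index(ch)
    return value

# The worked examples from this section:
print(hex_to_dec("1b"))   # 27  (one unit of 16 + 11 units of 1)
print(hex_to_dec("142"))  # 322 (256 + 64 + 2)
print(hex_to_dec("144"))  # 324 (256 + 64 + 4)
print(hex_to_dec("c2"))   # 194 (192 + 2)
```

Python's built-in int("1b", 16) does the same job, but spelling the loop out mirrors the unit-by-unit method used above.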
I have written 20 practice questions that will help you practice your hexadecimal conversion skills. Once you practice with these questions, and know exactly how each answer was arrived at, you’ll have no problem with hexadecimal conversions on your Cisco exams. Best of luck! To your success, Chris Bryant, CCIE™ #12933
1. Convert the following hexadecimal number to decimal: 1c
2. Convert the following hexadecimal number to decimal: f1
3. Convert the following hexadecimal number to decimal: 2a9
4. Convert the following hexadecimal number to decimal: 14b
5. Convert the following hexadecimal number to decimal: 3e4
6. Convert the following decimal number to hexadecimal: 13
7. Convert the following decimal number to hexadecimal: 784
8. Convert the following decimal number to hexadecimal: 419
9. Convert the following decimal number to hexadecimal: 1903
10. Convert the following decimal number to hexadecimal: 345
11. Convert the following hex number to binary: 42
12. Convert the following hex number to binary: 12
13. Convert the following hex number to binary: a9
14. Convert the following hex number to binary: 3c
15. Convert the following hex number to binary: 74
16. Convert the following binary string to hex: 00110011
17. Convert the following binary string to hex: 11001111
18. Convert the following binary string to hex: 01011101
19. Convert the following binary string to hex: 10011101
20. Convert the following binary string to hex: 11010101
Before we go through the answers and how they were achieved, let's review the meaning of letters in hexadecimal numbering: A = 10, B = 11, C = 12, D = 13, E = 14, F = 15. (And remember that ffff.ffff.ffff is a Layer 2 broadcast!)

Conversions involving hexadecimal numbers will use this chart:

256   16   1
_________________________________________________________________________
1. Convert the following hexadecimal number to decimal: 1c

256   16   1
 0     1   c

There is one unit of 16 and twelve units of 1. 16 + 12 = 28.
_________________________________________________________________________
2. Convert the following hexadecimal number to decimal: f1

256   16   1
 0     f   1

There are fifteen units of 16 and 1 unit of 1. 240 + 1 = 241
_________________________________________________
3. Convert the following hexadecimal number to decimal: 2a9

256   16   1
 2     a   9

There are two units of 256, ten units of 16, and nine units of 1. 512 + 160 + 9 = 681
______________________________________________________
4. Convert the following hexadecimal number to decimal: 14b

256   16   1
 1     4   b

There is one unit of 256, four units of 16, and 11 units of 1. 256 + 64 + 11 = 331
______________________________________________________
5. Convert the following hexadecimal number to decimal: 3e4

256   16   1
 3     e   4

There are three units of 256, fourteen units of 16, and four units of 1. 768 + 224 + 4 = 996
______________________________________________________
6. Convert the following decimal number to hexadecimal: 13

When converting decimal to hex, work with the same chart from left to right. Are there any units of 256 in the decimal 13? No. Are there any units of 16 in the decimal 13? No. Are there any units of 1 in the decimal 13? Sure. Thirteen of them. Remember how we express the number "13" with a single hex character?

256   16   1
 0     0   d

The answer is "d". It's not necessary to have any leading zeroes when expressing the number.
_________________________________________________
7. Convert the following decimal number to hexadecimal: 784

Are there any units of 256 in the decimal 784? Yes, three of them, for a total of 768. Place a "3" in the 256 slot, and subtract 768 from 784. 784 - 768 = 16. Obviously, there's one unit of 16 in 16. Since there is no remainder, we can place a "0" in the remaining slot.

256   16   1
 3     1   0

The final result is the hex number "310".
_________________________________________________
8. Convert the following decimal number to hexadecimal: 419

Are there any units of 256 in the decimal 419? Yes, one, with a remainder of 163. Are there any units of 16 in the decimal 163? Yes, ten of them, with a remainder of three. Three units of one takes care of the remainder, and the hex number "1a3" is the answer.

256   16   1
 1     a   3

______________________________________________________
9. Convert the following decimal number to hexadecimal: 1903

Are there any units of 256 in the decimal 1903? Yes, seven of them, totaling 1792. This leaves a remainder of 111. Are there any units of 16 in the decimal 111? Yes, six of them, with a remainder of 15. By using the letter "f" to represent 15 units of 1, the final answer "76f" is achieved.

256   16   1
 7     6   f

_________________________________________________
10. Convert the following decimal number to hexadecimal: 345

Are there any units of 256 in 345? Sure, one, with a remainder of 89. Are there any units of 16 in 89? Yes, five of them, with a remainder of 9. Nine units of one give us the hex number "159".

256   16   1
 1     5   9

_________________________________________________
11. Convert the following hex number to binary: 42

First, convert the hex number to decimal. We know "42" in hex means we have four units of 16 and two units of 1. Since 64 + 2 = 66, we have our decimal. Now we've got to convert that decimal into binary. Here's our chart showing how to convert the decimal 66 into binary:

     128  64  32  16   8   4   2   1
66    0   1   0   0   0   0   1   0

The correct answer: 01000010
_______________________________________________________
12. Convert the following hex number to binary: 12

First, convert the hex number to decimal. The hex number "12" indicates one unit of sixteen and two units of one; in decimal, this is 18. Now to convert that decimal into binary. Use the same chart we used in Question 11:

     128  64  32  16   8   4   2   1
18    0   0   0   1   0   0   1   0

The correct answer: 00010010
_______________________________________________________
13. Convert the following hex number to binary: a9

First, convert the hex number to decimal. Since "a" equals 10 in hex, we have 10 units of 16 and nine units of 1. 160 + 9 = 169. Now convert the decimal 169 to binary:

      128  64  32  16   8   4   2   1
169    1   0   1   0   1   0   0   1

The correct answer: 10101001
_______________________________________________________
14. Convert the following hex number to binary: 3c

First, convert the hex number to decimal. We have three units of 16 and 12 units of 1 (c = 12), giving us a total of 60 (48 + 12). Convert the decimal 60 into binary:

     128  64  32  16   8   4   2   1
60    0   0   1   1   1   1   0   0

The correct answer: 00111100
15. Convert the following hex number to binary: 74

First, convert the hex number to decimal. We have seven units of 16 and four units of 1, resulting in the decimal 116 (112 + 4). Convert the decimal 116 into binary:

      128  64  32  16   8   4   2   1
116    0   1   1   1   0   1   0   0

The correct answer: 01110100
_______________________________________________________
16. Convert the following binary string to hex: 00110011

First, we'll convert the binary string to decimal:

128  64  32  16   8   4   2   1
 0    0   1   1   0   0   1   1     Decimal: 51

To finish answering the question, convert the decimal 51 to hex. Are there any units of 256 in the decimal 51? No. Are there any units of 16 in the decimal 51? Yes, three, for a total of 48 and a remainder of three. Three units of one gives us the hex number "33".

256   16   1
 0     3   3
17. Convert the following binary string to hex: 11001111

First, we'll convert the binary string to decimal:

128  64  32  16   8   4   2   1
 1    1   0   0   1   1   1   1     Decimal: 207

Now convert the decimal 207 to hex. Are there any units of 256 in the decimal 207? No. Are there any units of 16 in the decimal 207? Yes, twelve of them, for a total of 192 and a remainder of 15. Twelve is represented in hex with the letter "c". Fifteen units of one are expressed with the letter "f", giving us a hex number of "cf".

256   16   1
 0     c   f
18. Convert the following binary string to hex: 01011101

First, convert the binary string to decimal:

128  64  32  16   8   4   2   1
 0    1   0   1   1   1   0   1     Decimal: 93

Now convert the decimal 93 to hex. There are no units of 256, obviously. How many units of 16 are there? Five, for a total of 80 and a remainder of 13. We express the number 13 in hex with the letter "d". The final result is the hex number "5d".

256   16   1
 0     5   d
19. Convert the following binary string to hex: 10011101

As always, convert the binary string to decimal first:

128  64  32  16   8   4   2   1
 1    0   0   1   1   1   0   1     Decimal: 157

Now convert the decimal 157 to hex. There are no units of 256. How many units of 16 are there in the decimal 157? Nine, for a total of 144 and a remainder of 13. You know to express the number 13 in hex with the letter "d", resulting in a hex number of "9d".

256   16   1
 0     9   d
20. Convert the following binary string to hex: 11010101

First, convert the binary string to decimal:

128  64  32  16   8   4   2   1
 1    1   0   1   0   1   0   1     Decimal: 213

Now convert the decimal 213 to hex. No units of 256, but how many of 16? Thirteen of them, with a total of 208 and a remainder of 5. Again, the number 13 in hex is represented with the letter "d", and the five units of one give us the hex number "d5".

256   16   1
 0     d   5
CCNP SWITCH Exam Command Reference Chris Bryant, CCIE #12933
www.thebryantadvantage.com
Command Reference Overview

VLANs
VTP
Basic Spanning Tree
Advanced Spanning Tree
Basic Switch Operations
Multicasting
Quality of Service
Multilayer Switching & Router Redundancy
Switch Security & Tunneling
Voice VLANs
VLANs

show interface trunk shows port trunk modes, encapsulation, whether the interface is actually trunking, and the native vlan for each interface.

SW1#show interface trunk
Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1
Fa0/12      desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      1-999,1001-4094
Fa0/12      1-999,1001-4094

Port        Vlans allowed and active in management domain
Fa0/11      1,12
Fa0/12      1,12

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      1,12
Fa0/12      12
show vlan is the full command to see information regarding all VLANs on the switch, including some reserved ones you probably aren't using.
show vlan brief gives you the information you need to troubleshoot any VLAN-related issue, but limits the information shown on the reserved VLANs.
switchport nonegotiate turns DTP frames off, but the port must be hardcoded for trunking to do so. SW2(config)#int fast 0/8 SW2(config-if)#switchport nonegotiate Command rejected: Conflict between 'nonegotiate' and 'dynamic' status. SW2(config-if)#switchport mode ? access Set trunking mode to ACCESS unconditionally dynamic Set trunking mode to dynamically negotiate access or trunk mode trunk Set trunking mode to TRUNK unconditionally SW2(config-if)#switchport mode trunk SW2(config-if)#switchport nonegotiate
switchport mode access and switchport access vlan x work together to place a port into a VLAN. The first command prevents the port from becoming a trunk port, and the second command is a static vlan assignment. SW1(config)#int fast 0/1 SW1(config-if)#switchport mode access SW1(config-if)#switchport access vlan 12
switchport trunk allowed vlan is used to disallow or allow VLANs from sending traffic across the trunk, as shown with the below IOS Help readout. SW1(config-if)#switchport trunk allowed vlan ? WORD VLAN IDs of the allowed VLANs when this port is in trunking mode add add VLANs to the current list all all VLANs except all VLANs except the following none no VLANs remove remove VLANs from the current list
SW1(config)#interface fast 0/11 SW1(config-if)#switchport trunk allowed vlan except 1000 SW1(config-if)#interface fast 0/12 SW1(config-if)#switchport trunk allowed vlan except 1000
switchport trunk encapsulation is used to define whether ISL or dot1q will be used on the trunk. Rack1SW1(config-if)#switchport trunk encapsulation ? dot1q Interface uses only 802.1q trunking encapsulation when trunking isl Interface uses only ISL trunking encapsulation when trunking negotiate Device will negotiate trunking encapsulation with peer on interface
switchport trunk native vlan x is used to change the native VLAN of the trunk. This should be agreed upon by both endpoints. Be prepared to see an error message while you're changing this, as shown below. SW1(config-if)#switchport trunk native vlan 12 1d21h: %SPANTREE-2-RECV_PVID_ERR: Received BPDU with inconsistent peer vlan id 1on FastEthernet0/11 VLAN12. 1d21h: %SPANTREE-2-BLOCK_PVID_PEER: Blocking FastEthernet0/11 on VLAN0001. Inconsistent peer vlan. 1d21h: %SPANTREE-2-BLOCK_PVID_LOCAL: Blocking FastEthernet0/11 on VLAN0012. Inconsistent local vlan.
VTP

show vtp counters displays the number of different VTP advertisements sent and received by the switch.
show vtp status displays just about anything you need to know about your VTP domain, including domain name and revision number.
vtp domain is used to define the VTP domain.
vtp mode is used to define the switch as a VTP Server, Client, or as running in Transparent mode.
To configure VTP in secure mode, set a password on all devices in the VTP domain with vtp password. Verify with show vtp password.
Enable VTP pruning with vtp pruning, and check the VTP version with vtp version.
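Putting those VTP commands together as a sketch (the domain name and password shown are illustrative):

```
SW2(config)#vtp domain BRYANT
SW2(config)#vtp mode server
SW2(config)#vtp password cisco
SW2(config)#vtp pruning
SW2#show vtp status
```

Remember that every switch in the domain must agree on the domain name and, in secure mode, the password, or advertisements will be ignored.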
Basic Spanning Tree

show spanning-tree interface x will display the STP settings for an individual port.
SW2#show spanning-tree vlan 1
Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----
Fa0/11           Root FWD 19        128.11   P2p
Fa0/12           Altn BLK 19        128.12   P2p
show spanning-tree vlan x shows the STP setting for the entire VLAN. SW1#show spanning-tree vlan 1 VLAN0001 Spanning tree enabled protocol ieee Root ID Priority 32769 Address 000f.90e1.c240 This bridge is the root Hello Time 5 sec Max Age 30 sec Forward Delay 20 sec Bridge ID Priority 32769 (priority 32768 sys-id-ext 1) Address 000f.90e1.c240 Hello Time 5 sec Max Age 30 sec Forward Delay 20 sec Aging Time 300 Interface Role Sts Cost Prio.Nbr Type ---------------- ---- --- --------- -------- -------------------------------Fa0/11 Desg FWD 19 128.11 P2p Fa0/12 Desg FWD 19 128.12 P2p
spanning-tree vlan x can be used to make a nonroot switch the root bridge with either the root primary or priority options. SW2(config)#spanning-tree vlan 20 root primary SW2(config)#spanning-tree vlan 30 root primary SW2(config)#spanning-tree vlan 30 root ? primary Configure this switch as primary root for this spanning tree secondary Configure switch as secondary root SW2(config)#spanning-tree vlan 10 priority ? bridge priority in increments of 4096
spanning-tree vlan x is also used to change the STP timers, but this must be done on the root bridge to be effective. SW1(config)#spanning-tree vlan 1 hello-time 5 SW1(config)#spanning-tree vlan 1 max-age 30 SW1(config)#spanning-tree vlan 1 forward-time 20
Advanced Spanning Tree

Portfast can be enabled on the interface level or globally with the spanning-tree portfast and spanning-tree portfast default commands.
SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single host. Connecting hubs, concentrators, switches, bridges, etc... to this interface when portfast is enabled, can cause temporary bridging loops. Use with CAUTION
%Portfast has been configured on FastEthernet0/5 but will only have effect when the interface is in a non-trunking mode.
SW2(config)#spanning portfast default
%Warning: this command enables portfast by default on all interfaces. You should now disable portfast explicitly on switched ports leading to hubs, switches and bridges as they may create temporary bridging loops.
Below, you'll see how to enable the STP features Uplinkfast, Backbonefast, Root Guard, BPDU Guard, Loop Guard, and UDLD. Several important options are also shown. You must know these commands and exactly what they do.

SW2(config)#spanning-tree uplinkfast
SW2(config)#spanning-tree backbonefast
SW3(config)#int fast 0/3 SW3(config-if)#spanning-tree guard root
SW1(config)#int fast 0/5 SW1(config-if)#spanning-tree bpduguard % Incomplete command. SW1(config-if)#spanning-tree bpduguard ? disable Disable BPDU guard for this interface enable Enable BPDU guard for this interface SW1(config-if)#spanning-tree bpduguard enable
SW2(config)#udld ? aggressive Enable UDLD protocol in aggressive mode on fiber ports except where locally configured enable Enable UDLD protocol on fiber ports except where locally configured message Set UDLD message parameters SW2(config)#udld enable
SW2(config-if)#int fast 0/5
SW2(config-if)#spanning-tree guard loop
To enable Multiple Spanning Tree:
SW2(config)#spanning-tree mode mst
The name and revision number must now be set.
SW2(config)#spanning-tree mst configuration
SW2(config-mst)#name REGION1
SW2(config-mst)#revision 1
To map VLANs to a particular MST instance:
SW2(config-mst)#instance 1 vlan 10, 13, 14-20
Basic Switch Operation

show mac-address-table displays the CAM table contents. This command has about 10 options -- the dynamic option is very helpful.

SW2#show mac-address-table dynamic
          Mac Address Table
------------------------------------------
Vlan    Mac Address       Type      Ports
----    -----------       ----      -----
1       000e.d7f5.a04b    DYNAMIC   Fa0/11
Total Mac Addresses for this criterion: 1
Create an SVI on an L3 switch:
SWITCH_2(config)#interface vlan 1
SWITCH_2(config-if)#ip address 20.1.1.1 255.255.255.0
Configure the switch's VTY lines to accept Secure Shell connections:
line vty 0 15
 transport input ssh
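Those two lines alone won't get SSH working; the switch also needs a hostname, a domain name, RSA keys, and a way to authenticate users. A minimal sketch, assuming a local user database (the hostname, domain name, username, and password below are illustrative, not from the original):

```
SW2(config)#hostname SW2
SW2(config)#ip domain-name lab.local
! RSA keys must exist before the SSH server will run
SW2(config)#crypto key generate rsa
! Local account for the VTY lines to check against
SW2(config)#username admin privilege 15 secret S3CR3T
SW2(config)#line vty 0 15
SW2(config-line)#login local
SW2(config-line)#transport input ssh
```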
Use the interface-range command to configure a number of interfaces with one command. Use speed and duplex to adjust those settings for an interface, and use description to, well, describe what the ports are doing!

SW2(config)#interface range fast 0/1 - 11
SW2(config-if-range)#speed 10
SW2(config-if-range)#duplex half
SW2(config)#interface range fast 0/11 - 12
SW2(config-if-range)#description ports trunking with SW1

SW2(config)#errdisable recovery cause all
SW2(config)#errdisable recovery interval ?
  timer-interval(sec)
SW2(config)#errdisable recovery interval 300

SW1(config-if)#channel-group 1 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected
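To put those channel-group modes to work, here's a minimal LACP EtherChannel sketch between two switches (the port and group numbers are illustrative). With active on both ends the channel always negotiates; active/passive also works, while on/on bundles the ports with no negotiation at all.

```
SW1(config)#interface range fast 0/11 - 12
SW1(config-if-range)#channel-group 1 mode active

SW2(config)#interface range fast 0/11 - 12
SW2(config-if-range)#channel-group 1 mode active
```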
Multicasting

Enable multicasting with ip multicast-routing. Statically configure the RP location with ip pim rp-address. Enable Sparse Mode on the interfaces with ip pim sparse-mode. Verify with show ip pim neighbor.

R1(config)#ip multicast-routing
R1(config)#ip pim rp-address 172.12.123.1
R1(config)#int s0
R1(config-if)#ip pim sparse-mode

R2(config)#ip multicast-routing
R2(config)#ip pim rp-address 172.12.123.1
R2(config)#int s0
R2(config-if)#ip pim sparse-mode

R3(config)#ip multicast-routing
R3(config)#ip pim rp-address 172.12.123.1
R3(config)#int s0
R3(config-if)#ip pim sparse-mode

R1#show ip pim neighbor
PIM Neighbor Table
Neighbor Address  Interface  Uptime    Expires   Ver  Mode
172.12.123.3      Serial0    00:11:08  00:01:37  v2   (DR)
172.12.123.2      Serial0    00:11:37  00:01:38  v2
How to limit the multicast groups a router can serve as the RP for:
R1(config)#access-list 14 permit 224.0.1.40
R1(config)#ip pim rp-address 172.12.123.1 ?
  Access-list reference for group
  Access-list reference for group (expanded range)
  WORD      IP Named Standard Access list
  override  Overrides Auto RP messages
R1(config)#ip pim rp-address 172.12.123.1 14
Configure routers as PIM RPs with send-rp-announce, and as PIM Mapping Agents with send-rp-discovery.
R3(config)#ip pim send-rp-announce serial0 scope 5
R1(config)#ip pim send-rp-discovery serial 0 scope 5
Bootstrapping Commands:

To configure R1 as a C-BSR:
R1(config)#ip pim bsr-candidate
To configure R2 and R3 as C-RPs:
R2(config)#ip pim rp-candidate
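Both candidate commands take an interface argument naming the address the router will advertise; a sketch assuming the serial interfaces from the earlier multicast examples:

```
! R1 advertises serial0's address as a candidate BSR
R1(config)#ip pim bsr-candidate serial0
! R2 and R3 advertise themselves as candidate RPs
R2(config)#ip pim rp-candidate serial0
R3(config)#ip pim rp-candidate serial0
```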
IGMP and CGMP:

Verify IGMP snooping with show ip igmp snooping.

SW1#show ip igmp snooping
Global IGMP Snooping configuration:
-----------------------------------
IGMP snooping              : Enabled
IGMPv3 snooping (minimal)  : Enabled
Report suppression         : Enabled
TCN solicit query          : Disabled
TCN flood query count      : 2

Vlan 1:
--------
IGMP snooping                   : Enabled
Immediate leave                 : Disabled
Multicast router learning mode  : pim-dvmrp
Source only learning age timer  : 10
CGMP interoperability mode      : IGMP_ONLY
Enable CGMP on a router and switch as shown below. The router interface must be PIM-enabled first.

R1(config)#int e0
R1(config-if)#ip cgmp
WARNING: CGMP requires PIM enabled on interface
R1(config-if)#ip pim sparse-mode
R1(config-if)#ip cgmp

SW1(config)#int fast 0/5
SW1(config-if)#ip cgmp
Quality Of Service

To enable QoS:
SW2(config)#mls qos
To configure an interface to trust the incoming CoS: MLS(config-if)# mls qos trust cos
To change your mind and take the trust off: SW2(config-if)# no mls qos trust
To create COS-DSCP and IP PREC-DSCP maps:
SW2(config)#mls qos map cos-dscp
SW2(config)#mls qos map ip-prec-dscp
A mutation map is created as follows:
SW2(config)#mls qos map dscp-mutation
The mutation map needs to be applied to the proper interface:
SW2(config-if)#mls qos dscp-mutation MAP_NAME
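For reference, the map commands take their values inline; a sketch (the map name and DSCP values here are illustrative, not from the original):

```
! Map CoS values 0-7 to these eight DSCP values, in order
SW2(config)#mls qos map cos-dscp 0 8 16 24 32 40 48 56
! Map IP Precedence values 0-7 to DSCP
SW2(config)#mls qos map ip-prec-dscp 0 8 16 24 32 40 48 56
! Mutation map: rewrite incoming DSCP 16 and 18 to 24
SW2(config)#mls qos map dscp-mutation MUTATE_DSCP 16 18 to 24
! Apply the mutation map to the interface
SW2(config)#int fast 0/5
SW2(config-if)#mls qos dscp-mutation MUTATE_DSCP
```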
To create a QoS policy, write an ACL to identify the traffic and use a class-map to refer to the ACL:
SW1(config)#access-list 105 permit tcp any any eq 80
SW1(config)#class-map WEBTRAFFIC
SW1(config-cmap)#match access-group 105
QoS policies are configured with the policy-map command, and each clause of the policy contains an action to be taken on traffic matching that clause.
SW1(config)#policy-map LIMIT_WEBTRAFFIC_BANDWIDTH
SW1(config-pmap)#class WEBTRAFFIC
SW1(config-pmap-c)#police 5000000 exceed-action drop
SW1(config-pmap-c)#exit
Finally, apply the policy to an interface with the service-policy command.
SW1(config)#int fast 0/5
SW1(config-if)#service-policy input LIMIT_WEBTRAFFIC_BANDWIDTH
Multilayer Switching
To create a Switched Virtual Interface:
MLS(config)#interface vlan 10
MLS(config-if)#ip address 10.1.1.1 255.255.255.0
To configure a multilayer switch port as a routed port:
MLS(config)#interface fast 0/1
MLS(config-if)#no switchport
MLS(config-if)#ip address 172.1.1.1 255.255.255.0
To configure a multilayer switch port as a switching port:
MLS(config)#interface fast 0/1
MLS(config-if)#switchport
To configure basic HSRP:

R2(config)#interface ethernet0
R2(config-if)#standby 5 ip 172.12.23.10

R3(config)#interface ethernet0
R3(config-if)#standby 5 ip 172.12.23.10

R2#show standby
Ethernet0 - Group 5
  Local state is Standby, priority 100
  Hellotime 3 sec, holdtime 10 sec
  Next hello sent in 0.776
  Virtual IP address is 172.12.23.10 configured
  Active router is 172.12.23.3, priority 100 expires in 9.568
  Standby router is local
  1 state changes, last state change 00:00:22
To change HSRP timers: R3(config-if)#standby 5 timers 4 12
To change HSRP priority and allow a router to take over from an online Active router: R2(config-if)#standby 5 priority 150 preempt
To change the HSRP virtual router MAC address: R2(config-if)#standby 5 mac-address 0000.1111.2222
To configure HSRP interface tracking: R2(config-if)#standby 1 track serial0
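The track option also accepts a decrement; when the tracked interface goes down, the HSRP priority drops by that amount (10 if unspecified). A sketch with an explicit decrement (the value 30 is illustrative):

```
! Drop this router's HSRP priority by 30 if serial0 goes down
R2(config-if)#standby 1 track serial0 30
```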
To configure GLBP: MLS(config-if)# glbp 5 ip 172.1.1.10
To change the interface priority, use the glbp priority command. To allow the local router to preempt the current AVG, use the glbp preempt command.
MLS(config-if)#glbp 5 priority 150
MLS(config-if)#glbp 5 preempt
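The AVG can also be told how to hand out virtual MAC addresses to the forwarders; a sketch of the load-balancing options (round-robin is the default, and the weighting value here is illustrative):

```
! Balance by configured weight instead of the default round-robin
MLS(config-if)#glbp 5 load-balancing weighted
MLS(config-if)#glbp 5 weighting 120
```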
To configure members of the server farm "ServFarm":
MLS(config)#ip slb serverfarm ServFarm
MLS(config-slb-sfarm)#real 210.1.1.11
MLS(config-slb-real)#inservice
To create the SLB virtual server:
MLS(config)#ip slb vserver VIRTUAL_SERVER
MLS(config-slb-vserver)#serverfarm ServFarm
MLS(config-slb-vserver)#virtual 210.1.1.14
MLS(config-slb-vserver)# inservice
To allow only specified hosts to connect to the virtual server: MLS(config-slb-vserver)# client 210.1.1.0 0.0.0.255
Switch Security / Tunnel Commands
To enable AAA and specify a RADIUS or TACACS server:
SW2(config)#aaa new-model
SW2(config)#radius-server host ?
  Hostname or A.B.C.D  IP address of RADIUS server
SW2(config)#tacacs-server ?
  host  Specify a TACACS server
To define a default method list for AAA authentication: SW2(config)#aaa authentication login default local group radius
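In that method list, the local database is tried first and RADIUS is the backup; the order of methods is the fallback order. A fuller sketch that tries RADIUS first with the local database as the safety net (the server address and key are illustrative):

```
SW2(config)#aaa new-model
! The key must match the one configured on the RADIUS server
SW2(config)#radius-server host 10.1.1.100 key S3CR3T
! Try RADIUS first; fall back to the local database if it's unreachable
SW2(config)#aaa authentication login default group radius local
```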
To configure port security:
SW2(config)#int fast 0/5
SW2(config-if)#switchport port-security
Command rejected: Fa0/5 is not an access port.
SW2(config-if)#switchport mode access
SW2(config-if)#switchport access vlan 10
SW2(config-if)#switchport port-security
To specify secure MAC addresses:
SW2(config-if)#switchport port-security mac-address ?
  H.H.H  48 bit mac address
To set the port security mode:
SW2(config-if)#switchport port-security violation ?
  protect   Security violation protect mode
  restrict  Security violation restrict mode
  shutdown  Security violation shutdown mode
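Pulling the port security pieces together, a minimal working sketch (the interface, maximum, and violation mode are illustrative): allow two MAC addresses, learn them dynamically and retain them ("sticky"), and err-disable the port on a violation.

```
SW2(config)#int fast 0/5
SW2(config-if)#switchport mode access
SW2(config-if)#switchport port-security
SW2(config-if)#switchport port-security maximum 2
! Sticky: dynamically learned MACs are added to the running config
SW2(config-if)#switchport port-security mac-address sticky
SW2(config-if)#switchport port-security violation shutdown
```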
To enable Dot1x on the switch:
SW2(config)#dot1x ?
  system-auth-control  Enable or Disable SysAuthControl
SW2(config)#dot1x system-auth-control
Dot1x must be configured globally, but every switch port that's going to run dot1x authentication must be configured as well.
SW2(config-if)#dot1x port-control ?
  auto                PortState will be set to AUTO
  force-authorized    PortState set to Authorized
  force-unauthorized  PortState will be set to UnAuthorized
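A consolidated dot1x sketch, assuming RADIUS authentication (the server address, key, and interface are illustrative):

```
SW2(config)#aaa new-model
SW2(config)#aaa authentication dot1x default group radius
SW2(config)#radius-server host 10.1.1.100 key S3CR3T
! Global enable, then per-port enforcement
SW2(config)#dot1x system-auth-control
SW2(config)#int fast 0/5
SW2(config-if)#switchport mode access
SW2(config-if)#dot1x port-control auto
```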
To configure and verify a local SPAN session:

SW2(config)#monitor session 1 source interface fast 0/1 - 5
SW2(config)#monitor session 1 destination interface fast 0/10

SW2#show monitor
Session 1
---------
Type              : Local Session
Source Ports      :
    Both          : Fa0/1-2
Destination Ports : Fa0/10
    Encapsulation : Native
        Ingress   : Disabled
To configure a remote SPAN session, first create the VLAN that will carry the mirrored traffic:
SW2(config)#vlan 30
SW2(config-vlan)#remote-span
Configure the source ports and destination as shown on the source switch:
SW2(config)#monitor session 1 source interface fast 0/1 - 5
SW2(config)#monitor session 1 destination remote vlan 30 reflector-port fast 0/12
Configure the source VLAN and destination port on the destination switch:
SW1(config)#monitor session 1 source remote vlan 30
SW1(config)#monitor session 1 destination interface fast 0/10
To create a VLAN ACL, first write an ACL specifying the traffic to be affected.
SW2(config)#ip access-list extended NO_123_CONTACT
SW2(config-ext-nacl)#permit ip 171.10.10.0 0.0.0.3 172.10.10.0 0.0.0.255
Follow that with the VLAN access-map:
SW2(config)#vlan access-map NO_123 10
SW2(config-access-map)#match ip address NO_123_CONTACT
SW2(config-access-map)#action drop
SW2(config-access-map)#vlan access-map NO_123 20
SW2(config-access-map)#action forward
Finally, we've got to apply the VACL. We're not applying it to a specific interface - instead, apply the VACL in global configuration mode. SW2(config)# vlan filter NO_123 vlan-list 100
For dot1q tunneling, the following configuration would be needed on the service provider switch ports that will receive traffic from the customer:
MLS_1(config)#int fast 0/12
MLS_1(config-if)#switchport access vlan 100
MLS_1(config-if)#switchport mode dot1q-tunnel
MLS_1(config-if)#exit
MLS_1(config)#vlan dot1q tag native
By default, CDP, STP, and VTP frames will not be sent through the dot1q tunnel. To send those frames to the remote network, create an L2 protocol tunnel. This command has quite a few options, so I've shown as many as possible below.

MLS_1(config-if)#l2protocol-tunnel ?
  cdp                 Cisco Discovery Protocol
  drop-threshold      Set drop threshold for protocol packets
  point-to-point      point-to-point L2 Protocol
  shutdown-threshold  Set shutdown threshold for protocol packets
  stp                 Spanning Tree Protocol
  vtp                 Vlan Trunking Protocol

MLS_1(config-if)#l2protocol-tunnel drop-threshold ?
  Packets/sec rate beyond which protocol packets will be dropped
  cdp             Cisco Discovery Protocol
  point-to-point  point-to-point L2 Protocol
  stp             Spanning Tree Protocol
  vtp             Vlan Trunking Protocol

MLS_1(config-if)#l2protocol-tunnel drop-threshold cdp ?
  Packets/sec rate beyond which protocol packets will be dropped

MLS_1(config-if)#l2protocol-tunnel drop-threshold cdp 2000 ?

MLS_1(config-if)#l2protocol-tunnel drop-threshold cdp 2000

MLS_1(config-if)#l2protocol-tunnel shutdown-threshold ?
  Packets/sec rate beyond which interface is put to err-disable
  cdp             Cisco Discovery Protocol
  point-to-point  point-to-point L2 Protocol
  stp             Spanning Tree Protocol
  vtp             Vlan Trunking Protocol

MLS_1(config-if)#l2protocol-tunnel shutdown-threshold vtp ?
  Packets/sec rate beyond which interface is put to err-disable

MLS_1(config-if)#l2protocol-tunnel shutdown-threshold vtp 4096
Creating a private VLAN:

MLS(config-vlan)#private-vlan community
Private VLANs can only be configured when VTP is in transparent mode
MLS(config-vlan)#exit
MLS(config)#vtp mode transparent
Setting device to VTP TRANSPARENT mode.
MLS(config)#vlan 20
MLS(config-vlan)#private-vlan community
MLS(config-vlan)#private-vlan association ?
  WORD    VLAN IDs of the private VLANs to be configured
  add     Add a VLAN to private VLAN list
  remove  Remove a VLAN from private VLAN list
MLS(config-vlan)#private-vlan association 30
The ports will now be placed into the private VLAN:
MLS(config-if)#switchport mode private-vlan host
MLS(config-if)#switchport private-vlan host-association 20 30
Voice VLANs
The basic Voice VLAN configuration is as follows:
MLS(config)#mls qos
(globally enables QoS on the switch)
MLS(config)# interface fast 0/5
(port leading to IP phone)
MLS(config-if)# mls qos trust cos
(trust incoming CoS values)
MLS(config-if)# switchport voice vlan ( x / dot1p / none / untagged)
To configure the phone to accept the CoS values coming from the PC: MLS(config)# interface fast 0/5
(port leading to IP phone)
MLS(config-if)# switchport priority extend trust
To configure the phone not to trust the incoming CoS value: MLS(config)# interface fast 0/5
(port leading to IP phone)
MLS(config-if)# switchport priority extend cos 0
To configure the switch to trust incoming CoS values if they're sent by a Cisco IP phone: MLS(config-if)# mls qos trust cos MLS(config-if)# mls qos trust device cisco-phone
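Pulling the voice VLAN pieces together, a consolidated port sketch (the VLAN IDs and interface are illustrative): data on VLAN 10, voice on VLAN 100, and CoS trusted only when a Cisco IP phone is detected.

```
MLS(config)#mls qos
MLS(config)#interface fast 0/5
MLS(config-if)#switchport mode access
MLS(config-if)#switchport access vlan 10
MLS(config-if)#switchport voice vlan 100
! Trust CoS only if CDP detects a Cisco IP phone on the port
MLS(config-if)#mls qos trust cos
MLS(config-if)#mls qos trust device cisco-phone
```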
Copyright © 2010 The Bryant Advantage. All Rights Reserved.
Simulator Question Success
Chris Bryant, CCIE #12933
www.thebryantadvantage.com
Back To Index
Five Tips For Success On Cisco Simulator Questions

If there's one type of Cisco exam question that causes anxiety among test-takers, it has to be the dreaded "simulator question". In this kind of question, the candidate is presented with a task or series of tasks that must be performed on a router simulator.

Certainly, this type of question is important. Cisco has stated for the record that these questions are given more weight than the typical multiple-choice questions. Cisco has also stated that partial credit is given in multiple-task simulator questions; that is, if you are asked to do three things and you get two of them right, you do get some points for that.

Given all that, not a day goes by that I don't see a CCNA or CCNP candidate post a note on the Net about what simulator questions are like, what will be asked, and how to prepare for them. The conventional wisdom is that it's difficult, if not impossible, to pass a Cisco exam if you miss the simulator questions. That belief is responsible for a lot of the anxiety I see out there.

But as with most anxiety and fear, this anxiety can be conquered with knowledge. There are five simple steps toward conquering the dreaded simulator questions and nailing your CCNA or CCNP exam. Follow these steps, and you'll be on your way to walking out of that testing room with a passing grade in your hand and a grin on your face!

1. Proper Preparation Prevents Poor Performance.

The best thing you can do to get over your anxiety about simulator questions is to make sure you're properly prepared for them. You have to go beyond just reading books! While simulator programs have come a long way, working on them exclusively is just not enough. You need to put in some work on real Cisco equipment.

I hear you already: "I can't afford my own equipment". Yes, you can. It's cheaper than you think.
Let’s look at the cost of simulators vs. Cisco equipment. A simulator program will set you back $150 - $200. From what I’ve seen on ebay, these programs have little resale value.
It's cheaper than ever before to put together your own Cisco lab. You don't need anything incredibly fancy, and there are dozens of dealers on ebay who have pre-made CCNA and CCNP kits. They'll sell you your cables, transceivers, and everything else you need to get started.

When you've completed your CCNA or CCNP, you then have a choice to make. You can sell your lab on ebay or possibly back to the dealer you bought it from in the first place, or you can keep it and add some equipment for your next level of study. You're basically leasing the lab, not buying it.

There is no substitute for working on Cisco routers. When you walk into a network center, do you see real Cisco routers and switches, or do you see stacks of simulators? Great chefs learn to cook in real kitchens, not kitchen simulators. Great Cisco engineers learn routing and switching on real routers and switches. When you do this, you'll solve the simulator questions easily.

2. Relax.

Sounds simple, right? The problem here is the "fear of the unknown". Most of the emails I get on this topic are from candidates who are worried about what tasks they're going to be asked to configure.

Relax. You're not going to be asked anything above the level of the exam. For the CCNA exams, I would think you'd be asked something along the lines of configuring a VLAN or a routing protocol. Certainly you'll agree that a CCNA should know how to do that. (Would you hire a CCNA who didn't know how to create a VLAN?)

Change your mental approach to simulator questions. Look upon them as a chance to PROVE you know what you're doing. People who don't know what they're doing might get lucky with a multiple-choice question, but there's a simple rule with simulator questions: You either know how to do it or you don't.
This is a chance to prove to Cisco that you are a true CCNA or CCNP. Look on these questions as opportunities, not obstacles.

3. All the information you need is right in front of you.

I occasionally see a post or get an email from a candidate who says there's not enough information to answer the question. This is incorrect. The simulator questions on Cisco exams are straightforward, and all the information you need is right there in front of you.

Make sure to take the tutorial at the beginning of the exam. You do not lose any time by doing so, and there's a thorough walkthrough of a simulator question in the tutorial. I know you're anxious to get started when you walk into the testing room, but you must consider the tutorial part of your exam prep. Cisco's not trying to trick you with these questions. Just make sure you know where to look for the information you need BEFORE you actually start the exam -- go through the tutorial carefully.

4. Use IOS Help in the simulator when possible, but don't depend on it being available.

IOS Help is the Cisco engineer's best friend. I've tackled and passed the CCIE Lab, and I can tell you point blank that anyone who says they remember all the available commands is lying to you. We all use IOS Help. It's great to use in lab environments, too. You should be very familiar with IOS Help in your CCNA and CCNP study, and use it every chance you get.

However, don't depend on it being available in your exam. You can certainly try to use it; it's not like you lose points for doing so. But don't be surprised if it's not available. The same goes for "show" commands. Know the important ones, but again, don't depend on them on exam day.

5. Type out your answer in Notepad before entering it into the simulator. If Notepad isn't available, write out your answer first.

I've heard different reports on whether Notepad is available in testing centers.
Having said that, it's very important for you to write out your answer one way or the other before entering it into the simulator. It's not that you can't change your configuration on the simulator after you enter it; it's that writing your answer out first helps you catch mistakes before they ever reach the simulator. I found writing out my answer before entering it really helped me on my way up the Cisco certification ladder, and current candidates have told me it really helps as well. Give it a try!
I know that the simulator questions on Cisco exams can make you a little nervous, particularly the first time you have to answer them. Using these five techniques will help you nail these important questions and emerge victorious on exam day.

To Your Success,

Chris Bryant
CCIE #12933