CCNA Study Guide Vol1


Hi there! Thanks for purchasing my ICND1 Study Guide! Whether you’re preparing for success on the CCENT 100-101 exam or you’re going all the way to the CCNA 200-120 exam, you’ve made an excellent choice. (Those of you preparing for the CCNA 200-120 exam should also

grab my ICND2 Study Guide to go along with this one.) You’re about to benefit from the same clear, comprehensive CCENT and CCNA instruction that thousands of students around the world have used to earn their certifications. They’ve done it, and you’re about to do the same! On the next page or two, I’ve listed some additional free resources that will definitely help you on your way to the CCENT, the CCNA, and

to real-world networking success. Use them to their fullest, and let’s get started on your exam pass! Chris Bryant “The Computer Certification Bulldog”

Udemy: https://www.udemy.com/u/chrisbryan Over 30,000 happy students have made me the #1 individual

instructor on Udemy, and that link shows you a full list of my free and almost-free Video Boot Camps! Use the discount code BULLDOG60 to join my 27-hour CCNA Video Boot Camp for just $44! You can also follow this link, which has the discount built in.

https://www.udemy.com/ccna-ondemand-video-boot-camp/?couponCode=bulldog60&ccManual=

YouTube: http://www.youtube.com/u/ccie12933

(Over 325 free training videos!)

Website: http://www.thebryantadvantage.com (New look and easier-to-find tutorials in January 2014!)

Facebook: http://on.fb.me/nlT8SD

Twitter: https://twitter.com/ccie12933

See you there!

Chris B.

Copyright © 2013 The Bryant Advantage, Inc. All rights reserved. This book or any portion thereof may not be reproduced or used in any manner whatsoever without the express written permission of the publisher except for the use of brief quotations in a book review. No part of this publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but not limited to photocopy,

photograph, magnetic, or other record, without the prior agreement and written permission of the publisher. The Bryant Advantage, Inc., has attempted throughout this book to distinguish proprietary trademarks from descriptive terms by following the capitalization style used by the manufacturer. Copyrights and trademarks of all products and services listed or described herein are property of their respective owners and companies. All rules and laws pertaining to said copyrights and trademarks are inferred. Printed in the United States of America

First Printing, 2013 The Bryant Advantage, Inc. 9975 Revolutionary Place Mechanicsville, VA 23116

Table Of Contents:

Free CCENT and CCNA Resources
The Fundamentals Of Networking
Ethernet (Header is “You Got Your Ethernet In My Cabling!”)
Hubs & Repeaters (Header is “Hubs, Repeaters, and a little more Ethernet”)
Switching Fundamentals and Security
Introduction To WANs (Header is “A Network Admin’s Book Of WANs”)
DNS, DHCP, And ARP
Router Memory, Configs, and More
IP Addresses and the Routing Process
Config Modes and Fundamental Commands
Static Routing
The Wildcard Mask
OSPF
Access Lists and The Network Time Protocol
Route Summarization
IP Version 6
NAT and PAT
ROAS and L3 Switching
Binary Math and Subnetting Mastery

The Fundamentals Of Networking (With A Little Zen Thrown In)

Before we dive into the nuts and bolts of networking, let’s ask ourselves one question: What is networking? Why are we doing all of this stuff with routers, and switches, and who knows what else? That’s two questions, I grant you, but you get the point.

Networking was once simple. We had a few end users with terminals, and maybe a printer in the room, and that was about it.

Things got just a little more complicated after that. Our end users and company owners

were thrilled that everyone could now print to one printer – and then it seemed like everyone wanted something else! “Wouldn’t it be great if we could send each other files?” “Hey, can we set up the network so we can save data to a central location?” “We’re adding ecommerce to our business – can we set up our ecommerce servers so that only certain people can have access to them?” “We need to add voice

conferencing to our network.” “Wouldn’t it be great if we could see a person’s face while they host an online meeting for the company?” When the first computer networks were put together – and for many networks after that – services we use today without a second thought, such as GoToMeeting and other video / voice conferencing tools, were fantasies. They’re fantasies no longer, and our

networks have grown and grown and grown in order to handle these services and make them available to our end users in an efficient manner. Of course, we also have to secure our networks, and the tools we use to do so can complicate our network operations. In short, we can have a collection of devices like the ones shown here all in one network, and it’s our job to get them to work together.

If you’re new to networking, I know from experience that the thought of putting all of the stuff together can be intimidating. It was for me the first time I put a network together, and we didn’t even have some of these devices yet!

Here’s the key to success in this field, and to keeping calm when your studies intimidate you – it’s all about the fundamentals. Success with networking, whether it’s on certification exams or in real-world server rooms, is all about knowing and applying the fundamentals. The material you’re about to study – networking models, TCP and UDP operation, and other networking fundamentals – is actually the most

important part of your network studies. After all, if you don’t know how the fundamentals work, you can’t possibly master the more advanced concepts. I’m mentioning all of this now for one reason. One night, when you’re tired and you’re hitting your studies hard, it’s easy to look at the OSI model and think to yourself, “Do I REALLY need to know this stuff?”

I know because I used to wonder that, too. I’ve been there. After 10+ years of experience with networking and having helped thousands of networking students around the world meet and exceed their career goals, I can tell you that yes, you REALLY do need to know this material! You’ll see why as we dive into the networking models and proceed through the course. Let’s get to it!

The OSI Networking Model

The OSI model isn’t just something to memorize for the exam and then forget. You’ll find it helpful for real-world troubleshooting and for breaking networking down into “pieces” that are easier to learn. We’ll first take an introductory look at the OSI model, getting familiar with what’s going on at each level, and then we’ll use it to create a path for CCENT and CCNA exam success.

Here’s the OSI model:

As network administrators, we’re primarily concerned with the bottom three layers. As CCENT and CCNA candidates, we’re concerned with all seven layers, so let’s start at the top of the model and work our way down.

The Application Layer

The Application layer is where our end users interact with the network. That’s not everything you need to know for the exam, but it’s a great start.

The Application layer performs important behind-the-scenes tasks:

Makes sure the remote communications partner is available (remember, it takes two to network)

Ensures that both ends of the communication agree on a myriad of rules, including data integrity, privacy, and error recovery (or lack of same)

The Presentation Layer

This layer answers one basic question, and that question is… “How should this data be presented?” Ever opened a PDF with a non-PDF-friendly word processing app? You can end up with hundreds of pages of unreadable garbage – just page after page of text that means nothing to you. That’s a Presentation Layer issue.

Encryption takes place at this layer, and you’re going to hear a lot about encryption in this and future courses!

The Session Layer

This layer is the manager of the overall data transfer process. It handles the creation, maintenance, and teardown of the communication channel between the two parties (the “session”, hence the name of this layer!).

The Transport Layer

The main purpose of the Transport layer is to establish a logical end-to-end connection between two systems. That’s not the only thing going on here! Most of the additional Transport layer functions involve either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). Those two protocols are so important that they have their own section in the course – a section that could be worth 100+ points on your exam, so we’ll be hitting a lot of detail there.

Now we come to the OSI layers that you and I, the network admins, have regular interaction with.

The Network Layer

For a router, Network layer processing answers two basic questions:

What valid paths exist from here to “Point B”?
Of those paths, what is the best path to get to “Point B”?

Boy, that was simple! Class over! Wellllll, it’s not quite that easy. We’ll dive into the details later in

this course. Right now, it’s enough to know that IP addresses (172.12.123.1, for example) are used at the Network layer. There’s another important address we use in our networks, and that one runs at the next layer down.

The Data Link Layer

We’ll be spending a lot of time with switches in this course, and our switches run at this layer, as do these protocols:

Ethernet
HDLC (High-Level Data Link Control)
PPP (Point-to-Point Protocol)
Frame Relay

The address running at this layer is

usually referred to as the MAC address, but it literally has four other names:

Layer 2 address
Hardware address
Burned-in address (BIA)
Physical address

The first name makes perfect sense, since we’re at Layer 2, but what about those others? This address is actually burned into the hardware, so you see where the

“hardware” and “burned-in” names come from. Be careful with that last name, though. We sometimes call the MAC address the “physical address” because it physically exists on the hardware – NOT because it runs at the Physical Layer of the OSI model, because it doesn’t. Right now, I want to introduce you to a set of terms that sound like they do the same thing, but we need to be very clear on the difference:

Error detection
Error correction

Remember: Detecting something doesn’t mean you’re correcting it. Here’s why I’m bringing this up right now…. The data link layer performs error detection via the Frame Check Sequence (FCS). The actual operation of the FCS goes beyond the scope of the CCENT exam, but as a network admin you really should know the FCS fundamentals: 1. The sender runs a mathematical formula (an algorithm) against the contents of the frame.

2. The sender places that result into the FCS field of the frame, and then sends the frame. 3. The receiver of the frame runs the same algorithm against the contents of the frame. If the resulting value matches the one contained in the FCS field, the frame is fine. If the resulting value does not match, the frame is considered corrupt and is discarded.

So why no error recovery? It’s the recipient of the frame that detects the error, not the sender, and the recipient can’t retransmit the frame to itself! All the recipient can do is let the sender know there was a problem.
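If you’d like to see that compute-and-compare pattern in action, here’s a quick sketch in Python. I’m using a generic CRC-32 purely for illustration – the exact math Ethernet uses is beyond CCENT scope, but the sender-computes / receiver-recomputes flow is the same.

```python
import zlib

# A simplified illustration of the FCS idea (Ethernet's FCS is a CRC-32).
# The point is the pattern: sender computes a value, receiver recomputes
# it and compares.

def build_frame(payload: bytes) -> bytes:
    fcs = zlib.crc32(payload)                  # sender runs the algorithm...
    return payload + fcs.to_bytes(4, "big")    # ...and tacks the result onto the end

def frame_is_clean(frame: bytes) -> bool:
    payload, fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == fcs          # receiver recomputes and compares

frame = build_frame(b"hello, network")
print(frame_is_clean(frame))                   # True - the frame arrived intact

corrupted = b"jello" + frame[5:]               # flip some bits "on the wire"
print(frame_is_clean(corrupted))               # False - this frame gets discarded
```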

The Physical Layer

All the work we do at the upper layers of the OSI model comes down to sending data across the Physical layer in the form of ones and zeroes. The data our end users create is eventually going to be “translated” into 1s and 0s. Anything having to do with a physical cable or the standards in use – the pins, the connectors, and the actual electric current – is

running at the Physical layer. With our end users entering data / sending photos / watching videos / whatever, it sounds like we have to do a lot of work to turn all of that into ones and zeroes. It’s almost like we need a plan to do so…. and here it is!

The Data Chopping Process

This is actually known as the overall data transmission process. It’s also a good reminder for network newcomers as to what we’re actually doing here. When the end user sends data, the data goes through all seven layers of the OSI model, but it doesn’t keep the same form – otherwise the physical layer would be getting HUGE chunks of data and have no idea what to do with it!

Instead, the Transport layer begins the process of taking the data and segmenting it into smaller units, and each layer below the Transport layer will break the units up into even smaller units, until the data has been transformed into a stream of ones and zeroes that can successfully be transmitted by the Physical layer. I can guarantee the following data unit terms and their associated OSI layers will show up on your CCENT and CCNA exams.

At the Application, Presentation, and Session layers, data is simply referred to as “data”. While there are important operations going on at these layers, the “chopping” of the data hasn’t started yet. That process begins at the Transport layer, where the data is placed into segments. At the Network layer, data is placed into packets.

At the Data Link layer, data is placed into frames. Finally, at the Physical layer, data takes the form of bits, and those bits are all ones and zeroes! For your exams, be very clear about the data transmission unit associated with each OSI layer. There’s a little extra overhead involved with the OSI model. Each layer is going to add its own header

that will be removed by the same layer on the other end of the session. These headers are layer-specific. The Transport layer doesn’t care about the contents of any header except the one placed there by the Transport layer on the other end of the session. There are almost always exceptions in networking, so let’s introduce you to one right now. Each layer places only a header on the data, except for the Data Link layer, which adds both a header and a trailer.

This combination of data and a layer-specific header is a Protocol Data Unit (PDU), and there’s a PDU for each layer shown above. They’re usually referred to by the layer – L7 PDU, L6 PDU, etc. Once the data is successfully

transmitted by the Physical layer, the data flows back up the model on the other end of the session. As you’d expect, each layer removes the header added by its counterpart on the other end of the session (“same-layer interaction”).
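To make that header-on, header-off idea a little more concrete, here’s a toy sketch of encapsulation and de-encapsulation in Python. The “headers” are just labeled strings – real headers are binary fields, and I’ve left the Data Link trailer out to keep it short.

```python
# A toy model of encapsulation: each layer wraps the data in its own header on
# the way down, and the matching layer strips that header on the way back up.
LAYERS = ["Transport", "Network", "Data Link"]

def encapsulate(data: str) -> str:
    for layer in LAYERS:                      # sending host: work down the model
        data = f"[{layer} hdr]{data}"
    return data

def de_encapsulate(pdu: str) -> str:
    for layer in reversed(LAYERS):            # receiving host: work back up
        header = f"[{layer} hdr]"
        assert pdu.startswith(header), f"{layer} header missing!"
        pdu = pdu[len(header):]               # same-layer interaction: remove your counterpart's header
    return pdu

on_the_wire = encapsulate("user data")
print(on_the_wire)                  # [Data Link hdr][Network hdr][Transport hdr]user data
print(de_encapsulate(on_the_wire))  # user data
```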

Let’s do a quick comparison of same-layer and adjacent-layer interaction. “Same-layer interaction” refers to an OSI layer on one end of the session removing the header placed on it by the same layer at the other end of the session.

Adjacent-layer interaction refers to the interaction between layers of

the OSI model on the same host. For example, the Application layer can have adjacent-layer interaction with the Presentation layer, the Presentation layer can have adjacent-layer interaction with both the Application and Session layers (the ones directly above and below it), and so forth. We’ve spent a lot of time with the OSI model, and while we’re not done with it, there’s another networking model….

The TCP / IP Networking Model

This model also uses layers to illustrate the data transport process, but only five layers as opposed to the OSI model’s seven. For the CCENT and CCNA exams, it’s an excellent idea to know the following:

The layers of the TCP/IP and OSI models (check!)
The responsibilities of each layer (check!)
How the layers of the two models map to each other (coming up!)

Enough prologue and dialogue… here’s the latest version of the TCP/IP model!

Let’s map the OSI model to the TCP/IP model:

The Application layer of the TCP/IP model maps to the top three layers of the OSI model (Application, Presentation, Session). After that, it’s a one-to-one layer mapping all the way down. You TCP/IP model veterans know this is a LOT easier to deal with than the previous model, which follows:

The Internet layer is bulging a little bit, since I wanted to put at least two of the known names for that layer in there. Don’t even get me

started on the multiple names I’ve seen over the years for the bottom layer. We’re all happier with the much more intuitive, new TCP/IP model. There’s nothing tricky about these models, and they will be easy points for you on exam day. Having said that, on exam day, quickly double-check any questions you’re given on these models to be sure of the model you’re being asked about. If you’re asked about the OSI model, don’t give an answer of a layer in the TCP/IP model.

Here’s the magic question I just KNOW some of you are asking:

Why Do We Use Models, Anyway?

Mostly just to aggravate you on exam day. Just kidding! It only seems that way. One reason is that networking models help software vendors create (we hope) interoperable products. For our purposes, breaking the

overall networking process into smaller pieces makes it a lot easier to learn networking in the first place. This is a very important point, not just for this section, but for all of your studies – Cisco and otherwise. It’s really easy to become overwhelmed when you start learning this stuff. Here’s a surefire cure for that feeling: Just take it one feature at a time. Learn one thing about the

subject matter at a time, and soon you’ve got it all mastered. That’s worked for me for over 15 years, and it’ll work for you, too. Now back to the “why” regarding these models… Using a networking model to structure your troubleshooting approach is a real help, especially since most of what you and I do as network admins is troubleshooting.

(It’s not like we configure our networks from scratch every single day.) I always tell students to start troubleshooting at the Physical layer, and you’ll see what I mean as we perform troubleshooting throughout the course. In my experience, there are two kinds of network troubleshooters: 1. Those who have a structured approach. 2. Those who don’t have a

structured approach, and are basically just trying things blindly because they don’t truly understand what they’re doing. You want to be #1. In short – or in long – these networking models aren’t just something to memorize for your exams. You’ll be using them throughout your career, even if you’re not consciously thinking about it. Let’s take a closer look at two new friends that operate at the Transport layer. There’s a ton of fertile

ground for exam questions in this section!

TCP vs. UDP

Here are the similarities between the two:

They both operate at the Transport layer of the OSI model.
They both perform something called “multiplexing”.

You knew the first one, and we’ll talk about the second one later in this section – I promise! That’s about it for the similarities.

Let’s get to the differences!

TCP Characteristics:

Guarantees delivery of segments
Detects and recovers from lost segments
Performs “windowing”
“Connection-oriented”, meaning a two-way communication between the two endpoints will take place before data is actually exchanged

If those terms are new to you, that’s fine – we’ll define them in a moment. First, let’s take a look at UDP:

UDP Characteristics:

“Best-effort delivery”, which is good, but not guaranteed
No loss detection or recovery, since there are no Sequence or Acknowledgment numbers
No windowing
“Connectionless”, meaning there’s no pregame communication between the

two endpoints – the data exchange just starts! Even if you don’t know what windowing is, you have to admit that TCP already sounds a lot better than UDP. That very first characteristic of each – TCP guaranteeing delivery of segments while UDP doesn’t guarantee it – well, that sounds like enough for me to use TCP for everything and UDP for nothing! That’s not the case in the real

world. Many vital protocols, including the protocol that handles the very important work of dynamically assigning IP addresses, use UDP. On top of that, the two most delay-sensitive types of network traffic we have – voice and video – use UDP. Why? That question has a one-word

answer, and you just might figure out that one word as we examine the overall operation of both protocols, starting with TCP and a curious process called the “three-way handshake”.

TCP and the Three-Way Handshake

You’ve probably shaken hands many times in your life, but have you ever taken part in a three-way handshake? Let’s take this one step further: How in the world can you even have a three-way handshake? Here’s how and why: With TCP, there’s some work to be done before any segments are transmitted. The involved devices have to agree on some basic

parameters before any transmissions can happen, including the Initial Sequence Number (ISN). Here’s exactly how this handshake proceeds… The initiating server sends a TCP segment with the SYN bit set. That SYN stands for SYNchronization, and the primary value being synchronized is the TCP Sequence number.

The recipient responds with a TCP segment of its own, and this one has the Acknowledgement bit set as well as the SYN bit, which is why we call this a “SYN/ACK”.

Part of that SYN/ACK is the recipient acknowledging receipt of the original SYN, and the device that sent the original SYN then needs to acknowledge the SYN/ACK in turn. That ACK is the final step of our 3-way handshake!
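By the way, you never have to build this handshake yourself – the moment you open a TCP connection, your operating system performs it for you. Here’s a minimal Python sketch (the host and port are just placeholders); run a packet capture such as Wireshark or tcpdump while it connects and you’ll see the SYN, SYN/ACK, and ACK go by.

```python
import socket

# connect() is what triggers the SYN -> SYN/ACK -> ACK exchange under the hood.
with socket.create_connection(("www.example.com", 80), timeout=5) as conn:
    print("Handshake complete - connection established with", conn.getpeername())
# Leaving the 'with' block closes the connection, which kicks off the FIN-based
# teardown we'll meet at the end of this section.
```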

Summing up:

1. The initiating server sends a SYN in an effort to synchronize TCP values with the recipient.

2. The receiving server sends a SYN/ACK back for two reasons: to acknowledge receipt of the original SYN, and to further the synchronization process.

3. The initiating server sends an ACK to let the other server know the SYN/ACK was received, and that’s the end of the handshake.

Now let’s take a look at the UDP 3-way handshake:

< crickets chirping >

That’s it, because UDP doesn’t have a 3-way handshake. Sounds like another strike against UDP. Now for another great TCP feature….

How TCP Detects and Recovers From Lost Segments

While the following is technically not error detection, it’s a very important concept of TCP – how TCP realizes segments have been lost, and how TCP recovers from lost segments. The TCP header contains separate fields for the sequence number and acknowledgement number, and it’s those two values that allow TCP to detect and recover from lost segments.

In the following example, one host is sending four segments to another. Each of the segments has a sequence number, and it’s that sequence number that tells the recipient in what order the segments should be reassembled. Here’s an illustrated look at the process, starting with S1 sending four 1500-byte segments to S2:

Once the four segments are received by S2, it would be a really good idea to let S1 know the segments arrived safely. S2 does that by sending a segment back to S1, but there will be no data in the segment. The ack number will be set, but it will not be set to the number of the last segment S2 received, as you might expect. Instead, it’ll be the number of the next data segment S2 expects to see, and with this pattern that would be 7500.

The natural assumption is that the ack would be set to 6000, but when S2 tells S1 the next segment number it expects to see, this cumulative acknowledgement scheme allows TCP to realize when segments have been lost. Let’s see what happens when one of

those four segments is lost.

S2 will send an ACK of 3000 back to S1.

When S1 sees that ACK of 3000 coming in, it knows the segment with SEQ 3000 wasn’t seen by S2, so S1 will then retransmit that particular segment. S2 will then send an ACK for that retransmitted segment indicating the NEXT sequence number it expects to see.
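Here’s a small sketch of that cumulative-acknowledgement logic, using the same four 1500-byte segments from the illustration. The numbers and the “lose one segment” step are just for demonstration.

```python
# A toy model of cumulative ACKs. Each segment is named by its starting
# sequence number, and each carries 1500 bytes of data.
SEG_SIZE = 1500

def ack_for(arrived, expected=1500):
    """The receiver ACKs the next byte it expects to see, in order."""
    for seq in sorted(arrived):
        if seq == expected:
            expected += SEG_SIZE       # got the segment we wanted - slide forward
        else:
            break                      # gap! keep asking for the missing segment
    return expected

print(ack_for([1500, 3000, 4500, 6000]))   # 7500 - all four arrived, send the next batch

arrived = [1500, 4500, 6000]               # the segment starting at 3000 was lost
print(ack_for(arrived))                    # 3000 - "send me 3000 again, please"

arrived.append(3000)                       # S1 retransmits the missing segment
print(ack_for(arrived))                    # 7500 - back in business
```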

In networking studies, as in life, one answer tends to lead to another question. In this case, that question is “How does S1 know how many segments it can send before it has to receive an ACK?” Another question: “Why doesn’t the recipient just send an ACK for every segment it receives?”

Here’s why!

Flow Control and Windowing

Let’s step back to the TCP three-way handshake. We know that during that handshake, parameters are negotiated and agreed upon, one of those being the initial sequence number (ISN). Another value negotiated during this handshake is the initial size of the window, which determines how many bytes of data the recipient is willing to take in before it has to send an ACK.

Note this is the initial size of the window. It’s vital to remember that the size of the window is dynamic, not static. As the recipient realizes it can handle that initial amount of data effectively, the recipient will indicate to the sender that it can send more data before expecting an ACK.

To reiterate a couple of important points regarding windowing:

The initial size of the window is negotiated and agreed upon during the three-way handshake.
The size of the window is expressed in bytes.
The size of the window is dynamic, and it’s changed by the recipient, not the sender.

This flow control can raise or lower the size of the window, which is why it’s also referred to as a sliding window. The recipient will lower the size of the window as it sees errors and/or dropped segments starting to creep in.

You can see why TCP flow control is such a great feature. It allows segment transmission to stay close to maximum speed while also letting the recipient slow the flow down when transmission is too fast. UDP offers no flow control.
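If it helps to see the “bytes in flight versus window size” idea written out, here’s a little sketch. The window values are invented for illustration.

```python
# A toy sender that respects the receiver's advertised window.
SEG_SIZE = 1500
window = 3000        # receiver: "send up to 3000 bytes before waiting for my ACK"
in_flight = 0        # bytes sent but not yet acknowledged
next_seq = 1500

while in_flight + SEG_SIZE <= window:      # send until the window is full
    print(f"sending segment {next_seq}")
    in_flight += SEG_SIZE
    next_seq += SEG_SIZE
print("window full - waiting for an ACK")

# An ACK arrives, and since things are going well, the receiver slides the
# window open wider. That's the "dynamic" part of windowing.
in_flight = 0
window = 6000
print(f"ACK received, window is now {window} bytes - back to sending")
```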

Right now, TCP is looking pretty darn good, and UDP isn’t. There’s one word that describes UDP’s advantage over TCP. Before we trash UDP’s reputation entirely, let’s see what that word is.

UDP’s Advantage Over TCP

Let’s compare the TCP and UDP headers. TCP’s header and data location:

UDP’s header:

Quite a difference in size, and that difference leads us to that single word that makes UDP so attractive… overhead. While TCP offers so many more

features than UDP, those features come at the cost of additional overhead. These headers are attached to every segment, and that really adds up – especially with the delay-sensitive voice and video traffic found on today’s networks. UDP is used for voice and video traffic for this very reason. The UDP header is much smaller than the TCP header, and it’s worth your time to note that the two headers have three fields in common:

Source port
Destination port
Checksum

For clarity’s sake, in the TCP header I listed a “Flags” section. That’s actually nine individual 1-bit flags, which include the ACK and SYN flags mentioned throughout this section.
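To give you a feel for just how small the UDP header is, here’s a sketch that builds one and pulls it back apart. It really is just four 16-bit fields; the port and payload values here are made up.

```python
import struct

# The entire UDP header: source port, destination port, length, checksum.
# Four 16-bit fields - eight bytes - and that's it.
src_port, dst_port, payload = 54321, 69, b"some TFTP-ish payload"
length = 8 + len(payload)        # header plus data, in bytes
checksum = 0                     # 0 means "no checksum computed" (legal over IPv4)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload

# ...and unpacking it on the far end:
s, d, l, c = struct.unpack("!HHHH", datagram[:8])
print(f"from port {s} to port {d}, {l} bytes, checksum {c:#06x}")
```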

This Just In: A UDP / TCP Similarity!

One factor UDP and TCP have in common is a little something called “multiplexing”. Before we see how multiplexing works, let’s see why we need it in the first place. You don’t need to know the ins and outs of IP addressing at this point in the course. It’s enough to know that IP addresses are used as a destination when data is sent to a network host.

So far, so good – until one host starts sending multiple flows of information to the other host. How is the recipient supposed to handle that? Let’s say the host at 10.1.1.11 is sending three different flows of data to 10.1.1.100:

A file transfer via the Trivial File Transfer Protocol
Email via the Simple Mail Transfer Protocol
Webpage data via HTTP

The server needs a way to keep the incoming flows separate so it can send the TFTP data to the application that will handle that data, SMTP data to the appropriate

application, and so on. That’s where our new best friends come in – well-known port numbers! These port numbers may not be well-known by you (yet), but they are to our network devices. I’ll have a bigger list of well-known port numbers for you later in this section, and it would be an excellent idea for you to have those down cold for your exam. Let me introduce you to three of them now:

TFTP: UDP port 69
HTTP: TCP port 80
SMTP: TCP port 25

The three data flows in our example will have the same source and destination IP addresses, but they’ll have different, pre-assigned port numbers. Those different port numbers are the reason the recipient knows how to handle the three incoming flows that all originated from the same host.

When a host receives data marked as UDP port 69, it knows that traffic should be delivered to the TFTP application, TCP port 80 traffic should go to the HTTP application, and so forth. These port numbers also allow the host to mix the three data types when sending to 10.1.1.100, rather than sending all the TFTP data first,

then the SMTP data, then the HTTP data. This mixing of data streams is multiplexing.
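Here’s a small sketch of port-based demultiplexing on the receiving host. I’m using unprivileged stand-in ports (6900 and 8000) instead of the real TFTP and HTTP ports, since binding to anything below 1024 normally requires admin rights – the idea is identical.

```python
import selectors
import socket

# Two UDP listeners on one host. Which socket a datagram lands on is decided
# purely by its destination port - that's demultiplexing in action.
sel = selectors.DefaultSelector()

def listen_on(port, label):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", port))
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ, data=label)

listen_on(6900, "TFTP-like application")
listen_on(8000, "web-like application")

print("waiting for datagrams (Ctrl-C to stop)...")
while True:
    for key, _ in sel.select():
        data, addr = key.fileobj.recvfrom(2048)
        print(f"{len(data)} bytes from {addr} -> handed to the {key.data}")
```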

When you hear the term “socket”, you might think of a physical part of a network host, but it’s actually logical. The socket is simply a combination of IP address and port number. The socket for the TFTP traffic heading for 10.1.1.100 would be expressed as

10.1.1.100:69. You’ll sometimes see a socket (try saying THAT really fast three times!) expressed in this format: IP address, transport protocol, port number Using that mode of expression, the TFTP socket on 10.1.1.100 would be (10.1.1.100, UDP, 69). I’m sure you’ll agree with me that

the port number system works nicely as long as the hosts agree on the port used for a given protocol. For example, if 10.1.1.11 decides to use UDP port 55 for TFTP, we’re in a lot of trouble. That’s why most protocols use the same port number at all times, and these port numbers are the well-known port numbers. Two bits of info for you on those:

Any port number below 1024 is a reserved, well-known port number.
You do NOT have to memorize 1024 port numbers for your exams.

I strongly recommend you memorize the port numbers listed throughout the rest of this section. You’ll find them helpful in both your networking studies and real-world network administration. They’re not just something to memorize for your exam that you’ll never use again – like the OSI model layers, they’ll become second nature to you.
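Your operating system keeps its own table of well-known ports (on Linux and macOS it lives in /etc/services), and Python will happily look entries up for you – a handy way to quiz yourself, assuming those entries are present on your system.

```python
import socket

# Quick self-quiz against the OS's own well-known port table.
for name, proto in [("ftp", "tcp"), ("ssh", "tcp"), ("telnet", "tcp"),
                    ("smtp", "tcp"), ("domain", "tcp"), ("http", "tcp"),
                    ("tftp", "udp"), ("snmp", "udp")]:
    print(f"{name:8s} {proto}  port {socket.getservbyname(name, proto)}")
```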

TCP’s Four-Way Handshake (What?)

I didn’t want to hit you with this during the TCP/UDP comparison, since we had enough going on. Now that you’ve got all of this down, I want to show you TCP’s four-way handshake. The three-way handshake we saw earlier was during the connection establishment; the four-way handshake we’re about to see is the connection termination. I’ve seen

this expressed in several different ways over the years, but in the end each device needs to send a segment with the FIN bit set and they each need to ACK the FIN they receive.

We’ll wrap this section up with a couple of well-known port numbers

it would behoove ye to know, and then we’ll move on to Ethernet!

TCP 20: File Transfer Protocol (file transfer)
TCP 21: File Transfer Protocol (control)
TCP 22: Secure Shell (SSH)
TCP 23: Telnet
TCP 25: Simple Mail Transfer Protocol
TCP and UDP 53: Domain Name System
UDP 67, 68: Dynamic Host Configuration Protocol
UDP 69: Trivial File Transfer Protocol
TCP 80: Hypertext Transfer Protocol
TCP 110: Post Office Protocol (current version: 3)
UDP 161: Simple Network Management Protocol
TCP 443: Secure Sockets Layer (“HTTPS”)
UDP 514: Syslog (short for System Logging – more on that in your ICND2 studies)
UDP 546, 547: DHCP for IP Version 6

This is a great list to get started with, but it’s hardly all of the well-known port numbers. The list on the following page is WAY beyond the scope of the CCENT and CCNA, but it’s a great reference list for future studies.

http://en.wikipedia.org/wiki/List_of_

With those in mind, let’s march on to Ethernet – and BEYOND!

You Got Your Ethernet In My Cabling! (You Got Your Cabling In My Ethernet!)

When we get to Wide Area Networks, we’ll run into quite a few different encapsulations – HDLC, Frame Relay, and PPP in particular. With Local Area Networks, whether we’re connecting a host to a switch….

… or we’re connecting two switches…

… or we’re connecting a switch to a router…

…. we’re likely using Ethernet and Ethernet cables. In this section, we’ll talk a bit about how Ethernet works, and the different types of cables we’ll use in this network. And note that I said “Cables With An S”, because not all Ethernet cables are the same! Actually, not all Ethernet types are the same, so let’s start there.

Not All Ethernets Are The Same

“Ethernet” is really an umbrella term at this point, encompassing several different types of Ethernet, different capacities, and different challenges. For both a successful exam experience and a solid networking career, it’s a great idea to be comfortable with these values. Most kinds of Ethernet cables are Unshielded Twisted-Pair (UTP). The name is the recipe – the wires inside the cable are indeed pairs of

twisted wires. Why twist ’em? Twisting pairs of wires inside the cable cuts down on electromagnetic interference (EMI). EMI can interfere with the electrical signals carried by the wires, which in turn is really going to screw around with our network. EMI can come from other cables, and also (and infamously) from elevators. I know of more than one network that would slow down at lunchtime and quitting time because

that’s when the elevators were in heavy use, and the network cables were run right next to the elevator shaft – which in turn gave our network the shaft. We can even have EMI problems from other wires in the same cable! This crosstalk happens when a signal “crosses over” from one pair of wires to another, making the signal on both sets of wires unusable. Near-end crosstalk (NEXT) occurs

when wires are crossed or crushed. The conductors inside the wires don’t have to be exposed – if the conductors are too close, the signal traveling on one wire can interfere with the signal on another wire.

Here are some common Ethernet types that run on regular old copper cabling, along with their official IEEE name and more common name. All have a maximum cable length of 100 meters.

Ethernet’s official standard is 802.3; it’s generally referred to as 10Base-T, and runs at 10 Megabits per second (Mbps).

Fast Ethernet (802.3u) is usually called 100Base-T, and runs at 100 Mbps.

Gigabit Ethernet (802.3ab) is generally called 1000Base-T, and runs at 1000 Mbps.

10 Gig Ethernet (802.3an) is NOT called 10000Base-T. It’s usually called 10GBase-T, and runs at – you guessed it! – 10 Gbps. There’s a huge difference between 10GBase-T and 10Base-T. Watch that G!

We also have a version of Gig Ethernet that runs on fiber-optic cable. That version is 802.3z, and it’s often called 1000Base-LX. This version has a max cable length of 5000 meters, as opposed to 100

meters with all the other versions we’ve seen. So with that huge max cable length, why aren’t we running everything on 802.3z? It’s the sheer cost of the fiber optic cable. It’s a lot more expensive to install and troubleshoot than copper cable.

Multiple Standards Usually Equal Multiple Nightmares

Luckily, this isn’t one of those situations. When you send an Ethernet frame from Point A to Point B, there’s a chance the frame could go across a “regular” Ethernet link, then a Gig Ethernet link, and then a Fast Ethernet link as it arrives at its destination.

If we had to do some kind of translation every time a frame went from one Ethernet type to the other, we’d be doing a lot of translations and adding big time to our transmission time and overall network workload. Fear not – we don’t have to do anything like that. All of our different Ethernet standards have the same overall frame format:

Header, data, trailer. That’s it!

There’s another tool that allows us to seamlessly use network and host devices with different Ethernet capabilities – autonegotiation. Autonegotiation didn’t work all that well years ago, and it got to the point where most network admins manually set card and port settings. Cisco went so far as to make it a best practice NOT to use autonegotiation. I mention this because I don’t want you more-experienced network

admins glossing over this section, thinking “Hey, autonegotiation doesn’t work.” Autonegotiation has come so far since the bad old days that it’s actually mandatory for Gig Ethernet over copper, and it’s an important part of the overall Gig Ethernet standards. Having said that, let’s see how autonegotiation works! In this example, we have a host device connected to a Cisco switch port. The host is running 10Base-T,

and the switch port has a top capability of 1000Base-T.

The devices announce their capabilities via Fast Link Pulse (FLP). The logical question: “Fast as compared to what?” Fast as compared to the Normal Link Pulse (NLP)! Basically, the NLP is sent by an Ethernet device

when it has no data frames to send – it’s saying “Hello, I’m still here!” Here’s the NLP, compliments of Wikipedia:

Here’s the Fast Link Pulse, which autonegotiation-enabled devices use to announce their capabilities:

After the devices exchange this information…

… they come to an agreement on the values to use. As you’d expect, the

best speed both devices support – which works out to the slower device’s top speed – is the one selected, and full-duplex is preferred over half-duplex. In this situation, the PC’s port and the switch port it’s connected to would run at 10 Mbps, full duplex. Autonegotiation will dynamically adjust if a port’s capacity changes. Let’s say you replace that PC with a PC that can run at Fast Ethernet speed.

If we manually set all of our switch port settings, we’d have to change the speed on that port manually. With autonegotiation, the switch will realize the new capacity of the device connected to that port, and voila – a 100 Mbps link! I highly recommend you use autonegotiation on both ends of a link such as this one, or don’t use it at all. You can end up with a link that isn’t working at its real capacity due to a duplex mismatch – a link where one endpoint is running at half-duplex and the other

end is running at full-duplex. When using Cisco switches, if autonegotiation is turned off on the other end of the link, the switch should still be able to sense the speed capacity of the other endpoint. If for some reason the speed capacity can’t be detected, the lowest speed supported will be used. That probably doesn’t surprise you, but this might. If that detected speed is less than or equal to 100 Mbps,

the switch will set that port to half-duplex. Hello, duplex mismatch! In this example, the Cisco switch has successfully detected the capacity of the remote endpoint to be 100 Mbps. No problem there, but a problem does arise when the Cisco switch sets the port connected to that host to half-duplex as a result of that detected speed.

Duplex mismatches have a special place in Network Heck, because they can be difficult to spot. The two devices will still be able to exchange data, but it’s going to be a slow, inefficient process. Run autonegotiation on both ends or don’t run it at all.

Crossover and Straight-Through Cables

We’re going to use a simple network for this demo, and the two separate physical connections will require two different cable types.

A PC’s network card sends on pins 1 and 2, and a switch sends on pins 3 and 6. In turn, the PC receives on pins 3 and 6, and the switch receives on pins 1 and 2. This means we can use a straight-through cable to connect the PC to the switch. The cable name comes from the wires inside the cable. Assuming we’re using Ethernet or Fast Ethernet for this connection (a safe assumption), we’re going to have four wires in use inside the cable, and each wire goes straight through from one end to the other.

What exactly does “straight through” mean in this situation? The wire connected to Pin 1 at one end goes straight through to Pin 1 at the other end, the wire on Pin 2 goes straight through to Pin 2 at the other end, and the wires on Pins 3 and 6 go straight through to those pins on the other end. If you enjoy making your own cables and you run into a connection issue right away, I can practically guarantee the problem is that one of those wires in your straight-through cable is crossing

over to another pin. Gigabit Ethernet can use straight-through cables as well, but to carry data that quickly, it follows that we’ll need more wires. Where Ethernet and Fast Ethernet use four wires in the cable, Gigabit Ethernet uses all eight. In a Gigabit straight-through cable, one wire goes from Pin 1 to Pin 1, one wire from Pin 2 to Pin 2, and so forth for all eight pins. Wires crossing over inside the cable isn’t always bad. Sometimes

we want those wires to cross over in the cable – hence the name “crossover cable”, our next cable type! Crossover cables are necessary when we’re connecting two devices of the same type, and in a typical network, that’s going to be two switches. When we tackle switching in this course, you’ll see why interconnecting switches is so common. Our first step in this interconnection is choosing the right cable!

We can’t use a straight-through cable for a switch-to-switch connection, since both switches send and receive on the same pins. We’d have the same pins sending data on both ends (a bad idea) and the receive pins on each end listening for data that will never arrive (sad!). Communication between the switches is made possible with a crossover cable. The four wires inside an Ethernet or Fast Ethernet crossover cable “cross over” from one pin to another:

Wire on Pin 1 crosses over to Pin 3
Wire on Pin 2 crosses over to Pin 6
Wire on Pin 3 crosses over to Pin 1
Wire on Pin 6 crosses over to Pin 2

With this setup, when a switch sends data on the two pins it uses to send (Pins 3 and 6), the switch on the other end of the cable will receive that data on the pins it uses to receive (Pins 1 and 2).

Gigabit Ethernet crossover cables have those same wires cross over, in addition to the following:

Wire on Pin 4 crosses over to Pin 7
Wire on Pin 5 crosses over to Pin 8
Wire on Pin 7 crosses over to Pin 4
Wire on Pin 8 crosses over to Pin 5

Now it’s time for a little “real world vs. theory” chat.

After reading that cabling section, some of you are saying “Hey, I used a straight-through cable to connect two switches with no trouble.” And you’re right – you just might have. Most Cisco switches will recognize what you’re trying to do when you connect them to each other with a straight-through cable, and the switch will dynamically adjust itself to make the straight-through cable work. Pretty cool!

When it comes to your CCENT and CCNA tests, though, you need to forget about that. Be clear on when you’d use a straight-through cable as opposed to a crossover cable:

Devices transmit on the same pins = crossover cable
Devices transmit on different pins = straight-through cable
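If seeing that pin logic written out as code helps it stick, here’s a tiny sketch that takes a 10/100 wiring map and checks whether a given connection will actually work. The dictionaries are just the pinouts we walked through above.

```python
# Pin maps for 10/100 Ethernet cables: {pin at end A: pin at end B}
straight_through = {1: 1, 2: 2, 3: 3, 6: 6}
crossover        = {1: 3, 2: 6, 3: 1, 6: 2}

PC_SENDS     = {1, 2}   # a PC's NIC transmits on pins 1 and 2
SWITCH_SENDS = {3, 6}   # a switch transmits on pins 3 and 6

def link_works(cable, sender_pins, receiver_listens_on):
    """The link works if everything the sender puts on the wire lands on the
    pins the far end is actually listening to."""
    return {cable[p] for p in sender_pins} == receiver_listens_on

# PC to switch: they transmit on different pins, so straight-through is fine.
print(link_works(straight_through, PC_SENDS, receiver_listens_on={1, 2}))      # True

# Switch to switch: both ends transmit on the same pins, so we need a crossover.
print(link_works(straight_through, SWITCH_SENDS, receiver_listens_on={1, 2}))  # False
print(link_works(crossover,        SWITCH_SENDS, receiver_listens_on={1, 2}))  # True
```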

Both straight-through and crossover cables end with RJ-45 connectors, which “snap” right into place when connected to a PC NIC or switch / router Ethernet port.

Five Names, One Address

Several protocols and services you’ll be introduced to in this course have more than one name, and we’ll start that tradition with the next topic, known by all of these names:

MAC address (Media Access Control)
Physical address (because the address physically exists on the network card)
Layer 2 address
Burned-in address (BIA – the name comes from the address being physically burned into the NIC)
Ethernet address

That’s nothing! We used to have seven names for this address, but the terms “NIC address” and “LAN address” have pretty much fallen by the wayside. Throughout the courses, I’ll use the term “MAC address”, but you should be familiar with all the names listed

here. The MAC address is used by switches to send frames to the proper destination in the most efficient manner possible, a process you’ll be introduced to in the Switching section. Before we see how that works, I want to introduce you to the address format and the characters we’ll see in this address. The MAC address is six bytes long (48 bits), and can be expressed in either of these formats:

aa-bb-cc-11-22-33
aabb.cc11.2233
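Here’s a quick sketch that converts between those two notations – good practice for getting comfortable with the characters that make up a MAC address. (The dotted format is the one you’ll see Cisco devices display.)

```python
def to_dotted(mac: str) -> str:
    """aa-bb-cc-11-22-33 -> aabb.cc11.2233"""
    digits = mac.replace("-", "").replace(":", "").lower()   # 12 hex characters
    return ".".join(digits[i:i + 4] for i in range(0, 12, 4))

def to_dashed(mac: str) -> str:
    """aabb.cc11.2233 -> aa-bb-cc-11-22-33"""
    digits = mac.replace(".", "").lower()
    return "-".join(digits[i:i + 2] for i in range(0, 12, 2))

print(to_dotted("aa-bb-cc-11-22-33"))   # aabb.cc11.2233
print(to_dashed("aabb.cc11.2233"))      # aa-bb-cc-11-22-33
```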

MAC addresses consist of hexadecimal values, and if that phrase gives you anxiety, fuggetaboutit. By the time this section is over and you get some practice in, you’ll be working with hex like a champ – or more accurately, like a CCENT and CCNA! You’ll hear me say this throughout the course, and I’ll start now: The

key to mastering hex, binary, and subnetting is practice. Reading about it is not enough! The MAC address has two parts, the first being the Organizationally Unique Identifier (OUI, not pronounced like the French word for “yes”, but Oh-You-Eye). The OUI is assigned to hardware vendors by the Institute of Electrical and Electronics Engineers (IEEE). The name is the recipe – the OUI is

unique to that organization and is not assigned to another org. The second half of the MAC address is simply a value not previously used by the hardware vendor with that particular OUI. Using the earlier MAC address example, we see that…

The OUI of the address is aa-bb-cc
The vendor has not yet used

11-22-33 with that particular OUI, so the vendor is doing so now

There’s a special MAC address for broadcast frames, and as we get to that topic, let’s take a look at the three overall types of network traffic.

Unicast traffic – Destined for one particular host
Multicast traffic – Destined

for a group of hosts
Broadcast traffic – Destined for everybody

You can spot broadcast and multicast MAC addresses by using the following rules:

The broadcast MAC address is ff-ff-ff-ff-ff-ff (or FF-FF-FF-FF-FF-FF, as case doesn’t matter in hex).

We have a range of multicast MAC addresses. The first half of a multicast MAC address will always be 01-00-5e. The second half will fall in the range 00-00-00 through 7f-ff-ff. Watch that 7!

Remember that Ethernet header and trailer I mentioned briefly? No? Well, I don’t blame you, it was a fast mention. Let’s take a more detailed look at both the header and trailer.

The Ethernet Header And Trailer

Here’s a high-level look at the overall Ethernet frame:

A detailed look at the header:

From left to right, a quick look at each field:

The preamble is there for synchronization purposes. The nuts and bolts of this field are (thankfully) way beyond the scope of the CCENT and CCNA exams. If you’d like to read more about this field, check out the Wikipedia entry for Ethernet. The Start Frame Delimiter (SFD) indicates the preamble has ended and the destination MAC address is on deck.

Both the destination and source addresses are MAC addresses. Finally, the type (EtherType) field indicates the protocol type carried in the data field. In today’s networks, that’s likely IPv4 or IPv6, but it can be plenty of other protocols. Here’s a detailed look at the Ethernet trailer contents:

That’s it! Considering the FCS is the Ethernet caboose, it’s easy to think there’s not much going on there, but the FCS is a vital error detection tool. It’s basically a three-step process:

The sender runs an algorithm against the contents of the frame; the result is the checksum, which the sender places in the FCS field. The receiver runs the exact same algorithm against the same contents, and expects to come up with the same checksum contained in the FCS field of the incoming frame.

If the results are the same, the frame is fine. If the results are not the same, something happened to the frame contents as the frame went across the wire, and the frame is dumped. There is no explicit notification from the receiver to sender that the frame was discarded. The FCS brings us error detection, but not correction. But Wait – There’s More!

We saw two common network connections earlier (PC to switch, switch to switch), and there’s another one I want to introduce you to – connecting your laptop or PC directly to a switch or router in order to configure it. For that physical connection, you’ll need yet another type of cable.

When you physically connect your laptop to a router or switch, you’ll

be connecting to the Console port on the network device. For this, you’ll need a rollover cable, also called a console cable. There are 8 wires inside the rollover cable, and they each roll over to a different pin at the other end: Pin 1 to Pin 8, Pin 2 to Pin 7, Pin 3 to Pin 6, and so on. One end of the console cable will have an RJ-45 connector, similar to the connector on the end of a landline phone wire. You’ll feel (and maybe hear) that end of the cable snap into the Console port.

It’s the other end of the console cable you need to be aware of. Some console cables have a DB-9 connector on one end, and modern laptops don’t have a DB-9 port. If that’s your situation, get an adapter for your cable – you can find them online at any major cable dealer. (And even most minor ones!) The need for an adapter is a good thing to find out before you visit a client site. Rollover cables are easy to spot,

since they’re almost totally flat and usually colored light blue. Here’s a link to a page on PacketByte.com that shows you a console cable, along with two other cable types that you’ll find in any great Cisco home lab!

http://packetbyte.com/Content/Cablin

Next up – hubs and repeaters. You might not see many of them in today’s networks, but you need to

understand how they work in order to really grasp switch operation – and you’ll see hubs, repeaters, and switches on your exam, so let’s hit it!

Hubs, Repeaters, And A Little More Ethernet

Cisco switches operate at the Data Link layer, but to fully understand and appreciate how switches operate (and to be fully prepared for the CCENT and CCNA exams), we need to look at the devices that preceded switches. These devices are still in some networks, but for reasons you’re about to discover, repeaters and

hubs aren’t terribly popular. If you never work with either in a production network, to be blunt, you’re not really missing anything. Having said that, you might miss out on some vital exam points if you’re not familiar with them, so let’s dive right in!

Repeaters And Hubs

Both repeaters and hubs are Layer 1 devices. So with a repeater, what exactly are we repeating, and why? When you’re listening to a radio station (non-satellite, that is), you know how the signal starts gradually breaking up as you get farther and farther away from the station? That gradual weakening of

the signal is called attenuation, and attenuation happens to any electrical signal, including the ones and zeroes we’re sending across the wire. Let’s say we have two hosts 175 meters apart, and the maximum effective cable length is 100 meters. Obviously, that’s going to be a problem, since the signal will be strong when it leaves the transmitting host, but weak or practically nonexistent when it arrives at the other host.

That’s the problem repeaters helped us resolve. A repeater takes an incoming signal and then generates a new, clean copy of that exact signal.

A hub operates in the same fashion, but the hub has more ports. That’s pretty much the only difference between the two. Some hubs have greater capability than others, but a basic hub is simply a multiport repeater. These hubs and repeaters seem pretty sweet, right? Why would we ever need anything more complicated and expensive than that? To answer that question, let’s see

what happens when a hub is in the middle of a simple little four-PC network.

Using a hub here means that only one PC can send data at a time, because what we have here is one

giant collision domain, meaning that data one host sends can collide with data sent by another host. The result: all data involved in the collision is unusable. To deal with this, hosts on a shared Ethernet segment such as this will use Carrier Sense Multiple Access with Collision Detection, thankfully referred to as CSMA/CD. Here’s the CSMA/CD process:

A host that wants to send data will first listen to the wire, meaning that it checks the shared media to see if it’s in use.

If the media is in use, the host backs off for a few milliseconds before listening to the wire again.

If the media is not in use, the host sends the signal.

If two PCs happen to send data at the exact same time, the voltage on the wire will change, indicating to the hosts that there has been a data collision.

In that situation, the PCs that sent the data will generate a jam signal, which indicates to the other hosts on the shared media that they shouldn’t send data right now.

The PCs that sent the data will then each invoke a backoff timer, set to a random number of milliseconds. When each host’s timer expires, it begins the CSMA/CD process again from the beginning, where it listens to the wire. Since the backoff timer value is totally random, it’s unlikely the two hosts will have the same problem again.

This entire process happens pretty quickly, but we’d prefer to have no collisions at all, since collisions slow down the network. With the ultra-delay-sensitive voice and

video traffic today’s networks have to handle, delays due to collisions and retransmissions are totally unacceptable – just another reason you won’t see many repeaters and hubs in today’s production networks. There’s another big issue with hubs and repeaters, this one having to do with broadcasts. Just as the current network is a single collision domain, it’s also a single broadcast domain. Every

single time one of those PCs on that hub sends a broadcast, every other PC on the hub is going to receive a copy of that broadcast, even if that PC doesn’t need or want the broadcast.
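Before we pile on with the broadcast problem, here’s a rough Python sketch of the listen-and-back-off behavior we just walked through. It’s a simplification (real CSMA/CD deals in microseconds and slot times, and the wire-sensing here is faked with random numbers), but the flow is the same.

```python
import random
import time

# A rough simulation of one host's CSMA/CD behavior on shared media.
# wire_busy() and collision_detected() stand in for the NIC sensing the wire.

def wire_busy() -> bool:
    return random.random() < 0.3           # pretend the wire is busy 30% of the time

def collision_detected() -> bool:
    return random.random() < 0.2           # pretend 20% of transmissions collide

def send_frame(frame: str) -> None:
    while True:
        while wire_busy():                 # 1. listen to the wire first
            time.sleep(0.005)              #    busy? back off briefly, then listen again
        print(f"wire is idle - transmitting {frame}")
        if not collision_detected():       # 2. transmit and watch for a collision
            print("no collision - frame sent")
            return
        print("collision! sending jam signal, starting random backoff")
        time.sleep(random.randint(1, 10) / 1000)   # 3. random backoff, then start over

send_frame("frame-001")
```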

I want to drive this into your brain right now: Everything we do on a router or switch has a cost. I don’t mean a financial cost. I mean a cost in time, a cost to our available processor power, and/or a cost to our available bandwidth. Let’s say there are 64 PCs attached to that hub, and only 3 of the other PCs ever need to get a copy of the

broadcast sent by that particular PC. That leads to several unnecessary costs on our network:

The hub has to create and send 60 unnecessary copies of the data.
Each PC that doesn’t need the broadcast still has to take the time to examine the incoming broadcast, because it doesn’t know it doesn’t need it until it does just that.
The bandwidth required to send those 60 unnecessary copies adds up.

It’s a great idea to limit the scope of our broadcasts – in other words, to limit the transmission of broadcasts to the hosts that actually need them. That’s a topic we’ll come back to quite a bit in the next section of the course.

Right now, it’s enough to see that between the possible data collisions and unnecessary propagation of broadcasts, hubs and repeaters have serious limitations in today’s networks. Two devices that can help us with these collisions and broadcasts are bridges and switches, and we’ll cover those thoroughly in the next section of the course! Take a moment to join over 28,000 students in my free and almost-free

Video Boot Camps on Udemy! There’s something for everyone, and there’s a discount code on the opening page for every paid course! My students have made me the #1 individual instructor on Udemy out of over 8000 teachers, and these courses are the reason why! Some free, some paid, all great! See you there!

https://www.udemy.com/u/chrisbryan

Switching Fundamentals And Security (Or, “I’d Rather Fight Than Not Switch!”)

There was one more step between hubs / repeaters and the move to switches, and it was a giant step forward. The introduction of bridges meant we could create smaller collision

domains, which in turn resulted in fewer collisions. Sounds great, and it was, but bridges didn’t necessarily replace our hubs and repeaters. Bridges would typically be placed between multiple repeaters and hubs, as shown in the next illustration.

Having more collision domains may sound like a bad thing at first, but the more collision domains you have, the fewer overall collisions. In this network, we still have the potential for collisions, but by

logically segmenting the network with a bridge, the chance of collisions is lessened. Bridges don’t affect broadcasts. When any host on this network sends a broadcast, every other host on the network will receive a copy. It’s unlikely every other host on the network actually needs that broadcast, so we’re very interested in limiting the scope of broadcasts.

Bridges were definitely a step up from hubs and repeaters, but we needed more help with collisions and any help we could get with broadcasts. We got all of that and more with the introduction of

switches. Let’s replace the hubs, repeaters, and bridge in our network with a single switch.

This is the universal representation of a switch, and if you’re shown

one on an exam, you’ll be expected to know it’s a switch without Cisco telling you. Same goes for a hub and bridge. Here’s the full lineup:

By connecting each host to a separate switch port, we create a collision domain for each host. Collisions literally cannot occur! Where we had one collision domain with a hub or repeater, we now have four separate collision

domains on our network by using a switch.

In addition to eliminating collisions, each host will now have

far more bandwidth available. When hosts are connected to individual switch ports, they no longer have to share bandwidth with other hosts. With the correct switch configuration and network cards, each host can theoretically run at 200 Mbps (100 Mbps sending, 100 Mbps receiving). Cisco switch default settings are great in many cases, but one thing Cisco switches do not do by default is break up broadcast domains. Each host connected to that switch is its own collision domain, but

they’re still all in one big broadcast domain. We know what that means – lots of unnecessary broadcasts!

This is one default setting you’ll want to change, and we’ll see how to do that later in this very section.

Before we move forward, let’s review the key concepts of hubs vs. switches. With hubs, we have one big collision domain consisting of all connected hosts. With switches, each individual switch port is its own little collision domain. Hubs allow only one device to transmit at a time, resulting in shared bandwidth. Switches allow hosts to

transmit simultaneously. When one host connected to a hub sends a broadcast, every other host on the hub receives that broadcast and we have no way around that. Switches have the same behavior by default, but we CAN do something about it, and we will do just that in the VLAN section of this course. Microsegmentation is a term occasionally used in Cisco documentation to describe the “one

host, one collision domain” segmentation performed by switches. It's not a term I hear terribly often today, but you might see it in Cisco docs – or Cisco exams.

The Frame Forwarding Decision-Making Process

A Cisco switch will do one of three things with an incoming frame:

Forward it
Filter it
Flood it

The entire decision-making process is pretty simple. Having said that, there's a lot of information in this section. Just take one of these three processes at a time and you'll have this all mastered. There's one little oddity I want to introduce to you, and this one-question practice exam will do just that:

When a frame enters a switch, which of the following does the switch look at first?

A. The source MAC address of the frame
B. The destination MAC address of the frame

It makes perfect sense that the switch would look at the destination address of the frame first. After all, the switch wants to get that frame to the right destination, and what better way to do that than to look at

the destination address first, right? Wrong! The switch will look at the destination MAC of the frame after it looks at the source MAC address. The logical question at this point is “Why does the switch even care where the frame came from?” The answer: “Because source

addresses are how the switch builds and maintains its dynamic MAC address table.” That’s not the only reason for this behavior, as you’ll see in this section, but it’s the major reason. When we work with dynamic routing protocols in this course, you’ll see EIGRP and OSPF used to build a routing table. The switches have no equivalent to those routing protocols, so the switches have to build their MAC tables another

way. We could build a MAC address table with static entries, but that approach has serious drawbacks: Every time you add a host to the switch, you’d have to remember to make a static MAC address entry for that host, and that’s really easy to forget – and even easier to mistype. If a port goes down and you switch the host connected to the bad port to a good port, you won’t have full

connectivity until you add a new static entry for that host’s MAC address. It’s easy in the heat of battle to forget to remove the old entry, leading to future hassles when someone connects to that port. If I have a choice between letting the hardware do the work and me doing the work, I’ll let the hardware do it every time. That doesn’t make me lazy, it makes me smart. It’s much more efficient to let the hardware carry out dynamic operations than forcing the admins to handle everything statically.
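Just so you can see what you'd be signing up for, a single static entry looks something like this. The MAC address, VLAN, and port here are strictly example values, and some older switches want the command hyphenated as mac-address-table:

SW1(config)#mac address-table static aaaa.aaaa.aaaa vlan 1 interface fastethernet 0/1

Now picture typing that for every host on a 48-port switch, and then again every time a host moves. Dynamic learning it is!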

Let’s take a look at how a switch builds that all-important table, and we’ll see each of those frame forwarding options in action. We’ll start with four hosts and one switch, using a very odd topology in part of our network to illustrate one of those forwarding options. Note that Hosts A and B are connected to a hub, which in turn is connected to a switch. Each host will use its letter 12 times to make up its MAC address.

We’ll assume the switch has just been added to the network, which brings up another important point. When you first power a switch on, there will be a few entries in the

MAC address table for the CPU. It'll look something like this:

SW1#show mac address-table
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       ----        -----
 All    0008.7de9.9800    STATIC      CPU
 All    0100.0ccc.cccc    STATIC      CPU
 All    0100.0ccc.cccd    STATIC      CPU
 All    0100.0cdd.dddd    STATIC      CPU
Total Mac Addresses for this criterion: 4

The only way the switch knows where the hosts are is for you and me

to add a bunch of static entries (bad) or let the switch learn the addresses dynamically (good). By the way, the MAC address table is also known as the switching table, the Content Addressable Memory (CAM) table, and the bridging table (even though it’s not on a bridge). Let’s begin with Host A sending a frame to Host C.

The frame enters the switch on port 0/1, and the switch looks at the source MAC address of the frame and asks itself one question: "Do I have an entry for this MAC address in my table?" There's no grey area. There either is an address or there isn't. In this case, since we just turned the switch on, there's no entry for aa-aa-aa-aa-aa-aa in the MAC table. The switch makes the entry, and our MAC table now looks like this:

SW1#show mac address-table
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       ----        -----
 All    0008.7de9.9800    STATIC      CPU
 All    0100.0ccc.cccc    STATIC      CPU
 All    0100.0ccc.cccd    STATIC      CPU
 All    0100.0cdd.dddd    STATIC      CPU
   1    aaaa.aaaa.aaaa    DYNAMIC     Fa0/1
Total Mac Addresses for this criterion: 5

Note the CPU entries are static, but the Host A entry is dynamic. At long last, we get to the frame forwarding decision! Here are our

choices again:

Forward
Filter (drop)
Flood

The switch now examines the destination MAC address of cc-cc-cc-cc-cc-cc and asks itself another simple question: "Do I have an entry for this destination MAC address in my MAC table?"

When the answer is “no”, as it is here, the switch floods the frame. A copy of that frame will be sent out every single port on the switch except the port the frame came in on. This particular type of frame is an unknown unicast frame, since the frame is a unicast (destined for one particular host), but the port that leads directly to that destination is unknown.

Basically, the switch is saying “I have no idea where this destination host is, so I’ll just make sure it gets where it should by sending a copy out EVERY port except the one it

came in on.” Flooding ensures the frame will get where it needs to go, and it also guarantees the other hosts in this LAN will get the frame, and that's a huge waste of bandwidth and switch resources. Flooding frames doesn't seem like such a big deal, but we really want to limit flooding due to the costs we talked about earlier. If this is a 64-port switch, and we have a host on every port, that means the switch has to send 63 copies of any flooded frame, 62 of which are

unnecessary. Nothing wrong with flooding frames as you add a host or a switch to a network – it really can’t be avoided – but after that, we’d rather not have a lot of flooding. And as you’re about to see, we won’t. Host C will now respond to Host A with a frame of its own.

The switch begins the process by checking out the source MAC address of that frame. Will there be an entry for cc-cc-cc-cc-cc-cc?

SW1#show mac address-table dynamic
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       ----        -----
   1    aaaa.aaaa.aaaa    DYNAMIC     Fa0/1
Total Mac Addresses for this criterion: 1

Nope! In that case, the switch creates one:

SW1#show mac address-table dynamic
          Mac Address Table
-------------------------------------------
Vlan    Mac Address       Type        Ports
----    -----------       ----        -----
   1    aaaa.aaaa.aaaa    DYNAMIC     Fa0/1
   1    cccc.cccc.cccc    DYNAMIC     Fa0/2
Total Mac Addresses for this criterion: 2

Now we’ll start to see the dynamic entries in that table work for us. The switch now checks the MAC table for the destination address of aa-aa-aa-aa-aa-aa, and there is an entry, showing that host to be found off port 0/1. The address is known, so there’s no reason to flood the frame. The frame will only be forwarded out port 0/1. Much more efficient than flooding, and easier on our network!

If Host A responds to Host C, the switch will have an entry for Host C’s MAC address where it didn’t have one earlier. Frames from Host A to Host C will now be sent directly out port 0/2, rather than being flooded.

Let’s jump ahead to another scenario, where the topology is the same and the switch has a dynamic MAC entry for each host. Please note this is not a topology you’re going to see very often in production networks, if at all. It’s strictly presented here to illustrate the third option for the switch’s frame forwarding process. On this particular network, we have

an unusual setup where Hosts A and B are connected to a hub that in turn is connected to a switch. To the switch, both hosts are found off port 0/1, and that leads us to our third possibility for incoming frames. When Host A sends a frame to Host B, Host B will get a copy of it through the hub, as will the switch. The switch checks the source MAC address, sees an entry, checks the destination MAC, sees an entry … and then realizes both the source and destination are found off the same port!

On the (very) rare occasion this happens, the switch will filter the frame, which is a fancy way of saying the switch will simply drop the frame.

Let's review these three switching decisions and when the switch uses each one:

Flooding happens when the switch has no entry for the frame's destination MAC address. When a frame is flooded, a copy of it is sent out every port on the switch except the one it came in on. Unknown unicast frames are always flooded.

Forwarding happens when the switch has an entry for the frame's destination MAC address. When a frame is forwarded, it's sent out only the port indicated by the MAC address table.

Filtering happens when the source and destination MAC addresses are known to the switch and are found off the same port. Technically, filtering also occurs when a frame is not sent out a port because the destination is a known unicast. Filtering = not sending a frame out a port.

You’re probably tired of hearing the phrase “except the port it came in on”, but it is important. We don’t get to say “never” in networking very often, but this is one of those times: Switches never send a frame back out the same port the frame came in on. In addition to flooded frames, there’s another type of frame sent out every port except its arrival port, and that’s a broadcast frame.

Broadcast frames are intended for all hosts, and have a destination MAC address of ff-ff-ff-ff-ff-ff (also expressed as FF-FF-FF-FF-FF-FF. Case doesn't matter in hex addresses).

Those Dynamically Learned MAC Addresses (Again)

When I was going on about dynamically learned MAC addresses earlier, I'm sure this question entered your mind:

"Just how long do those dynamic addresses STAY in the table?" Great question! The default aging time for dynamically learned MAC addresses is 5 minutes, and that timer is reset when a frame comes in with that particular source MAC address. In short, as long as the switch hears from a host within any given five-minute period, the host's MAC address stays in the table.
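Want to see that timer on your own switch? There's a quick show command for it. The exact layout of the output varies a little from platform to platform, so consider this a sketch:

SW1#show mac address-table aging-time
Global Aging Time:    300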

As always with time-based IOS commands, be sure to use IOS Help to check the unit of time that particular command uses. For example, if I asked you to set that MAC address aging time to 10 minutes, and you already knew the IOS command to change that value is mac address-table aging-time, you might be tempted to enter the following command:

mac address-table aging-time 10

And that would be wrong. Really wrong, because IOS Help shows us the time unit for this particular command is seconds, which means the proper command is….

SW1(config)#mac-address-table aging-time ?
  0             Enter 0 to disable aging
  <10-1000000>  Aging time in seconds

SW1(config)#mac-address-table aging-time 600

I strongly recommend you use IOS Help to check any numeric value. Time-related commands use different combinations of seconds, minutes, hours, days, etc. - you get the idea. Data-based commands use

megabits, kilobits, and other bit-related measurements. In short: Use IOS Help! That's why it's there! Hopping down from my soapbox, let's continue… Switches dynamically adapt their MAC address table when there's a physical change to that switch's connections, and this saves you a LOT of work and hassle. Take this

switch, where Host D is connected to port 0/3.

As sometimes happens, port 0/3 goes down, and we need to get the host connected to that switch back on the network ASAP. (I guarantee when a port goes

down, the user connected to that port will be a ridiculously highranking person in your company.) With dynamic MAC entries, all we have to do is plug the cable into an unused port, and we’re all set. As soon as frames come in from Host D onto the new port, the switch realizes this and does two things: Removes the Host D-to-port 0/3 mapping from its MAC table Adds the Host D-to-port 0/4 mapping to its MAC table

If we relied on static entries, we’d have to do that removal and addition manually. Remember, tis not lazy to let the switch do the work of maintaining the MAC table – tis smart! Also, the more manual entries you create, the higher the odds of

mistyping an entry, which leads to more troubleshooting and more time spent. In addition to the frame forwarding decision, the switch has to make a decision on how to process the frames. This is something you and I as network admins aren’t going to spend time configuring, but as future CCENTs and CCNAs, we need to know how each processing method works. Let’s get to it!

The Frame Processing Methods

They're short and simple:

Store-and-forward
Cut-through
Fragment-free

Just about everything in networking has pluses and minuses, and that's true with each of these methods. Store-and-forward is just what it sounds like. The entire frame is

stored by the switch, and then forwarded. We need to keep our eye on two particular values in the frame -- the Frame Check Sequence and the destination MAC address.

The FCS allows the recipient to determine if the data was corrupted during transmission, and when store-and-forward is in use, the storage of the entire frame allows the switch to check the FCS before actually forwarding the frame. This allows the greatest level of error detection of our three processing methods – and we love error detection! With cut-through switching, the switch reads the MAC addresses on the incoming frame and then begins to forward the frame even as part of

it is still being received. The FCS is not checked, so there is zero error detection. The middle ground between store-and-forward and cut-through is occupied by fragment-free processing. This processing makes the assumption that if there's corruption in the frame, it's in the first 64 bytes. As you'd guess, that's exactly where fragment-free processing checks for problems! If no corruption is found in the first 64 bytes, the rest of the frame is assumed to be free of errors, and

the forwarding process begins. Comparing these three methods is like comparing TCP and UDP. Everything about one particular method sounds great – store-and-forward, in this case. If this method gives us total error detection, and the others do not, what's the tradeoff? Why not always use store-and-forward processing? The tradeoff is time. Store-and-forward is the slowest of the three methods, with cut-through being the

fastest and fragment-free right in the middle. Let's review the ups and downs of these methods:

Store-and-forward: Best error detection. Slowest of the three.
Cut-through: No error detection. Fastest of the three.
Fragment-free: The middle ground

in both level of error detection and time. It’s time to head for the wonderful world of Virtual LANs! Don’t let the word “virtual” intimidate you. VLANs are simple to create and easy to work with. And they’ll be all over your production networks along with your CCENT and CCNA exams, so let’s hit ‘em!

Virtual LANs (VLANs)

One major reason for creating VLANs goes back to a default switch behavior. We'll quickly review that particular behavior before we move on to VLAN creation. We have two default switch behaviors that combine to give us a bit of a headache:

All hosts connected to the switch are on the same physical Local Area Network (LAN).

When a switch receives a broadcast, it sends a copy of that broadcast out every port except the one the broadcast came in on.

No big deal with a four-host network, but we don’t have many

four-host networks in this world or the planet Zeutron. If that’s a 64-port switch, 63 hosts are going to get a copy of a broadcast sent by any host on that switch, and it’s unlikely all 63 hosts need it. There’s a lot of wasted bandwidth and wasted network resources right there. As more and more hosts are added to the network, you end up with more and more broadcasts, and soon the switch is so busy handling

broadcasts that it can’t carry out basic switching functions in an efficient manner, and the network ends up coming to a standstill. This continual, gradual increase in broadcasts is a broadcast storm, and it can sink your network. A broadcast storm isn’t something that strikes out of nowhere. It’s a gradual degradation of the switch’s capability to handle its basic functions. Limiting the scope of broadcasts in our network is a big

step toward preventing broadcast storms. To illustrate exactly how Virtual LANs help us do just that, I’ll assign an IP address to each of our hosts, and then we’ll run some show commands on our live Cisco switches!

We’re going to use pings throughout this course to test connectivity. If you haven’t seen a ping before, it’s a fundamental connectivity test where a series of packets is sent to the remote address we specify. If the packets leave and packets come

back from that pinged address, the test is passed! Much more on pings throughout the course. We’ll send pings from 172.34.34.4 to the other three hosts. I’m using Cisco routers for our hosts, so these pings will look different than any you’ve run on a PC.

Host4#ping 172.34.34.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.34.34.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

Host4#ping 172.34.34.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.34.34.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

Host4#ping 172.34.34.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.34.34.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

I hate sentences that end with five exclamation points, but I love pings that do so. When you see that, you know you have IP connectivity to the remote host you pinged.

I did go to each host and send pings to each of the other hosts, and we do have connectivity all the way around. With that connectivity verified, let’s have our first look at a Cisco switch configuration. I occasionally have a student tell me their Cisco network doesn’t use VLANs, but that just means they haven’t configured additional VLANs. By default, VLAN 1 already exists on Cisco switches, and all ports are members of that VLAN. VLAN 1 is so important, you can’t even delete it from the

switch. A key command to view and verify VLAN configuration and operation is show vlan brief, so we’ll start with that one. SW1#show vlan brief

VLAN Name                Status    Ports
---- ------------------- --------- ------------------------------
1    default             active    Fa0/1, Fa0/2, Fa0/3, Fa0/4
                                   Fa0/5, Fa0/6, Fa0/7, Fa0/8
                                   Fa0/9, Fa0/10
1002 fddi-default        act/unsup
1003 token-ring-default  act/unsup
1004 fddinet-default     act/unsup
1005 trnet-default       act/unsup

VLAN 1 is even named “default”! Under “ports”, you can see the 10 ports that are members of this VLAN, and at the bottom of this output you see the other four default VLANs. You may never use them, but it’s a really good idea to have that numeric range of default VLANs memorized. When a host in a VLAN sends a

broadcast, a copy of that broadcast will be sent out every port that is a member of that VLAN. Sounds familiar, right? But there’s good news – we can create multiple VLANs that will help us limit the number of broadcasts forwarded throughout our network. Each VLAN is a separate broadcast domain. For example, say we want only Host 2 to receive any broadcast sent by Host 4, and vice versa. We can make that happen by placing

those hosts into their own VLAN! We do that at the port level with the switchport access vlan command:

SW1(config-if)#interface fast 0/2
SW1(config-if)#switchport access vlan 24
% Access VLAN does not exist. Creating vlan 24
SW1(config-if)#interface fast 0/4
SW1(config-if)#switchport access vlan 24

Now Hosts 2 and 4 are in VLAN 24, and Hosts 1 and 3 are still in VLAN 1. We’ll verify that with show vlan brief:

VLAN Name                Status    Ports
---- ------------------- --------- ------------------------------
1    default             active    Fa0/1, Fa0/3, Fa0/5, Fa0/6
                                   Fa0/7, Fa0/8, Fa0/9, Fa0/10
24   VLAN0024            active    Fa0/2, Fa0/4
1002 fddi-default        act/unsup
1003 token-ring-default  act/unsup
1004 fddinet-default     act/unsup
1005 trnet-default       act/unsup
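By the way, VLAN0024 is just the name the switch generates when you don't supply one. If you'd rather see something meaningful in that output, name the VLAN yourself. Here's a quick sketch, with a VLAN name I made up for the example:

SW1(config)#vlan 24
SW1(config-vlan)#name PAYROLL
SW1(config-vlan)#exit

Run show vlan brief again and you'll see PAYROLL where VLAN0024 used to be.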

This simple little configuration does a lot to limit the scope of broadcasts! Previously, when all four hosts were in the same VLAN and one host sent a broadcast, the

switch would make sure every other host on the switch got a copy of it. Now, when any member of either VLAN sends a broadcast, only the other members of that particular VLAN get a copy of it. Using the 64-port switch example, if we had 32 ports in each VLAN, the load on the switch would be cut in half every time a host sent a broadcast. Instead of sending out 63 copies, the switch would send out 31 copies of a broadcast sent out by any host. That’s a big load taken off our switch, and a lot of saved

bandwidth. You already know that in networking, there’s almost always a tradeoff. Something this good must have a drawback, right? Well, yeah, and here it is. If Host 4 sends a broadcast right now, Hosts 1 and 3 won’t see it. What about other types of traffic, like pings? Let’s see: Host4#ping 172.34.34.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.34.34.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

Host4#ping 172.34.34.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.34.34.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

Host4#ping 172.34.34.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.34.34.3, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

When you ping a remote host and get five periods back, that’s bad. There’s no connectivity to the pinged destination, and that’s what we see here for inter-VLAN traffic. Host 4 can still ping Host 2, since they’re in the same VLAN, but Host 4 can no longer ping hosts in the other VLAN. Traffic cannot travel from one VLAN to another unless a Layer 3 device gets involved. That’s likely to be a router, but it doesn’t have to be.

There are two techniques for interVLAN communication you should be ready to configure and troubleshoot on the CCENT and CCNA exams: Layer 3 Switches (not a misprint) “Router On A Stick” (not a joke or misprint) Since both of these involve routing, I want you to go through all the routing sections in this course before we tackle those. For that reason, these two features are

covered thoroughly in a later section of the course. Right now, let's take a little trip to…

The Planet Of The Models

Network models, that is! (Sorry about that.) Not all switches are alike – some are a lot more powerful than others,

some have different capabilities than others – and it’s vital to put your more powerful switches in the proper place in your network. That’s where the Cisco 3-Layer Switching Model comes in. The first time you hear the term campus network, the word campus may make you think of a college or university. While you might find a campus network there, this term is used to describe any network that connects physically close buildings.

The Cisco Switching Model consists of three layers:

Access switches, those closest to the end users
Distribution switches, the switches that connect Access-layer switches to Core-layer switches
Core switches, your powerhouse switches.

This model's purpose is totally different from that of the OSI and TCP/IP models, and it maps to neither. In the following example, traffic that goes from a PC in BLDG1 to a PC in BLDG2 must go through at least one switch in each layer. End

users are connected only to Accesslayer switches.

This particular design gives us quite a bit of redundancy. If one path between the buildings goes down, we have other available paths. Even if one of our core switches goes down, we can still

get data from building to building, from host to host. Redundancy is a fancy way of saying “we’ve got a backup plan”, and in networking and in life, we’ll take as many backup plans as we can get! We need a process that allows us to pick the fastest path between switches while having those backup paths standing by, and luckily, we have such a process – the Spanning Tree Protocol (STP). More about

STP coming up soon. Right now, let me introduce you to another vital switching topic:

Basic Switch Security

“Basic” security might not sound exciting, but these fundamentals can really save your network (and your behind!) The first tip is about as basic as it gets. Put locks between your hardware and people! Your server room should be locked. Ideally, your server and router cabinets should be secured as well, but that’s only going to happen in a perfect world. Keep at least one

locked door between non-networking personnel and your network hardware.

Unused VLANs - An Often Overlooked Security Feature

Cisco switch ports have some undesirable defaults:

They're open, where router interfaces are shut by default

They're actively attempting to trunk by default, meaning they are available for connection to another switch (newer Cisco models don't have this issue)

All ports are in VLAN 1, and that's common knowledge

These are undesirable defaults when it comes to unused ports on a switch. From top to bottom, here's how we can change those defaults to increase switch security:

Close unused ports with the shutdown command

Prevent the port from trunking with the switchport mode access command

Place the port into an unused VLAN

I personally just close unused switch ports, but both Cisco and I recommend strongly that you either close such ports, prevent them from trunking, or create a VLAN strictly for those ports and place them into that VLAN. No ports that are actually connected to hosts should be in that VLAN.
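Here's a sketch of what that lockdown might look like on a single unused port. The VLAN number and the port are my own example values, so use whatever fits your network:

SW1(config)#vlan 999
SW1(config-vlan)#name UNUSED-PORTS
SW1(config-vlan)#exit
SW1(config)#int fast 0/24
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 999
SW1(config-if)#shutdown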

Port Security

When we learned how the switch builds its MAC address table, I mentioned (several times) that the switch will examine the source MAC address of a frame before anything else.

I’m mentioning it again, and with good reason. That behavior makes port security possible! Let’s assume you have a VP that uses a laptop with a MAC address of bb-bb-bb-bb-bb-bb. She suspects that someone else is using her office while she’s gone, and she wants to make sure that no other laptop except hers can connect to the network from her office. We can accomplish this with port security. When you enable port security, the

switch will first inspect the source MAC address of an incoming frame, as usual. If the incoming source MAC address is considered secure, the user will be able to access the network. If the source MAC is considered non-secure, the port will take one of several actions. The source MAC address of the incoming frame really acts as a password. In this lab, we’ll set a secure MAC

address of aa-aa-aa-aa-aa-aa on interface fast 0/2 and see what happens when a host with a different MAC address connects to the port.

SW1(config)#int fast 0/2
SW1(config-if)#switchport port-security ?
  aging        Port-security aging commands
  mac-address  Secure mac address
  maximum      Max secure addresses
  violation    Security violation mode

SW1(config-if)#switchport port-security
SW1(config-if)#switchport port-security mac-address ?
  H.H.H   48 bit mac address
  sticky  Configure dynamic secure addresses as sticky

SW1(config-if)#switchport port-security mac-address aaaa.aaaa.aaaa

Take note of the following:

Port security is off by default. To enable it, use the switchport port-security command. Without that, the other commands are useless.

Note the format of the MAC address in the port-security command.

I didn't set the number of secure MAC addresses. The default number is one, and that's enough for our purposes right now.

I didn't set the violation mode. The default mode is shutdown, and we'll look at the details of all three modes in just a few minutes!

After connecting a host with the MAC address 00e0.1e68.91f0 to port 0/2 and generating some traffic through that port, this is the result of show interface:

SW1#show int fast 0/2
FastEthernet0/2 is down, line protocol is down (err-disabled)

With the default settings for port security left intact, this is what happened when a frame came in with a non-secure source MAC address. The port was shut down and put into err-disabled state. To get this port back up and running, you have to clear up the problem AND shut that port down, then reopen it. We know in our hearts that port

security made that happen, but how can we verify that? With these two show port-security commands! Show port-security gives you some good, fundamental information.

SW1#show port-security
Secure Port  MaxSecureAddr  CurrentAddr  SecurityViolation  Security Action
                (Count)       (Count)        (Count)
---------------------------------------------------------------------------
      Fa0/2              1            1                  1         Shutdown
---------------------------------------------------------------------------
Total Addresses in System (excluding one mac per port)     : 0
Max Addresses limit in System (excluding one mac per port) : 1024

To get even more details, run show port-security interface.

SW1#show port-security int fast 0/2
Port Security              : Enabled
Port Status                : Secure-shutdown
Violation Mode             : Shutdown
Aging Time                 : 0 mins
Aging Type                 : Absolute
SecureStatic Address Aging : Disabled
Maximum MAC Addresses      : 1
Total MAC Addresses        : 1
Configured MAC Addresses   : 1
Sticky MAC Addresses       : 0
Last Source Address        : 00e0.1e68.91f0
Security Violation Count   : 1

From top to bottom, you can see that port security is enabled on fast 0/2, the port is secure AND shut down, the maximum number of MAC addresses, the number of static ("configured") and dynamic ("sticky") MAC addresses seen on that port, and the exact MAC address that caused the port to shut down!
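As a quick sketch, getting an err-disabled port back in business looks like this once you've fixed the underlying problem. That first shutdown is the step people tend to forget:

SW1(config)#int fast 0/2
SW1(config-if)#shutdown
SW1(config-if)#no shutdown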

I'll remove the previous config, and we'll configure the switch to learn a single dynamic MAC address on that same port, and to make that dynamically learned address the secure address. (I've also reset the port that was shut down.)

SW1(config)#int fast 0/2
SW1(config-if)#no switchport port-security mac-address aaaa.aaaa.aaaa
SW1(config-if)#no switchport port-security

SW1(config)#int fast 0/2
SW1(config-if)#switchport port-security
SW1(config-if)#switchport port-security ?
  aging        Port-security aging commands
  mac-address  Secure mac address
  maximum      Max secure addresses
  violation    Security violation mode

SW1(config-if)#switchport port-security mac-address ?
  H.H.H   48 bit mac address
  sticky  Configure dynamic secure addresses as sticky

SW1(config-if)#switchport port-security mac-address sticky

A “sticky” MAC address is one learned dynamically by the switch port. In this case, the very next source MAC address the port sees will be the single secure MAC address. After sending traffic through that port from the same host we used in the previous lab, here are the

results of show port-security interface:

SW1#show port-security int fast 0/2
Port Security              : Enabled
Port Status                : Secure-up
Violation Mode             : Shutdown
Aging Time                 : 0 mins
Aging Type                 : Absolute
SecureStatic Address Aging : Disabled
Maximum MAC Addresses      : 1
Total MAC Addresses        : 1
Configured MAC Addresses   : 0
Sticky MAC Addresses       : 1
Last Source Address        : 00e0.1e68.91f0
Security Violation Count   : 0

The port is secure, it's up, there's one sticky MAC address, and you

see the last source address. All is well! These two scenarios lead us to the musical question, “Can you have a port learn sticky addresses AND configure a static address or two?” You certainly can! You just have to leave room for the sticky learning when you configure the number of secure MAC addresses. Let’s say your port security configuration has two requirements:

The address aa-aa-aa-aa-aa-aa must be configured as a statically secure address.

The next source MAC address the port sees should also be learned as a sticky secure address.

Here's how we tackle that:

SW2(config)#int fast 0/2
SW2(config-if)#switchport mode access
SW2(config-if)#switchport port-security
SW2(config-if)#switchport port-security mac-address aaaa.aaaa.aaaa

SW2(config-if)#switchport port-security mac-address sticky

SW2(config-if)#switchport port-security ?
  aging        Port-security aging commands
  mac-address  Secure mac address
  maximum      Max secure addresses
  violation    Security violation mode

SW2(config-if)#switchport port-security maximum 2

We configured one static secure address, enabled sticky learning, and left room for one address to be learned via the maximum 2 command.

After sending traffic through that port successfully, we'll verify our config with show port-security int fast 0/2.

SW2#show port-security int fast 0/2
Port Security              : Enabled
Port Status                : Secure-up
Violation Mode             : Shutdown
Aging Time                 : 0 mins
Aging Type                 : Absolute
SecureStatic Address Aging : Disabled
Maximum MAC Addresses      : 2
Total MAC Addresses        : 2
Configured MAC Addresses   : 1
Sticky MAC Addresses       : 1
Last Source Address        : 00e0.1e68.91f0
Security Violation Count   : 0

Looks good! Port security is enabled, the port is secure and up, and we met our requirements – we have one statically configured secure MAC ("configured MAC addresses"), and the switch has learned one secure address dynamically – the one shown as "last source address".
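One more verification trick: once a sticky address is learned, the switch writes it into the running configuration as though you'd typed it yourself. Here's a sketch of what that might look like on our port, with the learned MAC shown purely as an example:

SW2#show running-config interface fast 0/2
interface FastEthernet0/2
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security mac-address aaaa.aaaa.aaaa
 switchport port-security mac-address sticky
 switchport port-security mac-address sticky 00e0.1e68.91f0

Remember that the running config isn't saved automatically, so save it if you want those sticky addresses to survive a reload.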

Let's take a look at those violation mode options:

SW2(config-if)#switchport port-security violation ?
  protect   Security violation protect mode
  restrict  Security violation restrict mode
  shutdown  Security violation shutdown mode

IOS Help often gives us a great description of the options we're looking at. This isn't one of those times, so here's the description we need… The default mode is shutdown, and this mode shuts the port down (duh), transmits a message to the log indicating the action taken, and drops the violating frames. The interface status will be err-disabled (short for error-disabled), meaning it must be manually reopened. The LED on the switch

above the disabled port will go dark. restrict drops the violating frames and transmits a message to the log indicating an issue, but does not shut the port down. protect simply drops the violating frames. Port security is a simple and powerful feature, and here’s another one running on our Cisco switches

– the Spanning Tree Protocol.

A Taste Of STP

Redundancy is a fancy way of saying "we've got a backup plan", and in networking, we'll take as many backup plans as we can get! We can never assume that a switch will be 100% operational forever, because it just doesn't work that way. (Not even Cisco switches are infallible!)

Whether we’re dealing with switching or routing, we always want a redundant path between any given two points. That allows us to avoid a single point of failure, a term used for a point in the network that, if broken, brings the network to a halt. Speaking of redundancy, in the following diagram there are multiple paths from Host A or Host B to Host C or Host D.

Right now, we have several possibilities for paths for sending frames from Host A to Host D:

Sw1 - Sw3
Sw1 - Sw2 - Sw3
Sw1 - Sw4 - Sw3
Sw1 - Sw2 - Sw4 - Sw3
Sw1 - Sw4 - Sw2 - Sw3

If Sw2 or Sw4 goes down, frames can still be sent between the four hosts. Let's assume that Sw2 is accidentally powered down. (As the philosopher Christopher Moltisanti said, "It happens.")

There are still paths open for the hosts to successfully transmit frames to each other. Even if Sw4 suddenly went down in addition to Sw2, the link between Sw1 and Sw3 would be enough to allow the hosts to send frames to each other. Now that’s redundancy!

Often, a networking feature, service, or protocol will deliver a great benefit and a potential problem. In this case, our redundant switching network could be subject to switching loops. Let’s bring Sw2 and Sw4 back into our network.

The good news: We have multiple paths between hosts on either side of the switches, which gives us redundancy. The bad news: We have multiple paths between hosts on either side of the switches, which gives us the possibility of switching loops. A switching loop forms when a frame is transmitted and ends up being sent back and forth between the same switches, never reaching its destination. This can happen for various reasons - a corrupt MAC table, duplicate MAC addresses on network hosts

(due to programmable network cards that allow manual assignment of MAC addresses), or destination hosts being unavailable. None of these are common, but they do happen. A switching loop looks much like this:

Here's what happened:

Host A sends a frame to Host D, and it arrives at Sw1
Sw1's MAC table says to reach Host D, the frame should be forwarded to Sw4
Sw4's MAC table says to reach Host D, the frame should be forwarded to Sw2
Sw2's MAC table says to reach Host D, the frame should be forwarded to Sw1

That forwarding process goes on and on, with the frame just circling in a logical loop, never reaching its destination. Cisco switches use the Spanning Tree Protocol (STP) to prevent switching loops, and STP is enabled by default. You'll learn much more about STP in your

ICND2 studies, but you need to know its basics now. STP will determine a loop-free path for frames, and ports that are not on that path will be placed into blocking mode. In our previous example, STP would determine a “best path” from Host A to Host D, and would use only ports along that path to transport frames from A to D. If the best path becomes unavailable, STP would quickly recalculate the metrics to determine the new best path, and ports along that path would be brought out of

blocking mode. Do not assume that the physically shortest path from one host to another in a switching network is the path STP will choose as best. STP uses port speeds along a path to determine the port costs and the best paths. This is strictly an overview of STP, and you will learn much more about it during your ICND2 and CCNP studies. In the meantime, you now know what a switching loop looks

like, and that STP's primary purpose in a switching network is to prevent those loops from forming. STP is on by default, and I strongly suggest you keep it on.

Starting Your Switch Troubleshooting At Layer 1

A switch is a Layer 2 device, but you should begin your troubleshooting at Layer 1. With that in mind, there are some lights on the front of a typical Cisco switch that'll help you nail switch

issues quickly. Here are the panels for the Cisco 2950 and 2960 switch models, in that order:

You’ll find these lights on the front left of these particular switch models. If all of these lights are out, there’s a great chance the switch is not getting any power. Check the

socket. Plugs can come loose over time, and a plug that looks secure in the socket may not be secure enough to deliver power.

SYST: The System light
No light: The switch is off.
Green light: Up and running!
Amber light: Uh oh. We have power, but something else is wrong. (I know that's not much to go on, but it's a starting point!)

RPS: Redundant Power Supply status
Off: Well, your RPS is off. Or you don't have one.
Solid green: Connected and ready to roll.
Flashing green: Connected, but currently supplying power to another device, so it can't help your switch right now.
Solid amber: The RPS itself is in standby OR in fault.
Flashing amber: Internal power supply of a switch has failed and the RPS is getting power to that switch.

MODE: You'll see a green light on either STAT, DUPLX, or SPEED, and the light toggles through the three as you continue to press the MODE button.
STAT: If this one's green, it means the individual STATUS light now displayed for each port is accurate.
DUPLX: If green, means the individual DUPLEX light now displayed for each port is accurate.
SPEED: If green, indicates each individual SPEED light for each port is accurate.

As you view the port status, duplex, and speed values via the MODE button, the individual port LEDs will change. Here's what the port LED indicates for all three of these values:

STATUS:
Flashing green: Traffic going through interface properly
Solid green: The port's up, but no traffic is going through the port
Flashing Amber: Blocked by STP (not necessarily a bad thing)
Off: Port is not physically up (may be administratively shut down)

DUPLEX:
Green: Port is running at full-duplex
Off: Port is running at half-duplex

SPEED:
Flashing green: Port running at Gig Ethernet speed
Solid green: Port running at Fast Ethernet speed
Off: Port running at Ethernet speed

Watch that last one. It's really easy to look at an off LED and think there's a problem.

Now that we know our switches are up and running properly, we need to be sure they can communicate with each other. We do that over trunks. Like TCP and UDP, this is a feature that works beautifully when you stick with the defaults. For our

CCENT and CCNA exams, we have to be 100% crystal clear on trunking operations, and that includes the similarities and differences with our two trunking protocols. Let's jump right in!

Trunking

Trunking is the process of allowing frames to flow between physically connected switches. A tag indicating the destination VLAN is placed on the frame by the transmitting switch (“frame

tagging”). In the following network, we have two hosts in VLAN 10, and they’re connected to separate, trunking switches. A frame would be tagged “VLAN 10” before being sent across the trunk. When the receiving switch processes that incoming frame, the switch knows that frame should be distributed only to members of VLAN 10. This allows members in the same VLAN to communicate when they

are physically connected to different switches, which is a common need since VLANs can and usually do span multiple switches.

We need the help of a trunking protocol to build this trunk. Not all

Cisco switches support both of these protocols, but for your CCNA and CCENT exams, it's an excellent idea to know them both and the differences between them. Inter-Switch Link (ISL) is a Cisco-proprietary trunking protocol, meaning it can only be used between two Cisco switches. The entire frame is encapsulated before transmission across the trunk. IEEE 802.1Q, generally known as

“dot1q”, is the industry standard trunking protocol. (“Industry standard” is a fancy way of saying “everybody’s switches can run this”.) If a non-Cisco switch is involved in the trunk, this is the trunking protocol to use. Dot1q does not encapsulate the entire frame. Instead, a 4-byte value containing the VLAN ID is added to the Ethernet header. The key difference between the two is the way they handle - or do not handle - the native vlan. By default, the native vlan is VLAN 1. The native vlan is the “default vlan”.

When dot1q is ready to transmit a frame destined for the native vlan over the trunk, the protocol will not put that 4-byte value on the frame. Instead, the frame is transmitted "as-is". This helps to cut down even more on overhead. When the receiving switch sees there is no VLAN tag on the frame, it assumes the frame is intended for the native vlan, and it forwards the frame accordingly. Dot1q allows for a different VLAN to be selected as the native VLAN.
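If you ever do need to move the native VLAN off of VLAN 1, it's one command per trunk port, and it has to match on both ends of the trunk. A quick sketch, using VLAN 10 purely as an example:

SW1(config)#int fast 0/11
SW1(config-if)#switchport trunk native vlan 10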

ISL doesn’t even know what a native VLAN is! Worse, every single frame transmitted over an ISL trunk will be encapsulated. That means a lot of additional overhead as compared to dot1q. Summing up:

ISL is the Cisco-proprietary trunking protocol. ISL encapsulates every frame before it crosses the trunk, and doesn't recognize the native VLAN concept. Dot1q is the industry standard, places only a 4-byte header onto a frame, and won't even do that if the frame is destined for the native VLAN. ISL is so clunky that many Cisco switches don't support it, including the popular home lab 2950 and

2960 switches.

Access Ports, Trunk Ports, And Trunk Port Settings

A Cisco switch port is going to be an access port or a trunk port. It cannot be both. An access port belongs to one and only one VLAN. Once you configure a port as an access port, that port cannot trunk. The default behavior of a trunk port is that it is a member of all VLANs, but you will not see this indicated by show vlan brief. Here's the

output of that command on our switch where fast0/11 and 0/12 are trunking (ports 9 and 10 removed for clarity):

SW1#show vlan br

VLAN Name                Status    Ports
---- ------------------- --------- ------------------------------
1    default             active    Fa0/1, Fa0/2, Fa0/3, Fa0/4
                                   Fa0/5, Fa0/6, Fa0/7, Fa0/8
1002 fddi-default        act/unsup
1003 token-ring-default  act/unsup
1004 fddinet-default     act/unsup
1005 trnet-default       act/unsup

Notice that 0/11 and 0/12 are

missing from the port list. They’re seen with the show interface trunk command.

SW1#show interface trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/11      desirable    802.1q         trunking      1
Fa0/12      desirable    802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/11      1-4094
Fa0/12      1-4094

Port        Vlans allowed and active in management domain
Fa0/11      1
Fa0/12      1

Port        Vlans in spanning tree forwarding state and not pruned
Fa0/11      1
Fa0/12      none

Let’s use IOS Help to look at our trunking options. We have several options, but they’re not all visible in one place.

SW1(config-if)#switchport mode ?
  access   Set trunking mode to ACCESS unconditionally
  dynamic  Set trunking mode to dynamically negotiate access or trunk mode
  trunk    Set trunking mode to TRUNK unconditionally

The top choice refers to access ports. When you configure a port as an access port, this is in effect turning trunking OFF. The dynamic option will dynamically negotiate trunk mode,

while the trunk option unconditionally turns trunking on for this port. dynamic has a few options we need to know as well.

SW1(config-if)#switchport mode dynamic ?
  auto       Set trunking mode dynamic negotiation parameter to AUTO
  desirable  Set trunking mode dynamic negotiation parameter to DESIRABLE

We have dynamic auto and dynamic desirable, and these options are generally referred to simply as “auto” and “desirable”. There’s one more “hidden” trunk port setting. Note this is an

interface-level command.

SW1(config-if)#switchport nonegotiate

Therefore, according to IOS Help, we actually have five options for trunk ports.

On means the switchport is unconditionally trunking, whether the other end of the trunk likes it or not.

Off means the port will not trunk with the remote partner under any circumstances. This mode is the result of making a port an access port.

Desirable means the port will actively attempt to trunk. If the remote port is in on, desirable, or auto mode, a trunk will result.

Auto means the port will trunk, but the other side must initiate trunking. If the remote port is in desirable or on mode, a trunk will result. If both sides are in auto mode, no trunk will result.

Finally, nonegotiate means the local port will go into permanent trunking mode, but Dynamic Trunking Protocol (DTP) frames are not sent across the trunk.

There's a slight difference in the default trunk port mode between two popular Cisco home lab switches. The 2950 defaults to dynamic desirable, where the 2960 (and most new Cisco switches) default to dynamic auto. Those of you who own 2950s or studied for the previous version of the exam, note the change!
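If you want a port trunking unconditionally, here's the kind of config you'd use. On switches that can run both ISL and dot1q, you have to pick the encapsulation before IOS will accept the trunk command; on the 2950 and 2960, which only speak dot1q, you'd skip that encapsulation line:

SW1(config)#int fast 0/11
SW1(config-if)#switchport trunk encapsulation dot1q
SW1(config-if)#switchport mode trunk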

Filtering VLAN Traffic Allowed Across The Trunk

We can filter traffic going across our trunks on a per-VLAN basis. Before we get to the commands that allow you to make this happen, let's talk about why you might want to use this option.

In this network, there’s no reason to send VLAN 30 traffic to the switch that only has hosts in VLAN 20. There’s also no reason to send VLAN 20-based traffic to the switch that only has hosts in VLAN 30.

I’ll use IOS Help so you can see the entire command and options:

SW1(config)#int fast 0/6
SW1(config-if)#switchport trunk ?
  allowed  Set allowed VLAN characteristics when interface is in trunking mode
  native   Set trunking native characteristics when interface is in trunking mode
  pruning  Set pruning VLAN characteristics when interface is in trunking mode

SW1(config-if)#switchport trunk allowed ?
  vlan  Set allowed VLANs when interface is in trunking mode

SW1(config-if)#switchport trunk allowed vlan ?
  WORD    VLAN IDs of the allowed VLANs when this port is in trunking mode
  add     add VLANs to the current list
  all     all VLANs
  except  all VLANs except the following
  none    no VLANs
  remove  remove VLANs from the current list

This is an interface-level command, so we can filter VLAN 20 on fast 0/6 and VLAN 30 on fast 0/8 (our two trunks). With these options, we have different ways we could do this, one being the except option.

SW1(config-if)#switchport trunk allowed vlan except ?
  WORD  VLAN IDs of the allowed VLANs when this port is in trunking mode

SW1(config-if)#switchport trunk allowed vlan except 20
SW1(config-if)#int fast 0/8
SW1(config-if)#switchport trunk allowed vlan except 30

Verify with show interface trunk.

SW1#show int trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/6       desirable    802.1q         trunking      1
Fa0/8       on           802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/6       1-19,21-4094
Fa0/8       1-29,31-4094

VLAN 20 is missing from “VLANs allowed on trunk” on fast 0/6, and VLAN 30 is missing from “VLANs allowed on trunk” on fast 0/8. Just what we wanted! You have to see the big picture with this command and where you filter

VLANs. Here’s the same network we just used – with two switches added:

In this case, you wouldn’t want to

filter either VLAN 20 or 30, since there are downstream switches that need traffic destined for both VLANs, and the only way for those switches to get that traffic is via the switches that don’t have hosts in both VLANs. Let’s say those two switches were just added to the network, and you now need to adjust the VLAN filtering we just configured. Actually, we need to negate the filtering. The most efficient way to do that is with the all option. (We could also use the “add” option to

add the denied VLANs back to our “permitted” list.)

SW1(config)#int fast 0/6
SW1(config-if)#switchport trunk allowed vlan ?
  WORD    VLAN IDs of the allowed VLANs when this port is in trunking mode
  add     add VLANs to the current list
  all     all VLANs
  except  all VLANs except the following
  none    no VLANs
  remove  remove VLANs from the current list

SW1(config-if)#switchport trunk allowed vlan all
SW1(config-if)#int fast 0/8
SW1(config-if)#switchport trunk allowed vlan all

SW1#show int trunk

Port        Mode         Encapsulation  Status        Native vlan
Fa0/6       desirable    802.1q         trunking      1
Fa0/8       on           802.1q         trunking      1

Port        Vlans allowed on trunk
Fa0/6       1-4094
Fa0/8       1-4094

Verified with show interface trunk, the previously denied VLANs are now allowed! With our switches trunking, we need to let them exchange VLAN information. It’s important for our switches to know about all VLANs in the network, not just the VLANs

configured on the switch. Let’s revisit that last network:

By default, the switches with no hosts in VLANs 20 and 30 won’t know those VLANs exist. Without some way of letting them know, they’ll drop incoming frames destined for the VLAN they don’t have ports in, and our connectivity is in a handbasket and headed for a really hot place. Luckily, we have a protocol that will let all of our switches know about all VLANs. It’s VTP – the VLAN Trunking Protocol.

VLAN Trunking Protocol (VTP)

VTP allows switches to advertise VLAN information to other members of the same VTP domain, which allows a consistent view of the switched network across all switches in that domain. When a VLAN is created on one switch in a VTP domain, all other VTP devices in the domain are notified of that VLAN's existence. Switches in the VTP domain will know about every VLAN, even

VLANs that have no members on that switch. This information is shared between VTP devices in the form of summary advertisements. A VTP Server will send one of these advertisements every five minutes, and immediately upon a change in its VTP database. There are three separate VTP modes.

In Server mode, VLANs can be created, modified, and deleted. When these actions are taken, the changes are advertised to all switches in the VTP domain. VTP Servers can originate, forward, and process VTP summary ads. VTP Servers keep VLAN configuration info upon reboot by storing that information in nonvolatile RAM (NVRAM). In client mode, the switch cannot modify, create, or delete VLANs.

VTP clients cannot retain VLAN configuration information upon reboot. VTP clients keep this information in their running configuration, but not in NVRAM. If a VTP client is reloaded, it must obtain this information from a VTP server when it comes back up. VTP clients can accept and process summary advertisements. The third VTP mode is transparent mode. Take special note of the

differences between transparent mode and the other two VTP modes. Switches in transparent mode forward the VTP advertisements received from other switches, but they do not process the information contained in those ads. VLANs can be created, deleted, and modified on a switch in VTP transparent mode, but those changes are not advertised to the other switches in the VTP domain. They are locally significant only.

Transparent VTP switches keep their VLAN information in NVRAM, just as VTP Servers do. Setting the VTP mode of a Cisco switch is done with the vtp mode command.

SW1(config)#vtp mode ?
  client       Set the device to client mode.
  server       Set the device to server mode.
  transparent  Set the device to transparent mode.

Some important VTP notes:

As you'd expect, the VTP domain name must match for VTP devices to successfully exchange information.

There are three versions of VTP, with v3 being the most recent and the only one with cryptographic capabilities for the optional password. The other two leave the optional password sitting there in clear text.

The domain name is case-sensitive, so "CISCO" and "cisco" are two different domains.

The VTP domain is set with the vtp domain command. When you see

the domain name changed from NULL to a new name, NULL indicates there was no previous domain name.

SW1(config)#vtp domain CCNA
Changing VTP domain name from NULL to CCNA
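While we're setting the domain, this is also where that optional password comes in. Just remember that in VTP versions 1 and 2 it's not encrypted. The password here is obviously just an example:

SW1(config)#vtp password BULLDOG
Setting device VLAN database password to BULLDOG

The password, like the domain name, has to match on every switch in the domain, or those switches won't accept each other's advertisements.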

To distribute information about a newly-created VLAN, the switch upon which that VLAN is created must be in Server mode. You can’t have a VTP domain with only VTP clients.

By the way, this is what happens when you try to create a VLAN on a switch configured as a VTP client:

SW1(config)#vtp mode client
Setting device to VTP CLIENT mode.
SW1(config)#vlan 20
VTP VLAN configuration not allowed when device is in CLIENT mode.

The switch is kind enough to remind you that you cannot create, modify, or delete VLANs on a VTP client. I doubt the exam will be so kind. The main VTP verification command is show vtp status. This

command will display the local switch’s operating mode, the VTP domain name, the configuration revision number, and more.

SW1#show vtp status
VTP Version                     :
Configuration Revision          :
Maximum VLANs supported locally :
Number of existing VLANs        :
VTP Operating Mode              :
VTP Domain Name                 :
VTP Pruning Mode                :
VTP V2 Mode                     :
VTP Traps Generation            :
MD5 digest                      : 0x0B 0xF9 0x37 0x57
Configuration last modified by
Local updater ID is 74.92.187.

Note the innocent little configuration revision number. When you take a switch that’s been in use in one network (production or lab, it doesn’t matter), and insert it into another network without zapping that config revision number back to zero, there’s a big chance for big trouble.

Why I’m Spending Time On VTP Revision Numbers

You may not see these numbers on your CCENT exam, but there is a huge issue with these numbers that EVERY Cisco network admin should know about. Consider this a bonus real-world lesson. Here’s the problem: If you take a switch from one network and put into another network, you have to reset the config revision number to zero on that switch BEFORE you

put it into another network, or you risk overwriting all the VLAN information in that second network! The chances of this are greater than most people realize. Let's say you work for a consulting firm that's been nice enough to put together a Cisco practice lab for you and the other admins. Let's also say a client network has a switch suddenly go bad, and the firm doesn't have any new switches sitting around.

Where do you think that switch is going to come from? From the practice lab, that’s where! Nothing wrong with that – we do what we have to do to keep the client up and running – but you must reset that revision number. If you don’t, and the revision number on your practice lab switch is higher than that of the switches in the client’s network, the client’s VLAN information will be overwritten by the lab switch.

It’s very likely that the VLANs are changed more often on the lab switch than the client switches, so the chance that the lab switch has a higher config revision number is very high. There are several methods to reset that revision number to zero, and the quickest one is to change the VTP domain name to another name, then set it to whatever you need it to be. (Don’t use the original name.) Here’s a switch with a config

revision number of 5. To set it to zero, I’ll change the VTP domain name to CCNA, then verify with show vtp status.

SW1#show vtp status
VTP Version                     :
Configuration Revision          : 5
Maximum VLANs supported locally :
Number of existing VLANs        :
VTP Operating Mode              :
VTP Domain Name                 :

SW1(config)#vtp domain CCNA
Changing VTP domain name from

SW1#show vtp status
VTP Version                     :
Configuration Revision          : 0
Maximum VLANs supported locally :
Number of existing VLANs        :
VTP Operating Mode              :
VTP Domain Name                 : CCNA

When you get the switch to the client site, set the name to whatever the client’s using, and verify one more time.

SW1(config)#vtp domain CLIENTS
Changing VTP domain name from CCNA to CLIENTS

SW1#show vtp status
VTP Version
Configuration Revision          : 0
Maximum VLANs supported locally
Number of existing VLANs
VTP Operating Mode
VTP Domain Name                 : CLIENTS

When this switch receives a VTP summary advertisement from another switch on the client network, it will overwrite its VLAN information with the info contained in that advertisement. This is likely more than you'll need to know about VTP for the CCENT exam, but it's important that you know about resetting the VTP domain name. If for some reason resetting the name doesn't reset the config revision number, do some research on the Net – there are other methods, but this one is the most popular and is generally effective.

That's it for switching - for now! Near the end of the course, you'll find a section on Router-On-A-Stick and Layer 3 switches. Those lessons will sink in quickly for you once you've gone through the routing sections in this course. Right now, let's move forward and work with hexadecimal

conversions.

The Joy Of Hex

I really mean that. Hex questions are going to be a joy for you on the exam. With solid practice, you’re going to nail these questions quite easily, because working with hex is just like counting on your fingers (almost). If hex intimidates you in any way, or you’re not familiar with it, take a deep breath because I’m going to ease your mind in this section.

We did a little hex work in another section, but to get it down cold and nail every question on exam day, we have to get in some more practice! The MAC addresses we worked with in this section and that you’ll see in production networks are written in hexadecimal, and converting a decimal value to hex and from hex to decimal are vital skills that will come in handy on exam day.

One major difference between the decimal system you and I use every day and the hex system is that in hex, we’ll have letters that represent certain numeric values. When you read that, your reaction might have been something along the lines of “Great, we’re going to use letters for numbers. This must be some kind of super-complicated double-secret-probation kind of math!” Wrong! It’s neither super-

complicated nor on double-secret probation. It’s actually simple – with practice! The numbering system we use every day, decimal, uses units of 10. We don’t stop to think about it like that because we’re so familiar with it. If I ask you to give me 35 cents, you don’t think “Okay, that’s 3 units of 10 cents and 5 units of 1 cent.” You’d just give me the money. Same if I asked you to loan me 712 dollars. You don’t think “Okay,

that's 7 units of 100, 1 unit of 10, and 2 units of 1." You just think "How can I get out of here without giving Chris this money?" I kid, but you get the point.

                 Units of 100   Units of 10   Units of 1
Decimal "35"                         3             5
Decimal "712"         7              1             2

Hex numbers are read much the

same way, except we're using units of 16 and their multiples rather than units of 10. For example… The hex value "35" represents 3 units of 16 and 5 units of 1. The hex value "712" represents 7 units of 256 (16 × 16), 1 unit of 16, and 2 units of 1.

             Units of 256   Units of 16   Units of 1
Hex "35"                         3             5
Hex "712"         7              1             2

The logical question at this point: “Since hex is based on units of 16, how can we represent a value of 10, 11, 12, 13, 14, or 15?” The answer: “That’s where the letters come in!”

Those six values are represented by the following letters:

A = 10
B = 11
C = 12
D = 13
E = 14
F = 15

And that's it! The case of the letter doesn't matter – both "a" and "A" represent 10, and so forth.

For your CCENT and CCNA exams, you should be ready to convert a hex value to decimal, and vice versa. I've listed some of each below, and this is a good way for you to get started with your conversion skills. The answers and explanations are given after the initial lists. Another great way to get some practice in – when you have a few minutes here and there at work or at home, just grab a piece of paper and something to write with, write down some random decimal values, and practice converting them to hex. These smaller practice periods really add up to success on exam day!

Convert the following hex values to decimal:

1c
F1
2a9
14b
3e4

13
784
419
103
345

The answers and explanations:

1c = 1 unit of 16, 12 units of 1 = 16 + 12 = 28
F1 = 15 units of 16, 1 unit of 1 = 240 + 1 = 241
2a9 = 2 units of 256, 10 units of 16, 9 units of 1 = 512 + 160 + 9 = 681
14b = 1 unit of 256, 4 units of 16, 11 units of 1 = 256 + 64 + 11 = 331
3e4 = 3 units of 256, 14 units of 16, 4 units of 1 = 768 + 224 + 4 = 996
13 = 1 unit of 16, 3 units of 1 = 16 + 3 = 19
784 = 7 units of 256, 8 units of 16, 4 units of 1 = 1792 + 128 + 4 = 1924
419 = 4 units of 256, 1 unit of 16, 9 units of 1 = 1024 + 16 + 9 = 1049
103 = 1 unit of 256, 0 units of 16, 3 units of 1 = 256 + 0 + 3 = 259
345 = 3 units of 256, 4 units of 16, 5 units of 1 = 768 + 64 + 5 = 837

Convert the following decimal values to hex:

42
22
790
105
174

Answers and explanations:

This is one of those processes that looks like it would take a long time when you read about it, but when you put it into action, it takes only seconds. For this conversion, just use this simple little chart and work from left to right:

        256   16   1
  42

Any units of 256 in 42? No, so leave that blank. Any units of 16 in 42? Sure, 2 of them. That equals 32, which in turn gives us a remainder of 10. Any units of 1 in 10? Yep, 10 of them! We represent that with the letter "A" in hex. Final answer: The decimal value 42 converts to the hex value 2A.

        256   16   1
  42     0     2   A

Next value: 22

        256   16   1
  22

Any units of 256 in 22? No, so we leave that blank. Any units of 16 in 22? Yes, one of them, with a remainder of 6. There are six units of 1 in 6, so the final conversion is the decimal value 22 to the hex value 16.

        256   16   1
  22           1   6

Next value: 790

        256   16   1
 790

Any units of 256 in 790? Yes, three of them, for a total of 768. That gives us a remainder of 22. Any units of 16 in 22? Sure, one of them, giving us a remainder of six. There are six units of 1 in six, so our conversion of the decimal 790 results in the hex value 316.

        256   16   1
 790     3     1   6

Next value: 105

        256   16   1
 105

Any units of 256 in 105? No, so we skip that one. Any units of 16 in 105? Six of them, for a total of 96 and a remainder of 9. There are 9 units of 1 in 9, so our conversion of the decimal 105 gives you the hex value 69.

        256   16   1
 105           6   9

Next decimal: 174

        256   16   1
 174

Any units of 256 in 174? Nope, so we skip that one. Any units of 16 in 174? 10 of them, and 10 is represented by "A" in hex. We have a remainder of 14, which in turn is represented by "E" in hex. The decimal 174 converts to the hex value AE.

        256   16   1
 174           A   E

And that’s it! With the “mystery” of hex out of the way, the conversions are simple, and they’ll be easy points for you on exam day – with practice, that is!
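If you'd like a quick way to check your practice conversions, here's a minimal Python sketch of the same "units of 256, 16, and 1" method. The helper names are just mine for illustration; Python's built-in int("2a9", 16) and hex(790) will do the same job.

HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(hex_string):
    """Convert a hex string such as '2a9' to its decimal value."""
    total = 0
    for digit in hex_string.upper():
        total = total * 16 + HEX_DIGITS.index(digit)
    return total

def decimal_to_hex(value):
    """Convert a decimal value such as 790 to a hex string such as '316'."""
    if value == 0:
        return "0"
    digits = ""
    while value > 0:
        digits = HEX_DIGITS[value % 16] + digits   # each remainder is the next hex digit
        value //= 16
    return digits

print(hex_to_decimal("2a9"))   # 681
print(hex_to_decimal("f1"))    # 241
print(decimal_to_hex(790))     # 316
print(decimal_to_hex(174))     # AE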

That just about wraps up this section, and while the following videos aren’t required viewing, they’re certainly helpful! There are quite a few free Cisco switching videos on my YouTube channel, including the following… and while you’re out there, be sure to join my 10,000+ subscribers so you’re the first to know about every new video! Video Practice Exam on Cisco Switching Fundamentals:

http://www.youtube.com/watch?v=OLrj3qzTGw4

"You Might Just Be A Root Switch If…"
http://www.youtube.com/watch?v=9Db_5o_eXKE

"You Might Not Be A Root Switch If…."
http://www.youtube.com/watch?v=Hxf8f5U3eKU

Odd Switch Behavior:
http://www.youtube.com/watch?v=XuAXVVhILZ8

Cisco Switching Video Practice Exam:
http://www.youtube.com/watch?v=u8oAvpGsJYw

Over 325 free videos right here, along with your opportunity to subscribe to my YouTube channel!

http://www.youtube.com/user/ccie12

See you there – and in the next section! Chris B.

A Network Admin's Book Of WANs
(Well, a summary, anyway.)

This section is an intro to WANs, and when I say "intro", I mean "intro"! When one router wants to talk to another over a long distance, that's our Wide Area Network at work, and we have plenty of options for our WANs. It won't surprise you to learn that each option has plenty of details.

Those are details we aren’t going to visit today. The CCENT doesn’t go into detail on WANs. You’ll hit plenty of labs and details regarding HDLC, Frame Relay, and PPP in your CCNA studies, but for right now, let’s take an introductory look at these and other WAN options! Here’s a typical WAN connection…

… except, of course, routers in a

production network WAN will not be directly connected. There are generally devices belonging to the WAN service provider between the two routers that we don't configure. More on those in a minute. Right now, let's talk about some of the encapsulation options for our WAN!

HDLC And PPP

With a point-to-point WAN link, we have two options for encapsulation:

The High-Level Data Link Control Protocol (HDLC)

The Point-to-Point Protocol (PPP)

A Cisco serial interface is running HDLC by default…

R3#show int s1
Serial1 is up, line protocol is up
  Hardware is HD64570
  Internet address is 172.12.1
  Encapsulation HDLC, loopback not set

… but for reasons we’ll see during our PPP discussion, you’ll likely want to change this default.

The version of HDLC running on Cisco routers is actually a version of HDLC developed by Cisco themselves. It's not Cisco-proprietary, and is technically known as cHDLC. Most if not all documentation on this version of the protocol refers to it simply as HDLC, and I'll do the same in this book. What was wrong with the original HDLC, you ask? It didn't offer multiprotocol support, and that was not going to work with Cisco routers. Cisco simply added a TYPE field to the original version,

and voila – multiprotocol support!

I'd Rather Switch Than Fight HDLC's Shortcomings

Even though HDLC is the default encap on Cisco serial interfaces, it has some real shortcomings when compared to PPP. It's not as much an argument of "HDLC stinks" as it is "PPP is great"! PPP offers many features that HDLC does not, including:

Authentication through the use of the Password Authentication Protocol (PAP) and the Challenge Handshake Authentication Protocol (CHAP)

Support for error detection and error recovery features

Multiprotocol support (which Cisco's HDLC does offer, but the original HDLC does not)

Those aren't all of PPP's advantages over HDLC, but they're the most important to us as network admins. No look at WAN protocols would be complete without a look at

Frame Relay. The following intro to Frame is from my ICND2 book, and when you hit that one and pursue your CCNA, we'll run several labs so you can see it in action. Right now, let's see how Frame Relay works – and what the heck a "frame cloud" is!

Frame Relay

Point-to-point networks are nice, but there's a limit to scalability. It's just not practical to build a dedicated PTP link between every single router in our network, nor is it cost-effective. It would be a lot easier (and cheaper) to share a

network that’s already in place, and that’s where Frame Relay comes in! A frame relay network is a nonbroadcast multi-access (NBMA) network. “nonbroadcast” means that broadcasts are not transmitted over frame relay by default, not that they cannot be sent. “multiaccess” means the frame relay network will be shared by multiple devices. The frame provider’s collection of frame relay switches has a curious name - frame relay cloud. You’ll often see the frame provider’s switches represented with a cloud

drawing in network diagrams, much like this:

We have two kinds of equipment in this network:

The Frame Relay switches, AKA the Data Circuit-terminating Equipment (DCE). These belong to the frame relay provider, and we don't have anything to do with their configuration. The CSU/DSU that supplies clockrate to our router may be on our property, but it's still considered to be DCE.

The routers, AKA the Data Terminal Equipment (DTE). We have a lot to do with their configuration!

Each router's serial interface will be connected to a CSU/DSU, and the DCE must send a clockrate to that DTE. If the clockrate isn't there, the line protocol will go down.

Those two frame switches are not going to be the only switches in that cloud. Quite the contrary, there can be hundreds of them! For simplicity's sake, the following diagram will have far fewer than that.

The point at which our network

meets that of the service provider is the demarcation point, sometimes referred to as the “demarc point” or the “point of presence”. You and I, the network admins, don’t need to list or even know every possible path in that cloud. Frankly, we don’t care. The key here is to know that not only will there be multiple paths through that cloud from Router A to Router B, but data probably will take different paths through that cloud. That’s why we call this connection between the routers a virtual circuit. We can send data over it

anytime we get ready, but data will not necessarily take the same path through the provider’s switches every time. Consider this your Frame Relay teaser! In the ICND2 material, we’ll get much more into the nuts and bolts of running Frame on Cisco routers, including plenty of lab work. Right now, let’s take a very brief look at running Ethernet over a WAN!

“Strong Enough For A WAN, Made For A LAN”

We couldn’t even consider using Ethernet for a WAN for many years, since it really was designed for a LAN. That’s no longer the case, though! Basically, using Ethernet for a WAN (an EWAN) is set up much like Frame Relay, in that we have our connection to the service provider’s devices, and what happens in the cloud is the business of the service provider. (In Ethernet-over-WAN terminology,

the point of connection to the service provider is called the point of presence.) In CiscoLand, we use Ethernet over Multiprotocol Label Switching (EoMPLS) for Ethernet over WANs, and frankly, it’s pretty darned complicated. Definitely not something we’re getting into during your CCENT studies! At this point, it’s really enough just to know that Ethernet is becoming a more and more popular WAN option!

Between And About The Lines

We'll discuss the cables inside our LAN in another section. Right now, let's talk a little about the cable that makes our WAN possible. Our leased line is our physical connection to the service provider. It's that simple. What's not as simple is keeping up with the zillion names we've given the leased line over the years. When I first started working on the WAN side of the ball, I heard all of the terms listed here in my first couple of days – I thought they were all separate lines! Or links, depending on who was talking!

Serial line
T1 (or "T1 line")
PTP link (Point-to-Point)

You can substitute the word "link" for "line" in any or all of those. They're all leased lines. More about those in your ICND2 studies.

Home WAN Access

Household access to a WAN was once limited to a slow-by-today's-standards connection to an internet service provider. (I still have the baseballs from winning the regular season and playoffs in my 1993 Prodigy Fantasy Baseball League.) It wasn't much, but we were darn glad to have it! For many of us, the argument about internet access vs. using the phone is in the past. Analog modems ("dialup modems") are still out

there, and we can’t use the phone and access the Internet at the same time with those. Many of today’s home users utilize digital subscriber lines (DSL) to connect to the Internet, with a specialized piece of equipment on each end of the connection. Man, when DSL came along, it was party time! It’s always on, no dialup needed (“Do we have an AOL disk anywhere?”), and you could access the Internet while someone else used the phone! The home user connects via a DSL Modem, which in turn

communicates with the service provider’s DSL Access Multiplexer (DSLAM). We’ll see multiplexing pop up later in the course, so let’s define that term now. Multiplexing is the process of sending multiple data streams simultaneously over a single channel. The data is then “demultiplexed” once it’s crossed the channel.

Illustration courtesy of Wikipedia, user “The Anome”. In the case of DSL, we’re using the phone line, so we’re multiplexing computer data and voice data. This allows us to access the Internet while someone else uses the phone. Might not sound like much today, but it was kind of a big deal back then! There are several DSL types, collectively referred to as “xDSL”. I won’t list all of them here, but there are two in particular you should be aware of – and you might be reading this over one right now!

ADSL is Asymmetric DSL, the DSL type that delivers greater speed in the downstream direction (to the customer) than upstream. Most DSL connections are of this type. SDSL is Symmetric DSL, where the upload and download streams flow at the same rate. Not all home internet access is via the telephone line! Many homes connect through their cable connection. I've done so myself,

and when the ‘net connection would drop, the first thing we’d do in our troubleshooting (after checking the cables, of course) was to check to see if the cable TV was still working. If it wasn’t, we knew there was a provider issue. This concludes our quick look at our WAN technologies and DSL variations. We’ll hit Frame, PPP, and HDLC hard during your CCNA studies. If you’re interested in going way beyond the scope of the exam regarding DSL, check out Wikipedia – they have more on DSL than you might want to know!

Onward!

DNS, DHCP, and ARP

There are network services that run almost flawlessly, to the point where we don't really give them much thought on a day-to-day basis. For our CCENT and CCNA exams, we need to give these services a LOT of thought! We'll start by taking a look at our Layer 2 and Layer 3 address acquisition methods, and we'll work in some Layer 3 IP address assignment as well. Let's get started!

One Data Transmission, Two Destination Addresses

As network admins, we spend a lot of time concerning ourselves with IP addresses -- assigning them, filtering them, etc. We don't think about MAC addresses that often, but data going from Host A to Host B must have a destination MAC address for Host B as well as a

destination IP address!

To get these two required destination addresses, Host A will use two separate protocols: Domain Name System (DNS) for the IP address Address Resolution Protocol (ARP) for the MAC address

Host A will require the IP address first, since it must know the IP address of the remote host in order for the ARP process to work properly. Let's take a quick look at the DNS process.

Domain Name System

Host A will know the computer name of Host B. For this discussion we'll assume that name to be "hostb". Now it needs an IP address and a MAC address for that hostname, and DNS will help it get that IP address.

The DNS process is very simple. Each host will have the IP address of a DNS server, and a host needing the IP address of another host will send a DNS Request to the DNS server. Note that all devices in the following example are on the same network segment. There are no routers involved.

The natural question: “How does Host A know the IP address of the DNS server in the first place?” That happens in one of two ways:

The DNS server address is hard-coded on Host A The DNS server address was learned via DHCP Let’s look at the output of ipconfig /all for the wireless connection on a PC. Note the DNS server locations.

Wireless LAN adapter Wireless Network Connection:

   Connection-specific DNS Suffix  . : sbx13912.richmva.wayport.net

Description . . . . . . . . (2.4GHz and 5GHz)

Physical Address. . . . . .

DHCP Enabled. . . . . . . .

Autoconfiguration Enabled . Link-local IPv6 Address . . fe80::3122:85f1:77bc:140%12(Pr

IPv4 Address. . . . . . . .

Subnet Mask . . . . . . . .

Lease Obtained. . . . . . . AM

Lease Expires . . . . . . . AM

Default Gateway . . . . . .

DHCP Server . . . . . . . .

DHCPv6 IAID . . . . . . . .

DHCPv6 Client DUID. . . . . F0-1F-AF-22-12-E5 DNS Servers . . . . . . . . .

Primary WINS Server . . . . .

NetBIOS over Tcpip. . . . . .
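By the way, if you'd like to watch a host perform this name-to-IP lookup for itself, here's a minimal Python sketch. It simply asks the operating system's resolver, which in turn queries the DNS server the host was given by hand or learned via DHCP. The "hostb" name is just the example from this section - substitute any name your own DNS server can actually resolve.

import socket

hostname = "hostb"

try:
    ip_address = socket.gethostbyname(hostname)   # the resolver sends the DNS query for us
    print(f"{hostname} resolves to {ip_address}")
except socket.gaierror:
    print(f"Couldn't resolve {hostname} - check your DNS server settings")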

Now that Host A has the IP address of Host B, we’re halfway home, but now Host A needs that MAC address to send data successfully to Host B -- and that’s where the Address Resolution Protocol (ARP) comes in.

The Address Resolution Protocol

We have a DNS server that took care of the hostname-IP address resolution, but now we need the MAC address of Host B, and there is no ARP server on the network. That's because there is no such thing as an "ARP Server". The ARP process uses a series of broadcasts and replies. Host A is the host needing the MAC

address of a remote device, so it'll be Host A that sends out the initial ARP Request. This request is a Layer 2 broadcast, meaning…

The source MAC address will be that of Host A
The destination MAC address will be ff-ff-ff-ff-ff-ff
The source IP address will be that of Host A
The destination IP address will be that of Host B (learned via DNS)

In this particular topology, all other devices will see the request. Host C and the DNS server will see the destination IP address of 10.1.1.2,

see that it’s not theirs, and will simply ignore the ARP Request. Host B will see that it is an ARP Request and that it does match its IP address, so Host B will send an ARP Reply containing its MAC address.

Thanks to DNS and ARP, Host A now has the IP and MAC address of Host B, and can successfully send data to that host.

This is a nice, neat little process, but those ARP requests are broadcasts, and we’re always interested in minimizing those. Wouldn’t it be great if the PC could keep a list of MAC addresses it learns for a while – a cache of addresses, perhaps? Behold the ARP Cache! These

caches contain an IP address-to-MAC address mapping table such as the one shown here on a Windows PC with the command arp -a:

C:\>arp -a
Internet Address      Physical Address
192.168.5.1           00-90-f

This table mentions “Physical Address”, one of several names used to describe the MAC address. It’s still a Layer 2 address, despite that name; it’s called a “Physical Address” because it physically exists on the card.

After learning Host B’s IP and MAC addresses, Host A enters them into its ARP cache. The next time Host A needs to send data to Host B, the information needed to do so is right there in the ARP cache and no ARP Request needs to be sent. In our previous example, we had a switch in the middle of our network, and that didn’t affect the ARP process at all. Switches have no problem forwarding the broadcasted ARP Request, and since the ARP Reply is a unicast, there’s no issue there.
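If it helps to see that "check the cache first" logic spelled out, here's a toy Python sketch of it. This is strictly an illustration - the function names and the MAC address are made up, and a real host does all of this down in its IP stack.

arp_cache = {}   # IP address -> MAC address mappings we've already learned

def resolve_mac(ip_address):
    """Return the MAC for an IP, broadcasting an ARP Request only on a cache miss."""
    if ip_address in arp_cache:
        return arp_cache[ip_address]           # cache hit - no broadcast needed
    mac = broadcast_arp_request(ip_address)    # cache miss - the ARP Request goes out
    arp_cache[ip_address] = mac                # remember the answer for next time
    return mac

def broadcast_arp_request(ip_address):
    # Stand-in for the real broadcast-and-reply exchange described above.
    return "00-90-f2-11-22-33"

print(resolve_mac("192.168.5.1"))   # first call triggers the "broadcast"
print(resolve_mac("192.168.5.1"))   # second call is answered straight from the cache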

If we have a router in the mix, we do have a problem, because routers don’t forward broadcasts. (For clarity’s sake, the switches and cables are not included in the following illustrations.)

All is not lost! Using Proxy ARP, the router can answer the request with the MAC address of the interface that received the ARP Request – in this case, Ethernet0.

Host A has no idea the MAC address it received in the ARP Response is actually not that of Host B, but rather that of the Ethernet0 interface of the router. Host A doesn’t care, either. All Host A knows is that it sent an ARP Request and got a Response. Now

when Host A sends data to Host B, the data will have the following destinations:

IP destination address is Host B's IP address
MAC destination address is the one assigned to the router's E0 interface

Proxy ARP is a little odd to work with at first, and can result in some unexpected source and destination MAC addresses at different points in your network. If you're not aware of these changes in MAC addresses when you use a network traffic

analysis tool, you may wonder where those addresses are coming from! Let's walk through an example of when, how, and if MAC and IP addresses change as a result of Proxy ARP. Two simple rules to remember:

The source and destination MAC addresses will only change when routers are involved, since that's when Proxy ARP has to step in.
The source and destination IP addresses do not change, period.

For this example, we'll use the same network, but with MAC and IP addresses assigned to the hosts and the router's Ethernet interfaces.

Host A sends an ARP Request for the MAC address of Host B. The router receives the Request on its E0 interface, and sends a Proxy ARP response back to Host A. This response tells Host A the MAC address of the device at 10.3.1.2 is 11-11-11-11-11-11, which is actually the MAC address on the router's E0 interface.

As a result, when Host A sends data to Host B…

The source IP and MAC addresses are that of Host A
The destination IP address is that of Host B
BUT… the destination MAC is that of R1's E0 interface!

Now the router will forward that data to Host B. The destination IP and MAC will be that of Host B, but the source MAC address will be that of the router’s E1 interface -

the interface that’s forwarding the data.

As a result, Host A's ARP cache will look like this:

C:\>arp -a
Internet Address      Physical Address
10.3.1.2              11-11-11-11-11-11

The IP address never changes in this scenario, but the MAC address does as a result of Proxy ARP. That's enough ARPing around for a while! Let's talk about how these hosts get their own IP address in the first place!

Dynamic Host Configuration Protocol

A host needs some very important information before it can even start to act as part of the network:

What's my IP address?
What's my network mask?
What are the IP addresses of the DNS servers?
What's my default gateway?

We have two options for getting that info to the host:

Visit each workstation and configure the information manually
Enable each workstation for DHCP

You might think there's no big

difference, since each option involves visiting the workstations. The difference is in how many times you end up doing that. Sooner or later, some of that information is going to change and the hosts will need to know about these changes. If you previously hardcoded the information on all hosts, you’ll now have to go out and visit every workstation again and change the information manually. If you used DHCP to begin with, you now just have to change the

information on the DHCP server, and then push that information out to the hosts. When the choice is between visiting the hundreds or thousands of hosts on a typical network or using DHCP to dynamically handle IP address assignment information, there really is no choice. When you have the choice to do something manually or to let the router or switch do the work, it's almost always a great idea to let the hardware do the work. That doesn't make you lazy, it makes you smart. Your time is your most valuable

resource – make it count. Also, consider our mobile users – and today, everybody’s a mobile user! If you’re a laptop owner, there’s no way hardcoding that IP address information on your laptop would work out, since you’ll need to be on one network at the coffee shop, another at the airport, another at the grocery store, and on and on… In short, today’s networks demand dynamic assignment of IP address information, and that’s what DHCP is all about. There are four basic steps that

allow a host (the DHCP Client) to acquire all of this information from a DHCP Server, and you can easily keep them in mind by using the acronym DORA:

Discover
Offer
Request
Acknowledgement

Let's take a look at each of those steps. The client begins the process by sending a DHCP Discover message

with the destination IP address 255.255.255.255 (a broadcast). The client has to send a broadcast, since it has no idea where the DHCP server is. Basically, this step is the host yelling “Hey, anybody out there a DHCP Server?”

Any DHCP server that receives that

message will respond with a DHCP Offer. The Offer contains the following:

The IP address the DHCP Server is offering the Client
The network mask the DHCP Server is offering the Client
The amount of time the Client can keep this information if the Offer is accepted (the lease)
The IP address of the DHCP Server making the offer

Since the original DHCP Discover message sent by the host is a broadcast, more than one DHCP Server may see it and respond with an Offer. That's great, since we always like having a choice! The choice made by the host in this example isn't exactly made by a scientific method, though – the host simply

accepts the first Offer it gets. The DHCP Offer is also a broadcast. That’s necessary since the client doesn’t have an IP address yet. By sending this message as a broadcast, the client is guaranteed to see it (and other clients will just ignore it). The client then broadcasts a DHCP Request, identifying the DHCP Offer it’s accepting. This is usually a broadcast, and the DHCP Server whose offer is being accepted can tell that via a “Transaction ID” value in the DHCP Request message. (Under certain

circumstances, such as the client renewing a previous lease, this can be a unicast.)

Finally, the DHCP Server whose offer was accepted sends a DHCP Acknowledgement back to the client with any other information the

Client needs. The client doesn’t yet have its IP address officially, so this message is also a broadcast. The process is now complete and the client’s all set. Of course, we’re going to verify that in just a moment!
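If it helps to see DORA laid out in one place, here's a small Python summary of the exchange we just walked through. The 0.0.0.0 client source address is standard DHCP behavior (you'll see it again when we look at a Discover packet arriving at a router), and "server IP" is simply a placeholder for whatever address the DHCP server is using.

# Each entry: (message, sender, source IP, destination IP)
dora = [
    ("Discover",        "Client", "0.0.0.0",     "255.255.255.255"),
    ("Offer",           "Server", "<server IP>", "255.255.255.255"),
    ("Request",         "Client", "0.0.0.0",     "255.255.255.255"),
    ("Acknowledgement", "Server", "<server IP>", "255.255.255.255"),
]

for message, sender, source, destination in dora:
    print(f"{message:<16} {sender:<7} {source:<16} -> {destination}")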

You can see the IP address a host

has been assigned, along with the lease length and other information, with ipconfig /all. You can verify that the host is running DHCP with this command as well. DHCP Enabled. . . . . . . . . Autoconfiguration Enabled . .

Link-local IPv6 Address . . . fe80::3122:85f1:77bc:140%12(Pr IPv4 Address. . . . . . . . . Subnet Mask . . . . . . . . . Lease Obtained. . . . . . . . 7:21:59 AM

Lease Expires . . . . . . . . 8:54:37 AM Default Gateway . . . . . . . DHCP Server . . . . . . . . . DHCPv6 IAID . . . . . . . . . DHCPv6 Client DUID. . . . . . 1F-AF-22-12-E5

I’m sure you noticed the DHCPv6 information near the bottom of that output. We’ll discuss DHCPv6 in the IP Version 6 section of this course!

Working Around Routers With DHCP

The DHCP process is nice and clean as long as we have a DHCP server on each physical subnet in our network, like this…

… but what happens when we get routers involved?

Our nice, clean process just got a bit messy. Why? Routers don’t forward broadcasts, and the entire DHCP process is dependent on broadcasts.

To solve this problem, we need a little help from the ip helper-address command, which allows the router to serve as a DHCP relay agent. The "helper" part comes in as the router is now allowed to take an incoming broadcast and make two major address changes to that packet:

The source IP address is changed to that of the router interface that received the packet
The destination IP address is changed to the IP address of the DHCP server.

Since that destination address of the packet is no longer a broadcast, the router can route it successfully. Slick, eh? First things first! We need the ip helper-address command on the router interface that will be receiving DHCP broadcasts from clients. In this network, we'd only need it on the Ethernet0 interface on R1, but it's commonplace to see it on all router interfaces that face

LANs.

R1 config:

interface ethernet0
 ip helper-address 172.23.23.100

Here are the source and destination IP addresses of a DHCP Discover packet as it arrives on the E0 interface of R1:

Source: 0.0.0.0
Destination: 255.255.255.255 (broadcast)

The router then changes those two values before routing the packet:

Source: 172.12.123.1 (R1's E0 interface)
Destination: 172.23.23.100 (The DHCP Server)
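If you'd like to see that address rewrite as a before-and-after, here's a toy Python sketch of just that bookkeeping. The addresses are the ones from this example, and a real relay agent obviously does much more than shuffle two fields.

def relay_dhcp_broadcast(packet, relay_interface_ip, dhcp_server_ip):
    """Toy model of the ip helper-address rewrite described above."""
    rewritten = dict(packet)
    rewritten["source"] = relay_interface_ip    # the address of the interface that heard the broadcast
    rewritten["destination"] = dhcp_server_ip   # now a routable unicast address
    return rewritten

discover = {"source": "0.0.0.0", "destination": "255.255.255.255"}
print(relay_dhcp_broadcast(discover, "172.12.123.1", "172.23.23.100"))
# {'source': '172.12.123.1', 'destination': '172.23.23.100'}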

When the DHCP server replies with an offer, the router will see the message is destined for an interface on the router itself. That’s a tipoff to the router that this is a DHCP message that needs to be relayed, so it sets the destination address to 255.255.255.255 and then relays away!

In networking, a solution tends to lead to another question, and a logical question here would be “What if we have more than one DHCP server on our network?”

No worries, we’ll just configure multiple helper addresses!

R1 config:

interface ethernet0
 ip helper-address 172.23.23
 ip helper-address 172.23.23

This option does allow

communication from that remote LAN to both DHCP servers, but doesn’t perform any kind of workload balancing (or “load balancing”, in network-speak). To do that, we’d have to start adjusting the delay values on those DHCP servers, and that’s way beyond the scope of your CCENT and CCNA studies. Besides, we have to leave something for later studies!

Configuring DHCP With Cisco Routers

Our Cisco router can serve as a DHCP server, and all information about the packet types, messages, etc., stands. We create an address pool with ip dhcp pool < POOLNAME >. Once we’re in DHCP pool configuration mode, you can create… well, we’ll see! The odd thing about using a Cisco router as a DHCP server is that your config starts by identifying the

addresses you do NOT want to be assigned to hosts from the address pool. Let's say you're assigning addresses from the 10.1.1.0 /24 subnet, and you don't want to have the first five addresses in that subnet assigned to hosts. Identify the excluded addresses with the ip dhcp excluded-address command, which is NOT found in the ip dhcp pool command options – it's a command unto itself:

R1(config)#ip dhcp excluded-address ?
  A.B.C.D  Low IP address

R1(config)#ip dhcp excluded-address 10.1.1.1 ?
  A.B.C.D  High IP address
  <cr>

R1(config)#ip dhcp excluded-address 10.1.1.1 10.1.1.5

Note that you’re identifying a range with those two addresses. You can also use this command to enter a single excluded address. Let’s say the Ethernet interface facing the

inside hosts uses the IP address 10.1.1.100. We want to exclude that from the pool, and that’s no problem – you can use the ip dhcp excluded-address command as many times as necessary.

R1(config)#ip dhcp excluded-address 10.1.1.1 10.1.1.5
R1(config)#ip dhcp excluded-address 10.1.1.100

We'll start our pool config with the ip dhcp pool command:

R1(config)#ip dhcp pool ?
  WORD  Pool name

R1(config)#ip dhcp pool NETWOR
R1(dhcp-config)#

We drop into DHCP config mode, and that's where our options finally show up! A lot of them! There are almost 30 options with this command, so I've left in only the ones we use most often:

R1(dhcp-config)#?
DHCP pool configuration commands:
  default-router  Default routers
  dns-server      DNS servers
  domain-name     Domain name
  exit            Exit from DHCP pool configuration mode
  host            Client IP address and mask
  lease           Address lease time
  network         Network number and mask
  no              Negate a command or set its defaults
  relay           Function as a DHCP relay agent

A typical DHCP configuration, along with a little help from IOS Help to see the options:

R1(dhcp-config)#default-router ?
  Hostname or A.B.C.D  Router's address
R1(dhcp-config)#default-router

R1(dhcp-config)#dns-server ?
  Hostname or A.B.C.D  Server's address
R1(dhcp-config)#dns-server 10.
  Hostname or A.B.C.D  Server's address
R1(dhcp-config)#dns-server 10.

R1(dhcp-config)#domain-name ?
  WORD  Domain name
R1(dhcp-config)#domain-name th

R1(dhcp-config)#lease ?
  <0-365>   Days
  infinite  Infinite lease
R1(dhcp-config)#lease 10 ?
  <0-23>  Hours
R1(dhcp-config)#lease 10 0 ?
  <0-59>  Minutes
R1(dhcp-config)#lease 10 0 0

Watch those lease numbers! And you know what I’m going to say next – always use IOS Help to verify numeric values of a command before entering the

command! That’s enough DHCP for now – let’s head to the next section! Before you do, though, head out to Udemy and join my free and almostfree CCNA, CCNP, CCENT, and Security Video Boot Camps! You can join my 27-hour CCNA Video Boot Camp for just $44 with the BULLDOG60 coupon code, and all videos are fully downloadable!

https://www.udemy.com/u/chrisbryan See you there!

Router Memory, Configs, and More

This is a great section full of “nuts and bolts” – quite a few details that are important for both your exam performance and your network’s performance. I guarantee you’ll use the information in this section for the rest of your Cisco routing and switching career. This info really is that important. You can’t

troubleshoot effectively without knowing this material. You also can't pass your exams without it, so let's have at it!

Router And Switch Memory

The memory components and functions discussed in this section are the same for routers and switches, but to keep from saying "routers and switches" 500 times, I'll just say "routers". Let's examine these four memory components closely and see what each one does -- and what is retained and NOT retained on a reload.

ROM: Read-Only Memory. ROM stores the router's bootstrap startup program, operating system software, and power-on diagnostic test programs (POSTs).

Flash Memory: Generally referred to simply as "flash", the IOS images are held here. Flash is erasable and reprogrammable ROM. Flash memory content is retained by the router on reload.

RAM: Random-Access Memory. Stores operational information such as routing tables and the running configuration file. RAM contents are lost when the router is powered down or reloaded.

NVRAM: Non-volatile RAM. NVRAM holds the router's startup configuration file. NVRAM contents are not lost when the router is powered down or reloaded.

Some important comparisons:

RAM contents are lost on reload, where NVRAM and Flash contents are not.
NVRAM holds the startup configuration file, where RAM holds the running configuration file.

We'll talk about the startup and running configuration files later in this section. Right now, let's take a look at the boot process of a Cisco router, and then talk about the dreaded Setup Mode!

The Router Boot Process

When a Cisco router powers up, it first runs a series of POSTs (Power-On Self Tests). A POST is a diagnostic test designed to verify

the basic operation of the network interfaces, memory, and CPU. Depending on the model of router you’re using, you’ll see messages regarding the POSTs passed as the router boots. Here, I’ve reloaded a Cisco 2950 switch, and you can see some of the POSTs being run and thankfully passed at the very beginning of the bootup process.

Initializing flashfs…
Done initializing flashfs.
POST: System Board Test : Passed
POST: Ethernet Controller Test : Passed
ASIC Initialization Passed
POST: FRONT-END LOOPBACK TEST : Passed

Here are the POSTs shown when I reloaded a 2960 switch:

POST: CPU MIC register Tests : Begin
POST: CPU MIC register Tests : End, Status Passed
POST: PortASIC Memory Tests : Begin
POST: PortASIC Memory Tests : End, Status Passed
POST: CPU MIC interface Loopback Tests : Begin
POST: CPU MIC interface Loopback Tests : End, Status Passed
POST: PortASIC RingLoopback Tests : Begin
POST: PortASIC RingLoopback Tests : End, Status Passed
POST: PortASIC CAM Subsystem Tests : Begin
POST: PortASIC CAM Subsystem Tests : End, Status Passed
POST: PortASIC Port Loopback Tests : Begin
POST: PortASIC Port Loopback Tests : End, Status Passed

"Status Passed" is always a good thing! POSTs are particularly effective at detecting major problems, such as a broken fan, early in the boot process. If the POST detects a critical problem that would cause the router to overheat after booting, the POST will fail, give you a clear message as to why the POST failed, and the boot process stops. In the event a fan fails, you'll see a message regarding an "environmental factor". And once that pain in your stomach

that feels like someone kicked you goes away, you can start troubleshooting. (Seriously, the first time you see a POST fail, it’s painful!) Let’s assume our fictional router has a good fan and move forward! After the router passes the POST, it looks for a source from which to load a valid Internetwork Operating System (IOS) image. The router has three sources from which it can load an IOS image, and it’s a good idea to know these sources and the order in which the router will look in each for that image:

1. Flash memory (the default).
2. A TFTP server (Trivial File Transfer Protocol).
3. Read-Only Memory (ROM).

To change that order, a change must be made to the configuration register. It's similar to the Microsoft Registry in that you should never change this value unless you are sure of the result. More on that later. Once the IOS is found, the router looks for a valid startup configuration file. By default, the

router will look for the startup configuration file in Non-Volatile RAM (NVRAM). If there's no startup file there, the router looks for a TFTP Server. If no valid startup configuration file is found, the router prompts you to enter setup mode, where the router runs the system configuration dialogue, a series of questions involving basic router setup. A lot of questions.

Setup Mode, The CLI, and Pain

When you take a Cisco router out of

the box and boot it up for the first time, it's dumber than a bag of rocks. Well, not quite. It's not dumb. You just haven't told it anything yet. A router doesn't automagically know what IP addresses you want to assign to its interfaces, what security features you do and do not want to run, or any of your other preferences. We have two ways to tell it these things:

Setup Mode
Manual configuration via the Command-Line Interface (CLI)

Seems like a pretty simple choice, doesn't it? Well…… I'm all for automating processes and letting devices dynamically learn things that we could tell it statically (MAC addresses, for instance). That's not because I'm lazy, it's because allowing dynamic learning is generally more efficient than maintaining static configurations. However, Setup Mode is not

dynamic. You’re still going to have to tell the router what it needs to know, and with Setup Mode, you do so by answering a series of questions. A lot of questions. Here’s the prompt to enter Setup Mode:

--- System Configuration Dialog ---

Would you like to enter the initial configuration dialog? [yes/no]:

If you answer “yes”, you’re going to

enter Setup Mode, which is a fancy way of saying the device is going to start asking you questions about what you do and do not want to configure. If you answer “no”, you’ll be taken to the Command-Line Interface (CLI), and you’ll have to configure the device with no prompting. Sounds like choosing Setup Mode is a no-brainer! There’s nothing technically wrong with Setup Mode, but I’ve seen many admins that get tired of answering questions in a few minutes and wish they could just forget the whole thing

and get back to the CLI. ( I personally lose my patience with Setup Mode in about 30 seconds.) Let’s answer “y” to that innocent little question and see what Setup Mode is all about.

Would you like to enter the initial configuration dialog? [yes/no]: y

At any point you may enter a question mark '?' for help.
Use ctrl-c to abort configuration dialog at any prompt.
Default settings are in square brackets '[]'.

Basic management setup configures only enough connectivity for management of the system, extended setup will ask you to configure each interface on the system

Would you like to enter basic management setup? [yes/no]:

Sure, we’ll go with basic management setup. Sounds great! By the way, when you see two responses in the brackets, you have to manually enter one of them – you can’t just hit Enter. When you see one value in brackets, like [Router], that’s the default response. To accept that,

just hit the Enter key, which is what I did here.

Configuring global parameters: Enter host name [Router]:

The enable secret is a password used to protect access to privileged EXEC and configuration modes. This password, after entered, becomes encrypted in the configuration.
Enter enable secret:

Understandably, there’s no default for the enable secret password! Let’s say that we want to enter our enable secret password later, or not to configure one at all. Let’s see if

we can get around this…

Enter enable secret:
No defaulting allowed
Enter enable secret:
No defaulting allowed
Enter enable secret:

Nope! This is one reason many admins don’t care for Setup Mode. You’re going to be asked questions regarding services you have no intention of running, and believe me, you’re going to get tired of it. How do we get out of this mode without saving changes? There was a hint earlier in the config…..

At any point you may enter a question mark '?' for help.
Use ctrl-c to abort configuration dialog at any prompt.
Default settings are in square brackets '[]'.

Simple enough, as long as you noticed that at the beginning! Let’s try the ctrl-c keystroke and see what happens.

Configuration aborted, no changes made.

Press RETURN to get started!

We’re thrown out of Setup Mode and once you hit RETURN, you’re back at the command-line interface

(CLI). There’s nothing technically wrong with Setup Mode. It’s just unwieldy, and most admins want to get out of it the first time they try it. Make sure you keep that keystroke in mind for both the exam and working with real-world networks! As for configuring from the CLI, all you have to do is type enable at that prompt, then you’re ready to enter configuration mode with conf t (short for “configure terminal”) Finally, if you’d like to enter Setup Mode from the router prompt, simply type setup.

R2#setup

--- System Configuration Dialog ---

Continue with configuration dialog? [yes/no]:

Fundamental Router (And Switch) Commands

To see the active configuration on the router, enter the command show running-config. There won't be much of a config on there now, but we'll take care of that soon. This router has multiple Serial interfaces, and since their default

configs all look the same, I've cut a few of them out for clarity.

Router#show run
Building configuration…

Current configuration:
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
hostname Router
ip subnet-zero
!
interface Ethernet0
 no ip address
 shutdown
!
interface Serial0
 no ip address
 shutdown
!
line con 0
 transport input none
line aux 0
line vty 0 4

We worked with the console and VTY lines on a switch elsewhere in the course, and outside of the router having fewer VTY lines (the switch had 16 lines, the router only has five), the Telnet commands and privilege level 15 command work just the same here. Note these defaults in that config:

The router's name is "Router"
no service password-encryption is in the config by default
The interfaces are all shut down

Changing the router's name is easy, so we'll start there. Just use the hostname command followed by the name you want the router to have.

Router#conf t
Router(config)#hostname R1
R1(config)#

The router prompt changed immediately and the hostname of the router is now R1. You rarely have to reload a Cisco router in order for commands to take effect.

Let’s take a look at the no service password-encryption command. When we looked at enable passwords and enable secret passwords, we learned the enable password is in clear text by default, and the enable secret is encrypted by default. Let’s set an enable password of CISCO and an enable secret of CCNA and compare the two in the running configuration.

R1(config)#enable password CISCO
R1(config)#enable secret CCNA

While we’re at it, we’ll configure a

VTY line password of CCENT for our incoming Telnet users, and set the privilege level to 15 for all of them.

R1(config)#line vty 0 4
R1(config-line)#password CCENT
R1(config-line)#privilege level 15

What do the passwords look like in the running configuration?

enable secret 5 $1$.2Ut$44fqDN
enable password CISCO
!
line vty 0 4
 privilege level 15
 password CCENT
 login

The enable secret password is encrypted, but the enable and VTY line passwords are just sitting there in clear text, waiting to be read. What if we want to encrypt all of the passwords in the configuration? We’d run the command service password-encryption!

R1(config)#service password-encryption
R1(config)#

The Importance Of Keeping Extra Copies Of Your Configs

You know how many PC owners “mean” to keep backups of their hard drives, but never really do anything about it until their hard drive fails? Unfortunately, there are network admins out there who aren’t as diligent with keeping up-to-date backups of their router and switch configs as they should be. I know we’re always busy with

something, but this process doesn't take very long, and you're going to be very happy that you have them if you ever need them. Why would you ever need them? I've seen three different situations where these backups came in handy. In order of probability:

Network attackers changing or deleting the config
An honest mistake made by a network admin
Just as any file can become corrupt over time, so can a startup-config file

You can actually save a router's config to the flash memory of another router if that router has enough space in its flash, but the most common method of saving config files is to use a TFTP Server.

When you hear “TFTP Server”, you tend to think of a traditional server. Such a server can act as a TFTP Server, but so can a laptop or a Cisco router. I know network

admins who literally have dozens of backup router configs stored on their laptop, and when it comes to updating an IOS, a Cisco router can make a great TFTP Server. On occasion, changing a router’s IOS is more of an art form than a science. Sometimes it goes beautifully, on occasion it does not. Here’s a quick real-world tip regarding config file backups. Before working on a client’s config file, just copy and paste the config from the router to a Notepad or Word file. If things go badly with your config changes, you don’t have

to guess about the startup config – you've got a copy right there.

Updating A Router IOS

The trickiest part of changing a router's IOS image might be getting the image you want! You can download IOSes from Cisco, but a Cisco Connection Online (CCO) login is not enough. The rules change as to who can and cannot download IOS images, so I won't list those rules here, but you can find out quickly by searching Cisco's site. Just keep in mind that

you can’t just go out to Cisco’s website to download the latest IOS image for your router on a whim. Once you get the IOS you want, that file needs to be put on a TFTP server the router has access to. That TFTP server can be another Cisco router or a typical server, but generally it’s going to be your laptop.

If you have to perform an IOS upgrade, you might be tempted to do so remotely rather than physically visit the client site - until you see the following warning! I've telnetted into a router and issued the copy tftp flash command, and that means we're copying from a TFTP server to the router's Flash. Here's the warning I received, and I've bolded the very, very important part:

BRYANT_AS_5#copy tftp flash
**** NOTICE ****
Flash load helper v1.0
This process will accept the copy options and then terminate the current system image to use the ROM based image for the copy.
Routing functionality will not be available during that time.
If you are logged in via telnet, this connection will terminate.
Users with console access can see the results of the copy operation.
[There are active users logged into the system]
Proceed? [confirm]

As you guessed, I answered “n” and left the router up and running. Once you do finish copying the new IOS to Flash, this is one of the rare occasions where you have to reload the router for the change to take effect. Before copying to Flash,

though, run show flash to see how much room you have left! The following output indicates we don’t have much room left on this particular router, so copying a new IOS image to this router without deleting the current one is just about impossible. BRYANT_ADVANTAGE_2#show flash

System flash directory:
File  Length   Name/status
  1   7432656  c2500-i-l.120-2
[7432720 bytes used, 955888 available, 8388608 total]
8192K bytes of processor board System flash (Read ONLY)

The Configuration Register

One day, you will have to change the config register, most likely to perform a password recovery. I will just give this warning one time: If you change the register to an incorrect value and then reload the router, you can cripple the router and even Cisco can’t bring it back. You really just have to be careful and get the right value for what you’re trying to do before you change the config register. Another key is to change the register back to the original value once you’re done

with your work. To see the current config register value, run the always-helpful command show version. The config register value is at the very bottom of that output, but while we’re here, let’s take a look at all of this information. R1#show version

Cisco IOS Software, C2600 Software, Version 12.4(15)T12, RELEASE SOFTWARE (fc3)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2010 by Cisco Systems, Inc.
Compiled Fri 22-Jan-10 00:53 by

ROM: System Bootstrap, Version SOFTWARE (fc1)

R1 uptime is 6 minutes
System returned to ROM by power-on
System image file is "flash:c2 15.T12.bin"

< Huge Security Warning Edited Out >

Configuration register is 0x2102

The first bolded field tells you what IOS software and version this

router is running. The second bolded field shows you how long the router’s been up, why the router went down the last time it did so (“reload”), and the IOS file contained in flash. Finally, the all-important config register value. The value shown, 0x2102, is the factory default. This value forces the router to look in its own Flash memory for a valid IOS on startup. The config register value requires a reload for a changed value to take effect. I’ll change this value to 0x2142 and run show version again,

cropping out all information except the config register. (The register setting 0x2142 forces the router to bypass the startup configuration file kept in NVRAM.)

Router1(config)#config-register 0x2142
Router1#show version

Configuration register is 0x2102 (will be 0x2142 at next reload)

I don’t mean to scare you away from this command, and the odds are that you’re going to change more than one configuration register setting in your career. Like debugs,

the config-register command should be used with caution. Another common value used with config-register is 0x2100, which boots the router into ROM Monitor mode. To review these common configuration register settings:

0x2102: The default. Router looks for a startup configuration file in NVRAM and for a valid IOS image in Flash.
0x2142: NVRAM contents are bypassed, startup configuration is ignored.
0x2100: Router boots into ROM Monitor mode.

A real-world reminder: When you change the configuration register value to perform password recovery, don't forget to change it back and then reload the router! Let's move on and get an introduction to IP addressing and the fundamentals of the routing process!

IP Addresses and the Routing Process

For one host to successfully send data to another, the sending host needs two destination addresses:

Destination MAC address (Layer 2)
Destination IP address (Layer 3)

And yes, you have heard that

before! In this section, we’re going to concentrate on Internet Protocol (IP) addressing. IP addresses are often referred to as “Network addresses” or “Layer 3 addresses”, since that is the OSI layer at which these addresses are used. The IP address format you’re familiar with - addresses such as “192.168.1.1” - are IP version 4 addresses. That address type is the focus of this section. IP version 6 addresses are now in use, and they’re radically different from IPv4 addresses. I’ll introduce you

to IPv6 later in this course, but unless I mention IPv6 specifically, every address you'll see in this course is IPv4. The routing process and IP both operate at the Network layer of the OSI model, and the routing process uses IP addresses to move packets across the network in the most effective manner possible. In this section, we're going to first take a look at IP addresses in general, and then examine how routers make a decision on how to get a packet from source to destination. The routing examples in this section

are not complex, but they illustrate important fundamentals that you must have a firm grasp on before moving on to more complex examples. To do any routing, we’ve got to understand IP addressing, so let’s start there!

IP Addressing and an Introduction to Binary Conversions

If you've worked as a network admin for any length of time, you're already familiar with IP addresses. Every PC on a network will have one, as will other devices such as printers. The term for a network device with an IP address is host, and I'll try to use that term as often as possible to get you used to it! The PC…err, the host I'm creating this document on has an IP address, shown here with the Microsoft command ipconfig.

C:\>ipconfig

Windows IP Configuration

Ethernet adapter Local Area Connection:
   IP Address. . . . . . . . . . : 192.168.1.100
   Subnet Mask . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . : 192.168.1.1

All three values are important, but we’re going to concentrate on the IP address and subnet mask for now. We’re going to compare those two values, because that will allow us to see what network this particular host belongs to. To perform this comparison, we’re going to convert both the IP address and the subnet mask to binary strings. You’ll find this to be an easy

conversion with practice. First we'll convert the IP address 192.168.1.100 to a binary string. The format that we're used to seeing IP addresses take, like the 192.168.1.100 shown here, is a dotted decimal address. Each one of those numbers in the address is a decimal representation of a binary string, and a binary string is simply a string of ones and zeroes. Remember - "it's all ones and zeroes"! We'll convert the decimal 192 to

binary first. All we need to do is use the following series of numbers and write the decimal that requires conversion on the left side:

192     128   64   32   16   8   4   2   1

All you have to do now is work from left to right and ask yourself one question: “Can I subtract this number from the current remainder?” Let’s walk through this example and you’ll see how easy it is! Looking at that chart, ask yourself “Can I

subtract 128 from 192?" Certainly we can. That means we put a "1" under "128".

192     128   64   32   16   8   4   2   1
         1

Subtract 128 from 192 and the remainder is 64. Now we ask ourselves "Can I subtract 64 from 64?" Certainly we can! Let's put a "1" under "64".

192     128   64   32   16   8   4   2   1
         1     1

Subtract 64 from 64, and you have

zero. You’re practically done with your first binary conversion. Once you reach zero, just put a zero under every other remaining value, and you have your binary string!

192     128   64   32   16   8   4   2   1
         1     1    0    0   0   0   0   0

The resulting binary string for the decimal 192 is 11000000. That’s all there is to it! If you know the basics of binary and decimal conversions, AND practice these skills diligently, you can answer any subnetting question

Cisco asks you. I’ll go ahead and show you the entire binary string for 192.168.1.100, and the subnet mask is expressed in binary directly below it.

192.168.1.100 = 11000000 10101000 00000001 01100100
255.255.255.0 = 11111111 11111111 11111111 00000000
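If you'd like to check the second octet yourself, here's the same left-to-right subtraction walkthrough for the decimal 168 - my own scratch work, using the exact process we just used for 192:

168     128   64   32   16   8   4   2   1
         1     0    1    0   1   0   0   0

Can I subtract 128 from 168? Yes - put a 1 under 128, and the remainder is 40.
Can I subtract 64 from 40? No - put a 0 under 64.
Can I subtract 32 from 40? Yes - put a 1 under 32, and the remainder is 8.
Can I subtract 16 from 8? No - put a 0 under 16.
Can I subtract 8 from 8? Yes - put a 1 under 8, the remainder is zero, and the rest are zeroes.

That gives us 168 = 10101000, the second group of bits in the string above.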

The subnet mask indicates where the network bits and host bits are. The network bits of the IP address are indicated by a “1” in the subnet mask, and the host bits are where the subnet mask has a “0”. This address has 24 network bits, and

the network portion of this address is 192.168.1 in decimal. Any IP addresses that have the exact same network portion are on the same subnet. If the network is configured correctly, hosts on the same subnet should be found on one “side” of the router, as shown below.

Assuming a subnet mask of 255.255.255.0 for all hosts, we have two separate subnets, 192.168.1.x and 192.168.4.x. What you don’t want is the following:

This could lead to a problem, since hosts in the same subnet are separated by a router. We’ll see

why this could be a problem when we examine the routing process later in this section, but for now keep in mind that having hosts in the same subnet separated by a router is not a good idea!

The IP Address Classes

Way back in the ancient times of technology - September 1981, to be exact - IP address classes were defined in RFC 791. RFCs are Requests For Comments, which are technical proposals and/or documentation. Not always the most exciting reading in the world, but it's well worth reading the RFC that deals with the subject you're studying. Technical exams occasionally refer to RFC numbers for a particular protocol or network service. To earn your CCENT and CCNA

certifications, you must know these address classes and be able to quickly identify what class an IP address belongs to. Here are the three ranges of addresses that can be assigned to hosts:

Class A: 1 - 126
Class B: 128 - 191
Class C: 192 - 223

The following classes are reserved and cannot be assigned to hosts:

Class D: 224 - 239. Reserved for multicasting, a topic not covered on the CCENT or CCNA exams, although you will need to know a few reserved addresses from that range. You'll find those throughout the course.

Class E: 240 - 255. Reserved for future use, also called "experimental addresses".

Any address with a first octet of 127 is reserved for loopback interfaces. This range is *not* for Cisco

router loopback interfaces. For your exams, I strongly recommend that you know which ranges can be assigned to hosts and which ones cannot. Be able to identify which class a given IP address belongs to. It’s straightforward, but I guarantee those skills will serve you well on exam day! The rest of this section concentrates on Class A, B, and C networks. Each class has its own default network mask, default number of network bits, and default number of host bits. We’ll manipulate these

bits in the subnetting section, and you must know the following values in order to answer subnetting questions successfully - in the exam room or on the job!

Class A:
Network mask: 255.0.0.0
Number of network bits: 8
Number of host bits: 24

Class B:
Network mask: 255.255.0.0
Number of network bits: 16
Number of host bits: 16

Class C:
Network mask: 255.255.255.0
Number of network bits: 24
Number of host bits: 8
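A couple of quick self-checks, using addresses I picked at random rather than addresses from any lab in this book:

150.10.1.1 - the first octet, 150, falls in the 128 - 191 range, so this is a Class B address with a default mask of 255.255.0.0 (16 network bits, 16 host bits).

200.5.5.5 - the first octet, 200, falls in the 192 - 223 range, so this is a Class C address with a default mask of 255.255.255.0 (24 network bits, 8 host bits).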

The RFC 1918 Private Address Classes

If you've worked on different production networks, you may have noticed that the hosts at different sites use similar IP addresses. That's because certain IP address ranges have been reserved for internal networks - that is, networks with hosts that do not need to communicate with other hosts outside their own internal network. Address classes A, B, and C all have their own reserved range of addresses. You should be able to recognize an address from any of

these ranges immediately.

Class A: 10.0.0.0 - 10.255.255.255
Class B: 172.16.0.0 - 172.31.255.255
Class C: 192.168.0.0 - 192.168.255.255

You should be ready to identify those ranges in that format, with the dotted decimal masks, or with prefix notation. (More about prefix notation later in this section.)

Class A: 10.0.0.0 255.0.0.0, or 10.0.0.0 /8
Class B: 172.16.0.0 255.240.0.0, or 172.16.0.0 /12
Class C: 192.168.0.0 255.255.0.0, or 192.168.0.0 /16

You may already be thinking "Hey, we use some of those addresses on

our network hosts and they get out to the Internet with no problem at all.” (It’s a rare network that bans hosts from the Internet today – that approach just isn’t practical.) The network services NAT and PAT (Network Address Translation and Port Address Translation) make that possible, but these are not default behaviors. We have to configure NAT and PAT manually. We’re going to do just that later in this course, but for now, make sure you know those three address ranges cold!
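A quick self-check with a couple of made-up addresses:

192.168.200.45 begins with 192.168, so it falls inside the Class C private block and is an RFC 1918 address.

172.33.1.1 begins with 172, but the second octet, 33, is outside the 16 - 31 range, so it is NOT in the Class B private block, even though it starts with 172.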

Introduction To The Routing Process

Before we start working with routing protocols, we need to understand the very basics of the routing process and how routers decide where to send packets. We'll take a look at a basic network and follow the decision-making process from the point of view of the host, then the router. We'll then examine the previous example in this section to see why it's a bad idea to have hosts from the same subnet separated by a router. Let's take another look at a PC's

ipconfig output.

C:\>ipconfig

Windows IP Configuration

Ethernet adapter Local Area Connection:

   IP Address: 192.168.1.100
   Subnet Mask: 255.255.255.0
   Default Gateway: 192.168.1.1

When this host is ready to send packets, there are two and only two possibilities regarding the destination address: It’s on the 192.168.1.0 255.255.255.0 network.

It’s not. If the destination is on the same subnet as the host, the packet’s destination IP address will be that of the destination host. In the following example, this PC is sending packets to 192.168.1.15, a host on the same subnet, so there is no need for the router to get involved. In effect, those packets go straight to 192.168.1.15.

192.168.1.100 now wants to send packets to the host at 10.1.1.5, and 192.168.1.100 knows it’s not on the same subnet as 10.1.1.5. In that case, the host will send the packets to its default gateway - in this case, the router’s ethernet0 interface. The transmitting host is basically saying

“I have no idea where this address is, so I’ll send it to my default gateway and let that device figure it out. In Cisco Router I trust!”
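Here's that decision in back-of-the-napkin form, using the addresses from this example. With a 255.255.255.0 mask, the first three octets are the network portion, so that's all the host has to compare:

192.168.1.100 and 192.168.1.15 both begin with 192.168.1 - same subnet, so the packets go directly to 192.168.1.15.

192.168.1.100 and 10.1.1.5 do not match in the network portion - different subnet, so the packets go to the default gateway, 192.168.1.1.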

When a router receives a packet, there are three possibilities regarding its destination:

Destined for a directly connected network.

Destined for a non-directly connected network that the router has an entry for in its routing table.

Destined for a non-directly connected network that the router does not have an entry

for. In each of these possibilities, the router will check the encapsulating frame for errors via the FCS (remember that?), and then go about the business of routing the packet. Let's take an illustrated look at each of these three possibilities.

How A Router Handles A Packet Destined For A Directly Connected Network

We'll use the following network in this section:

The router has two Ethernet interfaces, referred to in the rest of this example as “E0” and “E1”. The switch ports will not have IP

addresses, but the router’s Ethernet interfaces will – E0 is 10.1.1.2, E1 is 20.1.1.2. Host A sends a packet destined for Host B at 20.1.1.1. The router will receive that packet on its E0 interface and see the destination IP address of 20.1.1.1.

The router will then check its

routing table to see if there’s an entry for the 20.0.0.0 255.0.0.0 network. Assuming no static routes or dynamic routing protocols have been configured, the router’s IP routing table will look like this:

R1#show ip route
Codes: C - connected, S - static

Gateway of last resort is not set

C    20.0.0.0/8 is directly connected, Ethernet1
C    10.0.0.0/8 is directly connected, Ethernet0

See the “C” and the “S” next to the word “codes”? You’ll see anywhere from 15–20 different

types of routes listed there, and I’ve removed those for clarity’s sake. You don’t see the mask expressed as “255.0.0.0” - you see it as “/8”. This is prefix notation, and the number simply represents the number of 1s at the beginning of the network mask when expressed in binary. That “/8” is pronounced “slash eight”. 255.0.0.0 = binary string 11111111 00000000 00000000 00000000 = /8 The “C” indicates a directly connected network, and there is an entry for 20.0.0.0. The router will

then send the packet out the E1 interface and Host B will receive it.

Simple enough, right? Of course, the destination network will not always be directly connected. We’re not getting off that easy! It’s important to note that a router is going to look through its entire routing table for what we call the

“best match” – the entry that most closely matches the destination IP address of the packet. It’s for this reason that we like to keep our routing tables complete and concise. Route summarization is a great way to do that, and that’s a skill we’ll hit later in this course. Now back to our network!

How The Router Handles A Packet Destined For A Remote Network That Is Present - Or Not - In The Routing Table

Here's the topology for this example:

If Host A wants to transmit packets to Host B, there’s a problem. The first router that packet hits will not

have an entry for the 30.0.0.0 /8 network, will have no idea how to route the packets, and the packets will be dropped. There are no static routes or dynamic routing protocols in action on a Cisco router by default. Once we apply those IP addresses and then open the interfaces, there will be a connected route entry for each of those interfaces with IP addresses, but that’s it. When R1 receives the packet

destined for 30.1.1.2, R1 will perform a routing table lookup to see if there’s a route for 30.0.0.0. The problem is that there is no such route, since R1 only knows about the directly connected networks 10.0.0.0 and 20.0.0.0.

R1#show ip route
Codes: C - connected, S - static

Gateway of last resort is not set

C    20.0.0.0/8 is directly connected, Ethernet1
C    10.0.0.0/8 is directly connected, Ethernet0

Without some kind of route to 30.0.0.0, the packet will simply be

dropped by R1.

We can use a static route or a dynamic routing protocol to resolve this. Let’s go with static routes, which are created with the ip route command. The interface named at the end of the command is the local router’s exit interface. (Plenty more

on this command coming in a later section!)

R1(config)#ip route 30.0.0.0 255.0.0.0 ethernet1

The routing table now displays a route for the 30.0.0.0 /8 network. The letter “S” indicates a static route.

R1#show ip route
Codes: C - connected, S - static

C    20.0.0.0/8 is directly connected, Ethernet1
C    10.0.0.0/8 is directly connected, Ethernet0
S    30.0.0.0/8 is directly connected, Ethernet1

R1 now has an entry for the

30.0.0.0 network, and sends the packet out its E1 interface. R2 will have no problem forwarding the packet destined for 30.1.1.2, since R2 is directly connected to that network.

If Host B wants to respond to Host A’s packet, there would be a problem at R2, since the incoming

destination address of the reply packet would be 10.1.1.1, and R2 has no entry for that network. A static route or dynamic routing protocol would be needed to get such a route into R2’s routing table. The moral of the story: Just because “Point A” can get packets to “Point B”, it doesn’t mean B can get packets back to A!
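To give Host B a path back to Host A, R2 needs a route to the 10.0.0.0 network, and a static route would do it. Here's a minimal sketch - I'm assuming R2's interface facing R1 is its E0, since the diagram doesn't name R2's interfaces:

R2(config)#ip route 10.0.0.0 255.0.0.0 ethernet0

With that route in place (or a dynamic routing protocol doing the same job), the reply packets destined for 10.1.1.1 are forwarded to R1, which is directly connected to the 10.0.0.0 network.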

Why We Want To Keep Hosts In One Subnet On One Side Of The Router

Earlier in this section, the following topology served as an example of how not to configure a network.

Now that we’ve gone through some routing process examples, we can

see why this is a bad setup. Let’s say a packet destined for 192.168.1.17 is coming in on another router interface.

The router receives that packet and performs a routing table lookup for 192.168.1.0 255.255.255.0, and

sees that network is directly connected via interface E0. The router will then send the packet out the E0 interface, even though the destination IP address is actually found off the E1 interface!

In future studies, you’ll learn ways to get the packets to 192.168.1.17. For your CCENT and CCNA exams, keep in mind that it’s a good practice to keep all members of a given subnet on one side of a router. It’s good practice for production networks, too!

Secondary IP Addressing

The following info is going to violate every rule of IP addressing you know, and some you'll learn in the future, so have some duct tape ready, because I'm about to blow your mind and you'll need something to put it back together. If absolutely necessary, you can assign multiple IP addresses to a router interface with the secondary option, as shown here:

R1(config)#int fast 0/0
R1(config-if)#ip address 172.12.13.1 255.255.255.0
R1(config-if)#ip address 172.12.14.1 255.255.255.0 ?
  secondary  Make this IP address a secondary address
  <cr>

R1(config-if)#ip address 172.12.14.1 255.255.255.0 secondary

R1#show ip route

     172.12.0.0/24 is subnetted, 2 subnets
C       172.12.13.0 is directly connected, FastEthernet0/0
C       172.12.14.0 is directly connected, FastEthernet0/0

Using secondary addressing is a lot like applying a tourniquet to your leg. It’ll really come in handy under certain extreme situations, but it’s not something you want to do unless those situations arise. Cisco’s website mentions the following as three such situations:

You run out of addresses on a given segment.

You're changing the IP addressing scheme of your network.

Two separate subnets of the same network are separated by another network. Remember our "keep your subnets on one side of the router" discussion!

Secondary addressing is both a blessing and a curse. It can definitely help in particular situations like the ones discussed

here, but it can also bring up unexpected routing issues. I personally don’t use it unless I absolutely have to, but it’s a good tool to have in your mental toolbox!

The Administrative Distance

Earlier, I mentioned that our routing process is looking for the best match for the destination IP address of the packet. There just might be a tie for that honor, and in that case, we'll need a tiebreaker! Let's say we're running both OSPF and EIGRP on our router. If they both give the router a route for 20.1.1.0 /24, how does the router decide which one to put into the table? The router will use the Administrative Distance of the source of the route to make that

decision. The admin distance is a measure of the source's believability, and the lower the AD, the more believable the source. Here's a list of common route sources and their ADs. The lower, the better - the more believable, that is!

Connected routes: 0
Static routes: 1
Internal EIGRP: 90
OSPF: 110
RIP: 120
External EIGRP: 170

An AD of 255 indicates an unreliable source. I know you haven’t hit these dynamic routing protocols in your study yet, but I wanted to introduce you to this concept now. As we get to these topics, I’ll remind you of the AD and where to see it.
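To make that tiebreaker concrete, here's a mocked-up routing table entry for the OSPF-vs-EIGRP scenario above - the next-hop address, timer, and metric are made-up values for illustration, not output from a lab in this book:

D    20.1.1.0/24 [90/2172416] via 172.12.123.2, 00:02:10, Serial0

The "D" is the routing table code for EIGRP, and the first number in the brackets is the AD. Internal EIGRP's AD of 90 beats OSPF's 110, so the EIGRP-learned version of the route is the one that goes into the table.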

Inside The Router

The CCENT doesn't go into any real detail about the processes going on inside our routers. We really don't mind that, since we have enough to learn as it is! However, there's one term I'd like you to be familiar with. Cisco Express Forwarding (CEF) is highly advanced routing logic that adds a great deal of speed over other routing logic processes. It does so by keeping tables beyond the IP route table we'll be looking at through this course:

The Forwarding Information Base (FIB), a version of the route table

The Adjacency Table, which contains L2 adjacency information, which in turn cuts down on the need for ARP Requests

Now that we have a firm grasp on IP addressing and the overall routing process, let's move forward and work with some Cisco router and switch commands! Also, I'd like to invite you to join

my free and almost-free Video Boot Camps on Udemy! Tens of thousands of students are already there, and this is a great time for you to join us! All of my Video Boot Camps are fully downloadable and streamable, so you can watch them whenever you like, wherever you like! Just visit this URL for a full list of my courses, and I’ll see you on Udemy!

https://www.udemy.com/u/chrisbryan

Config Modes And Fundamental Commands

An Overview Of Configuration Modes

To put commands into action on a Cisco router, we enter a configuration mode before actually configuring the command. You've seen a few of these modes already, so let's take just a moment to review them. Global configuration mode is entered with the configure terminal command. You have to go into this mode to get to any other configuration mode, as you'll see in just a moment. Note how the prompt changes in the following configuration when I enter global config mode.

R1#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#

Any command you enter in global configuration mode takes effect immediately. To illustrate, I’ll use the hostname command to change

the router's name from R1 to Router1.

R1#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#hostname Router1
Router1(config)#

The name changed immediately, with no reload or reboot necessary. It’s rare that you have to reboot a Cisco router or switch to make a change take effect, but it does happen – and just might happen later in this course! You’ve seen two line configuration

modes, for the Console port and the VTY lines. I have to go into global configuration mode before entering line configuration mode. If I try to go straight to line config mode, I get an error.

Router1#line vty 0 4
        ^
% Invalid input detected at '^' marker.

When I go from global configuration mode to line config mode, you see the prompt change again.

Router1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Router1(config)#line vty 0 4
Router1(config-line)#

To configure an interface, we enter - you guessed it! - interface configuration mode. We have to be in global configuration mode first.

Router1#interface serial 0
                  ^
% Invalid input detected at '^' marker.

(This command has to be entered from global configuration mode.)

Router1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Router1(config)#interface serial 0
Router1(config-if)#

Watch the prompts on exam day questions - they're a tipoff as to which configuration mode a router or switch is in. You don't have to memorize every config mode, but you should be comfortable with the following prompts.

Global configuration mode: Router1(config)#
Interface configuration mode: Router1(config-if)#
Line configuration mode: Router1(config-line)#

To get back to the Enable prompt from any configuration mode, enter < CTRL-Z > once, or type EXIT to go back one config mode. For example:

Router1(config-if)#^Z
Router1#

Note we went straight from interface configuration mode to the Enable prompt. Typing EXIT brings you back one config mode. In the following example, entering EXIT once took us from interface configuration mode to global config mode; typing EXIT a second time took us back to

the enable prompt.

Router1(config-if)#exit
Router1(config)#exit
Router1#

There is no right or wrong choice between EXIT and < CTRL-Z >, but do keep in mind that EXIT takes you back one mode while < CTRL-Z > takes you back to the Enable prompt. If all of these keystrokes are a little confusing at first, don't worry about it. It does take time to get comfortable with them. You'll find that all of this becomes second

nature very quickly, and you’ll enter the config mode you need without even thinking about it.

Physical Connections And Passwords

Note: All of the information regarding passwords in this section is true for both routers and switches, even though I don't continually say "routers and switches". Here's a very small clip of a Cisco switch configuration:

line con 0
line vty 0 4
 login
line vty 5 15
 login

A Cisco router has five VTY lines:

line vty 0 4

This small, seemingly insignificant portion of the configuration actually determines what passwords a user must enter in order to connect successfully. When you connect to a Cisco router or switch, you’re going to do so in one of two ways: Physically connecting, usually via a laptop Logically connecting from a

remote location via Telnet or SSH We’re going to discuss the physical connection first, including the cables and adapters you may need, as well as how to configure a password for such a connection. You may have already seen some of this in an earlier section, and it will only help you to see it again. For a physical connection, you’re going to need a rollover cable. This is typically a blue cable with an RJ45 connector on one end and a DB9 connector on the other. The RJ-45 connector snaps into the Console

port of the switch or router, and the DB-9 connector connects to your laptop -- maybe!

The first “gotcha” you have to watch out for is placing the rollover cable’s RJ-45 connector into the correct port. The Console port will be labeled “CON”, but there are quite a few ports on that switch that will let you plug the rollover cable

into them - but the connection will not work. Additionally, some Cisco routers will have an Auxiliary port right next to the Console port, and the rollover cable will fit into the AUX port. It'll fit, it just won't work! No modern laptops are going to have a DB-9 port on them. Before you visit a client site with a brand-new laptop and a rollover cable, make sure to get the proper adapter for your laptop. They're generally cheap and readily available on eBay and from any cable dealer. Real-world note: If you buy cables

on eBay, watch the shipping charges. Some dealers put high shipping charges on cables to make up for their "low price". Now that we've got the cable and the adapters that we need, it's time to connect to the switch! You're going to use a terminal emulator program to do so, and you should use the following settings with your emulator:

9600 bits per second
8 data bits
No parity
1 stop bit
No flow control

Now that we're connected, let's get back to the passwords. You won't be prompted for a password when connecting through the console port. That means anyone with a laptop and a rollover cable can connect successfully to this switch, and that's a default we'd like to change. Let's take another look at the password portion of our switch's configuration:

line con 0
line vty 0 4
 login
line vty 5 15
 login

To protect the switch's console port, it's the "line con 0" we need to be concerned with. If we're going to use a single password to protect the console port, we'll actually need two commands:

the password command
the login command

I want to clear up one point of

contention right here. It does not matter in which order you configure these commands as long as you configure them both! Here’s how we set a password of CCENT for console port access:

SW1#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
SW1(config)#line console 0
SW1(config-line)#login
SW1(config-line)#password CCENT

Now when a user attempts to connect to the switch via the console port, they’ll be prompted for that password, as shown here:

Router1 con0 is now available
Press RETURN to get started.

User Access Verification

Password:

Router1>

Done and done! Now that we’ve secured our local connection to the router, let’s secure

our remote connection tools – Telnet and SSH!

Telnet And SSH Passwords

To review the methods available to connect to a Cisco router:

Physically connecting a laptop to the Console port

Connecting from a remote location via Telnet (TCP port 23) or SSH (TCP port 22)

Typically you're going to be outside a network when you use Telnet, but you can use Telnet inside a network as well. If you're working at a PC in your local network and you need to check something on a switch or

router in that same network, there’s no need to physically visit the switch or router - you can just Telnet to it. There is one major rule that holds true for any Telnet configuration on a Cisco router or switch: You must configure a password on the VTY lines. Without a password on the VTY lines, no user will be able to telnet to a Cisco router or switch! In the following example, I’ve attempted to telnet to a Cisco router that has no VTY line password set.

R1#telnet 172.12.123.3
Trying 172.12.123.3 ... Open

Password required, but none set

[Connection to 172.12.123.3 closed by foreign host]

The console port didn’t require a password, but there is a little basic security in place when using that port, since the user has to physically be present in order to access the router. With Telnet connections, the user doesn’t have to be physically present -- that’s the reason we use it in the first place! We certainly don’t want just anyone connecting to our

network, so Cisco routers and switches require a password to be set for Telnet access. Failure to set one results in a message like the one we just saw. Let’s set a Telnet password! On a Cisco router, the password portion of the router configuration will look almost the same as it does on a switch. A router has fewer VTY lines than a switch does - a router has five, most switches have 16. The vty lines are the virtual terminal lines, and it’s those lines that are used for Telnet. To

configure a password on all five vty lines at once, just use this configuration:

R3(config)#line vty 0 4
R3(config-line)#password CCENT
R3(config-line)#login

Now what happens when we try to telnet from R1 to R3 again?

R1#telnet 172.12.123.3
Trying 172.12.123.3 ... Open

User Access Verification

Password:

R3>

Success! We were prompted for the password, and after we entered it, we’re now in R3, as indicated by the prompt. Some vendors have asterisks appear as you enter a password, but Cisco routers and switches do not. You will not see any characters appear as you enter that password. Take a look at the prompts in the password entry example. Note that R1 has a pound sign after “R1”, but that R3 has a “greater than” symbol.

Before we continue our Telnet discussion, we’re going to talk about router modes, privilege levels, and exactly what those particular symbols indicate.

User, Enable, And Privilege Modes

When you first connect to a Cisco router or switch via Telnet or SSH, by default you're going to be placed into user exec mode. This mode is indicated by the ">" symbol after the device name.

R1>

There’s not much you can do from this mode except use some show commands to have a look around. If I attempt to use the configure terminal command, the router will not allow me to do so.

R1>configure terminal
            ^
% Invalid input detected at '^' marker.
R1>

That little caret shows you where you went wrong. In this case, we went wrong by trying to use this command in user exec mode. To configure the router, we need to go to the next level, privileged exec mode (generally called "enable mode"). To get there, we need to enter the enable command in user exec mode. The prompt should change slightly…

R1>enable
R1#

… and it does! I can now enter configure terminal successfully to enter global configuration mode, and use the hostname command to change the device's name. (To get you used to seeing the popular abbreviation for configure terminal, I'll use it for the rest of this course.)

R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#hostname Router1
Router1(config)#

We have the option of setting a password for enable mode. Actually, we have two options: the enable password and the enable secret. You often see routers with one or both of these set, so we better know what the differences are between the two as well as what happens when you set both!

The "enable password" and "enable secret" Commands

Using an enable mode password is optional - unless you have users connecting via Telnet. (Thought I had forgotten about the Telnet discussion, didn't you? We're getting back to that in just a minute!) We have two options for configuring an enable mode password, as shown below by IOS Help.

Router1(config)#enable ?
  password  Assign the privileged level password
  secret    Assign the privileged level secret

I’ve edited a couple of non-CCENT and non-CCNA-related options from that IOS Help output so we can concentrate on these two options. Looking at the IOS Help readout, it looks like they do pretty much the same thing, and in reality, they do, with one big difference. To demonstrate, we’ll first use the enable password command to set a password of CCENT.

Router1(config)#enable password CCENT

Now we'll go back to user exec mode, and then try to get back to enable mode.

Router1#logout

Router1 con0 is now available
Press RETURN to get started.

Router1>enable
Password:  < I entered CCENT here; it does not appear on the screen >
Router1#

After entering the enable command to get into enable mode, I was now prompted for a password. I entered CCENT (which was not shown on the screen as I typed it in), and I am now back in enable mode. Let's take a quick look at the current router configuration with show running-config. The console and VTY line passwords will appear at the bottom of the config, but the enable passwords will appear near the top, so I'll show you only that part of the config in this section.

Router1#show running-config
Building configuration...

hostname Router1
enable password CCENT

The enable password worked fine, but it's appearing in clear text. That's not a very secure password. Anyone looking over your shoulder can see what the password is! (That's referred to as the "over-the-shoulder network attack".) We have a method of encrypting that password, along with the others in the configuration. Before we do that, let's use the enable secret

command to set an enable password of CCNA and see what happens when we try to get back into enable mode.

Router1(config)#enable secret CCNA

I’ll now enter a password of CCENT, the enable password, which worked just a few minutes ago.

Router1>enable
Password:  < I entered CCENT >
Password:

That isn't a typo, and you're not seeing double. I'm being prompted for a password a second time because the first one I entered was not correct. If CCENT wasn't right, will CCNA be?

Router1>enable
Password:
Router1#

Yes! When both the enable secret and enable password commands are in use, the password set with the enable secret command will always take precedence. By the way, here’s what you see if

you wait too long to enter the enable password:

Password:
% Password:  timeout expired!
Password:

By default, you get three tries to enter the correct enable password. If all three attempts fail, you'll get a "bad enable" or "bad secrets" message from the router, and you're placed back at the user exec prompt.

Router1>enable
Password:
Password:
Password:
% Bad secrets
Router1>

To see the other major difference between the two, let's take a look at the running configuration.

enable secret 5 $1$oNGE$OYXryHhM7E3GIXcDdCAwF1
enable password CCENT

That’s not exactly “CCNA” behind enable secret there, is it? The main

reason we use enable secret today rather than the enable password command is that passwords configured with enable secret are automatically encrypted. It’s easy to see what the enable password is, but I don’t think too many of us are going to look at that enable secret value and determine that it means “CCNA”!

Enable Passwords And Telnet

Now you may well be asking why I interrupted our Telnet discussion to talk about router modes and enable passwords. Good question, and trust me, there's a good reason! Enable passwords are optional unless users will be connecting via Telnet. Let's go back to our original example where I'm connecting from R1 to R3:

Router1#telnet 172.12.123.3
Trying 172.12.123.3 ... Open

User Access Verification

Password:
R3>

We've successfully used Telnet to connect to R3 from R1, and all is well. Let's use the enable command on R3 to enter privileged exec mode and get started configuring!

R3>enable
% No password set

Oops. We have a major problem. No enable password has been set on

R3, so we literally cannot enter privileged exec mode through the Telnet connection! To allow that kind of connection, someone with physical access to R3 is going to have to set an enable password.

R3(config)#enable secret CCIE

After doing so, we'll telnet from R1 to R3 again and see what happens.

Router1#telnet 172.12.123.3
Trying 172.12.123.3 ... Open

User Access Verification

Password:  < I entered CCENT here, the VTY password >
R3>enable
Password:  < I entered CCIE here, the enable password >
R3#

I've now successfully used Telnet to open the connection by entering the VTY password, and I've entered privileged exec mode with the enable password "CCIE". To recap:

No password is required for connecting to the router via the Console port, but it's recommended.

A password on the VTY lines is required to allow Telnet or SSH users to connect.

For Telnet and SSH users to access enable mode, either an enable password must be configured OR the following command must be configured on the VTY lines. This command, that is!

Using "privilege level 15"

You may want incoming Telnet users to be placed directly into privileged exec mode without being prompted for an enable password. (An excellent idea for lab work.) To do so, configure the privilege level 15 command on the VTY lines of the router allowing the connections. We'll do that now on R3 and then telnet to that router from R1.

R3(config)#line vty 0 4
R3(config-line)#privilege level 15

Router1#telnet 172.12.123.3
Trying 172.12.123.3 ... Open

User Access Verification

Password:
R3#



If we want to add a warning to the MOTD banner - say, a message warning against unauthorized access - we can create a login banner. That banner's contents will appear after the MOTD, but before the login prompt.

SW2(config)#banner login %
Enter TEXT message.  End with the character '%'.
Unauthorized Access Prohibited, But You Knew That.
%

I've added a console line password of cisco as well:

line con 0
 exec-timeout 0 0
 password cisco
 logging synchronous
 login

When I log out and then log back in,

I see the MOTD banner message followed by the login banner message.

SW1 con0 is now available
Press RETURN to get started.

Network down for router IOS upgrade tonight!
Unauthorized Access Prohibited, But You Knew That.

User Access Verification

Password:
SW1>

This is how you’ll see the banners appear in the config:

banner login ^C
Unauthorized Access Prohibited, But You Knew That.
^C
banner motd ^C
Network down for router IOS upgrade tonight!
^C

No matter what delimiting character you use, you’ll see it represented as ^C in the config. Let’s use IOS Help to look at our other options:

SW2(config)#banner ?
  LINE            c banner-text c, where 'c' is a delimiting character
  exec            Set EXEC process creation banner
  incoming        Set incoming terminal line banner
  login           Set login banner
  motd            Set Message of the Day banner
  prompt-timeout  Set Message for login authentication timeout
  slip-ppp        Set Message for SLIP/PPP

To present a banner message to users who have successfully authenticated, use the banner exec command. You can use the ENTER key for hard breaks in a banner message, as shown below.

SW1(config)#banner exec *
Enter TEXT message.  End with the character '*'.
Welcome to our nice, clean network!   < ENTER key pressed here >
Please keep it that way.
*

After logging out and back in, the exec banner is presented after I successfully authenticate with the password cisco.

Network down for router IOS upgrade tonight!
Unauthorized Access Prohibited, But You Knew That.

User Access Verification

Password:

Welcome to our nice, clean network!
Please keep it that way.
SW1>

Those Odd Little Commands

You might have noticed these two commands on the console line:

line con 0
 exec-timeout 0 0
 logging synchronous

Here’s why I love these two commands. When the router wants you to know something, it wants you to know right now. If the router sends a message to the console while

you're entering a command, by default the router will interrupt your work to show you that message. In the following example, I opened a Serial interface, which will always result in at least two messages relating to the physical and logical state of the interface. I started typing a sentence immediately after I opened the interface to show you what happens. I've bolded the sentence I was entering.

R1(config)#int s0
R1(config-if)#no shut
R1(config-if)#^Z

R1#so here i am typ
4d04h: %SYS-5-CONFIG_I: Configured from console by console
4d04h: %LINK-3-UPDOWN: Interface Serial0, changed state to up
ing and i'
4d04h: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to up
m interrupted quite badly!
4d04h: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to down

This may seem trivial, but when you have a long command entry interrupted by a console message, you’ll wonder how to prevent that from happening. (After you stop yelling at the router, that is.) By configuring the logging synchronous command on the

console port, you're telling the router to hold such messages until it detects no input from the keyboard and no other output from the router, such as a show command's output.

R1(config)#line console 0
R1(config-line)#logging ?
  synchronous  Synchronized message output

The second command I always enter on the console port of a home lab router is exec-timeout 0 0. This disables the console session default inactivity timeout of 5 minutes and

0 seconds. If you want to change that timer rather than disabling it, the first number represents the number of minutes in the inactivity timer and the second number is the number of seconds.

R1(config)#line con 0
R1(config-line)#exec-timeout ?
  <0-35791>  Timeout in minutes

R1(config-line)#exec-timeout 0 ?
  <0-2147483>  Timeout in seconds
  <cr>

R1(config-line)#exec-timeout 0 0

(disables the inactivity timer)

This command can also be configured on the VTY lines to set or disable the inactivity timer for Telnet and SSH users. Here, we’ll set the VTY line inactivity timer to 10 minutes, double the default time.

R1(config)#line vty 0 4
R1(config-line)#exec-timeout ?
  <0-35791>  Timeout in minutes

R1(config-line)#exec-timeout 10 ?
  <0-2147483>  Timeout in seconds
  <cr>

R1(config-line)#exec-timeout 10 0

I don’t like to disable a production router’s Telnet and SSH inactivity timers, as there are security risks associated with doing so. They’re great commands for your present or future home lab, and I also recommend you know them for your CCENT and CCNA exams!

Keystroke Shortcuts

There are quite a few key combinations that will make your life easier, and I'm going to list the most popular ones here. I want to make it clear that you do not have to use these in real life. I only use a few myself!

One of my favorites is the up arrow, which will show you the last command you entered. If you continue to hit the up arrow, you'll continue to go through the command history. < CTRL-P > does the same thing.

As you might expect, the down arrow brings you one command back in the command history. It's a good key to use when you use the up arrow too fast. < CTRL-N > does the same thing.

< CTRL-A > takes the cursor all the way to the front of your current command; < CTRL-E > takes the cursor all the way to the end of your current command.

Want to move around on a per-character basis in your current command without deleting characters? Use the left arrow or < CTRL-B > to move backward one character, and use the right arrow or < CTRL-F > to move forward one character.

Some of the lesser-used shortcuts include:

< CTRL-D > deletes one character. You can do the same thing with the BACKSPACE key.

< ESC-B > moves back one word in the current command.

< ESC-F > moves forward one word in the current command.

Again, you don't have to use all or any of these. I like the up and down arrows to see the command history, and < CTRL-A > is great when you want to just put "no" in front of a command to negate it. You can certainly pick and choose the ones you prefer, but it's a good idea to know what they all do for your CCENT and CCNA exams. And speaking of the command history….

Manipulating And Changing History (Buffers, That Is)

It may be forbidden for you to interfere with human history, but that doesn't mean you can't take a look at it! Using the up and down arrows to see the commands recently run on the router is really handy, but it's easy to miss the one you need. To see the command history, run show history.

R1#show history
  show dialer
  show ip ospf neighbor
  conf t
  show ip ospf neighbor
  show dialer
  show ip ospf neighbor
  show dialer
  show run
  show hsitory
  show history

As you can see, I misspelled “history” in the next-to-last command! If you’re in the middle of a large

configuration or a tough troubleshooting situation, you can change the size of the history buffer from its default of 10. Note that you do this at the Console port or VTY line level. Changing the default for the Console port doesn’t change the default for the VTY line, and vice versa.

R1(config)#line con 0
R1(config-line)#history size ?
  <0-256>  Size of history buffer

R1(config)#line vty 0 4
R1(config-line)#history size ?
  <0-256>  Size of history buffer

The number we specify with this command is the number of commands held in the history buffer.
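There's also an exec-level command that changes the history buffer for your current session only, rather than for everyone who uses that line - handy when you just want a bigger buffer for the task at hand:

R1#terminal history size 50

That setting lasts only as long as your session does.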

A Word About The Password Recovery Process

There's no feeling like it in the world… you reload a router, attempt to enter Enable mode, get a password prompt…. and no one at the site knows what the password is. Sooner or later, you're going to have to perform a password recovery process. This process differs from one Cisco model to another, but luckily for us there's a single page on Cisco's website that lists the procedures. I will not put the URL here since those do change,

but if you simply put the phrase “cisco password recovery” in Google, you’ll find the page quickly. A word to the wise: Read the recovery process for your particular model at least twice before starting it. The process generally requires altering the router or switch configuration register, and changing the register value incorrectly can cause irreparable damage to the router. Not trying to scare you, because you will have to perform password recovery sooner or later. You just

want to be careful when doing so! Time for some more routing! Static routing, that is – and it’s coming right up in the next section!

Static Routing (With A Side Of Distance Vector)

In the "Intro To Routing" section, you were given a sneak peek at static routing. In this section, we'll configure static routing on live Cisco routers and use some new commands to test IP connectivity. There's plenty of IP connectivity troubleshooting built into this section! It's important to understand static

routing for your CCENT and CCNA exams, and you’ll find it helpful in working with real-world networks as well. Let’s jump right in!

Static Routes

Here's the network we'll use for the static routing discussion and labs:

This is a hub-and-spoke network, with R1 serving as the hub and R2

and R3 as the spokes. When one spoke sends traffic to another spoke, that traffic will go through the hub. That’s a very important concept to keep in mind! I’m introducing you to loopback interfaces in this section. Loopback interfaces are logical interfaces they do not physically exist on the router. You’re going to see more and more real-world uses for loopbacks as you progress in your studies, and they’re great in lab environments for adding extra networks for extra practice! For this lab and all others in this

course, the last octet of the IP address for any physical interface will be the router number. For loopbacks, we'll use the router number for each octet. The networks used in this section:

Hub-and-Spoke Network: 172.12.123.x /24
R2's loopback interface: 2.2.2.2 /32
R3's loopback interface: 3.3.3.3 /32
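The loopback configuration itself is simple. Here's a sketch for R2's loopback - I'm assuming interface number loopback0, since the lab diagram doesn't specify the number:

R2(config)#interface loopback0
R2(config-if)#ip address 2.2.2.2 255.255.255.255

A loopback comes up as soon as it's created - there's no physical cable to worry about, which is part of what makes loopbacks so handy for lab work.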

We're going to use pings to test IP connectivity throughout this section. When you ping an IP address, you're sending five ICMP Echo packets to the IP address you specify. If you get five ICMP Echo Replies in return, you'll see five exclamation points, and that means you have IP connectivity to the specified destination. For example, right now R1 can ping the serial interfaces on R2 and R3.

R1#ping 172.12.123.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.123.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 68/68/72 ms

R1#ping 172.12.123.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.123.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 68/68/68 ms

There are no routing protocols or static routes on the routers, so how can R1 successfully ping those two destinations? Let’s use show ip route to take a look at R1’s routing

table. (For clarity, I've removed all route codes except the connected and static codes.)

R1#show ip route
Codes: C - connected, S - static

Gateway of last resort is not set

     172.12.0.0/24 is subnetted, 1 subnets
C       172.12.123.0 is directly connected, Serial0

There’s only one route in R1’s routing table, but that’s enough for those two pings. It’s the 172.12.123.0 /24 network, and the two destinations we pinged are on

that network. That network appears as a Connected network, meaning there's an interface on this router that's configured with an IP address from that subnet. The entry also tells you which interface that is. Let's see if our spokes can ping the hub, and each other. Can R2 ping both 172.12.123.1 and .3?

R2#ping 172.12.123.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.123.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 68/68/68 ms

R2#ping 172.12.123.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.123.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 128/133/144 ms

R2 can ping both addresses. Here’s the path the data took to get to R3:

The ping reply from R3 to R2 also comes back through R1.

Can R3 ping 172.12.123.1 and .2?

R3#ping 172.12.123.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.123.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 68/68/68 ms

R3#ping 172.12.123.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.123.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 128/133/144 ms

Yes! Both the ping from R3 to R2 and the ping reply from R2 to R3 went through R1. Since every router on the network has an entry for 172.12.123.0 /24 in its routing table (the connected route), there’s no problem.

So right now, everything's great! Let's see if R3 can ping R2's loopback interface (2.2.2.2).

R3#ping 2.2.2.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

Nope! When you see periods come back from a ping, that’s not good. That means that we do not have IP connectivity to the remote address

we pinged. Ping tells you that you don’t have connectivity, but doesn’t really tell you why. A command that’s very helpful in diagnosing the “why” is debug ip packet. HUGE IMPORTANT STUDY TIP: Debugs are also an outstanding learning tool, one that many CCENT and CCNA candidates overlook. I urge you to use debugs early and often in your lab or simulator work, since this allows you to see what goes on “behind the

command”. When you know what things look like when they’re working, it’s a lot easier to know what’s wrong when you run debugs when things aren’t working! WARNING: Do NOT practice debugs on a production network. Some debugs, especially debug ip packet, can overwhelm a router or switch CPU and render the device unable to route or switch. Let’s run that debug on R3 and then send a ping to 172.12.123.2, which we already know we can ping successfully. That will allow us to see what the debug ip packet output

looks like when all is well.

R3#ping 172.12.123.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.123.2, timeout is 2 seconds:
!!!!!

00:34:43: IP: s=172.12.123.3 (local), d=172.12.123.2 (Serial0), len 100, sending
00:34:43: IP: s=172.12.123.2 (Serial0), d=172.12.123.3 (Serial0), len 100, rcvd 3

(The previous two lines appear once for each of the five pings; I've edited the repeats out.)

Success rate is 100 percent (5/5), round-trip min/avg/max = 140/140/140 ms
R3#

The key words there are "sending" and "rcvd" (short for received). That's what we want to see! Now let's see what we don't want to see by keeping that debug on and pinging 2.2.2.2, an address we know we do not have connectivity to.

R3#ping 2.2.2.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:

00:36:39: IP: s=3.3.3.3 (local), d=2.2.2.2, unroutable.
00:36:41: IP: s=3.3.3.3 (local), d=2.2.2.2, unroutable.
00:36:43: IP: s=3.3.3.3 (local), d=2.2.2.2, unroutable.
00:36:45: IP: s=3.3.3.3 (local), d=2.2.2.2, unroutable.
00:36:47: IP: s=3.3.3.3 (local), d=2.2.2.2, unroutable.

Success rate is 0 percent (0/5)

The key word here is “unroutable”. Here’s the reason for that message,

courtesy of show ip route:

R3#show ip route
Codes: C - connected, S - static

Gateway of last resort is not set

     3.0.0.0/32 is subnetted, 1 subnets
C       3.3.3.3 is directly connected, Loopback0
     172.12.0.0/24 is subnetted, 1 subnets
C       172.12.123.0 is directly connected, Serial0

There’s no entry for the 2.2.2.0 network in R3’s routing table, and no “gateway of last resort” set, so the packets destined for a host on that network cannot be routed. Literally, those packets aren’t getting anywhere – they’re not even leaving R3!

Before we proceed, we’ll turn the debug off with undebug all, which (naturally) turns off all debugs. If you want to specify the debug to turn off, use the no debug command followed by the name of the debug itself.

R3#undebug all
All possible debugging has been turned off

OR

R3#no debug ip packet
IP packet debugging is off

We have two choices to get a route to 2.2.2.0 into that table:

Configure a static route.

Configure a dynamic routing protocol throughout the network.

Since we're in the static routing section of the course, let's create a static route! We use the ip route command to create static routes, and we actually have two choices in the type of static route. We can create…

A "regular" static route to a given host or destination network

A default static route, which will be used when there is no other match in the routing table for a destination network.

I would be very familiar with those options for your exam, along with the syntax of each, which we'll see throughout the rest of this section. We'll use IOS Help to illustrate the choices with this command and many others throughout the course. Get plenty of practice with IOS

Help during your exam prep if at all possible. You can use IOS Help in two ways. When you leave a space between the question mark and the last word in what you’ve typed, IOS Help will display the available options for the next word.

R3#debug ?
  aaa                AAA Authentication, Authorization and Accounting
  access-expression  Boolean access expression
  adjacency          adjacency
  all                Enable all debugging
  arp                IP ARP and HP Probe transactions

When you end a partial word with a question mark, IOS Help displays all acceptable entries that begin with the letters you've entered.

R3#deb?
debug

Now back to the lab! We're going to configure the destination IP prefix 2.2.2.0 here for our static route (the loopback network we want to ping).

R3(config)#ip route ?
  A.B.C.D  Destination prefix
  profile  Enable IP routing table profile
  vrf      Configure static route for a VPN Routing/Forwarding instance

R3(config)#ip route 2.2.2.0

Now we specify the network mask, 255.255.255.0.

R3(config)#ip route 2.2.2.0 ?
  A.B.C.D  Destination prefix mask

R3(config)#ip route 2.2.2.0 255.255.255.0

At this point in the ip route command, you must specify one of these two values:

The local router's exit interface (NOT an IP address on the local router)

The IP address on the next router we want to send the packets to = the "next-hop address"

I personally like to use the next-hop IP address, but there's nothing wrong with using the local router's exit interface. Besides, you better know them both for your CCENT and CCNA exams! We'll use the next-hop IP address 172.12.123.1, R1's serial interface, since we

already have IP connectivity to that address.

R3(config)#ip route 2.2.2.0 255.255.255.0 172.12.123.1 ?
  <1-255>    Distance metric for this route
  name       Specify name of the next hop
  permanent  permanent route
  tag        Set tag for this route
  <cr>

When you see <cr> at the bottom of IOS Help output, that means the command is acceptable to enter as it is. We're going to save the options here for future studies, and go forward with this static route. Let's press the ENTER key to have the route entered into the routing table, use < CTRL-Z > to go back to the enable prompt, and then verify the route entry with show ip route. Always use a show command to verify your latest configuration!

R3(config)#ip route 2.2.2.0 255.255.255.0 172.12.123.1
R3(config)#^Z
R3#show ip route
Codes: C - connected, S - static

Gateway of last resort is not set

     2.0.0.0/24 is subnetted, 1 subnets
S       2.2.2.0 [1/0] via 172.12.123.1
     3.0.0.0/32 is subnetted, 1 subnets
C       3.3.3.3 is directly connected, Loopback0
     172.12.0.0/24 is subnetted, 1 subnets
C       172.12.123.0 is directly connected, Serial0

And there’s our static route, as

indicated by the “S”! Nothing to it! Also note the “1” in the brackets after the network in that route. That first number is the administrative distance of the route. The AD is a measure of the route source’s believability, and the lower the AD, the more reliable the source. It’s used as a tiebreaker in case the router hears about the route from more than one source – two different protocols, for example. The second number is the metric for the route, which in this case is zero. Now we’ll send a ping to 2.2.2.2, and let’s see what happens.

R3#ping 2.2.2.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
U.U.U
Success rate is 0 percent (0/5)

Hmm. That's one weird ping output. Let's run debug ip packet and send the ping again and we'll see what's going on. (I hope!)

R3#ping 2.2.2.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
U
00:58:14: IP: s=172.12.123.3 (local), d=2.2.2.2 (Serial0), len 100, sending
00:58:14: IP: s=172.12.123.1 (Serial0), d=172.12.123.3 (Serial0), len 56, rcvd 3
00:58:14: IP: s=172.12.123.3 (local), d=2.2.2.2 (Serial0), len 100, sending
.U
00:58:16: IP: s=172.12.123.3 (local), d=2.2.2.2 (Serial0), len 100, sending
00:58:16: IP: s=172.12.123.1 (Serial0), d=172.12.123.3 (Serial0), len 56, rcvd 3
00:58:16: IP: s=172.12.123.3 (local), d=2.2.2.2 (Serial0), len 100, sending
.U
Success rate is 0 percent (0/5)
R3#
00:58:18: IP: s=172.12.123.3 (local), d=2.2.2.2 (Serial0), len 100, sending
00:58:18: IP: s=172.12.123.1 (Serial0), d=172.12.123.3 (Serial0), len 56, rcvd 3

Note the actual ping output of U.U.U is still there, but it’s scrambled amongst the debug ip packet output. That’s why I ran the ping once without the debug and then once with. Interestingly, the packets are being sent - you can see the word “sending” five times. The pings are leaving R3, but aren’t getting to 2.2.2.2.

I showed you this to illustrate a basic principle of pings and IP connectivity that many network admins forget. When you send pings, it’s not enough for the local router to have an entry in its routing table for the remote network - the downstream routers need an entry, too! Let’s walk through the ping process, which is a short walk – there are just two steps. The pings leave R3, and where do they go? Let’s revisit R3’s routing table for that answer:

R3#show ip route
Codes: C - connected, S - static

     2.0.0.0/24 is subnetted, 1 subnets
S       2.2.2.0 [1/0] via 172.12.123.1
     3.0.0.0/24 is subnetted, 1 subnets
C       3.3.3.0 is directly connected, Loopback0
     172.12.0.0/24 is subnetted, 1 subnets
C       172.12.123.0 is directly connected, Serial0

The next hop for those pings is R1’s serial0 interface, 172.12.123.1. Does R1 have a route to 2.2.2.2? Let’s check R1’s routing table for the answer.

R1#show ip route
Codes: C - connected, S - static

Gateway of last resort is not set

     172.12.0.0/24 is subnetted, 1 subnets
C       172.12.123.0 is directly connected, Serial0

R1 has no entry in its routing table that will allow it to forward packets to 2.2.2.2, and as a result

the packets are dropped at R1.

We need to get a route into R1’s routing table to allow it to route packets to 2.2.2.2. We’ll use this opportunity to configure a default static route on R1. A router only

uses a default route if there is no other matching entry in the routing table for a given destination. The syntax for a default static route looks a bit odd, so be ready to identify it on the exam:

R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#ip route 0.0.0.0 0.0.0.0 172.12.123.2

Both the destination network and the mask are all zeroes in a default static route. As with a “regular” static route, we have the option of configuring a next-hop IP address or

the local router’s exit interface. I’ve configured a default static route with the next-hop address of 172.12.123.2 (R2’s serial interface), so R1 is basically saying this: “Any packets that need routing and have no matching entry in my routing table, send them to 172.12.123.2 and let THAT router take care of it!”

Let’s verify the default static route with show ip route on R1.

R1#show ip route
Codes: C - connected, S - static

Gateway of last resort is 172.12.123.2 to network 0.0.0.0

     172.12.0.0/24 is subnetted, 1 subnets
C       172.12.123.0 is directly connected, Serial0
S*   0.0.0.0/0 [1/0] via 172.12.123.2

There’s an asterisk next to the “S”, which indicates a default static route. (Technically, it’s a candidate default route, but since there’s only one candidate…) The gateway of last resort now lists the next-hop address of the static route. Will this allow R3 to successfully ping R2’s loopback interface?

R3#ping 2.2.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 140/142/152 ms

Yes! Default static routes serve two major purposes, one of which we've just seen in action -- we can send data to networks that have no specific entry in the routing table. This also helps keep routing tables concise, and as you advance in your Cisco studies, you'll learn it's important to control the size of the routing table while keeping it complete. Static routes have their place, but they're not terribly scalable. Scalability refers to a network feature or protocol's ability to remain useful without a great deal of manual intervention as the network grows, and it's a term you'll hear often in your Cisco studies and your real-world job. Static routes can help you cut waaaaay back on unnecessary overhead under certain circumstances, such as the following:

In this network, all roads lead back to the hub. If any spoke wants to communicate with any other spoke, the next hop is ALWAYS the hub. With this topology, the spokes don't need full routing tables containing an entry for every network on every other spoke -- they just need a single route pointing to the hub. The hub can take over the routing from there. This is a particularly helpful setup under any of the following circumstances:

Your spoke routers aren't exactly top-of-the-line.

The full routing tables on the spokes are rather large.

Bandwidth over the spoke-hub circuits is at a premium.

Some of the spoke-hub links are links that go up and down on occasion ("flapping links"), causing your routing protocols to unnecessarily recalculate routes.

Every routing protocol has overhead, including some kind of routing update or "hello" keepalive packet. If you use a single default route on these spokes instead of a routing protocol, there is no such overhead.
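As a sketch of that approach (the hub address 172.16.0.1 is invented purely for illustration), a spoke could run no routing protocol at all and carry just this one route:

Spoke(config)#ip route 0.0.0.0 0.0.0.0 172.16.0.1

One line, no hellos, no periodic updates - the hub takes it from there.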

Distance Vector Routing IGPs (For Comparison To Link-State Protocols)

The main distance vector protocol in use in today's networks is the Routing Information Protocol (RIP). We used to have IGRP, but that's long gone from Cisco routers and exams, so RIP is the last Distance Vector Interior Gateway Protocol we have. You're not responsible for configuring RIP on your CCENT exam, but you should know just a little about it so you can fully appreciate link state protocols (and be fully prepared for the exam).

Let's start with two distance vector protocol behaviors you won't find in link-state protocols – split horizon and route poisoning, both designed to help prevent routing loops. A routing loop occurs when a packet enters a "loop", resulting in the packet just bouncing around between two or three routers in an unending circle. Loops generally occur due to router misconfiguration or poor network design. I highly recommend you watch this free video on my YouTube channel so you can see what a routing loop actually looks like on a live Cisco router.

http://www.youtube.com/watch?v=pKimoicJCFQ

(And if you're wondering "Hey, why don't link state protocols have loop prevention behaviors?" – stick around, we're getting to that in the OSPF section!)

Split Horizon And Route Poisoning

Split Horizon is a simple yet powerful routing loop avoidance feature. The rule of split horizon is that a route cannot be advertised out the same interface the router would use as an exit interface to send packets to that network. That sounds complicated, but it's not. For example, if R1 uses Serial0 as the exit interface for packets destined for 100.1.1.0 /24, R1 cannot advertise that route via Serial0.

Route Poisoning occurs when a

route becomes unavailable. You’d think a distance vector routing protocol would simply stop advertising a route when it becomes unavailable, but that’s not quite what happens. With route poisoning, the router with the failed route continues to advertise the route, but with a metric indicating the route is unreachable. With RIP, that means advertising the route with a metric of 16, which RIP considers an unreachable route. Upon receipt of the advertisement containing the metric of

“unreachable”, the downstream routers remove the network from their routing tables, and will no longer advertise that route. This is a slow process at a time we need speed. Distance vector protocols do not converge quickly, and that’s one reason you won’t see much of them in today’s production networks. (Convergence is the process of our routers agreeing on a change in the network.) Here’s a look at the output of debug ip rip when a route’s being poisoned.

R3#deb ip rip
RIP: sending v2 update to 224.0.0.9
     172.12.123.0/24 -> 0.0.0.
     1.1.1.1/32 -> 172.12.123.
     2.2.2.2/32 -> 172.12.123.
     3.3.3.3/32 -> 0.0.0.0, me

RIP uses the Bellman-Ford algorithm, which results in RIP using hop count as its sole metric. RIP doesn't consider bandwidth or the speed of the paths, it just counts hops. Not very scientific, and not very accurate. RIP would consider a 1-hop path over a 56k link superior to a 2-hop path over T1 lines. Overall, RIP has some serious issues:

Slow convergence.

Inaccurate metrics.

Two versions of RIP exist, one of which doesn't support subnet masking and uses broadcasts rather than multicasts to send routing updates.

RIP sends out a full routing table every 30 seconds, regardless of whether there's even been a change in the network, which is a WHOPPING waste of router resources and bandwidth.

These are the main reasons you won't see RIP very often in production networks, and this info will help you understand OSPF and EIGRP operations. Speaking of OSPF… let's head there now, after a quick look at wildcard masks!

The Wildcard Mask

You'll find these masks in the OSPF, EIGRP, and ACL sections. It's a good idea to review this short section before studying any of those areas.

ACLs use wildcard masks to determine what part of a network number should and should not be examined for matches against the ACL. Their use in EIGRP and OSPF allows us to tie down which interfaces should and should not be enabled with the protocol in use. Wildcard masks are required in OSPF network statements. They're not required in EIGRP network statements, but their use is highly recommended.

Wildcard masks are written in binary, and then converted to dotted decimal for router configuration. Zeroes indicate to the router that this particular bit must match, and ones are used as "I don't care" bits – the ACL does not care if there is a match or not.

In this example, all packets that have a source IP address on the 196.17.100.0 /24 network should be allowed to enter the router's Ethernet0 interface. No other packets should be allowed to do so. We need to write an ACL that allows packets in if the first 24 bits match 196.17.100.0 exactly, and does not allow any other packets regardless of source IP address.

1st Octet – All bits must match:   00000000
2nd Octet – All bits must match:   00000000
3rd Octet – All bits must match:   00000000
4th Octet – "I don't care":        11111111

Resulting Wildcard Mask: 00000000 00000000 00000000 11111111

Use this binary math chart to convert from binary to dotted decimal:

            128  64  32  16   8   4   2   1
1st Octet:    0   0   0   0   0   0   0   0
2nd Octet:    0   0   0   0   0   0   0   0
3rd Octet:    0   0   0   0   0   0   0   0
4th Octet:    1   1   1   1   1   1   1   1

Converted to dotted decimal, the wildcard mask is 0.0.0.255. Watch that on your exam. Don’t choose a network mask of 255.0.0.0 for an ACL when you mean to have a wildcard mask of 0.0.0.255.

I grant you that this is an easy wildcard mask to determine without writing everything out. You're going to run into plenty of wildcard masks that aren't as obvious, so practice this method until you're totally comfortable with this process.

We also use wildcard masks in EIGRP and OSPF configurations. Consider a router with the following interfaces:

serial0: 172.12.12.12 /28 (or in dotted decimal, 255.255.255.240)
serial1: 172.12.12.17 /28

The two interfaces are on different subnets. Serial0 is on the 172.12.12.0 /28 subnet, where Serial1 is on the 172.12.12.16 /28 subnet. If we wanted to run OSPF on serial0 but not serial1, using a wildcard mask makes this possible. The wildcard mask will require the first 28 bits to match 172.12.12.0; the mask doesn't care what the last 4 bits are.

1st Octet: All bits must match:        00000000
2nd Octet: All bits must match:        00000000
3rd Octet: All bits must match:        00000000
4th Octet: First four bits must match: 00001111

Resulting Wildcard Mask: 00000000 00000000 00000000 00001111

Converted to dotted decimal, the wildcard mask is 0.0.0.15. That’s all there is to it! As with anything, practice makes perfect – so practice, already!
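Here's a minimal sketch of where that 0.0.0.15 mask would actually land in a config (the process ID and area number are just placeholders for illustration):

R1(config)#router ospf 1
R1(config-router)#network 172.12.12.0 0.0.0.15 area 0

That statement matches serial0's 172.12.12.12 but not serial1's 172.12.12.17, so only serial0 is OSPF-enabled.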

OSPF coming right up!

OSPF In Particular, Link-State Protocols In General

Link-State Protocol Concepts

A major drawback of distance vector protocols is that they not only send routing updates at a regularly scheduled time, but these routing updates contain full routing tables for that protocol. When a RIP router sends a routing update packet, that packet contains every single RIP route that router has in its routing table! This takes up valuable bandwidth and puts an unnecessary drain on the receiving router's CPU and memory.

Sending full routing updates on a regular basis is generally unnecessary. You'll see very few networks that have a change in their topology every 30 seconds, but that's how often a RIP-enabled interface will send a full routing update!

At the end of the Static Routing and RIP section, a RIP debug showed us that routes and metrics themselves are in the RIP routing updates. Link-state protocols do not exchange routes and metrics. Link-state protocols exchange just that – the state of their links, and the cost associated with those links. (OSPF refers to its metric as cost, a term we'll revisit later in this section.)

As these Link State Advertisements (LSAs) arrive from OSPF neighbors, the router performs a series of computations on these LSAs, giving the router a complete picture of the network. This series of computations is known as the Shortest Path First (SPF) algorithm, also referred to as the Dijkstra algorithm. You can see the actual database with show ip ospf database. (This is a VERY small OSPF database.)

R2#show ip ospf database

            OSPF Router with ID (2.2.2.2)

                Router Link States

Link ID          ADV Router       Link count
2.2.2.2          2.2.2.2          1
172.23.23.3      172.23.23.3      1

                Net Link States

Link ID          ADV Router
172.23.23.3      172.23.23.3

Technically, everything the router needs to build its routing table is right there, but I wouldn't want to be the one to figure out the routes. Luckily, the SPF algorithm will do the dirty work for us and leave us with a routing table that's much easier on the eyes.

This exchange of LSAs between neighbors helps bring about one major advantage of link state protocols - all routers in the network will have a similar view of the overall network. In comparison to RIP updates (every 30 seconds!), OSPF LSAs aren't sent out all that often – they're flooded when there's an actual change in the network, and each LSA is refreshed every 30 minutes.

Before any LSA exchange can begin, a neighbor relationship must be formed. Neighbors must be discovered and form an adjacency, after which LSAs will be exchanged.

The Designated Router And Backup Designated Router

If all routers in an OSPF network had to form adjacencies with every other router, and continued to exchange LSAs with every other router, a large amount of bandwidth would be used any time a router flooded a network topology change. In short, we'd end up with an inefficient design and wasted network resources.

Most OSPF segments will elect a designated router and a backup designated router to handle adjacency changes. The designated router is the router that will receive the LSAs from the other routers in the area, and then flood the LSA indicating the network change to all non-DR and non-BDR routers. Routers that are neither the DR nor the BDR for a given network segment are indicated in show ip ospf neighbor as DROTHERS, as you'll see shortly.

Instead of having every router flood the network with LSAs after a network change, the change notification is sent straight to the DR, and the DR then floods the network with the change.
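The DR/BDR election is driven by interface priority (highest wins, as we'll see when we break down the show ip ospf neighbor fields), and you can nudge it with the interface-level ip ospf priority command. A sketch with an example value - and note that a priority of 0 takes a router out of the election entirely:

R2(config)#int e0
R2(config-if)#ip ospf priority 100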

If the DR fails, the backup designated router (BDR) takes its place. The BDR is promoted to DR and another election is held, this one to elect a new BDR. That's why the BDR listens for updates – because it may have to quickly step in and become the DR.

Hello Packets: The "Heartbeat" Of OSPF

Hello packets perform two main tasks in OSPF, both of them vital:

OSPF Hellos allow neighbors to dynamically discover each other.

OSPF Hellos allow the neighbors to remind each other that they are still there, which means they're still neighbors!

OSPF-enabled interfaces send hello packets at regularly scheduled intervals. The default intervals are 10 seconds on a broadcast segment such as Ethernet and 30 seconds for non-broadcast links such as Serial links. OSPF Hellos have a destination IP address of 224.0.0.5, an address from the reserved Class D range of multicast addresses (224.0.0.0 – 239.255.255.255).

OSPF neighbor relationships are just like neighbor relationships between people in more than one way. As human beings, we know that just because someone moves in next door and says “Hello!”, it

doesn’t mean that we’re going to be true neighbors with that person. Maybe they play their music too loud, have noisy parties, or don’t mow their lawn. OSPF routers don’t care how loud the potential neighbor is, but potential OSPF neighbors must agree on some important values before they actually become neighbors. Mismatches regarding the following values between potential neighbors are the #1 reason OSPF adjacencies do not form as expected. Troubleshooting OSPF adjacencies

is usually simple - you just have to know where to look. We’ll assume an Ethernet link between the routers in question, but the potential OSPF neighbors must agree on the following values regardless of the link type. Ordinarily there will be a switch between these two routers, but for clarity’s sake I have left that device out of the diagrams.

Neighbor Value #1 & 2: Subnet Number And Mask

Simple enough –the routers must be on the same subnet and using the same mask in order to become neighbors. Let’s build our first OSPF network with the following routers both in Area 0. (More on Area 0 shortly.) They’re both on the same Ethernet segment.

Let's get the config started….. There's no problem with these routers pinging each other:

R2#ping 172.12.23.3

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.12.23.3, timeout is 2 seconds:
!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 4/4/4 ms

R3#ping 172.12.23.2

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.12.23.2, timeout is 2 seconds:
!!!!!

Success rate is 100 percent (5/5)

But will they become OSPF neighbors? We'll examine the network command in more detail throughout this section, but here's what the network statements will look like. OSPF network statements use wildcard masks, not subnet or network masks.

R2(config)#router ospf 1
R2(config-router)#network 172.

R3(config)#router ospf 1
R3(config-router)#network 172.
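For reference, a complete OSPF network statement follows the pattern network <address> <wildcard mask> area <area>. A hedged sketch with assumed values (not necessarily the exact wildcard mask I used in this lab) looks like this:

R2(config)#router ospf 1
R2(config-router)#network 172.12.23.0 0.0.0.255 area 0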

A few minutes after entering that configuration, I ran show ip ospf neighbor on R2 and saw… nothing. When you run a show command and you're shown nothing, there's nothing to show you! That means there's no OSPF adjacency between R2 and R3. To find out why, run debug ip ospf adj.

R2#debug ip ospf adj

OSPF adjacency events debugging is on
R2#

00:13:54: OSPF: Rcv hello from 172.12.23.3

00:13:54: OSPF: Mismatched hello parameters from 172.12.23.3

00:13:54: Dead R 40 C 40, Hello R 10 C 10  Mask R 255.255.255.128 C 255.255.255.0

I love this debug! It shows us immediately that the problem is “mismatched hello parameters from 172.12.23.3”, and then lists the parameters in question. “Dead” and “Hello” match up, but the mask is different. That’s the problem right there. Since we’re on R2, we’ll change the E0 mask here to 255.255.255.128, change the OSPF network command, and see what happens.

R2(config)#int e0
R2(config-if)#ip address 172.12.23.2 255.255.255.128
R2(config-if)#router ospf 1
R2(config-router)#no network 1
R2(config-router)#network 172.

Let's run show ip ospf neighbor to see if we have an adjacency:

R2#show ip ospf neighbor

Neighbor ID     Pri   State      Interface
172.12.23.3     1     FULL/DR    Ethernet0

And we do! Let's now switch focus to the other two values you saw in that debug command - the Hello and Dead timers.

Neighbor Value #3 & 4: The Hello And Dead Timers

These timers have vastly different roles, but they are bound together in one very important way. The Hello timer defines how often OSPF Hello packets will be multicast to 224.0.0.5, while the Dead timer is how long an OSPF router will wait to hear a Hello from an existing neighbor. When the Dead timer expires, the adjacency is dropped! Note in the previous example that show ip ospf neighbor shows the dead time for each neighbor.

The default dead time for OSPF is four times the hello time, which makes it 40 seconds for Ethernet links and 120 seconds for non-broadcast links. The OSPF dead time adjusts dynamically if the hello time is changed. If you change the hello time to 15 seconds on an Ethernet interface, the dead time will then be 60 seconds. Let's see that in action.

The command show ip ospf interface will show us a wealth of information, including the Hello and Dead timer values for a given interface. Given the defaults mentioned earlier, what timers should we expect to see on the Ethernet interface?

R2#show ip ospf int
Ethernet0 is up, line protocol is up
  Internet Address 172.12.23.2/25, Area 0
  Process ID 1, Router ID 103.1.1.1
  Transmit Delay is 1 sec, State BDR
  Designated Router (ID) 172.12.23.3
  Backup Designated router (ID) 103.1.1.1
  Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5
    Hello due in 00:00:04
  Neighbor Count is 1, Adjacent neighbor count is 1
    Adjacent with neighbor 172.12.23.3
  Suppress hello for 0 neighbor(s)

OSPF broadcast interfaces have defaults of 10 seconds for the Hello timer and 40 for the Dead timer (four times the Hello timer). What happens if we change the Hello timer to 15 seconds with the interface-level command ip ospf hello?

R2(config)#int e0

R2(config-if)#ip ospf hello 15

R2#show ip ospf int
Ethernet0 is up, line protocol is up
  Internet Address 172.12.23.2/25, Area 0
  Process ID 1, Router ID 103.1.1.1, Cost: 10
  Transmit Delay is 1 sec, State DR
  Designated Router (ID) 103.1.1.1, Interface address 172.12.23.2
  No backup designated router on this network
  Timer intervals configured, Hello 15, Dead 60, Wait 60, Retransmit 5
    Hello due in 00:00:12
  Neighbor Count is 0, Adjacent neighbor count is 0
  Suppress hello for 0 neighbor(s)

Two things have happened, one that we knew about and another that we should have suspected:

Both the Hello and Dead timers changed (the Dead timer changed to 4 times the new Hello value).

We apparently lost the adjacency to R3, since the adjacent neighbor count fell to zero.

show ip ospf neighbor verifies that we have no OSPF neighbors on R2.

R2#show ip ospf nei
R2#

I'm sure you already know what caused this, but let's run debug ip ospf adj and find out for sure!

R2#debug ip ospf adj

OSPF adjacency events debugging is on

00:21:34: OSPF: Rcv hello from 172.12.23.3

00:21:34: OSPF: Mismatched hello parameters from 172.12.23.3

00:21:34: Dead R 40 C 60, Hello R 10 C 15  Mask R 255.255.255.128 C 255.255.255.128

We again have mismatched hello parameters, but this time it's the Hello and Dead timer mismatch that brought the adjacency down. We'll set the Hello timer on R2 back to its default of 10 seconds and see if the adjacency comes back.

R2(config)#int e0

R2(config-if)#ip ospf hello 10

R2#show ip ospf neigh

Neighbor ID     Pri   State      Interface
172.12.23.3     1     FULL/DR    Ethernet0

Since the Hello and Dead timers again match, the OSPF adjacency comes back up. There's no need to reset an interface or the router. I would know those Hello and Dead timers like the back of my hand for both the exam room and working with production networks. And when you're done debugging, use the undebug all command to turn them all off!

R2#undebug all

All possible debugging has been turned off
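If you ever do need non-default timers, set them explicitly on both neighbors so they still match. Here's a sketch with example values - the full command names are ip ospf hello-interval and ip ospf dead-interval:

R2(config)#int e0
R2(config-if)#ip ospf hello-interval 15
R2(config-if)#ip ospf dead-interval 60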

A Value That DOESN'T Have To Match Between Potential Neighbors

The very first command you'll ever and always enter in an OSPF config is the OSPF process number. A router can run multiple OSPF processes, and the routes learned by one OSPF process are not automatically known by the other OSPF processes running on the router.

Potential OSPF neighbors do not have to agree on the OSPF process number. The OSPF process number is "locally significant only", which means it's not advertised to other routers and only has importance to the local router. I've removed the previous OSPF config from R2 and R3 and put in the network numbers you see in the previous illustration.

R2(config)#router ospf 1

R2(config-router)#network 172.

R3(config)#router ospf 7

R3(config-router)#network 172.

00:05:15: %OSPF-5-ADJCHG: Process 7, Nbr 103.1.1.1 on Ethernet0 from LOADING to FULL, Loading Done

R3#show ip ospf neighbor

Neighbor ID     Pri   State      Interface
103.1.1.1       1     FULL/DR    Ethernet0

The reason I'm really driving this home with you is that another popular routing protocol, EIGRP, has a number in the exact same spot in the config….

router eigrp 1

…but that number's not a process number. It's an Autonomous System number, and it does have to match between potential EIGRP neighbors. That's all I'm going to say about EIGRP right now, I promise! Just remember that OSPF neighbors don't have to agree on the number in the router ospf command, but EIGRP neighbors do have to agree on the number in the router eigrp command.

OSPF Areas

With both OSPF and EIGRP, we're going to put our routers into logical groups. In OSPF, those logical groups are areas, and there are some basic rules and terms regarding areas that we must be aware of:

OSPF's backbone area is Area 0, and every other area we create must contain a router that has a physical or logical interface in Area 0.

Routers with interfaces in more than one area are Area Border Routers.

OSPF areas allow us to build a hierarchy into our network, where we have a "backbone area" (Area 0), and expand the network from there. This concept also allows us to create stub areas, where the routers in the stub areas will not have full routing tables – they'll have a combination of individual routes and default routes. We'll get a taste of those stub areas later in this section, and we'll really get into them in your CCNA and CCNP studies.
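As a sketch of how a router ends up an Area Border Router (the addresses and area numbers here are invented purely for illustration), one network statement points at Area 0 and another points at a different area:

R1(config)#router ospf 1
R1(config-router)#network 10.1.1.0 0.0.0.255 area 0
R1(config-router)#network 10.2.2.0 0.0.0.255 area 1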

Configuring OSPF Hub-And-Spoke Networks (If You Dare!)

An OSPF hub-and-spoke network configuration isn't included in your CCENT studies, but watching one will definitely help to drive home many of the concepts you're reading about in this section. I haven't put a hub-and-spoke config in this course, but I do have one on Udemy that you should strongly consider watching. And it's free! Again, it's not required watching for the CCENT, but I know it'll help you absorb the info in this section. Hop on over and watch it – here's the URL!

https://www.udemy.com/ccna-bootcamp/

Right now, let's head back to our Ethernet segment adjacency and take a closer look at each value from show ip ospf neighbor.

R3#show ip ospf neighbor

Neighbor ID     Pri   State      Interface
103.1.1.1       1     FULL/DR    Ethernet0

Neighbor ID: By default, a router's OSPF ID is the highest IP address configured on a LOOPBACK interface. This can also be manually configured with the command router-id in OSPF configuration mode, and is usually set with that command instead of leaving the RID selection up to the router.

Pri: Short for "Priority", this is the OSPF priority of the interface on the remote end of the adjacency. The default is 1. This value is used in the DR/BDR election – highest priority of all interfaces in the election wins.

State: FULL refers to the state of the adjacency. The next value – DR, BDR, or DROTHER – indicates where the remote router ranks in the scheme of things for this particular network segment. DROTHER means that particular router is neither the DR nor the BDR.

Dead Time: A decrementing timer that resets when a HELLO packet is received from the neighbor.

Address: The IP address of the interface on the neighbor through which this adjacency is formed. May or may not be the same value seen under Neighbor ID.

Interface: The adjacency was created via this local interface.

Speaking of that OSPF router ID….

Configuring the OSPF Router ID

By default, the OSPF Router ID (RID) will be the numerically highest IP address of all loopback interfaces configured on the router, even if that interface is not OSPF-enabled.

Why use a loopback address for the OSPF RID instead of the physical interfaces? A physical interface can become unavailable in a number of ways - the actual hardware can go bad, the cable attached to the interface can come loose, etc. - but the only way for a loopback interface to be unavailable is for it to be manually deleted or for the entire router to go down. In turn, a loopback interface's higher level of stability and availability results in fewer SPF recalculations, which results in a more stable network overall.

Oddly enough, an interface does not have to be OSPF-enabled to have its IP address used as the OSPF RID – it just has to be "up" if it's a loopback, and physically "up" if it's a physical interface. It's rare to have a router running OSPF that doesn't have at least one loopback interface, but if there is no loopback, the highest IP address on the router's physical interfaces will be the RID. Again, the interface whose IP address is used as the RID does not have to be OSPF-enabled.
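A quick sketch of giving a router a loopback to serve as that stable RID source (the address is just an example):

R2(config)#interface loopback0
R2(config-if)#ip address 2.2.2.2 255.255.255.255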

You can hardcode the RID with the router-id command.

R2(config)#router ospf 1
R2(config-router)#router-id ?

A.B.C.D  OSPF router-id in IP address format

R2(config-router)#router-id 22

R2(config-router)#router-id 22

Reload or use "clear ip ospf process" command, for this to take effect

The good news: No options for this command! The bad news: We have to take our adjacencies down to make this command take effect. Here's a rarity, at least with Cisco. For the new RID to take effect, you must either reload the router or clear the OSPF processes. That's a fancy way of saying "All existing OSPF adjacencies will be torn down." The router will warn you of this when you run that command.

R1#clear ip ospf process

Reset ALL OSPF processes? [no]

When the router's prompt says "no", you should think twice before saying yes! Once you reload successfully, your adjacencies should come right back as well – as long as you didn't change anything else!

Here's a video from my YouTube channel that demos the OSPF RID!

http://www.youtube.com/watch?v=Wk6KnMY35cA

Default-Information Originate (Always?)

One of the benefits of running OSPF is that all of our routers have a similar view of the network. There are times, though, that you may not want all of your routers to have a full routing table. This involves the use of stub and total stub areas, and while the configuration of those areas is beyond the scope of the CCENT exam, I do want to show you an example of when we might configure such an area. This also helps to illustrate a command that you just might see on your exam!

There's no reason for the three routers completely in Area 100 to have a full OSPF routing table. For those routers, a default route will do, since there is only one possible next-hop IP address for any data sent by those three routers. If that central router has a default route that it can advertise to the stub routers, the default-information originate command configured on the central router will get the job done.

That's great, but what if the central router doesn't have a default route to advertise?

Let’s use IOS Help to look at our options for this command - there’s a very important one here.

R2(config-router)#default-information originate ?
  always       Always advertise default route
  metric       OSPF default metric
  metric-type  OSPF metric type for default routes
  route-map    Route-map reference

The always option allows the router to advertise a default route without actually having one in its routing table. Without that option, the router must have a default route in its table in order to advertise one. You'll learn much more about the different types of stub areas and their restrictions and requirements in your CCNA and CCNP studies. For now, know the difference between using default-information originate with and without the always option.
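As a sketch of both flavors (assuming the OSPF process shown earlier; your addressing and design will vary):

R2(config)#router ospf 1
R2(config-router)#default-information originate
! or, to advertise a default even when there's no default route in the table:
R2(config-router)#default-information originate always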

A Bonus Look At OSPF Routes

Cisco has basically cut OSPF in half for your CCENT and CCNA studies. You get plenty of important concepts here in the ICND1 portion of your studies, but you really don't see any live labs or many route types until ICND2. For those of you stopping after you get your CCENT, I've posted some bonus route info here. Those of you going after your full CCNA will see

all of this and much more in my ICND2 Study Guide. Here’s our network… .. and here are the OSPF routes on R1.

R1#show ip route ospf
     2.0.0.0/32 is subnetted, 1 subnets
O IA    2.2.2.2 [110/75] via 172.12.123.2
     172.12.0.0/16 is variably subnetted
O       172.23.23.0 [110/74] via 172.12.123.2

There are two types of OSPF routes in this table, intra-area and inter-area.

Intra-area routes are marked simply with an "O", and that's a route in one of the same areas the local router is part of. Inter-area routes are marked with "O IA", and that indicates a route to an area the local router is not part of. In this case, the route to R2's loopback (in OSPF Area 2) is marked "O IA" on R1, which is not connected to Area 2. There are other OSPF area types, but we'll save those for future studies.

Let's tackle a vital Cisco router and switch feature that also uses wildcard masks, and a protocol that makes it possible for our routers and switches to keep the right time!

Access Lists and the Network Time Protocol ("Your Network Better Know What Time It Is")

Introduction To Access Control Lists

The basic purpose of Access Control Lists (ACLs) is to allow a router to permit or deny packets based on a variety of criteria. The ACL is configured in global mode, but for filtering packets on a permit/deny basis, it's applied at the interface level. An ACL does not take effect until it is expressly applied to an interface with the ip access-group command. Packets can be filtered as they enter or exit an interface.

Throughout your Cisco studies, you're going to find more and more uses for ACLs. You're going to find quite a few right here! We'll block traffic from entering an interface, exiting an interface, prevent Telnet access, allow Telnet access…. the list of uses for ACLs goes on and on. This is one skill you must master in order to work with

today’s Cisco devices, and I guarantee ACL usage will come up more than once on your CCENT and CCNA exams! We’ll start our ACL discussion with the most common usage - to deny or allow traffic from entering or exiting an interface. There are some basic rules you must keep in mind in order to master ACLs, and I’ll make special note of those as we go through some examples. I may get repetitive on some of this – actually, I will repeat several rules several times -- but you’ll be glad I did when these rules become

second nature and you earn your CCENT and CCNA!

ACL Logic And The Implicit Deny

When a packet enters or exits an interface with an ACL applied, the packet is compared against the criteria of the ACL. If the packet matches the first line of the ACL, the appropriate "permit" or "deny" action is taken. If there is no match, the second line's criteria is examined. The process repeats: If there is a match, the appropriate action is taken; if there is no match, the next line of the ACL is compared to the packet's addressing.

This process continues until a match is found, at which time the ACL stops running. If no match is found, an implicit deny is applied to the packet. If a packet is not expressly permitted by a line in the ACL, it will be subject to the implicit deny. Take special note of the implicit deny feature. Forgetting about this deny is the #1 reason for ACLs not giving you the desired results. The behavior described above concerning the line-by-line ACL matching process, along with the implicit deny, are default behaviors of all ACL types discussed in this section.
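A tiny sketch of that behavior (the addresses are invented for illustration): with only the single line below in the list, packets sourced from 10.1.1.0 /24 match and are permitted, and every other packet falls off the end of the list and hits the implicit deny.

R1(config)#access-list 10 permit 10.1.1.0 0.0.0.255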

Comparing Standard And Extended ACLs

A standard ACL is concerned with only one factor -- the source IP address of the packet. The destination IP address is not considered. Extended ACLs consider both the source and destination IP address of the packet, and can consider the port number as well. You'll see those options later in this section.

Standard and extended ACLs do not use the same numeric ranges, and you must watch out for these on the exam and in the real world. Standard ACLs use the ranges 1-99 and 1300-1999, while extended lists use 100-199 and 2000-2699. On that subject, let's use IOS Help to view a few additional ACL numeric ranges.

R1(config)#access-list ?
  <1-99>            IP standard access list
  <100-199>         IP extended access list
  <1100-1199>       Extended 48-bit MAC address access list
  <1300-1999>       IP standard access list (expanded range)
  <200-299>         Protocol type-code access list
  <2000-2699>       IP extended access list (expanded range)
  <700-799>         48-bit MAC address access list
  dynamic-extended  Extend the dynamic ACL absolute timer
  rate-limit        Simple rate-limit specific access list

The obvious questions: “Why are the standard and extended list ranges broken up? Why aren’t they just one big range?” When ACLs were first introduced, it was thought we’d never need more than 100 ACLs of any type on a single router. We also used to think we’d never need a home data storage device bigger than a floppy disk, and that certainly changed! As the number of different uses for ACLs expanded, the range of available ACLs on many routers began to shrink. That’s

when Cisco came up with the expanded ranges, and I’m sure you noticed the expanded ranges are much larger in scope than the originals! You don’t really need to know which ranges were original and which are expanded, but I’d know both of those ranges cold before taking the exam. ACLs use wildcard masks, not network or subnet masks. Before we hit ACLs hard, let’s review wildcard mask logic.

The Wildcard Mask

ACLs use wildcard masks to determine what part of a network number should and should not be examined for matches against the ACL. Wildcard masks are written in binary, and then converted to dotted decimal for router configuration. Zeroes indicate to the router that a particular bit must match, and ones are used as "I don't care" bits. This becomes much clearer with an example.

Here, all packets that have a source IP address on the 196.17.100.0 /24 network should be allowed to enter the router's Ethernet0 interface. No other packets should be allowed to do so. We need to write an ACL that allows packets if the first 24 bits match 196.17.100.0 exactly, and does not allow any other packets regardless of source IP address.

1st Octet – All bits must match:   00000000
2nd Octet – All bits must match:   00000000
3rd Octet – All bits must match:   00000000
4th Octet – "I don't care":        11111111

Resulting Wildcard Mask: 00000000 00000000 00000000 11111111

Use this binary math chart to convert from binary to dotted decimal:

            128  64  32  16   8   4   2   1
1st Octet:    0   0   0   0   0   0   0   0
2nd Octet:    0   0   0   0   0   0   0   0
3rd Octet:    0   0   0   0   0   0   0   0
4th Octet:    1   1   1   1   1   1   1   1

Converted to dotted decimal, the wildcard mask is 0.0.0.255. Watch that on your exam. Don’t choose a network mask of 255.0.0.0 for an ACL when you mean to have a wildcard mask of 0.0.0.255.

I grant you this is an easy wildcard mask to determine without writing anything out. You're going to run into plenty of wildcard masks that aren't as obvious, so practice this method until you're totally comfortable with this process.

We also use wildcard masks in EIGRP and OSPF configurations. Consider a router with the following interfaces:

serial0: 172.12.12.12 /28 (or in dotted decimal, 255.255.255.240)
serial1: 172.12.12.17 /28

The two interfaces are on different subnetworks. Serial0 is on the 172.12.12.0 /28 subnet, where Serial1 is on the 172.12.12.16 /28 subnet. If we wanted to run OSPF on serial0 but not serial1, using a wildcard mask makes it possible. The wildcard mask will require the first 28 bits to match 172.12.12.0. The mask doesn't care what the last 4 bits are.

1st Octet: All bits must match:        00000000
2nd Octet: All bits must match:        00000000
3rd Octet: All bits must match:        00000000
4th Octet: First four bits must match: 00001111

Resulting Wildcard Mask: 00000000 00000000 00000000 00001111

Converted to dotted decimal, the wildcard mask is 0.0.0.15. Now on to standard ACLs!

Configuring Standard Access Lists

Let's review our ACL theory so far:

Standard ACLs consider only the source IP address for matches.

The ACL lines are run from top to bottom. If there is no match on the first line, the second is run. No match on the second, then the third line is run, and so on until there is a match or the end of the ACL is reached. This top-to-bottom process places special importance on the order of the lines. This theory is true of all ACLs.

There is an implicit deny at the end of every ACL. If packets are not expressly permitted, they are implicitly denied.

If Router 3's Ethernet interface should only accept packets with a source network of 172.12.12.0, the ACL will be configured like this:

R3(config)#access-list 5 permit 172.12.12.0 0.0.0.255

That's all we need to write out. The implicit deny will deny all packets not matching the first line. The ACL is then applied to the Ethernet0 interface with the ip access-group command. I'll use IOS Help to show you one more important option.

R3(config)#int e0
R3(config-if)#ip access-group 5 ?
  in   inbound packets
  out  outbound packets
R3(config-if)#ip access-group 5 in

You must finish the ip access-group command by indicating the direction of packets to which the ACL should be applied.

Overall, using an ACL to deny or permit traffic at the interface level is a two-step process:

Write the ACL with the access-list command.

Apply the ACL with the ip access-group command.

Using the ACL we just wrote, traffic sourced from the 172.12.12.0 /24 network is accepted by the first and only line of the ACL. All other traffic is stopped by the implicit deny. A great rule of thumb when determining the effect of an ACL: "If traffic isn't explicitly permitted, it's implicitly denied."

Adding Remarks

Access lists can become quite large and intricate. If one admin writes an ACL and another admin comes in six months later to troubleshoot an issue, that second admin may have no idea what the ACL was trying to accomplish. When you see a convoluted 70-line ACL that just doesn't make sense to you, you'll wish there was some kind of basic explanation! It's professional courtesy to add a remark line or two to describe what an ACL was written for. To do so, use the remark ACL command:

R3(config)#access-list 5 ?
  deny    Specify packets to reject
  permit  Specify packets to forward
  remark  Access list entry comment

R3(config)#access-list 5 remark ?
  LINE  Comment up to 100 characters

R3(config)#access-list 5 remark
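A sketch of what a finished remark might look like (the comment text here is invented - it's whatever will help the next admin):

R3(config)#access-list 5 remark Permits only 172.12.12.0/24 inbound on E0 - added per ticket 4711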

Using "Host" and "Any" for Wildcard Masks

There's no problem with using a wildcard mask of all ones or all zeroes. A wildcard mask of 0.0.0.0 means the address specified in the ACL line must be matched exactly. A wildcard mask of 255.255.255.255 means all addresses will match the line.

You have the option of using the word host to represent a wildcard mask of 0.0.0.0. Consider a configuration where only packets from IP source 10.1.1.1 should be allowed and all other packets denied. The following ACLs both do that.

R3#conf t R3(config)#access-list 6 permi

R3(config)#conf t R3(config)#access-list 7 permi

The keyword any can be used to represent a wildcard mask of 255.255.255.255. Both of the following lines permit all traffic.

R3(config)#access-list 15 perm R3(config)#access-list 15 perm

There’s no “right vs. wrong” decision to make when you’re configuring ACLs in the real world. For your exam, I’d be comfortable with the proper use of host and any.
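Here's a sketch of the two equivalent pairs from this discussion (which ACL number got which form in my lab doesn't matter - the results are identical):

R3(config)#access-list 6 permit 10.1.1.1 0.0.0.0
R3(config)#access-list 7 permit host 10.1.1.1

R3(config)#access-list 15 permit 0.0.0.0 255.255.255.255
R3(config)#access-list 15 permit any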

The Order Of The ACL Lines Is Vital

There's definitely a "right vs. wrong" decision to make when it comes to the order of the ACL lines. Getting just two lines in the wrong order can wreck everything you're trying to do with the ACL. Here's an example of a short ACL where the intent is to deny traffic from 172.18.18.0 /24, but allow traffic sourced from any other subnet.

Which of the following ACLs should we use?

R3(config)#access-list 15 deny R3(config)#access-list 15 perm

R3(config)#access-list 15 perm R3(config)#access-list 15 deny

R3(config)#access-list 15 deny R3(config)#access-list 15 perm

R3(config)#access-list 15 perm R3(config)#access-list 15 deny

We can eliminate the bottom two choices, because they have wildcard masks of 255.0.0.0. That would match only on the 2nd, 3rd, and 4th octets, which isn’t what we need. We’re matching on the 1st, 2nd, and 3rd octets of 172.18.18.0, so the proper wildcard mask is

0.0.0.255. Here are the two remaining possibilities:

R3(config)#access-list 15 deny R3(config)#access-list 15 perm

R3(config)#access-list 15 perm R3(config)#access-list 15 deny

The ACL is matched against packets one line at a time, top to bottom. The top choice will deny all packets sourced from 172.18.18.0 first. Traffic that does not match that

line drops to the second line, which permits all traffic. That’s what we’re trying to do! What about the second choice? The very first line is permit any, so it’s impossible for any traffic not to match that line, including the traffic we’re trying to block. The second line will never be run, since the first line matches all possible traffic. The permit any statement does negate the implicit deny, but watch where you put it in the ACL. If the permit any statement is at the top of

any ACL, it doesn’t matter how many deny statements follow it. They’ll never be read.

Extended Access Control Lists

Extended ACLs allow both the IP source and destination address to be matched. Actually, they require it. Even if you don't want to use either of those two criteria for matching, you still have to put any for the one you don't want to use. The source port, destination port, and protocol type can also be matched. These are truly optional options - you don't have to specify a value for any of these options if you're not using them to match traffic.

In the next example, packets sourced from network 172.50.50.0 /24 should not be permitted if they are destined for network 172.50.100.0 /24. All other packets should be allowed.

Let’s use IOS Help to take a look at the options in the access-list command.

R3(config)#access-list 100 ?
  deny     Specify packets to reject
  dynamic  Specify a DYNAMIC list of PERMITs or DENYs
  permit   Specify packets to forward
  remark   Access list entry comment

We can match by protocol name or by number. We’ll keep it simple here and select IP. If we planned to use port numbers for matching, we’d need to specify TCP or UDP.

R3(config)#access-list 100 deny ?
  <0-255>  An IP protocol number
  ahp      Authentication Header Protocol
  eigrp    Cisco's EIGRP routing protocol
  esp      Encapsulation Security Payload
  gre      Cisco's GRE tunneling
  icmp     Internet Control Message Protocol
  igmp     Internet Gateway Message Protocol
  igrp     Cisco's IGRP routing protocol
  ip       Any Internet Protocol
  ipinip   IP in IP tunneling
  nos      KA9Q NOS compatible IP over IP tunneling
  ospf     OSPF routing protocol
  pcp      Payload Compression Protocol
  pim      Protocol Independent Multicast
  tcp      Transmission Control Protocol
  udp      User Datagram Protocol

We’re going to specify the 172.50.50.0 /24 network as the source.

R3(config)#access-list 100 deny ip ?
  A.B.C.D  Source address
  any      Any source host
  host     A single source host

R3(config)#access-list 100 deny ip 172.50.50.0 ?
  A.B.C.D  Source wildcard bits

R3(config)#access-list 100 deny ip 172.50.50.0 0.0.0.255

Now we’ll define the destination address.

R3(config)#access-list 100 deny ip 172.50.50.0 0.0.0.255 ?
  A.B.C.D  Destination address
  any      Any destination host
  host     A single destination host

R3(config)#access-list 100 deny ip 172.50.50.0 0.0.0.255 172.50.100.0 ?
  A.B.C.D  Destination wildcard bits

R3(config)#$ 100 deny ip 172.50.50.0 0.0.0.255 172.50.100.0 0.0.0.255

Notice the dollar sign next to the pound sign prompt in that last line? That’s what you see when your command runs too long to be shown in its entirety on the screen – nothing more, nothing less. You can still hit ENTER to enter the command, which is just what I did. The next and final line of the ACL will negate the implicit deny. Since this is an extended ACL, we have to enter any twice – the first time for the source and the second time for the destination. This line allows all traffic.

R3(config)#access-list 100 permit ip ?
  A.B.C.D  Source address
  any      Any source host
  host     A single source host

R3(config)#access-list 100 permit ip any ?
  A.B.C.D  Destination address
  any      Any destination host
  host     A single destination host

R3(config)#access-list 100 permit ip any any

To verify your ACLs and the order of the lines, run show access-list.

R3#show access-list 100
Extended IP access list 100
    deny ip 172.50.50.0 0.0.0.255 172.50.100.0 0.0.0.255
    permit ip any any

Another rule of extended ACLs: Both the source and destination must match the line for the action to be carried out. In this case, a packet sourced from 172.50.50.0 /24 is only denied IF the destination is on the 172.50.100.0 /24 subnet. If either the source or destination IP address does not match the line, there is no match.

Finally, the ACL is applied to the interface with the ip access-group command…..

R3(config)#int e0
R3(config-if)#ip access-group 100 ?
  in   inbound packets
  out  outbound packets
R3(config-if)#ip access-group 100 in

… making sure you indicate the direction of packets to be filtered, that is! Another important ACL rule: A single interface can have two ACLs applied to it for each protocol - one for outbound traffic and the other for inbound traffic. To illustrate what happens when you configure two ACLs in the

same direction and for the same protocol, I’ve created another extended ACL that matches any TCP traffic, regardless of source or destination IP address, as long as the destination port is port 80.

R1(config)#access-list 160 deny tcp any any eq www
R1(config)#access-list 160 permit ip any any

I’ll use the show ip interface command to verify ACL 150 is currently configured on Ethernet0, both inbound and outbound.

R1#show ip interface eth0
Ethernet0 is up, line protocol is up
  Internet address is 172.23.23.1/24
  Broadcast address is 255.255.255.255
  Address determined by setup command
  MTU is 1500 bytes
  Helper address is not set
  Directed broadcast forwarding is disabled
  Outgoing access list is 150
  Inbound access list is 150
< output truncated for clarity >

Just for shiggles, I'll add ACL 160 to filter inbound traffic and see what happens when we have two ACLs applied to the same interface and in the same direction filtering the same protocol.

R1(config)#int e0
R1(config-if)#ip access-group 160 in

No warning from the router, but what does show ip interface show?

R1#show ip interface ethernet0
Ethernet0 is up, line protocol is up
  Internet address is 172.23.23.1/24
  Broadcast address is 255.255.255.255
  Address determined by setup command
  MTU is 1500 bytes
  Helper address is not set
  Directed broadcast forwarding is disabled
  Outgoing access list is 150
  Inbound access list is 160

The latest ACL to be configured and applied for inbound traffic, ACL 160, is the effective ACL for that interface.

Now we know we're limited to two ACLs per protocol on an interface - one inbound and one outbound - and what happens when we break that rule! It's very rare you'd want to apply the same ACL to inbound and outbound traffic, but you can.

R1(config)#int e0
R1(config-if)#ip access-group 150 in
R1(config-if)#ip access-group 150 out

Named Access Lists

Named ACLs are just that. Rather than using a number to identify them, names are used. Consider a router with 75 ACLs. If the ACLs are given intuitive names, it can be much easier to see what the author of the list was trying to do (especially if they don't leave remarks with their numbered ACLs).

The syntax of a named ACL is slightly different than the numbered type, but the operation is the same, as is the use of host and any. A router with a Serial interface that should allow no traffic from network 175.56.56.0 /24 to leave that interface regardless of destination, but should allow all other traffic, would be configured as follows. For named ACLs, the command is ip access-list, not access-list.

R3#conf t
R3(config)#ip access-list extended NO_
R3(config-ext-nacl)#deny ip 175.56.56.0 0.0.0.255 any
R3(config-ext-nacl)#permit ip any any

A few details to note from that configuration:

Use the ip access-list command to create named ACLs. (It's so nice, I told ya twice!)

The named ACL has to be defined as standard or extended.

I personally like to put underscores in the named ACL's name to make it easier to read, but that's certainly optional.

All ACL rules we've discussed to this point apply to named ACLs. Named ACLs are applied to an interface in the same fashion as numbered ACLs, with the ip access-group command. As always, the direction of packets affected by the ACL must be defined.

R3#conf t
R3(config)#interface serial0
R3(config-if)#ip access-group
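Here's the same idea as a complete sketch with a made-up list name (NO_175_TRAFFIC is purely illustrative):

R3(config)#ip access-list extended NO_175_TRAFFIC
R3(config-ext-nacl)#deny ip 175.56.56.0 0.0.0.255 any
R3(config-ext-nacl)#permit ip any any
R3(config-ext-nacl)#exit
R3(config)#interface serial0
R3(config-if)#ip access-group NO_175_TRAFFIC out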

Using An ACL To Limit Telnet Access

Earlier, you learned how to configure a password on a router or switch's VTY lines to control access only to those who know the password. That might not be enough, as you may want to control Telnet access according to the IP address of the host attempting to connect. We can do that with an ACL and the access-class command. The ACL is used to indicate what host or hosts can telnet to a router. The syntax of the ACL is the same, but it is applied in a slightly different manner. The ACL is applied to the vty lines with the access-class command.

In this lab, we currently have two hosts on our network that can telnet into R3. One is 172.23.23.2 (R2), and the other is 172.12.123.1 (R1).

R1's telnet attempt:

R1#telnet 172.12.123.3
Trying 172.12.123.3 … Open

User Access Verification

Password:
R3>

R2's telnet attempt:

R2#telnet 172.23.23.3
Trying 172.23.23.3 … Open

User Access Verification

Password:
R3>

We’ve decided to allow only the connection from R2, while blocking all others. We’re going to specify the R2 IP address 172.23.23.2 as the permitted address.

R3(config)#access-list 55 permit host 172.23.23.2
R3(config)#access-list 55 deny any

Use the access-class command to apply an ACL to your VTY lines. The direction of the connection must be defined.

R3(config)#line vty 0 4
R3(config-line)#access-class 55 ?
  in   Filter incoming connections
  out  Filter outgoing connections

R3(config-line)#access-class 55 in

When both hosts that could previously telnet to R3 try it again….

R1#telnet 172.12.123.3
Trying 172.12.123.3 …

% Connection refused by remote host

R2#telnet 172.23.23.3
Trying 172.23.23.3 … Open

User Access Verification

Password:
R3>

… R2 can still telnet to R3 successfully, but R1 is now blocked!

This is a great tool to limit Telnet access to certain IP addresses in your internal network while blocking all others.

Where Should An ACL Be Placed?

The #1 rule of placing ACLs in the real world is to prevent traffic from traveling across a WAN when that traffic is going to be blocked from getting to its final destination in the first place.

If we want to prevent that PC from accessing the server on the other side of the WAN, logic dictates we’d use an extended ACL and

place the ACL as close to that PC as possible. Using the extended ACL allows us to specify a source AND destination IP address for filtering, as opposed to a standard ACL, which only allows you to filter on the source IP address.

Placing the ACL as close to the source of the traffic as possible prevents unnecessary traffic from going across the WAN. If the traffic is not going to be allowed to reach the email server, why let it go across the WAN to begin with? Using an extended ACL in this situation stops the unwanted traffic from crossing the WAN and unnecessarily using bandwidth and the remote router's resources.

If you have to use a standard ACL, put it on the interface closest to the destination. We can't put the standard ACL at the same point at which we would apply an extended

ACL, because that would block all traffic sourced from that PC.

In the real world, there’s no way you would ever use a standard ACL here. If you were out of numbered extended ACLs, which does happen, you’d simply write a named extended ACL. Since you can only filter on source IP addresses with a standard list,

you’d need to put it as close to the destination as possible. Placement can also be affected when you consider how inbound and outbound ACLs handle traffic. Outbound ACLs are applied after packets have already been sent to the outbound interface by the routing engine, but before they’re put in the transmission queue. In contrast, inbound ACLs are applied before the routing engine handles them.

All things being equal, the router’s better off by blocking traffic with an inbound ACL as opposed to an outbound ACL. Sometimes - like when you’re stuck using a standard ACL to block traffic - the ACL has to go on the outbound interface. Let’s do a lab involving ACL placement that will give us additional practice with standard and extended ACLs. Here’s our network:

The requirements for the first part of this lab:

A standard ACL must be used.

We want to prevent any host on network 11.11.11.0 /24 from accessing our moneybags server at 44.44.44.4, along with any other hosts we may add to that same network segment in the future.

All other networks should be allowed to access 44.44.44.0 /24.

Hosts on 11.11.11.0 /24 should be allowed to reach hosts on 33.33.33.0 /24.

Since a standard ACL allows us to filter only on the source IP address, we can't place the ACL on either interface on R2, since this would block all traffic from 11.11.11.0 /24. The only answer is to put the ACL on R3. But where? We can't put the ACL on the Serial interface, because that would have the same result as putting it on R2. We'd block all traffic coming in from 11.11.11.0 /24, and we were specifically told hosts on 11.11.11.0 /24 should be able to reach 33.33.33.0 /24. The ACL would have to go on the interface leading directly to 44.44.44.0 /24.

The config on R3 (Ethernet0 is the interface on 44.44.44.0 /24):

R3(config)#access-list 5 deny 11.11.11.0 0.0.0.255
R3(config)#access-list 5 permit any
R3(config)#int e0
R3(config-if)#ip access-group 5 out

Pings from the host 11.11.11.11 to the 33 network go through, but the pings to network 44 are successfully blocked. Note the ping results for the blocked ping, as I'm using a Cisco router for the host.

HOST#ping 33.33.33.33

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 33.33.33.33, timeout is 2 seconds:
!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 36/38/… ms

HOST#ping 44.44.44.4

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 44.44.44.4, timeout is 2 seconds:
U.U.U

Success rate is 0 percent (0/5)

Done and done! Or….. ARE we? We met the requirements, but it's not a solid real-world config. This config is inefficient at several levels:

The host is sending packets that have no chance of getting to 44.44.44.0 = wasted effort.

Packets processed and forwarded by R2 that have no chance of getting to 44.44.44.0 = wasted effort.

WAN bandwidth is sucked up by packets that will be stopped on the other side of the WAN = wasted bandwidth.

R3 has to process incoming packets it'll dump before forwarding them = wasted effort again!

An extended ACL is a far superior choice, since we can put that on R2, and traffic that has no chance to get to 44.44.44.0 /24 will be stopped before it crosses the WAN.

Here’s the config on R2:

R2(config)#access-list 110 deny ip 11.11.11.0 0.0.0.255 44.44.44.0 0.0.0.255
R2(config)#access-list 110 permit ip any any
R2(config)#int e0
R2(config-if)#ip access-group 110 in

Done and done – and in a much more efficient manner! In a nutshell, unless a practice exam (or real exam) question forces you to use a standard ACL, you should use an extended ACL. And if you run out of numbers, use a named

one!

To Block Or Not To Block, That's The Pinging Question!

Blocking pings is simple enough. Just write an ACL that blocks ICMP echo packets and allows everything else. Here, R2 and R3 are on the same Ethernet segment, and R2 has no problem pinging R3.

R2#ping 172.23.23.3

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.23.23.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = … ms

To block pings coming in on R3's Ethernet interface, we simply write an extended ACL specifying ICMP as the protocol and specifying echo at the end of the line. If we wanted to block ping responses, we'd configure echo-reply at the end of this line.

R3(config)#access-list 101 deny icmp any any ?
  <0-255>                      ICMP message type
  administratively-prohibited  Administratively prohibited
  alternate-address            Alternate address
  conversion-error             Datagram conversion
  dod-host-prohibited          Host prohibited
  dod-net-prohibited           Net prohibited
  dscp                         Match packets with given dscp value
  echo                         Echo (ping)
  echo-reply                   Echo reply

< readout truncated for clarity >

After applying the ACL to R3, R2 can no longer successfully ping R3.

R3(config)#access-list 101 deny icmp any any echo
R3(config)#access-list 101 permit ip any any
R3(config)#int e0
R3(config-if)#ip access-group 101 in

R2#ping 172.23.23.3

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.23.23.3, timeout is 2 seconds:
U.U.U

Success rate is 0 percent (0/5)

The show access-list and show ip access-list commands show you how many matches each line of the ACL has.

R3#show access-list
Extended IP access list 101
    deny icmp any any echo (5 matches)
    permit ip any any

R3#show ip access-list
Extended IP access list 101
    deny icmp any any echo (5 matches)
    permit ip any any

Why are we concerned about blocking or allowing pings in the first place? Many networks block pings and / or ping replies on their border router's outside interface to prevent network attacks. Those include reconnaissance attacks, which are not designed to cause damage. Rather, these are information-gathering attacks, and ping output is chock full o' info!

Don't go wild blocking pings on the inside of your network, though. Pings are the first thing we use when network connectivity is compromised, and if pings are blocked that can lead to "false negatives". That is, we think a device is down when it doesn't return pings, but it's actually up and the ACL is blocking the pings.

Advanced ACL Types Standard and extended ACLs fit the bill perfectly in many situations, but when you come right down to it, they’re hardcore when it comes to permitting and denying. Either you’re permitted to do something or you’re not - there’s no grey area with these ACL types. In networking, we’re always going to need some kind of exception to the rule. The following are more advanced ACLs than the standard and extended ones we generally use, and even if you don’t see them on your exam, you’ll definitely see

them in production networks. And if you do see them on your exam, you'll be happy you knew about them! Commonly referred to as "lock-and-key", dynamic ACLs are dynamic extended access lists. Certain Telnet users will be able to authenticate as usual, but their access to their intended destination is strictly temporary. Once their access time has elapsed, the access is terminated. It's much like a guest knocking on your door – you may choose to let them in for a certain period of time, then remind them

when it's time for them to go, and then lock the door right behind them when they leave. Here are the basic steps for lock-and-key: The remote user telnets to the router.

The router authenticates the user, and then creates an entry in the dynamic ACL that allows this

particular host access to networks that we’ve previously defined.

The natural question is "How long does the remote host have access to the specified network?" That's up to us as the network admins, and there are two different kinds of timeouts we can set:

an absolute timeout, where the remote host has "x" minutes of access, and that's it
an idle timeout, where the connection is terminated once no data is exchanged for "x" minutes

Time-based ACLs can be set to deny or permit traffic on -- you guessed it! -- the basis of time. These babies really come in handy on occasions where you need to allow or deny access for a certain period of time, or for certain days of the week. To write a time-based ACL, you

must first use the time-range command to define the times that certain lines of the ACL will be applied. You can do this on a per-day basis or choose daily, weekdays, or weekend, as shown by IOS Help.

R1(config)#time-range NOTELNET

R1(config-time-range)#periodic ?
  Friday     Friday
  Monday     Monday
  Saturday   Saturday
  Sunday     Sunday
  Thursday   Thursday
  Tuesday    Tuesday
  Wednesday  Wednesday
  daily      Every day of the week
  weekdays   Monday thru Friday
  weekend    Saturday and Sunday

We’ll choose weekend and then define an hourly time range.

R1(config-time-range)#periodic hh:mm Starting time

R1(config-time-range)#periodic to ending day and time

R1(config-time-range)#periodic

hh:mm Ending time - stays va minute

R1(config-time-range)#periodic

When you write the ACL, specify the protocol and port number as you normally would, and then add the time-range option at the end of the ACL line you want to be subject to the time range values.

R1(config)#access-list 101 deny tcp any any eq telnet time-range NOTELNET
R1(config)#access-list 101 permit ip any any

R1#show access-list 101
Extended IP access list 101
    10 deny tcp any any eq telnet time-range NOTELNET (inactive)
    20 permit ip any any

R1(config)#int s0/0
R1(config-if)#ip access-group 101 in

You’re all set! Note (inactive) in the output of show access-list. You’ll only see that when you have a time range defined, and that means that particular line of the ACL is not being applied because the present time is not in the time range. I set the lab router’s clock to a time when the NOTELNET time range would be valid, and here’s the result:

R1#show ip access-list

Extended IP access list 101
    10 deny tcp any any eq telnet time-range NOTELNET (active)
    20 permit ip any any

ACL Sequence Numbers I recommend when you’re writing an ACL, you first write it out in Notepad or your favorite word processing software and then check out the top-to-bottom logic. It’s amazing how many headaches that can save you. Sooner or later, though, you’ll write an ACL on the router and then realize you forgot a line. I just happen to have an example of that right here:

R1(config)#access-list 45 deny 172.12.0.0 0.0.255.255
R1(config)#access-list 45 deny 172.14.0.0 0.0.255.255
R1(config)#access-list 45 deny 172.16.0.0 0.0.255.255
R1(config)#access-list 45 permit any

After writing the ACL, you realize you meant to deny the 172.13.0.0 /16 network as well. In the good old days, you'd have to delete that ACL and type it in all over again, because any new lines would be added at the bottom. I used an older IOS to illustrate this old problem below.

R1(config)#access-list 45 deny 172.12.0.0 0.0.255.255
R1(config)#access-list 45 deny 172.14.0.0 0.0.255.255
R1(config)#access-list 45 deny 172.16.0.0 0.0.255.255
R1(config)#access-list 45 permit any

After saving the list, I add a line to the ACL and run show access-list to verify.

R1(config)#access-list 45 deny 172.13.0.0 0.0.255.255

R1#show access-list 45
Standard IP access list 45
    deny 172.12.0.0, wildcard bits 0.0.255.255
    deny 172.14.0.0, wildcard bits 0.0.255.255
    deny 172.16.0.0, wildcard bits 0.0.255.255
    permit any
    deny 172.13.0.0, wildcard bits 0.0.255.255

That list wouldn’t give us the desired result, so we’d have to remove the ACL from any interfaces it had been applied to, delete the

ACL, and then rewrite it. A real pain in the butt – but no longer! The Cisco IOS now assigns a sequence number to each line in an ACL, and you can use those sequence numbers to your advantage. After configuring the same list on a router with a more recent IOS, I'll run show ip access-list 45.

R1#show ip access-list 45
Standard IP access list 45
    10 deny 172.12.0.0, wildcard bits 0.0.255.255
    20 deny 172.14.0.0, wildcard bits 0.0.255.255
    30 deny 172.16.0.0, wildcard bits 0.0.255.255
    40 permit any

Note the sequence numbers at the far left. We didn't enter those manually – those numbers were assigned to each line by the IOS. Now we want to enter a line into this ACL that denies 172.13.0.0 /16, and we want it to be the very first line. We need to assign it a sequence number between 1 and 9, so we'll take the middle ground and assign it a sequence number of 5. We enter ACL config mode with the ip access-list command.

R1(config)#ip access-list standard 45

R1(config-std-nacl)#?
Standard Access List configuration commands:
  <1-2147483647>  Sequence Number
  default         Set a command to its defaults
  deny            Specify packets to reject
  exit            Exit from access-list configuration mode
  no              Negate a command or set its defaults
  permit          Specify packets to forward
  remark          Access list entry comment

R1(config-std-nacl)#5 deny 172.13.0.0 0.0.255.255

The change is verified with show access-list 45.

R1#show access-list 45
Standard IP access list 45
    5 deny 172.13.0.0, wildcard bits 0.0.255.255
    10 deny 172.12.0.0, wildcard bits 0.0.255.255
    20 deny 172.14.0.0, wildcard bits 0.0.255.255
    30 deny 172.16.0.0, wildcard bits 0.0.255.255
    40 permit any

You can remove a line in this mode as well. If we decided not to block access to 172.14.0.0 /16, we’d use the no option to remove that particular line.

R1(config)#ip access-list standard 45
R1(config-std-nacl)#no ?
  <1-2147483647>  Sequence Number
  deny            Specify packets to reject
  permit          Specify packets to forward

R1(config-std-nacl)#no 20 ?
  <cr>
R1(config-std-nacl)#no 20

R1#show access-list 45
Standard IP access list 45
    5 deny 172.13.0.0, wildcard bits 0.0.255.255
    10 deny 172.12.0.0, wildcard bits 0.0.255.255
    30 deny 172.16.0.0, wildcard bits 0.0.255.255
    40 permit any

That really beats rewriting an entire ACL! “no 20” seems like an odd command, but that short command packs a big punch!

A Quick Look At NTP Our time-based ACLs aren't going to be much good to us if the routers don't have and keep accurate time! It's vital to have time synchronized across your network, and the Network Time Protocol helps make that possible. If your network devices have different times configured, problems from the annoying to the critical can occur, including…

Timestamps on log entries will be incorrect, making troubleshooting more difficult than it already is
Secure certificates will not function correctly
Security services and tools that rely on consistent time across the network will not function correctly
Time-based ACLs are going to have a pretty hard time of it, too!

The typical NTP configuration begins with a network device or devices getting their time from a highly believable, secure source.

From there, those devices under our control give the time to the other devices in our network.

CCNA Security teaser ahead! You have to be careful that other devices outside your network aren’t using this router as their NTP server. That’s a lot of extra work on your router, and it’s not a secure network setup. NTP servers are classified by

stratum. At the top of the NTP hierarchy we'll find HUGE NTP master clocks, classified as "Stratum Zero". A Cisco router can't be configured as a Stratum Zero device, as shown here by IOS Help:

NTP_SERVER(config)#ntp master ?
  <1-15>  Stratum number

In this example, we’ll configure R1 as a NTP time server, and configure R3 to get its time from R1, verifying the final config with show ntp associations and show ntp status.

Note the NTP client has the ntp server command configured on it. There is no "ntp client" command.

R3(config)#ntp ?
  access-group        Control NTP access
  authenticate        Authenticate time sources
  authentication-key  Authentication key for trusted time sources
  broadcastdelay      Estimated round-trip delay
  clock-period        Length of hardware clock tick
  logging             Enable NTP message logging
  master              Act as NTP master clock
  max-associations    Set maximum number of associations
  peer                Configure NTP peer
  server              Configure NTP server
  source              Configure interface for source address
  trusted-key         Key numbers for trusted time sources

R3(config)#ntp server ?
  Hostname or A.B.C.D  IP address of peer
  vrf                  VPN Routing/Forwarding Information

R3(config)#ntp server 20.1.1.1

R1(config)#ntp master ?
  <1-15>  Stratum number
  <cr>

R1(config)#ntp master 5 ?
  <cr>

R1(config)#ntp master 5

On R3, I’ll run show ntp associations. Note both symbols next to “20.1.1.1”. The asterisk means the local router is synced with the NTP master; the other means the entry was statically configured. That synchronization only took a few seconds in a lab, but can take several minutes in a real-world network, so be patient with this command. (I’ve removed some of the output for clarity.)

R3#show ntp associations

  address         ref clock
*~20.1.1.1        127.127.7.1
* master (synced), # master (unsynced), + selected, - candidate, ~ configured

You also need to see the magic words “clock is synchronized” in the output of show ntp status. R3#show ntp status

Clock is synchronized, stratum

nominal freq is 250.0000 Hz, a

Hz, precision is 2**18

reference time is D5A99118.19B Mon Aug 5 2013)

clock offset is 0.3298 msec, r root dispersion is 1.74 msec, msec

That’s enough to get you started with NTP! Let’s head to the next section and spend some time with one of the most handy Cisco skills you’ll ever develop!

Route Summarization This is a fantastic technique for keeping your routing tables complete and concise! When our router looks for a given destination in the routing table, it will look at all possible routes in search of the best match for the destination in question. The larger the table, the more time this takes. Large routing tables are also a drain on router memory. EIGRP and OSPF both use different commands to perform

route summarization, but the process of coming up with the summarized route is the same. R2 is sending a routing update to R3 for the networks 100.4.0.0 /16, 100.5.0.0 /16, 100.6.0.0 /16, and 100.7.0.0 /16. Without route summarization, R2 will send an update containing the four individual routes.

To configure a summary for these networks, write the network numbers out in binary. (For obvious reasons, you don’t have to write out the last two octets.)

              1st Octet   2nd Octet
100.4.0.0     01100100    00000100
100.5.0.0     01100100    00000101
100.6.0.0     01100100    00000110
100.7.0.0     01100100    00000111

Moving left to right, determine what bits the network numbers have in common.

              1st Octet   2nd Octet
100.4.0.0     01100100    00000100
100.5.0.0     01100100    00000101
100.6.0.0     01100100    00000110
100.7.0.0     01100100    00000111

The networks have the first 14 bits in common. Just add the common bits and you have the summary, which in this case is 100.4.0.0. Simple, right? Right! BUT -- we're not quite done! We need a mask to go with that summary, and most of that work is

already done. The mask is determined by putting "1" in for each of the common bits of the summary network number, and "0" for the "don't care" bits. In this example, the binary mask would be 11111111 11111100 00000000 00000000, resulting in a mask of 255.252.0.0. The final summary address is 100.4.0.0 255.252.0.0, or 100.4.0.0 /14. Watch for both formats on the exam and in network documentation -- those two summaries are the same, just expressed in different ways.
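If you'd like to double-check your summarization math, here's a quick sketch in Python (a practice aid using only the standard ipaddress module, not a Cisco tool). The summarize() function name is mine; it just finds the bits the networks have in common, exactly as we did by hand.

import ipaddress

def summarize(networks):
    # Convert each network address to a 32-bit integer
    addresses = [int(ipaddress.ip_network(n).network_address) for n in networks]
    # Start at /32 and keep shortening the prefix until every address
    # falls into the same block - those are the in-common bits
    for prefix in range(32, -1, -1):
        mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
        if len({a & mask for a in addresses}) == 1:
            return ipaddress.ip_network((addresses[0] & mask, prefix))

summary = summarize(['100.4.0.0/16', '100.5.0.0/16', '100.6.0.0/16', '100.7.0.0/16'])
print(summary, summary.netmask)   # 100.4.0.0/14 255.252.0.0 - the same answer as the binary math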

The summary may include networks that don't exist (yet). The previous example is a "clean" route summarization; it includes only the four specified networks and no others. What if the routes to be summarized were 100.1.0.0, 100.2.0.0, 100.3.0.0, and 100.4.0.0?

              1st Octet   2nd Octet
100.1.0.0     01100100    00000001
100.2.0.0     01100100    00000010
100.3.0.0     01100100    00000011
100.4.0.0     01100100    00000100

The four networks have their first 13 bits in common. The resulting network number 100.0.0.0 is the summary for the networks. The subnet mask is determined by putting "1" in for each common bit and "0" for the rest, resulting in a mask of 255.248.0.0. The final summary address is 100.0.0.0 255.248.0.0.

A routing issue could arise because this summary also includes networks 100.5.0.0, 100.6.0.0, and 100.7.0.0:

              1st Octet   2nd Octet
100.1.0.0     01100100    00000001
100.2.0.0     01100100    00000010
100.3.0.0     01100100    00000011
100.4.0.0     01100100    00000100
100.5.0.0     01100100    00000101
100.6.0.0     01100100    00000110
100.7.0.0     01100100    00000111

This does not make the summary

address wrong, but it does make routing problems a possibility if those yet-to-be configured networks are placed elsewhere in the network.
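Running the summarize() sketch from the previous example against this second set of networks shows the same thing the binary math did -- the summary quietly picks up 100.5.0.0 through 100.7.0.0 as well:

import ipaddress

summary = summarize(['100.1.0.0/16', '100.2.0.0/16', '100.3.0.0/16', '100.4.0.0/16'])
print(summary, summary.netmask)   # 100.0.0.0/13 255.248.0.0

# subnet_of() confirms the "extra" networks fall inside the summary
print(ipaddress.ip_network('100.5.0.0/16').subnet_of(summary))   # True
print(ipaddress.ip_network('100.7.0.0/16').subnet_of(summary))   # True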

Once the summary address and

mask have been determined, write out the next consecutive network number to see if the summary address will also represent that address. If so, be aware of the potential routing issues of using that network number in another section of the network. EIGRP is not a CCENT topic, but I would like you to see some real-world examples of route summarization, so I'm going to include this EIGRP example. You don't have to know EIGRP from a hole in the ground to benefit from

this demo. (We’ll take care of that in your CCNA studies!)

Route Summarization With EIGRP R2 is advertising four routes to R3 via EIGRP: 100.0.0.0 /8, 101.0.0.0 /8, 102.0.0.0 /8, and 103.0.0.0 /8.

To summarize the networks, convert them from dotted decimal to binary, and determine how many in-common bits exist from left to right.

              1st Octet   2nd Octet
100.1.0.0     01100100    00000001
101.1.0.0     01100101    00000001
102.1.0.0     01100110    00000001
103.1.0.0     01100111    00000001

The networks have six bits in common, resulting in the network number 100.0.0.0. The summary mask is determined by placing 1s in the mask for the in-common bits and 0s for the rest. The binary mask is 11111100 00000000 00000000 00000000, which in dotted decimal is 252.0.0.0.

The summary address and mask are 100.0.0.0 252.0.0.0. The interface-level command ip summary-address eigrp is used to advertise the summary.

R2#conf t
R2(config)#interface ethernet0
R2(config-if)#ip summary-address eigrp 100 100.0.0.0 252.0.0.0

And when we’re done, the

summarized route appears on R3!

R3#show ip route eigrp D 100.0.0.0/6 [90/2297856] v Ethernet0

OSPF route summarization isn’t quite as straightforward. The process of arriving at the summary and mask is the same, but the advertising of the route is a lot different, and it involves concepts you haven’t hit in your studies. We’ll leave examples of that for your CCNP studies. For the CCENT, be sure to practice

the math and processes in this short section and you’ll be ready for success on exam day. Let’s head to the next section and remove the dread from the dreaded IP Version 6!

IP Version 6 IP Version 6 is all around us today, and even if you’re not working directly with it today, you will be one day! Well, you will be if you’ve taken the initiative to learn IPv6. A lot of network admins have put off learning IPv6, which is a huge mistake. Even if it doesn’t impact your current career, you’re definitely limiting your future prospects if you aren’t strong with IPv6 – and you’re strengthening

your prospects when you are! By studying the material in this section, you’ll have a strong foundation in IPv6, and your future success is all about the foundation you build today. The IPv6 addresses themselves are the scariest part of IPv6 for many admins, and we’ll dive right into addresses – and you’re going to master them! The IPv6 Address Format

Typical IPv4 address: 129.14.12.200

Typical IPv6 address: 1029:9183:81AE:0000:0000:0AC1:2 As you can see, IPv6 isn’t exactly just tacking two more octets onto an IPv4 address! I haven’t met too many networkers who really like typing, particularly numbers. You’ll be happy to know there are some rules that will shorten those addresses a bit, and it’s a very good idea to be fluent with these rules for your CCNA exam.

You’ll also need the skill of reexpanding the addresses from their compressed state to their full 128bit glory, and you’ll develop that skill in this section as well. Be sure to have something to write with and on when studying this section. Zero Compression And Leading Zero Compression When you have consecutive blocks of zeroes in an IPv6 address, you can represent all of them with a single set of colons. It doesn’t

matter if you have two fields or eight, you can simply type two colons and that will represent all of them. The key is that you can only perform this zero compression once in an IPv6 address. Here’s an example:

Original format: 1234:1234:0000:0000:0000:0000:3456:3434 Using zero compression: 1234:1234::3456:3434 Since blocks of numbers are separated by a single colon in the first place, be careful when

scanning IPv6 addresses for legality. If you see two double-colon sets in the same address, it's an illegal address – period, no exceptions. (Hooray!) We can also drop leading zeroes in any block, but each block must have at least one number remaining. You can perform leading zero compression in any address as many times as you like. By the way, I refer to each individual set of numbers in an IPv6 address as "blocks" and occasionally "fields"; you can call them whatever you like, since

there’s no one official term.

Let’s look at an example of leading zero compression. Taking the address 1234:0000:1234:0000:1234:0000:12 we have four different blocks that have leading zeroes. The address could be written out as it is, or we can drop the leading zeroes.

Original format: 1234:0000:1234:0000:1234:0000:01 With leading zero compression: 1234:0:1234:0:1234:0:123:1234 For your exam and for the real

world, both of those expressions are correct. It’s just that one uses leading zero compression and the other does not. Watch that on your exam! Using zero compression and leading zero compression in the same address is perfectly legal:

Original format: 1111:0000:0000:1234:0011:0022:0033:0044 With zero and leading zero compression: 1111::1234:11:22:33:44 Zero compression uses the double colon to replace the second and

third block of numbers, which were all zeroes. Leading zero compression replaced the "00" at the beginning of each of the last four blocks. Just be careful and take your time with both zero compression and leading zero compression and you'll do well on the exam and in the real world.
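If you want to check your compression practice, Python's standard ipaddress module will do it for you -- a quick sketch using that last example (the full address is spelled out only for the demo):

import ipaddress

addr = ipaddress.IPv6Address('1111:0000:0000:1234:0011:0022:0033:0044')
print(addr.compressed)   # 1111::1234:11:22:33:44 - zero compression plus leading zero compression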

Why Can’t You Use Zero Compression More Than Once? As soon as you tell me I can’t do something, I want to know why – and then I’ll probably try it anyway. (Mom always said I was a strongwilled child.) So when I was checking out IPv6 for the first time and ran into that zero compression limitation, I thought “Why can’t you use that more than once?” Let’s check out this example to see why:

1111:0000:0000:2222:0000:0000:0000:3333

If we were able to use zero compression more than once, we could compress that address thusly: 1111::2222::3333 Great! But what happens when the full address is needed? We know there are eight blocks of numbers in an IPv6 address, but how would we know the number of blocks represented by each set of colons? That full address could be this:

1111:0000:2222:0000:0000:0000:0000:3333 Or this:

1111:0000:0000:0000:0000:2222:0000:3333

Or this!

1111:0000:0000:0000:2222:0000:0000:3333 If multiple uses of zero compression were legal, every one of those addresses could be represented by 1111::2222::3333 – and none of them would actually be the original address! That's why using zero compression more than once in an IPv6 address is illegal – there would be no way to know exactly what the original address was, which would kind of defeat the purpose of compression!
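That ambiguity is exactly why every IPv6 parser rejects a second double colon. A tiny sketch with Python's standard ipaddress module makes the point:

import ipaddress

try:
    ipaddress.IPv6Address('1111::2222::3333')
except ipaddress.AddressValueError as err:
    print(err)   # the address is rejected - more than one "::" is illegal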

“The Trailing Zero Kaboom” Watch this one – it can explode points right off your score. When you’re working with zero compression, at first it’s easy to knock off some trailing zeroes along with the full blocks of zeroes, like this:

1111:2222:3300:0000:0000:0000:0044:5555 … does NOT compress to… 1111:2222:33::44:5555 The correct compression: 1111:2222:3300::44:5555

You can’t compress trailing zeroes. That’s another way to identify illegal IPv6 addresses -- if you see multiple colon sets or zeroes at the end of a block being compressed, the address expression is illegal.

Decompressing While Avoiding The Bends Decompressing an IPv6 address is pretty darn simple. Example: 2222:23:a::bbcc:dddd:342 First, pad each block that has at least one value in it back out to four digits by adding zeroes at the beginning. The result: 2222:0023:000a::bbcc:dddd:0342 Next, insert fields of zeroes where you see the double colon. How many fields, you ask? Easy! Just count how many blocks you see now and subtract it from eight. In

this case, we see six blocks, so we know we need two blocks of zeroes to fill out the address.

2222:0023:000a:0000:0000:bbcc:dddd:0342 Done and done! This is also an easy skill to practice whenever you have a few minutes, and you don't even need a practice exam to do so. Just take a piece of paper, and without putting a lot of thought into it, just write out some compressed IPv6 addresses and then practice decompressing them. (You should put thought into that part.)
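And the same ipaddress module will check your decompression work, too -- .exploded writes the address back out in its full 128-bit form:

import ipaddress

addr = ipaddress.IPv6Address('2222:23:a::bbcc:dddd:342')
print(addr.exploded)   # 2222:0023:000a:0000:0000:bbcc:dddd:0342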

The Global Routing Prefix: It's Not Exactly A Prefix While the address formats of IPv4 and v6 are wildly different, the purpose of many of the IPv6 addresses we'll now discuss will seem familiar to you – and they should! These v6 addresses have some huge advantages over v4 addresses, particularly when it comes to subnetting and summarization. The IPv4 address scheme really wasn't developed with subnetting or summarization in mind, whereas IPv6 was developed with those helpful

features specifically in mind. In short, v6 addresses were born to be subnetted and summarized! I mention that here because our first address type was once often referred to as “aggregateable global unicast address”. Thankfully, that first word’s been dropped, but the global unicast address was designed for easier summarization and subnetting. Basically, when your company gets a block of IPv6 addresses from an ISP, it’s already been subnetted a bit. At the top of the “IPv6 address subnet food chain” is the IANA, the

Internet Assigned Numbers Authority (http://www.iana.org/). The IANA has the largest block of addresses, and assigns subnets from those blocks to Regional Internet Registries (RIRs) in accordance with very strict rules. In turn, the Registries assign subnets of their address blocks to ISPs. (The IANA never assigns addresses directly to ISPs!) These RIRs are located in five regions around the world: ARIN (North America), RIPE NCC (Europe and the Middle East), APNIC (Asia-Pacific), LACNIC (Latin America and the Caribbean), and AFRINIC (Africa).

The ISPs then subnet their address blocks, and those subnets go to their customers. I strongly recommend you visit http://www.iana.org/numbers for more information on this process. It’s beyond the scope of the CCENT exam, but it’s cool to see where the Registries are, along with charts showing how the IANA keeps highly detailed information on where the IPv6 global unicast addresses have been assigned.

Now here’s the weird part – these blocks of addresses are actually referred to as “global routing prefixes”. When you think of a “prefix” at this point, you likely think of prefix notation (/24, for example). It’s just one of those IPv6 things we have to get used to. Here’s something else we need to get used to – you and I are now the network admins of Some Company

With No Name (SCWNN). And our first task awaits!

“Now What Do I Do?” We’ve requested a block of addresses from our ISP (a “global routing prefix”, in IPv6-speak), and we’ve got ‘em. Now what do we do? We subnet them! Hey, come back! It’s not that bad. Personally, I believe you’ll find IPv6 subnetting to be easier than IPv4 subnetting – after you get some practice in, of course! When we get the global routing prefix from our ISP, that comes with a prefix length, and in our example

we’ll use a /48 prefix length. The prefix length in IPv6 is similar to the network mask in IPv4. (The /48 prefix length is so common that prefixes with that length are sometimes referred to as simply “forty-eights”.) You might think that leaves us a lot of bits to subnet with, but there’s also an Interface Identifier to work with, and it’s almost always 64 bits in length. This ID is found at the end of an IPv6 address, and it identifies the host. We’ll go with that length in this exercise.

So far we have a 48-bit prefix and a 64-bit identifier. That's 112 bits, and since our addresses are 128 bits in length, that leaves us 16 bits for --- subnetting!

Global Routing Prefix: 2001:1111:2222 (48 bits)
Subnet ID: 16-bit value found right after the GRP
Interface ID: 64-bit value that concludes the address

Can we really create as many subnets as we'll ever need in our

company with just 16 bits? Let’s find out. We use the same formula for calculating the number of valid subnets here as we did with v4 – it’s 2 to the Nth power, with “N” being the number of subnet bits. 2 to the 16th power is 65,536. That should cover us for a while! Now we need to come up with the subnet IDs themselves.

Determining The Subnet ID Nothing to it, really. In our example of 2001:1111:2222 as the global routing prefix, we know that the next block will represent the subnets. You can just start writing them out ( or entering them in a spreadsheet – highly recommended) and go from there. Your first 11 subnets are 0001, 0002, 0003, 0004, 0005, 0006, 0007, 0008, 0009, 000A, and 000B. I listed that many as a gentle reminder that we’re dealing with hex here! Our first full subnet is

2001:1111:2222:0001::/64, the next is 2001:1111:2222:0002::/64, and so forth. That's it! Just be sure to keep careful records as to where each of your subnets are placed in your network, and I strongly recommend you issue them sequentially rather than just pulling values at random. Now we're going to start assigning IPv6 addresses to router interfaces. We have options with IPv6 that are similar to IPv4's static assignment and DHCP, but there are important differences we must be aware of in order to pass the exams – and just as

importantly, to be ready to work with IPv6 in the field. Let’s get to work
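Before we do, here's a quick sketch of that subnetting exercise in Python, using the made-up 2001:1111:2222::/48 prefix from above. The standard ipaddress module carves the /48 into /64s for us (note that it starts counting at subnet 0000):

import ipaddress
import itertools

block = ipaddress.ip_network('2001:1111:2222::/48')
subnets = block.subnets(new_prefix=64)    # a generator of all 2**16 = 65,536 /64 subnets

for subnet in itertools.islice(subnets, 4):
    print(subnet)
# 2001:1111:2222::/64
# 2001:1111:2222:1::/64
# 2001:1111:2222:2::/64
# 2001:1111:2222:3::/64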

First Things First: Enable IPv6 Routing – Twice? We don't think twice about using IPv4 routing on a Cisco router, since it's on by default. However, when using IPv6 routing, you need to enable it twice:

Enable IPv6 routing globally with the ipv6 unicast-routing command
Enable IPv6 routing on an interface level with ipv6 address, followed by the IPv6 address itself.

V6ROUTER1(config)#ipv6 unicast-routing

V6ROUTER1(config)#int fast 0/0

V6ROUTER1(config-if)#ipv6 address 2001:1111:2222:0001:1::/64

You won’t get a message that IPv6 routing has been enabled after you run ipv6 unicast-routing, nor will pigeons be let loose, so you better verify with show ipv6 interface and show ipv6 interface brief. Note: It’s really easy to leave the “ipv6” part of those commands out, since we’re used to running those commands without it.

Another note: I’m going to truncate the output of both of these commands for now – you’ll see the full output later. V6ROUTER1#show ipv6 interface

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240
  No Virtual link-local address(es):
  Global unicast address(es):
    2001:1111:2222:1:1::, subnet is 2001:1111:2222:1::/64
  Joined group address(es):
    FF02::1
    FF02::2
    FF02::1:FF00:0
    FF02::1:FFEF:D240

A little of this output is familiar, particularly that first line. Just as with IPv4, we need our IPv6 interface to show as “up” and “up” – and if they’re not, we go through the exact same troubleshooting checklist as we would if this were an IPv4 interface. Always start troubleshooting at the physical

layer, no matter what version of IP you're running! Since we're good on the physical and logical state of the interface, we can look at the rest of the config – and everything's different here! We see the global unicast address we configured on the interface, and the subnet is right next to that. After that, we seem to have joined some groups, and we've also got something called a "link-local address". Before we delve into those topics, let's have a look at show ipv6 interface brief. V6ROUTER1#show ipv6 interface brief

FastEthernet0/0            [up/up]
    FE80::20C:31FF:FEEF:D240
    2001:1111:2222:1:1::
Serial0/0                  [administratively down/down]
FastEthernet0/1            [administratively down/down]
Serial0/1                  [administratively down/down]

Brief, eh? All we get here is the state of each interface on the router, and the IPv6 addresses on the IPv6-enabled interfaces. Note the output doesn't even tell you what those two addresses even are, so we better know the top one is the link-local address and the bottom one is the global unicast address. We know what the global unicast

address is, so let’s spend a little time talking about that link-local address – tis an important IPv6 concept!

The Link-Local Address Another “name is the recipe” topic! Packets sent to a link-local address never leave the local link – they’re intended only for hosts on the local link, and routers will not forward messages with a link-local address as a destination. (Since these are unicast messages, the only host that will process it is the one it’s unicast to.) Fun fact: IPv4 actually has linklocal addresses, but they rarely come into play. In IPv6, a link-local address is assigned to any and every IPv6-enabled interface. We

didn’t configure a link-local address on our Fast 0/0 interface, but when we ran our show ipv6 interface commands, we certainly saw one! V6ROUTER1#show ipv6 interface

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240

Soooo… if we didn’t configure it, where did it come from? Is our router haunted by a g-g-gghoooooooost? Nothing as fun as that. The router

simply created the link-local address on its own, in accordance with a few simple rules. I’m sure you noticed that address was expressed using zero and leading zero compression, so let’s decompress it and examine the address in all its 128-bit glory. Compressed: FE80::20C:31FF:FEEF:D240

Uncompressed: FE80:0000:0000:0000:020C:31FF:FEEF:D240 According to the official IPv6 address standards, the link-local reserved address block is FE80::/10. That means the first ten

bits have to match FE80, and breaking that down into binary…. (8,4,2,1 for each block) FE80 = 1111 1110 1000 0000 … we see that by setting the last two bits in the third block to all possible different values, we end up with 1000, 1001, 1010, and 1011. That means link-local addresses should be able to begin with Fe8, Fe9, FeA, and FeB. However, RFC 4291 states the last 54 bits of a link-local address should all be set to zero, and the

only value that makes that possible is Fe80. Following that standard – which is exactly what you should do on exam day and in the field – link-local addresses should begin with Fe80, followed by three blocks of zeroes. So far, our link-local address is Fe80:0000:0000:0000. We’re 64 bits short, and the Cisco router’s going to take care of that by creating its own interface ID via EUI-64 rules. And while the router will figure out its own interface identifier in the field, you may just be asked to determine a couple of

these on your exam or job interview. With that said, let’s take a close look at the process and compare it to what we’re seeing on our live equipment!

How Cisco Routers Create Their Own Interface Identifier It's easy, and I'd be ready to perform this little operation on exam day. The router just takes the MAC address on the interface, chops it in half, sticks FFFE in the middle, and then performs one little bit inversion. Done! In our example, we'll use 11-22-33-aa-bb-cc. Chop it in half and put the FFFE in the middle… 1122:33FF:FEAA:BBCC … and you're almost done.

Write out the first two hex digits in binary, "11" in this case, and invert the 7th bit. "Invert the bit" is a fancy way of saying "If it's a zero, make it a one, and if it's a one, make it a zero."

11 = 0001 0001
Invert the 7th bit: 0001 0011, which is hex 13.

Replace the first two characters with the ones you just calculated,

and you’re done! The interface identifier is 1322:33FF:FEAA:BBCC. Let’s practice this skill using the MAC address of FastEthernet 0/0 on our live IPv6 router. V6ROUTER1#show int fast 0/0

FastEthernet0/0 is up, line protocol is up
  Hardware is AmdFE, address is 000c.31ef.d240 (bia 000c.31ef.d240)

The MAC address is 000c.31ef.d240, so we’ll split that right in half and put FFFE in the middle:

000c:31FF:FEEF:D240 Now for that bit inversion! We know 00 = 0000 0000, so invert the 7th bit to a 1, and we have 0000 0010, which equals 02. Put the "02" in the address in place of the "00" at the beginning of the identifier, and we have…. 020c:31FF:FEEF:D240 … and after a (very) little leading zero compression, we're left with 20C:31FF:FEEF:D240. Is that correct? Let's check out that link-local address….

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240

We’re right! The full link-local address is shown, and after the zero compression of the prefix FE80:0000:0000:0000, the interface identifier is listed – and it matches our calculations exactly! While this is an important process to know about, you can also configure an interface’s link-local address with the ipv6 address command:

V6ROUTER1(config-if)#ipv6 address ?
  WORD                General prefix name
  X:X:X:X::X          IPv6 link-local address
  X:X:X:X::X/<0-128>  IPv6 prefix
  autoconfig          Obtain address using autoconfiguration

Naturally, you have to abide by the link-local address rules we talked about earlier.
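Here's the whole modified EUI-64 routine in one short Python sketch, handy for practicing the math -- split the MAC, insert FFFE, flip the 7th bit. The function name is mine; the router does all of this on its own, of course.

def eui64_interface_id(mac):
    # Accept "11-22-33-aa-bb-cc", "11:22:33:aa:bb:cc", or "000c.31ef.d240"
    digits = mac.replace('-', '').replace(':', '').replace('.', '').lower()
    # Split the MAC in half and insert fffe in the middle
    eui = digits[:6] + 'fffe' + digits[6:]
    # Invert the 7th bit of the first byte
    first_byte = int(eui[:2], 16) ^ 0b00000010
    eui = format(first_byte, '02x') + eui[2:]
    # Return four 16-bit blocks, the way the ID appears in an IPv6 address
    return ':'.join(eui[i:i + 4] for i in range(0, 16, 4))

print(eui64_interface_id('11-22-33-aa-bb-cc'))   # 1322:33ff:feaa:bbcc
print(eui64_interface_id('000c.31ef.d240'))      # 020c:31ff:feef:d240 - matches our live router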

Using The EUI-64 Process With The ipv6 Address Command Earlier, we statically applied the full IPv6 address to the FastEthernet 0/0 interface, and that’s one way to get that address on the interface. However, if you just want the address to be unique and you don’t need to assign a certain specific address to the interface, you can use the eui-64 option with the ipv6 address command to come up with a unique address. I’ll use that option on the live equipment, after first removing the

full address we applied earlier.

V6ROUTER1(config)#int fast 0/0

V6ROUTER1(config-if)#no ipv6 address 2001:1111:2222:1:1::/64

Enter the prefix and prefix length, followed by eui-64.

V6ROUTER1(config-if)#ipv6 address 2001:1111:2222:1::/64 ?
  anycast  Configure as an anycast address
  eui-64   Use eui-64 interface identifier
  <cr>

V6ROUTER1(config-if)#ipv6 address 2001:1111:2222:1::/64 eui-64

Verify the global unicast address creation with show ipv6 interface.

V6ROUTER1#show ipv6 int fast 0/0

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240
  No Virtual link-local address(es):
  Global unicast address(es):
    2001:1111:2222:1:20C:31FF:FEEF:D240, subnet is 2001:1111:2222:1::/64 [EUI]

Note the global unicast address is

now the prefix followed by the same EUI-64 interface identifier we saw in the link-local address. The result is a unique address that was calculated in part by the router, and not totally configured by us. Would you believe there's a third way for that interface to get its address? Since the first two methods have been static configurations, I bet you think this one's dynamic. Let's use IOS Help to see that one…

V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 address ?
  WORD                General prefix name
  X:X:X:X::X          IPv6 link-local address
  X:X:X:X::X/<0-128>  IPv6 prefix
  autoconfig          Obtain address using autoconfiguration

Sounds kinda dynamic! More about autoconfiguration later – right now, let’s talk about the IPv6 equivalent of IPv4’s Address Resolution Protocol!

The Neighbor Discovery Protocol NDP allows an IPv6 host to discover its neighbors – but you already knew that just by reading the protocol name. The "neighbors" we're talking about here are other hosts and routers, and the process for discovering the routers is different from the host-discovery process. Let's start with finding our routers! To start the router discovery process, the host sends a Router Solicitation multicast onto its local link. The destination is FF02::2, the "All-IPv6-Routers" address. The

primary value the host wants is the router’s link-local address.

Any router on the link that hears that message will respond with a Router Advertisement packet. That advertisement can have one of two destination addresses:

If the querying host already has an IPv6 address, that address was the source of the RS message, and the router will unicast its RA back to that address.
If the querying host does not yet have an IPv6 address, the source address of the RS will be all zeroes, and in that case the router will multicast the RA to FF02::1, the "All IPv6 Nodes" address.

IPv6 routers don’t just sit around and wait to be asked for that info; on occasion, they’ll multicast it onto the link without receiving an RS. By default, the RA is multicast to FF02::1 every 200 seconds.

Now that we’re successfully discovering routers, let’s start discovering neighbors, with the aptly-named Neighbor Solicitation

and Neighbor Advertisement messages! The Neighbor Solicitation message is the rough equivalent of IPv4’s ARP Request. The main difference is that an ARP Request asked for the MAC address of the device at a particular IPv4 address….

… and a Neighbor Solicitation message asks the neighbors found in the solicited-node multicast address range of the destination IPv6 address to reply with their link-layer addresses.

This leads us to the musical question “What the $&%*)%*)*$ is a solicited-node multicast address?” Welllll, this isn’t exactly one of those “the name is the recipe”

protocols we’ve seen in this course, so let’s take a few minutes to examine this address and figure out exactly what the “range” is.

The Solicited-Node Multicast Address "Dying is easy. Comedy is hard." -- Edmund Kean "Determining the solicited-node multicast address for a given IPv6 address is easy. Figuring out what the heck a 'solicited-node multicast address' is – now THAT'S hard." -- Chris Bryant I doubt my quote goes down in posterity, but it really does apply to this little section of our studies. Here's the deal with this address. It

is a multicast that goes to other hosts on the local link, but not to all hosts on the local link -- just the ones that have the same last six hex values as the destination IPv6 address of the message. I kid you not – that's what it is! This wasn't developed just to be funny or to help create tricky exam questions. There are IPv6 services that rely on this address, and you'll see those in future studies. For right now, we need to know what this address is (covered) and how to determine the solicited-node multicast address for a given IPv6

address (coming right up!) This address is actually in the output of show ipv6 interface, but we better know where and how it was calculated, since neither is very obvious. I’ve left in a little more info in this command output than I have in the past – there’s a big hint as to where to find the solicited-node multicast address. V6ROUTER1#show ipv6 interface

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240
  No Virtual link-local address(es):
  Global unicast address(es):
    2001:1111:2222:1:20C:31FF:FEEF:D240, subnet is 2001:1111:2222:1::/64 [EUI]
  Joined group address(es):
    FF02::1
    FF02::2
    FF02::1:FFEF:D240

Under “joined group address(es)”, you see three different addresses. The first two, FF02::1 and FF02::2, we saw earlier in this section. The third, FF02::1:FFEF:D240, is the

solicited-node multicast address for the local host. Solicited-node addresses always begin with FF02::1:FF. To get the rest, just grab the last six digits of the global unicast address, and tack it right on the end of the multicast address. V6ROUTER1#show ipv6 interface

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240
  No Virtual link-local address(es):
  Global unicast address(es):
    2001:1111:2222:1:20C:31FF:FEEF:D240, subnet is 2001:1111:2222:1::/64 [EUI]
  Joined group address(es):
    FF02::1
    FF02::2
    FF02::1:FFEF:D240

That’s it! Now back to our Neighbor Solicitations and Advertisements! When last we left our IPv6 host, now named “Host A”, it was sending a Neighbor Solicitation to

the solicited-node multicast address that corresponds with the IPv6 address of the destination host, "Host B".

You can see how this cuts down on overhead when compared to IPv4's ARP. This initial request for information is a multicast that's going to be processed by a very few hosts on the link, whereas an IPv4

ARP Request was a broadcast that every host on the link had to stop and take a look at. After all that, it’s time for a Neighbor Advertisement! Host B answers the NS with an NA, and that NA contains Host B’s linklocal address. Host A pops that address into its Neighbor Discovery Protocol neighbor table (the equivalent of IPv4’s ARP cache), and we’re done!
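If you'd like to practice the solicited-node derivation itself, here's a short Python sketch -- OR the last 24 bits (six hex digits) of the unicast address onto the reserved FF02::1:FF00:0 prefix. The function name is mine:

import ipaddress

def solicited_node(address):
    low24 = int(ipaddress.IPv6Address(address)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address('ff02::1:ff00:0'))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node('2001:1111:2222:1:20c:31ff:feef:d240'))
# ff02::1:ffef:d240 - the group we saw under "Joined group address(es)"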

DHCP In IPv6 DHCP is one of the most useful protocols we'll ever use, so IPv6 certainly wasn't going to eliminate it – but just as we can always get better, so can protocols. Let's jump into DHCP for IPv6, starting with a comparison of Stateful DHCP and Stateless DHCP. Stateful DHCPv6 works a lot like the DHCP we've come to know and love in our IPv4 networks. See if this story sounds familiar: "A host sends a DHCP message, hoping to hear back

from a DHCP server. The server will give the host a little initial information, and after another exchange of packets, the host is good to go with the IP address it accepted from the server. That address is good for the duration of the lease, as defined by the server. There are four overall messages in the entire DHCP process, two sent by the client and two by the server. The location of the DNS servers is also given to the

client. The server keeps a database of information on clients that accept the IP addresses that it offers. A problem comes in when there’s a router in between our host and DHCP server. In that case, we need the router to act as a relay agent. “ Those paragraphs describe both DHCPv4 and Stateful DHCPv6. There are some differences, of course:

The DHCPv6 messages Solicit, Advertise, Request, and Reply take the place of DHCPv4's Discover, Offer, Request, and Acknowledgement messages. Note that while DHCPv6 lets the client know where the DNS servers are, just like DHCPv4 does, DHCPv6 does not include default router information as DHCPv4 does. The host will get that information from NDP. Overall, the DHCPv6 Relay Agent operation is just like that of DHCPv4. There are obviously some different messages and

addresses involved, but this illustration of a typical Relay Agent operation will show you how similar the two are.

That Solicit message is link-local in scope, so if there’s a router between the host and the DHCP server, we have to configure the router as a relay agent. We do that

by configuring the ipv6 dhcp relay command on the interface that will be receiving the DHCP packets that need to be relayed.

V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 dhcp ?
  client  Act as an IPv6 DHCP client
  relay   Act as an IPv6 DHCP relay agent
  server  Act as an IPv6 DHCP server

V6ROUTER1(config-if)#ipv6 dhcp relay ?
  destination  Configure relay destination

V6ROUTER1(config-if)#ipv6 dhcp relay destination ?
  X:X:X:X::X  IPv6 address of the relay destination

V6ROUTER1(config-if)#$elay destination 2001:1111:2222:1:20E:D7FF:FEA4:F4A0

The dollar sign appears at the far left of the input, since this command is too long for the screen. As a result of this command, the router will relay the DHCP Solicit to the destination we specify. When the router sees return messages from the DHCP server, the router will relay those messages to Host A. Verify the router is a now a member of the “All DHCP Servers and Agents” multicast group with the show ipv6 interface command. The interface with the relay agent config will show FF02::1:2 under “Joined

Group Address(es)”.

V6ROUTER1#show ipv6 int fast 0/0

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240
  No Virtual link-local address(es):
  Global unicast address(es):
    2001:1111:2222:1:20C:31FF:FEEF:D240, subnet is 2001:1111:2222:1::/64 [EUI]
  Joined group address(es):
    FF02::1
    FF02::2
    FF02::1:2
    FF02::1:FFEF:D240

Now let’s have a look at Stateless Autoconfiguration! Where Stateful Autoconfiguration has a lot in common with DHCPv4, Stateless is a whole new world. We have hosts that create their own IPv6 addresses! That process starts with some info the host received from the router way back during those Router Solicitation and Router Advertisement messages. We

discussed a little of that info at that time, but here’s some more detail on what the RA contains – and one important value it does NOT contain.

Among the information contained in that RA sent to the host is the link’s prefix and prefix length, and that

info allows the host to get started on creating its own IP address. All the host has to do is tack its 64-bit interface identifier onto the back of the 64-bit prefix, and voila …. A 128-bit IPv6 address! There’s a very good chance this address will be unique on the local link, but we don’t want to leave that kind of thing to chance. Instead, that local host will perform the Duplicate Address Detection procedure before using this newly created IPv6 address.
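Here's a tiny sketch of that math, just to drive the point home -- the /64 prefix learned from the RA plus the 64-bit EUI-64 interface identifier we calculated earlier:

import ipaddress

prefix = ipaddress.ip_network('2001:1111:2222:1::/64')   # learned from the Router Advertisement
interface_id = 0x020c31fffeefd240                        # the EUI-64 interface identifier

address = ipaddress.IPv6Address(int(prefix.network_address) | interface_id)
print(address)   # 2001:1111:2222:1:20c:31ff:feef:d240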

A True DAD Lecture When I give a quick reminder about acting responsibly in the field – using the remark option with your ACLs, running undebug all before you leave a client site, that kind of thing – I usually refer to it as a “dad lecture”. What follows here is a real DAD lecture – the Duplicate Address Detection procedure, that is! It’s also a quick lecture, because DAD is a very quick process. Basically, DAD is the host attempting to talk to itself, and if the host succeeds in doing so, there’s a

duplicate address problem. To perform DAD, the host just sends a Neighbor Solicitation message to its own address.

Then one of two things will happen:

The host that sent the NS receives a Neighbor Advertisement (NA), which means another host on the link is already using that address, and the host that wanted to use it can't do so.
The host that sent the NS doesn't hear anything back, so it's okay for that host to use its new address.

And that's it! DAD is just a quick, handy little check the interface runs when it's about to use an IPv6 unicast address for the first time, or when an interface that already had an IPv6 address in use is brought down and then back up for any reason. This little double-check can spare you some big headaches!

So What About DNS? In short, we've got to have a DHCP server to get the DNS server info to the hosts. Even though Stateless Autoconfiguration doesn't eliminate the need for a DHCP server, it comes very close, and there's a lot less to configure, verify, and maintain when the only thing our DHCP servers are responsible for is getting out the word about the DNS server locations. RFC 6106 lists RA options for DNS information. That doc is beyond the scope of the CCENT and CCNA exams, but it is worth

noting that they’re working on ways to get DNS information to the hosts without using a DHCP server. tools.ietf.org/html/rfc6106

Pining for Pinging Pings and traceroutes work much the same in IPv6 and IPv4. We just have to be aware of a small difference or two. Here are the current addresses of R1 and R3, along with a handy little reminder of a handy little command: R1: V6ROUTER1#show ipv6 interface

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20C:31FF:FEEF:D240
  No Virtual link-local address(es):
  Global unicast address(es):
    2001:1111:2222:1:20C:31FF:FEEF:D240, subnet is 2001:1111:2222:1::/64 [EUI]

R3: V6ROUTER3#show ipv6 interface

FastEthernet0/0 is up, line protocol is up
  IPv6 is enabled, link-local address is FE80::20E:D7FF:FEA4:F4A0
  No Virtual link-local address(es):
  Global unicast address(es):
    2001:1111:2222:1:20E:D7FF:FEA4:F4A0, subnet is 2001:1111:2222:1::/64 [EUI]

Let’s send a ping between R1 and R3. We can use the good ol’ fashioned ping command….

V6ROUTER1#ping 2001:1111:2222:1:20E:D7FF:FEA4:F4A0

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 2001:1111:2222:1:20E:D7FF:FEA4:F4A0, timeout is 2 seconds:
!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 0/0/4 ms

… or the extended ping command, using ipv6 as the protocol:

V6ROUTER1#ping
Protocol [ip]: ipv6

Target IPv6 address: 2001:1111:2222:1:20e:d7ff:fea4:f4a0
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands? [no]:
Sweep range of sizes? [no]:

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos 2001:1111:2222:1:20E:D7FF:FEA4 seconds: !!!!!

Success rate is 100 percent (5 min/avg/max = 0/0/4 ms

Traceroute works just as it did for v4. Granted, there’s not much of a path with this setup, but as your v6 networks grow, so will your traceroute output. The escape sequence is the same, too – the only thing that changes is the format of the address you enter.

Believe me, you’ll be using ping a lot more than traceroute as you learn IPv6!

V6ROUTER1#traceroute 2001:1111:2222:1:20e:d7ff:fea4:f4a0

Type escape sequence to abort.

Tracing the route to 2001:1111:2222:1:20E:D7FF:FEA4

1 2001:1111:2222:1:20E:D7FF:F msec

I don’t want to overwhelm you with show ip v6 commands, since there are quite a few in the IOS (about 40

of them when I looked today), but there is one more I want to introduce youto in this course – show ipv6 neighbors. You can look at all of your router’s neighbors, or you can identify the local router’s interface to filter the output. V6ROUTER1#show ipv6 neighbors IPv6 Address State Interface FE80::20E:D7FF:FEA4:F4A0 000e.d7a4.f4a0 STALE Fa0/0

2001:1111:2222:1:20E:D7FF:FEA4

000e.d7a4.f4a0 STALE Fa0/0 Going from left to right---

The IPv6 Address field is certainly self-explanatory.
Age refers to the last time, in minutes, the neighbor was reachable. Static entries show a hyphen here.
Link-layer Addr is the MAC address of the neighbor.
State is way beyond the scope of your exams, but if you want to dig in, you'll find the state descriptions here:

http://www.cisco.com/en/US/docs/io xml/ios/ipv6/command/ipv6s4.html#wp1680937550

Interface refers to the local interface through which the neighbor is reached. Speaking of “local”, let’s spend a little time with our IPv6 route types and protocols. With both IPv4 and v6, there are no routes in the routing table by default. With IPv4, after we put IP addresses on the interfaces and then open them, we expect to see only connected routes. With IPv6, we’re going to see connected routes and a new route type, the local route. For clarity, I’m going to delete the

route codes from the table unless we’re actually talking about that route type. V6ROUTER1#show ipv6 route

IPv6 Routing Table - 3 entries
Codes: C - Connected, L - Local

C   2001:1111:2222:1::/64 [0/0]
     via ::, FastEthernet0/0
L   2001:1111:2222:1:20C:31FF:FEEF:D240/128 [0/0]
     via ::, FastEthernet0/0
L   FF00::/8 [0/0]
     via ::, Null0

We expect to see the connected route, but that local route's a new one on us. The IPv6 router will not only put a connected route into the table in accordance with the subnet configured on the local interfaces, but will also put a host route into the table for its own address on that subnet. In this case, it's R1's own address on that same Fast Ethernet segment.

Static and Default Routing Just as with ping and traceroute, both static and default static routing work under the same basic principles in IPv6 as they did in IPv4. We just have to get used to a slightly different syntax! In this lab, we’ll set up connectivity between R1 and a loopback on R3 with a regular static route, then with a default static route. It won’t surprise you to learn that we create both of these route types with the ipv6 route command, followed by some old friends as options!

V6ROUTER1(config)#ipv6 route 2001:2222:3333:1::/64 ?
  Dialer        Dialer interface
  FastEthernet  FastEthernet IEEE 802.3
  Loopback      Loopback interface
  MFR           Multilink Frame Relay bundle interface
  Multilink     Multilink-group interface
  Null          Null interface
  Port-channel  Ethernet Channel of interfaces
  Serial        Serial
  X:X:X:X::X    IPv6 address of next-hop

I removed some of the available

interface types for clarity, but yes, we have much the same choices with IPv6 as we did with IPv4 – the local exit interface or the IP address of the next hop! I personally like to use the next-hop address, since it’s easier to troubleshoot in case of trouble, but you can use either. Just as with IPv4, make sure to choose the local router exit interface or the next-hop address. Here, I used R3’s fastethernet0/0 IP address as the next-hop address, and that command is so long that it brought up the dollar sign in the

prompt. Hint: You can always run show ipv6 neighbors to grab the next-hop address via copy and paste rather than typing it in.

V6ROUTER1#show ipv6 neighbors
IPv6 Address                              Age  Link-layer Addr  State  Interface
FE80::20E:D7FF:FEA4:F4A0                    5  000e.d7a4.f4a0   STALE  Fa0/0
2001:1111:2222:1:20E:D7FF:FEA4:F4A0         5  000e.d7a4.f4a0   STALE  Fa0/0

V6ROUTER1(config)#$2001:2222:3333:1::/64 2001:1111:2222:1:20E:D7FF:FEA4:F4A0

Full command from config:
ipv6 route 2001:2222:3333:1::/64 2001:1111:2222:1:20E:D7FF:FEA4:F4A0

Let's send a ping from R1 to R3's loopback….

V6ROUTER1#ping 2001:2222:3333:1:20E:D7FF:FEA4:F4A0

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 2001:2222:3333:1:20E:D7FF:FEA4:F4A0, timeout is 2 seconds:
!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 0/1/4 ms

Success, indeed! Let’s run the exact same lab but with a default static route. First, we’ll remove the previous route by using our up arrow and then ctrl-a to go to front of the lonnnng command, and enter the word “no”:

V6ROUTER1(config)#no ipv6 route 2001:2222:3333:1::/64 2001:1111:2222:1:20E:D7F$

Then we’ll enter a default route, IPv6 style:

V6ROUTER1(config)#ipv6 route ::/0 2001:1111:2222:1:20E:D7FF:FEA4:F4A0

That’s right -- ::/0 plus the local router exit interface or next-hop IPv6 address is all you need! We’ll verify with ping:

V6ROUTER1#ping 2001:2222:3333:1:20E:D7FF:FEA4:F4A0

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 2001:2222:3333:1:20E:D7FF:FEA4:F4A0, timeout is 2 seconds:
!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 0/1/4 ms

Ta da!

When checking your V6 routing table, be sure to give it a twice-over – it's really easy to scan right past the routing table entry for the default static route. V6ROUTER1#show ipv6 route

IPv6 Routing Table - 4 entries
Codes: C - Connected, L - Local, S - Static

S   ::/0 [1/0]
     via 2001:1111:2222:1:20E:D7FF:FEA4:F4A0
C   2001:1111:2222:1::/64 [0/0]
     via ::, FastEthernet0/0
L   2001:1111:2222:1:20C:31FF:FEEF:D240/128 [0/0]
     via ::, FastEthernet0/0
L   FF00::/8 [0/0]
     via ::, Null0

OSPF For IPv6 (AKA the confusingly-named "OSPF Version 3") First things first: OSPF for IPv6 is the same thing as "OSPF Version 3". The OSPF for IPv4 we've all come to know and love is "OSPF Version 2". You rarely see "OSPFv2" used anywhere, so if you see the simple letters "OSPF", we're talking about the version for IPv4. Let's take a look at some basic OSPFv3 commands and compare OSPF v3 to IPv4's OSPF v2. In IPv6, you're not going to start an

OSPF configuration with router ospf. One major difference between OSPF v2 and OSPF v3 is that while OSPF v2 is enabled globally, OSPF v3 is enabled on a per-interface basis. This will automatically create a routing process.

R1(config-if)#ipv6 ospf 1 area 0

One similarity between the two versions is their use of the OSPF RID. OSPF v3 is going to use the exact same set of rules to determine the local router's RID - and OSPF v3 is going to use an IPv4 address

as the RID! If there is no IPv4 address configured on the router, you'll need to use our old friend router-id to create the RID. The RID must be entered in IPv4 format, even if you're only running IPv6 on the router.

R1(config-rtr)#router-id 1.1.1.1

Other similarities and differences between OSPF v2 and v3:

They both use the same overall terms and concepts when it comes to areas, LSAs, and the OSPF metric cost.
Values such as the hello and dead time must be agreed upon for an adjacency to form, and for that adjacency to remain in place.
The SPF algorithm is used by both versions, and dynamic neighbor discovery is supported by both.
One big difference – OSPFv3 routers do not have to agree on the prefix length.
OSPF v3 point-to-point and point-to-multipoint configurations do not elect DRs and BDRs, just like OSPF v2.
OSPF v3 headers are smaller than v2, since v3 headers have no authentication fields.
The OSPF v2 reserved address 224.0.0.5 is represented in OSPF v3 by FF02::5.
The OSPF v2 reserved address 224.0.0.6 is represented in OSPF v3 by FF02::6.

A Sample OSPFv3 Configuration As always, we need the ipv6 unicast-routing command to do anything IPv6-related. We also need the ipv6 router ospf 1 command enabled globally.

V6ROUTER1(config)#ipv6 unicast-routing

V6ROUTER1(config)#ipv6 router ?
  eigrp  Enhanced Interior Gateway Routing Protocol (EIGRP)
  ospf   Open Shortest Path First (OSPF)
  rip    IPv6 Routing Information Protocol (RIP)

V6ROUTER1(config)#ipv6 router ospf ?
  <1-65535>  Process ID

V6ROUTER1(config)#ipv6 router ospf 1
V6ROUTER1(config-rtr)#

*Nov 5 18:43:56.600: %OSPFv3-4 1 could not pick a router-id,

We never like to start a new config with a notification from the router, but this one’s easily resolved. One oddity of OSPFv3 is that you have to have an IPv4 dotted decimal value for the router to use as its OSPF RID – and if you have

no IPv4 addresses on the router, you must set a RID with the router-id command before you can even start your config! Crazy, I know, but true, as verified by that console message! Let’s set a RID of 1.1.1.1 on R1 and verify with show ipv6 ospf.

V6ROUTER1(config)#ipv6 router ospf 1
V6ROUTER1(config-rtr)#
*Nov 5 18:43:56.600: %OSPFv3-4-NORTRID: OSPFv3 process 1 could not pick a router-id, please configure manually

V6ROUTER1(config-rtr)#router-id 1.1.1.1

V6ROUTER1#show ipv6 ospf

Routing Process “ospfv3 1” with ID 1.1.1.1

Watch that “v6” in all of your “show ospf” commands! Here’s the R3 config:

V6ROUTER3(config)#ipv6 router ospf 1
V6ROUTER3(config-rtr)#
*Nov 5 18:59:45.566: %OSPFv3-4-NORTRID: OSPFv3 process 1 could not pick a router-id, please configure manually

V6ROUTER3(config-rtr)#router-id 3.3.3.3

V6ROUTER3#show ipv6 ospf

Routing Process “ospfv3 1” with ID 3.3.3.3

Now we’ll put the Fast 0/0 interfaces on each router into Area 0. I’ll run IOS Help to show you that quite a few options from OSPFv2 are here in OSPFv3:

V6ROUTER1(config)#int fast 0/0
V6ROUTER1(config-if)#ipv6 ospf ?
  <1-65535>            Process ID
  authentication       Enable authentication
  cost                 Interface cost
  database-filter      Filter OSPF LSA during synchronization and flooding
  dead-interval        Interval after which a neighbor is declared dead
  demand-circuit       OSPF demand circuit
  encryption           Enable encryption
  flood-reduction      OSPF Flood Reduction
  hello-interval       Time between HELLO packets
  mtu-ignore           Ignores the MTU in DBD packets
  neighbor             OSPF neighbor
  network              Network type
  priority             Router priority
  retransmit-interval  Time between retransmitting lost link state advertisements
  transmit-delay       Link state transmit delay

V6ROUTER1(config-if)#ipv6 ospf 1 ?
  area  Set the OSPF area ID

V6ROUTER1(config-if)#ipv6 ospf 1 area ?
  <0-4294967295>  OSPF area ID as a decimal value
  A.B.C.D         OSPF area ID in IP address format

V6ROUTER1(config-if)#ipv6 ospf 1 area 0

R3:

V6ROUTER3(config)#int fast 0/0
V6ROUTER3(config-if)#ipv6 ospf 1 area 0
V6ROUTER3(config-if)#^Z
V6ROUTER3#
*Nov 5 19:03:45.986: %OSPFv3-5-ADJCHG: Process 1, Nbr 1.1.1.1 on FastEthernet0/0 from LOADING to FULL, Loading Done

Seconds after finishing the config on R3, our adjacency is in place! We’ll verify with show ipv6 ospf neighbor, and you’ll see that much of the info from show ip ospf

neighbor in IPv4 made the cut to IPv6!

V6ROUTER1#show ipv6 ospf neighbor

Neighbor ID     Pri   State      Dead Time   Interface ID    Interface
3.3.3.3          1    FULL/BDR               4               FastEthernet0/0

Now let’s add R3’s loopback interface to the OSPF config by putting it into Area 1, and then check R1’s IPv6 routing table. I’ll leave the OSPF routes in the routing table this time.

V6ROUTER3(config)#int loopback V6ROUTER3(config-if)#ipv6 ospf

V6ROUTER1#show ipv6 route

IPv6 Routing Table - 4 entries

Codes: C - Connected, L - Loca

O - OSPF intra, OI - O 1, OE2 - OSPF ext

ON1 - OSPF NSSA ext 1, O C

2001:1111:2222:1::/64 [0/0 via ::, FastEthernet0/0

L

2001:1111:2222:1:20C:31FF:

via ::, FastEthernet0/0

OI 2001:2222:3333:1:20E:D7FF:F via FE80::20E:D7FF:FEA4:F4A L

FF00::/8 [0/0] via ::, Null0

We have our first inter-area route, and with a familiar pair of values in the brackets for that route! Let’s ping the loopback from R1….

V6ROUTER1#ping 2001:2222:3333: Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos 2001:2222:3333:1:20E:D7FF:FEA4

seconds: !!!!!

Success rate is 100 percent (5 min/avg/max = 0/2/4 ms

… and we’re done! When it comes to verifying and troubleshooting your OSPFv3 configs, you can almost always just put in “ipv6” for “ip” in your OSPFv2 show ip ospf commands and get the same information. You’ve already seen a few of these, and it can only help to see them again:

show ipv6 route ospf will show you only your OSPF-discovered routes, just like show ip route did for OSPFv2.

V6ROUTER1#show ipv6 route ospf

IPv6 Routing Table - 4 entries

Codes: C - Connected, L - Loca

U - Per-user Static rou

I1 - ISIS L1, I2 - ISIS IS - ISIS summary

O - OSPF intra, OI - OS 1, OE2 - OSPF ext 2 ON1 - OSPF NSSA ext 1,

OI

D - EIGRP, EX - EIGRP e 2001:2222:3333:1:20E:D7F

via FE80::20E:D7FF:FEA4:

Here’s another look at show ipv6 ospf neighbor.

V6ROUTER1#show ipv6 ospf neighbor

Neighbor ID     Pri   State      Dead Time   Interface ID    Interface
3.3.3.3          1    FULL/BDR               4               FastEthernet0/0

One of my favorite troubleshooting commands, show protocols, got quite the overhaul with IPv6. Here’s the output of that command at the end of that last lab.

V6ROUTER1#show ipv6 protocols
IPv6 Routing Protocol is “connected”
IPv6 Routing Protocol is “static”
IPv6 Routing Protocol is “ospf 1”
  Interfaces (Area 0):
    FastEthernet0/0
  Redistribution:
    None

Let’s wrap up with your first OSPFv3 debug! To spot mismatch problems with hello and dead timers, run debug ipv6 ospf hello. I created a timer mismatch before running this debug so you could see the output when there’s a problem – and after our earlier OSPF section, this output should look familiar!

V6ROUTER1#debug ipv6 ospf hell

OSPFv3 hello events debuggin V6ROUTER1#

*Nov 5 19:37:09.454: OSPFv3: R area 0 from FastEthernet0/0 FE 7FF:FEA4:F4A0 interface ID 4

*Nov 5 19:37:09.458: OSPFv3: M parameters from FE80::20E:D7FF

*Nov 5 19:37:09.458: OSPFv3: D 11 C 25

We’re going to stop here with your IPv6 studies – for now! One final word on this subject… Please make IP Version 6 part of your future studies. Understanding IPv6 is going to be a major boost to your career and your future. Notice that I didn’t say “might be a major boost”.

Let’s move forward and visit a couple of new friends that might just feel ignored at this point in the course!

NAT and PAT

NAT allows a network host with a private IP address to have the source IP address of its packets “translated” into a routable address. Otherwise, hosts with RFC 1918 private addresses could not access the Internet, nor could they communicate with remote hosts across a WAN. Without NAT or PAT, the host in the following example cannot access any web-based hosts.

The private IP address ranges are defined by RFC 1918, and they fall into these ranges:

Class A: 10.0.0.0 /8
Class B: 172.16.0.0 /12
Class C: 192.168.0.0 /16

Note that the masks accompanying the Class B and Class C private ranges (/12 and /16) are not the classful network masks for those classes (/16 and /24) – each of those blocks actually covers a whole range of classful networks. There are four terms used to describe these addresses at different points in the entire NAT process. They’re close to each other in wording, but not in meaning, so let’s take a close look at these addresses. Inside local addresses are used by hosts on the inside network to communicate with other hosts on that same network. These are the addresses that are actually

configured on the hosts. In the earlier diagram, the inside local address is 10.1.1.1 /16. These inside local addresses are translated into inside global addresses. Inside global addresses are routable addresses. In the previous illustration, we haven’t configured NAT yet, so there is no inside global address. Outside global addresses are the actual addresses assigned to hosts on the remote network by their owners. Finally, outside local addresses are the remote hosts’ addresses as they appear to our inside network (and yes, those may also be RFC 1918 private addresses). The terms “inside” and “outside” are relative - if they’re addresses on your end of the WAN, they’re inside. If they’re addresses on the remote end of the WAN, they’re outside. In the following example, 10.1.1.1 is the inside local address and 150.1.1.1 is the inside global address.
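Here’s a quick reference for all four terms as Cisco defines them, using this example’s addresses where we have them:

Inside local - the address actually configured on the inside host (10.1.1.1)
Inside global - that same host as the outside world sees it after translation (150.1.1.1)
Outside global - the remote host’s actual, owner-assigned address
Outside local - the remote host’s address as it appears to our inside network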

The destination host never sees the private address, only the public one. If you flipped the vantage point and looked at things from the destination network’s side, our host would be one of their “outside” hosts – 10.1.1.1 would be the outside global address (the address actually configured on it) and 150.1.1.1 the outside local address (the address it appears to have). It’s all about the vantage point when it comes to these address types! When the packet returns, the NAT router will remove the public IP address, replace it with the appropriate private IP address, and

the packet is sent to the host. The private address is never seen on the Internet, and the originating host doesn’t know that anything happened. The only device in the entire process that even knows address translation occurred is the NAT router. We have two “flavors” of NAT - static and dynamic. While you’re much more likely to run into dynamic NAT configuration in the real world, there are static NAT configs out there.

Static NAT

If a limited number of hosts on a private network need Internet access, static NAT may be the appropriate choice. Static NAT maps a private address directly to a public, routable address. Static NAT could be helpful in a network such as the following:

We have three hosts on the Class A RFC 1918 private address range. The router’s Ethernet0 interface is connected to this network, and the Internet is reachable via the Serial0 interface. The IP address of the Serial0 interface is 210.1.1.1 /24, with all other addresses on the 210.1.1.0/24 network available in this example. With Static NAT, we’ll need to create three separate mappings. That’s the easy part to remember. You’ll hear me say this several times before the end of this section,

but the #1 error made in NAT and PAT configurations is forgetting to use the ip nat inside and ip nat outside commands on the appropriate interfaces! The ip nat inside command should be configured on the interface(s) that face the inside hosts, and the ip nat outside command should be configured on the interface facing the Internet.

R3(config)#interface ethernet0 R3(config-if)#ip address 10.5. R3(config-if)#ip nat inside R3(config-if)#interface serial R3(config-if)#ip address 210.1 R3(config-if)#ip nat outside

R3(config)#ip nat inside sourc 210.1.1.2 R3(config)#ip nat inside sourc 210.1.1.3 R3(config)#ip nat inside sourc 210.1.1.4
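Here are those three static mappings written out in full - a sketch using the inside host addresses you’ll see in the translation table in just a moment:

R3(config)#ip nat inside source static 10.5.5.5 210.1.1.2
R3(config)#ip nat inside source static 10.5.5.6 210.1.1.3
R3(config)#ip nat inside source static 10.5.5.7 210.1.1.4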

Earlier, you may have wondered how the NAT router knew when translations needed to be made for incoming packets. The router checks its NAT translation table for that information.

R3#show ip nat translations Inside global Inside local Outside global --210.1.1.2 10.5.5.5

-------

210.1.1.3 210.1.1.4

10.5.5.6 10.5.5.7

10.5.5.5 is mapped to the routable address 210.1.1.2, just as we configured it with the ip nat inside source command. The other two mappings are there exactly as we configured them. There is no pool of addresses involved with static NAT, so the same inside global address will be mapped to the same inside local address every time. You can see the number of active translations, along with the location

of the ip nat inside and ip nat outside commands, with show ip nat statistics. Note the active translations are all static, as we’d expect when using static NAT.

R3#show ip nat statistics Total active translations: 3 ( extended) Outside interfaces: Serial0 Inside interfaces: Ethernet0 H Expired translations: 0

Dynamic NAT

The obvious problem with Static NAT is a lack of scalability. If you have only a few hosts that need Internet access, it’s fine, but most organizations have a LOT of hosts that need that access. In today’s world of web-based apps and The Almighty Cloud, it’s not practical to have just a few hosts on the ’Net. Dynamic NAT allows a pool of inside global addresses to be created. The public IP addresses are mapped to a private address on an as-needed basis, and the mapping is dropped when the

communication ends. There’s no permanent one-to-one mapping as we saw with Static NAT. Like Static NAT, Dynamic NAT requires the interfaces connected to the Internet and the private networks be configured with ip nat outside and ip nat inside, respectively. Using the previous network example, R3 is now configured to assign an address from a NAT pool to these three network hosts on an as-needed basis. R3#conf t

R3(config)#access-list 1 permi

R3#conf t R3(config)#interface ethernet0 R3(config-if)#ip nat inside R3(config-if)#interface serial R3(config-if)#ip nat outside

R3#conf t R3(config)#ip nat inside sourc R3(config)#ip nat pool NATPOOL

Another use for ACLs! An access list is used to identify the hosts that will have their addresses translated by NAT. This ACL allows any host whose IP address begins with 10.5.5 to have its address translated. The ip nat inside source command calls that list and then names the NAT pool to be used. The next line of the config defines the pool, which I’ve named NATPOOL. The two addresses listed are the first and last addresses of the pool, meaning that 200.1.1.2, 200.1.1.3, 200.1.1.4, and 200.1.1.5 are in the pool, all using a mask of 255.255.255.0. Take care not to include the serial address of the NAT router in the

pool. The access list permits all hosts on 10.5.5.0/24, meaning that any host on that subnet can grab an IP address from the NAT pool. Show ip nat statistics will display the name and configuration of the NAT pool.

R3#show ip nat statistics
Total active translations: 0 (0 static, 0 dynamic; 0 extended)
Outside interfaces: Serial0
Inside interfaces: Ethernet0
Hits: 0  Misses: 0
Expired translations: 0
Dynamic mappings:
-- Inside Source
access-list 1 pool NATPOOL refcount 0
 pool NATPOOL: netmask 255.255.255.0
        start 200.1.1.2 end 200.1.1.5
        type generic, total addresses 4, allocated 0 (0%), misses 0
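To pull the whole dynamic NAT config together in one place, here’s a sketch of the three statements, using the ACL, pool boundaries, and mask described above:

R3(config)#access-list 1 permit 10.5.5.0 0.0.0.255
R3(config)#ip nat pool NATPOOL 200.1.1.2 200.1.1.5 netmask 255.255.255.0
R3(config)#ip nat inside source list 1 pool NATPOOL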

We have four addresses in the NAT pool, and only three hosts that need translation. No problem, right? Right -- for now, anyway. Let me introduce you to a phrase you’ll run into over and over in CiscoLand: “planning for future growth”. Let’s say you were called to the client site that had the very topology we’ve been working with. They’re fine with just those three hosts having access to the Internet, so you

write the above configuration and leave the site, knowing all is well. Three months later, three more hosts are added to that subnet.

The config that served us so beautifully before is now going to bite us in the tuckus:

R3(config)#access-list 1 permi R3(config)#int e0 R3(config-if)#ip nat inside R3(config-if)#int s0 R3(config-if)#ip nat outside

R3(config)#ip nat inside sourc R3(config)#ip nat pool NATPOOL netmask 255.255.255.0

With this configuration, any host on the 10.5.5.0 /24 network can have its address translated to a routable address from that pool. That was fine when we had only three hosts, but now we have six hosts and only four addresses in the pool. That’s an overcrowded pool area!

The eventual phone call from the client will be something like this: “Everything was fine, but now some people can get to the Internet and some can’t. And we never know who can and who can’t!” The client is right, even if he doesn’t know why. The first four hosts to request a routable address from that pool will get one, and the others are out of luck. They’ll be able to get one eventually when another host’s NAT mapping ends,

but that’s still not going to make the client happy. If the client wants all of those hosts to have Internet access, we have two choices:

Add more routable addresses to the pool
Configure Port Address Translation

PAT is really the best solution, since that will allow for even more hosts to be added to that subnet in the future without adding more

routable addresses to that pool. We only need one routable address for PAT - and it’s a routable address already in use! Let’s take a look at PAT, more commonly referred to as overloading. The private address will be translated to a single public address and a random port number, allowing the same IP address to support multiple hosts. The router will keep the connections separate by using a

different port number for each translation, even though the same IP address will be used. Port Address Translation is simple to configure. Instead of referring to a NAT pool with the ip nat inside source command, refer to the outside interface name followed by the word overload. R2(config)#int ethernet0 R2(config-if)#ip nat inside R2(config-if)#int serial0 R2(config-if)#ip nat outside

R2(config)#ip nat inside sourc

serial0 overload R2(config)#access-list 1 permi

overload indicates the IP address of the named interface will be the only one used for NAT. A different port number will be used for each translation. That allows the router to keep the different translations separate while using only a single IP address. Each host that matches the ACL will have its private IP address translated to the same routable IP address - in this case, the same IP

address the serial interface is already using - but each host will be assigned a random port number.
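Here’s the full PAT config gathered into one sketch - the inside subnet in the ACL is an assumption for illustration, since any ACL that matches your inside hosts will do:

R2(config)#access-list 1 permit 10.1.1.0 0.0.0.255
R2(config)#interface ethernet0
R2(config-if)#ip nat inside
R2(config-if)#interface serial0
R2(config-if)#ip nat outside
R2(config-if)#exit
R2(config)#ip nat inside source list 1 interface serial0 overload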

These ports will not be from the well-known port number range. The NAT translation table will keep track of the port number mappings,

so as packets come in with 210.1.1.1 as the destination, the router will translate the 210.1.1.1:<port number> address back to the appropriate host IP address. As with both versions of NAT, the entire process is transparent to the hosts.

Static, Dynamic, and PAT Labs

Let’s do some quick labs and take a look at the translation table along the way. First, we’ll use Static NAT to translate packets from R4’s address of 10.1.1.4 to 172.12.123.4 on R3, our NAT router. R3 config:

R3(config)#ip nat inside source static 10.1.1.4 172.12.123.4
R3(config)#int s0
R3(config-if)#ip nat outside
R3(config-if)#int e0
R3(config-if)#ip nat inside

A default static route has been configured on R4. After sending a ping from R4 to R1, let’s have a look at R3’s NAT translation and statistics tables.

R3#show ip nat translations Inside global Inside loca Outside global --- 172.12.123.4 10.1.1. ---

R3#show ip nat stat Total active translations: 1 ( Outside interfaces: Serial0

Inside interfaces: Ethernet0 Hits: 2 Misses: 0 Expired translations: 0 Dynamic mappings:

Nothing to it! I’ll remove the static NAT statement, leave the interface NAT commands on, and we’ll get a dynamic NAT config rolling.

R3(config)#no ip nat inside source static 10.1.1.4 172.12.123.4

The dynamic NAT config includes an ACL that identifies the inside hosts that will have their addresses translated by NAT. I’ll use IOS

Help throughout the ip nat inside source command to remind you of the options.

R3(config)#access-list 5 permi R3(config)# R3(config)#ip nat inside sourc list Specify access li addresses route-map Specify route-map static Specify static lo

R3(config)#ip nat inside sourc Access list number fo WORD Access list name fo

R3(config)#ip nat inside sourc interface Specify interface

pool

Name pool of glob

R3(config)#ip nat inside sourc WORD Pool name for global a

R3(config)#ip nat inside sourc

Now to the pool! R3(config)#ip nat pool ? WORD Pool name

R3(config)#ip nat pool NATPOOL A.B.C.D Start IP addr netmask Specify the n prefix-length Specify the p

R3(config)#ip nat pool NATPOOL A.B.C.D End IP address

R3(config)#ip nat pool NATPOOL 172.12.123.10 ? netmask Specify the n prefix-length Specify the p

R3(config)#ip nat pool NATPOOL 172.12.123.10 netmask ? A.B.C.D Network mask

R3(config)#$NATPOOL 172.12.123 255.255.255.0

Here are those commands in all

their splendor:

access-list 5 permit 10.1.1.0 ip nat pool NATPOOL 172.12.123 255.255.255.0 ip nat inside source list 5 po

After sending some pings from R4 to give NAT something to translate, here’s the output of our two show ip nat commands on R3:

R3#show ip nat stat Total active translations: 1 ( extended) Outside interfaces: Serial0 Inside interfaces: Ethernet0 Hits: 6 Misses: 0

Expired translations: 0 Dynamic mappings: -- Inside Source access-list 5 pool NATPOOL ref pool NATPOOL: netmask 255.255 start 172.12.123.4 end type generic, total ad (14%), misses 0

R3#show ip nat trans Inside global Inside local Outside global --- 172.12.123.4 10.1.1.4 ---

Finally, I’ll remove the ip nat inside source statement from that lab and add the line that enables PAT.

R3(config)#no ip nat inside source list 5 pool NATPOOL
R3(config)#ip nat inside source list 5 interface serial0 overload

Let’s see what happens when we send a ping from R4 through PAT! R3#show ip nat trans Pro Inside global icmp 172.12.123.3:6488 icmp 172.12.123.3:6489 icmp 172.12.123.3:6490 icmp 172.12.123.3:6491 icmp 172.12.123.3:6492

Inside local 10.1.1.4:6488 10.1.1.4:6489 10.1.1.4:6490 10.1.1.4:6491 10.1.1.4:6492

Five translations, each with a different port number! Note the IP addresses across the board are exactly the same – it’s the ports that

are different. Let’s leave NAT and PAT and head for multilayer switching!

ROAS And L3 Switching

Waaaaaaaaay back in the Switching section, I mentioned these two methods of allowing inter-VLAN communication, and I said we’d hit ’em after you’d been introduced to routing. You’ve had more than an introduction by this time, so let’s get to it!

We have two options for configuring inter-VLAN communication:

Using an L3 switch
Configuring “router on a stick” (ROAS)

We’ll first go through an ROAS configuration with the following network, and then we’ll take a detailed look at troubleshooting it. This network’s IP addressing:

Host 2: 172.12.2.2, VLAN 2
Host 4: 172.12.4.4, VLAN 4
R6: 172.12.2.6 and 172.12.4.6 on Fast 0/0 subinterfaces 0/0.2 and 0/0.4, respectively

We’ll use ISL as the trunking protocol. Once this config is up and running, you can leave it alone for months or years, but there are quite a few details that we need to watch to get it up and running!

Here’s the network:

A few important details to take note of: The switch ports connected to the hosts are access ports.

The switch port connected to the router must be trunking, and the trunking protocol (ISL or dot1q) must be the same on the switch and the router subinterfaces. You have to hardcode the trunking protocol on the switch rather than leaving the trunk port at “dynamic” or “auto”, because Cisco routers don’t negotiate trunking protocols.

The router must use a minimum of a Fast Ethernet port for ROAS. A regular Ethernet port will not get the job done. The Fast Ethernet interface on the router will be using subinterfaces, and we’ll use two commands on each subinterface:

the encapsulation command, matching the encap type set on the connecting switch’s trunk port
an appropriate IP address for the VLAN indicated by the encapsulation command

The IP address for a subinterface must come from the address space of the VLAN configured with the encapsulation command on that same subinterface. This interface will be part of VLAN 2, so we have to put an IP address from the 172.12.2.0 /24 subnet on it. Where did I get that IP range? Check the IP address of the host that’s already in VLAN 2. We’ll start with the IP address on R6’s fast 0/0.2 subinterface.

R6(config)#int fast 0/0.2 R6(config-subif)#ip address 17

% Configuring IP routing on a LAN subinterface is only allowed if that subinterface is already configured as part of an IEEE 802.10, IEEE 802.1Q, or ISL vLAN.

Then again, maybe we won’t start with the IP address! This is one of those rare situations where the order of the commands does matter. You have to enter the encapsulation command before you apply the IP address to the subinterface.

R6(config-subif)#encapsulation R6(config-subif)#ip address 17

Done and done!

Now let’s configure a subinterface to be part of VLAN 4. That subinterface will need an IP address from the 172.12.4.0 /24 subnet.

R6(config-subif)#int fast 0/0. R6(config-subif)#encap isl 4 R6(config-subif)#ip address 17

When you’re done, don’t forget to open the physical interface!

R6(config-subif)#int fast 0/0
R6(config-if)#no shut
%SYS-5-CONFIG_I: Configured from console by console
%LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
%LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to up
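Here’s the whole router side of the ROAS config in one place - a sketch that assumes /24 masks to match the host addressing above:

R6(config)#interface fastethernet 0/0.2
R6(config-subif)#encapsulation isl 2
R6(config-subif)#ip address 172.12.2.6 255.255.255.0
R6(config-subif)#interface fastethernet 0/0.4
R6(config-subif)#encapsulation isl 4
R6(config-subif)#ip address 172.12.4.6 255.255.255.0
R6(config-subif)#interface fastethernet 0/0
R6(config-if)#no shutdown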

Verify with show interface. R1#show interface fast 0/0.2

FastEthernet0/0.2 is up, line Hardware is AmdFE, address i 000a.4164.31c1) Internet address is 172.12.2 MTU 1500 bytes, BW 100000 Kb usec, reliability 255/255 1/255, rxload 1/255 Encapsulation ISL Virtual LA ARP type: ARPA, ARP Timeout Last clearing of “show inter

R1#show interface fast 0/0.4 FastEthernet0/0.4 is up, line Hardware is AmdFE, address i 000a.4164.31c1) Internet address is 172.12.4 MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255

Encapsulation ISL Virtual LAN, Color 4. ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of “show inter

At this point, the default gateway on the hosts must be set to the appropriate subinterface IP address on the router. Do not use any IP address configured on the switch. Always use the appropriate IP address from the router’s subinterfaces as the default gateway for each host.

For our host in VLAN 2, that address is the router subinterface’s IP address in the VLAN 2 address space, and for the VLAN 4 host it’s the subinterface’s IP address in VLAN 4.

From Host 4, let’s ping the following addresses: Host 4’s own default gateway, 172.12.4.6 Host 2’s default gateway, 172.12.2.6 Host 2, 172.12.2.2 Host4#ping 172.12.4.6

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos

is 2 seconds: !!!!! Success rate is 100 percent (5 min/avg/max = 1/3/4 ms Host4#ping 172.12.2.6

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos is 2 seconds: !!!!! Success rate is 100 percent (5 min/avg/max = 1/2/4 ms Host4#ping 172.12.2.2

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos is 2 seconds: !!!!! Success rate is 100 percent (5

min/avg/max = 1/3/4 ms

All three are successful! Let’s ping the following three destinations from Host 2: Host 2’s own default gateway, 172.12.2.6 Host 4’s default gateway, 172.12.4.6 Host 4, 172.12.4.4 Host2#ping 172.12.2.6

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos is 2 seconds: !!!!! Success rate is 100 percent (5 min/avg/max = 1/2/4 ms Host2#ping 172.12.4.6

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos is 2 seconds: !!!!! Success rate is 100 percent (5 min/avg/max = 1/2/4 ms Host2#ping 172.12.4.4

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos

is 2 seconds: !!!!! Success rate is 100 percent (5 min/avg/max = 1/2/4 ms

If you have connectivity issues from one host to another after configuring ROAS, always ping your local host’s default gateway first. If you can’t ping that, there’s no way you can ping the other two addresses! That’s really all there is to ROAS. Nail the details and you’re on your way to exam and real-world success!

Let’s review the ROAS details on the router, switch, and hosts, and we’ll follow that with some ROAS troubleshooting tips.

The Router:

The port must be a Fast Ethernet port at minimum. An Ethernet port won’t do the job. You can create Ethernet subinterfaces, but the encapsulation command will not be recognized.

R3(config)#interface e0.12
R3(config-subif)#encapsulation
% Unrecognized command

The trunking protocol configured on the router’s subinterfaces must match that of the trunk port connected to that router. If we used

dot1q in this lab instead of ISL, the commands used would have been the same (except for the encapsulation isl command, of course, which would have been encapsulation dot1q). The IP address configured on a subinterface must be part of the subnet used by the VLAN indicated in the encapsulation command.
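For example, here’s what the VLAN 2 subinterface would look like in a dot1q version of this lab - same IP address, different encapsulation:

R6(config)#interface fastethernet 0/0.2
R6(config-subif)#encapsulation dot1q 2
R6(config-subif)#ip address 172.12.2.6 255.255.255.0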

The Switch:

The switch port connected to the router must be trunking. The trunking protocol in use (ISL or dot1q) must match the one on the router’s subinterfaces. The ports leading to the hosts must be access ports.
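On the switch side, those requirements boil down to commands like these - the port numbers here are assumptions for illustration:

SW1(config)#interface fastethernet 0/12
SW1(config-if)#switchport trunk encapsulation isl
SW1(config-if)#switchport mode trunk
SW1(config-if)#interface fastethernet 0/2
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 2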

The Hosts:

Each host should have its default gateway set to the IP address on the router subinterface that is part of that VLAN’s address space.

ROAS FSuC (Frequently Screwed-up Configurations)

I think you’ll agree with me that the ROAS config is straightforward. Since there’s not much to configure in the first place, the misconfiguration is pretty easy to spot. Since we perform most of the ROAS config on the router, we tend to concentrate on the router config when we have a problem. What we have to keep in mind with ROAS

troubleshooting is that the problem might not be on the router - it might be on the hosts, or even the switch! Do you see a problem with the following setup?

If you spotted that right away, nice work! The default gateway settings on the hosts are reversed. The default gateway address must always be in the same subnet as the host’s IP address. How about the following configuration? See any problems here?

R6 Config: interface FastEthernet0/0 no ip address duplex auto speed auto !

interface FastEthernet0/0.2 encapsulation isl 4 ip address 172.12.2.6 255.255.255.0

! interface FastEthernet0/0.4 encapsulation isl 2 ip address 172.12.4.6 255.255.255.0

An IP address from VLAN 2’s subnet has been applied to the subinterface with the VLAN 4 ID, and vice versa. With that config, neither host will even be able to

ping its own default gateway. Use this structured approach to your ROAS troubleshooting and you’ll tshoot it successfully every time:

Always check the default gateway settings on the hosts first.
Make sure the port leading to the router is trunking, and watch for trunking protocol mismatches.
On the router, make sure the IP address assigned to each subinterface is from the subnet assigned to the VLAN that’s assigned to that subinterface.

Follow those three tips and you’ll configure and troubleshoot ROAS successfully every time!

Multilayer Switching with L3 Switches

Multilayer switches make it possible to have inter-VLAN communication without having to use a separate L3 device or configuring router-on-a-stick. If two hosts in separate VLANs are connected to the same multilayer switch, the correct configuration will allow that communication without the data ever leaving that switch. Multilayer switches allow us to create a logical interface, the

Switched Virtual Interface (SVI), as a representation of the VLAN. An SVI exists for VLAN 1 by default, but that’s the only VLAN that has a “pre-created” SVI. BTW, SVIs are informally referred to as “VLAN Interfaces”, and here’s why:

interface vlan1
 no ip address
 no ip route-cache

shutdown

On a Layer 3 switch, such a logical interface can be configured for any VLAN, and you configure it just as you would any other interface - just go into config mode, create the interface and assign it an IP address, and you’re on your way. It’s very simple:

MLS(config)#interface vlan 10

MLS(config-if)#ip address 10.1
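Here’s that snippet as a complete sketch, assuming an address of 10.1.1.1 /24 for the VLAN 10 SVI:

MLS(config)#interface vlan 10
MLS(config-if)#ip address 10.1.1.1 255.255.255.0
MLS(config-if)#no shutdown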

Let’s put SVIs to work with a basic inter-VLAN routing configuration.

Before we begin configuring, we’ll send pings between the two hosts. In this example, I’m using routers for hosts, but there are no routes of any kind on them. HOST_1#ping 30.1.1.1

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos 2 seconds: ..... Success rate is 0 percent (0/5 HOST_3#ping 20.1.1.1

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos 2 seconds: ..... Success rate is 0 percent (0/5

As expected, neither host can ping the other. Let’s fix that, Macgyver L3 style!

To get started, we’ll put the port leading to Host 1 into VLAN 11, and the port leading to Host 3 in VLAN 33.

SW1(config)#int fast 0/1 SW1(config-if)#switchport mode SW1(config-if)#switchport acce

SW1(config-if)#int fast 0/3 SW1(config-if)#switchport mode SW1(config-if)#switchport acce

We’re going to create two SVIs on the switch, one representing VLAN 11 and the other representing VLAN 33.

SW1(config)#int vlan11 01:30:04: %LINK-3-UPDOWN: Inte state to up 01:30:05: %LINEPROTO-5-UPDOWN: Interface Vlan11, changed stat SW1(config-if)#ip address 20.1

SW1(config-if)#int vlan33 01:30:11: %LINK-3-UPDOWN: Inte state to up 01:30:12: %LINEPROTO-5-UPDOWN: Interface Vlan33, changed stat SW1(config-if)#ip address 30.1

There’s a strict limit of one SVI per VLAN. If you don’t see “up” for the

interface itself and/or the line protocol, you likely haven’t created the VLAN yet or placed a port into that VLAN. Do those two things and you should see the following result with show interface vlan. I’ll only show the top three rows of output for each SVI.

SW1#show int vlan11 Vlan11 is up, line protocol is Hardware is EtherSVI, addres 0012.7f02.4b41) Internet add

SW1#show int vlan33 Vlan33 is up, line protocol is Hardware is EtherSVI, addres 0012.7f02.4b42) Internet add

Now let’s check that routing table…

SW1#show ip route
Default gateway is not set

Host               Gateway           Last Use    Total Uses  Interface
ICMP redirect cache is empty

Hmm, that’s not good. We don’t have one! There’s a simple reason, though - on L3 switches, we need to enable IP routing, because it’s off by default!

Step One In L3 Switching Troubleshooting: Make Sure IP Routing Is On!

SW1(config)#ip routing SW1(config)#^Z SW1#show ip route Codes: < removed for clarity > Gateway of last resort is not

     20.0.0.0/24 is subnetted, 1 subnets
C       20.1.1.0 is directly connected, Vlan11
     30.0.0.0/24 is subnetted, 1 subnets
C       30.1.1.0 is directly connected, Vlan33

Now that looks like the routing table we’ve come to know and love! In this particular case, there’s

no need to configure a routing protocol. Why not, you ask? Both subnets are directly connected to the switch, so it already knows how to reach them - what the hosts need is a gateway. You recall that when router-on-a-stick is configured, the IP address assigned to the router’s subinterfaces should be the default gateway setting on the hosts. When SVIs are in use, the default gateway set on the hosts should be the IP address assigned to the SVI that represents that host’s VLAN. After setting this default gateway on

the hosts, the hosts can now successfully communicate. Since we’re using routers for hosts, we’ll use the ip route command to set the default gateway.

HOST_1(config)#ip route 0.0.0.

HOST_3(config)#ip route 0.0.0.

Can the hosts now communicate, even though they’re in different VLANs? HOST_1#ping 30.1.1.1

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos 2 seconds: !!!!! Success rate is 100 percent (5 min/avg/max = 1/1/4 ms HOST_3#ping 20.1.1.1

Type escape sequence to abort. Sending 5, 100-byte ICMP Echos 2 seconds: !!!!! Success rate is 100 percent (5 min/avg/max = 1/2/4 ms

Nothing beats a success rate of 100 percent!

A couple of final notes on L3 switching: During the lab, we didn’t have to do anything to the SVIs to have them show as up/up. On some Cisco switch models, you may have to run no shutdown on them to bring them up. On most L3 switches, hardware support for routing is already on, but you need to use the ip routing command to turn on software support.

On some L3 switches, you’ll also need to manually enable hardware support with the sdm prefer lanbase-routing command. You’ll learn even more about L3 switch capabilities in your future studies – a LOT more, but that’s it for now. PS – The following symbol is sometimes used to represent an L3 switch. It might show up on your exam or in documentation you’re

reading, so be ready to identify it!

Binary Math And Subnetting Mastery

Get ready for total success with binary math and subnetting practice exam questions AND real-world networking! In the next few sections, every bit of anxiety you may have about binary or subnetting is going to totally disappear, to be replaced with confidence. You’re about to learn how to solve tough binary and

subnetting questions quickly and efficiently – and it all starts with the fundamentals!

Converting Binary To Dotted Decimal

It’s easy to overlook the importance of this section, or just to say, “Hey, I know how to do that, I’m going to the next section.” Don’t do that. Success in networking is all about mastering the fundamentals, and that’s true more of subnetting than any other single feature on the CCENT and CCNA exams.

When you master the fundamentals and then continually practice applying them, you can answer any question Cisco or a job interviewer asks you. That philosophy has worked for thousands of CCENT and CCNA candidates around the world, and it’ll work for you. Let’s jump right in to a typical binary-to-decimal conversion.

Convert 01100010 00111100 11111100 01010101 to dotted decimal. To answer this, we’ll use this simple chart:

           128 64 32 16  8  4  2  1
1st Octet
2nd Octet
3rd Octet
4th Octet

Just plug the binary values under the 128, 64, etc., add ’em up, and you’re gold!

Filling it in from left to right, here’s the first octet conversion.

           128 64 32 16  8  4  2  1
1st Octet    0  1  1  0  0  0  1  0

There are ones in the column for 64, 32, and 2. Just add them up, and that is the decimal value for the first octet -- 98. Repeat the process for each octet, and you quickly have the dotted decimal equivalent of the binary string – in this case, 98.60.252.85.
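For example, the second octet, 00111100, has ones in the columns for 32, 16, 8, and 4, and 32 + 16 + 8 + 4 = 60.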

           128 64 32 16  8  4  2  1   Total
1st Octet    0  1  1  0  0  0  1  0    98
2nd Octet    0  0  1  1  1  1  0  0    60
3rd Octet    1  1  1  1  1  1  0  0   252
4th Octet    0  1  0  1  0  1  0  1    85

You certainly don’t have to write out “1st”, “2nd”, etc. I do recommend you still write out “128”, “64”, and so forth. It’s just too easy to skip over a number when you don’t write those out, and we’re not here to give away exam

points – we’re here to take them! Let’s get in some more practice with binary-to-decimal, and then we’ll move on to the next fundamental conversion skill.

Binary-To-Decimal Practice Questions

Convert each binary string to dotted decimal.

The string: 11110000 00110101 00110011 11111110

           128 64 32 16  8  4  2  1   Total
1st Octet    1  1  1  1  0  0  0  0    240
2nd Octet    0  0  1  1  0  1  0  1     53
3rd Octet    0  0  1  1  0  0  1  1     51
4th Octet    1  1  1  1  1  1  1  0    254

Answer: 240.53.51.254. The string: 00001111 01101111 00011100 00110001

1st

128 64 32 16 8 4 2 1 Tota 0 0 0 0 1 1 1 1 15

2nd 0 3rd 0

1 1 0 1 1 1 1 111

4th 0

0 1 1 0 0 0 1 49

0 0 1 1 1 0 0 28

Answer: 15.111.28.49.

The string: 11100010 00000001 11001010 01110110

1st

128 64 32 16 8 4 2 1 Tota 1 1 1 0 0 0 1 0 226

2nd 0 3rd 1

0 0 0 0 0 0 1 1

4th 0

1 1 1 0 1 1 0 118

1 0 0 1 0 1 0 202

Answer: 226.1.202.118. The string: 01010101 11111101 11110010 00010101

1st

128 64 32 16 8 4 2 1 Tota 0 1 0 1 0 1 0 1 85

2nd 1 3rd 1

1 1 1 1 1 0 1 253

4th 0

0 0 1 0 1 0 1 21

1 1 1 0 0 1 0 242

Answer: 85.253.242.21. The string: 00000010 11111001 00110111 00111111

1st

128 64 32 16 8 4 2 1 Tota 0 0 0 0 0 0 1 0 2

2nd 1 3rd 0

1 1 1 1 0 0 1 249

4th 0

0 1 1 1 1 1 1 63

0 1 1 0 1 1 1 55

Answer: 2.249.55.63. The string: 11001001 01011111 01111111 11111110

1st

128 64 32 16 8 4 2 1 Tota 1 1 0 0 1 0 0 1 201

2nd 0

1 0 1 1 1 1 1 95

3rd 0

1 1 1 1 1 1 1 127

4th 1

1 1 1 1 1 1 0 254

Answer: 201.95.127.254 The string: 11111000 00000111 11111001 01100110

128 64 32 16 8 4 2 1 Tota 1st 1

1 1 1 1 0 0 0 248

2nd 0 3rd 1

0 0 0 0 1 1 1 7

4th 0

1 1 0 0 1 1 0 102

1 1 1 1 0 0 1 249

Answer: 248.7.249.102. The string: 00111110 11111111 01011010 01111110

1st

128 64 32 16 8 4 2 1 Tota 0 0 1 1 1 1 1 0 62

2nd 1 3rd 0

1 1 1 1 1 1 1 255

4th 0

1 1 1 1 1 1 0 126

1 0 1 1 0 1 0 90

Answer: 62.255.90.126.

The string: 11001101 11110000 00001111 10111111

1st

128 64 32 16 8 4 2 1 Tota 1 1 0 0 1 1 0 1 205

2nd 1 3rd 0

1 1 1 0 0 0 0 240

4th 1

0 1 1 1 1 1 1 191

0 0 0 1 1 1 1 15

Answer: 205.240.15.191 The string: 10011001 11110000 01111111 00100101

1st

128 64 32 16 8 4 2 1 Tota 1 0 0 1 1 0 0 1 153

2nd 1 3rd 0

1 1 1 0 0 0 0 240

4th 0

0 1 0 0 1 0 1 37

1 1 1 1 1 1 1 127

Answer: 153.240.127.37 The string: 11011111 01110110 11000011 00111111

1st

128 64 32 16 8 4 2 1 Tota 1 1 0 1 1 1 1 1 223

2nd 0 3rd 1

1 1 1 0 1 1 0 118

4th 0

0 1 1 1 1 1 1 63

1 0 0 0 0 1 1 195

Answer: 223.118.195.63. The string: 00000100 00000111 00001111 00000001

1st

128 64 32 16 8 4 2 1 Tota 0 0 0 0 0 1 0 0 4

2nd 0

0 0 0 0 1 1 1 7

3rd 0

0 0 0 1 1 1 1 15

4th 0

0 0 0 0 0 0 1 1

Answer: 4.7.15.1. The string: 11000000 00000011 11011011 00100101

128 64 32 16 8 4 2 1 Tota 1st 1

1 0 0 0 0 0 0 192

2nd 0 3rd 1

0 0 0 0 0 1 1 3

4th 0

0 1 0 0 1 0 1 37

1 0 1 1 0 1 1 219

Answer: 192.3.219.37. The string: 10000000 01111111 00110011 10000011

1st

128 64 32 16 8 4 2 1 Tota 1 0 0 0 0 0 0 0 128

2nd 0 3rd 0

1 1 1 1 1 1 1 127

4th 1

0 0 0 0 0 1 1 131

0 1 1 0 0 1 1 51

Answer: 128.127.51.131

The string: 11111011 11110111 11111100 11111000

1st

128 64 32 16 8 4 2 1 Tota 1 1 1 1 1 1 1 1 251

2nd 1 3rd 1

1 1 1 0 1 1 1 247

4th 1

1 1 1 1 0 0 0 248

1 1 1 1 1 0 0 252

Answer: 251.247.252.248 Great work!

Before we move on, let me share a bonus exam prep tip with you. The only thing you need to practice this skill is a piece of paper and something to write with, and you don’t need to practice for consecutive hours. When you have 10 minutes to yourself at work or home, spend that time jotting down strings of 1s and 0s and then converting them to decimal. That little bit of time spent practicing REALLY adds up in the end!

With that said, let’s move forward!

Converting Decimal To Binary

“Second verse, not quite the same as the first….” We’re pretty much doing the same thing that we did in the first section, just in reverse. Makes sense, right? Well, it will once we go through some examples. This is definitely one of those skills that seems REALLY complicated when you read about it, but when you do it, you realize how easy it is! Let’s practice with the decimal 217.

128 64 32 16 8 4 2 1 217

You must now determine whether each column should have a “1” or a “0”. Work from left to right, and ask this question: “Can I subtract this column’s value from the current octet value with the result being a positive number or zero?” If so, perform the subtraction, put a “1” in the column, and go to the next column. If not, place a “0” in the column, and repeat the process for the next

column. It takes much longer to explain than to actually do. Let’s look at that chart again: 128 64 32 16 8 4 2 1 217

Can 128 be subtracted from 217, and result in zero or a positive number? Sure, with the result being 89. Put a “1” in the 128 column and go to the next column, repeating the operation with the new result.

128 64 32 16 8 4 2 1 217 1

Can 64 be subtracted from the new result, 89? Yes, with a remainder of 25. Put a “1” in the 64 column and repeat the operation in the next column, using the new result of 25. 128 64 32 16 8 4 2 1 217 1 1

Can 32 be subtracted from 25, with the remainder being 0 or a positive

number? No. Place a “0” in the 32 column, and repeat the operation in the next column with the value of 25. 128 64 32 16 8 4 2 1 217 1 1 0

Can 16 be subtracted from 25? Yes, with a remainder of 9. Place a “1” in the 16 column, and go to the next column with the new value of 9. 128 64 32 16 8 4 2 1 217 1 1 0 1

Can 8 be subtracted from 9? Yes, with a remainder of 1. Place a “1” in the 8 column, and repeat the operation in the next column with a remainder of 1. 128 64 32 16 8 4 2 1 217 1 1 0 1 1

We can quickly see that neither of the next two columns, 4 or 2, can be subtracted from 1. Place a “0” in both of those columns.

     128 64 32 16  8  4  2  1
217    1  1  0  1  1  0  0

Subtracting 1 from 1 brings us to zero, and also to the end of the columns. Place a “1” in the 1 column, and you have the binary equivalent of the decimal 217.

     128 64 32 16  8  4  2  1
217    1  1  0  1  1  0  0  1

The binary equivalent of the decimal 217 is 11011001.
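As a quick check, add the column values back up: 128 + 64 + 16 + 8 + 1 = 217, so the conversion holds.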

Two points of note:

You can never have a value greater than “1” in any column.
You should never have a remainder at the end of the line. If you do, you need to go back and do it again.

Let’s get in some more work with this vital skill!

Converting Decimal To Binary Questions

The address: 100.10.1.200

       128  64  32  16   8   4   2   1
100      0   1   1   0   0   1   0   0
10       0   0   0   0   1   0   1   0
1        0   0   0   0   0   0   0   1
200      1   1   0   0   1   0   0   0

Answer: 01100100 00001010 00000001 11001000.

The address: 190.4.89.23 128 190 1 4 0 89 0 23 0

64 0 0 1 0

32 1 0 0 0

16 1 0 1 1

8 1 0 1 0

4 1 1 0 1

2 1 0 0 1

1 0 0 1 1

Answer: 10111110 00000100 01011001 00010111. The address: 10.255.18.244 128 64 32 16 8 4 2 1

10 255 18 244

0 1 0 1

0 1 0 1

0 1 0 1

0 1 1 1

1 1 0 0

0 1 0 1

1 1 1 0

0 1 0 0

2 0 0 1

1 0 1 1

Answer: 00001010 11111111 00010010 11110100. The address: 240.17.23.239 128 240 1 17 0 23 0

64 1 0 0

32 1 0 0

16 1 1 1

8 0 0 0

4 0 0 1

239 1

1 1 0 1 1 1 1

Answer: 11110000 00010001 00010111 11101111. The address: 217.34.39.214

217 34 39 214

128 1 0 0 1

64 1 0 0 1

32 0 1 1 0

16 1 0 0 1

8 1 0 0 0

4 0 0 1 1

2 0 1 1 1

1 1 0 1 0

Answer: 11011001 00100010 00100111 11010110. The address: 20.244.182.69

20 244 182 69

128 0 1 1 0

64 0 1 0 1

32 0 1 1 0

16 1 1 1 0

8 0 0 0 0

4 1 1 1 1

Answer: 00010100 11110100 10110110 01000101.

2 0 0 1 0

1 0 0 0 1

The address: 198.3.148.245 128 198 1 3 0 148 1 245 1

64 1 0 0 1

32 0 0 0 1

16 0 0 1 1

8 0 0 0 0

4 1 0 1 1

2 1 1 0 0

1 0 1 0 1

Answer: 11000110 00000011 10010100 11110101. The address: 14.204.71.250 128 64 32 16 8 4 2 1

14 204 71 250

0 1 0 1

0 1 1 1

0 0 0 1

0 0 0 1

1 1 0 1

1 1 1 0

1 0 1 1

0 0 1 0

2 1 0 1 1

1 1 1 0 1

Answer: 00001110 11001100 01000111 11111010. The address: 7.209.18.47

7 209 18 47

128 0 1 0 0

64 0 1 0 0

32 0 0 0 1

16 0 1 1 0

8 0 0 0 1

4 1 0 0 1

Answer: 00000111 11010001 00010010 00101111. The address: 249.74.65.43

249 74 65 43

128 1 0 0 0

64 1 1 1 0

32 1 0 0 1

16 1 0 0 0

8 1 1 0 1

4 0 0 0 0

Answer: 11111001 01001010 01000001 00101011.

2 0 1 0 1

1 1 0 1 1

The address: 150.50.5.55

150 50 5 55

128 1 0 0 0

64 0 0 0 0

32 0 1 0 1

16 1 1 0 1

8 0 0 0 0

4 1 0 1 1

2 1 1 0 1

1 0 0 1 1

Answer: 10010110 00110010 00000101 00110111. The address: 19.201.45.194

19

128 64 32 16 8 4 2 1 0 0 0 1 0 0 1 1

201 45 194

1 0 1

1 0 0 1 0 0 1 0 1 0 1 1 0 1 1 0 0 0 0 1 0

Answer: 00010011 11001001 00101101 11000010. The address: 43.251.199.207

43 251 199 207

128 0 1 1 1

64 0 1 1 1

32 1 1 0 0

16 0 1 0 0

8 1 1 0 1

4 0 0 1 1

2 1 1 1 1

1 1 1 1 1

Answer: 00101011 11111011 11000111 11001111. The address: 42.108.93.224

42 108 93 224

128 0 0 0 1

64 0 1 1 1

32 1 1 0 1

16 0 0 1 0

8 1 1 1 0

4 0 1 1 0

Answer: 00101010 01101100 01011101 11100000.

2 1 0 0 0

1 0 0 1 0

The address: 180.9.34.238

180 9 34 238

128 1 0 0 1

64 0 0 0 1

32 1 0 1 1

16 1 0 0 0

8 0 1 0 1

4 1 0 0 1

2 0 0 1 1

1 0 1 0 0

Answer: 10110100 00001001 00100010 11101110. The address: 243.79.68.30

243

128 64 32 16 8 4 2 1 1 1 1 1 0 0 1 1

79 68 30

0 0 0

1 0 0 1 1 1 1 1 0 0 0 1 0 0 0 0 1 1 1 1 0

Answer: 11110011 01001111 01000100 00011110. Great work! Now we’ll start applying these fundamentals to some real-world scenarios!

Determining The Number Of Valid Subnets

Once the subnetting’s been done, it would be a really good idea to know how many subnets you have to go around! Actually, you should calculate that number before you do the actual subnetting. In this question type, the subnetting’s already been performed and we have to come up with the number of valid subnets. Here’s the best part – with enough practice, you’ll be able to answer these questions in less than a

minute, and without writing much (if anything) down on your exam board! Here’s a typical “number of valid subnets” question: “How many valid subnets exist on the 10.0.0.0 /12 network?” “How many valid subnets exist on the 10.0.0.0 255.240.0.0 network?” These examples are actually asking the same thing, just in different formats. You’re familiar with the

standard dotted decimal mask, but what about the number following the slash in the first version of the question? This is prefix notation, and it’s the more common way of expressing a subnet mask. The number behind the slash indicates how many consecutive ones there are at the beginning of this mask. The dotted decimal mask 255.240.0.0, converted to binary, is 11111111 11110000 00000000 00000000. (If you’re unsure how this value is derived, review Section Three.) There are twelve ones at the beginning of the mask,

and that’s where the “/12” comes from. Why use this method of expressing a mask? It’s easier to write and to say. Try expressing a Class C network mask out loud as “two fifty five, two fifty five, two fifty five, zero” a couple of times, then try saying “slash twenty-four”. See what I mean? You’re going to hear the prefix notation version of network masks mentioned more often than someone reading out the entire mask, so familiarize yourself with expressing masks in this fashion. You’re likely

to see both dotted decimal masks and prefix notation on any Cisco exam. Now let’s get in some practice! In print, this seems like a long operation, but once you’re doing it, it’s not. Before you can determine the number of valid subnets with a given network number and subnet mask, you must know the network masks for Class A, B, and C networks. They are listed here for review:

                    Class A       Class B        Class C
1st Octet Range     1 – 126       128 – 191      192 – 223
Network Mask        255.0.0.0     255.255.0.0    255.255.255.0
# of Network Bits   8             16             24
# of Host Bits      24            16             8

Subnetting always borrows bits from the host bits – always. To determine the number of valid subnets, you first have to know how

many subnet bits there are. Let’s look at the example question again: How many valid subnets exist on the 10.0.0.0 /12 network? There are two ways to determine the number of subnet bits. The first method is longer, but it shows you exactly what’s going on with the subnets. The second method is much shorter, and you should feel free to use that one when you’re comfortable with the first one. By looking at the network number, we see this is a Class A network.

By default, a Class A network mask has 8 network bits and 24 host bits. In this mask, 12 bits are set to 1. This indicates that four host bits have been borrowed for subnetting. The borrowed subnet bits are the first four bits of the second octet in the subnet mask shown below.

                    1st Octet  2nd Octet  3rd Octet  4th Octet
Class A NW Mask:    11111111   00000000   00000000   00000000
Subnet Mask:        11111111   11110000   00000000   00000000

Now that you know how many subnet bits there are, place that number into this formula:

The number of valid subnets = (2 raised to the power of the number of subnet bits)

We have four subnet bits, so we need to raise 2 to the 4th power. When you multiply 2 by itself four times (2 × 2 × 2 × 2), you get 16, and that’s how many valid subnets we have. That’s all there is to it! Let’s go through another example,

and we won’t draw a chart for this one. All you need is your knowledge of network masks and a little math, and you’re done! “How many valid subnets exist on the 150.10.0.0 /21 network?” This is a Class B network, so we know the network mask is 255.255.0.0, or /16. The subnet mask is /21. Just subtract the number of “1”s in the network mask from the number of 1s in the subnet mask, and you have the number of subnet bits. 21 − 16 = 5, and 2 to the 5th power

equals 32 valid subnets. It’s just that simple! Once you’re done with these practice questions, practice writing your own questions and solving them – that’s the ultimate way to practice this vital skill, and you can’t beat the cost! I’ll list the networks and masks here, and you’ll find the answers after this list. No peeking! How many valid subnets exist on each of the following networks?

15.0.0.0 /13
222.10.1.0 /30
145.45.0.0 /25
20.0.0.0 255.192.0.0
130.30.0.0 255.255.224.0
128.10.0.0 /19
99.0.0.0 /17
222.10.8.0 /28
20.0.0.0 255.254.0.0
210.17.90.0 /29
130.45.0.0 /26
200.1.1.0 /26

45.0.0.0 255.240.0.0
222.33.44.0 255.255.255.248
23.0.0.0 255.255.224.0

“Number Of Valid Subnets” Questions and Answers

Note: The NW mask and SN mask are written out for each question. You don’t have to write them out if you’re comfortable with the quicker method.

15.0.0.0 /13

Class A, 8 network bits. Subnet mask listed is /13. 13 – 8 = 5, and 2 to the 5th power is 32, so there are 32 valid subnets.

NW 11111111 00000000 000000 Mask SN 11111111 11111000 000000 Mask

222.10.1.0/30 Class C, 24 network bits. 30 – 24 = 6, 2 to the 6th power = 64 valid subnets.

NW 11111111 11111111 1111111 Mask SN 11111111 11111111 1111111 Mask 145.45.0.0/25 Class B, 16 network bits. 25 – 16 = 9, 2 to the 9th power = 512 valid subnets.

11111111 11111111 NW Mask SN 11111111 11111111 Mask11111111

20.0.0.0 255.192.0.0 Class A, 8 network bits. Subnet mask converts to /10 in prefix notation. 10 – 8 = 2, 2 to the 2nd power = 4 valid subnets.

NW 11111111 00000000 000000 Mask SN 11111111 11000000 000000 Mask

130.30.0.0 255.255.224.0

Class B, 16 network bits. Subnet mask converts to /19 in prefix notation. 19 – 16 = 3, 2 to the 3rd power = 8 valid subnets.

NW 11111111 11111111 000000 Mask SN 11111111 11111111 1110000 Mask

128.10.0.0/19 Class B, 16 network bits. 19 – 16 = 3, 2 to the 3rd power = 8 valid subnets.

NW 11111111 11111111 000000 Mask SN 11111111 11111111 1110000 Mask

99.0.0.0/17 Class A, 8 network bits. 17 – 8 = 9. 2 to the 9th power = 512 valid subnets.

NW 11111111 00000000 000000 Mask

11111111 11111111 100000 SN Mask

222.10.8.0/28 Class C, 24 network bits. 28 – 24 = 4. 2 to the 4th power = 16 valid subnets.

NW 11111111 11111111 1111111 Mask SN 11111111 11111111 1111111 Mask

20.0.0.0 255.254.0.0 Class A, 8 network bits. Mask converts to /15 in prefix notation. 15 – 8 = 7. 2 to the 7th power = 128 valid subnets.

NW 11111111 00000000 000000 Mask SN 11111111 11111110 000000 Mask

210.17.90.0 /29 Class C, 24 network bits. 29 – 24 =

5. 2 to the 5th power = 32 valid subnets.

NW 11111111 11111111 1111111 Mask SN 11111111 11111111 1111111 Mask

130.45.0.0/26 Class B, 16 network bits. 26 – 16 = 10. 2 to the 10th power = 1024 valid subnets.

NW 11111111 11111111 000000 Mask SN 11111111 11111111 1111111 Mask

200.1.1.0/26 Class C, 24 network bits. 26 – 24 = 2. 2 to the 2nd power = 4 valid subnets.

NW 11111111 11111111 1111111 Mask SN 11111111 11111111 1111111 Mask

45.0.0.0 255.240.0.0 Class A, 8 network bits. SN mask converts to /12 in prefix notation. 12 – 8 = 4. 2 to the 4th power = 16 valid subnets.

NW 11111111 00000000 000000 Mask SN 11111111 11110000 000000 Mask

222.33.44.0 255.255.255.248 Class C, 24 network bits. SN mask converts to /29 in prefix notation.

29 – 24 = 5. 2 to the 5th power = 32 valid subnets.

NW 11111111 11111111 1111111 Mask SN 11111111 11111111 1111111 Mask

23.0.0.0 255.255.224.0 Class A, 8 network bits. SN mask converts to /19. 19 – 8 = 11. 2 to the 11th power = 2048 valid subnets.

NW 11111111 00000000 000000 Mask SN 11111111 11111111 111000 Mask

And that’s it! Once you practice this question type, you’ll nail the questions accurately and quickly – and you’ll see the same is true of our next question type!

Determining The Number Of Valid Hosts On A Subnet

As in the previous section, the subnetting’s been done, and we’re now being asked to come up with a value regarding that subnetting. In this case, we need to come up with the number of valid hosts per subnet. We first need to know how many host bits are in the subnet mask, and there’s a lightning-fast way to figure that out:

(32 – the number of 1s in the mask) = # of host bits

That’s all there is to it! Using 200.10.10.0/26 as an example, all you do is subtract 26 from 32, which gives us 6 host bits. Then plug that number into this simple formula: (2 raised to the power of the number of host bits) – 2 2 to the 6th power is 64, and 64 – 2 = 62. That’s your number of valid host addresses! With practice, you’ll easily figure this out for any subnet in well under a minute. A couple of things to watch out for:

Note this formula uses the number of host bits, not the number of subnet bits.
We subtract 2 from the almost-final answer.

What’s going on with that “-2” at the end? That accounts for the two following unusable host addresses:

The first address in the range is the subnet number itself.
The last address in the range is the subnet’s broadcast address.

Since neither of these addresses should be assigned to hosts, we need to subtract 2 when calculating the number of valid hosts in a subnet. Since practice makes perfect CCENTs and CCNAs, let’s get in some practice with this question type. I’ve broken the answers down to the bit level, since you need both the right answer and how we arrived at that answer! Feel free not to write the masks out on exam day. To avoid the unbearable pressure of not peeking at the answers, the questions are listed together first,

followed by the answers and explanations. Let’s get started!

The Questions

Determine how many valid host addresses exist in each of the following subnets:

220.11.10.0 /26
129.15.0.0 /21
222.22.2.0 /30
212.10.3.0 /28
14.0.0.0 /20
221.10.78.0 255.255.255.224
143.34.0.0 255.255.255.192
128.12.0.0 255.255.255.240
125.0.0.0 /24
221.10.89.0 255.255.255.248
134.45.0.0 /22

The answers….

220.11.10.0 /26

Nothing to this. Subtract the length of the subnet mask from 32 and you have your number of host bits. In this case, that’s 6, and 2 to the 6th power is 64. Subtract 2 and you

have 62 valid host addresses. 129.15.0.0 /21 Subtract the mask length from 32. That gives us 11. 2 to the 11th power equals 2048. Subtract 2 from that and 2046 valid host addresses remain. 222.22.2.0 /30 Subtract the mask length from 32. That gives us 2. 2 to the 2nd power equals 4.

Subtract 2 from that and 2 valid host addresses remain. 212.10.3.0 /28 Subtract the mask length from 32. That gives us 4. 2 to the 4th power equals 16. Subtract 2 from that and 14 valid host addresses remain. 14.0.0.0 /20 Subtract the mask length from 32, and we have 12.

2 to the 12th power is 4096; subtract 2 from that and 4094 valid host addresses remain. 221.10.78.0 255.255.255.224 Subtract the mask length from 32. That mask has its first 27 bits set to 1, so in prefix notation that’s /27. 32 – 27 = 5. 2 to the 5th power is 32; subtract 2 from that, and 30 valid host addresses remain. 143.34.0.0 255.255.255.192

Subtract the mask length from 32. This mask has its first 26 bits set to 1, so that’s 32 – 26 = 6. 2 to the 6th power is 64; subtract 2 from that, and 62 valid host addresses remain. 128.12.0.0 255.255.255.240 This mask converts to /28. 32 – 28 = 4. 2 to the 4th power is 16. Subtract 2 from that, and 14 valid host addresses remain. 125.0.0.0 /24

32 – 24 = 8. 2 to the 8th power is 256. Subtract 2 from that, and 254 valid host addresses remain. 221.10.89.0 255.255.255.248 In prefix notation, that’s a /29 mask. 32 – 29 = 3. 2 to the 3rd power is 8; subtract 2 from that, and 6 valid host addresses remain. 134.45.0.0 /22 32 – 22 = 10, so we have 10 host bits.

2 to the 10th power is 1024; subtract 2 from that, and 1022 valid host addresses remain.
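By the way, if you want to double-check answers like these while you’re practicing at home, a few lines of Python reproduce the same math. This is just a quick sketch (you won’t have it in the exam room!), using the prefix lengths from the questions above:

# The same math we just did by hand: host bits = 32 - prefix length,
# valid hosts = 2^(host bits) - 2.
def valid_hosts(prefix_length):
    host_bits = 32 - prefix_length
    return 2 ** host_bits - 2

# Prefix lengths from the practice set above (dotted masks converted to /nn).
for prefix in (26, 21, 30, 28, 20, 27, 26, 28, 24, 29, 22):
    print(f"/{prefix}: {valid_hosts(prefix)} valid hosts")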

All right! We’re now comfortable with the fundamental conversions as well as determining the number of valid hosts and subnets – all valuable skills to have for your exam and your career! In the next section, we’ll put all of this together to determine three important values with one single math operation – and there’s a great shortcut semi-hidden in the next section, too. Let’s get started!

Determining The Subnet Number Of A Given IP Address This skill is going to serve you well in both the exam room and in production networks – and I’m going to teach you how to perform this operation in minutes. (Or just one minute, with practice on your part!) Being able to determine what subnet an IP address is on is an invaluable skill for troubleshooting production networks and labs. You’d be surprised how many issues pop up just because an admin thought a host was on “Subnet A”

and the host was actually on “Subnet B”! Let’s tackle an example: “On what subnet is the IP address 10.17.2.14 255.255.192.0 found?” All you have to do is break the IP address down into binary, add up the network and subnet bits ONLY, and you’re done!

That address in binary is:

00001010 00010001 00000010 00001110 That subnet mask converts to /18 in prefix notation, so add up the first 18 bits, convert the value back to dotted decimal, and you’re done…. …. and the subnet upon which that address is found is 10.17.0.0 255.255.192.0!
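Away from the exam room, you can check this kind of answer with Python’s standard ipaddress module. The snippet below is just a verification sketch of the same “keep the network and subnet bits” idea:

import ipaddress

# Bitwise AND of the address and mask keeps only the network and subnet bits.
ip = int(ipaddress.ip_address("10.17.2.14"))
mask = int(ipaddress.ip_address("255.255.192.0"))
print(ipaddress.ip_address(ip & mask))        # 10.17.0.0

# Or let the module do the whole job (strict=False accepts a host address).
print(ipaddress.ip_network("10.17.2.14/255.255.192.0", strict=False))   # 10.17.0.0/18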

Let’s hit some more practice questions! I’ll give you the IP addresses first, and following that you’ll find the answers and explanations. Let’s get it done! For each IP address listed here, determine its subnet.

210.17.23.200 /27
24.194.34.12 /10
190.17.69.175 /22
111.11.126.5 255.255.128.0
210.12.23.45 255.255.255.248

222.22.11.199 /28
111.9.100.7 /17
122.240.19.23 /10
184.25.245.89 /20
99.140.23.143 /10
10.191.1.1 /10
222.17.32.244 /28

Answers and explanations:

210.17.23.200 /27

Convert the address to binary, add up the first 27 bits, and you’re done! 210.17.23.200 = 11010010 00010001 00010111 11001000 Subnet: 210.17.23.192 /27. 24.194.34.12 /10 24.194.34.12 = 00011000 11000010 00100010 00001100

Add up the first 10 bits = 24.192.0.0 /10.

190.17.69.175 /22 190.17.69.175 = 10111110 00010001 01000101 10101111 Add up the first 22 bits = 190.17.68.0 /22 is your subnet!

111.11.126.5 255.255.128.0

111.11.126.5 = 01101111 00001011 01111110 00000101 Add up the first 17 bits = 111.11.0.0 255.255.128.0 is your subnet!

210.12.23.45 255.255.255.248 210.12.23.45 = 11010010 00001100 00010111 00101101 Add up the first 29 bits = 210.12.23.40 255.255.255.248 is

your subnet!

222.22.11.199 /28 222.22.11.199 = 11011110 00010110 00001011 11000111 Add up the first 28 bits = 222.22.11.192 /28 is your subnet!

111.9.100.7 /17

111.9.100.7 = 01101111 00001001 01100100 00000111 Add up the first 17 bits = 111.9.0.0 /17 is your subnet!

122.240.19.23 /10 122.240.19.23 = 01111010 11110000 00010011 00010111 Add up the first 10 bits = 122.192.0.0 /10 is your subnet!

184.25.245.89 /20 184.25.245.89 = 10111000 00011001 11110101 01011001 Add up the first 20 bits = 184.25.240.0 /20 is your subnet!

99.140.23.143 /10 99.140.23.143 = 01100011 10001100 00010111 10001111

Add up the first 10 bits = 99.128.0.0 /10 is your subnet!

10.191.1.1 /10 10.191.1.1 = 00001010 10111111 00000001 00000001 Add up the first 10 bits = 10.128.0.0 /10 is your subnet!

222.17.32.244 /28

222.17.32.244 = 11011110 00010001 00100000 11110100 Add up the first 28 bits = 222.17.32.240 /28 is your subnet!

Onward!

Determining Broadcast Addresses & Valid IP Address Ranges For A Given Subnet (With The Same Quick Operation!) The operation we perform in this section will answer two different questions. Need to determine the broadcast address for a subnet? Got you covered. Need to determine the valid address range for a subnet? Got it! Best of all, it’s a quick operation.

Let’s go through a sample question and you’ll see what I mean. What is the range of valid IP addresses for the subnet 210.210.210.0 /25? We need to convert this address to binary AND identify the host bits, and we know how to do that.

210.210.210.0 = 11010010 11010010 11010010 00000000
/25 mask      = 11111111 11111111 11111111 10000000

There are three basic rules to remember when determining the subnet address, broadcast address, and range of valid addresses once you’ve identified the host bits – and these rules answer three different questions.
1. The address with all 0s for host bits is the subnet address, also referred to as the “all-zeroes” address. This is not a valid host address.
2. The address with all 1s for host bits is the broadcast address, also referred to as the “all-ones” address. This is not a valid host address.
3. All addresses between the all-zeroes and all-ones addresses are valid host addresses.
The “all-zeroes” address is 210.210.210.0. That’s easy enough – and so is the rest of this operation. When you change all the host bits to 1, the result is 210.210.210.127, and that’s our broadcast address for

this subnet. Every address between those two (210.210.210.1 – 210.210.210.126) is a valid IP address. That’s all there is to it! Let’s tackle another example: What is the broadcast address of the subnet 150.10.64.0 /18?

150.10.64.0 = 10010110 00001010 01000000 00000000
/18 mask    = 11111111 11111111 11000000 00000000

You don’t have to write out the mask on exam day if you don’t want to. I’m including it here so you see exactly what we’re doing. If all the host bits (the last 14 bits) are zeroes, the address is 150.10.64.0, the subnet address itself. This is not a valid host address. If all the host bits are ones, the address is 150.10.127.255. That is the broadcast address for this subnet and is also not a valid host address. All addresses between the subnet address and the broadcast address are valid host addresses. This gives you the range 150.10.64.1 – 150.10.127.254.
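When you’re practicing on your own, Python’s ipaddress module will confirm all three values – subnet address, broadcast address, and usable range. A quick sketch, nothing more:

import ipaddress

net = ipaddress.ip_network("150.10.64.0/18")
print(net.network_address)     # 150.10.64.0    -- the "all-zeroes" address
print(net.broadcast_address)   # 150.10.127.255 -- the "all-ones" address
# The usable range runs from one above the subnet address to one below the broadcast.
print(net.network_address + 1, "-", net.broadcast_address - 1)   # 150.10.64.1 - 150.10.127.254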

Let’s get some more practice! First, I’ll list the subnets, and it’s up to you to determine the range of valid host addresses and the broadcast address for each subnet. After the list, I’ll show you the answer and explanation for each one.

222.23.48.64 /26
140.10.10.0 /23
10.200.0.0 /17
198.27.35.128 /27
132.12.224.0 /27

211.18.39.16 /28
10.1.2.20 /30
144.45.24.0 /21
10.10.128.0 255.255.192.0
221.18.248.224 /28
123.1.0.0 /17
203.12.17.32 /27

Time for answers and explanations!

222.23.48.64 /26
222.23.48.64 = 11011110 00010111 00110000 01000000
/26 mask     = 11111111 11111111 11111111 11000000

All-Zeroes (Subnet) Address: 222.23.48.64 /26 All-Ones (Broadcast) Address: 222.23.48.127 /26 Valid IP address range: 222.23.48.65 – 222.23.48.126

140.10.10.0 /23
140.10.10.0 = 10001100 00001010 00001010 00000000
/23 mask    = 11111111 11111111 11111110 00000000

All-Zeroes (Subnet) Address: 140.10.10.0 /23 All-Ones (Broadcast) Address: 140.10.11.255 /23 Valid IP address range: 140.10.10.1 – 140.10.11.254 10.200.0.0 /17

10.200.0.0 = 00001010 11001000 00000000 00000000
/17 mask   = 11111111 11111111 10000000 00000000

All-Zeroes (Subnet) Address: 10.200.0.0 /17 All-Ones (Broadcast) Address: 10.200.127.255 /17 Valid IP address range: 10.200.0.1 – 10.200.127.254 198.27.35.128 /27

198.27.35.128 = 11000110 00011011 00100011 10000000
/27 mask      = 11111111 11111111 11111111 11100000

All-Zeroes (Subnet) Address:

198.27.35.128 /27 All-Ones (Broadcast) Address: 198.27.35.159 /27 Valid IP address range: 198.27.35.129 – 198.27.35.158 132.12.224.0 /27

132.12.224.0 = 10000100 00001100 11100000 00000000
/27 mask     = 11111111 11111111 11111111 11100000

All-Zeroes (Subnet) Address: 132.12.224.0 /27

All-Ones (Broadcast) Address: 132.12.224.31 /27 Valid IP address range: 132.12.224.1 – 132.12.224.30 211.18.39.16 /28

211.18.39.16 = 11010011 00010010 00100111 00010000
/28 mask     = 11111111 11111111 11111111 11110000

All-Zeroes (Subnet) Address: 211.18.39.16 /28 All-Ones (Broadcast) Address:

211.18.39.31 /28 Valid IP address range: 211.18.39.17 – 211.18.39.30 10.1.2.20 /30

10.1.2.20 = 00001010 00000001 00000010 00010100
/30 mask  = 11111111 11111111 11111111 11111100

All-Zeroes (Subnet) Address: 10.1.2.20 /30 All-Ones (Broadcast) Address: 10.1.2.23 /30

Valid IP address range: 10.1.2.21 – 10.1.2.22 /30

144.45.24.0 /21
144.45.24.0 = 10010000 00101101 00011000 00000000
/21 mask    = 11111111 11111111 11111000 00000000

All-Zeroes (Subnet) Address: 144.45.24.0 /21 All-Ones (Broadcast) Address: 144.45.31.255 /21 Valid IP address range: 144.45.24.1

– 144.45.31.254 /21 10.10.128.0 255.255.192.0

10.10.128.0   = 00001010 00001010 10000000 00000000
255.255.192.0 = 11111111 11111111 11000000 00000000

All-Zeroes (Subnet) Address: 10.10.128.0 255.255.192.0 All-Ones (Broadcast) Address: 10.10.191.255 255.255.192.0 Valid IP address range: 10.10.128.1 – 10.10.191.254

221.18.248.224 /28

221.18.248.224 = 11011101 00010010 11111000 11100000
/28 mask       = 11111111 11111111 11111111 11110000

All-Zeroes (Subnet) Address: 221.18.248.224 /28 All-Ones (Broadcast) Address: 221.18.248.239 /28 Valid IP address range: 221.18.248.225 – 238 /28 123.1.0.0 /17

123.1.0.0 = 01111011 00000001 00000000 00000000
/17 mask  = 11111111 11111111 10000000 00000000

All-Zeroes (Subnet) Address: 123.1.0.0 /17 All-Ones (Broadcast) Address: 123.1.127.255 /17 Valid IP address range: 123.1.0.1 – 123.1.127.254 /17

203.12.17.32 /27
203.12.17.32 = 11001011 00001100 00010001 00100000
/27 mask     = 11111111 11111111 11111111 11100000

All-Zeroes (Subnet) Address: 203.12.17.32 /27 All-Ones (Broadcast) Address: 203.12.17.63 /27 Valid IP address range: 203.12.17.33 – 203.12.17.62 Great work! Now let’s put this ALL together and

tackle some real-world subnetting situations that just might be CCENT and CCNA subnetting situations as well! On to the next section!

Meeting Stated Design Requirements (Or “Hey, We’re Subnetting!”)

Now we’re going to put our skills together and answer questions that are asked before the subnetting’s done! Actually, we’re doing the subnetting (at last!) A typical subnetting question …. “Using network 150.50.0.0, you must design a subnetting scheme

that allows for at least 200 subnets, but no more than 150 hosts per subnet. Which of the following subnet masks is best suited for this task?” (The question could also give you no choices and ask you to come up with the best possible mask, just like my practice questions.) We’re dealing with a Class B network, which means we have 16 network bits and 16 host bits. We’ll borrow subnet bits from the host bits, so we’ll leave the host bits area blank for now.

NW Bits:   11111111 11111111
Host Bits:

The formulas for determining the number of bits needed for a given number of subnets or hosts:

The number of valid subnets = (2 raised to the power of the number of subnet bits)
The number of valid hosts = (2 raised to the power of the number of host bits) − 2

The key to this question is to come up with the minimum number of bits you’ll need for the required number of subnets, and make sure the remaining host bits give you enough hosts, but not too many hosts. We need eight subnet bits to give us at least 200 subnets: 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 256 subnets. Proposed solution: 255.255.255.0

NW Bits:   11111111 11111111
SN Bits:   11111111
Host Bits: 00000000

This mask leaves eight host bits, which would result in 254 hosts. This violates the requirement that we have no more than 150 hosts per subnet. What happens if you borrow one more host bit for subnetting, giving you 9 subnet bits and 7 host bits?

9 Subnet Bits: 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 512
7 Host Bits: 2 × 2 × 2 × 2 × 2 × 2 × 2 = 128; 128 − 2 = 126
This gives you 512 subnets and 126 hosts, meeting both requirements, and the resulting mask is 255.255.255.128 (/25). The great thing about this question type is that it plays to your strengths. You already know how to work with subnet bits and host bits. What you must watch out for are answers that meet one requirement but do not meet the other. Let’s walk through another

example: Using network 220.10.10.0, you must develop a subnetting scheme that allows for a minimum of six hosts and a minimum of 25 subnets. What’s the best mask to use? Watch this question – it’s asking for two minimums. This is a Class C network, so 24 of the bits are already used with the network mask. You have only eight bits to split between the subnet and the host bits. Before subnetting: Class C

network mask 255.255.255.0

NW Bits:   11111111 11111111 11111111
SN Bits:
Host Bits:

For at least 25 subnets, 5 subnet bits are needed: 2 × 2 × 2 × 2 × 2 = 32 subnets

This would leave three host bits. Does this meet the other requirement? 2 × 2 × 2 = 8; 8 − 2 = 6 hosts. That meets the second requirement, so a mask of 5 subnet bits and 3 host bits will get the job done.

NW Bits:   11111111 11111111 11111111
SN Bits:   11111
Host Bits: 000

The resulting mask is 255.255.255.248. As you’ve seen, this question type brings into play skills you’ve already developed. Just be careful when analyzing the question’s requirements, and you’ll get the correct answer every time. Practice makes perfect, so let’s practice!
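One more tool before the practice questions: if you’d like to check this kind of answer after a pencil-and-paper session, here’s a rough brute-force sketch in Python. The best_mask helper and its arguments are my own illustration, not anything from the exam:

def best_mask(network_bits, min_subnets, max_hosts):
    """Try every possible split of the non-network bits and return the first
    prefix length that gives at least min_subnets and at most max_hosts."""
    for subnet_bits in range(1, 32 - network_bits):
        host_bits = 32 - network_bits - subnet_bits
        if 2 ** subnet_bits >= min_subnets and 2 ** host_bits - 2 <= max_hosts:
            return network_bits + subnet_bits
    return None

# The 150.50.0.0 example above: Class B (16 network bits),
# at least 200 subnets, no more than 150 hosts per subnet.
print(best_mask(16, 200, 150))   # 25 -- i.e. /25, 255.255.255.128

For questions that state a minimum number of hosts or a maximum number of subnets instead, just flip the comparisons to match the wording.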

“Meeting Design Requirements” Questions: Your network has been assigned the address 140.10.0.0. Your network manager requires that you have at least 250 subnets, and that any of these subnets will support at least 250 hosts. What’s the best mask to use? This Class B network has 16 network bits, which we never borrow for subnetting, and 16 host bits, which we always borrow for

subnetting. (hint hint) Before subnetting: Class B network mask 255.255.0.0

NW Bits:   11111111 11111111
SN Bits:
Host Bits: 00000000 00000000

You must have at least 250 subnets, and eight subnet bits would give us that (256, to be exact). That leaves

eight host bits, giving us 254 hosts, so the resulting mask of 255.255.255.0 meets both requirements.

NW Bits:   11111111 11111111
SN Bits:   11111111
Host Bits: 00000000

Your network has been assigned

the network number 200.10.10.0. Your network manager has asked you to come up with a subnet mask that will allow for at least 15 subnets. No subnet should ever contain more than 12 hosts, but should contain at least five. What’s the best mask to use? This Class C network’s mask has 24 network bits. There are only eight host bits to borrow for subnetting. Before subnetting: Class C network mask 255.255.255.0

NW Bits:   11111111 11111111 11111111
SN Bits:
Host Bits:

Four subnet bits would give you 16 subnets, meeting the first requirement. The problem is that this would leave 4 host bits, resulting in 14 hosts, which violates the second requirement. The maximum number of host bits

you can have in this answer is three, which would result in 6 hosts. You can’t have fewer, because two host bits would allow only two hosts. That leaves five subnet bits, which meets the first requirement. The only mask that meets both requirements is /29.

NW Bits:   11111111 11111111 11111111
SN Bits:   11111
Host Bits: 000

Your network has been assigned 134.100.0.0. Your network manager requests that you come up with a subnet mask that allows for at least 500 subnets, but no subnet should be able to hold more than 120 hosts per subnet. What is the best subnet mask to use? Network 134.100.0.0 is a Class B network with a network mask of 255.255.0.0. Sixteen bits remain to be split between the subnet bits and

host bits. Before subnetting: Class B mask 255.255.0.0

NW Bits:   11111111 11111111
SN Bits:
Host Bits: 00000000 00000000

For 500 subnets, a minimum of nine subnet bits will be needed (2 to the 9th power is 512).

That would leave 7 host bits. Does this meet the second requirement? No. 2 to the 7th power is 128. Subtract 2 and 126 host addresses remain, violating the second requirement. A mix of 10 subnet bits and 6 host bits will work. 10 subnet bits result in 1024 valid subnets, meeting the first requirement. That would leave 6 host bits, which yields 62 valid hosts. That meets the second requirement.

NW Bits:   11111111 11111111
SN Bits:   11111111 11
Host Bits: 000000

The mask is 255.255.255.192. This is the type of question you really have to watch. It would be easy to say “okay, 9 subnet bits gives me 512 subnets, that’s the right answer”, choose that answer, and move on. You must ensure that your answer meets both

requirements!

Your network has been assigned 202.10.40.0. Your network manager requests that you come up with a subnet mask that allows at least 10 subnets, but no subnet should allow more than 10 hosts. What is the best subnet mask to use? Network 202.10.40.0 is a Class C network with a mask of 255.255.255.0. Only eight bits

remain to be split between the subnet bits and host bits. Before subnetting: Class C network mask 255.255.255.0

NW Bits:   11111111 11111111 11111111
SN Bits:
Host Bits:

For a minimum of 10 subnets, at least four subnet bits would be

needed (2 to the 4th power = 16). This would leave four host bits. Does this meet the second requirement? No. There would be 14 hosts. Five subnet bits and three host bits will meet the requirements. This would yield 32 subnets and 6 hosts. The resulting mask is 255.255.255.248.

NW Bits:   11111111 11111111 11111111
SN Bits:   11111
Host Bits: 000

You’re working with 37.0.0.0. Your manager requests that you allow for at least 500 hosts per subnet; however, he wants as many subnets as possible without exceeding 1000 subnets. What is the best subnet mask to use? Network 37.0.0.0 is a Class A network, so we have 24 host bits to work with. Before subnetting: Class A

network mask 255.0.0.0

NW Bits:   11111111
SN Bits:
Host Bits: 00000000 00000000 00000000

The requirement for 500 hosts is no problem; we only need nine host bits to have 510 valid host addresses (2 to the 9th power − 2 = 510).

The problem comes in with the requirement of not having more than 1000 subnets. If we only used nine host bits, that would leave 15 subnet bits, which would result in over 32,000 subnets! How many subnet bits can we borrow without going over 1000 subnets? Nine subnet bits would give us 512 valid subnets; that’s as close as we can come without going over. Doing so would leave us with 15 host bits, which would certainly meet the “minimum number of hosts” requirement.

NW Bits:   11111111
SN Bits:   11111111 1
Host Bits: 0000000 00000000

The best mask to use to meet both requirements is 255.255.128.0. Do not let the “minimum” part of the requirement throw you off. If you’re asked for a minimum of 500 hosts or 500 subnets, as long as

you’ve got more than that, it doesn’t matter how many more you have. The requirement is met. The key is to meet both requirements. You’re working with 157.200.0.0. You must develop a subnetting scheme where each subnet will support at least 200 hosts, and you’ll have between 100 and 150 subnets. What is the appropriate subnet mask to use? This network number is Class B, so we have 16 host bits to work with. Before subnetting: Class B network mask 255.255.0.0

NW Bits:   11111111 11111111
SN Bits:
Host Bits: 00000000 00000000

Eight host bits would result in 254 hosts, enough for the first requirement. However, this would also result in 256 valid subnets, violating the second requirement. (2 to the 8th power = 256; 256 − 2 = 254 hosts.) The only number of subnet bits that

results in between 100 and 150 valid subnets is 7; this yields 128 valid subnets. (Six subnet bits would yield 64 valid subnets.) This means we would have nine host bits left, more than meeting the “at least 200 hosts” requirement.

NW Bits:   11111111 11111111
SN Bits:   1111111
Host Bits: 0 00000000

The proper mask is 255.255.254.0.

Given network number 130.245.0.0, what subnet mask will result in at least 250 valid hosts per subnet, but between 60 and 70 valid subnets? With this Class B network, there are 16 host bits. How many subnet bits need to be borrowed to yield between 60 and 70 subnets? The only number of subnet bits that yield this particular number is six, which gives us 64 valid subnets.

Five subnet bits yield too few valid subnets (32), while seven subnet bits yield too many (128). If you borrow six subnet bits, how many hosts will be available per subnet? The remaining ten host bits will give you 1022 valid host addresses, more than enough for the first requirement. Therefore, the appropriate mask is 255.255.252.0.

NW Bits:   11111111 11111111
SN Bits:   111111
Host Bits: 00 00000000

Time for our final exam! Let’s get right to it – in the very next section!


Finals!

Let’s put it all together for one big final exam! We’ll sharpen our skills for exam success on these questions, and they’re presented in the same order in which they appeared in this book. If you’re a little hesitant on how to answer any of these questions, be sure to go back and get more practice! Let’s get started!

Converting Binary To Dotted Decimal The string: 01010101 11100010 01101010 01001010

Answer: 85.226.106.74 The string: 11110000 00001111 01111111 10000000

Answer: 240.15.127.128. The string: 11001101 00000011 11110010 00100101

Answer: 205.3.242.37.

The string: 00110010 00100011 11110011 00100111

Answer: 50.35.243.39. The string: 10000111 00111111 01011111 00110010

Answer: 135.63.95.50 Converting Dotted Decimal Addresses To Binary Strings The address: 195.29.37.126

Answer: 11000011 00011101 00100101 01111110. The address: 207.93.49.189

Answer: 11001111 01011101 00110001 10111101. The address: 21.200.80.245

Answer: 00010101 11001000 01010000 11110101. The address: 105.83.219.91

Answer: 01101001 01010011

11011011 01011011. The address: 123.54.217.4

Answer: 01111011 00110110 11011001 00000100.
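(If you’d like to self-check conversions like these while you practice, a couple of lines of Python will do it – purely a study aid, and the sample strings are the ones from the questions above.)

# Dotted decimal to binary: format each octet as an 8-bit string.
address = "195.29.37.126"
print(" ".join(format(int(octet), "08b") for octet in address.split(".")))   # 11000011 00011101 00100101 01111110

# Binary back to dotted decimal: read each 8-bit group as a base-2 number.
binary = "01010101 11100010 01101010 01001010"
print(".".join(str(int(group, 2)) for group in binary.split()))              # 85.226.106.74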

Determining The Number Of Valid Subnets

How many valid subnets are on the 222.12.240.0 /27 network? This is a Class C network, with a network mask of /24. The subnet mask is /27, indicating three subnet bits. 2 to the 3rd power is 8 = 8 valid subnets. How many valid subnets are on the 10.1.0.0 /17 network? This is a Class A network, with a network mask of /8. The subnet mask is /17, indicating nine subnet bits. (17 − 8 = 9)

2 to the 9th power is 512 = 512 valid subnets. How many valid subnets are on the 111.0.0.0 /14 network? This is a Class A network, with a network mask of /8. The subnet mask is /14, indicating six subnet bits. (14 − 8 = 6) 2 to the 6th power is 64 = 64 valid subnets. How many valid subnets are on the 172.12.0.0 /19 network?

This is a Class B network, with a network mask of /16. The subnet mask is /19, indicating three subnet bits. (19 − 16 = 3) 2 to the 3rd power is 8 = 8 valid subnets. How many valid subnets are on the 182.100.0.0 /27 network? This is a Class B network, with a network mask of /16. The subnet mask is /27, indicating 11 subnet bits. (27 − 16 = 11) 2 to the 11th power is 2048 = 2048 valid subnets.

How many valid subnets exist on the 221.23.19.0 /30 network? This is a Class C network, with a network mask of /24. The subnet mask is /30, indicating six subnet bits. (30 − 24 = 6) 2 to the 6th power is 64 = 64 valid subnets. How many valid subnets exist on the 17.0.0.0 255.240.0.0 network? This is a Class A network, with a network mask of 255.0.0.0. The

subnet mask here is 255.240.0.0 (/12), indicating four subnet bits. (12 − 8 = 4) 2 to the 4th power is 16 = 16 valid subnets. How many valid subnets exist on the 214.12.200.0 255.255.255.248 network? This is a Class C network, with a network mask of 255.255.255.0. The subnet mask here is 255.255.255.248 (/29), indicating five subnet bits. (29 − 24 = 5) 2 to the 5th power is 32 = 32 valid

subnets. How many valid subnets exist on the 155.200.0.0 255.255.255.128 network? This is a Class B network, with a network mask of 255.255.0.0. The subnet mask here is 255.255.255.128 (/25), indicating nine subnet bits. (25 − 16 = 9) 2 to the 9th power is 512 = 512 valid subnets.
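(Here’s one more self-check sketch for the subnet-count questions; the class boundaries below simply encode the classful network masks – /8, /16, /24 – that we’ve been subtracting from the prefix length.)

def valid_subnets(first_octet, prefix_length):
    # Classful network length: Class A = /8, Class B = /16, Class C = /24.
    if first_octet < 128:
        network_bits = 8
    elif first_octet < 192:
        network_bits = 16
    else:
        network_bits = 24
    # Subnet bits = prefix length minus the classful network length.
    return 2 ** (prefix_length - network_bits)

print(valid_subnets(222, 27))   # 8
print(valid_subnets(10, 17))    # 512
print(valid_subnets(155, 25))   # 512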

Determining The Number Of Valid Hosts

How many valid host addresses exist on the 211.24.12.0 /27 subnet? To determine the number of host bits, just subtract the subnet mask length from 32. 32 – 27 = 5. To then determine the number of host addresses, bring 2 to that result’s power and subtract 2. 2 to the 5th power = 32, 32 − 2 = 30 valid host addresses. How many valid host addresses exist on the 178.34.0.0 /28 subnet?

To determine the number of host bits, just subtract the subnet mask length from 32. 32 − 28 = 4. To then determine the number of host addresses, bring 2 to that result’s power and subtract 2. 2 to the 4th power = 16, 16 − 2 = 14 valid host addresses. How many valid host addresses exist on the 211.12.45.0 /30 subnet? Subtract the subnet mask length from 32. 32 − 30 = 2 host bits. Bring 2 to that result’s power and

subtract 2. 2 to the 2nd power = 4, 4 – 2 = 2 valid host addresses on that subnet. How many valid host addresses exist on the 129.12.0.0 /20 subnet? Subtract the subnet mask length from 32. 32 – 20 = 12 host bits. Bring 2 to that result’s power and subtract 2. 2 to the 12th power = 4096, and 4096 − 2 = 4094 valid host addresses on that subnet. How many valid host addresses

exist on the 220.34.24.0 255.255.255.248 subnet? Subtract the subnet mask length from 32. 32 – 29 = 3 host bits. Bring 2 to that result’s power and subtract 2. 2 to the 3rd power = 8, 8 – 2 = 6 valid host addresses on this subnet. How many valid host addresses exist on the 145.100.0.0 255.255.254.0 subnet? Subtract the subnet mask length from 32. 32 – 23 = 9 host bits.

Bring 2 to that result’s power and subtract 2. 2 to the 9th power = 512, and 512 – 2 = 510 valid host addresses on that subnet. How many valid host addresses exist on the 23.0.0.0 255.255.240.0 subnet? Subtract the subnet mask length from 32. 32 − 20 = 12 host bits. Bring 2 to that result’s power and subtract 2. 2 to the 12th power = 4096, 4096 − 2 = 4094 valid host addresses on that subnet.

Determining The Subnet Number Of A Given IP Address On what subnet can the IP address 142.12.38.189 /25 be found? Start writing out the 142.12.38.189 address in binary, and stop once you’ve converted 25 bits. That result gives you the answer. (You can also write out the entire address for practice and then add up the first 25 bits.) First 25 bits = 10001110 00001100 00100110 1xxxxxxx Result = 142.12.38.128 /25.

On what subnet can the IP address 170.17.209.36 /19 be found? Convert that IP address to binary and stop once you get to 19 bits, then convert right back to dotted decimal. First 19 bits = 10101010 00010001 110xxxxx xxxxxxxx The answer: 170.17.192.0 /19. On what subnet can the IP address 10.178.39.205 /29 be found? Convert the address to binary and stop at the 29-bit mark.

29 bits = 00001010 10110010 00100111 11001xxx = 10.178.39.200 Tack your /29 on the back and you have the answer! On what subnet can the IP address 190.34.9.173 /22 be found? Convert the address to binary, stop at 22 bits, and then convert the address right back to decimal. First 22 bits = 10111110 00100010 000010xx xxxxxxxx = 190.34.8.0 /22

On what subnet can the IP address 203.23.189.205 255.255.255.240 be found? Write out the address in binary and stop at the 28-bit mark, then convert those 28 bits back to decimal. Done! 1st 28 bits = 11001011 00010111 10111101 1100xxxx = 203.23.189.192 / 28 On what subnet can the IP address 49.210.83.201 255.255.255.248 be found? Convert the address to binary up to

the 29-bit mark, and convert those 29 bits right back to decimal. 00110001 11010010 01010011 11001xxx = 49.210.83.200 / 29. On what subnet can the IP address 31.189.234.245 /17 be found? Convert the address to binary up to the 17-bit mark, then convert those 17 bits right back to decimal. 31.189.234.245 = 00011111 10111101 1xxxxxxx xxxxxxxx = 31.189.128.0 /17

On what subnet can the IP address 190.98.35.17 /27 be found? Convert the address to binary up to the 27-bit mark, then convert those 27 bits right back to decimal. 190.98.35.17 = 10111110 01100010 00100011 000xxxxx = 190.98.35.0 / 27 Determining Broadcast Addresses and Valid IP Address Ranges For each of the following, identify the valid IP address range and the broadcast address for that subnet.

100.100.45.32 /28
208.72.109.8 /29
190.89.192.0 255.255.240.0
101.45.210.52 /30
90.34.128.0 /18
205.186.34.64 /27
175.24.36.0 255.255.252.0
10.10.44.0 /25
120.20.240.0 /21
200.18.198.192 /26

Answers and explanations follow!

The subnet: 100.100.45.32 /28

We know that the last four bits are the host bits. If these are all zeroes, this is the subnet address itself. If they are all ones, this is the broadcast address for this subnet. All addresses between the two are valid. “All-Zeroes” Subnet Address: 100.100.45.32 /28

“All-Ones” Broadcast Address: 100.100.45.47 /28 Valid IP Addresses: 100.100.45.33 – 46 /28 The subnet: 208.72.109.8 /29

“All-Zeroes” Subnet Address: 208.72.109.8 /29 “All-Ones” Broadcast Address: 208.72.109.15 /29

Valid IP Addresses: 208.72.109.9 – 208.72.109.14 /29 The subnet: 190.89.192.0 255.255.240.0

“All-Zeroes” Subnet Address: 190.89.192.0 /20 “All-Ones” Broadcast Address: 190.89.207.255 /20 Valid IP Addresses: 190.89.192.1 –

190.89.207.254 /20 The subnet: 101.45.210.52 /30

“All-Zeroes” Subnet Address: 101.45.210.52 /30 “All-Ones” Broadcast Address: 101.45.210.55 /30 Valid IP Addresses: 101.45.210.53, 101.45.210.54 /30

The subnet 90.34.128.0 /18

“All-Zeroes” Subnet Address: 90.34.128.0 /18 “All-Ones” Broadcast Address: 90.34.191.255 /18 Valid IP Addresses: 90.34.128.1 – 90.34.191.254 /18 The subnet: 205.186.34.64 /27

“All-Zeroes” Subnet Address: 205.186.34.64 /27 “All-Ones” Broadcast Address: 205.186.34.95 /27 Valid IP Addresses: 205.186.34.65 – 94 /27 The subnet: 175.24.36.0 255.255.252.0

“All-Zeroes” Subnet Address: 175.24.36.0 /22 “All-Ones” Broadcast Address: 175.24.39.255 /22 Valid IP Addresses: 175.24.36.1 – 175.24.39.254 /22 The subnet: 10.10.44.0 /25

“All-Zeroes” Subnet Address: 10.10.44.0 /25 “All-Ones” Broadcast Address: 10.10.44.127 /25 Valid IP Addresses: 10.10.44.1 – 10.10.44.126 /25 The subnet: 120.20.240.0 /21

“All-Zeroes” Subnet Address: 120.20.240.0 /21

“All-Ones” Broadcast Address: 120.20.247.255 /21 Valid IP Addresses: 120.20.240.1 – 120.20.247.254 /21 The subnet: 200.18.198.192 /26

“All-Zeroes” Subnet Address: 200.18.198.192 /26 “All-Ones” Broadcast Address: 200.18.198.255 /26

Valid IP Addresses: 200.18.198.193 – 200.18.198.254 /26 Now let’s put it all together for some real-world design requirement questions! Meeting The Stated Design Requirements You’re working with network 135.13.0.0. You need at least 500 valid subnets, but no more than 100 hosts per subnet. What is the best subnet mask to use?

This is a Class B network, with 16 network bits and 16 host bits.

The first requirement is that we have at least 500 subnets. Nine subnet bits would give us 512 valid subnets: 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 512. This would leave seven host bits, resulting in 126 valid host

addresses, which violates the second requirement. (2 to the 7th power is 128; subtract two, and 126 valid host addresses remain.) What about six host bits? That would yield 62 valid host addresses, which meets the second requirement. A combination of ten subnet bits and six host bits gives us 1024 valid subnets and 62 valid host addresses, meeting both requirements.

The resulting mask is 255.255.255.192 (/26). You’re working with the network 223.12.23.0. Your network manager has asked you to develop a subnetting scheme that allows at least 30 valid hosts per subnet, but yields no more than five valid subnets. What’s the best subnet mask to use? This Class C network’s mask is /24, leaving eight host bits to borrow for subnetting.

We know we need five host bits for at least 30 hosts per subnet. (2 to the 5th power, minus two, equals exactly 30.) Does this meet the second requirement? No. That would leave three subnet bits, which yields eight valid subnets. To meet the second requirement, you can have only two subnet bits, which yields four valid subnets and leaves six host bits – still at least 30 hosts per subnet.

This yields a mask of 255.255.255.192 (/26). You’re working with the network 131.10.0.0. Your network manager has requested that you develop a subnetting scheme that allows at least fifty subnets. No subnet should contain more than 1000 hosts. What is the best subnet mask to use?

This Class B network has 16 network bits, and 16 host bits that can be borrowed for subnetting.

You quickly determine that for fifty subnets, you only need six subnet bits. That gives you 64 valid subnets. Does this mask meet the second requirement? No. That would leave 10 host bits, which yields 1022 valid host addresses. (2 to the 10th power

equals 1024; subtract two, and 1022 remain.) By borrowing one more bit for subnetting, giving us seven subnet bits and nine host bits, both requirements are met. Seven subnet bits yield 128 valid subnets, and nine host bits yield 510 valid host addresses. The appropriate mask is 255.255.254.0.

Congratulations! You’ve completed

this final exam. If you had any difficulty with the final section, please review Section Eight. If you nailed all five of the final questions – great work! To wrap things up, let’s hit Variable Length Subnet Masking!

How To Develop A VLSM Scheme In the networks we’ve been working with in the binary and subnetting section, we’ve cut our IP address space “pie” into nice, neat slices of the same size. We don’t always want to do that, though. If we have a point-to-point network, why assign a subnet number to that network that gives you 200+ addresses when you’ll only need two? That’s where Variable-Length Subnet Masking comes in. VLSM is

the process of cutting our address pie into uneven slices. The best way to get used to VLSM is to work with it, so let’s go through a couple of drills where VLSM will come in handy. Our first drill will involve the major network number 210.49.29.0. We’ve been asked to create a VLSM scheme for the following five networks, and we’ve also been told that there will be no further hosts added to any of these segments. The requirement is to use no more IP addresses from this range for any subnet than is

absolutely necessary. The networks:

NW A: 20 hosts
NW B: 10 hosts
NW C: 7 hosts
NW D: 5 hosts
NW E: 3 hosts

We’ll need to use the formula for determining how many valid host addresses are yielded by a given number of host bits: (2 to the nth power) - 2, with n representing the number of

host bits. To create our VLSM scheme, we’ll ask this simple question over and over: “What is the smallest subnet that can be created with all host bits set to zero?” NW A requires 20 valid host addresses. Using the above formula, we determine that we will need 5 host bits (2 to the 5th power equals 32; 32 − 2 = 30). What is the smallest subnet that can be created with all host bits set to zero? 210.49.29.0 in binary: 11010010

00110001 00011101 00000000 /27 subnet mask: 11111111 11111111 11111111 11100000 We’ll use a subnet mask of /27 to have five host bits remaining, resulting in a subnet and subnet mask of 210.49.29.0 /27, or 210.49.29.0 255.255.255.224. It’s an excellent idea to keep a running chart of your VLSM scheme, so we’ll start one here. The network number itself is the value of that binary string with all host bits set to zero; the broadcast address for this subnet is the value of that binary string with all host

bits set to one. These two particular addresses cannot be assigned to hosts, but every IP address between the two is a valid host IP address.

Network Number = 11010010 00110001 00011101 00000000
Broadcast Add. = 11010010 00110001 00011101 00011111

Network   Subnet & Mask      Network Address
NW A      210.49.29.0 /27    210.49.29.0

The next subnet will start with the

next number up from the broadcast address. In this case, that’s 210.49.29.32. With a need for 10 valid host addresses, what will the subnet mask be? 210.49.29.32 in binary: 11010010 00110001 00011101 00100000 /28 subnet mask: 11111111 11111111 11111111 11110000 Four host bits result in 14 valid IP addresses, since 2 to the 4th power is 16 and 16 − 2 = 14. We use a subnet mask of /28 to have four host bits remaining, resulting in a subnet and mask of 210.49.29.32 /28, or

210.49.29.32 255.255.255.240. Remember, the network number is the value of the binary string with all host bits set to zero and the broadcast address is the value of the binary string with all host bits set to one.

Network Number = 11010010 00110001 00011101 00100000
Broadcast Add. = 11010010 00110001 00011101 00101111

Network   Subnet & Mask      Network Address
NW A      210.49.29.0 /27    210.49.29.0
NW B      210.49.29.32 /28   210.49.29.32

The next subnet is one value up from that broadcast address, which gives us 210.49.29.48. We need seven valid host addresses. How many host bits do we need? 210.49.29.48 in binary: 11010010 00110001 00011101 00110000 /28 subnet mask: 11111111 11111111 11111111 11110000 We still need four host bits - three would give us only six valid IP addresses. (Don’t forget to subtract the two!) The subnet and mask are

210.49.29.48 255.255.255.240, or 210.49.29.48 /28. Calculate the network number and broadcast address as before.

Network Number = 11010010 00110001 00011101 00110000
Broadcast Add. = 11010010 00110001 00011101 00111111

Network   Subnet & Mask      Network Address
NW A      210.49.29.0 /27    210.49.29.0
NW B      210.49.29.32 /28   210.49.29.32
NW C      210.49.29.48 /28   210.49.29.48

The next value up from that broadcast address is 210.49.29.64. We need five valid IP addresses, which three host bits will give us (2 to the 3rd power equals 8, 8 − 2 = 6). 210.49.29.64 in binary: 11010010 00110001 00011101 01000000 /29 subnet mask: 11111111 11111111 11111111 11111000 The subnet and mask are 210.49.29.64 255.255.255.248, or 210.49.29.64 /29. Calculate the network number and broadcast address as before, and bring the

VLSM table up to date.

Network Number = 11010010 00110001 00011101 01000000
Broadcast Add. = 11010010 00110001 00011101 01000111

Network   Subnet & Mask      Network Address
NW A      210.49.29.0 /27    210.49.29.0
NW B      210.49.29.32 /28   210.49.29.32
NW C      210.49.29.48 /28   210.49.29.48
NW D      210.49.29.64 /29   210.49.29.64

We’ve got one more subnet to calculate, and that one needs only three valid host addresses. What will the network number and mask be? 210.49.29.72 in binary: 11010010 00110001 00011101 01001000 /29 subnet mask: 11111111 11111111 11111111 11111000 We still need a /29 subnet mask, because a /30 mask would yield only two usable addresses. The subnet and mask are 210.49.29.72 /29, or 210.49.29.72 255.255.255.248. Calculate the network number and broadcast

address, and bring the VLSM table up to date.

Network Number = 11010010 00110001 00011101 01001000
Broadcast Add. = 11010010 00110001 00011101 01001111

Network   Subnet & Mask      Network Address
NW A      210.49.29.0 /27    210.49.29.0
NW B      210.49.29.32 /28   210.49.29.32
NW C      210.49.29.48 /28   210.49.29.48
NW D      210.49.29.64 /29   210.49.29.64
NW E      210.49.29.72 /29   210.49.29.72
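If you’d like to double-check a finished scheme like this one (or grade your own practice drills), here’s a rough Python sketch of the same process – the host counts below are the ones from this drill, and the vlsm_scheme helper itself is just my illustration:

import math

def vlsm_scheme(base_address, host_counts):
    """Carve subnets out of base_address, largest requirement first, using the
    smallest mask that still leaves room for each host count (plus the subnet
    and broadcast addresses)."""
    next_network = int.from_bytes(bytes(int(o) for o in base_address.split(".")), "big")
    scheme = []
    for hosts in sorted(host_counts, reverse=True):
        host_bits = max(2, math.ceil(math.log2(hosts + 2)))
        prefix = 32 - host_bits
        network = ".".join(str(b) for b in next_network.to_bytes(4, "big"))
        scheme.append((network, prefix, hosts))
        next_network += 2 ** host_bits   # the next subnet starts just past this one's broadcast
    return scheme

for network, prefix, hosts in vlsm_scheme("210.49.29.0", [20, 10, 7, 5, 3]):
    print(f"{network}/{prefix}   ({hosts} hosts required)")

Run against this drill, it prints the same five subnets as the chart above.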

And now you’re done! The next subnet would be 210.49.29.80, and the mask would of course be determined by the number of host addresses needed on the segment. A final binary word: You either know how to determine the number of valid subnets, valid hosts, or perform the subnetting from scratch, or you don’t - and how do you learn how to do it? Practice.

You don’t need expensive practice exams - the only thing you need is a piece of paper and a pencil. Just come up with your own scenarios! All you need to do is choose a major network number, then write down five or six different requirements for the number of valid host addresses needed for each subnet. I can tell you from firsthand experience that this is the best way to get really, really good with VLSM - just pick a network number, write down those requirements, and get to work!

number of valid addresses needed, and get to work! Thanks again for purchasing my ICND1 Study Guide, be sure to take advantage of the free resources listed in the next section, and all the best to you in your studies and career! Chris Bryant “The Computer Certification Bulldog”

Free Resources for your CCENT and CCNA exam success – and beyond! You’ll find over 325 free videos on my YouTube channel, covering several Cisco certifications now and expanding to cover IP Version 6, the new CCNA and CCENT exams, Network+, Security+, and more in 2014!

http://www.youtube.com/user/ccie12

On my main Udemy page, you’ll find descriptions and links for my free and almost-free Video Boot Camps on that site, including courses for the CCNA, CCENT, CCNP, CCNA Security, and more to come in 2014! All paid courses have a 60% discount code on their main pages!

https://www.udemy.com/u/chrisbryan Twitter:

https://twitter.com/ccie12933 Website: http://www.thebryantadvantage.com (That site’s getting a major overhaul in Dec. 2013 and Jan 2014, bear with us!) Facebook: http://on.fb.me/nlT8SD See you there! -- Chris B.
