
Configuring and Monitoring QFabric Systems 12.c

Instructor Guide

Worldwide Education Services
1194 North Mathilda Avenue
Sunnyvale, CA 94089 USA
408-745-2000
www.juniper.net
Course Number: EDU-JUN-CMQS

This document is produced by Juniper Networks, Inc. This document or any part thereof may not be reproduced or transmitted in any form under penalty of law, without the prior written permission of Juniper Networks Education Services. Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.

Configuring and Monitoring QFabric Systems Instructor Guide, Revision 12.c
Copyright © 2012 Juniper Networks, Inc. All rights reserved. Printed in USA.

Revision History:
Revision 12.a—July 2012
Revision 12.b—September 2012
Revision 12.c—October 2012

The information in this document is current as of the date listed above. The information in this document has been carefully verified and is believed to be accurate for software Release 12.2X50-D20.4. Juniper Networks assumes no responsibility for any inaccuracies that may appear in this document. In no event will Juniper Networks be liable for direct, indirect, special, exemplary, incidental, or consequential damages resulting from any defect or omission in this document, even if advised of the possibility of such damages.

Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.

SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.

Contents

Chapter 1: Course Introduction
Chapter 2: System Overview
    QFabric System Introduction
    Components and Architecture
    Control Plane and Data Plane
Chapter 3: Software Architecture
    Architecture Overview
    Software Abstractions
    Internal Protocols
Chapter 4: Setup and Initial Configuration
    System Setup
    Initial Configuration Tasks
    Configuring Network Interfaces
    Lab 1: Setup and Initial Configuration
Chapter 5: Layer 2 Features and Operations
    Layer 2 Features
    Layer 2 Operations
    Case Study
    Lab 2: Layer 2 Features and Operations
Chapter 6: Layer 3 Features and Operations
    Layer 3 Features
    Layer 3 Operations
    Case Study
    Lab 3: Layer 3 Features and Operations
Chapter 7: Network Storage Fundamentals
    Data Center Storage Overview
    Storage Technologies
Chapter 8: Fibre Channel
    Fibre Channel Operation
    Fibre Channel over Ethernet Operation
    Configuration and Monitoring
    Lab 4: Fibre Channel
Appendix A: System Upgrades
    Standard Software Upgrades
    Nonstop Software Upgrades
Appendix B: Acronym List
Appendix C: Answer Key

Course Overview

This two-day course is designed to provide students with intermediate knowledge of the QFabric system. Students will be provided an overview of the QFabric system with detailed coverage of its components, design, and architecture. Students will learn how the system is deployed and operates and will be provided configuration and monitoring examples. Through demonstrations and hands-on labs, students will gain experience in configuring and monitoring the QFabric system. This course uses the Juniper Networks QFX3000-M system for the hands-on component. This course is based on the Junos operating system Release 12.2X50-D20.4.

Objectives

After successfully completing this course, you should be able to:

•	Compare legacy environments with the QFabric system.
•	Describe the hardware components of the QFabric system.
•	Explain control plane and data plane functions in the QFabric system.
•	Describe the goals of the software architecture.
•	Explain the purpose and functions of the Director software.
•	Configure and verify some key software abstractions.
•	List and describe operations of internal protocols used in the QFabric system.
•	Perform the initial setup and configuration tasks.
•	Configure and monitor network interfaces.
•	Log in to system components and verify status.
•	Explain bridging concepts and operations for the QFabric system.
•	List and describe supported Layer 2 protocols and features.
•	Configure and monitor key Layer 2 protocols and features.
•	Explain routing concepts and operations for the QFabric system.
•	List and describe supported Layer 3 protocols and features.
•	Configure and monitor key Layer 3 protocols and features.
•	Identify the purposes of data center storage along with the challenges.
•	Describe and compare data center storage technologies.
•	List and describe data center storage networking protocols.
•	Describe common Fibre Channel topologies, components, and related terminology.
•	Explain Fibre Channel operations and issues that can impact protocol operations.
•	Configure and monitor Fibre Channel functionality on the QFabric system.
•	Identify the various QFabric system software packages.
•	Perform a standard software upgrade.
•	Perform a nonstop software upgrade.

Intended Audience

This course benefits all individuals responsible for selling, implementing, monitoring, or supporting the QFabric system.

Course Level

CMQS is an intermediate-level course.


Prerequisites

The following are the prerequisites for this course:

•	Intermediate TCP/IP networking knowledge;
•	Intermediate Layer 2 switching knowledge;
•	Introductory data center technologies knowledge; and
•	Attendance of the Junos Enterprise Switching (JEX) course, or equivalent experience. Additionally, the Junos Intermediate Routing (JIR) course is recommended.

Course Agenda

Day 1
Chapter 1: Course Introduction
Chapter 2: System Overview
Chapter 3: Software Architecture
Chapter 4: Setup and Initial Configuration
Lab 1: Setup and Initial Configuration

Day 2
Chapter 5: Layer 2 Features and Operations
Lab 2: Layer 2 Features and Operations
Chapter 6: Layer 3 Features and Operations
Lab 3: Layer 3 Features and Operations
Chapter 7: Network Storage Fundamentals
Chapter 8: Fibre Channel
Lab 4: Fibre Channel

Document Conventions

CLI and GUI Text

Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from chapter text according to the following table.

Style: Franklin Gothic
    Description: Normal text.
    Usage example: Most of what you read in the Lab Guide and Student Guide.

Style: Courier New
    Description: Console text (screen captures and noncommand-related syntax) and GUI text elements (menu names and text field entry).
    Usage example: commit complete; Exiting configuration mode; Select File > Open, and then click Configuration.conf in the Filename text box.

Input Text Versus Output Text

You will also frequently see cases where you must enter input text yourself. Often these instances will be shown in the context of where you must enter them. We use bold style to distinguish text that is input versus text that is simply displayed.

Style: Normal CLI and Normal GUI
    Description: No distinguishing variant.
    Usage example: Physical interface:fxp0, Enabled; View configuration history by clicking Configuration > History.

Style: CLI Input and GUI Input
    Description: Text that you must enter.
    Usage example: lab@San_Jose> show route; Select File > Save, and type config.ini in the Filename field.

Defined and Undefined Syntax Variables

Finally, this course distinguishes between regular text and syntax variables, and it also distinguishes between syntax variables where the value is already assigned (defined variables) and syntax variables where you must assign the value (undefined variables). Note that these styles can be combined with the input style as well.

Style: CLI Variable and GUI Variable
    Description: Text where the variable value is already assigned.
    Usage example: policy my-peers; Click my-peers in the dialog.

Style: CLI Undefined and GUI Undefined
    Description: Text where the variable's value is at the user's discretion, or where the variable's value as shown in the lab guide might differ from the value the user must input according to the lab topology.
    Usage example: Type set policy policy-name; ping 10.0.x.y; Select File > Save, and type filename in the Filename field.

Additional Information

Education Services Offerings

You can obtain information on the latest Education Services offerings, course dates, and class locations from the World Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.

About This Publication

The Configuring and Monitoring QFabric Systems Instructor Guide was developed and tested using software Release 12.2X50-D20.4. Previous and later versions of software might behave differently, so you should always consult the documentation and release notes for the version of code you are running before reporting errors. This document is written and maintained by the Juniper Networks Education Services development team. Please send questions and suggestions for improvement to [email protected].

Technical Publications

You can print technical manuals and release notes directly from the Internet in a variety of formats:

•	Go to http://www.juniper.net/techpubs/.
•	Locate the specific software or hardware release and title you need, and choose the format in which you want to view or print the document.

Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.

Juniper Networks Support

For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC (within the United States) or 408-745-2121 (outside the United States).


Chapter 1: Course Introduction

This Chapter Discusses:

•	Objectives and course content information;
•	Additional Juniper Networks, Inc. courses; and
•	The Juniper Networks Certification Program.


Introductions

The slide asks several questions for you to answer during class introductions.


Course Contents

The slide lists the topics we discuss in this course.


Prerequisites

The slide lists the prerequisites for this course.


General Course Administration

The slide documents general aspects of classroom administration.


Training and Study Materials

The slide describes Education Services materials that are available for reference both in the classroom and online.


Additional Resources

The slide provides links to additional resources available to assist you in the installation, configuration, and operation of Juniper Networks products.


Satisfaction Feedback

Juniper Networks uses an electronic survey system to collect and analyze your comments and feedback. Depending on the class you are taking, please complete the survey at the end of the class, or be sure to look for an e-mail about two weeks after class completion that directs you to complete an online survey form. (Be sure to provide us with your current e-mail address.) Submitting your feedback entitles you to a certificate of class completion. We thank you in advance for taking the time to help us improve our educational offerings.


Juniper Networks Education Services Curriculum

Juniper Networks Education Services can help ensure that you have the knowledge and skills to deploy and maintain cost-effective, high-performance networks for both enterprise and service provider environments. We have expert training staff with deep technical and industry knowledge, providing you with instructor-led hands-on courses in the classroom and online, as well as convenient, self-paced eLearning courses.

Courses

You can access the latest Education Services offerings covering a wide range of platforms at http://www.juniper.net/training/technical_education/.


Juniper Networks Certification Program

A Juniper Networks certification is the benchmark of skills and competence on Juniper Networks technologies.


Juniper Networks Certification Program Overview

The Juniper Networks Certification Program (JNCP) consists of platform-specific, multitiered tracks that enable participants to demonstrate competence with Juniper Networks technology through a combination of written proficiency exams and hands-on configuration and troubleshooting exams. Successful candidates demonstrate a thorough understanding of Internet and security technologies and Juniper Networks platform configuration and troubleshooting skills. The JNCP offers the following features:

•	Multiple tracks;
•	Multiple certification levels;
•	Written proficiency exams; and
•	Hands-on configuration and troubleshooting exams.

Each JNCP track has one to four certification levels—Associate-level, Specialist-level, Professional-level, and Expert-level. The Associate-level, Specialist-level, and Professional-level exams are computer-based exams composed of multiple choice questions administered at Prometric testing centers worldwide. Expert-level exams are composed of hands-on lab exercises administered at select Juniper Networks testing centers. Please visit the JNCP website at http://www.juniper.net/certification for detailed exam information, exam pricing, and exam registration.


Preparing and Studying

The slide lists some options for those interested in preparing for Juniper Networks certification.


Find Us Online

The slide lists some online resources to learn and share information about Juniper Networks.


Any Questions?

If you have any questions or concerns about the class you are attending, we suggest that you voice them now so that your instructor can best address your needs during class. This chapter contains no review questions.


Chapter 2: System Overview

This Chapter Discusses:

•	Legacy data center environments and the QFabric system;
•	Hardware components of the QFabric system; and
•	Control and data plane roles and responsibilities in the QFabric system.


QFabric System Introduction

The slide lists the topics we cover in this chapter. We discuss the highlighted topic first.


Challenges in Traditional Data Center Environments

Data centers built more than a few years ago face one or more of the following challenges:

•	The legacy multitier switching architecture cannot provide today's applications and users with predictable latency and uniform bandwidth. This problem is made worse when virtualization is introduced, because the performance of virtual machines then depends on the physical location of the servers on which those virtual machines reside.
•	The power consumed by networking gear represents a significant proportion of the overall power consumed in the data center. This challenge is particularly important today, when escalating energy costs are putting additional pressure on budgets.
•	The increasing performance and density of modern CPUs have led to an increase in network traffic. The network is often not equipped to deal with the large bandwidth demands and the increased number of media access control (MAC) addresses and IP addresses on each network port.
•	Separate networks for Ethernet data and storage traffic must be maintained, adding to the training and management budget. Siloed Layer 2 domains increase the overall costs of the data center environment. In addition, outages related to the legacy behavior of the Spanning Tree Protocol (STP), which is used to support these legacy environments, often result in lost revenue and unhappy customers.

Given all of these challenges, data center operators are seeking solutions.


Addressing the Challenges

The Juniper Networks QFabric system offers a solution to many of the aforementioned challenges found in legacy data center environments. The QFabric system collapses the various tiers found in legacy data center environments into a single tier. In the QFabric system, all Access Layer devices connect to all other Access Layer devices across a very large scale fabric backplane. This architecture enables the consolidation of data center endpoints and provides better scaling and network virtualization capabilities than traditional data centers.

The QFabric system functions as a single, nonblocking, low-latency switch that can support up to thousands of 10-Gigabit Ethernet ports or 2-Gbps, 4-Gbps, or 8-Gbps Fibre Channel ports to interconnect servers, storage, and the Internet across a high-speed, high-performance fabric. The system is managed as a single entity. The control and management element of the system automatically senses when components are added or removed from the system and dynamically adjusts the amount of processing resources required to support the system. This intelligence helps the system use the minimum amount of power to run the system efficiently. The architecture of the system is flat, nonblocking, and lossless, which allows the network fabric to offer the scale and flexibility required by small, medium, and large-sized data centers.


Components and Architecture

The slide highlights the topic we discuss next.


Components of a Traditional Modular Switch

The slide lists the components found in traditional modular switches. These components include:

•	Linecards with I/O modules: The linecards act as the entry and exit point into and from the switch.
•	Backplane: The backplane interconnects the attached linecards and provides high-speed transport between them.
•	Routing Engine (RE): The RE provides control and management services for the switch and delivers the primary user interface that allows you to manage the device.
•	Internal link: The internal link provides the required connectivity between the control plane and the data plane.

Together, these components allow the switch to perform the functions for which it is designed. We make a basic comparison between these components and the components found within a QFabric system on the next slide.


System Components

The QFabric system comprises four distinct components. These components are illustrated on the slide and briefly described as follows:

•	Node devices: The linecard component of a QFabric system, Node devices act as the entry and exit point into and from the fabric.
•	Interconnect devices: The fabric component of a QFabric system, Interconnect devices interconnect and provide high-speed transport for the attached Node devices.
•	Director devices: The primary Routing Engine component of a QFabric system, Director devices provide control and management services for the system and deliver the primary user interface that allows you to manage all components as a single device.
•	EX Series switches: The control plane link of a QFabric system, EX Series switches provide the required connectivity between all other system components and facilitate the required control and management communications within the system.

We provide more details for these components throughout this chapter.


Node Devices

Node devices connect endpoints (such as servers or storage devices) or external networks to the QFabric system. Node devices have redundant connections to the system's fabric through Interconnect devices. Node devices are often implemented in a manner similar to how top-of-rack switches are implemented in legacy multitier data center environments. By default, Node devices connect to servers or storage devices. However, you can use Node devices to connect to external networks by adding them to the network Node group, as sketched below. We discuss the network Node group in subsequent chapters.

The QFX3500 and QFX3600 switches can be used as Node devices within a QFabric system. We provide system details for these devices on subsequent slides. The QFX3500 and QFX3600 switches function as standalone switches or Node devices, depending on how the device is ordered. However, through explicit configuration, you can change the operation mode from standalone to fabric. We provide the conversion process used to change the operation mode from standalone to fabric on a subsequent slide in this chapter.
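As a forward-looking illustration of the network Node group mentioned above (the topic is covered properly in later chapters), the stanza below sketches how a Node device might be assigned to that group. The syntax is reproduced from memory of the 12.x QFabric documentation and the device alias node0 is a hypothetical placeholder; verify both against the release documentation before use:

[edit fabric resources]
node-group NW-NG-0 {
    /* network-domain marks this group as the network Node group */
    network-domain;
    /* node0 is a hypothetical Node device alias */
    node-device node0;
}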


QFX3500 Node Devices

You might want to point out the redundant components for this device. Note that the redundancy included at the device level is a key to the overall redundancy at the system level.

The slide provides a detailed illustration of the QFX3500 Node device with some key information. As a Node device, the QFX3500's four 40 GbE interfaces are dedicated uplink interfaces. However, you can use as few as two of the uplink ports, if desired, resulting in a 6:1 oversubscription ratio when fully provisioned. Note that ports 0-5 and 42-47 have optional support for Fibre Channel and are incompatible with 1 GbE; when used as non-Fibre Channel ports, they support only 10 GbE connections.
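The 6:1 figure follows from simple port arithmetic, assuming all 48 SFP+ access ports are provisioned at 10 Gbps:

    48 access ports x 10 Gbps = 480 Gbps of access capacity
     2 uplink ports x 40 Gbps =  80 Gbps of fabric capacity
                    480 / 80  =   6:1 oversubscription

With all four 40 GbE uplinks in use, the same calculation gives 480 / 160, or a 3:1 ratio.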


QFX3600 Node Devices

You might want to point out the redundant components for this device. Note that the redundancy included at the device level is a key to the overall redundancy at the system level.

The slide provides a detailed illustration of the QFX3600 Node device with some key information. By default, the first four Quad Small Form-factor Pluggable Plus (QSFP+) 40 Gb ports function as fabric uplink ports and the remaining 12 QSFP+ ports function as access ports. The default port assignments can be modified through configuration to designate as few as two and as many as eight uplink ports. Using two uplink ports results in a 7:1 oversubscription ratio when fully provisioned. Note that while the physical structure and components of the QFX3600 Node device and the QFX3600-I Interconnect device are the same, their roles are quite different. These devices have distinct part numbers and come preprovisioned for their designated roles. Currently, no supported process exists to convert a Node device to an Interconnect device or vice versa.


Converting Switches to QFabric Nodes

As previously mentioned, the QFX3500 and QFX3600 devices can serve either as standalone switches or as Node devices within a QFabric system. The slide shows the process used to verify the current mode of the device and to change that mode, if needed. Note that any time the mode is changed, a reboot is required! Note that two different SKUs are available for the QFX3600 and QFX3500 devices, representing either a standalone switch or a Node device. If you order the switch as a Node device, you need not perform the procedure shown on the slide.
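A minimal sketch of that verify-and-convert sequence, assuming a QFX3500 running the 12.2X50 code used in this course; the prompt is generic and command output is intentionally omitted, so confirm the exact syntax in the release documentation:

root@qfx3500> show chassis device-mode
(displays the current device mode and the mode that will take effect after the next reboot)

root@qfx3500> request chassis device-mode node-device
(stages the change from standalone mode to fabric mode)

root@qfx3500> request system reboot
(the new mode takes effect only after the reboot completes)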


Interconnect Devices

Interconnect devices serve as the fabric between all Node devices within a QFabric system. Two or more Interconnect devices are used in QFabric systems to provide redundant connections for all Node devices. Each Node device has at least one fabric connection to each Interconnect device in the system. Data traffic sent through the system and between remote Node devices must traverse the Interconnect devices, thus making this component a critical part of the data plane network. We discuss the data plane connectivity details on a subsequent slide in this chapter.

The two Interconnect devices available are the QFX3008-I and the QFX3600-I Interconnect devices. The model deployed will depend on the size and goals of the implementation. We provide system details for these devices and some deployment examples on subsequent slides in this chapter.


QFX3008-I Interconnect Devices

You might want to point out the redundant components for this device. Note that the redundancy included at the device level is a key to the overall redundancy at the system level.

The slide provides a detailed illustration of the QFX3008-I Interconnect device with some key information.


QFX3600-I Interconnect Devices

You might want to point out the redundant components for this device. Note that the redundancy included at the device level is a key to the overall redundancy at the system level.

The slide provides a detailed illustration of the QFX3600-I Interconnect device with some key information. Note that while the physical structure and components of the QFX3600 Node device and the QFX3600-I Interconnect device are the same, their roles are quite different. These devices have distinct part numbers and come preprovisioned for their designated roles. Currently, no supported process exists to convert a Node device to an Interconnect device or vice versa.


Director Devices

Together, two Director devices form a Director group. The Director group is the management platform that establishes, monitors, and maintains all components in the QFabric system. The Director devices run the Junos operating system (Junos OS) on top of a CentOS foundation. These devices are internally assigned the names DG0 and DG1. The assigned name is determined by the order in which the device is deployed: DG0 is assigned to the first Director device brought up, and DG1 is assigned to the second Director device brought up. The Director group handles tasks such as network topology discovery, Node and Interconnect device configuration and startup, and system provisioning services.

Note that the Director devices currently available are also referred to as enhanced Director devices. In the future, non-enhanced Director devices will be available to assist with the processing load and responsibilities assigned to the Director group. Non-enhanced Director devices will not include hard drives and are simply designed to provide auxiliary support to the Director group. Director groups should always have at least two enhanced Director devices to ensure redundancy. Also, note that the interface modules are not hot-swappable. To replace an interface module, the Director device must be powered down.


QFX3100 Director Devices

You may want to point out the redundant components for this device. Note that the redundancy included at the device level is a key to the overall redundancy at the system level.

This slide provides a detailed illustration of the QFX3100 Director device with some key information.


EX4200 Switches—Control Plane Network Infrastructure

The EX4200 Ethernet switches support the control plane network, which is a Gigabit Ethernet management network used to connect all components within a QFabric system. This control plane network facilitates the required communications between all system devices. By keeping the control plane network separate from the data plane, the QFabric system can scale to support thousands of servers and storage devices. We discuss the control plane network and the data plane network in more detail on subsequent slides in this chapter.

The model of switch, the number of switches, and the configuration associated with these switches depend on the actual deployment of the QFabric system. In small deployments, two standalone EX4200 switches are required, whereas in medium to large deployments, eight EX4200 switches configured as two Virtual Chassis with four members each are required. Regardless of the deployment scenario, the 1 Gb ports are designated for the various devices in the system, and the uplink ports are used to interconnect the standalone switches or the Virtual Chassis. We discuss the port assignments and the required configuration for the Virtual Chassis and standalone EX4200 Series switches in the next section of this chapter.


Designed for High Availability

The QFabric system is designed for high availability. The individual components and the overall hardware and software architectures of the system include redundancy to ensure a high level of operational uptime. In addition to the redundant hardware components at the device level, the architectural design of the control and data planes also includes many important qualities that support a high level of system uptime.

One key consideration and implementation reality is the separation of the control plane and data plane. This design, of course, is an important design goal for all devices that run the Junos OS. We cover the implementation details of the control and data planes later in this chapter. Likewise, the system's software architecture maintains high availability by using resilient fabric provisioning and fabric management protocols to establish and maintain the QFabric system. We cover the software architecture and describe some of these key considerations in the next chapter.


QFX3000-G Deployment Example

Large QFabric system deployments include four QFX3008-I Interconnect devices and up to 128 Node devices, which offers up to 6,144 10 Gb Ethernet ports. Note that each Node device has a 40 Gb uplink connection to each Interconnect device, thus providing redundant paths through the fabric. The control plane Ethernet network for large deployments includes two Virtual Chassis, consisting of four EX4200 Series switches each. Although not shown in this illustration, each component in the system has multiple connections to the control plane network, thus ensuring fault tolerance.

This and the next slide are intended to provide some deployment examples. The examples shown on these two slides do not represent the only deployment scenarios. In some cases, there might be only two Interconnect devices with one or two uplink connections from each Node device. Note that these slides also provide you an opportunity to highlight some of the high availability design aspects of the QFabric system.
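The 6,144-port figure is straightforward multiplication: 128 Node devices x 48 10 GbE access ports per Node device = 6,144 ports.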


QFX3000-M Deployment Example

Small QFabric system deployments include four QFX3600-I Interconnect devices and up to 16 Node devices, which offers up to 768 10 Gb Ethernet ports. Note that each Node device has a 40 Gb uplink connection to each Interconnect device, thus providing redundant paths through the fabric. The control plane Ethernet network for small deployments includes two EX4200 Series switches. Although not shown in this illustration, each component in the system has multiple connections to the control plane network, thus ensuring fault tolerance.
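As with the large deployment, the port count is simple multiplication: 16 Node devices x 48 10 GbE access ports = 768 ports.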


Control Plane and Data Plane

The slide highlights the topic we discuss next.


Summary of Control Plane Functions

The slide provides a summary of the control plane functions. Note that these basic control plane functions are the same in a QFabric system as they are in a traditional modular switch. The manner in which these functions are performed is covered in the next chapter.


Control Plane Network

The control plane functions highlighted on the previous slide are performed over the control plane Ethernet network, which is supported by a collection of EX4200 Series switches. Note that all system components have redundant connections to the control plane network. We cover the device-specific connections on subsequent slides.


Control Plane Connections for Large Deployments: Part 1

This slide and the next few slides provide the control plane network connectivity details for large QFabric system deployments that use the QFX3008-I Interconnect device. This slide highlights the control plane network connections for the QFX3008-I Interconnect devices. The Interconnect devices are assigned ports 38 and 39 on all members of the Virtual Chassis. Port 0 on each of the control boards installed in the Interconnect devices connects to the first Virtual Chassis (VC 0). Similarly, port 1 on each of the control boards installed in the Interconnect devices connects to the second Virtual Chassis (VC 1). You can download a predefined configuration for the Virtual Chassis that supports the control plane connections. We discuss this configuration in more detail on subsequent slides. The following output shows the configuration used to support the connections from the Interconnect devices:

{master:0}[edit]
root@VC0# show interfaces | find interconnect
interface-range Interconnect_Device_Interfaces {
    member "ge-[0-3]/0/[38-39]";
    apply-groups qfabric-int;
    description "QFabric Interconnect Device";
}


Control Plane Connections for Large Deployments: Part 2

The slide highlights the control plane network connections for the QFX3100 Director devices in large deployments where the QFX3008-I Interconnect devices are used.

Note that Port 42 through Port 47 on the Virtual Chassis are reserved for future use when non-enhanced Director devices will be supported.

The Director devices are assigned ports 40 and 41 on all members of the Virtual Chassis. Port 0 through Port 2 on the first interface module (Module 0) in the Director devices connect to the first Virtual Chassis (VC 0). In a similar fashion, Port 0 through Port 2 on the second interface module (Module 1) in the Director devices connect to the second Virtual Chassis (VC 1). The ports from DG0 connect to Port 40, and the ports from DG1 connect to Port 41, on all members of the Virtual Chassis.

The following output shows the configuration used to support the connections from the Director devices:

{master:0}[edit]
root@VC0# show interfaces | find director
interface-range Director_Device_DG0_LAG_Interfaces {
    member "ge-[0-3]/0/40";
    description "QFabric Director Device - DG0";
    ether-options {
        802.3ad ae0;
    }
}
interface-range Director_Device_DG1_LAG_Interfaces {
    member "ge-[0-3]/0/41";
    description "QFabric Director Device - DG1";
    ether-options {
        802.3ad ae1;
    }
}

In addition to the connections from the Director devices to the Virtual Chassis, you must ensure that the Director devices are connected to one another directly. Port 3 on the interface modules on both Director devices is designated for interface bonding purposes. Over these bonded interfaces, the Director devices form the Director group and perform the required synchronization tasks. We learn more about the Director group and its software and services in the next chapter.


Control Plane Connections for Large Deployments: Part 3

The slide highlights the control plane network connections for the Node devices in large deployments where the QFX3008-I Interconnect devices are used. The Node devices are assigned Port 0 through Port 31 on all members of the Virtual Chassis. All Node devices have two control ports, C0 and C1. Port C0 on each Node device connects to the first Virtual Chassis (VC 0). In a similar fashion, Port C1 on each Node device connects to the second Virtual Chassis (VC 1). The actual port designations used on each Virtual Chassis for the Node device connections are user defined. The only requirement for these Node device connections to the control plane network is that they connect to one of the ports in the defined range, which is Port 0 through Port 31 on all members of the Virtual Chassis. The following output shows the configuration used to support the connections from the Node devices:

{master:0}[edit]
root@VC0# show interfaces | find node
interface-range Node_Device_Interfaces {
    member "ge-[0-3]/0/[0-31]";
    apply-groups qfabric-int;
    description "QFabric Node Device";
}


Control Plane Connections for Small Deployments: Part 1

This slide and the next few slides provide the control plane network connectivity details for small QFabric system deployments that use the QFX3600-I Interconnect device. This slide highlights the control plane network connections for the QFX3600-I Interconnect devices. The Interconnect devices are assigned ports ge-0/0/16 through ge-0/0/19 on the EX Series switches. Port C0 on the Interconnect devices connects to the first EX4200 Series switch (EX 0). Similarly, Port C1 on the Interconnect devices connects to the second EX4200 Series switch (EX 1). In lieu of the C0 and C1 ports, C0S and C1S ports also exist. The C0S and C1S ports are SFP-based and support optical connections. To use optics for the control plane connection, you must use an appropriate EX Series device that also supports optical control plane connections. Note that once small form-factor pluggable transceivers (SFPs) are inserted into the C0S and C1S ports, the C0 and C1 ports are disabled.

You can download a predefined configuration for the EX4200 Series switch to support the control plane connections. We discuss this configuration in more detail on subsequent slides.

The EX4200 Series switch configuration used for the small deployments will not likely be ready for download until 12.2 is released. This note serves as an FYI and will be removed in the next revision.

Control Plane Connections for Small Deployments: Part 2

The slide highlights the control plane network connections for the QFX3100 Director devices in small deployments where the QFX3600-I Interconnect devices are used. The Director devices are assigned ports ge-0/0/20 through ge-0/0/23 on the EX4200 Series switches. Port 0 and Port 1 on the first interface module (Module 0) in the Director devices connect to the first EX4200 Series switch (EX 0). In a similar fashion, Port 0 and Port 1 on the second interface module (Module 1) in the Director devices connect to the second EX4200 Series switch (EX 1). In addition to the connections from the Director devices to the EX4200 Series switches, you must ensure that the Director devices are connected to one another directly. Port 3 on the interface modules on both Director devices is designated for interface bonding purposes. Over these bonded interfaces, the Director devices form the Director group and perform the required synchronization tasks. We learn more about the Director group and its software and services in the next chapter.


Control Plane Connections for Small Deployments: Part 3

The slide highlights the control plane network connections for the Node devices in small deployments where the QFX3600-I Interconnect devices are used. The Node devices are assigned ports ge-0/0/0 through ge-0/0/15 on the EX4200 Series switches. All Node devices have two control ports, C0 and C1. Port C0 on each Node device connects to the first EX4200 Series switch (EX 0). In a similar fashion, Port C1 on each Node device connects to the second EX4200 Series switch (EX 1). The actual port designations used on each EX4200 Series switch for the Node device connections are user defined. The only requirement for these Node device connections to the control plane network is that they connect to one of the ports in the defined range, which is ports ge-0/0/0 through ge-0/0/15 on the EX4200 Series switches.


Download, Load, and Commit

The configuration required on the control plane network switches should be downloaded from the Juniper Networks website. The configuration you download depends on your deployment scenario: deployments that use the QFX3008-I Interconnect devices use one configuration for the control plane network switches, and deployments that use the QFX3600-I Interconnect devices use a different configuration. You gain access to the required configuration on the same webpage through which you download the Junos OS software. As shown on the slide, the configuration is located in the same section where the Junos OS images for the QFX products are found. Note that this access requires a login account with an active support contract. Once you access the required configuration for your deployment scenario, you can copy the required configuration lines and paste them into the candidate configuration. Note that you should also add site-specific configuration details manually, including the host name, management address, default route, and user account information including root authentication.
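A minimal sketch of that load-and-customize workflow on one of the control plane switches; the host name and the addresses shown are hypothetical placeholders, not values from the downloaded configuration:

{master:0}[edit]
root@VC0# load merge terminal
[Type ^D at a new line to end input]
(paste the downloaded control plane configuration here, then press Ctrl+D)

root@VC0# set system host-name VC0
root@VC0# set system root-authentication plain-text-password
root@VC0# set interfaces vme unit 0 family inet address 10.10.10.5/24
root@VC0# set routing-options static route 0.0.0.0/0 next-hop 10.10.10.1
root@VC0# commit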


Data Plane Functions

Like traditional modular switches, the QFabric system's data plane has a basic set of functions. The basic data plane functions are highlighted on the slide and include providing connectivity for network devices, interconnecting line cards (or, in the case of the QFabric system, Node devices), and forwarding traffic through the device or system. We cover the device-specific connections on subsequent slides and the process of forwarding traffic through the system in subsequent chapters.


Data Plane Connections: Part 1

This slide and the next slide provide the data plane connectivity details for the QFabric system. The slide shows sample connections between the Node and Interconnect devices. The connections between the Node and Interconnect devices use the 40 Gb QSFP+ uplink connections. The actual wiring plan used between these devices can vary and is determined by the fabric administrator.

The 40 Gb QSFP+ ports on the QFX3500 Node device are fixed and can only be used as uplink ports to connect that Node device to the system's Interconnect devices. On the QFX3600 Node device, the first four 40 Gb QSFP+ ports (0 through 3) are enabled by default as uplink ports and can be used to connect that Node device to the system's Interconnect devices. You can alter the default port assignments on QFX3600 Node devices through configuration and can increase or decrease the total number of uplink ports, allowing ports 0-7 to be used as uplink ports. This gives the QFX3600 Node device the unique ability to expand its number of fabric uplink ports to a total of eight (four to each Interconnect device) for a total uplink capacity of 320 Gbps. This approach does not oversubscribe the uplink throughput capacity, which is useful in situations where oversubscription cannot be tolerated. Port 0 and Port 1 are fixed as uplink ports and cannot be changed to revenue ports.
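The 320 Gbps figure is 8 uplink ports x 40 Gbps. Because the remaining 8 QSFP+ ports supply the same 8 x 40 Gbps = 320 Gbps of access capacity, this split is 1:1 and introduces no oversubscription.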


Data Plane Connections: Part 2

The slide provides data plane connectivity details between the Node devices and the system endpoints. System endpoints might include servers or other networking devices such as routers, switches, firewalls, or storage devices. The supported interface types and optics depend on the Node device. The QFX3500 Node device supports 1 Gb and 10 Gb Ethernet connections with various fiber or copper-based options. The QFX3500 Node device also supports 2 Gb, 4 Gb, or 8 Gb Fibre Channel. The QFX3600 supports only 10 Gb Ethernet connections using the QSFP+ direct attached copper breakout cables. The interface types and optics used depend on the requirements for a given deployment scenario. For more details on the supported interface types and options, refer to the datasheet for the QFabric system at http://www.juniper.net/us/en/local/pdf/datasheets/1000393-en.pdf. Note that we cover interface configuration and monitoring for the QFabric system in subsequent chapters.


This Chapter Discussed:

•	Legacy data center environments and the QFabric system;
•	Hardware components of the QFabric system; and
•	Control and data plane roles and responsibilities in the QFabric system.


Review Questions

1. Some of the key benefits the QFabric system offers over a traditional tiered Layer 2 network are improved scalability, efficient use of resources, and improved performance, which is a result of the decrease in end-to-end latency.

2. The four main components of the QFabric system are the Node devices, Interconnect devices, Director devices, and EX4200 Series switches. The Node devices function as intelligent line cards and are the entry and exit point for traffic entering or leaving the system. The Interconnect devices serve as the system's fabric and are used to interconnect the various Node devices. All traffic passing between two distinct Node devices passes through at least one Interconnect device. The Director devices are the control entity for the system and are often compared to the Routing Engine in traditional modular switches. The EX4200 Series switches make up the infrastructure used to support the control plane network.

3. The control plane functions consist of discovering and provisioning the system, managing routing and switching protocol operations, exchanging reachability information, and discovering and managing paths through the fabric. The data plane functions consist of providing connectivity for attached servers and network devices, interconnecting Node devices through the fabric construct, and forwarding traffic through the system.


Chapter 3: Software Architecture

This Chapter Discusses:

•	Goals of the software architecture;
•	Purpose and functions of the Director software;
•	Configuration of key software abstractions; and
•	Operations of internal protocols used in a QFabric system.


Architecture Overview

The slide lists the topics we cover in this chapter. We discuss the highlighted topic first.


Architectural Goals

The slide lists some of the key architectural goals of the QFabric system. Some of the goals listed on the slide relate to and help solve many of the challenges found in traditional multitier data center environments that we discussed in the previous chapter. To achieve these goals and allow for proper operation of the system, a number of software elements are required and must work together. We describe these software elements throughout this chapter.


Achieving the Architectural Goals

To achieve the architectural goals described on the previous slide, a number of software elements are required and used within the QFabric system. The key software elements used to achieve the desired goals include a collection of virtual machines (VMs); new software abstractions, processes, and services; and some internal protocols. We look at these software elements throughout this chapter.

To allow the system to scale to the desired level, the design architects have subscribed to the "centralize what you can, distribute what you must" philosophy. The system architecture does just that; it centralizes as many functions as possible and distributes functions to the various system components when and where it makes sense. We describe the various VMs, software components, and how the system distributes control plane functions in more detail throughout this chapter.


Software Stack

The Director group runs CentOS as its base operating system. On top of CentOS, a number of services provide the required functionality for the system. The services are grouped into one of the five categories listed on the slide. These categories follow, along with a basic description of their functions:

•	Clustering: This category includes services used to form and support the Director group. Some services in this category transfer files within the system and ensure data is replicated across the Director group's disks. Other services in this category perform database management and session load balancing functions for the system.
•	Networking: This category includes services that provision Director group interfaces that either interconnect Director devices or connect the Director group to the control plane network. These services perform bonding, bridging, and monitoring functions.
•	Virtualization: This category includes services that create, tear down, monitor, manage, and schedule resources for the various virtual machines (VMs) on the Director group.
•	Fabric administration: This category includes services used to provide the single administrative view for the system. We cover these services in detail in the next section.
•	Monitoring: This category includes services that monitor the health of other services. If needed, these services help other system services rapidly recover from failures.


Software Abstractions
The slide highlights the topic we discuss next.


Fabric Administrator
One of the design goals mentioned earlier is to preserve the user experience of interacting with a single device. This requires all management and configuration functionality to be centralized. It is important to note that “centralized” does not imply a single point of failure; in a QFabric system, all centralized functions are either deployed in high availability (HA) pairs or have dedicated monitoring utilities that watch for and, if needed, help processes rapidly recover from failures.
The fabric administration component, referred to as fabric admin going forward, is the primary means of accomplishing the single administrative view design goal. As previously mentioned, the fabric admin includes services that make this possible. The user interface service is known as the stratus fabric controller (SFC) internally.

The user interface service provides the single management interface abstraction for the system. This service supports the command line interface (CLI), system logging (syslog), and other management interfaces such as SNMP. The user interface service is responsible for presenting information to active CLI sessions. Any information that must be sent to or pulled from the various system components is processed by the user interface service. The management process (MGD) runs the Junos OS for the Director group and is responsible for taking commands from user CLI sessions and presenting those commands to the user interface service for additional processing. MGD only interacts with the user interface service when CLI commands are issued and never interacts with any other component or device in the system. These services are closely monitored by a monitoring utility. If either of these services fails, the monitoring utility will restart the failed service.


System Inventory
Before moving on to other key software abstractions, it is important to know that once the system is properly connected and powered on, all components, including Director, Node, and Interconnect devices and the various REs, will register with and be provisioned by the system. Once these components are registered with and provisioned by the system, you should see them as part of the system’s inventory. Note that we discuss system discovery and provisioning later in this chapter.
This slide provides a sample system inventory output that lists a registered Node device. The registered Node device belongs to its own Node group, which is the default behavior. We expand on this default behavior and describe Node groups in more detail over the next several slides.


Node Groups
Node group is also referred to as INE, or independent network element, in some cases. The INE reference might show up in some outputs in the CentOS shell.

The slide provides a brief explanation of the Node group software abstraction along with some other key details that relate to Node groups including the types of Node groups and the default Node group association for Node devices. We expand on these points throughout this section.


Server Node Groups
Server Node group is sometimes referred to as top-of-rack (TOR).

A server Node group is a single Node device functioning as a logical edge entity within the QFabric system. Server Node groups connect server and storage endpoints to the QFabric system. As previously mentioned, all Node devices boot up as a server Node group by default. As mentioned on the slide, server Node groups run only host-facing protocols such as Link Aggregation Control Protocol (LACP), Link Layer Discovery Protocol (LLDP), Address Resolution Protocol (ARP), and Data Center Bridging Capability Exchange (DCBX).
Members of a link aggregation group (LAG) from a server are connected to the server Node group (SNG) to provide a redundant connection between the server and the QFabric system. In use cases where redundancy is built into the software application running on the server (for example, many software-as-a-service (SaaS) applications), there is no need for cross-Node device redundancy. In those cases, a server Node group configuration is sufficient.
The Node device associated with a given server Node group is responsible for local Routing Engine (RE) and Packet Forwarding Engine (PFE) functions. The Node device uses its local CPU to perform these functions.


Redundant Server Node Groups
Redundant server Node group is also referred to as PTOR, or pair of TORs, in some cases.

A redundant server Node group consists of a pair of Node devices that represent a single logical edge entity in a QFabric system. Similar to server Node groups, redundant server Node groups connect server and storage endpoints to the QFabric system. For Node devices to participate in a redundant server Node group, explicit configuration is required. Like server Node groups, redundant server Node groups run only host-facing protocols such as Link Aggregation Control Protocol (LACP), Link Layer Discovery Protocol (LLDP), Address Resolution Protocol (ARP), and Data Center Bridging Capability Exchange (DCBX).
Redundant server Node groups have mechanisms such as bridge protocol data unit (BPDU) guard and storm control to detect and disable loops across ports. While firewalls, routers, switches, and other network devices can be connected to redundant server Node groups, only host-facing protocols and Layer 2 traffic are processed. To process network-facing protocols, such as Spanning Tree Protocols (STPs) and Layer 3 protocol traffic, the network device must connect to a Node device in the network Node group. We discuss the network Node group next.
Members of a LAG from a server to the QFabric system can be distributed across Node devices in the redundant server Node group to provide a redundant connection. In cases where redundancy is not built into the software application on the server, a redundant server Node group is desirable.
One of the Node devices in the redundant server Node group is selected as active and the other is the backup. The active Node device is responsible for local Routing Engine (RE) functions and both Node devices perform the Packet Forwarding Engine functions. If the active Node device fails, the backup Node device assumes the active role.


Network Node Group
The total number of Node devices in the network Node group will increase from 8 to 32 (roadmap feature). Once user-defined physical partitions are introduced (also a roadmap feature), there will be a single network Node group per partition.


A set of Node devices running server-facing protocols as well as network protocols such as STP, OSPF, PIM, and BGP to external devices like routers, switches, firewalls, and load balancers is known as a network Node group. This Node group exists by default and is named NW-NG-0; this name cannot be changed. Currently, you can associate up to eight Node devices with this group. The network Node group can also fill the redundant server Node group's function of running server-facing protocols. Only one network Node group can run in a QFabric system at a time.
In a redundant server Node group, the local CPUs on the participating Node devices perform both the RE and Packet Forwarding Engine functionality. In a network Node group, the local CPUs perform only the Packet Forwarding Engine function. Just as in a traditional modular switch, the RE function of the network Node group is located externally, in the Director group.


Understanding the Defaults
As previously mentioned, when a Node device registers and is provisioned by the system, it belongs to a server Node group by default. The default server Node groups to which individual Node devices are associated assume a Node group name based on the Node device’s serial number. The sample output on the slide shows a system inventory which is based on this default association behavior. Note that the network Node group, NW-NG-0, is present but does not have any associated Node devices. You can change the default device and group names, create new groups, or associate Node devices with the network Node group at the [edit fabric resources] hierarchy level. We provide some configuration examples on the next slide.
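The slide’s inventory output is not reproduced in the body text, so the following rough sketch only suggests its general shape; the Node device serial number (ABC1234) is hypothetical, the default Node group name matches it because of the default association, and the exact columns and states vary by software release:

root@qfabric> show fabric administration inventory
Item                     Identifier  Connection  Configuration
Node group
  NW-NG-0                            Connected   Configured
  ABC1234                            Connected   Configured
...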


Using Aliases
By default, the identity of a Node device is the serial number of that Node device. This identity, or name, is what you use when performing configuration and monitoring tasks related to a given Node device. This approach, as you might have already guessed, can be administratively taxing. To make things a little easier, you can define customized aliases for each Node device. Using this approach can simplify things administratively, making monitoring and troubleshooting efforts more manageable.
This slide not only shows configuration examples for Node device aliases, but also for Node group definitions and the association of Node devices to user-defined and system-defined Node groups. Note that the definition of the network Node group uses the system-defined name NW-NG-0 as well as the network-domain statement. The network-domain statement is mandatory when configuring the network Node group.
Note that alias names must not match the name of a Node group on the system. If you use the same name for both an alias and a Node group, the Junos OS will fail to commit, as shown below:

[edit]
root@qfabric# commit
[edit resources node-group sng0]
  A node group and a node device may not have the same name: 'sng0'
error: configuration check-out failed

Continued on the next page.


Using Aliases (contd.)
Changing the default group associations for a given Node device causes the affected Node device to reboot. Making these types of changes is roughly equivalent to moving a linecard or interface card in a traditional modular chassis from one slot to another. In that case, the linecard or interface card assumes a new identity and must re-register with the system. The same basic concept applies with Node devices and altering their identity or role.
In addition to configuring aliases for Node devices, you can also configure them for Director and Interconnect devices as shown in the following output:

[edit fabric]
root@qfabric# set aliases ?
Possible completions:
+ apply-groups         Groups from which to inherit configuration data
+ apply-groups-except  Don't inherit configuration data from these groups
> director-device      Aliases of Director devices
> interconnect-device  Aliases of Interconnect devices
> node-device          Aliases of Node devices
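Because the slide’s configuration is not reproduced in the body text, the following minimal sketch shows what such a configuration might look like in set form; the serial numbers (ABC1234, ABC5678) and aliases (node0, node1) are hypothetical:

[edit]
root@qfabric# set fabric aliases node-device ABC1234 node0
root@qfabric# set fabric aliases node-device ABC5678 node1
root@qfabric# set fabric resources node-group sng0 node-device node0
root@qfabric# set fabric resources node-group NW-NG-0 network-domain
root@qfabric# set fabric resources node-group NW-NG-0 node-device node1

The sng0 group name matches the user-defined server Node group referenced in the commit error above, and the network-domain statement is included because it is mandatory for the network Node group.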


Verifying the Results
Once changes have been made to the identity and role of Node devices, you will see the results in a number of verification outputs. The slide illustrates this point and provides the appropriate output showing the names and group designations for the Node devices. This output is a result of the configuration illustrated on the previous slide. This output shows that the aliases for all Node devices are properly mapped to their corresponding identifiers, which are the serial numbers associated with each Node device. You can also see that each Node group has received its designated configuration. We discuss the registration and provisioning process for Node devices and groups in more detail later in this chapter.


Internal Protocols
The slide highlights the topic we discuss next.


System Functions
Similar to a traditional modular chassis used for Layer 2 and Layer 3 operations, the QFabric system has a series of common functions that must be performed for proper system operations. The QFabric system distributes these functions between some of the key virtual machines (VMs), which function as internal Routing Engines (REs) in the system. This slide shows the four common functions along with a mapping of which REs are assigned to perform each function. We expand on these functions throughout this section and provide some operational details of how each function is performed by its corresponding RE.


Fabric Manager Routing Engine
When components such as Node and Interconnect devices are added to the system, the fabric manager RE is responsible for discovering, provisioning, monitoring, and managing those devices. Because of these assigned responsibilities, the fabric manager is one of the first components initialized and brought up within the Director group, even before the other two REs shown on the slide. We provide a system provisioning example later in this section that illustrates this point.
Continued on the next page.


Fabric Manager Routing Engine (contd.)
The system maintains two fabric manager REs named FM-0 and FM-1; one is active while the other is standby. When there are two active Director devices participating in the Director group, one of the REs is associated with DG0 and the other RE is associated with DG1. FM-0 and FM-1, along with all other system components, should be present in the system’s inventory as shown in the following sample output:

root@qfabric> show fabric administration inventory infrastructure
dg0:
Routing Engine Type      Hostname                     PID    CPU-Use(%)
-------------------------------------------------------------------------
Fabric control           QFabric_default_FC-1_RE0     2700   1.4
Network Node group       QFabric_default_NW-NG-1_RE1  1451   1.9
Debug Routing Engine     QFabric_DRE                  24106  2.6
Fabric manager           FM-0                         22433  0.9
dg1:
Routing Engine Type      Hostname                     PID    CPU-Use(%)
-------------------------------------------------------------------------
Fabric control           QFabric_default_FC-0_RE0     1136   1.4
Network Node group       QFabric_default_NW-NG-0_RE0  32623  1.6
Fabric manager           FM-1                         24397  1.2

Note that the lsvm command, when issued in the CentOS shell, lists all active VMs. The VMs displayed in this output use different names than what is seen in the fabric admin CLI. For example, the FM-0 VM is named _TAG_DCF_ROOT_. A sample output follows:

[root@dg0 ~]# lsvm
NODE  ACTIVE  TAG             UUID
...
dg1   1       _TAG_DCF_ROOT_  08a8be9a-f415-11e0-a9b3-00e081c57d4e


System Discovery
One of the key responsibilities of the fabric manager RE is to discover all components within the system. This discovery process is performed over the control plane network and uses the fabric discovery protocol, which is based on the IS-IS protocol. Hello messages are exchanged between the system components, which leads to all system components eventually learning about all other system components. This system discovery process serves the same basic purpose as the hardware assist mechanism used in traditional modular chassis. The next few slides show the discovery and provisioning example for the QFabric system.


The End Result
Once the system components are discovered and provisioned, they are then added to a common local area network (LAN) and can communicate using TCP/IP. This common LAN and the ability to communicate using TCP/IP allow the system components to communicate other key details among themselves such as data path discovery details and reachability information. We cover the data path discovery process and how reachability information is distributed later in this chapter and in subsequent chapters.


Topology Discovery
As previously mentioned, the QFabric system does not have a hardware assist mechanism to establish physical connectivity between the Node devices and Interconnect devices. To account for this deficiency and to ensure that the required topological information is known and distributed to the various components, the system uses an internal protocol designed for topology discovery. The topology discovery protocol performs two primary functions. The first function is to facilitate neighbor discovery between system components over the data plane network connections. The second function is to compile and distribute the learned topological details to the appropriate components throughout the system. We discuss these functions in more detail in the next few slides.


Discovering Neighbors
One of the primary functions of the topology discovery protocol is to enable neighbor discovery between Node devices and Interconnect devices over the data plane connections. The topology discovery protocol includes the neighbor discovery portion of the IS-IS protocol for this purpose. To discover their directly attached neighbors, Node devices and Interconnect devices exchange Hello messages across their 40 Gbps data plane connections. Once the Node devices and Interconnect devices discover their attached neighbors, they communicate that information to the fabric manager RE. This fabric path information will then be organized in a topological database and shared throughout the system.


Distributing Fabric Path Information
Once the topology database is created and the information is organized, the fabric manager RE distributes the connectivity information through the data plane to the required system components. This information includes not only physical path information between the various Node devices but also how the available paths through the fabric should be used, referred to as the spray weight. The spray weight of the available paths determines how the Node devices distribute fabric-bound traffic between the available Interconnect devices.


The End Result
After neighbors have been discovered and the topology database has been created and distributed to the various system components, each component should have a complete view of the fabric topology with itself as the root of the topology tree. The number of available fabric paths depends on the deployment scenario. In the example on the slide, each Node device has a single connection to each QFX3008 Interconnect device which provides two distinct paths for traffic sent between Node devices. This point is illustrated on the slide, which shows the expected fabric paths between Node devices 0 and 7. Note that including additional connections to the Interconnect devices or adding more Interconnect devices through which the Node devices are connected would result in a more diverse physical topology and a better distribution of traffic sent between Node devices.


Network Node Group Routing Engine
The third system function, which is to perform all routing and switching operations, is associated with the network Node group RE. In addition to performing all routing and switching operations, the network Node group RE also oversees the calculation of multicast trees within the system. There are two network Node group REs; one active and one backup. When there are two active Director devices participating in the Director group, one of the REs is associated with DG0 and the other RE is associated with DG1. In situations where there is only one active Director device in the Director group, both network Node group REs are associated with the active Director device.
In situations such as system upgrades and maintenance windows, it might be advantageous to control the DG device mastership. Mastership can be manually switched using the request fabric administration director-group change-master command. We discuss routing and switching features and operations in subsequent chapters. We describe the role of the network Node group REs in those chapters.
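For reference, the mastership switch mentioned above is a single operational mode command issued from the fabric admin CLI; a sketch of its invocation follows (any confirmation prompt or output varies by release):

root@qfabric> request fabric administration director-group change-master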


Fabric Control Routing Engine
As introduced earlier in this section, the fourth system function is to distribute routing and forwarding information to the system components. The fabric control RE is responsible for this system function. The fabric control REs are typically distributed between the Director devices participating in the Director group. In situations where only one active Director device exists in the Director group, both fabric control REs are associated with that Director device.
Continued on the next page.


Fabric Control Routing Engine (contd.)
The fabric control REs are named FC-0 and FC-1. These REs, along with all other system components, should be present in the system’s inventory as shown in the following sample output:

root@qfabric> show fabric administration inventory
Item             Identifier  Connection  Configuration
...
Fabric control
  FC-0                       Connected   Configured
  FC-1                       Connected   Configured

Note that the FC-0 and FC-1 VMs are internally referred to as _DCF_default___RR-INE-0_RE0_ and _DCF_default___RR-INE-1_RE0_ as shown in the following output:

[root@dg0 ~]# lsvm
NODE  ACTIVE  TAG                           UUID
...
dg1   1       _DCF_default___RR-INE-0_RE0_  09e7c952-f41c-11e0-a2c1-00e081c57d50
dg0   1       _DCF_default___RR-INE-1_RE0_  0d947e60-f41c-11e0-9cf8-00e081c57d50

To distribute reachability information, the fabric control REs and all other system components run the fabric control protocol, which is based on BGP. The fabric control REs function as BGP route reflectors and peer with each other and all other system components. We provide more details regarding the fabric control protocol on subsequent slides.


Distributing Routing Information
The primary function of the fabric control REs and the fabric control protocol is to oversee and facilitate the distribution of routing information within a QFabric system. To accomplish this task, Node groups and the fabric control REs establish reliable TCP sessions using the fabric control protocol. The fabric control protocol is based on BGP and makes use of route reflectors, which have proven to be a very scalable solution when establishing many BGP peers within a routing domain.
The Junos OS has added new address families to multiprotocol BGP to allow it to carry MAC routes in addition to IP and VPN routes. The mechanism of route distinguishers allows the conversion of overlapping addresses into unique global addresses to be sent across a common infrastructure. Route targets allow the application of filters while exporting and importing routes into and out of the common infrastructure, which allows the selective sharing of network state. In a classic Layer 3 VPN, the user must explicitly configure parameters such as route distinguishers and route targets, but that is not the case with the QFabric system. Instead, the QFabric system uses the user-configured VLANs and virtual routers for that purpose, so the creation and use of route distinguishers and route targets remain totally transparent to the user.


Extending the VPN Model
The diagram on the slide shows a small network topology after Layer 2 network reachability state has been shared between the various Node groups through the fabric control REs. To allow the creation and sharing of this reachability state, the system uses unique route tags based on user-defined variables such as VLAN IDs. As highlighted on the slide, the fabric control REs and the network Node group REs maintain all reachability information. To reduce unnecessary overhead throughout the system, the fabric control REs only share relevant reachability information with the server Node group REs.


Component Provisioning
As system components such as Node devices and Interconnect devices are discovered and provisioned, their respective REs receive and automatically load their respective configuration files. Part of this configuration includes BGP, running as the fabric control protocol.
Continued on the next page.


Component Provisioning (contd.)
The output below illustrates the protocol families used to support the various traffic types as well as the neighbor addresses of the route reflectors; 128.0.128.6 corresponds with FC-0 and 128.0.128.8 corresponds with FC-1.

qfabric-admin@sng1> show bgp summary fabric
Groups: 1 Peers: 2 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l3vpn.0            0          0          0          0          0          0
Peer            AS  InPkt  OutPkt  OutQ  Flaps  Last Up/Dwn  State|#Active/Received/Accepted/Damped...
128.0.128.6    100  47438   47819     0      0      2w0d22h  Establ
  bgp.l3vpn.0: 0/0/0/0
  bgp.rtarget.0: 2/8/8/0
  bgp.fabricvpn.0: 7/7/7/0
  bgp.bridgevpn.0: 3/3/3/0
  default.bridge.0: 3/3/3/0
  default.fabric.0: 7/7/7/0
128.0.128.8    100  47429   47815     0      0      2w0d22h  Establ
  bgp.l3vpn.0: 0/0/0/0
  bgp.rtarget.0: 0/8/8/0
  bgp.fabricvpn.0: 0/7/7/0
  bgp.bridgevpn.0: 0/3/3/0
  default.bridge.0: 0/3/3/0
  default.fabric.0: 0/7/7/0


Supporting the Different Traffic Types
The slide illustrates that the protocol families defined on the previous slide were properly negotiated during the peer establishment process between the sng1 Node group and FC-0 (128.0.128.6). Because the configured protocol families are properly negotiated, we see the corresponding route tables along with the default route tables.


The End Result
If the Node groups have established fabric control sessions with the fabric control REs, you should see routes added to one or more of the various route tables. Note that the number and type of routes added to these tables depends on the system’s configuration. Also note that the command shown on the slide was issued directly from the FC-0 RE and the output is not visible from the fabric administrator. We explore some of these tables and how they are populated in subsequent chapters.


This Chapter Discussed:
•	Goals of the software architecture;
•	Purpose and functions of Director software;
•	Configuration of key software abstractions; and
•	Operations of internal protocols used in a QFabric system.


Review Questions
1. The fabric admin consists of a user interface service and key management processes and is how the QFabric system provides the single administrative view to the end user.
2. A number of Routing Engines exist within the QFabric system, including the fabric manager RE, network Node group REs, fabric control REs, diagnostic RE, and the local CPU REs and server Node group REs distributed throughout the system. The fabric manager RE is used for system discovery and provisioning as well as topology discovery. The network Node group REs are used for Layer 2 and Layer 3 protocol processing. The fabric control REs are used to learn about and distribute reachability information. The local CPU REs and server Node group REs are used for local processing tasks for distributed system operations.
3. System components are discovered and provisioned by the fabric manager RE through the fabric discovery protocol. The fabric manager RE interfaces with the fabric admin, the VM manager and the individual VMs, and the Node devices and Interconnect devices throughout the discovery and provisioning process. The fabric discovery protocol is based on IS-IS.
4. Reachability information is learned and distributed through the fabric control REs and the fabric control protocol, which is based on BGP.


Chapter 4: Setup and Initial Configuration


This Chapter Discusses:
•	Initial setup and configuration of the QFabric system;
•	Logging in and verifying status of the QFabric system; and
•	Configuring and monitoring interfaces on the QFabric system.


System Setup
The slide lists the topics we cover in this chapter. We discuss the highlighted topic first.


System Bring Up Checklist
Deploying a system with many components and interdependencies between those components can be a difficult task. To simplify the deployment of a QFabric system, you can group the major tasks into the categories shown on the slide. We discussed the racking and cabling of a QFabric system in prerequisite courseware and a previous chapter in this course. We discuss the other three categories and their associated tasks throughout the remainder of this section.


Bringing Up the Control Plane Network
Because all system components communicate through and depend on the control plane network, it only makes sense that this would be our first major task. As shown on the slide, there are a number of steps, or smaller tasks, required to accomplish this first major task. Regardless of the deployment scenario, there are some common tasks that must be performed. These common tasks include:
•	Power on the EX Series switches;
•	Apply site-specific configuration to the switches, which might include unique hostnames, user accounts, and other system services or parameters;
•	Upgrade or downgrade software, if needed;
•	Apply the configuration required to support the QFabric system; and
•	Verify connectivity and state of the link aggregation group (LAG) between EX Series switches.

Remember that the configuration required to support the QFabric system can be downloaded from the Juniper Networks website and can vary depending on your deployment scenario. The configuration used in large deployments, where the QFX3008-I Interconnect devices are used, is different than the configuration used in smaller deployments, where the QFX3600-I Interconnect devices are used.
Continued on the next page.


Bringing Up the Control Plane Network (contd.)
When bringing up the control plane network in large deployment scenarios, where the QFX3008-I Interconnects are used, you will need to perform some additional tasks. In these large deployment scenarios, you must use the Virtual Chassis cables to connect the EX Series switches to form the two Virtual Chassis. Note that these connections should be made before powering up the EX Series switches.
When powering up the switches, first power on the switch in each Virtual Chassis that you want to serve as the master switch. After a few minutes, power on the switch in each Virtual Chassis that you want to serve as the backup switch. Once the designated master and backup switches for each Virtual Chassis have had time to boot, power up the remaining two EX Series switches associated with each Virtual Chassis. These last two switches will serve as linecard member switches within their respective Virtual Chassis. You can verify the roles of the member switches using the show virtual-chassis status command as shown in the following output:

{master:0}
user@VC0> show virtual-chassis status

Virtual Chassis ID: cd74.2d06.3b14
Virtual Chassis Mode: Enabled
                                               Mastership           Neighbor List
Member ID  Status  Serial No     Model           priority  Role     ID  Interface
0 (FPC 0)  Prsnt   BM0210409748  ex4200-48t           255  Master*   1  vcp-0
                                                                     3  vcp-1
1 (FPC 1)  Prsnt   BM0210399371  ex4200-48t           255  Backup    2  vcp-0
                                                                     0  vcp-1
2 (FPC 2)  Prsnt   BM0210409518  ex4200-48t           255  Linecard  3  vcp-0
                                                                     1  vcp-1
3 (FPC 3)  Prsnt   BM0210399621  ex4200-48t           255  Linecard  0  vcp-0
                                                                     2  vcp-1

Member ID for next new member: 4 (FPC 4)



Bringing Up the Director Group
Once the control plane network and data plane network are wired and the control plane network infrastructure has been properly established, you must bring up the Director group. This slide provides a list of the basic tasks required to bring up the Director group.
Before performing the tasks listed on the slide, we recommend that you identify which Director device will be DG0 and which Director device will be DG1. Predetermining the name designations for the Director devices does not impact system operations, but it can ease administrative tasks later on. Once you determine the Director device name designations, power on the device designated as DG0. This device boots and searches for other Director devices. When it does not find any existing Director device, it then considers itself DG0. We recommend that you boot the second Director device no less than two minutes after the first Director device to ensure that the predetermined designations are actually assigned to their respective Director devices. When the second Director device is booted, again no less than two minutes after the first, it encounters an existing Director device within the group (DG0) and then becomes DG1. The two Director devices are then mirrored and synchronized, which can take about 25 minutes. Note that once this process has been completed, DG0 will remain DG0 and DG1 will remain DG1 upon further reboots of the system.
Once the Director group is formed and the Director devices are synchronized, you log in to DG0 using console access and perform the initial setup task. The initial setup task provisions the Director group with some key local parameters and is illustrated on the next couple of slides.


Performing the Initial Setup: Part 1
To perform the initial setup of the Director group, you need some key information. Some of this required information is specific to your environment and provided by your network administrator, while other information is specific to your QFabric system and obtained through Juniper Networks. The site-specific information, which includes the Director device addresses, Director group addresses, management subnet, gateway address for the management subnet, and the passwords for the Director group and the system components, is used to facilitate remote and future console access to the system and its components. The system-specific information, which includes a system serial number and a valid and unique range of MAC addresses, is used to ensure the system can be uniquely identified for support and licensing functions and can interface with other systems without any MAC address conflicts.


Performing the Initial Setup: Part 2
This slide provides a sample output showing the initial setup of a Director group. The slide shows the definition of the key information mentioned on the previous slide along with some other required parameters. This sample output includes IPv4 management address definitions only and uses the QFX3000-M platform type (option 2). Note that the QFX3000-M platform type is required for deployment scenarios that use the QFX3600-I Interconnect devices, and the QFX3000-G platform type (option 1 in the sample output on the slide) is required for deployments that use the QFX3008-I Interconnect devices.
To invoke the script manually, you can use the following command:

[root@dg0 ~]# /root/reset_initial_configuration.sh


Logging In to the System
Once the initial setup of the Director group has been performed, you should be able to access the Director devices and the fabric admin CLI through the out-of-band (OoB) management network using SSH. All incoming SSH sessions to the fabric admin are directed to the load balancing utility running in the Director software and distributed between the Director devices participating in the Director group.
Note that you do not need to SSH to the virtual IP address associated with the system’s default partition to gain access to the fabric admin, which, again, is the Junos CLI user interface for a QFabric system. Instead, you can issue the cli command on DG0 or DG1, as shown in the following output:

[root@dg0 ~]# cli
RUNNING ON DIRECTOR DEVICE : dg0

root@qfabric>

Note that the initial access to the fabric admin is obtained using the root account without a password. Once you begin to configure the system, you must configure root authentication. We cover the initial configuration of a QFabric system later in this chapter.


Bringing Up Interconnect Devices and Node Devices
After the system is racked, cabled, has a functioning control plane network, and the Director group is running, your focus turns to the Interconnect devices and Node devices participating in the system. This slide provides a list of recommended steps used to bring up the Interconnect devices and Node devices. We recommend you bring up the Interconnect devices first and then the Node devices.
Note that for QFX3000-M deployments where the QFX3600-I Interconnect devices are used, the Director devices must be running Junos operating system (OS) Release 12.2 or later. If the Director group software version is a release prior to 12.2, the QFX3600-I Interconnect devices will not join the system.
For all deployment scenarios, you must ensure the software version on QFX3500 Node devices is compatible with the QFabric system. The initial software version and some subsequent images used on the QFX3500 devices are not compatible with the QFabric system. If the software image running on the QFX3500 Node devices is not compatible with the QFabric system, you must upgrade the Node devices to a compatible image. Note that software upgrades are covered in Appendix A.
The software version running on the Interconnect and Node devices does not need to be the same version of software running on the Director group. As long as the images on the Interconnect devices and Node devices are compatible with the version running on the Director group, the Director group will register those devices and automatically upgrade them to the version running on the Director group.
Continued on the next page.


Bringing Up Interconnect Devices and Node Devices (contd.)
In addition to the potential software compatibility issues, you must also ensure that the Node devices are configured for fabric mode instead of standalone mode. The following sample output illustrates the basic process for converting QFX3500 and QFX3600 devices from standalone mode to fabric mode:

root> request chassis device-mode fabric
Device mode set to 'fabric' mode. Please reboot the system to complete the process.

root> show chassis device-mode
Current device-mode : Standalone
Future device-mode after reboot : Fabric

root> request system reboot
Reboot the system ? [yes,no] (no) yes

Shutdown NOW!
…


The End Result
Once you have performed the required steps previously mentioned, you should see all components registered with the system. This slide shows a sample output, listing the various Node devices and Interconnect devices along with the Routing Engines (REs) required to support the system. The two columns on the right side of the sample output display the state of all components registered with the system. Based on this sample output, we see that the components registered with the system are properly connected and configured. Note that the time it takes for components to register with and be provisioned by the system can vary and might depend on the software version running on the Director group as well as the Node devices and Interconnect devices.


Test Your Knowledge
This slide is animated.

The slide is designed to test your understanding of some of the common reasons why devices might not register with a QFabric system. The text boxes on the slide provide a list of the common issues as well as a list of the remedies for those issues.


Initial Configuration Tasks
The slide highlights the topic we discuss next.


Defaults and Initial Configuration Tasks
Now that your system is up and running, you might actually want to put it to work. To put your system to work, you must log in to the fabric admin and enable the desired functionality through the associated configuration statements. To log in to the fabric admin the first time, use the root user account without a password. Note that while the fabric admin does not have any default configuration, the various system components do have the default configuration received from the Director group software during the provisioning process.

Once you are logged in, you can navigate from operational mode to configuration mode and add the desired statements. Note that the configuration file on the fabric admin is completely empty (unlike other devices that run the Junos OS). Root authentication must be defined before any other configuration statements can be committed. In addition to defining root authentication, you will likely also perform one or more of the initial configuration tasks listed on the slide.
Note that the management network, often used for system services such as NTP, syslog, and SNMP, is not shown with the show route command, as is usually the case with other Junos devices. This behavior is due to VM isolation between system management functions and the Junos OS routing process. We discuss the initial configuration tasks listed on the slide in more detail throughout the remainder of this chapter.
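Because root authentication must be committed before anything else, the first commit typically looks like the following sketch, which uses the standard Junos OS plain-text password prompts:

[edit]
root@qfabric# set system root-authentication plain-text-password
New password:
Retype new password:

[edit]
root@qfabric# commit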


User Access and Authentication
As previously mentioned, the initial user access available when a QFabric system is deployed is through console or SSH access (assuming the initial setup of the system has been performed) using the root user account and no password. Similar to other devices running the Junos OS, you must define a password for the root user once you begin configuring the system. You can also define additional user accounts along with various authentication options. The same user and authentication options configurable on other devices running the Junos OS are also available on the QFabric system.
It is worth pointing out that the CLI description for the qfabric-user class is inaccurate as shown. The description should indicate that this class prevents users from accessing system components. PR811007 is open to correct this description.

In addition to the user and authentication options available on all other devices running the Junos OS, you can also enable the remote-debug-permission option when defining user accounts and authentication. This configuration option allows users with which the option is associated to access the various components within the system. Note that the remote-debug-permission option requires an accompanying class definition. The qfabric-admin class allows the user to manage system components. The qfabric-operator class allows users to view component operations and configurations. The qfabric-user class prevents users from accessing components and is the equivalent of the Junos OS predefined user class unauthorized. We illustrate how connections are made to the remote system components on the next slide.


Connecting to System Components
To initiate a connection to the remote components within a QFabric system, you use the request component login component-name command as shown on the slide. This command allows you to connect to Node devices and Interconnect devices as well as the various Routing Engines (REs) throughout the system. Once connected to a remote system component using the request component login component-name command, you can navigate within the component and display various outputs just as you would on any other device running the Junos OS. As shown on the slide, the class definition associated with the user, which again is defined as part of the remote-debug-permission statement, is displayed as the user name within the prompt once a successful connection to the desired component is made.
Note that you can change the component password specified during the initial setup script. To change the component password, configure a new password under the [edit system device-authentication] configuration hierarchy as shown here:

[edit system device-authentication]
user@qfabric# show
encrypted-password "$1$n4phuc8v$iy/QcYmQ1GjH8t4Z5bc8V/:$9$ZVDqf0ORevLhSbsg4ZG/CA"; ## SECRET-DATA
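As a brief illustration of the command described above, a login to the sng1 Node group (seen elsewhere in this course) might look like the following sketch; the login banner is elided, and the prompt reflects the user's qfabric-admin class rather than the actual user name:

root@qfabric> request component login sng1
...
qfabric-admin@sng1>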


System Time Synchronization
To ensure proper operations of system logging, database and file system replication, and other key functions, time synchronization within the QFabric system is critical. To synchronize time within the system, all components are configured when provisioned to synchronize their time with the Director group. The required NTP communication among the system components occurs over the control plane network and specifically through the 169.254.0.0/16 subnet. The DG0 Director device is assigned the IP address 169.254.0.1 and serves as the NTP boot server for all system components. As shown on the slide, both Director devices serve as NTP servers. The NTP configuration on the system components is shown in the following output:

qfabric-admin@P3603-C> show configuration system | find ntp
ntp {
    boot-server 169.254.0.1;
    authentication-key 1 type md5 value "$9$jgikmTQ3tpOM8UiHk5T1REcev"; ## SECRET-DATA
    broadcast-client;
    trusted-key 1;
}

Continued on the next page.


System Time Synchronization (contd.)
If an external NTP server is defined for the system, the Director devices synchronize with that server and then perform the roles of NTP clients and servers; they are NTP clients to the external NTP server and NTP servers for the QFabric system components. To verify NTP synchronization with an external server on the system, use the show ntp associations command as shown in the following output:

root@qfabric> show ntp associations
  remote          refid           st t  when poll reach   delay   offset   jitter
==============================================================================
*10.210.14.130   205.233.73.201   3 u    56   64   377    0.176  2889777    7.034
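Defining the external NTP server itself is standard Junos OS configuration committed from the fabric admin; a minimal sketch follows, reusing the server address from the output above:

[edit]
root@qfabric# set system ntp server 10.210.14.130
root@qfabric# commit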


System Logging
Ensure that the students know that while there is no messages log file configured as part of the fabric admin’s factory default configuration, logs from the various components are generated and available through the fabric admin.

On most devices that run the Junos OS, a single messages log file is created and maintained for the device. On a QFabric system, however, each component is configured to create and maintain its own messages log file. The Director group gathers and organizes the messages log files from the various system components. The information gathered from the system components is organized in a database maintained by the Director group software and is accessible through the fabric admin using the show log messages command. The individual entries collected include the component name from which the entries were retrieved. This component name is helpful when troubleshooting and can help point you in the right direction if a problem is related to a specific component. The sample output that follows shows the expected format for a log message retrieved from a Node device named P3603-C.

root@qfabric> show log messages | match P3603-C
Apr 19 04:47:22 qfabric mib2d: QFABRIC_INTERNAL_SYSLOG: Apr 19 04:47:22 P3603-C mib2d: SNMP_TRAP_LINK_UP [junos@... snmp-interface-index="1209532945" admin-status="up(1)" operational-status="up(1)" interface-name="xe-0/0/10"] ifIndex 1209532945, ifAdminStatus up(1), ifOperStatus up(1), ifName xe-0/0/10


System Logging (contd.)
While you can use the fabric admin CLI interface and the show log messages command to view the syslog messages, you might find it advantageous to export these messages to an external server. Note that connections to external devices for system management functions such as system logging and SNMP communications use the out-of-band management ports residing on DG0. To export the syslog messages to an external server, you must specify the desired external server using the host statement as shown in the following configuration example:

[edit]
root@qfabric# show system | find syslog
syslog {
    host 10.210.14.130 {
        any any;
    }
}

Note that there might be some value in promoting the use of exporting log messages to an external server for remote log viewing. Currently, there is a significant delay in retrieving output locally from the show log commands. At the very least, promote the use of the pipe (|) command because these files can be very large and seemingly difficult to manage and use.
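Because the aggregated messages file can grow very large, filtering with the pipe command keeps local viewing manageable; for example, the following combines the match and last filters to limit the output to recent entries from a single Node device:

root@qfabric> show log messages | match P3603-C | last 10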


SNMP
SNMP monitors network devices from a central location. The QFabric system supports the basic SNMP architecture of the Junos OS, but its implementation and current support of SNMP differ from that of other devices running the Junos OS. As in other SNMP systems, the SNMP manager resides on the network management system (NMS) of the network. The SNMP agent resides in system components, such as Node devices and Interconnect devices, and in the Director group software. The SNMP agent is responsible for receiving and distributing traps as well as responding to queries from the SNMP manager. For example, traps generated by a Node device are sent to the SNMP agent on the Director group, which in turn processes and sends them to the target IP addresses defined in the SNMP configuration on the fabric admin. In this role, the Director group acts as an SNMP proxy server.
Using this SNMP proxy server approach requires more time to process SNMP requests when compared to a typical Junos OS device. The default timeout setting on most SNMP applications is 3 seconds. This amount of time might not be enough for the QFabric system to respond to SNMP requests. Because of the additional time SNMP processing might take on a QFabric system, we recommend you change the SNMP timeout setting to 5 seconds or longer on your NMS so the QFabric system has sufficient time to respond to the incoming requests.
Note that the current support of SNMP is somewhat limited. Only a handful of MIBs are currently supported, client user access is limited to read-only access, and the local monitoring options are limited to the operational mode show snmp statistics command. Please keep in mind that a number of roadmap items related to SNMP are scheduled for future software releases.
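The SNMP configuration on the fabric admin follows standard Junos OS syntax for the supported features; the following minimal sketch uses a hypothetical community string and trap group name, with the trap target borrowed from the syslog example earlier in this chapter:

[edit snmp]
root@qfabric# show
community public {
    authorization read-only;
}
trap-group qf-traps {
    targets {
        10.210.14.130;
    }
}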


Using Aliases—A Review
By default, the identity of a Node device is the serial number of that Node device. This identity, or name, is what you use when performing configuration and monitoring tasks related to a given Node device. This approach, as you might have already guessed, can be administratively taxing. To make things a little easier, you can define customized aliases for each Node device. Using this approach can simplify things administratively, making monitoring and troubleshooting efforts more manageable.
The slide shows configuration examples not only for Node device aliases but also for Node group definitions and the association of Node devices to user-defined and system-defined Node groups. Note that the definition of the network Node group uses the system-defined name NW-NG-0 as well as the network-domain statement. The network-domain statement is mandatory when configuring the network Node group.
Note that alias names must not match the name of a Node group on the system. If you use the same name for both an alias and a Node group, the Junos OS will fail to commit, as shown below:

[edit]
root@qfabric# commit
[edit resources node-group sng0]
  A node group and a node device may not have the same name: 'sng0'
error: configuration check-out failed

Continued on the next page.


Using Aliases—A Review (contd.)
Changing the default name and group associations for a given Node device causes the affected Node device to reboot. Making these types of changes is roughly equivalent to moving a linecard or interface card in a traditional modular chassis from one slot to another. In that case, the linecard or interface card assumes a new identity and must re-register with the system. The same basic concept applies with Node devices and altering their identity or role.
In addition to configuring aliases for Node devices, you can also configure them for Director devices and Interconnect devices as shown in the following output:

[edit fabric]
root@qfabric# set aliases ?
Possible completions:
+ apply-groups         Groups from which to inherit configuration data
+ apply-groups-except  Don't inherit configuration data from these groups
> director-device      Aliases of Director devices
> interconnect-device  Aliases of Interconnect devices
> node-device          Aliases of Node devices


Activating Configuration Changes

As with all Junos OS devices, you must use the commit command to activate your configuration changes on a QFabric system. As with other Junos OS devices operating in a high availability mode, you use a private edit of the configuration file when entering configuration mode within the fabric admin of the QFabric system, as shown in the following sample output:

root@qfabric> configure
warning: Using private edit on QF/Director
warning: uncommitted changes will be discarded on exit
Entering configuration mode

As mentioned in the sample output, using a private edit of the configuration file will discard any uncommitted changes when the user exits the system. One corresponding effect of this operation is that all commit operations must be performed from the top of the configuration hierarchy. This point is illustrated on the slide, where a commit operation is attempted at a lower level within the configuration hierarchy and fails with a descriptive error.

When a commit operation is performed, and the changes to the configuration file relate to the installed Node groups, the appropriate configuration is pushed to the affected Node group Routing Engines (REs) and applied to each component's local configuration file. With this in mind, please note that commit operations can take some time.
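A sketch of the workflow follows; the exact error text for a lower-level commit varies by release, so it is not reproduced here. Use the top command to return to the top of the hierarchy before committing:

[edit interfaces]
root@qfabric# top

[edit]
root@qfabric# commit
commit complete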

Chapter 4–26 • Setup and Initial Configuration

www.juniper.net

Configuring and Monitoring QFabric Systems

Configuring Network Interfaces The slide highlights the topic we discuss next.


Interface Support

As mentioned on the slide, revenue ports on Node devices can be configured for Layer 2 and Layer 3 operations and can be joined in groups to form link aggregation groups (LAGs). Just as with other Junos OS devices, the protocol family configured on the interface determines whether the interface performs Layer 2 or Layer 3 operations. We cover configuration and some basic monitoring examples throughout the remainder of this section.

A number of different interface type options are available for the Node devices. As shown on the slide, the interface type options on the QFX3500 Node devices include 1 GbE and 10 GbE connections and 2 Gb, 4 Gb, and 8 Gb Fibre Channel connections. The QFX3600 Node device currently offers a 10 GbE interface type option through Quad Small Form-factor Pluggable Plus (QSFP+) direct attached copper (DAC) breakout cables. Note that other interface options will be supported in future releases. For the most current interface support information for the QFX3500 and QFX3600 Node devices, refer to the appropriate datasheet at http://www.juniper.net/us/en/products-services/switching/qfx-series/qfabric-system/#literature.


Interface Naming

When you configure an interface on the QFabric system, the interface name must follow a four-level naming convention that enables you to identify an interface as part of either a Node device or a Node group. Include the name of the Node device or the network or server Node group at the beginning of the interface name. The four-level interface naming convention is device-name:type-fpc/pic/port, where device-name is the name of the Node device or Node group. The remainder of the naming convention elements are the same as those used with other Junos OS devices. The slide shows a side-by-side comparison of the interface naming convention for traditional Junos OS devices and that used within the QFabric system.
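As a quick illustration, assuming a Node device with the alias node-1 and a redundant server Node group named rsng-1 (both placeholder names), the naming compares as follows:

    EX Series switch:   xe-0/0/10
    QFabric system:     node-1:xe-0/0/10   (interface on Node device node-1)
                        rsng-1:ae0         (aggregated interface owned by Node group rsng-1)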


Layer 2 Interface Configuration The slide provides a comparison of the Layer 2 interface configuration used for the EX Series switches and the QFabric system. Other than the difference in the interface naming format, the Layer 2 interface configuration syntax for EX Series switches and the QFabric system is the same. We discuss Layer 2 features and operations on QFabric systems in the next chapter.
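For reference, a minimal Layer 2 interface configuration might look like the following sketch (the Node device alias node-1 is a placeholder):

[edit]
root@qfabric# set interfaces node-1:xe-0/0/10 unit 0 family ethernet-switching

As on EX Series switches, you can add port-mode and vlan members statements under family ethernet-switching to make the port a trunk or to assign VLAN membership.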


Interface Ranges To ease the administrative overhead when configuring Layer 2 interfaces on the QFabric system, you can define interface ranges. Interface ranges configured on QFabric systems are based on Node device interface ranges or individual interfaces from different Node devices. As shown on the slide, you use the member-range statement to define a sequential list of interfaces on the same Node device that will share common configuration settings. You use the member option to list individual member interfaces on the same Node device or different Node devices that will share common configuration settings.
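A sketch of both options follows, using placeholder Node device aliases; the member-range statement covers a sequential span of ports on one Node device, while member statements can pull in individual ports from other Node devices:

[edit interfaces]
root@qfabric# show interface-range access-ports
member-range node-1:xe-0/0/0 to node-1:xe-0/0/9;
member node-2:xe-0/0/5;
unit 0 {
    family ethernet-switching;
}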


Layer 3 Interface Configuration The slide provides a comparison of the Layer 3 interface configuration used for the EX Series switches and the QFabric system. Other than the difference in the interface naming format, the Layer 3 interface configuration syntax for EX Series switches and the QFabric system is the same. We discuss Layer 3 features and operations on QFabric systems in a subsequent chapter.
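A minimal Layer 3 interface sketch follows; node-3 is a placeholder alias, and because all Layer 3 interfaces must be associated with the network Node group, this example assumes node-3 is a member of NW-NG-0:

[edit]
root@qfabric# set interfaces node-3:xe-0/0/20 unit 0 family inet address 172.16.1.1/24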


Monitoring Interfaces: Part 1 You monitor Layer 2 interfaces on the QFabric system the same way you monitor an EX Series switch. The sample capture on the slide illustrates the use of the show ethernet-switching interfaces command and typical outputs for an access port and a trunk port.


Monitoring Interfaces: Part 2

The show interfaces command is useful when verifying state, configuration and error details, and usage statistics for Layer 2 and Layer 3 interfaces. You can reference a specific interface within this command to filter the output and limit the display to the desired interface. You can also increase or decrease the amount of detail displayed in the resulting output by using the appropriate command option. The options for the show interfaces interface-name command are shown in the following output:

root@qfabric> show interfaces Node-83:ge-0/0/25 ?
Possible completions:
  <[Enter]>            Execute this command
  brief                Display brief output
  descriptions         Display interface description strings
  detail               Display detailed output
  extensive            Display extensive output
  fabric               Fabric interfaces
  media                Display media information
  routing-instance     Name of routing instance
  snmp-index           SNMP index of interface
  statistics           Display statistics and detailed output
  terse                Display terse output
  |                    Pipe through a command


Link Aggregation Groups

Link aggregation groups (LAGs) allow you to bundle multiple interfaces into a single logical interface with increased bandwidth and availability. LAGs can be static or dynamic. You configure LACP for dynamic LAGs, and at least one end of the LAG must be configured for active mode LACP operations.

LAGs can include interfaces that connect to the same Node device within any Node group, or to different Node devices within the network Node group or a redundant server Node group. You can include up to 32 Ethernet interfaces in a LAG, and you can have up to 48 LAGs within a server Node group and 128 LAGs in the network Node group. We cover LAG configuration on subsequent slides in this chapter.


LAG Configuration: Part 1

The slide provides a comparison of the LAG configuration used for the EX Series switches and the QFabric system. This slide specifically covers the creation of the aggregated interfaces. Note that on the QFabric system you create aggregated interfaces on a per-Node-group basis, whereas on the EX Series you configure them for the entire chassis. We illustrate the configuration of the aggregated interfaces and member links, and how they are linked together, on the next slide.
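As a hedged sketch of the per-Node-group step, assuming a redundant server Node group named rsng-1, the device-count statement sets how many aggregated interfaces the Node group supports:

[edit chassis]
root@qfabric# set node-group rsng-1 aggregated-devices ethernet device-count 1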


LAG Configuration: Part 2

The slide illustrates the remaining portions of the LAG configuration on an EX Series switch and a QFabric system. Specifically, this slide illustrates the definition of a dynamic aggregated interface and the member links associated with that aggregated interface. Note the inclusion of LACP within the configured aggregated interface, which is the defining characteristic that makes this a dynamic LAG. The aggregated interface includes the active option within the LACP configuration; this option is required on at least one of the two devices or systems participating in the LAG.

Note that in the configuration example for the QFabric system, the member links participating in the rsng-1:ae0 LAG are associated with different Node devices participating in the rsng-1 Node group. This cross-Node device configuration option is available only with redundant server Node groups and network Node groups. This restriction is in place because of the distributed control plane model used within the QFabric system. The Routing Engine associated with the Node group for which the aggregated interface is defined is responsible for the maintenance of the LAG, including the generating and processing of any related LACP traffic.
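The following sketch shows what such a configuration might look like, using the rsng-1:ae0 LAG and the member interfaces that appear in the monitoring output on the next slide:

[edit interfaces]
root@qfabric# show
rsng-1:ae0 {
    aggregated-ether-options {
        lacp {
            active;
        }
    }
    unit 0 {
        family ethernet-switching;
    }
}
node-2:xe-0/0/18 {
    ether-options {
        802.3ad rsng-1:ae0;
    }
}
node-3:xe-0/0/22 {
    ether-options {
        802.3ad rsng-1:ae0;
    }
}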


Monitoring LAGs and LACP

You monitor LAGs and LACP on the QFabric system the same way you monitor an EX Series switch. The sample capture on the slide illustrates the key commands and some associated outputs used to monitor and verify proper operations of LAGs and LACP.

Note that depending on which release version you are running, you might see a warning message when issuing the show lacp commands listed on the slide. This issue is documented in PR723700. A sample of this warning is shown in the following output:

root@qfabric> show lacp interfaces
Aggregated interface: rsng-1:ae0
    LACP state:         Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
      node-3:xe-0/0/22  Actor    No    No   Yes  Yes  Yes   Yes    Fast    Active
      node-3:xe-0/0/22  Partner   No    No   Yes  Yes  Yes   Yes    Fast    Active
      node-2:xe-0/0/18  Actor    No    No   Yes  Yes  Yes   Yes    Fast    Active
      node-2:xe-0/0/18  Partner   No    No   Yes  Yes  Yes   Yes    Fast    Active
    LACP protocol:      Receive State  Transmit State          Mux State
      node-3:xe-0/0/22  Current        Fast periodic           Collecting distributing
      node-2:xe-0/0/18  Current        Fast periodic           Collecting distributing
warning: lacp subsystem not running - not needed by configuration.


Monitoring Traffic

As shown on the slide, you can monitor traffic entering and leaving the system through a revenue port, but only from the Node device or group associated with that port. You cannot currently accomplish this same task from the fabric admin, which is important to know when you must troubleshoot traffic sent to and from the system. As shown in the sample output that follows, support for the monitor command does not currently exist on the fabric admin:

root@qfabric> monitor ?
No valid completions

When connected to a Node device or group, you have the following options with the monitor command:

{master}
qfabric-admin@rsng-1> monitor ?
Possible completions:
  ethernet             Start ethernet performance measurement
  interface            Show interface traffic
  list                 Show status of monitored files
  start                Start showing log file in real time
  static-lsp           Show static label-switched-path traffic
  stop                 Stop showing log file in real time
  traffic              Show real-time network traffic information


This Chapter Discussed:

•	Initial setup and configuration of the QFabric system;

•	Logging in and verifying status of the QFabric system; and

•	Configuring and monitoring interfaces on the QFabric system.


Review Questions

1. After the equipment is installed, you should first bring up the control plane Ethernet network infrastructure, which is provided through EX Series switches. You should then bring up the Director devices and ensure that they form a Director group. Once the Director group is formed, you should bring up the Interconnect and Node devices.

2. When bringing up the Director group, you will need the IP addresses for DG0 and DG1 as well as the default root partition virtual IP address. You will need the default gateway address for the management subnet on which the Director devices are connected. You will need two passwords—one for the Director devices and the other for Node devices and Interconnect devices. You will also need the serial number and MAC address range information, both of which are obtained through Juniper Networks when the system is purchased.

3. The QFabric system follows a four-level interface naming convention using the format device-name:type-fpc/pic/port, where device-name is the name of the Node device or Node group. The remainder of the naming convention elements are the same as those used with other Junos OS devices.


Lab 1: Setup and Initial Configuration The slide provides the objectives for this lab.


Chapter 5: Layer 2 Features and Operations


This Chapter Discusses:

•	Supported Layer 2 features;

•	Commonly used Layer 2 connections;

•	Layer 2 operations and traffic flow; and

•	Configuring and monitoring Layer 2 features.


Layer 2 Features The slide lists the topics we cover in this chapter. We discuss the highlighted topic first.


Overview of Layer 2 Features

The slide provides a brief overview of key Layer 2 features supported on QFabric systems. Many of the Layer 2 features supported on EX Series switches are also supported on QFabric systems. For a list of all features supported on QFabric systems, visit www.juniper.net. Note that features supported on EX Series switches and QFabric systems typically use the same syntax for configuration and monitoring tasks related to those features. We highlight specific details for the key Layer 2 features listed on the slide throughout this chapter.


Understanding the Defaults

On most EX Series switches, a factory-default configuration exists that enables interfaces for Layer 2 operations. As shown in an earlier chapter, the QFabric system does not have any such factory-default configuration. For interfaces to support Layer 2 operations, you must configure the interface using the four-level naming convention mentioned in the previous chapter and include the ethernet-switching protocol family as part of the interface's configuration. The configuration example on the slide illustrates the required statements to enable an interface for Layer 2 operations.

Interfaces that include the ethernet-switching protocol family within their configuration will, by default, function as access ports and be associated with the default VLAN. As with the EX Series switches, the default VLAN on QFabric systems is untagged. For a port to serve as a trunk port or to associate with a nondefault VLAN, additional user configuration is required. We explore and illustrate additional port configuration scenarios throughout this chapter.


VLANs and Port Membership The slide provides a basic configuration example of VLANs and port membership. On this slide we illustrate and note that interfaces can be assigned to their designated VLANs at the [edit vlans] and [edit interfaces] hierarchy levels. The hierarchy level at which you associate interfaces with their designated VLANs is not important because both locations provide the same result. We do recommend, however, that you be consistent in your configuration approach to avoid any errors.
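Both approaches are sketched below for a VLAN named v50 (the interface names are placeholders); the two commands achieve the same port-to-VLAN association:

[edit]
root@qfabric# set vlans v50 vlan-id 50 interface node-1:ge-0/0/12.0

[edit]
root@qfabric# set interfaces node-1:ge-0/0/13 unit 0 family ethernet-switching vlan members v50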


Common Layer 2 Connection Types The slide introduces the common Layer 2 connection types found in data center environments. The rack server and blade chassis connections can associate with any Node group type, but are often made with server Node groups and redundant server Node groups. As the name implies, the network Node group connections are associated with the network Node group only. We provide more details about these Layer 2 connection types on subsequent slides.


Rack Server Connections

How the rack server connects to the QFabric system depends on the high availability strategy, specifically whether it is at the application, server and network interface card (NIC), or network level. As shown on the slide, rack servers are typically connected to the Layer 2 network in one of three ways depending on the business requirements. The three connection methods used by rack servers are as follows:

•	Single-attached: The server has a single link connecting to a Node device. In this model, there is either no redundancy, or the redundancy is built into the application.

•	Dual-attached: The server has two links connecting to the same Node device. NIC teaming is enabled on the servers, where it can be either active/standby or active/active. The second link provides the second level of redundancy. The more common deployment is active/active with a static LAG between the switch and rack server.

•	Dual-homed: The server has two links that connect to two different Node devices in the same Node group in either active/standby or active/active mode. This method provides a third level of redundancy; in addition to link redundancy there is spatial redundancy. If one of the Node devices fails, then there is an alternative path. For active/active deployments, a cross-Node LAG is typically used between the QFabric system and the server. This connection method requires the use of a redundant server Node group.


Blade Chassis Connections

In cases where blade chassis are used instead of rack servers, physical connectivity can vary depending on the blade chassis intermediary connection: a pass-through module or blade switches. We recommend a pass-through module whenever possible because it provides a direct connection between the servers and the QFabric system. This direct connection eliminates any oversubscription and the additional switching layer seen with blade switches. The deployment options for blade chassis using a pass-through module are exactly the same as previously described for rack servers.

Note that deployments using blade switches represent additional devices to manage. This deployment method adds complexity to the overall switching topology and can introduce some unwanted issues related to the inclusion of a Spanning Tree Protocol (STP). The slide shows the common connections used between the blade switches and a QFabric system.


Blade Chassis Connections (contd.)

The three connection methods used by blade chassis when blade switches are used as the intermediary connection are as follows:

•	Single-homed: Each blade switch has a LAG connection into a single Node device. In this deployment, there are no Layer 2 loops to worry about or manage.

•	Dual-homed (active/backup): In this deployment, each access switch is a standalone device. Because there are potential Layer 2 loops, the blade switch should support some sort of Layer 2 loop prevention technology, such as STP or an active/backup-like technology, which will effectively block any redundant link to break the Layer 2 loop.

•	Dual-homed (active/active): This is the most optimized deployment, as all links between the blade and access switches are active and forwarding and provide network resiliency. The connection between the blade switch and access switch is a LAG, which means the external switches must support either multichassis LAG or some form of stacking technology. Because the LAG is a single logical link between the blade and external switch, there are no Layer 2 loops to worry about or manage.

The connections illustrated on the slide and previously described assume that the blade switches are separate entities and are not daisy-chained or logically grouped through a stacking technology. Scenarios in which the blade switches are grouped might require additional considerations. Because QFabric systems are a distributed solution that acts as a single logical switch, the two most likely deployments are single-homed or dual-homed (active/active). As shown on the slide, the Node devices will be configured as server Node groups for single-homed connections and redundant server Node groups for dual-homed (active/active) connections.


Network Node Group Connections The slide illustrates and provides details for situations where Layer 2 connections are used within the network Node group. We discuss these situations in more detail later in this chapter and in the next chapter.


Ingress Protocol Traffic

While there is no need to run the Spanning Tree Protocol (STP) within the QFabric system, there might be deployment scenarios that require its use when connecting the QFabric system with another Layer 2 device. STP bridge protocol data units (BPDUs) can only be received and processed through interfaces associated with the network Node group. If BPDUs are received on a server Node group or redundant server Node group, the interface on which the BPDU was received is disabled. All server Node groups are automatically configured to block BPDUs and disable their interfaces should they receive BPDUs. A sample of the configuration used to enforce this functionality is shown in the following output:

{master}
root@RSNG-1> show configuration | find ethernet-switching-options
ethernet-switching-options {
    nonstop-bridging;
    bpdu-block {
        interface all;
    }
}

You can re-enable interfaces on server Node groups that have been disabled because of BPDUs. We cover the recommended process on the next slide.


If…, Then…

The slide illustrates the effects of an interface associated with a server Node group that has received a BPDU. The sample output on the slide shows that an interface has been disabled by the BPDU protect mechanism. Should this situation occur, we recommend that you first disable STP on the connecting device and then clear the BPDU error using the command highlighted on the slide.

If STP is required between the system and the connecting device, you must ensure the connection associates with the network Node group and that STP is enabled on the system. When enabling STP on a QFabric system, the only interfaces that will actively receive and send BPDUs are interfaces belonging to the network Node group. We cover a sample STP configuration scenario in the case study later in this chapter.
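As a sketch, clearing the BPDU error on a disabled server Node group interface might look like the following (the interface name is a placeholder):

root@qfabric> clear ethernet-switching bpdu-error interface node-1:ge-0/0/12.0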


Layer 2 Operations The slide highlights the topic we discuss next.


The Distributed RE Model

In a traditional chassis-based system, the control plane functions are centralized in one or two Routing Engines, depending on the hardware configuration. All linecards as well as the fabric are passive participants in the control plane operations and primarily participate in the data plane tasks. While the QFabric system centralizes many control plane functions in the REs running on the Director group, it does not centralize all of these functions. To allow the system to scale and maintain a high degree of performance, it distributes some key control plane functions to the REs belonging to the various system components, including the Interconnect devices and the Node groups. These distributed REs work together to ensure the proper route information is shared and that traffic properly flows through the system. We provide some details and illustrations of how the distributed REs work together on subsequent slides.


Supporting Layer 2 Route Exchange

As system components such as Node and Interconnect devices are discovered and provisioned, their respective REs receive and load their assigned configuration files. A portion of the received configuration file supports the fabric control protocol. Using the bridge-vpn protocol family, Layer 2 routes are exchanged between system components through the fabric control REs, which serve as BGP route reflectors. Note that this configuration is automatically received and applied. Users do not need to explicitly configure the bridge-vpn protocol family. An example snippet of this configuration is shown here:

[edit fabric protocols bgp]
root@sng1# show
…
family bridge-vpn {
    unicast;
}
…


Supporting Layer 2 Route Exchange (contd.)

You can log in to the individual components and view the contents of the bgp.bridgevpn.0 table as shown in the following output:

qfabric-admin@FC-0> show route fabric table bgp.bridgevpn.0

bgp.bridgevpn.0: 11 destinations, 22 routes (11 active, 0 holddown, 0 hidden)
Restart Complete
+ = Active Route, - = Last Active, * = Both

65534:1:2.ff:ff:ff:ff:ff:ff/144
                   *[BGP/170] 1d 00:29:41, localpref 100, from 128.0.128.4
                      AS path: I
                    > to 128.0.128.4:57005(NE_PORT) via dcfabric.0, MultiCast Corekey:3 Keylen:7
                    [BGP/170] 1d 00:29:41, localpref 100, from 128.0.128.8
                      AS path: I
                    > to 128.0.128.4:57005(NE_PORT) via dcfabric.0, MultiCast Corekey:3 Keylen:7
65534:1:3.33:33:df:ff:ff:1/144
                   *[BGP/170] 2d 15:33:51, localpref 100, from 128.0.128.4
                      AS path: I
                    > to 128.0.128.4:57005(NE_PORT) via dcfabric.0, MultiCast Corekey:0 Keylen:7
                    [BGP/170] 2d 15:34:40, localpref 100, from 128.0.128.8
                      AS path: I
                    > to 128.0.128.4:57005(NE_PORT) via dcfabric.0, MultiCast Corekey:0 Keylen:7
65534:1:5.0:c:29:3d:f3:b9/144
                   *[BGP/170] 1d 00:29:40, localpref 100
                      AS path: I
                    > to 128.0.130.18 via dcfabric.0, PFE Id 6 Port Id 27
                      to 128.0.130.18 via dcfabric.0, PFE Id 8 Port Id 27
                    [BGP/170] 1d 00:29:40, localpref 100, from 128.0.128.8
                      AS path: I
                    > to 128.0.130.18 via dcfabric.0, PFE Id 6 Port Id 27
                      to 128.0.130.18 via dcfabric.0, PFE Id 8 Port Id 27
65534:1:5.78:fe:3d:5c:5d:76/144
                   *[BGP/170] 1d 00:29:40, localpref 100
                      AS path: I
                      to 128.0.128.4 via dcfabric.0, PFE Id 5 Port Id 41
                    > to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 41
                    [BGP/170] 1d 00:29:40, localpref 100, from 128.0.128.8
                      AS path: I
                      to 128.0.128.4 via dcfabric.0, PFE Id 5 Port Id 41
                    > to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 41

Note that some of the entries shown in the preceding output are broadcast entries while others are unicast entries. We illustrate the MAC learning and forwarding processes on subsequent slides.


Distributing Layer 2 Information

The diagram on the slide shows a small network topology after Layer 2 network reachability state has been shared between the various Node groups through the fabric control REs. When VLANs are created and interfaces from the various Node groups are assigned to those VLANs, the system assigns a unique route tag to each VLAN. Note that the route tags are automatically generated and do not match the VLAN IDs. Once an interface belonging to a Node group is configured to participate in a VLAN, the Node group is associated with the corresponding target and receives any corresponding reachability information pertaining to the VLAN.


Distributing Layer 2 Information (contd.)

This approach allows the fabric control REs to share relevant reachability information with the server Node group REs rather than having all Node groups maintain all reachability information. The sample output that follows shows the configuration applied to the sng0 Node group for VLAN v50:

qfabric-admin@sng0> show configuration vlans
v50---qfabric {
    vlan-id 50;
    global-layer2-domainid 6;
    interface {
        ge-0/0/12.0;
    }
}

Note the global Layer 2 domain ID associated with VLAN v50. In this case, the system has assigned VLAN v50 a Layer 2 domain ID of 6. All server Node groups that have interfaces participating in this VLAN should receive a flood route entry that includes this Layer 2 domain ID, as shown in the following output:

qfabric-admin@sng0> show route fabric table bgp.bridgevpn.0

bgp.bridgevpn.0: 2 destinations, 4 routes (2 active, 0 holddown, 0 hidden)
Restart Complete
+ = Active Route, - = Last Active, * = Both
...
65534:1:6.ff:ff:ff:ff:ff:ff/144
                   *[BGP/170] 01:17:43, localpref 100, from 128.0.128.6
                      AS path: I
                    > to 128.0.128.4:57005(NE_PORT) via dcfabric.0, MultiCast Corekey:3 Keylen:7
                    [BGP/170] 01:17:43, localpref 100, from 128.0.128.8
                      AS path: I
                    > to 128.0.128.4:57005(NE_PORT) via dcfabric.0, MultiCast Corekey:3 Keylen:7

Note that this entry points back to the fabric control REs (128.0.128.6 and 128.0.128.8). Should Layer 2 traffic for this VLAN destined to an unknown destination MAC address enter sng0, that traffic would use the route entry shown in the preceding output and be sent to the selected fabric control RE and then be replicated, if needed, and flooded to all other Node groups associated with VLAN v50. We provide some Layer 2 frame processing examples on the next slides.


Layer 2 Learning This slide is animated.

The slide illustrates the Layer 2 learning process on a QFabric system. In this example, the sng1 server Node group receives a frame on int-01. After examining the header information in the frame, sng1 determines that it does not know the source MAC address. Because it is a new source MAC address, sng1 records the MAC address in its local bridging table and then notifies the fabric control REs about this newly learned information. The fabric control REs then distribute this Layer 2 information to all interested Node groups using the fabric control protocol. All Node groups belonging to the associated VLAN receive and record the newly learned MAC address in their bridge tables.


Layer 2 Unicast Forwarding This slide is animated.

The slide illustrates the Layer 2 unicast forwarding process on a QFabric system. In this example, the sng7 server Node group receives a frame on int-03. After examining the header information in the frame, sng7 determines that it knows the source and destination MAC addresses. Because the source and destination MAC addresses are known, no local or global learning is required. While examining the local bridge table, sng7 identifies the next hop for the destination MAC address, which in this case is Packet Forwarding Engine (PFE) ID 1 and corresponds with sng1. The sng7 Node group adds the fabric header, which includes the destination PFE ID, to the frame and sends it to the selected Interconnect device. The Interconnect device receives the frame, examines the fabric header, and sends the frame to the destination PFE, which in this case is sng1. The sng1 server Node group performs a forwarding lookup and determines the destination is local. Because the destination is local, the fabric header is removed and the original frame is sent out the egress port (int-01) and on to its ultimate destination. Note that the PFE IDs are automatically assigned by the system and are not user configurable. You can view the PFE ID assigned to a given Node group using the following method:

qfabric-admin@sng0> start shell
% cprod -A fpc0 -c 'set dc bc "stkm"'
HW (unit 0) STKMode:
unit 0: module id 9

Optionally, you can log in to NW-NG-0 and view all assigned PFE IDs using the show oam fabric internal-database device command.


Layer 2 Broadcast and Unknown Unicast Forwarding This slide is animated.

The slide illustrates the Layer 2 broadcast and unknown unicast forwarding process on a QFabric system. In this example, the sng1 server Node group receives a frame on int-01. After examining the header information in the frame, sng1 determines that it knows the source MAC address and that the destination MAC address is the broadcast MAC address. Because the source MAC address is already known, no local or global learning is required. The sng1 Node group obtains the Layer 2 domain ID from its forwarding lookup. The Layer 2 domain ID is associated with a list of ports to which the packet must be replicated. The sng1 Node group RE adds the fabric header, containing the Layer 2 domain ID or multicast tree index, to the frame and forwards it to the selected Interconnect device. The Interconnect device receives the frame, examines the fabric header, and sends a copy of the frame to all destination Node groups, which in this case includes sng3, sng5, sng7, and sng11. The receiving server Node groups perform a forwarding lookup to determine the destination port or ports. The Node groups then remove the fabric header, replicate the frame as needed, and forward a copy of the original frame out the appropriate egress port or ports.


Case Study The slide highlights the topic we discuss next.


Case Study: Objectives and Topology The slide provides the objectives and topological details for this case study.


Configuring Interfaces and VLANs: Part 1 The slide provides a portion of the required interface and VLAN configuration for this case study.


Configuring Interfaces and VLANs: Part 2 The slide provides the remaining portion of the required interface and VLAN configuration for this case study.


Enabling RSTP

Note that the design on the slide is in no way a recommended design but simply an example of when RSTP might be used. If this were a long-term connection, rather than a temporary one used for a migration, a second LAG from the QFabric system to a different aggregation switch should probably be added.


The slide shows the required configuration to enable the Rapid Spanning Tree Protocol (RSTP). This configuration supports all interfaces belonging to the network Node group and uses the highest possible bridge priority value to reduce any chance that the QFabric system will become the root bridge. As highlighted on the slide and mentioned earlier in this chapter, if BPDUs are received on a port associated with a server Node group, that port will be disabled by the BPDU control mechanism.

Note that this slide provides you an opportunity to discuss the migration from a legacy data center network to the QFabric system. The connection between the network Node group and the aggregation switch in the tiered network represents the first step in one migration approach. Subsequent steps in this approach would include connecting the WAN edge and existing access switches to the QFabric system, ultimately removing the aggregation and core layers. If the access switches are QFX3500 or QFX3600 switches, they could then be converted easily to Node devices and merged into the system.
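The slide's exact configuration is not reproduced here, but a minimal sketch consistent with it might look like the following; 61440 is the largest configurable bridge priority value and is therefore the least preferred in root bridge election:

[edit protocols rstp]
root@qfabric# show
bridge-priority 61440;
interface all;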


Verifying RSTP Operations The slide illustrates key commands used to verify RSTP operations along with sample outputs.


Verifying Layer 2 Interfaces Emphasize the value of using the | match options with various operational mode commands such as the show interfaces command on the QFabric system. This approach can save a lot of time and make life easier!


The slide illustrates key commands used to verify Layer 2 interfaces along with sample outputs.
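For example, you can quickly filter a long terse listing down to a single Node device (node-1 is a placeholder alias):

root@qfabric> show interfaces terse | match node-1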


Verifying VLAN Associations The slide illustrates key commands used to verify VLAN associations along with sample outputs.


This Chapter Discussed:

•	Supported Layer 2 features;

•	Commonly used Layer 2 connections;

•	Layer 2 operations and traffic flow; and

•	Configuring and monitoring Layer 2 features.


Review Questions

1. The majority of the Layer 2 connections are within a data center, where the QFabric system is used to connect servers to the network. These connections include rack server connections and blade server connections and often support east-west traffic flows, which is traffic passing between devices within the data center. One other type of connection involves the network Node group and is typically used to connect the system with the WAN edge and security devices. These connections are commonly used for north-south traffic flows, which is traffic entering and leaving the data center. Note that there are also certain Layer 2 connections within the network Node group that are used for migration strategies and in situations where the blade switches within blade chassis deployments cannot be bypassed and must run STP.

2. When blade chassis that include blade switches are used and the connections interface with a server Node group, you must ensure STP is disabled. Otherwise, the interfaces within the Node group that receive STP BPDUs will be disabled.

3. MAC addresses are first learned by the ingress Node groups through which the related traffic is received. The newly learned MAC address is then advertised from the Node group RE to the fabric control REs through the fabric control protocol, which is based on BGP. The fabric control REs then reflect the learned MAC addresses on to all other Node groups associated with the VLAN to which the MAC address belongs.


Lab 2: Layer 2 Features and Operations The slide provides the objective for this lab.



Chapter 6: Layer 3 Features and Operations


This Chapter Discusses:

•	Supported Layer 3 features;

•	Commonly used Layer 3 connections;

•	Layer 3 operations and traffic flow; and

•	Configuring and monitoring Layer 3 features.


Layer 3 Features The slide lists the topics we cover in this chapter. We discuss the highlighted topic first.


Overview of Layer 3 Features

The slide provides a brief overview of key Layer 3 features supported on QFabric systems. Many of the Layer 3 features supported on EX Series switches are also supported on QFabric systems. For a list of all features supported on QFabric systems, visit www.juniper.net. Note that features supported on EX Series switches and QFabric systems typically use the same syntax for configuration and monitoring tasks related to those features. We highlight specific details for the key Layer 3 features listed on the slide throughout this chapter.


Securing Traffic Flows

While a significant amount of data center traffic passes between devices in the same Layer 2 broadcast domain, much of it flows between VLANs and in and out of the data center. You must account for all required traffic flows within the data center and ensure those flows are permitted in a secure and efficient manner. To support all of the required traffic flows in a data center, you must incorporate Layer 3 gateway and routing services. We discuss the gateway and routing services, as well as the operations of some of these services, throughout the remainder of this chapter.


First Hop Router Placement

One of the primary decisions that must be made to provide Layer 3 gateway and routing services relates to placement of the first hop router. This decision is ultimately determined by your design and traffic flow requirements. As illustrated on the slide, you have a number of options when it comes to the implementation of the first hop router.

For deployments that require security checks on all traffic between VLANs within the data center and traffic in and out of the data center, we recommend using a high-performance firewall, such as an SRX Series device, as the first hop router. For deployments where low latency and high throughput performance are critical, such as with some high-performance computing applications found in the data center, we recommend using RVIs within the QFabric system as the first hop router. Using RVIs as the first hop router essentially cuts one hop out of the forwarding equation, which means latency is lowered and performance is increased.


First Hop Router Placement (contd.) If you are connecting multiple data centers through a Virtual Private LAN Service (VPLS) deployment, you can use an MX Series device as the first hop router to simplify the design. In such deployments, all Layer 2 traffic flows between the data centers must pass through the MX Series device before entering the VPLS cloud. Traffic entering and exiting the data center not associated with intersite VPLS flows can still be directed through the SRX Series devices even though it passes through the MX Series devices first. Directing all traffic through the SRX Series devices incorporates the needed security checks, protects your environment, and is highly recommended by Juniper Networks. Your deployment and design objectives might require a combination of the available first hop router options. You might have some VLANs used for high-performance computing where using RVIs within the QFabric system makes the most sense and helps support the design and business goals. You might also have some common VLANs shared between remote data centers where using an MX Series device as the first hop router is beneficial and simplifies the deployment. In deployments where the requirements do not match neatly with a single approach, you should consider a combination of the available first hop router options.


What Is an RVI? Emphasize the need for proper routing information on the host devices in the VLANs (likely a default route pointing to the RVI).

A routed VLAN interface (RVI) is a logical Layer 3 VLAN interface used to route traffic between VLANs. RVIs often serve as the gateway IP address for host devices on the subnet associated with the corresponding VLAN. Note that proper routing information must exist on the host devices, which typically comes in the form of a default gateway. RVIs, along with all other Layer 3 interfaces on a QFabric system, are associated with the network Node group. We provide configuration and monitoring examples for RVIs on subsequent slides.


Configuring and Applying RVIs The slide provides a configuration and application example for RVIs. Note that the syntax is identical to that used for EX Series switches.
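The syntax, as on an EX Series switch, might look like the following sketch; the VLAN name v50 and the address are placeholders consistent with examples elsewhere in this course:

[edit]
root@qfabric# set interfaces vlan unit 50 family inet address 172.25.50.1/24
root@qfabric# set vlans v50 l3-interface vlan.50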


Verifying Interface State The slide provides the commands and sample outputs used to verify the state of RVIs. Note that RVIs become active only when an operational Layer 2 interface is associated with the VLAN to which the RVI is applied.


Test Your Knowledge This slide is animated.

The slide is designed to test your knowledge and specifically serves as a review of how route tables are created and used on Junos OS devices. Just like other Junos OS devices, once interfaces are configured on the system, the corresponding routes (Direct and Local routes, as shown in the sample output on the slide) are added to the route table. If the system is serving as the gateway device for attached devices, it consults the routing table and takes the appropriate action: it forwards the packet based on a matching entry in the route table, or it discards the packet with an informative response to the sender because no matching entry exists. In our example, two RVIs are defined and active, which results in the associated entries being added to the route table. Host X and Host Y are configured to use the RVIs on the QFabric system as their respective gateways. Because the required configuration and route table entries are in place, Host X and Host Y should be able to communicate.


Layer 3 LAGs

In previous chapters, we introduced and illustrated the configuration of link aggregation groups (LAGs). This slide provides a configuration example for a Layer 3 LAG. While much of this configuration example matches the examples shown in previous chapters, you should note that this LAG configuration example uses the protocol family inet. As with RVIs and any other Layer 3 interface, Layer 3 LAGs must be associated with the network Node group.

As noted on the slide, Layer 3 LAGs can use only a single unit, and that unit number must be zero. If you associate a number other than zero with a Layer 3 LAG, you will get a commit error, as shown in the following sample output:

[edit]
root@qfabric# commit
[edit interfaces NW-NG-0:ae0]
  'unit 100'
    Only unit 0 is valid for this encapsulation
error: configuration check-out failed

Note that if a LAG connection is used and must support tagged traffic, you can use a Layer 2 LAG configured as a trunk port for all required VLANs and associate an RVI with each VLAN requiring Layer 3 communications between the QFabric system and the attached device.
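A minimal Layer 3 LAG sketch follows; the address shown is a placeholder:

[edit interfaces]
root@qfabric# show NW-NG-0:ae0
aggregated-ether-options {
    lacp {
        active;
    }
}
unit 0 {
    family inet {
        address 10.10.1.1/30;
    }
}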


Static Routes To allow routing to remote networks for the QFabric system and its attached devices, the route table will need route entries for those remote destination networks or a default route. The QFabric system supports the manual creation of static routes as well as some dynamic routing protocols. This slide provides a sample default route configuration that directs all outbound traffic to remote destination networks to the attached SRX Series devices. For end-to-end routing to work properly, the SRX Series devices, as well as any other upstream Layer 3 devices, must have the necessary routing information to forward the packets on to their intended destination.
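A default static route pointing at an attached SRX Series device might be configured as follows; the next-hop address shown is a placeholder:

[edit routing-options]
root@qfabric# set static route 0.0.0.0/0 next-hop 172.25.100.1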


Dynamic Routing Protocols

Depending on your deployment and design requirements, it might be best to use a dynamic routing protocol instead of static routes. The QFabric system supports OSPF and BGP for these situations.


Dynamic Routing Protocols (contd.)

The syntax used to configure these protocols on a QFabric system is the same syntax used on other Junos OS devices, such as the MX Series and SRX Series devices, and is shown in the following example:

[edit protocols]
root@qfabric# show
bgp {
    local-as 65322;
    group ibgp {
        type internal;
        neighbor 172.25.100.10;
    }
}
ospf {
    area 0.0.0.1 {
        stub no-summaries;
        interface vlan.100;
        interface lo0.0;
        interface vlan.50 {
            passive;
        }
    }
}

The QFabric architecture currently requires any routing protocol traffic intended for the system to be received through interfaces associated with Node devices participating in the network Node group. The received protocol traffic is forwarded from these Node devices (PFEs) through the control plane network and to the active network Node group Routing Engine (RE).


Layer 3 Operations The slide highlights the topic we discuss next.


Maintaining and Sharing Layer 3 Routes

The active network Node group RE is responsible for learning Layer 3 routes and for sharing the learned routing information with the appropriate components throughout the system. As with other Junos OS devices, you can view the route table contents on a QFabric system through the fabric admin using the show route command. The slide illustrates the use of this command along with a sample output showing a static default route and some direct and local routes.


Maintaining and Sharing Layer 3 Routes (contd.)

The routing information learned and maintained by the network Node group RE is shared with the appropriate components in the system through the fabric control protocol. This Layer 3 routing information is stored in the default.inet.0 table on the various system components requiring such information. A sample output showing the received contents of the default.inet.0 table on a server Node group follows:

qfabric-admin@sng1> show route fabric table default.inet.0

default.inet.0: 18 destinations, 43 routes (18 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[BGP/170] 12:19:28, localpref 101, from 128.0.128.6
                      AS path: I
                    > to 128.0.128.4:137(NE_PORT), Layer 3 Fabric Label 5
                    [BGP/170] 12:19:28, localpref 101, from 128.0.128.8
                      AS path: I
                    > to 128.0.128.4:137(NE_PORT), Layer 3 Fabric Label 5
172.25.50.0/24     *[INET/40] 14:38:10
                    > to 128.0.130.10 via dcfabric.0
                    [BGP/170] 14:22:22, localpref 101, from 128.0.128.6
                      AS path: I
                      to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 29
                    > to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 41
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 29
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 41
                    [BGP/170] 14:22:22, localpref 101, from 128.0.128.8
                      AS path: I
                      to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 29
                    > to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 41
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 29
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 41
172.25.50.1/32     *[INET/40] 14:38:10
                    > to 128.0.130.10 via dcfabric.0
                    [BGP/170] 14:22:22, localpref 101, from 128.0.128.6
                      AS path: I
                      to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 29
                    > to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 41
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 29
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 41
                    [BGP/170] 14:22:22, localpref 101, from 128.0.128.8
                      AS path: I
                      to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 29
                    > to 128.0.128.4 via dcfabric.0, PFE Id 7 Port Id 41
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 29
                      to 128.0.128.4 via dcfabric.0, PFE Id 12 Port Id 41
...


Resolving Layer 3 Addresses The Layer 3 routing information provided by the network Node group to all other server Node groups through the default.inet.0 table is used by the Node devices to perform local resolution tasks. As server Node groups learn details from attached devices through the Address Resolution Protocol (ARP), they share those details with other Node groups throughout the system. We illustrate this process and the role ARP plays within the QFabric system for Layer 3 routing on subsequent slides. This distributed approach used to share Layer 3 reachability information helps with the scaling and performance capabilities within the system.


Host Resolving Default Gateway This slide is animated.

The slide provides an illustration and the relevant details for the steps used by a QFabric system to process incoming ARP requests for an address associated with a local Layer 3 interface. The steps for this process follow:

1.	An attached device, Host Y in this case, sends an ARP request for the IP address of its defined gateway. In this example, the QFabric system has a defined RVI, which serves as the gateway for Host Y.

2.	The server Node group (sng7) receives the incoming ARP request, performs a local lookup in its default.inet.0 table, and sends an ARP reply back to Host Y. Note that a matching entry must exist in the server Node group's route table for a response to be issued. During this step, the server Node group checks to see if the ARP details of Host Y are known by the system. In this case, the ARP details for Host Y are unknown to the system, so the server Node group builds a MAC address and IP address binding.

3.	The server Node group advertises the newly learned ARP information for Host Y to the system using the fabric control protocol. This advertisement takes place over the control plane network and is directed to the fabric control REs, which then reflect the information on to other Node groups throughout the system.

4.	Once all other Node groups receive the advertisement that includes Host Y's MAC address and IP address binding, they update their route tables to include this information. This updated information can be used later for Layer 3 unicast forwarding efforts.


Resolving a Destination Address: Scenario 1 This slide is animated.


The slide provides an illustration and the relevant details for the steps used by a QFabric system to process incoming ARP requests for an address associated with another attached device. A key point to consider in this example is that the server Node group where the ARP request is generated and the server Node group where the ARP response is received from the host are both participating in the VLAN to which the destination host belongs. The steps for this process follow:
1. An attached device, Host X in this case, sends an IP packet destined to a subnet known by the ingress server Node group, which has attached ports in the destination RED VLAN, but the destination IP address is not yet known (no ARP entry exists for the host). The ingress server Node group (sng1) receives the packet, generates an ARP request for Host Y, and sends the ARP request to all PFEs and local egress ports associated with the VLAN of the destination subnet.
2. All relevant server Node groups receive the ARP request and forward it out their egress ports associated with that VLAN. Host Y receives the ARP request and responds back to the sng7 server Node group. Once sng7 receives the incoming ARP response, it builds a MAC and IP binding and then advertises this new information to the fabric control REs, which then reflect the information on to other Node groups throughout the system.
3. Once all other Node groups receive the advertisement that includes Host Y’s MAC and IP binding, they update their route table to include this information. This updated information can be used later for Layer 3 unicast forwarding efforts.


Resolving a Destination Address: Scenario 2 This slide is animated.

The slide provides an illustration and the relevant details for the steps used by a QFabric system to process incoming ARP requests for an address associated with another attached device. A key point to consider in this example is that the server Node group where the ARP request is generated is not participating in the VLAN to which the destination host and its attached server Node group belong. Because the two server Node groups do not both belong to this VLAN, special handling is required that involves the network Node group. The steps for this process follow:
1. An attached device, Host X in this case, sends an IP packet destined to a subnet unknown to the ingress server Node group; the ingress server Node group does not have any ports associated with the destination RED VLAN. The ingress server Node group (sng1) receives the packet, adds a fabric header to the packet, and sends the encapsulated packet to the network Node group RE through one of the network Node group PFEs.
2. The active network Node group RE receives the encapsulated packet, generates the appropriate ARP request for the packet, and sends the newly generated ARP request to all Node groups associated with the VLAN on which Host Y resides.

Continued on the next page.


Resolving a Destination Address: Scenario 2 (contd.) The remainder of the steps for the process illustrated on the slide follow:
3. All relevant Node groups receive the ARP request and forward it out their egress ports associated with that VLAN. Host Y receives the ARP request and responds back to the sng7 server Node group. Once sng7 receives the incoming ARP response, it builds a MAC and IP binding and then advertises this new information to the fabric control REs, which then reflect the information on to other Node groups throughout the system.
4. Once all other Node groups receive the advertisement that includes Host Y’s MAC and IP binding, they update their route table to include this information. This updated information can be used later for Layer 3 unicast forwarding efforts.


Layer 3 Unicast Forwarding The previous slides illustrated the processes involved in building the ARP table on a QFabric system. Once all of the ARP entries are in place, Layer 3 unicast forwarding through the system can occur. This slide highlights the basic steps used when forwarding Layer 3 unicast packets.

When a Node group receives a packet from a connected host with known source and destination addresses, it performs a forwarding lookup and identifies the destination PFE ID (also known as the Module ID). The Node group then decrements the time-to-live (TTL) value within the packet, encapsulates the packet by adding a fabric header that includes the destination PFE ID, and sends the newly encapsulated packet on to a selected Interconnect device.

The Interconnect device receives and examines the encapsulated packet. Once the destination PFE ID is identified, the Interconnect device sends the encapsulated packet on to the destination Node device.

The destination Node device receives the packet, performs a lookup to learn the egress port toward the destination host, removes the fabric header, rewrites the MAC address within the packet, and then sends the packet on to its intended destination.


Case Study The slide highlights the topic we discuss next.


Case Study: Objectives and Topology The slide provides the objectives and topological details for this case study.


Configuring Interfaces The slide illustrates the configuration of the Layer 3 LAG and the RVIs specified in this case study. While there are Layer 2 interfaces used in this sample environment, we do not show them in this case study. We also do not show the Node-1 LAG member interface, which is configured the same as the Node-0 LAG member interface, for brevity. Refer to the previous chapter for Layer 2 interface configuration examples.
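
The case study slide itself is not reproduced in this text, so the following set-style sketch shows only the general shape such a configuration might take. The member interface name, LAG number, and all addressing are illustrative assumptions, not the actual case study values:

set interfaces Node-0:xe-0/0/10 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family inet address 172.25.100.2/24
set interfaces vlan unit 50 family inet address 172.25.50.1/24
set interfaces vlan unit 51 family inet address 172.25.51.1/24

The vlan.50 and vlan.51 units are the RVIs; they remain down until they are associated with VLANs, which the next slide covers.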


Assigning RVIs to VLANs The slide illustrates the required configuration used to assign the previously defined RVIs to their respective VLANs. Note that these RVIs serve as gateway addresses for host devices participating in these VLANs.
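
As a rough sketch, the association is a one-line statement per VLAN; the VLAN names and IDs below are assumptions for illustration:

set vlans v50 vlan-id 50
set vlans v50 l3-interface vlan.50
set vlans v51 vlan-id 51
set vlans v51 l3-interface vlan.51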


Configuring a Default Route The slide illustrates the required configuration used to define a default static route for the system. As noted on the slide, this route directs all traffic destined to remote subnets to the SRX Series device. As mentioned earlier in this chapter, all Layer 3 devices along the forwarding path must have the proper routing information to ensure end-to-end reachability.
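
A minimal sketch of such a static route follows, assuming the SRX Series device’s LAG address is 172.25.100.1 (a hypothetical value):

set routing-options static route 0.0.0.0/0 next-hop 172.25.100.1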


Verifying Layer 3 Interfaces The slide illustrates the operational mode command used to verify the state of all Layer 3 interfaces in our sample topology. The associated output on the slide shows that all Layer 3 interfaces are administratively and operationally up. We also know, based on this output, that at least one Layer 2 interface within each of the configured VLANs is functional. If there is not at least one operational Layer 2 interface in a VLAN to which an RVI is applied, the RVI will not be up.
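
A command of the following form, with illustrative output consistent with the addressing assumed earlier, is one way to perform this check:

qfabric-admin@qfabric> show interfaces terse vlan
Interface    Admin Link Proto Local
vlan         up    up
vlan.50      up    up   inet  172.25.50.1/24
vlan.51      up    up   inet  172.25.51.1/24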


Verifying Layer 3 Routes The slide illustrates the operational mode command used to verify the routes in the route table. The output on the slide shows the expected direct and local routes as well as the defined static route. Remember that these routes are shared with all Node groups and are ultimately used for resolution and forwarding tasks. We perform some basic reachability tests on the next slide which depend on the routes shown here.


Testing Connectivity This slide shows some basic ping tests used to validate inter-VLAN reachability. The successful ping results imply that all Layer 3 configuration is correct on the connected host devices as well as on the QFabric system. We can also see that, as a result of the ping tests, some ARP entries have been added to the ARP table. Note the use of the | except bme command options. These command options filter out a large number of internal ARP entries that would otherwise be displayed. Continued on the next page.
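
For reference, the tests shown on the slide follow this general pattern (the host address here is assumed):

qfabric-admin@qfabric> ping 172.25.51.2 rapid count 5
qfabric-admin@qfabric> show arp | except bme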


Testing Connectivity (contd.) The ARP entries shown on the slide also exist in the default.inet.0 table on all Node groups. ARP entries learned by a given Node group appear differently than those learned through the fabric control protocol by a remote Node group. The sample output that follows illustrates this point:

qfabric-admin@RSNG-1> show route fabric table default.inet.0 | match ARP
172.25.51.3/32     *[ARP/40] 17:14:15
172.25.200.2/32    *[ARP/40] 17:18:11

qfabric-admin@RSNG-1> show route fabric table default.inet.0 172.25.50.2
default.inet.0: 18 destinations, 43 routes (18 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.25.50.2/32     *[BGP/170] 17:10:12, localpref 100, from 128.0.128.6
                      AS path: I
                    > to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 7 Port Id 29
                    [BGP/170] 17:10:12, localpref 100, from 128.0.128.8
                      AS path: I
                    > to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 7 Port Id 29

qfabric-admin@RSNG-1> show route fabric table default.inet.0 172.25.51.2
default.inet.0: 18 destinations, 43 routes (18 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.25.51.2/32     *[BGP/170] 17:32:04, localpref 100, from 128.0.128.6
                      AS path: I
                    > to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 12 Port Id 29
                    [BGP/170] 17:32:04, localpref 100, from 128.0.128.8
                      AS path: I
                    > to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 12 Port Id 29

qfabric-admin@RSNG-1> show route fabric table default.inet.0 172.25.100.1
default.inet.0: 18 destinations, 43 routes (18 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.25.100.1/32    *[BGP/170] 18:28:58, localpref 100, from 128.0.128.6
                      AS path: I
                    > to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 7 Port Id 41
                      to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 12 Port Id 41
                    [BGP/170] 18:28:58, localpref 100, from 128.0.128.8
                      AS path: I
                    > to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 7 Port Id 41
                      to 128.0.128.4 via dcfabric.0, Layer 3 Fabric Label 5 PFE Id 12 Port Id 41

In the previous outputs, we see that the entries for 172.25.51.3 and 172.25.200.2 were learned locally, while the other three entries were learned by a remote Node group and received through the fabric control protocol.


This Chapter Discussed:
• Supported Layer 3 features;
• Commonly used Layer 3 connections;
• Layer 3 operations and traffic flow; and
• Configuring and monitoring Layer 3 features.


Review Questions
1. RVIs are logical Layer 3 interfaces configured on QFabric systems. These interfaces are associated with VLANs and often serve as gateways for hosts on the VLANs to which they are assigned.
2. Some of the available first-hop router options mentioned in this chapter include RVIs, SRX Series devices, MX Series devices, or a hybrid scenario that uses more than one of these options.
3. ARP entries learned by one Node group are shared with other Node groups associated with the same Layer 2 domain through the fabric control protocol.


Lab 3: Layer 3 Features and Operations The slide provides the objective for this lab.

Chapter 7: Network Storage Fundamentals


This Chapter Discusses:
• The purpose and challenges of storage in the data center;
• Data center storage technologies; and
• Data center storage networking protocols.


Data Center Storage Overview The slide lists the topics we cover in this chapter. We discuss the highlighted topic first.


Overview of the Data Center The slide illustrates an example data center and how storage components fit into the data center. A collection of client devices, such as mobile or desktop devices, has access to servers in the data center. The servers have many functions, such as hosting applications, databases, presentations, data backups, enterprise resource planning (ERP) systems, or network-attached storage (NAS). The servers must get data from a hard disk residing on the server itself or somewhere else, such as a storage pool. The growth of storage and servers in the data center creates an ideal scenario for a storage area network (SAN). Storage technologies such as NAS, Fibre Channel, and iSCSI enable the applications on the servers to access storage. Multiple methods exist for accessing storage. We discuss these methods on subsequent slides.


Direct-Attached Storage The slide illustrates the simplest example of direct-attached storage (DAS): a server with a hard disk inside. Applications that must store or retrieve data use a component of the operating system known as a file system. The file system organizes and manages the data on storage devices. File systems create and delete files. They open files so that applications can gain access to file data and close files when that access is no longer needed. File systems do the actual reading and writing of the data on behalf of applications. In this example, a server connects to the hard disk with a Small Computer System Interface (SCSI) connection; however, the connection could also be made with Advanced Technology Attachment (ATA), Integrated Drive Electronics (IDE), or Serial ATA (SATA). The storage need not reside within the server itself to qualify as DAS. The storage might reside externally in the form of multiple disk drives in an enclosure. As long as the storage connects directly to a computer without any intervening network device, it is known as DAS.


DAS Challenges DAS is very fast because it is directly connected to the motherboard. DAS worked well when data centers contained only a handful of servers, but it became less effective as data centers grew. In a data center with many servers, each with its own dedicated disk, operators often would have to add storage to one server while another had plenty of storage to spare. This led to several inefficiencies:
• Businesses wasted money purchasing servers with more storage than they might ever need in an effort to avoid expanding them again later.
• Adding storage to old servers when others had capacity wasted time and effort.
• Time was lost in planning for system downtime and during the downtime itself.
• Unused server space represented wasted money.

To solve this challenge, a network is put in the middle—enabling resource sharing. At a basic level, you remove most of the storage from a server, and then multiple servers access a single pool of storage across a network. Thus you allocate storage as needed and manage it from the servers. In general, each server boots from a very limited internal disk, but in some cases you can even store a server’s operating system in the shared disk pool. Insertion of this network in the middle is the fundamental difference between DAS and SAN or NAS.


Network-Attached Storage NAS consists of one or more hard disks connected to a standard Ethernet, TCP/IP network to provide data access to various operating systems, such as Windows, Linux, IBM’s AIX, and ESX. A NAS server itself runs an operating system. If the operating system has a function to make the system run as NAS, the system is considered NAS regardless of the disk size. Typical protocols used in a NAS environment are Network File System (NFS) for Unix and Common Internet File System (CIFS) for Windows (also known as Server Message Block (SMB)).


Scaling with NAS Because NAS is designed for file sharing rather than pure data storage, it allows multiple servers to access the same data. In the diagram depicted on the slide, two servers are pointing to one NAS. A Windows host is using \\nas01.xyz.com\datastore, while the Linux host is using /ext/datastore. This behavior is acceptable because NAS speaks the appropriate protocol for each operating system, such as NFS for Unix-based systems or CIFS for Windows-based systems. On the NAS side, the server maps the folder location to a specific logical volume, which is called a logical unit and is identified by a logical unit number (LUN). A LUN is composed of some physical partitions on the physical hard disk.


NAS Advantages NAS makes sharing files much easier than other storage methods, mostly due to its ability to serve files directly. As servers move their file-serving responsibilities to NAS, the overall performance of those servers inherently increases.

NAS Disadvantages NAS has physical limitations in that it can handle only a finite number of users and I/O operations. In addition, some NAS devices are unable to perform tasks that typically are done by a file server, such as computing the disk usage of separate directories and rapidly indexing files. As data centers grow, there is also the potential for what is known as NAS sprawl. NAS sprawl occurs when NAS devices are added in an ad hoc fashion to the network, resulting in increased management responsibility.


Storage Area Networks A SAN is an architecture that attaches remote storage devices to servers. SANs use a different technology that provides servers with block-level access to disks rather than file-system access, which is a fundamental difference from NAS. The other big difference is interface implementation. Depending on how a SAN is built, the interface differs: it can be Ethernet, Fibre Channel (FC), or InfiniBand (IB). In a SAN environment, the interface is typically called a Host Bus Adapter (HBA). Because SANs provide servers block-level access to data, SANs must transport SCSI commands. The slide lists several protocols used to convey SCSI commands back and forth between server and storage, including Internet SCSI (iSCSI), FC, Fibre Channel over Ethernet (FCoE), and Fibre Channel over IP (FCIP).


SAN Scaling Block-level access means that a server owns one or more storage resources to which it has exclusive access; in most cases, they are not shared with other servers. In a typical SAN environment, multiple servers cannot access the same LUN simultaneously because the SAN lacks the coordination to ensure data integrity when blocks of data change. This is a key difference when compared with a NAS environment, where multiple servers can access the same LUN simultaneously because NFS or CIFS performs the coordination. In the diagram, one server is pointing to one LUN identified with the Z character, while the other is pointing to a separate LUN identified by the Y character.


SAN Advantages The benefits of a SAN include:
• Added flexibility and better control of network resources because cables and storage devices do not need to be shifted from one server to another server on the network;
• Servers can boot directly from the SAN itself; thus you can replace and reconfigure faulty servers quickly and easily; and
• Low processing overhead due to direct block data access.

SAN Disadvantages The drawbacks of a SAN include:
• Management and maintenance can be difficult due to system complexity;
• In general, a SAN does not permit data sharing; and
• SAN devices are expensive—the initial investment can be prohibitive.


SAN and NAS Comparison The diagram on the slide illustrates the differences between the abstraction layers of the SAN and NAS methods of storage. Both systems have an application for reading and writing data at the top layer and physical disks that process I/O commands at the bottom layer. The models get even more complex when you add in server virtualization and the hypervisor. Note that abstraction layers provide many benefits in terms of flexibility, but they also add layers of complexity, such as code and processing.


Hybrid SAN and NAS Environments SAN and NAS are not mutually exclusive. In this example, a NAS head, also known as a NAS gateway, is used to bridge the two technologies. In this hybrid setup, the server sees file system access but, from a storage perspective, access is block based. A NAS head masks a heterogeneous environment to create a storage pool. This example illustrates that NAS and SAN are options that can be used together to build a solution based on needs. This scenario might also provide a transition mechanism to move from NAS to a SAN.


Storage Technologies The slide highlights the topic we discuss next.


Disk Storage A disk system is a device in which a number of physical storage disks sit side by side. A disk system usually has a central control unit that manages all the I/O, simplifying the integration of the system with other devices. JBOD (short for just a bunch of disks) is a disk system that appears as a set of individual storage devices. Its central control unit provides only basic functionality for writing and reading data from the disks; JBOD does not account for fault tolerance or increased performance. A RAID system has a central control unit that provides additional functionality to utilize the individual disks, achieving higher fault tolerance and performance. Like JBOD, the disks appear as a single storage unit to the connected devices.

Tape Storage Tape systems, similarly to disk systems, provide the necessary tools to manage the use of tapes for storage purposes. Tape systems can come in the form of tape drives, tape autoloaders, and tape libraries.


Storage Access Protocols A number of storage interconnection standards evolved over time. Those standards can be loosely grouped into two sets:
• Low-end interconnections—IDE, ATA, and SATA; and
• High-end interconnections—SCSI, FC, Serial Attached SCSI (SAS), Enterprise Systems Connection (ESCON), and Fibre Connection (FICON).


Comparing Storage Access Protocols The slide provides a detailed view of the underlying storage networking protocols. (We ignore ATA, IDE, and SATA because they are mostly client-side technologies, although they do sometimes make an appearance in a server.) Four columns of information are shown in the diagram:
• On the left are applications communicating by means of SCSI over FC, FCoE, iSCSI, and InfiniBand;
• On the right are applications communicating by means of NAS;
• On the middle left are two gateway entries referencing technologies used to stretch FC over a MAN or WAN (these protocols are not exposed to the server); and
• On the middle right is mainframe over FC.


Accessing Data Using SCSI The SCSI protocol is actually a set of standards, defined by a subgroup of the ANSI-accredited InterNational Committee for Information Technology Standards (INCITS), that define not only physical characteristics but also commands. The slide provides an example of SCSI commands. In a SAN environment, the interface can vary. In a NAS environment, the interface is just a regular Ethernet interface.


SCSI over IP iSCSI is serial SCSI over TCP/IP. iSCSI utilizes SCSI commands over IP networks and is common in SAN architectures. iSCSI uses TCP ports 860 and 3260. iSCSI is popular with small-sized and medium-sized businesses, which have a wealth of mid-sized iSCSI storage solutions to choose from. A TCP Offload Engine (TOE) network interface card (NIC) offers an alternative to a full iSCSI HBA. The NIC offloads the TCP/IP operations from the host processor, freeing up CPU cycles for the main host applications. (Note that the CPU still performs the iSCSI protocol processing.) iSCSI storage, servers, and clients use the Challenge Handshake Authentication Protocol (CHAP) to authenticate. In addition, you can deploy IPsec at the Network Layer.


iSCSI Operation iSCSI has three key building blocks, defined as follows:
• iSCSI initiator: The first key element is the iSCSI initiator, the source of iSCSI commands. The initiator resides on the server in the form of a specialized hardware adapter (HBA), or as software used in tandem with a standard Ethernet network adapter.
• iSCSI target: The second key element is the iSCSI target, which is the storage device itself. iSCSI targets can provide read/write security to initiators.
• Internet Storage Name Service (iSNS): The last key element is the iSNS, which is a software service used for the discovery of devices in an iSCSI network. It behaves like DNS in the IP world.


iSCSI Qualified Name Similar to a fully qualified domain name (FQDN) in the IP world, an iSCSI Qualified Name (IQN) is a naming convention used to identify iSCSI nodes. The owner of the domain name can assign everything after the colon as desired. The naming authority must ensure that the iSCSI initiator and target names are unique.
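
A hypothetical IQN, broken into its parts, helps illustrate the convention (the domain and suffix here are invented for the example):

iqn.2012-10.net.example:storage.array01
 |      |        |              |
 |      |        |              +-- string chosen by the naming authority
 |      |        +-- reversed domain name of the naming authority
 |      +-- year and month the domain was registered to that authority
 +-- fixed type designator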


iSCSI Interface Cards The slide shows three options for implementing iSCSI connectivity. On the left is a simple Ethernet NIC, where software implements the IP, TCP, iSCSI, and SCSI layers in their entirety, typically in the operating system. On the right is an iSCSI HBA or converged network adapter (CNA), where the IP, TCP, and iSCSI layers are implemented in hardware. The middle example makes use of a TOE. In this case, the card itself runs the TCP/IP layer, and only the iSCSI and SCSI layers remain in software.


Fibre Channel History Fibre Channel (FC) was created in 1988 as a traditional channel protocol, but with the flexibility of network topologies. It gained ANSI approval in 1994, and became increasingly popular in the late 1990s and through the “dot com” boom of the early 2000s. Initially, FC used hub-based arbitrated loop technology and point-to-point implementations.

Fibre Channel Evolution The hub approach had limitations similar to those of coaxial-based Ethernet type (such as 10Base2), though, and in the late 1990s the topology changed to switch-based. (Technically, FC is more like a Layer 3 protocol, but we tend to say “switched.”) Early versions of FC ran at 1 Gbps; its current speed is 16 Gbps and higher speeds are already planned in future revisions. FC can carry multiple higher level protocols including IP, though it is most commonly used to carry SCSI.


Guide to Initial Understanding of Fibre Channel To familiarize yourself with the technologies of Fibre Channel, the table on the slide compares LAN and SAN protocols. In an IP or LAN environment, a device has a MAC address and an IP address, which is similar in a SAN to the worldwide name (WWN) and Fibre Channel ID (FCID), respectively. While the LAN world utilizes VLANs for virtual segmentation, a SAN makes use of virtual SANs (VSANs) for the same purpose. Zoning in SAN environments provides a security method similar to access control lists (ACLs) in the LAN world. The LAN environment might utilize OSPF to determine the best path to a destination. In a SAN, a similar functionality is known as Fibre Shortest Path First (FSPF). Domain Name Service (DNS) and Dynamic Host Configuration Protocol (DHCP) roughly map to Fibre Channel’s name server (NS) and fabric login (FLOGI), respectively. The terms are different, but the overlapping concepts provide a useful introduction.


Fibre Channel Layers The slide illustrates the breakdown of some of the core parts of FC and shows how FC approximately aligns to both the Open Systems Interconnection (OSI) model and the actual TCP/IP model. Note that FC is, in fact, a complete stack with services logically sitting at all levels. We simplify it by viewing it from these three perspectives:
• Low-level connectivity;
• Layer 3-based forwarding (FCF, or Fibre Channel Forwarding); and
• Higher-level services (FCS, or Fibre Channel Services).

FC0 corresponds to the OSI Physical Layer, which includes cables, fiber optics, connectors, and pinouts. Fibre Channel supports two types of cables: copper and optical. Copper is used for connecting storage devices over short distances, while optical cabling is used for connecting Fibre Channel over longer distances because it is less susceptible to noise.

FC1 corresponds to the Data Link Layer and implements Layer 2 encoding and decoding, which is required to improve the transmission of information across the Fibre Channel network.

FC2 represents the Network Layer and constitutes the Fibre Channel protocols. FC2 defines the framing rules of the data to be transferred between ports, the ways for controlling the three service classes, and the means of managing the data transfer.

Continued on the next page.


Fibre Channel Layers (contd.) FC3 relates to the Services Layer. It is responsible for implementing functions such as encryption. The FC3 layer provides several advanced features, such as:
• Striping: Multiplies bandwidth by transmitting a single information unit across multiple links.
• Hunt groups: Allow more than one port to respond to the same address, which improves efficiency.
• Multicast: Delivers a single transmission to multiple ports.

FC4 is the Protocol Mapping Layer. It resides at the Application Layer of the OSI model. FC4’s responsibility is to map other protocols, such as SCSI and IP, and it allows both protocol types to be transported concurrently over the same physical interface. The network and channel upper-layer protocols (ULPs) that FC4 maps are as follows:
• SCSI;
• IP;
• FICON;
• High Performance Parallel Interface (HIPPI) Framing Protocol;
• Link Encapsulation (FC-LE);
• IEEE 802.2;
• Asynchronous Transfer Mode—Adaptation Layer 5 (ATM-AAL5);
• Intelligent Peripheral Interface—3 (IPI-3) (disk and tape); and
• Single Byte Command Code Sets (SBCCS).


Fibre Channel over Ethernet The lower layers of Fibre Channel provide a reliable, deterministic network designed specifically to meet the needs and topologies of the data center. To meet the needs of network convergence, Fibre Channel over Ethernet (FCoE) was created with the goal of using a common network (as with iSCSI and NAS). Because FCoE is not routable and does not work across IP networks, it requires enhancements to Ethernet standards to support flow control, thus preventing congestion and frame loss. These enhancements are data center bridging (DCB), also known as converged enhanced Ethernet (CEE). Note that the FC, FCoE, and SCSI standards are derived from the ANSI-accredited INCITS subgroup, and Ethernet standards are derived from the Institute of Electrical and Electronics Engineers (IEEE) association. Therefore, FCoE requires knowledge and interoperability from both organizations.


FCoE Evolution In many data center environments today, servers are equipped with multiple NICs for storage, management, infrastructure connectivity, and computing. This configuration results in hardware and software duplication, increasing the operational costs. To overcome these duplications, standards were developed to converge FCoE onto shared Ethernet data center LANs.


Data Center Bridging DCB was designed as a standard so that Fibre Channel traffic could traverse a shared Ethernet network with little or no packet loss. The DCB (also known as CEE) specifications are:
• IEEE 802.1Qbb—Specifies priority-based flow control (PFC). This specification provides a per-priority level flow control mechanism to ensure zero loss due to congestion in the DCB network.
• IEEE 802.1Qaz—Specifies enhanced transmission selection. This specification provides a common management framework for assigning bandwidth to traffic flows. The specification also defines the Data Center Bridging Capability Exchange (DCBX) protocol, which is used to exchange configuration information with the directly connected peer.
• IEEE 802.1Qau—Specifies congestion notification. This specification provides end-to-end congestion signaling and per-queue rate limiting for long-lived flows.
• IEEE 802.1aq—Specifies Shortest Path Bridging. This specification is important for complex topologies. It eliminates the STP algorithm requirement that Layer 2 topologies typically impose and allows for Layer 2 multipathing.

We discuss these specifications in detail in the next chapter.


The FCoE Stack The slide illustrates how the lower layers of FC change when using FCoE.


FCoE Interface Cards Similar to iSCSI interface cards, card manufacturers either implement a software FCoE stack or offload it to hardware. The only difference is that FCoE sits directly on Ethernet rather than on TCP/IP, and thus no need for a TCP/IP offload layer exists. In practice, many of the FCoE CNAs are iSCSI HBAs as well.


iSCSI Versus FCoE The table on the slide compares and contrasts the iSCSI and FCoE protocols. It is important to note that the iSCSI protocol can be implemented in networks that are subject to packet loss, and that iSCSI can run over 1 Gbps Ethernet. FCoE requires 10 Gbps Ethernet and a lossless network with infrastructure components that properly implement pause-frame requests and priority-based flow control (PFC) based on separate traffic classes that map to different priorities. PFC allows high-priority traffic to continue flowing while lower-priority traffic is paused.


This Chapter Discussed:
• The purpose and challenges of storage in the data center;
• Data center storage technologies; and
• Data center storage networking protocols.


Review Questions
1. Data is stored as raw blocks of storage in a SAN environment, which is one of the key differences between a SAN and NAS.
2. A CNA combines the functionality of an HBA and an Ethernet NIC into one interface card. Both standard Ethernet traffic and storage traffic such as FCoE can traverse the same physical interface.
3. Two methods for extending storage access over a WAN are iSCSI and FCoE. FCoE allows the transportation of SCSI over standard Ethernet networks. iSCSI allows the transportation of SCSI over TCP/IP.

Chapter 8: Fibre Channel


This Chapter Discusses:
• Fibre Channel operation;
• Fibre Channel layers and speeds;
• Logging in to a Fibre Channel fabric;
• Fibre Channel over Ethernet (FCoE) and FCoE Initialization Protocol (FIP); and
• Fibre Channel and FCoE configuration and monitoring.


Fibre Channel Operation The slide lists the topics we cover in this chapter. We discuss the highlighted topic first.


Basic View of a Fibre Channel Fabric Three types of Fibre Channel (FC) topologies exist:
• FC-P2P is a point-to-point connection between two FC devices.
• FC-AL, or FC arbitrated loop, comprises a loop of FC devices.
• FC-SW, or switched fabric, is the most common topology today and serves as the focus of this chapter.

The slide depicts a simple FC switched fabric topology. FC switches reside at the core of the fabric, and servers and various storage media connect at the edge of the fabric. Three primary port types exist in an FC fabric:
• N_Port, also known as a node port, is a port that resides on a host or storage device and connects to an F_Port.
• F_Port, also known as a fabric port, is a port that resides on an FC switch and connects to an N_Port.
• E_Port, also known as an expansion or extender port, is a port that resides on an FC switch and connects to another E_Port on another FC switch. These links are sometimes called interswitch links (ISLs).


Fibre Channel Layers The slide compares the layers of Fibre Channel to those of the OSI model. The diagram serves as a review from the previous chapter. As a guideline, FC hubs, which are used only for FC-AL topologies, operate at FC0. FC switches operate up to the FC2 layer and FC routers operate up to the FC4 layer. Most of what we discuss in this chapter pertains to the FC2 layer.


FC Speeds The table on the slide lists the current line rates and throughputs for FC. Higher speeds are specified and will arrive at future dates. Currently, the Junos OS FC interfaces on QFX3500 switches support 8 GFC, which equates to an 8.5 Gbps line rate and a throughput of around 1600 MBps. Note that the FC community commonly measures throughput in megabytes per second (MBps) rather than megabits per second (Mbps). FC maintains backwards compatibility for a minimum of two previous generations. While FC interface speeds are measured with the Base2 number system, FC ISLs actually use the Base10 numbering system and currently support 10 GFC and 20 GFC speeds. Fibre Channel over Ethernet (FCoE) currently supports only the 10 GFCoE speed. However, 40 GFCoE and 100 GFCoE speeds are planned.


FC Frame Format The slide illustrates the format of an FC frame. The D_ID represents the destination FC identifier (FCID). Note that the maximum size of an FC frame is 2148 bytes, which is significant when dealing with FCoE. We discuss this issue on subsequent slides.


Worldwide Name FC uses worldwide names (WWNs) in a manner similar to Ethernet’s MAC addresses. Two types of WWNs exist for FC—the worldwide node name (WWNN) and the worldwide port name (WWPN). The WWPN is most widely used. A WWN is a 64-bit address. As shown on the slide, the first two bytes represent a header, with the Network Address Authority (NAA) format. The next three bytes represent the IEEE-assigned Organizationally Unique Identifier (OUI). The last three bytes are assigned by the vendor. Note that three common formats exist for WWNs defined by the IEEE. Each of these formats is identified by the leading NAA bits. The example shown is known as the IEEE Extended format, designated by the NAA bits equaling 2 (NAA=2). Other formats include the IEEE Standard format (NAA=1) and the IEEE Registered Name format (NAA=5). The slide illustrates example WWN addresses using the show fibre-channel flogi nport command.
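
As an illustration of the IEEE Extended format described above, consider the following hypothetical WWPN (the OUI shown is a placeholder, not a real vendor assignment):

20:00:00:11:22:33:44:55
  |        |        |
  |        |        +-- 33:44:55 = vendor-assigned portion
  |        +-- 00:11:22 = IEEE-assigned OUI
  +-- 20:00 = header; leading NAA bits equal 2 (IEEE Extended)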


Fibre Channel Identifiers A Fibre Channel Identifier (FCID) is a 24-bit address used to route FC frames much like an IP address is used to route IP traffic. The slide identifies each of the three octets including the domain ID, area ID, and port ID. The slide illustrates the formula for determining the FCID. The resulting address number is converted to hexadecimal format as shown in the output of show fibre-channel routes.
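
Assuming the slide’s formula is the standard byte packing of the three IDs into the 24-bit address, a worked example looks like this (all values are hypothetical):

domain ID = 100 (0x64), area ID = 1 (0x01), port ID = 2 (0x02)
FCID = (100 x 65536) + (1 x 256) + 2 = 6553858 decimal = 0x640102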


FC Zones FC switches have the ability to zone traffic by WWN or associated ports. This function is similar to an access control list (ACL), in that it prevents unauthorized traffic from one zone to another. FC devices can be members of multiple zones.


Fabric Shortest Path First Another similarity to TCP/IP networking is the Fabric Shortest Path First (FSPF) protocol, which is very similar to TCP/IP’s Open Shortest Path First (OSPF). Like OSPF, FSPF dynamically calculates the shortest path to other switches in an FC fabric. It maintains a routing table with the destination FCID, next-hop interface, and cost. FSPF also recalculates routes in the event of a failure in the network.


Logging In to an FC Fabric The slide illustrates the steps involved when a device logs in to an FC fabric. There are two primary steps—fabric login (FLOGI) and port login (PLOGI). To initiate FLOGI, the N_Port sends an FLOGI frame to the well-known 0xFFFFFE address, similar to sending out a broadcast in an Ethernet network. The frame contains the device’s WWNN and WWPN. The fabric login service provides the device with an FCID. The device then interacts with the FC services to register values such as the WWNN, WWPN, FCID, CoS values, and others. This interaction is accomplished by sending PLOGI frames to the well-known 0xFFFFFC address. Finally, PLOGI is performed between both N_Ports for discovery and parameter negotiations, opening a direct connection between devices across the fabric.


Fibre Channel Has Built-in Flow Control Storage environments, sometimes called I/O networks, are not tolerant of data loss, reordering, or latency. Recall that in a storage environment such as a SAN, technology has extended what used to be simple, local I/O reads and writes across a network. Because of these requirements, FC has a built-in flow control mechanism. Flow control is used to handle situations in which a device receives frames faster than it can process them, which would otherwise result in dropped frames. With FC flow control, a sending device only transmits frames to another device when the receiving device is ready to accept them. During FLOGI, credits known as buffer-to-buffer credits (BB_Credit) are negotiated between ports. Credits refer to the number of frames a device can receive at a time. Credits are signalled using Receiver Ready (r_rdy) signals. The FC flow control mechanism compares a buffer-to-buffer credit counter (BB_Credit_CNT) with its BB_Credit. Upon FLOGI, the counter is initialized to zero. For each frame transmitted, the BB_Credit_CNT counter is incremented by 1. The value is decremented by 1 for each r_rdy received from the remote port. Transmission of an r_rdy indicates that the remote port has processed a frame, freed a receive buffer, and is ready for more. If the BB_Credit_CNT counter reaches its BB_Credit level, the port cannot transmit another frame until it receives an r_rdy. The next section discusses FCoE and the challenges of transporting storage traffic over an Ethernet network, which, by default, is subject to loss, congestion, and retransmissions.
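
Before moving to that discussion, a short walk-through of the counter behavior, using an invented credit value, may help:

1. FLOGI negotiates BB_Credit = 4; BB_Credit_CNT starts at 0.
2. The port transmits three frames; BB_Credit_CNT is now 3.
3. One r_rdy arrives from the remote port; BB_Credit_CNT drops to 2.
4. The port transmits two more frames; BB_Credit_CNT reaches 4, the BB_Credit level, so transmission stops until the next r_rdy arrives.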


Fibre Channel over Ethernet Operation The slide highlights the topic we discuss next.


Data Center Bridging As we discussed on the previous slide, FC traffic, and I/O traffic in general, requires a lossless network, and Ethernet is anything but lossless. This dilemma led to several standards released by an 802.1 working group known as data center bridging (DCB), sometimes referred to as Converged Enhanced Ethernet (CEE). The working group released the following standards:
• Priority-based flow control (PFC), defined in 802.1Qbb, outlines a flow control mechanism per priority level, including congestion notification.
• Enhanced transmission selection (ETS), defined in 802.1Qaz, more efficiently utilizes available bandwidth and buffers.
• Quantized congestion notification (QCN), defined in 802.1Qau, is not yet supported by the Junos OS. QCN allows an FCoE forwarder (FCF) to notify downstream devices about congestion so that it does not spread throughout a Layer 2 domain. An example of an FCF would be a QFX3500 acting as an FCoE gateway device.
• Data Center Bridging Capability Exchange (DCBX) is an extension of 802.1Qaz used to notify and negotiate the above parameters with neighbors. Note that DCBX utilizes Link Layer Discovery Protocol (LLDP) type/length/values (TLVs) for its communication, which means LLDP is required.


Priority-Based Flow Control The slide depicts PFC operation, which is configured in the Junos OS using a congestion notification profile. PFC allows an Ethernet PAUSE frame to be sent per-priority level. Standard Ethernet PAUSE frames are sent at a port-level. By having this ability at the priority level, transmission can be halted on individual queues, such as one for FCoE, instead of an entire interface.


Enhanced Transmission Selection When the load in a traffic class does not fully utilize its allocated bandwidth, ETS allows other traffic classes to use the available bandwidth. This helps accommodate the bursty nature of some traffic classes while maintaining bandwidth guarantees. ETS is enabled in the Junos OS using hierarchical scheduling and traffic control profiles.


DCBX Capability Negotiation and Communication As mentioned previously, DCBX is the protocol for exchange between neighboring switches. DCBX has three key functions. The first key function is discovery of the DCB-capable switch. The second key function is passing configuration parameters to peering switches. The third key function is the discovery of configuration parameters used in flow control. For example, PFC and ETS parameters must match on both ends to build an FCoE capable link.


FCoE The DCB standards were prerequisites to FCoE itself. FCoE comprises two protocols. The FCoE protocol defines the actual data plane communication.

FIP FCoE Initialization Protocol (FIP) defines the handling of control plane communication. FIP is used for discovery of FC entities connected to an Ethernet cloud. FIP is also used to log in to and log out from the FC network. Both FCoE and FIP use different Ethertypes, which are listed on the slide.


Logging In to an FC Fabric with FCoE Because FCoE is still an extension of Fibre Channel, it still has the same login process covered previously. However, now the communication utilizes Ethernet. This results in two additional processes before FLOGI—one is VLAN discovery and the other is FCF discovery.


FCoE Gateway In its simplest form, an FCoE gateway (GW) converts FCoE traffic to FC traffic and FC traffic to FCoE traffic. An FCoE gateway performs this conversion by acting as a proxy. An ENode, which is a node device connected using FCoE, detects the gateway’s virtualized VF_Port as an F_Port. The FCoE gateway also manages FIP discovery for FCoE devices.

N_Port ID Virtualization Juniper Networks QFX3500 switches acting as FCoE gateways support N_Port ID Virtualization (NPIV), which allows multiple N_Port FCIDs to utilize one physical N_Port using Fabric Discovery (FDISC) login commands.


FCoE Operation The slide depicts the operation of an FCoE environment with servers communicating through an FCoE gateway.


Transit FCoE Switch The slide shows a Junos device such as a QFX3500 switch acting as an FCoE transit switch. In this scenario, the switch is not acting as a proxy, but merely transporting FCoE traffic, which will eventually transit an FCF. A Juniper QFabric system might also act as an FCoE transit switch.

FIP Snooping While you can think of an FCoE transit switch as simply a bridge for FCoE traffic, the Junos OS has added an intelligence to this scenario known as FIP snooping. When FIP initiation such as VLAN discovery or fabric discovery is sent from the FCoE capable server, a QFX3500, EX4500, or QFabric node will record the source and destination FCoE MAC address, and the Junos OS will use the entry to validate subsequent FCoE sessions. So, if an unknown FCoE packet is sent without first having completed the login process, the Junos OS will drop the frame.


FCoE ENode MAC Addressing Two types of FCoE MAC addresses exist—the server provided MAC address (SPMA) and the fabric provided MAC address (FPMA). An SPMA is a MAC address the server has burned in or configured for FCoE traffic. However, it is not supported on Junos devices and is not widely used. With FPMA, a Junos device such as the QFX3500 switch provides a MAC address to the CNA for FCoE communication. The address is made up of two portions. The first portion is called the FC-MAP and is fixed at 0E-FC-00. The next portion of the FPMA is the FC-ID, provided by the upstream FCF.
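
Combining the fixed FC-MAP with a hypothetical FCID (the same illustrative 0x640102 value used earlier in this chapter) yields the following FPMA:

FC-MAP (fixed):     0E:FC:00
FCID (from FCF):    64:01:02
Resulting FPMA:     0E:FC:00:64:01:02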


Configuration and Monitoring The slide highlights the topic we discuss next.


QFX3500 Switch Native FC Support The QFX3500 switch can act as an FCoE gateway, or an FCF, with up to 12 ports supported for native Fibre Channel. Fibre Channel support requires an advanced feature license.


FCoE Gateway Configuration Overview The slide lists the general configuration steps required to configure a QFX3500 switch as an FCoE gateway.


Defining the FC Interfaces The slide shows the configuration of Port 42 through Port 47 as native FC interfaces. Recall that the QFX3500 supports up to 12 native FC interfaces. Port 0 through Port 5 can also be configured for native FC support. Interfaces configured for FC support do not support regular Ethernet traffic.
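
The chassis-level definition generally takes the following form; this sketch simply renders the port range named on the slide (Port 42 through Port 47) as a set command:

set chassis fpc 0 pic 0 fibre-channel port-range 42 47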


Configuring Interfaces The slide illustrates the configuration of the VLAN interface and the 10 GbE interface, which faces the FCoE servers. Note that both interfaces are configured to support jumbo frames. Jumbo frame support is required to accommodate FC frames, which can be up to 2148 bytes in size. The VLAN interface is configured similarly to an RVI, but instead of the inet family, it is configured with family fibre-channel and defined as a VF_Port by specifying port-mode f-port.
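
A sketch of the two interface stanzas might look as follows; the Ethernet interface name and MTU value are assumptions, and the tagged-access port mode is one plausible choice for an FCoE server-facing port:

set interfaces vlan unit 30 family fibre-channel port-mode f-port
set interfaces xe-0/0/10 mtu 2180
set interfaces xe-0/0/10 unit 0 family ethernet-switching port-mode tagged-access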


Defining the VLANs The slide shows the configuration of the required VLANs. One VLAN, named v30-FCoE, is used to carry FCoE traffic (and only FCoE traffic). The v30-FCoE VLAN is using a vlan-id of 30 and associates with the vlan.30 interface defined on the previous slide. The other VLAN represents the native VLAN.


Configuring FC Interfaces Once eligible interfaces have been defined for native FC use under the [edit chassis] hierarchy, they can be configured as fc- interfaces. On the slide, the fc-0/0/47 interface is configured with family fibre-channel and given a port-mode of np-port, representing a proxied N_Port.
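
Rendered as a set command, the configuration described here might look like this:

set interfaces fc-0/0/47 unit 0 family fibre-channel port-mode np-port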


FC Fabric Configuration The slide shows the configuration of the [edit fc-fabrics] hierarchy. In this example, only one fabric has been configured and named cmqs. The fabric has been assigned an ID of 30 and the FC and VLAN interfaces have been associated with the fabric.
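
A minimal sketch, assuming the hierarchy accepts the fabric name, ID, and member interfaces as separate statements:

set fc-fabrics cmqs fabric-id 30
set fc-fabrics cmqs interface fc-0/0/47.0
set fc-fabrics cmqs interface vlan.30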


Protocol Configuration The slide shows the protocol configuration for this example. LLDP and DCBX have been enabled for all interfaces, and IGMP snooping has been disabled for both the native and v30-FCoE VLANs.
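
In set form, the described protocol settings might look like the following; the native VLAN name is a placeholder:

set protocols lldp interface all
set protocols dcbx interface all
set protocols igmp-snooping vlan v29-native disable
set protocols igmp-snooping vlan v30-FCoE disable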


PFC Configuration Overview The slide lists the steps required to configure PFC.

Classifier Definition As the first step in PFC configuration, the 011 codepoint is configured for a low loss priority. This codepoint is used for FCoE traffic. The remaining codepoints are classified as best effort traffic.
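A sketch of such a classifier follows; the classifier name is assumed, and the remaining IEEE 802.1p code points are mapped to a best-effort forwarding class:

class-of-service {
    classifiers {
        ieee-802.1 fcoe-classifier {
            forwarding-class fcoe {
                loss-priority low code-points 011;
            }
            forwarding-class best-effort {
                loss-priority low code-points [ 000 001 010 100 101 110 111 ];
            }
        }
    }
}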


Congestion Notification Profile The slide illustrates the creation and application of a congestion notification profile. The codepoints for FC traffic are marked for priority-based flow control.
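A minimal sketch of the profile and its application, with the profile name and the ingress interface assumed:

class-of-service {
    congestion-notification-profile {
        fcoe-cnp {
            input {
                ieee-802.1 {
                    code-point 011 {
                        pfc;
                    }
                }
            }
        }
    }
    interfaces {
        xe-0/0/10 {
            congestion-notification-profile fcoe-cnp;
        }
    }
}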


ETS Configuration Overview The slide lists an overview of the steps required for ETS configuration.

Forwarding Class Sets Configuration ETS is implemented using hierarchical port scheduling in the Junos OS. Priorities are configured as forwarding classes and priority groups are configured as forwarding class sets. The slide shows the configuration of two forwarding class sets—one for FCoE traffic and another for all other traffic.
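A sketch of the two forwarding class sets, with the set names assumed:

class-of-service {
    forwarding-class-sets {
        fcoe-pg {
            class fcoe;
        }
        best-effort-pg {
            class best-effort;
        }
    }
}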


Scheduler Configuration The slide shows the scheduler and scheduler map configurations for our example. In this case the 10GbE link is divided equally with 5g scheduled for FCoE traffic and 5g scheduled for all other traffic. The scheduler map applies the schedulers to the appropriate forwarding classes.
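A sketch of the schedulers and scheduler maps, with names assumed; each forwarding class set receives its own scheduler map so that the traffic control profiles in the next step can reference them independently:

class-of-service {
    schedulers {
        fcoe-sched {
            transmit-rate 5g;
        }
        best-effort-sched {
            transmit-rate 5g;
        }
    }
    scheduler-maps {
        fcoe-map {
            forwarding-class fcoe scheduler fcoe-sched;
        }
        best-effort-map {
            forwarding-class best-effort scheduler best-effort-sched;
        }
    }
}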


Configuring Traffic Control Profiles Configuring traffic control profiles is a key step in ETS configuration. This step requires the association of scheduler maps to profiles. In this example, we have set a guaranteed rate of 50 percent for each profile. Although these settings are flexible depending on the environment, the total should equal 100 percent of the interface bandwidth.

Interface Application The slide shows the application of the forwarding class sets to the egress interface. Note that in many cases, for bidirectional flow, the egress interfaces can be the same as the ingress interfaces.
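Tying the two preceding steps together, the following sketch shows the traffic control profiles and their application to the egress interface; the profile names and the interface name are assumptions for illustration:

class-of-service {
    traffic-control-profiles {
        fcoe-tcp {
            scheduler-map fcoe-map;
            guaranteed-rate percent 50;
        }
        best-effort-tcp {
            scheduler-map best-effort-map;
            guaranteed-rate percent 50;
        }
    }
    interfaces {
        xe-0/0/10 {
            forwarding-class-set {
                fcoe-pg {
                    output-traffic-control-profile fcoe-tcp;
                }
                best-effort-pg {
                    output-traffic-control-profile best-effort-tcp;
                }
            }
        }
    }
}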


FCoE Transit Switch Configuration The configuration of a transit FCoE switch is nearly the same as that of an FCoE gateway switch. A VLAN must be configured to transport FCoE traffic, a native VLAN must be configured to support the DCBX and LLDP traffic, and the same CoS requirements apply. In general, an FCoE transit switch can be considered just a bridge for FCoE traffic. However, the Junos OS has the additional feature of FIP snooping, which provides some security for FCoE transit switches.

The slide illustrates the configuration of FIP snooping on an FCoE transit switch. In this example, a QFabric system is used as an FCoE transit switch. As a best security practice, configure the interface facing the storage media as fcoe-trusted, which means that traffic coming from the storage media is not subject to FIP snooping filters. Never configure the interfaces facing the servers as fcoe-trusted.
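A sketch of the FIP snooping configuration described above; the QFabric interface name used for the storage-facing port is hypothetical:

ethernet-switching-options {
    secure-access-port {
        interface node0:xe-0/0/20.0 {
            fcoe-trusted;
        }
        vlan v30-FCoE {
            examine-fip;
        }
    }
}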


Monitoring FIP The slide illustrates three commands available for monitoring FIP. Note that the prompts for the last two commands indicate that those commands are available only from a QFX3500 switch acting as an FCoE gateway.


DCBX Monitoring Because the DCBX protocol is partially responsible for ensuring DCB protocols work properly, the show dcbx neighbors command is extremely helpful in troubleshooting issues with DCB/CEE standards. The output of show dcbx neighbors is too extensive to fit on the slide, so the output of show dcbx neighbors terse is listed instead.


Fibre Channel Monitoring Several commands are available for monitoring FC operation on an FCoE gateway. For more information about the output of these commands, consult the Juniper Networks technical publications at http://www.juniper.net/techpubs. You will be exposed to many of these commands in the subsequent lab associated with this chapter.


This Chapter Discussed:
• Fibre Channel operation;
• Fibre Channel layers and speeds;
• Logging into a Fibre Channel fabric;
• Fibre Channel over Ethernet (FCoE) and FCoE Initialization Protocol (FIP); and
• Fibre Channel and FCoE configuration and monitoring.


Review Questions
1. A WWN can include a WWPN, a WWNN, or both, and is a 64-bit address equated to a MAC address in an Ethernet environment. The WWN is assigned by either the vendor or the FCF and is used for login and discovery. The FCID is used for routing FC frames and is assigned by the FC services component of an FC switch. A mapping of the WWN to the FCID is stored in the FLOGI database.
2. NPIV allows multiple N_Port IDs to associate with a single physical N_Port.
3. FLOGI is the first sequence used for logging into an FC fabric. The FLOGI message is sent to the FCF. PLOGI comes after FLOGI and includes port login to the FC services component and process login directly from N_Port to N_Port on node devices, enabling direct communication across the FC fabric.
4. The DCB working group defined the PFC, ETS, QCN, and DCBX standards.


Lab 4: Fibre Channel The slide provides the objectives for this lab.


Resources to Help You Learn More The slide lists online resources to learn more about Juniper Networks and technology.

Appendix A: System Upgrades

This Appendix Discusses:
• The different software packages used to upgrade QFabric systems;
• Standard QFabric system software upgrades; and
• Nonstop QFabric system software upgrades.


Standard Software Upgrades The slide lists the topics we cover in this appendix. We discuss the highlighted topic first.


Downloading Software Images To upgrade a QFabric system or component, you must obtain the appropriate software image from Juniper Networks. The downloads are available at http://www.juniper.net/support/downloads. You must have an active service contract with Juniper Networks, including an access account. To obtain an account, complete the product registration form at https://www.juniper.net/registration/Register.jsp. Once you have accessed the software page, choose the link for either the QFX3000-M or QFX3000-G system. Then click the Software tab.

Complete Install Package The software download page lists a number of software files available for download. At the top of the page is the complete install package. This package is an RPM containing the software for the various components of the QFabric system and is used for standard upgrades as well as nonstop software upgrades (NSSU). In an NSSU upgrade, the same package is used in multiple steps of the process.


Install Media Images There are also install media images available for the QFabric system or for individual QFabric system components. Install media images are primarily used for recovery purposes. Note that loading an install media image completely overwrites the entire contents of the Director device, so it is important to back up the configuration file to an external device or location. To use the install media image for the Director device, you must download the .tgz file and unpack it using the following command on a UNIX workstation:

% tar -xvzf install-media-qfabric-12.2X50-D10.3.img.tgz

The resulting .img file can then be copied to a USB drive using the following command:

% dd if=install-media-qfabric-12.2X50-D10.3.img of=/dev/sdb bs=16k

To recover the QFabric system, insert the USB drive into the Director device and reboot the system. When the system prompts you to reinstall the QFabric software on this Director device, type install and press Enter. Once the installation completes, the device reboots, and you are prompted to re-run the initial setup script. At this point you can choose to bypass the initial setup script and proceed with reloading your configuration from the stored external device or location. We discuss configuration backups and restorations later in this appendix.


Component Software Images The software download page also lists install packages for Node and Interconnect devices as shown on the slide. These files are in the .tgz format and useful for recovery of an individual QFabric component. As mentioned previously, the components also have install media images available for download to a USB device.

Control Plane Switch Software The software download page provides quick access to the recommended Junos OS version for EX Series switches used in the control plane of a QFabric system. The EX Series switches run a standard EX Series version of the Junos OS. Note that the recommended Virtual Chassis configuration files are also available for download.


Standard Upgrade Overview Performing a standard upgrade with the provided RPM image is the fastest method for upgrading a QFabric system and should be used when time is a factor and forwarding resiliency of the data plane is not. All components are upgraded simultaneously. The standard upgrade procedure consists of four primary steps:
1. Back up the current configuration files, either locally or externally.
2. Download the appropriate RPM image.
3. On the Director device, retrieve the software image from your local workstation or file server.
4. Install the software package.

The slide shows a typical RPM image name and its parts.

Note that upgrades on a QFX3000-G system from version 11.x to version 12.x might require a new install script and a new jloader package. See the PSN-2012-07-657 and PSN-2011-11-434 product alerts for details.


Back Up Configuration Files Before upgrading the QFabric system, back up the configuration file and initial installation settings by using the command shown on the slide. The result is a single file containing both the Junos OS configuration and the initial configuration script settings.
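The exact backup destination varies by environment; a representative invocation of the command referenced on the slide, with the FTP server and path as placeholders, is:

user@qfabric> request system software configuration-backup ftp://backup@server.example.com/qfabric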

Download Install Package Next, download the install package RPM file to your workstation or to a file server that supports the FTP or SCP protocols.


Retrieve Software File As shown on the slide, use the request system software download command to transfer the file from your workstation to the Director device. In this example, SCP was used to transfer the file. However, you can also use FTP. By default, the file is placed in the /pbdata/packages directory:

user@qfabric> file list /pbdata/packages
/pbdata/packages/:
jinstall-qfabric-12.2X50-D10.3.rpm
ais/
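A representative download invocation follows; the server name and path are placeholders, and the slide shows the actual command used in this example:

user@qfabric> request system software download scp://user@server.example.com/images/jinstall-qfabric-12.2X50-D10.3.rpm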

Install Software Package To install the software package, use the request system software add command as shown on the slide. Note the inclusion of the component all reboot keywords. The system will commence upgrading the Director, Interconnect, and Node devices. The upgrade can take up to an hour.
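Based on the keywords called out above and the package name from the earlier file listing, the install command takes this general form:

user@qfabric> request system software add /pbdata/packages/jinstall-qfabric-12.2X50-D10.3.rpm component all reboot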


Software Version Verification To verify that the software upgrade completed successfully, use the show version component all command shown on the slide. The output on the slide is trimmed for brevity, but the full output displays the Junos OS version on all the QFabric components, which should all match. You should also verify that all the components are once again connected to the system and operational using the show fabric administration inventory command.


Nonstop Software Upgrades The slide highlights the topic we discuss next.


NSSU Overview A nonstop software upgrade enables some QFabric system components to continue operating while similar components in the system are being upgraded. In general, the QFabric system upgrades redundant components in stages so that some components remain operational and continue forwarding traffic while their relevant counterparts upgrade to a new version of software. Nonstop upgrades are useful in situations where the service impact must be minimized. However, the NSSU process can be time consuming, requiring several hours.


NSSU Prerequisites To qualify for a nonstop upgrade, QFabric systems must be running Junos OS 12.2 or later. Before beginning the process, verify that all system components are connected and configured using the show fabric administration inventory command. Download the complete install package and place the package on the Director group device as discussed in the previous section about standard upgrades. To minimize traffic impact, sensitive traffic should transit LAGs in redundant server Node groups or the network Node group. This design allows traffic to continue to flow through one LAG member interface connected to an operational Node device while the counterpart Node device is rebooting as part of the upgrade process. To minimize routing protocol churn, enable graceful restart for supported protocols such as BGP and OSPF on the network Node group.


Three Primary Steps The slide illustrates the three primary steps in the NSSU process:
1. Upgrade the Director devices.
2. Upgrade the fabric components. This includes the Interconnect devices and the Fabric Control Routing Engines (REs).
3. Upgrade the Node groups, including all server Node groups, redundant server Node groups, and the network Node group.

The steps must be completed in the order shown, and all steps must be completed for a successful nonstop upgrade. Between each step, and between each Node group upgrade, it is important to ensure all components are operational before moving on to the next step. You can verify successful operation with the show fabric administration inventory command.


Upgrading the Director Devices Use the request system software nonstop-upgrade director-group command to perform the first step of the NSSU process. By default, the Director device that is not hosting the CLI session is upgraded first. Therefore, we recommend that you execute the upgrade from a CLI session on the device that hosts the master Fabric Manager and network Node group RE, by issuing the cli command over a console connection or a direct SSH session. Although mastership switching is automated, this approach eliminates the need to switch mastership before beginning the upgrade.

Once the upgrade is initiated from the master Director group device, the QFabric system installs the software on the backup Director device and reboots that device. The master device then begins a 15-minute sequence that includes a temporary suspension of services and a database transfer. You cannot issue operational mode commands in the CLI during this time period. Next, the QFabric system installs the new software for the fabric manager and diagnostic REs on the master Director device. The QFabric system switches mastership of all processes to the backup Director device and reboots the master Director device. The previous master Director device resumes operation as a backup device, and all associated processes, such as the Fabric Manager and network Node group REs, become backup as well.

To verify the Director group devices' software version, use the show version component director-group command. Additionally, you can monitor or view the upgrade process with the /tmp/rolling_upgrade_director.log file. This file resides on the Director device from which the upgrade command was issued.
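A representative invocation, assuming the package already resides in the default /pbdata/packages directory and reusing the package name from the standard upgrade example:

user@qfabric> request system software nonstop-upgrade director-group jinstall-qfabric-12.2X50-D10.3.rpm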


Upgrading the Fabric Components Use the request system software nonstop-upgrade fabric command to initiate the next step of the NSSU process. With this command, the QFabric system downloads, validates, and installs the new software in all Interconnect devices and Fabric Control REs. First, one Fabric Control RE reboots and comes back online. Then the other Fabric Control RE reboots and comes back online. Next, the first Interconnect device reboots and comes back online, resuming the forwarding of traffic. Subsequent Interconnect devices reboot and return to service. To verify the software version of fabric components, issue the show version component fabric command. In addition to monitoring the upgrade using the CLI, logging is stored in the /tmp/perl_DCF.Utils.log file, which is accessible in the shell on the Director device from which the upgrade was issued. This file contains information pertaining to Node group upgrades as well.
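Following the same pattern as the Director group upgrade, a representative invocation is (package name assumed):

user@qfabric> request system software nonstop-upgrade fabric jinstall-qfabric-12.2X50-D10.3.rpm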


Upgrading the Node Groups Use the request system software nonstop-upgrade node-group command to upgrade Node groups. All Node groups must be upgraded to complete the NSSU process. Verify the Node group version with the show version component node-group-name command.

When upgrading a network Node group, the QFabric system copies the new software to each Node device, one at a time. The QFabric system validates and installs the new software in all Node devices simultaneously. The QFabric system copies the new software to the network Node group REs. The system then installs the software in the network Node group REs one at a time. The software is installed on the backup RE first and then the master network Node group RE. Subsequently, the backup network Node group RE and its supporting Node devices are rebooted and come back online, one at a time. Next, the master network Node group RE relinquishes mastership, reboots, and comes back online.

For redundant server Node groups, the QFabric system copies the software to the backup Node device, then the master Node device. The system validates and installs the software on the backup Node device, then the master Node device. The backup Node device reboots and becomes the master Node device. The previous master Node device reboots and comes back online as the backup Node device. Note that both devices in a redundant server Node group must be online before issuing the upgrade command.

For server Node groups containing only one Node device, the QFabric system downloads the software, validates the software, installs the software, and reboots the device. Traffic loss will occur in a server Node group containing only one Node device.
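A representative invocation for a single Node group follows; the Node group name RSNG1 is hypothetical, and the package name is assumed:

user@qfabric> request system software nonstop-upgrade node-group RSNG1 jinstall-qfabric-12.2X50-D10.3.rpm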


NSSU Upgrade Groups You can upgrade network Node group devices one at a time or in groups, known as upgrade groups. Upgrade groups can shorten the time required to perform the NSSU process by upgrading two or more Node devices in a network Node group, or an entire network Node group simultaneously. The slide illustrates the configuration of NSSU upgrade groups. Note that if you add Node devices that have links to the same LAG to the same upgrade group, traffic loss can occur.


NSSU Caveats and Considerations The slide lists some caveats and considerations to keep in mind when performing or preparing to perform an NSSU upgrade process.


This Appendix Discussed:
• The different software packages used to upgrade QFabric systems;
• Standard QFabric system software upgrades; and
• Nonstop QFabric system software upgrades.


Review Questions
1. The complete install package for a QFabric system is in an RPM format.
2. When upgrading the Director devices in an NSSU upgrade, we recommend that you initiate the upgrade from the master Director device. You can control the upgrade by logging in to the master Director device directly using SSH or a console session and issuing the cli command. By initiating the upgrade from the master Director device, one less mastership switch of the master Fabric Manager and network Node group REs is required.
3. NSSU upgrade groups can help to expedite the upgrade of a network Node group by allowing two or more Node devices to be upgraded simultaneously.
4. LAGs and graceful restart can be used to provide traffic resiliency when performing an NSSU upgrade of the network Node group.


Appendix B: Acronym List
ACL: access control list
ARP: address resolution protocol
ASIC: application-specific integrated circuit
ATM: Asynchronous Transfer Mode
ATM-AAL5: Asynchronous Transfer Mode—Adaptation Layer 5
BPDU: bridge protocol data unit
CEE: converged enhanced Ethernet
CIFS: Common Internet File System
Cisco HDLC: Cisco High-Level Data Link Control
CLI: command-line interface
CNA: converged network adapter
CoS: class of service
DAC: direct attached copper
DAS: direct attached server
DCB: data center bridging
DCBX: Data Center Bridging Exchange Capabilities Notification
dcd: device control process
DHCP: Dynamic Host Configuration Protocol
DLCI: data-link connection identifier
DNS: Domain Name System
DoS: denial of service
ERP: enterprise resource planning
ESCON: Enterprise Systems Connection
ETS: enhanced transmission selection
FC: Fibre Channel
FC-LE: Fibre Channel-Link Encapsulation
FCF: Fibre Channel Forwarding
FCID: Fibre Channel ID
FCIP: Fibre Channel over IP
FCoE: Fibre Channel over Ethernet
FCS: frame check sequence
FDISC: Fabric Discovery
FICON: Fibre Connection
FIP: FCoE Initialization Protocol
FIPS: Federal Information Processing Standards
FLOGI: fabric login
FPC: Flexible PIC Concentrator
FPMA: fabric provided MAC address
FQDN: fully qualified domain name
FSPF: Fibre Shortest Path First
FT1: fractional T1
GB: gigabyte
GRES: graceful Routing Engine switchover
GUI: graphical user interface
GW: gateway
HBA: host bus adapter
HIPPI: High Performance Parallel Interface
HTTP: Hypertext Transfer Protocol
HTTPS: Hypertext Transfer Protocol over Secure Sockets Layer
IB: InfiniBand
ICMP: Internet Control Message Protocol
IDS: intrusion detection service
IEEE: Institute of Electrical and Electronics Engineers
IETF: Internet Engineering Task Force
IGP: interior gateway protocol
IPI-3: Intelligent Peripheral Interface—3
ISL: inter-switch link
iSCSI: Internet SCSI
ISSU: in-service software upgrade
JNCP: Juniper Networks Certification Program
JTAC: Juniper Networks Technical Assistance Center
KB: kilobytes
LACP: link aggregation control protocol
LAG: link aggregation group
LLDP: Link Layer Discovery Protocol
MB: megabytes
MBps: megabytes per second
MD5: Message Digest 5
MIB: Management Information Base
MLPPP: Multilink Point-to-Point Protocol
MTU: maximum transmission unit
NAA: Network Address Authority
NAS: network-attached storage
NFS: Network File System
NIC: network interface card
NMS: network management system
NPIV: N_Port ID Virtualization
NS: name server
NSR: nonstop active routing
NTP: Network Time Protocol
OID: object identifier
OoB: out-of-band
OSI: Open Systems Interconnection
OSPF: Open Shortest Path First
PFC: priority-based flow control
PFE: Packet Forwarding Engine
PLOGI: port login
POP: point of presence
PPP: Point-to-Point Protocol
pps: packets per second
QCN: quantized congestion notification
QSFP+: Quad Small Form-factor Pluggable Plus
RAID: redundant array of independent disks
RE: Routing Engine
RMON: Remote Monitoring
rpd: routing protocol daemon
RSTP: Rapid Spanning Tree Protocol
RVI: routed VLAN interface
SAN: storage area network
SAS: serial attached SCSI
SBCCS: Single Byte Command Code Sets
SFP: small form-factor pluggable
SHA-1: Secure Hash Algorithm 1
SMB: Server Message Block
SPMA: server provided MAC address
STP: Spanning Tree Protocol
TLV: type/length/value
TOE: TCP offload engine
TOR: top-of-rack
TTL: time-to-live
ULP: upper layer protocols
URI: uniform resource identifier
USM: user-based security model
VACM: view-based access control model
VCI: virtual channel identifier
VLAN: virtual LAN
VPI: virtual path identifier
VSAN: virtual SAN
WWN: worldwide name
WWNN: worldwide node name
WWPN: worldwide port name

Appendix C: Answer Key

Chapter 1: Course Introduction
This chapter contains no review questions.

Chapter 2: System Overview
1. Some of the key benefits the QFabric system offers over a traditional tiered Layer 2 network are improved scalability, efficient use of resources, and improved performance, which is a result of the decrease in end-to-end latency.
2. The four main components of the QFabric system are the Node devices, Interconnect devices, Director devices, and EX4200 Series switches. The Node devices function as intelligent line cards and are the entry and exit point for traffic entering or leaving the system. The Interconnect devices serve as the system's fabric and are used to interconnect the various Node devices. All traffic passing between two distinct Node devices passes through at least one Interconnect device. The Director devices are the control entity for the system and are often compared to the Routing Engine in traditional modular switches. The EX4200 Series switches make up the infrastructure used to support the control plane network.
3. The control plane functions consist of discovering and provisioning the system, managing routing and switching protocol operations, exchanging reachability information, and discovering and managing paths through the fabric. The data plane functions consist of providing connectivity for attached servers and network devices, interconnecting Node devices through the fabric construct, and forwarding traffic through the system.

Chapter 3: Software Architecture
1. The fabric admin consists of a user interface service and key management processes and is how the QFabric system provides the single administrative view to the end user.
2. A number of Routing Engines exist within the QFabric system, including the fabric manager RE, network Node group REs, fabric control REs, diagnostic RE, and the local CPU REs and server Node group REs distributed throughout the system. The fabric manager RE is used for system discovery and provisioning as well as topology discovery. The network Node group REs are used for Layer 2 and Layer 3 protocol processing. The fabric control REs are used to learn about and distribute reachability information. The local CPU REs and server Node group REs are used for local processing tasks for distributed system operations.
3. System components are discovered and provisioned by the fabric manager RE through the fabric discovery protocol. The fabric manager RE interfaces with the fabric admin, VM manager and the individual VMs, and Node devices and Interconnect devices throughout the discovery and provisioning process. The fabric discovery protocol is based on IS-IS.
4. Reachability information is learned and distributed through the fabric control REs and fabric control protocol, which is based on BGP.

Chapter 4: Setup and Initial Configuration
1. After the equipment is installed, you should first bring up the control plane Ethernet network infrastructure, which is provided through EX Series switches. You should then bring up the Director devices and ensure that they form a Director group. Once the Director group is formed, you should bring up the Interconnect and Node devices.
2. When bringing up the Director group, you will need the IP addresses for DG0 and DG1 as well as the default root partition virtual IP address. You will need the default gateway address for the management subnet on which the Director devices are connected. You will need two passwords—one for the Director devices and the other for Node devices and Interconnect devices. You will also need the serial number and MAC address range information, both of which are obtained through Juniper Networks when the system is purchased.
3. The QFabric system follows a four-level interface naming convention using the format device-name:type-fpc/pic/port, where device-name is the name of the Node device or Node group. The remainder of the naming convention elements are the same as those used with other Junos OS devices.

Chapter 5: Layer 2 Features and Operations
1. The majority of the Layer 2 connections are within a data center, where the QFabric system is used to connect servers to the network. These connections include rack server connections and blade server connections and often support east-west traffic flows, which is traffic passing between devices within the data center. One other type of connection involves the network Node group and is typically used to connect the system with the WAN edge and security devices. These connections are commonly used for north-south traffic flows, which is traffic entering and leaving the data center. Note that there are also certain Layer 2 connections within the network Node group that are used for migration strategies and in situations where the blade switches within blade chassis deployments cannot be bypassed and must run STP.
2. When blade chassis are used that include blade switches and the connections interface with a server Node group, you must ensure STP is disabled. Otherwise, the interfaces within the Node group that receive STP BPDUs will be disabled.
3. MAC addresses are first learned by the ingress Node groups through which the related traffic is received. The newly learned MAC address is then advertised from the Node group RE to the fabric control REs through the fabric control protocol, which is based on BGP. The fabric control REs then reflect the learned MAC addresses to all other Node groups associated with the VLAN with which the MAC address is associated.

Chapter 6: Layer 3 Features and Operations
1. RVIs are logical Layer 3 interfaces configured on QFabric systems. These interfaces are associated with VLANs and often serve as gateways for hosts on the VLANs to which they are assigned.
2. Some of the available first hop router options mentioned in this chapter include RVIs, SRX Series devices, MX Series devices, or a hybrid scenario that uses more than one of these options.
3. ARP entries learned by one Node group are shared with other Node groups associated with the same Layer 2 domain through the fabric control protocol.

Chapter 7: Network Storage Fundamentals
1. Data is stored as raw blocks of storage in a SAN environment, which is one of the key differences between a SAN and NAS.
2. A CNA combines the functionality of an HBA and an Ethernet NIC into one interface card. Both standard Ethernet traffic and storage traffic such as FCoE can traverse the same physical interface.
3. Two methods for extending storage access over a WAN are iSCSI and FCoE. FCoE allows the transportation of SCSI over standard Ethernet networks. iSCSI allows the transportation of SCSI over TCP/IP.

Chapter 8: Fibre Channel
1. A WWN can include a WWPN, a WWNN, or both, and is a 64-bit address equated to a MAC address in an Ethernet environment. The WWN is assigned by either the vendor or the FCF and is used for login and discovery. The FCID is used for routing FC frames and is assigned by the FC services component of an FC switch. A mapping of the WWN to the FCID is stored in the FLOGI database.
2. NPIV allows multiple N_Port IDs to associate with a single physical N_Port.
3. FLOGI is the first sequence used for logging into an FC fabric. The FLOGI message is sent to the FCF. PLOGI comes after FLOGI and includes port login to the FC services component and process login directly from N_Port to N_Port on node devices, enabling direct communication across the FC fabric.
4. The DCB working group defined the PFC, ETS, QCN, and DCBX standards.

Appendix A: System Upgrades
1. The complete install package for a QFabric system is in an RPM format.
2. When upgrading the Director devices in an NSSU upgrade, we recommend that you initiate the upgrade from the master Director device. You can control the upgrade by logging in to the master Director device directly using SSH or a console session and issuing the cli command. By initiating the upgrade from the master Director device, one less mastership switch of the master Fabric Manager and network Node group REs is required.
3. NSSU upgrade groups can help to expedite the upgrade of a network Node group by allowing two or more Node devices to be upgraded simultaneously.
4. LAGs and graceful restart can be used to provide traffic resiliency when performing an NSSU upgrade of the network Node group.
