UCS NEXUS OVERVIEW
Data Center Trends
Evolution of Server Scalability

Scale Up (the 1990s)
• Monolithic servers
• Large numbers of CPUs
• Proprietary platform, proprietary OS
• Many apps per server
• Result: high cost / proprietary, large failure domain

Scale Out (early 2000s)
• Commoditized servers
• 1 app / 1 physical server
• x86 platform, commoditized OS
• Result: servers under-utilized, power & cooling pressure

Scale In (now)
• Bladed and rack servers
• Multi-socket / multi-core CPUs
• x64 platforms (Intel / AMD)
• Commoditized OS
• Virtual machine density
• Result: management complexity, cloud computing / dynamic resourcing
Evolution of Server Scalability: Changes in Server Design
• Before: console, power, networking, and storage connectivity to each blade
• After: console, power, networking, and storage connectivity shared in the chassis
Evolution of Server Scalability: CPU Density
(diagram: a single socket holding a single-core CPU vs. a single socket holding 1 CPU with 4 processing cores)

Terminology
• Socket – the slot on the machine board for a processing chip
• CPU – the processing chip
• Core – the actual processing unit inside the CPU

Server Impact (see the sketch below)
• More cores in a CPU = more processing
• Critical for applications that become processing-bound
• Core densities are increasing: 2/4/6/8/12/16
• CPUs are x64-based
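As a quick illustration of sockets vs. cores, the sketch below counts both on a running system. This is a minimal sketch assuming a Linux host whose /proc/cpuinfo exposes the usual "physical id" and "core id" fields; it is not part of any UCS tooling.

from collections import defaultdict

def socket_core_counts(path="/proc/cpuinfo"):
    # Map each physical socket ("physical id") to its set of core ids.
    sockets = defaultdict(set)
    phys = None
    with open(path) as f:
        for line in f:
            key, _, val = line.partition(":")
            key, val = key.strip(), val.strip()
            if key == "physical id":
                phys = val
            elif key == "core id" and phys is not None:
                sockets[phys].add(val)
    return len(sockets), sum(len(cores) for cores in sockets.values())

n_sockets, n_cores = socket_core_counts()
print(f"{n_sockets} socket(s), {n_cores} physical core(s)")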
Evolution of Server Scalability: Memory Density
(diagram: DIMM slots on the motherboard and an individual DIMM)
• DIMM – Dual Inline Memory Module: a series of dynamic random-access memory integrated circuits mounted on a printed circuit board
• Ranking – memory modules with 2 or more sets of DRAM chips connected to the same address and data buses; each such set is called a rank. Single, dual, and quad ranks exist
• Speed – measured in MHz; most server memory is DDR3, and PC3-10600 = 1333 MHz (see the worked numbers below)
• As server memory increases, clock speed will sometimes drop in order to be able to utilize such large memory amounts
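The module name and the clock rating are two views of the same number: a 64-bit (8-byte) memory channel moving 1333 million transfers per second yields roughly 10,600 MB/s, hence "PC3-10600" for DDR3-1333. The figures below are the standard DDR3 ratings, not measurements of any particular server.

# DDR3-1333 on a 64-bit memory channel:
transfers_per_sec = 1333e6            # 1333 MT/s ("1333 MHz" in module specs)
bytes_per_transfer = 8                # 64-bit bus = 8 bytes per transfer
peak_mb_s = transfers_per_sec * bytes_per_transfer / 1e6
print(f"Peak bandwidth: {peak_mb_s:,.0f} MB/s")   # ~10,667 MB/s -> "PC3-10600"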
Adapter Buses: PCIe
In virtually all server compute platforms, the PCIe bus serves as the primary motherboard-level interconnect to hardware.
• Interconnect: a connection or link between 2 PCIe ports; it can consist of 1 or more lanes
• Lanes: a lane is composed of a transmit and receive pair of differential lines. PCIe slots can have 1 to 32 lanes, and data transmits bi-directionally over lane pairs (bandwidth sketch below)
• Form factor: written as PCIe x16, where x16 represents the number of lanes the card can use. A PCIe card fits into a slot of its physical size or larger (maximum x16), but may not fit into a smaller PCIe slot (x16 in an x8 slot)
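Per-lane bandwidth depends on the PCIe generation, so lane count and generation together set a card's throughput ceiling. A minimal sketch using the standard line rates (Gen1 at 2.5 GT/s and Gen2 at 5 GT/s with 8b/10b encoding, Gen3 at 8 GT/s with 128b/130b); the numbers are nominal, not measured.

GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0}          # giga-transfers/s per lane
PAYLOAD     = {1: 8/10, 2: 8/10, 3: 128/130}    # usable fraction after encoding

def pcie_gbytes_per_s(gen, lanes):
    # Approximate usable one-way bandwidth in GB/s (bits -> bytes).
    return GT_PER_LANE[gen] * PAYLOAD[gen] * lanes / 8

print(f"Gen2 x8 : {pcie_gbytes_per_s(2, 8):.1f} GB/s")    # ~4.0 GB/s
print(f"Gen3 x16: {pcie_gbytes_per_s(3, 16):.1f} GB/s")   # ~15.8 GB/s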
Growing Use of Platform Virtualization

Platform virtualization:
• Physical servers host multiple virtual servers
• Better physical server utilization (using more of the existing resources)
• Virtual servers are managed like physical servers
• Access to physical resources on the server is shared
• Access to resources is controlled by the hypervisor on the physical host

Key technology for:
• VDI / VXI
• Server consolidation
• Cloud services
• DR

Challenges:
• Pushing complexity into the virtualization layer
• Who manages what when everything is virtual
• The need for integrated, virtualization-aware products
Server Management Challenge
(diagram: server orchestrators / a "manager of managers" layered on top of separate chassis managers, network managers, and server managers for each of Vendor A, Vendor B, and Vendor C)
Characteristics of Movement Toward Cloud
(timeline: mainframe, 1960 → minicomputer, 1970 → client/server, 1980 → web, 1990 → virtualization, 2000 → cloud, 2010)
Define the Nature of Typical Cloud Services

Software as a Service (SaaS): providing software via the cloud
• Applications: Salesforce, Gmail, Basecamp, Squarespace

Platform as a Service (PaaS): providing software infrastructure via the cloud
• Management: 3Tera, RightScale, Scalr, Vertebra
• Platforms: Python (Google App Engine), Facebook, Appistry, Force.com

Infrastructure as a Service (IaaS): providing data center infrastructure for cloud-based services
• Service providers: Amazon Web Services, Joyent
Cloud Layer Review
• Service catalog: VDI, web store, CRM
• Orchestration / management / monitoring: Tidal, newScale, Altiris; UCSM ecosystem partner integration (MS, IBM, EMC, HP); UCSM XML API
• Infrastructure:
  Compute: UCS B-Series, UCS C-Series
  Network: FCoE; Nexus 7K, 5K, 4K, 3K, 2K
  Virtualization: Nexus 1000V, VM-FEX, UCS Adapter-FEX, VIC, virtual appliances, VSG
UCS Overview
Server Deployment Today

Over the past 20 years:
• An evolution of size, not thinking
• More servers & switches than ever
• Management applied, not integrated
• Virtualization has amplified the problem

The result:
• More points of management
• More difficult to maintain policy coherence
• More difficult to secure
• More difficult to scale
Unified Computing Solution
• Embed management
• Unify fabrics
• Optimize virtualization
• Remove unnecessary switches, adapters, and management modules
• Less than 1/3 the support infrastructure for a given workload
Unified Computing System (UCS)
• Single point of management
• Unified fabric
• Blade chassis / rack servers
Cisco UCS and Nexus Technology

UCS components:
• UCS Manager: embedded, manages the entire system
• UCS Fabric Interconnect
• UCS Fabric Extender: remote line card
• UCS blade server chassis: flexible bay configurations
• UCS rack / blade servers: industry-standard architecture
• UCS virtual adapters: a choice of multiple adapters, CNAs with FCoE

Related Nexus products:
• Nexus 5000/5500: unified fabric
• Nexus 2000: fabric extender
System Components: Logical
(diagram: LAN, SAN A/B, and management networks attach to a pair of Fabric Interconnects; each chassis connects to both fabrics through its Fabric Extenders, and blades reach the Fabric Extenders through their adapters)
• Fabric Interconnect: the system's LAN, SAN, and management connectivity
• Compute chassis: up to 8 half-width blades or 4 full-width blades
• Fabric Extender: host-to-uplink traffic engineering
• Adapters: for single-OS and hypervisor systems
• Compute blade: half width or full width, x86
System Topology
(diagram: the Fabric Interconnects uplink to the LAN, to SAN A and SAN B over FC, and to the management network)
Embedded Management (UCS Manager)
• Single point of device management: adapters, blades, chassis, LAN & SAN connectivity
• Embedded manager with GUI & CLI
• Standard APIs for systems management: XML, SMASH-CLP, WS-MAN, IPMI, SNMP; a custom portal or systems management software can drive the system through these APIs
• SDK for commercial & custom implementations
• Designed for multi-tenancy: RBAC, organizations, pools & policies (see the XML API sketch below)
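Since the XML API is the programmatic entry point UCS Manager exposes, the sketch below logs in, resolves a blade by its distinguished name, and logs out. The /nuova endpoint and the aaaLogin / configResolveDn / aaaLogout methods follow the published UCSM XML API; the host, credentials, and DN are placeholders.

import urllib.request
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"   # placeholder UCS Manager address

def xml_call(body):
    # POST one XML API method and parse the XML response.
    req = urllib.request.Request(UCSM, data=body.encode(),
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

login = xml_call('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.get("outCookie")           # session cookie for later calls

blade = xml_call(f'<configResolveDn cookie="{cookie}" dn="sys/chassis-1/blade-2" />')
print(ET.tostring(blade, encoding="unicode"))

xml_call(f'<aaaLogout inCookie="{cookie}" />')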
Hardware “State” Abstraction
(diagram: the OS & application see LAN and SAN connectivity through identity and configuration state that is abstracted from the hardware)

State abstracted from the hardware:
• MAC address, NIC firmware, NIC settings
• Drive controller firmware, drive firmware
• UUID, BIOS firmware, BIOS settings, boot order
• BMC firmware
• WWN address, HBA firmware, HBA settings

Example identity: UUID 56 4dcd3f 59 5b…, MAC 08:00:69:02:01:FC, WWN 5080020000075740, boot order SAN, LAN. The same identity can land on Chassis-1/Blade-2 or on Chassis-8/Blade-5.

• Separate firmware, addresses, and parameter settings from the server hardware
• Physical servers become interchangeable hardware components
• Easy to move the OS & applications across server hardware
Service Profiles
(diagram: each profile contains the server name, UUID, MAC, WWN, boot info, firmware, and LAN & SAN config; profiles gain a run-time association with blades)
• Contain server state information
• User-defined:
  Each profile can be individually created
  Profiles can be generated from a template
• Applied to physical blades at run time; without profiles, blades are just anonymous hardware components
• Consistent and simplified server deployment: "pay-as-you-grow" deployment; configure once, purchase & deploy on an "as-needed" basis
• Simplified server upgrades with minimized risk: simply disassociate the service profile from the existing chassis/blade and associate it to the new chassis/blade
• Enhanced server availability; purchase fewer servers for HA: use the same pool of standby servers for multiple server types, and simply apply the appropriate profile during failover (see the sketch below)
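To make the association model concrete, the sketch below models a profile's identity and its run-time binding to blades. This is an illustrative data structure only; the field names mirror the slides, and the class itself is hypothetical, not a UCSM object.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServiceProfile:
    # Identity the profile carries instead of the hardware (per the slides).
    name: str
    uuid: str
    mac: str
    wwn: str
    boot_order: List[str] = field(default_factory=lambda: ["SAN", "LAN"])
    blade: Optional[str] = None       # e.g. "chassis-1/blade-2" when associated

    def associate(self, blade):
        # Bind this identity to a physical blade at run time.
        self.blade = blade

    def disassociate(self):
        # Release the blade; it returns to being anonymous hardware.
        self.blade = None

# Failover: the same identity moves to a standby blade, OS image untouched.
web01 = ServiceProfile("web01", "56 4dcd3f 59 5b...", "08:00:69:02:01:FC",
                       "5080020000075740")
web01.associate("chassis-1/blade-2")
web01.disassociate()
web01.associate("chassis-8/blade-5")
print(web01)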