VPLEX Architecture and Management Student Guide
Welcome to VPLEX Architecture and Management Overview. Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology, Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz, DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E-Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. © Copyright 2012 EMC Corporation. All rights reserved. Published in the USA.
Revision Date: September 2012 Revision Number: MR-7WN-VPXTOIC.3.0.a
This course covers the VPLEX system, its architectural components, and its management mechanisms. Terminology, hardware, and software components will be explored and defined. Basic product configuration and features will be outlined. This course is intended for anyone who will install, configure, or manage a VPLEX environment. The objectives for this course are shown here.
Upon completion of this course, you should be able to identify VPLEX terminology, configuration options, features, and components. This course presents the benefits of VPLEX, internal communications, management tasks and operations, and basic troubleshooting.
This module provides an overview of the VPLEX system.
For years, users have relied on “physical storage” to meet their information needs. New and evolving changes, such as virtualization and the adoption of Private Cloud computing, have placed new demands on how storage and information are managed. To meet these new requirements, storage must evolve to deliver capabilities that free information from a physical element to a virtualized resource that is fully automated, integrated within the infrastructure, consumed on demand, cost effective and efficient, always on, and secure. The technology enablers needed to deliver this combine unique EMC capabilities such as FAST, Federation, and storage virtualization. The result is a next generation Private Cloud infrastructure that allows users to:
• Move thousands of VMs over thousands of miles;
• Batch process in low-cost energy locations;
• Enable boundary-less workload balancing and relocation;
• Aggregate big data centers; and
• Deliver “24 x forever” – and run or recover applications without ever having to restart.
When EMC thinks of the Private Cloud, it is describing a strategy for your infrastructure that enables optimized resource use. This means you are optimized for energy, power, and cost savings. You can scale up and out simply and apply automated policies. You can guarantee greater availability and access for your production environment—significantly reducing or eliminating downtime.
EMC VPLEX is a unique virtual storage technology that federates data located on multiple storage systems – EMC and non-EMC – allowing the storage resources in multiple data centers to be pooled together and accessed anywhere. When combined with virtual servers, it is a critical enabler of private and hybrid cloud computing and the delivery of IT as a flexible, efficient, and resilient service. The VPLEX family addresses three primary IT needs:
• Mobility: The ability to move applications and data across different storage installations, whether within the same data center, across a campus, within a geographical region – and now, with VPLEX Geo, across even greater distances.
• Availability: The ability to create high-availability storage infrastructure across these same varied geographies with unmatched resiliency.
• Collaboration: The ability to provide efficient real-time data collaboration over distance for such big data applications as video, geographic/oceanographic research, and others.
The VPLEX family brings many unique innovations and advantages. VPLEX technology enables new models of application and data mobility, leveraging distributed/federated virtual storage. For example, VPLEX is specifically optimized for virtual server platforms (e.g., VMware ESX, Hyper-V, Oracle Virtual Machine, AIX VIOS) and can streamline and even accelerate transparent workload relocation over distance, including moving virtual machines over distance. With its unique, highly available, scale-out clustered architecture, VPLEX can be configured with one, two, or four engines – and engines can be added to a VPLEX cluster without disruption. All virtual volumes presented by VPLEX are always accessible from every engine in a VPLEX cluster. Similarly, all physical storage connected to VPLEX is accessible from every engine in the VPLEX cluster. Combined, this scale-out architecture uniquely ensures maximum availability, fault tolerance, and scalable performance. Advanced data collaboration, through AccessAnywhere, provides cache-consistent active-active access to data across two VPLEX clusters over synchronous distances with VPLEX Metro and asynchronous distances with VPLEX Geo.
EMC VPLEX is a storage network-based federation solution that provides non-disruptive, heterogeneous data movement and volume management functionality. VPLEX is an appliance-based solution that connects to SAN Fibre Channel switches. The VPLEX architecture is designed as a highly available solution and, as with all data management products, high availability (HA) is a major component in most deployment strategies. VPLEX is offered in three cluster configurations: small, medium, and large. Each VPLEX Cluster can function as a stand-alone, single-site system, or can be linked to another VPLEX Cluster to function as a distributed, cache-coherent system. VPLEX Clusters may be located in the same data center or in geographically distributed locations, operating in synchronous (cache write-through) or asynchronous cache mode.
Listed here are some common VPLEX terms.
The VPLEX product family has currently released VPLEX Local, Metro, and Geo and announced the future release of VPLEX Global.

VPLEX Local provides seamless, non-disruptive data mobility and the ability to manage and mirror data between multiple heterogeneous arrays from a single interface within a data center. VPLEX Local consists of a single VPLEX cluster. It contains a next-generation architecture that allows increased availability, simplified management, and improved utilization across multiple arrays.

VPLEX Metro enables active/active, block-level access to data between two sites within synchronous distances. The distance is limited not only by physical distance but also by host and application requirements. Depending on the application, VPLEX clusters should be installed with inter-cluster links that can support no more than 5 ms round-trip time (RTT). The combination of virtual storage with VPLEX Metro and virtual servers enables the transparent movement of virtual machines and storage across synchronous distances. This technology provides improved utilization and availability across heterogeneous arrays and multiple sites.

VPLEX Geo enables active/active, block-level access to data between two sites within asynchronous distances. VPLEX Geo enables more cost-effective use of resources and power. VPLEX Geo extends the distance for distributed devices up to and within 50 ms RTT. As with any asynchronous transport medium, you must also consider bandwidth to ensure optimal performance. Due to the asynchronous nature of distributed writes, VPLEX Geo has different availability and performance characteristics than Metro.
EMC VPLEX is a next-generation architecture for data mobility and information access. It is based on unique technology that combines scale-out clustering and advanced data caching with unique distributed cache coherence intelligence to deliver radically new and improved approaches to storage management. This architecture allows data to be accessed and shared between locations over distance via a distributed federation of storage resources. The first products introduced on this architecture include configurations that support Local and Metro environments. With the release of VPLEX 5.0, the Geo environment is introduced, which allows asynchronous replication.
VPLEX with GeoSynchrony is open and heterogeneous, supporting both EMC storage and arrays from other storage vendors, such as HDS, HP, and IBM. VPLEX conforms to established world wide naming (WWN) guidelines that can be used for zoning. VPLEX provides storage federation for operating systems and applications that support clustered file systems, including both physical and virtual server environments with VMware ESX and Microsoft Hyper-V. VPLEX supports network fabrics from Brocade and Cisco.
The introduction of storage virtualization as a viable solution delivers the primary value proposition of moving data around without disruption. Customers look to this technology for transparent tiering, moving back-end storage data without having to touch the hosts, simplified operations over multiple frames, as well as ongoing data moves for tech refreshes and lease rollovers. Customers require tools that allow storage moves to be made without forcing interaction and work at the host and database administration levels. The concept of a virtualization controller was introduced and took its place in the market. While EMC released its own version of this with the Invista split-path architecture, EMC also continued development in both Symmetrix and CLARiiON to integrate multiple tiers of storage within a single array. Today, as Flash, Fibre Channel, and SATA are offered within EMC arrays, customers get a very transparent method of moving across these different storage types and tiers with virtual LUN capability. EMC found that providing both choices for customers allowed their products to meet a wider set of challenges than if only one of the two options were offered. The main value propositions for storage virtualization have been tech refreshes, consolidations, mobility, and simplifying the management and provisioning of multiple arrays. It also extends the floor life of certain older storage assets. Previously, these problems were the most prevalent and the technologies developed were designed to address them. While problems remain to be solved, new data center issues evolve and different problems emerge that require new solutions.
VPLEX Local consists of a single rack at a single site:
• The rack may contain 1, 2, or 4 engines, each engine containing 2 directors. The hardware is fully redundant to survive any single point of failure.
• Each configuration can be upgraded to the next level (small to medium; medium to large) to linearly scale to greater performance and redundancy.
• In any cluster, the fully redundant hardware can tolerate failure down to a single director remaining with no data unavailability or data loss condition.
With VPLEX distributed federation, it becomes possible to configure shared volumes to hosts that are in different sites or failure domains. This enables a new set of solutions that can be implemented over synchronous distances, where previously these solutions could reside only within a single data center. VMware Storage vMotion over distance is a prime example of such solutions. Another key technology that enables AccessAnywhere is remote access. This makes it possible to export block storage to a host at a remote cluster. The host uses the remote storage as if it were local storage.
VPLEX Metro consists of two racks at separate sites with less than 5 ms round-trip latency between them. Each rack may contain 1, 2, or 4 engines, each engine containing 2 directors. The hardware is fully redundant to survive any single point of failure. Each director utilizes two 8 Gb FC ports for inter-cluster communication over the FC-WAN.
With VPLEX distributed federation, it is possible to configure shared volumes to hosts that are in different sites or failure domains. This enables a new set of solutions that can be implemented over synchronous and asynchronous distances, where earlier these solutions could reside only within a single data center. VMware VMotion over distance is a prime example of such solutions. Another key technology that enables AccessAnywhere is remote access. This makes it possible to export block storage to a host at a remote cluster. The host uses the remote storage as if it were local storage. Remote access is only supported in VPLEX Metro and VPLEX Local environments.
VPLEX Geo consists of two racks at separate sites with less than 50 ms round-trip latency between them:
• Each rack may contain 1, 2, or 4 engines, each engine containing 2 directors. The hardware is fully redundant to survive any single point of failure.
• Each director utilizes two 10 Gb Ethernet ports for inter-cluster communication over the IP-WAN. Previous generations of VPLEX hardware utilize two 1 Gb Ethernet ports for inter-cluster communication.
This module covered an overview of the VPLEX system.
This module introduces VPLEX hardware, communication paths, and connections.
This lesson covers the following components for VPLEX 4.0 and 5.1:
• Engine;
• DAE and SSD card;
• Power supply;
• Fan; and
• Management Server.
The diagram displays the legacy VS1 engines available with VPLEX 4.0 and the new VS2 engines released with VPLEX 5.0. The example shows a quad-engine VPLEX configuration. Notice that VPLEX numbering starts from the bottom up. Every engine contains a standard power supply. A dual-engine configuration is simply a quad-engine configuration without two of the engines and their SPS units. A single-engine implementation contains a Management Server, an engine, and an SPS.
VPLEX engines can be deployed in a single-, dual-, or quad-engine cluster configuration depending upon the number of front-end and back-end connections required. VPLEX's advanced data-caching algorithms are able to detect sequential reads to disk. As a result, VPLEX engines are able to fetch data from disk to cache in order to improve host read performance. VPLEX engines are the brains of a VPLEX system; each engine contains two directors, each providing front-end and back-end I/O connectivity.
The two directors within a VPLEX engine are designated A and B; Director A sits below Director B. Each director contains dual Intel quad-core CPUs that run at 2.4 GHz. Each director contains 32 GB of raw memory, for a total of 64 GB per engine, and uses more than 20 GB of it for read cache. Each director contains a total of 16 8-Gbps FC ports: 8 front-end and 8 back-end. Both directors are active during cluster operations, as VPLEX is an active/active architecture.
There are a total of 12 I/O modules in a VPLEX v4.0 engine. Ten of these modules are Fibre Channel and two are GigE. The Fibre Channel ports can negotiate up to 2, 4, or 8 Gbps. Four FC modules are dedicated for front-end use and four for the back-end. The two remaining FC modules are used for inter/intra cluster communication. The two 1 GigE I/O modules are not utilized in this release of VPLEX.
A VPLEX v4.0 engine contains two I/O Module carriers, one for Director A and one for Director B. The one on the right is for Director A and the one on the left is for Director B. There are two I/O modules per carrier. The one that is shown in this picture contains a Fibre Channel module and a GigE module. As we just discussed, the Fibre Channel module is used for inter and intra-cluster communication within a VPLEX system.
This is the FC I/O Module from an I/O Module Carrier which is used for inter and intra-cluster communication. In this module, Ports 0 and 1 are used for local COM. Ports 2 and 3 are used for WAN COM between clusters in a VPLEX Metro. In medium and large configurations, FC I/O COM ports run at 4 Gbps. In terms of physical hardware, this FC I/O module is identical to the I/O modules used for front-end and back-end connectivity in the director slots.
Each engine contains two management modules and two power supplies. Each management module contains two serial ports and two Ethernet ports. The upper serial port on Engine 2 is used to monitor the UPSs; the upper serial ports on the other engines are not used. The lower serial ports are used to monitor the SPSs. The Ethernet ports are used to connect to the Management Server and to the other directors within the cluster, in a daisy-chain fashion. The USB port is unused.
The LED status should be verified on each SPS and director. The SPS On Battery LED stays on while the SPS unit charges; this can take a few minutes or a few hours depending on the state of the battery. If any other amber light remains on for more than 10 minutes, the cables should be verified to ensure they are connected correctly. The boot cycle for the VPLEX is:
• C4LX – SuSE;
• RPM packages; and
• Management Server software.
Fans are used to keep the system cool. They are monitored through the power supplies. There are two power supplies in each engine and all four fans service the entire engine. If one fan is lost, there is no impact as the fan speed of the other fans will increase to accommodate for the lost fan. However, if two fans are lost, the engine will shut down after three minutes.
A VPLEX VS2 Engine is a chassis containing two directors, redundant power supplies, fans, I/O modules, and management modules. The directors are the workhorse components of the system and are responsible for processing I/O requests from the hosts, serving and maintaining data in the distributed cache, providing the virtual-to-physical I/O translations, and interacting with the storage arrays to service I/O. A VPLEX VS2 Engine has 10 I/O modules, with five allocated to each director. Each director has one four-port 8 Gb/s Fibre Channel I/O module used for front-end SAN (host) connectivity and one four-port 8 Gb/s Fibre Channel I/O module used for back-end SAN (storage array) connectivity. Each of these modules has 40 Gb/s effective PCI bandwidth to the CPUs of its corresponding director. A third I/O module, called the WAN COM module, is used for inter-cluster communication. Two variants of this module are offered: a four-port 8 Gb/s Fibre Channel module and a two-port 10 Gb/s Ethernet module. The fourth I/O module provides two ports of 8 Gb/s Fibre Channel connectivity for intra-cluster communication. The fifth I/O module for each director is reserved for future use. A VS2 Engine is physically smaller than a v4.0 Engine (2U vs. 4U). There are fewer I/O modules per director (5 vs. 6). The CPU and PCIe buses are also faster.
Two four-port, 8 Gbps Fibre Channel I/O modules provide 40 Gbps of effective bandwidth: four ports for front-end connectivity and four ports for back-end connectivity. WAN communication between VPLEX Metro or VPLEX Geo clusters is over Fibre Channel or Gigabit Ethernet. The inter-cluster link carries unencrypted user data; to protect the security of the data, secure connections are required between clusters.
Depending on the cluster topology, slots A2 and B2 contain one of the displayed I/O modules (IOMs). Both IOMs must be the same type.
Independent power zones in the data center feed each VPLEX power zone, providing redundant high availability. Each engine is connected to two standby power supplies (SPS) that provide battery backup for cache vaulting in the event of a transient site power failure. In single-engine clusters, the management server draws power directly from the cabinet PDU. In dual- and quad-engine clusters, the management server draws power from UPS-A. Each VPLEX engine is supported by a pair of standby power supplies that provide a hold-up time of five minutes, allowing the system to ride through transient power loss. A single standby power supply provides enough power for the attached engine. Each standby power supply is a FRU and can be replaced with no disruption to the services provided by the system. The recharge time for a standby power supply is up to 5.5 hours. The batteries in the standby power supply are capable of supporting two sequential five-minute outages. There are two power supplies with fans per director; both must be removed to pull the director out. To ensure the power supply is completely inserted into the engine, no yellow should be visible at the top of the power supply.
Each director is connected to a standby power supply. The standby power supplies should be connected to separate power sources. Generally, the SPS On Battery LED stays on while the SPS units fully charge (which could take a few minutes or a few hours, depending on the state of the battery). If any amber LED not related to the SPS recharge remains on for more than 10 minutes, the user needs to verify that the components are cabled correctly.
For VPLEX Metro and Geo, an optional component called VPLEX Witness can be deployed at a third location to improve data availability in the presence of cluster failures and inter-cluster communication loss. VPLEX Witness is implemented as a virtual machine and requires a VMware ESX server for its operation. The host running the Witness must be deployed in a separate failure domain from either VPLEX cluster to eliminate the possibility of a single fault affecting both a cluster and VPLEX Witness. VPLEX Witness connects to both VPLEX clusters over the management IP network. VPLEX Witness observes the state of the clusters, and thus can distinguish between an outage of the inter-cluster link and a cluster failure. VPLEX Witness uses this information to guide the clusters to either resume or suspend I/O. VPLEX Witness capabilities vary depending on whether the VPLEX is a Metro (synchronous consistency groups) or Geo (asynchronous consistency groups). In Metro systems, VPLEX Witness provides seamless zero-RTO failover for storage volumes in synchronous consistency groups. In Geo systems, VPLEX Witness automates failover for asynchronous consistency groups and provides zero-RTO and zero-RPO failover in all cases that do not result in data rollback.
This lesson will focus on VPLEX Connections, Integration Points, and Communication Paths.
Shown here is the high-level architectural view of the management connections between the Management Server and directors. In this picture there are NO internal VPLEX IP switches. The directors are in fact daisy chained together via two redundant Ethernet connections. The Management Server also connects via two redundant Ethernet connections to the directors in the cluster. The Management Server is the only VPLEX component that gets configured with a “public” IP on the data center network. From the data center IP network, the Management Server can be accessed via SSH or HTTPS.
This table displays the formula for calculating the internal IP addresses for the Management Server and FC switches. By default, cluster-1 uses a cluster number of 1 and cluster-2 uses a cluster number of 2.
This table displays the formula for calculating the internal IP addresses for the directors. Each director has addresses on two subnets: 128.221.252.x and 128.221.253.x. “A” directors are reachable via the 252 subnet and “B” directors are reachable via the 253 subnet.
This diagram shows the Ethernet Management Server Connections. It also shows the internal IP addresses in cluster-1 which uses a cluster number of 1.
VPLEX medium systems use 4 ports per switch, and VPLEX large systems use 8 ports per switch. The internal COM network is completely private; no other connections are permitted on the switches. The Connectrix DS-300B switches connect to independent UPSs. There is no Cisco option, and the switches are not available for customer use.
This lesson covers supported configurations on a VPLEX environment.
All supported VPLEX configurations ship in a standard, single rack. The shipped rack contains the selected number of engines, one Management Server, redundant Standby Power Supplies (SPS) for each engine, and any other needed internal components. The pair of SPS units provides DC power to the engines in case of a loss of AC power. The SPS batteries can hold a charge for up to 10 minutes; however, the supported maximum hold time is 5 minutes. The dual and quad configurations include redundant internal FC switches for LCOM connection between the directors. In addition, dual and quad configurations contain redundant Uninterruptible Power Supplies (UPS) that service the FC switches and the Management Server.
GeoSynchrony is pre-installed on the VPLEX hardware, and the system is pre-cabled and pre-tested. Engines are numbered 1–4 from the bottom to the top. Any spare space in the shipped rack is to be preserved for potential engine upgrades in the future. Since the engine number dictates its physical position in the rack, numbering remains intact as engines are added during a cluster upgrade.
This table provides a quick comparison of the three different VPLEX single cluster configurations available.
Creating a metadata volume is part of VPLEX installation, whether for a single cluster or a VPLEX Metro.
A logging volume is used for a VPLEX Metro. It is recommended to stripe a logging volume across many LUNs for speed.
There are two levels of accounts (Linux shell and VPLEX Management Console) that exist on the VPLEX system. Currently, roles are not configurable in VPLEX, and the admin and service users have the same abilities. New users are assigned to the Administrator role.
Back-end storage arrays are configured to present LUNs to VPLEX back-end ports. Each presented back-end LUN maps to one VPLEX storage volume. Storage volumes are initially in the “unclaimed” state. Unclaimed storage volumes may not be used for any purpose within VPLEX other than to create metavolumes, which are for system internal use only.

Once a storage volume has been claimed within VPLEX, it may be carved into one or more contiguous extents. A single extent may map to an entire storage volume; however, it cannot span multiple storage volumes.

A VPLEX device is the entity that enables RAID implementation across multiple extents or other devices. VPLEX supports RAID-0 for striping, RAID-1 for mirroring, and RAID-C for concatenation.

A storage view is the masking construct that controls how virtual storage is exposed through the front-end. An operational storage view is configured with three sets of entities as shown next. Once a storage view is properly configured as described and operational, the host should be able to detect and use virtual volumes after initiating a bus-scan on its HBAs. Every front-end path to a virtual volume is an active path, and the current version of VPLEX presents volumes with the product ID “Invista”. The host requires supported multi-pathing software in a typical high-availability implementation.
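To summarize the layering just described (names are generic; no specific commands are implied), the provisioning chain looks like this:

    array LUN           ->  storage volume     (discovered, then claimed)
    storage volume      ->  extent(s)          (an extent cannot span storage volumes)
    extents / devices   ->  device             (RAID-0 stripe, RAID-1 mirror, or RAID-C concatenation)
    top-level device    ->  virtual volume     (presented to hosts with product ID "Invista")
    virtual volume + host initiators + VPLEX front-end ports  ->  storage view (masking construct)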
This module covered VPLEX hardware, communication paths, and connections.
This module covers interfaces and steps used to manage a VPLEX system.
This lesson introduces the VPLEX GUI and CLI.
VPLEX provides two ways of managing the system through the VPLEX Management Console: a command line interface (CLI) and a graphical user interface (GUI). The CLI is accessed by connecting with SSH to the Management Server and then entering the command vplexcli. This command causes the CLI to telnet to port 49500. The GUI is accessed by pointing a browser at the Management Server IP using the https protocol. The GUI is based on Flash and requires the client to have Adobe Flash installed. Every time the Management Console is launched, it creates a session log in the /var/log/VPlex/cli/ directory. The log is created when launching the CLI as well as the GUI. This can be helpful in determining which commands were run while a user was using VPLEX.
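As an illustration of the access path described above, a typical CLI session might look like the following sketch; the management server address and the login account are placeholders, not values from this course:

    # From a workstation on the data center IP network
    ssh service@<management-server-ip>
    service@ManagementServer:~> vplexcli      # telnets to port 49500 on the Management Server
    VPlexcli:/> exit
    # Both CLI and GUI sessions leave a session log here:
    service@ManagementServer:~> ls /var/log/VPlex/cli/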
EMC VPLEX software architecture is object-oriented, with various types of objects defined with specific attributes for each. The fundamental philosophy of the management infrastructure is based on the idea of viewing, and potentially modifying, attributes of an object.
The VPLEX CLI is based on a tree structure similar to the structure of a Linux file system. Fundamental to the VPLEX CLI is the notion of “object context”, which is determined by the current location (pwd) within the directory tree of managed objects. Many VPLEX CLI operations can be performed from the current context. However, some commands may require the user to cd to a different directory before running the command.
The VPLEX CLI supports both standard UNIX option styles: short options (-) and long options (--).
This table lists several useful summary and status commands. Some of these are particularly useful during troubleshooting.
Standard UNIX-style navigation is supported for traversing the directory tree within a VPLEX CLI session. Note that pushd and popd are supported as well, as in some UNIX shells. The tree command can be used to view a map of the entire object tree starting from the current context as the root of the tree.
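For example, a short navigation sequence might look like this; the context paths shown are typical examples:

    VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-volumes
    VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ls        # list objects in the current context
    VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> pushd /engines
    VPlexcli:/engines> popd                                                  # return to the saved context
    VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> tree      # map of the object tree from here down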
The VPLEX GUI provides many of the features that the VPLEX CLI provides. The GUI is very easy to navigate and requires no knowledge of VPLEX CLI commands. Operations are accomplished by clicking on VPLEX icons and selecting desired values.
System Status on the navigation bar shows a graphical representation of your system. It allows you to quickly view the status of your system and some of its major components such as directors, storage arrays, and storage views. The cluster display also shows the size of the cluster configuration (single-engine, dual-engine, or quad-engine). Blue lights on the cluster display represent the number of engines in the cluster. For example, a quad-engine configuration display shows four blue lights, a dual-engine display shows two blue lights, and so on. System Status is the default screen when you log into the GUI. Notice we have a VPLEX Metro configuration with a VPLEX witness in place.
Provisioning Overview on the navigation bar shows a graphical overview of provisioning, and provides steps to begin using the two methods of provisioning storage.
• EZ Provisioning - EZ provisioning allows the user to create virtual volumes directly from selected storage volumes. The Create Virtual Volumes wizard eliminates the individual steps required to claim storage, create extents and devices, and then create virtual volumes on those devices. EZ provisioning uses the entire capacity of the selected storage volume to create a device, and then creates a virtual volume on top of the device. The user can use this method to quickly create a virtual volume that uses the entire capacity of a storage volume. See Creating a virtual volume for more information.
• Advanced Provisioning - Advanced provisioning allows the user to slice (use less than the entire capacity) storage volumes into extents. The user can then use one or more of these extents to create devices and then virtual volumes on these devices. This method requires the user to perform each of these steps individually in the order listed in the Step-by-Step instructions. The user can use this method to slice storage volumes and perform other advanced provisioning tasks such as creating complex devices.
The Distributed Devices screen shows all distributed devices in the system. Use this screen to quickly view the status of a distributed device, view the components of a distributed device, or find a distributed device. By default, distributed devices are sorted by name in ascending alphabetical order. The arrow in the default sort column indicates the direction of the sort. To change the direction of the default sort, click the column header. This screen also shows the following information about distributed devices. To see additional distributed device properties, click the device name (when properties links are shown) to open the Properties dialog box.
• Name - The name of the distributed device. You can change the name in the Properties dialog box.
• Geometry - The underlying RAID structure of the distributed device.
• Capacity - The size of the distributed device.
• Virtual Volume - The name of the virtual volume created on the distributed device.
• Rule Set - The name of the rule set applied to the distributed device. You can change the rule set in the distributed device Properties dialog box.
• Health - The overall health of the distributed device.
• Status - How the distributed device is functioning.
• Components of Selected Device - The components of the selected device.
The data mobility feature allows you to non-disruptively move data on an extent or device to another extent or device in the same cluster. The procedure for moving extents and devices is the same—use either devices or extents as the source or target. The GUI supports moving data from extents and devices to other extents and devices only within the same cluster. To move data between extents or devices in different clusters in a Metro-Plex, use the CLI. You can run up to a total of 25 extent and device migrations concurrently. The system allocates resources and queues any remaining mobility jobs as necessary. View the status and progress of a mobility job in Mobility Central. Mobility Central provides a central location to create, view, and manage all extent and device mobility jobs. Use this screen to:
• Filter the jobs to view by cluster and job type (extent mobility jobs, device mobility jobs, or all jobs);
• Launch the appropriate wizards to create extent and device mobility jobs;
• View the progress and status of mobility jobs;
• Manage mobility jobs (pause, resume, cancel, commit, and so on);
• View the properties of a mobility job; and
• Sort jobs.
The VPLEX Management Console GUI also provides online help under the Support tab. The online support page provides online help details on the VPLEX GUI, VPLEX System Status tab, Provisioning Storage tab, Exporting Storage, and Mobility. The VPLEX online GUI help also links to the VPLEX product documentation, VPLEX Procedure Generator, and Knowledgebase. It can be conveniently referenced anytime help is required.
This lesson focuses on day-to-day operations of EMC VPLEX.
Storage volumes must be claimed, and optionally named, before they can be used in a VPLEX cluster. Storage tiers allow the administrator to manage arrays based on price, performance, capacity, and other attributes. If a tier ID is assigned, the storage with a specified tier ID can be managed as a single unit. Storage volumes without a tier assignment are simply assigned a value of ‘no tier’. Use the --set-tier argument to add or change a storage tier identifier in the storage-volume names from a given storage array.
Storage volumes can be claimed through the VPLEX CLI using the claimingwizard command, or manually using the claim command. In most cases, the Claiming Wizard should be used, as it can claim hundreds of volumes at once. Storage volumes presented to VPLEX must be claimed before extents can be carved into them; claiming means that the VPLEX system can utilize the storage. The GUI uses the Claiming Wizard to claim storage volumes. Depending upon the array type, the Claiming Wizard may or may not require a hints file, which maps the name of each storage volume to be claimed to the name it will have in VPLEX. VPLEX is able to claim Symmetrix storage volumes without a hints file; currently, CLARiiON arrays require one. To begin, open the wizard and select the storage array. Select the mapping file for the storage array and click Next. Then, select the storage volumes you wish to claim and add them to the queue. Finally, review the operations you have queued and click Commit.
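The CLI equivalent might look like the sketch below; the hints-file path and option spellings are assumptions for illustration, so verify them with the command help on your release:

    VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-volumes
    VPlexcli:.../storage-volumes> claimingwizard --help
    VPlexcli:.../storage-volumes> claimingwizard -f /tmp/clariion_hints.txt    # bulk claim using a hints file (option name assumed)
    VPlexcli:.../storage-volumes> claim --help                                  # manual claiming of individual volumes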
Encapsulation is disruptive because the user cannot simultaneously present storage through EMC VPLEX and directly from the storage array without risking data corruption. This is due to read caching at the EMC VPLEX level. The host has to cut over from direct array access to EMC VPLEX virtualized access, which implies a period where all paths to storage are unavailable to the application. EMC VPLEX maintains clean separation of metadata from host data. Metadata resides on metadata volumes and logging volumes. This forms the basis of simple “data-in-place” migration. Basic encapsulation is a one-for-one migration where each native array LUN becomes one virtual volume.

Simple volume encapsulation is the concept of starting with an existing LUN that is in use on a production host and migrating it into VPLEX without disrupting any data. Initially, the array contains a LUN that has live data. First, this LUN must be provisioned to VPLEX as well as the original host. The LUN is then claimed by VPLEX in order to have a claimed storage volume. At this point, a GUI-based wizard can be used to encapsulate the live data. The wizard automatically performs the following three steps:
• First, an extent is created, using the entire capacity of the storage volume;
• Next, a RAID-0 device is created, using the single extent; and
• Third, a virtual volume is created on top of the device.

At this point, the virtual volume is track-for-track identical to the original LUN. The application using the LUN can be stopped, and the host running the application must then discover the virtual volume instead of the LUN directly on the storage array. The application can then be started again, completing the migration of the LUN into VPLEX. When encapsulating virtual volumes, always use the application consistent attribute. The application consistent flag protects against corrupting existing storage volume data or making the storage volume data unavailable. This function is available within the GUI as well.
In order to encapsulate this storage volume one-for-one, click the Create Virtual Volumes button in the lower part of the right-hand pane. The Create Virtual Volumes from Storage Array wizard appears. If you have not already selected the volumes, you must select the storage array. The available claimed storage volumes appear in the left-hand list. Select one or more volumes and click Add, or click Add All to move all of the available storage volumes to the selected volumes list on the right. Click Next when you are done. The virtual volumes are given a default name based on the default names of the intermediate extent and device that will be created. The name for each virtual volume can be changed at this point or left at the default value. Click Commit to commit the creation of the extents, devices, and virtual volumes. The final step shows the created virtual volumes.
Storage volumes presented to VPLEX should be discovered automatically by VPLEX. They will be listed as VPD numbers in the Storage Volumes section of the GUI and in the /clusters/cluster-<n>/storage-elements/storage-volumes context of the CLI. However, if additional volumes are added to VPLEX, VPLEX will need to rescan for the new storage volumes. This can be accomplished through the Management Console GUI or by using the array re-discover command.

From the Management Console GUI Storage Volumes section, extents can be created on storage volumes, ITLs can be displayed for individual storage volumes, and storage volumes can be unclaimed. It is also possible to claim all of the storage on an array. To create extents on storage volumes, select the storage volumes you wish to create extents on and click Next. The next step allows you to alter the size of the extent; by default, the size is the entire storage volume. Click Finish to complete the extent.

Once storage volumes have been discovered and claimed, extents must be carved into the storage volumes. An extent is a slice of a storage volume; each extent consists of a contiguous set of storage from a storage volume. Extents are listed in the Extents section of the GUI. By default, extents are named extent_ followed by a number. Up to 128 extents can be carved into one storage volume. If a storage volume is to be encapsulated, only one extent should be carved into it, consisting of the entire capacity of the storage volume. It is best practice to apply the application consistent attribute to a storage volume if encapsulating it into VPLEX. Extents are created using the largest chunk of contiguous free space on a storage volume. Creating and deleting extents can cause the storage volume to become fragmented; extent mobility can help to defragment the storage volume.
RAID-0, RAID-1, and RAID-C should be considered the basic “building blocks” for building EMC VPLEX devices of arbitrary complexity. GeoSynchrony has the ability to create new devices of type RAID 0 and RAID 1 using the GUI interface. One limitation of performing device creation using the GUI, however, is that devices can only be created from extents, not from other devices—with the exception of distributed devices, which must be created from other devices.
• RAID-0 utilizes 1 or more extents, striping the data over all of the extents equally. While it is possible to use extents of different sizes, the space utilized on each extent will only be as large as the smallest extent. The stripe depth is the size at which data will be split among devices.
• RAID-1 combines exactly two extents of identical size into a device of the same size. The two extents each will contain an identical copy of the data. If different sized extents are selected for a RAID-1 device, the device size will be the size of the smaller extent. The additional capacity on the larger extent will not be used.
• RAID-C utilizes the full capacity of 1 or more extents. Unlike RAID-0 or RAID-1, extent space is not wasted when the extents composing a device are of different sizes. Also, the performance benefits that can be realized using other RAID types do not apply with RAID C, as each extent is mapped whole following the previous extent.
More information on all of these RAID types can be found on Powerlink in the Implementation and Planning Best Practices for EMC VPLEX technical document.
The device creation wizard is shown here. An administrator can choose the type of device or devices to create: RAID-0, RAID-1, or RAID-C (RAID concatenation) are the available options. For now, the last option in this wizard will be ignored. In previous versions of GeoSynchrony, an administrator using the GUI could create only basic (RAID-0 with a stripe depth of 4 KB) local devices. Once a selection has been made, click Next to continue. In order to define a new device, there are four steps in this section of the wizard:
• First, enter a new device name in the text box at the top left.
• Second, if you have chosen a RAID-0 device, select a stripe depth from the drop-down box.
• Third, select one or more extents from the list of available extents on the left.
• Fourth, click the Add Device button to move this selection into the Devices To Be Created list.

Note that at this point, you can only create devices of the same RAID type as the choice selected in the previous step. Once you have defined all of the devices you would like to create, click the Next button to move on to the next step. Devices are created on top of extents; they can also be created on top of other devices. A VPLEX device can have a RAID level of 0, 1, or C, and the CLI or GUI can create all of these RAID types, as shown in the CLI sketch below. Distributed devices can also be created. In the VPLEX CLI, devices are located in the /clusters/cluster-<n>/devices context.
There is a one-to-one mapping between virtual volumes and devices; virtual volumes are created on top of top-level devices. Virtual volumes are the objects that are presented out of the VPLEX front-end ports. They are placed into storage views with host initiators and VPLEX front-end ports, and they are created through the Virtual Volumes option within the GUI. They can also be created using the VPLEX CLI virtual-volume create command. Virtual volumes are located in the /clusters/cluster-<n>/virtual-volumes context. Each virtual volume is assigned a device ID that does not change throughout the virtual volume's life. This device ID can be seen through PowerPath by issuing the powermt display dev=all command.
Within VPLEX, this device ID can be seen by issuing the export storage-view map command. On this screen, virtual volumes can be created, deleted, and torn down. Remote access can also be enabled and disabled on a virtual volume. Tearing down a virtual volume destroys all of the objects beneath it except for the storage volume. Enabling remote access on a virtual volume makes the virtual volume available to hosts at the remote cluster.
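A minimal CLI sketch of the same flow; the device name is a placeholder, and the option for naming the supporting device is an assumption to confirm on your release:

    VPlexcli:/> virtual-volume create -r dev_prod_01                 # create a virtual volume on the top-level device (option assumed)
    VPlexcli:/> ll /clusters/cluster-1/virtual-volumes
    VPlexcli:/> export storage-view map                              # shows how virtual volumes are exposed through storage views
    # On a PowerPath host, the matching device ID is visible with:  powermt display dev=all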
VPLEX must be configured to communicate with specific host types, just as arrays require certain bit settings to be configured in order to communicate with hosts. This is accomplished by registering host initiators. Host initiators automatically become visible to VPLEX if they are zoned to VPLEX's front-end ports. These initiators can then be registered as default, hpux, sun-vcs, or aix; the default host type represents Windows, Linux, and VMware. Initiators can be registered in the VPLEX CLI using the initiator-port register command. Each port of a host's HBA must be registered as a separate initiator. Host initiators must be zoned to VPLEX front-end ports for virtual volume presentation.
A best practice in registering the initiators is to register them with meaningful names. To change the host type, the initiator must first be unregistered and then re-registered with a different host type. Currently, you cannot register all initiators of a single host with a common registration; each HBA port must be registered separately.
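For example, registering a single HBA port might look like this; the initiator name and WWN are placeholders, and on most releases the command is invoked under the export context:

    VPlexcli:/> export initiator-port register -i host01_hba0 -p 0x10000000c9aabbcc
    VPlexcli:/> ll /clusters/cluster-1/exports/initiator-ports       # registered initiators are listed here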
Storage views can be created and deleted, and a map of the storage view can be displayed in the GUI. Each storage view requires initiators, VPLEX front-end ports, and virtual volumes. Each storage view should have one VPLEX front-end port from each director in the cluster to provide maximum redundancy. Until all three components are added to the storage view, the view is listed as inactive; once all three components have been added, the view automatically becomes active. Storage views can be created using the storage-view create -n <view-name> -p <port> command. Initiators are added to the storage view using the storage-view addinitiatorport -v <view-name> -i <initiator> command. The external LUN ID can also be specified for the virtual volume when adding a virtual volume to a storage view.
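Putting these pieces together, a view might be built from the CLI roughly as follows; the view, port, initiator, and volume names are placeholders, and the exact syntax for assigning a specific LUN ID is omitted because it varies by release:

    VPlexcli:/> export storage-view create -n host01_view -p P000000003CA00147-A0-FC00
    VPlexcli:/> export storage-view addinitiatorport -v host01_view -i host01_hba0
    VPlexcli:/> export storage-view addvirtualvolume -v host01_view -o prod_01_vol
    # The view becomes active once it contains at least one port, one initiator, and one virtual volume.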
When creating a storage view using the GUI in GeoSynchrony, an administrator is given the option to specify the LUN numbers that the host will see. This is a new feature in the GUI, though it has been available in the CLI. In the Create Storage View wizard, there are two options for customizing the host LUN numbers. Auto-assigning the LUN numbers is the default and selects the next available LUN IDs; the starting value can be changed. When a different starting number is specified with the auto-assign option selected, the LUN numbers are chosen sequentially following the specified starting number. The second option is to manually assign LUN numbers. When this option is selected, text boxes appear next to each LUN and a LUN number can be entered. The numbers initially start with whatever values they held when Auto-Assign was selected.
This lesson analyzes possible failure scenarios and how to recover from them.
The collect-diagnostics command will collect logs, cores, and configuration information from the Management Server and directors. This command will produce a tar.gz file.
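Typical usage is to run the command and then copy the archive off the Management Server; the output directory shown is an assumption, since the command itself reports where the archive was written:

    VPlexcli:/> collect-diagnostics
    # Then, from an external workstation:
    scp service@<management-server-ip>:/diag/collect-diagnostics-out/*.tar.gz .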
LUNs that were freshly added to a storage array may not initially appear in VPLEX. The array re-discover command may need to be run to discover the new LUNs.
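For example (the array context name below is a placeholder for whatever ls shows on your system):

    VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays
    VPlexcli:.../storage-arrays> ls                                   # note the exact array name
    VPlexcli:.../storage-arrays> cd EMC-CLARiiON-APM00123456789
    VPlexcli:.../EMC-CLARiiON-APM00123456789> array re-discover       # picks up the newly presented LUNs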
A VPLEX system may fail to claim disks if there is not a metadata volume. This is because the metadata volume contains all of the VPLEX object information. It is important to ensure that your VPLEX system has a metadata volume before attempting to claim disks.
If VPLEX loses access to its metadata volume, the system will not allow any configuration changes to occur. If the metadata volume cannot be recovered, but the system is still running, a new metadata volume can and should be created immediately using the metavolume backup command. The new metadata volume must be the same size or greater. Currently, EMC recommends using a 78 GB device for the metadata volume. The metavolume move command makes the backed-up metadata volume the active metadata volume. The old metadata volume should then be destroyed; otherwise, if a full reboot occurs, the system could start using the old metadata volume. If the metadata volume cannot be recovered, but the system is still running:
• Create a backup metadata volume; • Make the backup metadata volume the active metadata volume; and • Destroy the old metadata volume.
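In CLI terms, and following the sequence above, the recovery is roughly the sketch below; arguments are omitted, and the command spelling should be checked on your release (some versions spell it meta-volume):

    VPlexcli:/> metavolume backup ...     # create a backup metadata volume on a device of equal or greater size (about 78 GB recommended)
    VPlexcli:/> metavolume move ...       # make the backup the active metadata volume
    VPlexcli:/> metavolume destroy ...    # destroy the old metadata volume so it cannot be reused after a full reboot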
If a VPLEX cluster fails and the metadata volume cannot be recovered, the cluster should be restarted with a backup metadata volume: activate a backup copy of the metadata volume created prior to the failure. It is good practice to create backups on a regular basis, because any configuration changes made since the most recent backup was created will be lost.
This module covered interfaces and steps used to manage a VPLEX system.
This course covered the VPLEX system, its architectural components, and management mechanisms. This concludes the training. Proceed to the course assessment on the next slide.