VNX Fundamentals SRG
Welcome to VNX Fundamentals. Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C‐Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology, Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz, DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E‐Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM‐Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. © Copyright 2014 EMC Corporation. All rights reserved. Published in the USA. Revision Date: February 2014 Revision Number: MR-1WP-VNXFD
This course covers the EMC VNX Series. It includes the EMC VNX Series models, architecture, features, functions, capabilities, and management. The peripheral products associated with VNX are introduced.
This module focuses on the VNX product benefits and use cases. Peripheral and adjacent products, tools, and options are introduced and positioned at a fundamental level.
The VNX Series unifies EMC's file-based and block-based offerings into a single product that can be managed with one easy-to-use GUI. In addition, object storage solutions are available that make use of VNX storage systems. The VNX Series is a storage solution designed for a wide range of environments, from midtier to enterprise. Back-end storage connectivity is via Serial Attached SCSI (SAS), which provides up to a 6 Gb/s connection. The VNX unified storage platforms support the NAS protocols (CIFS for Windows and NFS for UNIX/Linux, including pNFS) and the patented Multi-Path File System (MPFS), as well as native block protocols (iSCSI and Fibre Channel).

VNX is optimized for:
• Core IT applications with transactional workloads – Oracle, SAP, SQL, Exchange, or SharePoint
• Server virtualization and end-user computing/VDI
• Applications that need traditional file, block, or unified storage

VNX is also a good fit for partner-led configurations optimized for virtual applications with VMware and Hyper-V integration. The VNX with MCx (multi-core optimization) architecture unleashes the power of Flash, taking full advantage of the latest Intel multi-core technology.
The pressure in Exchange environments to eliminate backup windows and reduce recovery times is stronger than ever. With dynamic, policy-based protection management, these environments can easily exceed demanding recovery objectives using EMC VNX advanced technologies such as snapshots and continuous protection. This also helps use storage resources efficiently and optimize protection for Microsoft Exchange. VNX storage array performance remains optimal under heavy Exchange workloads.

Traditionally, the best practices for optimizing storage performance involved manual, resource-intensive processes. The VNX gives SQL administrators an easy-to-use and potentially hands-off mechanism for optimizing the performance of the most demanding applications. Automating the movement of data between storage tiers saves both time and resources. The VNX eliminates the need to spend hours manually monitoring and analyzing data to determine a storage strategy, then maintaining, relocating, and migrating LUNs (VNX logical volumes) to the appropriate storage tiers.
Online Transaction Processing (OLTP) database applications tend to be mission-critical and usually have stringent I/O latency requirements. Traditionally, these OLTP databases are deployed on a huge number of rotating Fibre Channel (FC) spindles to meet the low-latency requirement; consequently, the effective capacity utilization of these spindles is very low. VNX reduces the need to buy more drives to keep up with database growth. The VNX also automatically and nondisruptively migrates hot and cold data between the available storage tiers, improving effective storage utilization.

A common business requirement in SAP environments is reducing TCO while improving performance and service level delivery. Frequently, the responsiveness of sensitive SAP applications deteriorates over time due to increased data volumes, unbalanced data stores, and changing business requirements. By using VNX for block data, SAP deployments can gain a significant performance boost without the need to redesign applications, adjust data layouts, or reload significant amounts of data. With automated sub-LUN tiering and extended cache, administrators can properly balance data distribution across the tiers, optimizing both capacity and performance.
The VNX Series is optimized for virtualization: it supports all leading hypervisors and simplifies desktop creation and storage configuration. VNX leverages advanced technologies to optimize performance for the virtual desktop environment, helping support service level agreements. Virtualization management integration allows the VMware administrator or the Microsoft Hyper-V administrator to extend their familiar management console to VNX-related activities.

VMware vStorage APIs for Array Integration (VAAI), for both SAN and NAS connections, allow the VNX to be fully optimized for virtualized environments. EMC Virtual Storage Integrator (VSI) is targeted at the VMware administrator; VSI supports VNX provisioning within vCenter, provides full visibility into physical storage, and increases management efficiency. In the Microsoft Windows Server 2012 and Hyper-V 3.0 space, Offloaded Data Transfer (ODX) allows the VNX to be fully optimized for Windows virtual environments by offloading storage-related functions from the server to the storage system. EMC Storage Integrator (ESI) for Windows provides the ability to provision block and file storage for Microsoft Windows or for Microsoft SharePoint sites.
EMC VNX for File provides a GUI tool that directly applies GPO security settings to file systems. It has the same effect as applying the security update from a Windows server, but it takes significantly less time on large and deep directories because the security settings are managed locally on the VNX.
This module covered the VNX solution and its benefits, the key use cases, and the peripheral products associated with VNX storage systems.
This module focuses on the VNX modular architecture, terminology, components, configurations, and models.
The VNX modular architecture is designed to deliver a native block and file solution with dedicated components that are optimized for the specific use case and that leverage the hardware and core technologies across both the file and block implementations. This picture illustrates a unified storage product with scalable Data Movers, Control Station (not shown), storage processors, I/O (Input/Output) modules, link control cards (LCC), disk drives, and power supplies.
VNX disk drives are held in Disk Array Enclosures (DAEs). A storage Pool is a single repository of physical disks from which LUNs may be created, and storage can be provisioned from either Pools or RAID Groups. A RAID Group is a set of disks (up to 16 in a group) of the same type, capacity, and redundancy, on which the administrator can create one or more Classic LUNs. A Pool is a collection of disks dedicated to creating LUNs and is somewhat similar to a RAID Group; however, a Pool can contain anywhere from a few disks to hundreds of disks, whereas a RAID Group is limited to 16 disks. Pools can be heterogeneous (made up of more than one type of drive) or homogeneous (composed of only one type of drive). A LUN (Logical Unit Number) is the identifying number of a SCSI or iSCSI object that processes SCSI commands; it is the last part of the SCSI address for a SCSI object. Strictly speaking, the LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.
The storage processor is the core of the VNX platform. It delivers the VNX Block components and services and runs VNX OE for Block. The storage processor supports block data with UltraFlex I/O technology (covered later in this module), which supports the Fibre Channel, iSCSI, and FCoE protocols. It provides access for all external hosts and for the file side of the VNX array. The VNX platform storage processors operate in Active/Active mode by design; Active/Active means that both controllers are active/online and receiving host I/O for the back-end storage simultaneously.
The VNX Data Mover X-Blade runs the VNX Operating Environment for File, which is optimized to move data between the storage and the IP network. The Data Movers provide highly available file-level access to users and applications via the NFS, CIFS, and MPFS protocols. UltraFlex technology provides Ethernet access at 1 or 10 Gb/s using either optical or copper cables. The Data Mover X-Blade stores and accesses data through the Storage Processors. A VNX Data Mover can be configured as a standby Data Mover, which serves as a hot spare for up to seven primary (online) Data Movers. If a primary Data Mover goes down, the standby Data Mover takes its place with little or no disruption of service.
The Control Station is included in all VNX Unified and VNX File storage arrays to provide management and monitoring functions. The Control Station runs a customized Linux kernel and operates the VNX for File management services. A second Control Station may be present in some models for redundancy. The Control Station also provides a secure administrative interface to all file-server components – a single point of management for the whole VNX solution, which can be isolated to a secure, private network. The Control Station software is used to install and configure the system, monitor the health of the primary Data Movers, and initiate failover to the standby Data Mover if a primary blade fails. It is also used to monitor the environmental conditions and performance of all components and to implement the call-home and dial-in support features.
The Disk Processor Enclosure (DPE) is 3U in size and includes redundant Storage Processors, two Power Supply/Cooling modules, and the first set of disk drives. The basic difference between a Disk Processor Enclosure (DPE) and a Storage Processor Enclosure (SPE) is that the DPE contains both Storage Processors and disks, whereas the SPE does not contain disks. The Storage Processor Enclosure (SPE) is 4U in size and houses the Storage Processors (SP A and SP B), UltraFlex I/O modules, management modules, and Power Supply/Cooling modules.

All Disk Array Enclosures (DAEs) accommodate two or four Link Control Cards (LCCs) and two Power Supply/Cooling modules (PS A and PS B). The LCC's main function is to act as a SAS expander and provide enclosure services for all drive slots. Each LCC independently monitors the environmental status of the entire enclosure and communicates the status to the storage processors.

A Data Mover Enclosure (DME) is required for file-level access. A DME contains one or two Data Mover X-Blades. Each X-Blade has Fibre Channel ports to access data through the SPs as well as Ethernet ports to provide file data access to users and applications.
The power supplies provide the information necessary for Unisphere to monitor and display the ambient temperature and power consumption of each power supply. The power supplies are field replaceable units (FRUs). Each power supply includes two power LEDs and a status LED. The power supplies provide adaptive cooling, in which the array adjusts the power supply fan speeds to spin only as fast as needed to ensure sufficient cooling. The power supplies are hot-swappable and redundant.

The Standby Power Supply (SPS) provides battery power to the DPE, and to the DAE and SPE on some VNX models. The VNX with MCx uses a DPE that has the SPS built into the VNX SP (Battery On Board), except for the VNX8000 (SPE based), which uses independent SPSs. The VNX series platform can support up to two standby power supplies (dual SPS).
VNX Series uses I/O Modules in various combinations for front and back‐end connectivity. Each I/O module is protocol independent and hot swappable. Options for block I/O include Fibre Channel, Fibre Channel over Ethernet (FCoE) and iSCSI. Options for file I/O include both 1 Gb/s and 10 Gb/s Ethernet with either copper or optical connections.
The VNX series ships as a block-only, file-only, or unified file and block system. The three basic configurations are:
• Block – supports block data
• File – supports file data
• Unified – supports both block and file data
A unified configuration has the same components as a file configuration.
The VNX with MCx architecture consists of six VNX Series platforms that scale up to 6 petabytes. The VNX with MCx architecture unleashes the power of Flash to address the high-performance, low-latency requirements of virtualized applications. The VNX Series models comprise 5000-, 7000-, and 8000-class systems; the available models are the VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, and VNX8000. The hardware and connectivity options scale with each model, providing more power and options throughout the model range. As seen earlier, the VNX with MCx architecture takes full advantage of the latest Intel multi-core technology; MCx distributes all VNX data services across all cores – up to 32.
The previous generation comprises 5000- and 7000-class systems. The available models are the VNX5100 (block-only), VNX5300, VNX5500, VNX5700, and VNX7500. The hardware and connectivity options scale with each model, providing more power and options throughout the model range.
VNX Series gateway platforms are NAS heads only. These platforms access external storage such as block-based VNX, CLARiiON, Symmetrix, or combinations of these platforms for optimal performance or TCO. VNX gateways allow the back-end storage to be pooled among NAS, MPFS, and FC/iSCSI SAN hosts, which improves storage usage and consolidates management. Gateways are ideal for environments with existing Fibre Channel/iSCSI SANs, and a VNX gateway is well suited when both performance and capacity scaling are required. A VNX gateway supports up to four back-end arrays concurrently, delivering increased I/O bandwidth to the front-end X-Blades.
The VNX-F all-flash configurations are based on the VNX7600 and leverage the MCx multi-core storage software operating environment. The VNX-F is specifically configured as an all-flash array and therefore does not include tiered storage. The flash storage implements enterprise multi-level cell (eMLC) flash technology. VNX-F is available in four pre-configured, block-only, fixed-capacity configurations: Classic LUNs, dual-node symmetric Active/Active, preconfigured RAID 5 (8+1), with optional block deduplication and compression. VNX-F is optimized for virtualization and is well suited for deploying virtualized applications on hypervisor technologies, cloud deployments, high transaction rates, and high-demand databases. (Visit http://support.emc.com for details.) VNX-F does not include tiered storage, HDDs (as data drives), file protocols (NFS and CIFS), additional I/O modules, or RAID types other than what is pre-configured.
Atmos VE (Virtual Edition) with VNX in a virtualized infrastructure can be deployed to support web/cloud applications and storage-as-a-service. Atmos VE on VNX has four key components: VNX storage, vSphere, Atmos VMs, and application integration points or access methods. The key here is that applications consuming storage from any VM will see the system as one large object store with a unified namespace.
This module covered the architecture and components of the VNX storage solution. It also identified the configurations and models of the VNX storage solution.
This module focuses on the VNX features and capabilities and their usage in an IT environment. It explains the high availability, storage efficiency, performance, local and remote protection, and additional features. This module also describes the VNX software suites and packs.
This lesson covers the various VNX series software suites.
VNX Series array software is packaged in a variety of suites. Each suite contains a unique set of solutions to improve efficiency and availability by simplifying and automating many storage tasks. Functionality is purchased in the form of a suite, and combinations of suites are packaged into software packs. The Total Efficiency Pack contains all of the software suites, while the Total Protection Pack contains only the protection suites. The following suites are available for VNX arrays:
• FAST Suite – automatically optimizes the array for the highest system performance and the lowest storage cost simultaneously.
• Security and Compliance Suite – keeps data safe from changes, deletions, and malicious activity.
• Local Protection Suite – used for protecting and repurposing data.
• Remote Protection Suite – protects data from localized failures, outages, and disasters.
• Application Protection Suite – automates application copies and proves compliance.
This lesson covers Hardware and Component Redundancy, LUN Trespass and VNX Active/Active mode, Symmetric Active/Active mode, and Network high availability features.
VNX availability and redundancy features include:
• Dual storage processors with mirrored write cache. Each storage processor contains both the primary cached data for its LUNs and a secondary copy of the cache for its peer storage processor.
• Scalable and redundant X-Blades (Data Movers).
• A second Control Station, present in some models, for redundancy.
• RAID protection levels 0, 1/0, 5, and 6, which can co-exist in the same array simultaneously to match different protection requirements.
• Two data ports on each disk drive, giving two separate paths to each drive. If an SP fails, or any component of a path fails, the drive can still be accessed by the other SP.
• Proactive hot sparing, which enhances system robustness and delivers maximum reliability and availability.
• Redundant power supplies.
• Battery backup to allow for an orderly shutdown and cache de-staging to the vault disks, ensuring data protection in the event of a power failure. The vault drives (the first four disks of the first enclosure) provide the de-stage area for data in write cache that has not yet been committed to disk.
LUNs are managed and accessed by a single SP; this is called LUN ownership. LUN ownership is automatically assigned to storage processors in a round-robin fashion when a LUN is bound. Although both SPs in a VNX array have access to every LUN, I/O to each LUN comes only from the SP that is the LUN owner. In the example on the left, SP A owns LUN 0. If a path goes offline – for example, if SP A is being rebooted during a software upgrade – all LUNs owned by that SP are "trespassed" to the other SP. With Asymmetric Logical Unit Access (ALUA), a request-forwarding implementation, an I/O received by an SP that does not own a LUN is redirected to the other SP without trespassing the LUN. This should not be confused with an active/active model, because I/O for a given LUN is not serviced by both SPs; LUN ownership is still in place, and I/O is redirected to the owning SP. In the event of a front-end path failure, there is no need to trespass LUNs immediately: the Upper Redirector driver routes the I/O to the SP owning the LUNs through the CMI channel. Likewise, in the event of a back-end path failure, the Lower Redirector routes the I/O to the owning SP through the CMI channel. This redirection is transparent to the host.
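To make the ownership and forwarding behavior concrete, here is a minimal Python sketch of the concept (an illustration only, not the actual VNX redirector code; the class and method names are invented for the example):

    # Conceptual sketch of ALUA-style request forwarding (not actual VNX code).
    # Each LUN is owned by one SP; I/O arriving at the peer SP is forwarded to
    # the owning SP over an internal channel instead of trespassing the LUN.

    class StorageProcessor:
        def __init__(self, name):
            self.name = name
            self.peer = None                  # set once both SPs exist (models the CMI link)

        def handle_io(self, lun, data):
            if lun.owner is self:
                return f"{self.name} serviced I/O for LUN {lun.lun_id}"
            # Non-owning SP: forward to the peer rather than changing ownership.
            return f"{self.name} redirected to {self.peer.name}: " + self.peer.handle_io(lun, data)

    class Lun:
        def __init__(self, lun_id, owner):
            self.lun_id = lun_id
            self.owner = owner                # the single owning SP

    spa, spb = StorageProcessor("SPA"), StorageProcessor("SPB")
    spa.peer, spb.peer = spb, spa
    lun0 = Lun(0, owner=spa)
    print(spa.handle_io(lun0, b"write"))      # serviced directly by the owner
    print(spb.handle_io(lun0, b"write"))      # forwarded to SPA, no trespass needed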
With the VNX with MCx architecture, EMC has introduced the first phase of an active/active access model. Symmetric Active/Active allows clients to access a Classic LUN (not supported on Pool LUNs) simultaneously through both SPs, for improved reliability, ease of management, and improved performance. Since all paths are active, there is no need for a storage processor to "trespass" a LUN "owned" by the other storage processor on a path failure, which eliminates application timeouts. The same is true for an SP failure: the surviving SP simply picks up all of the I/O from the host through the alternate optimized path. This capability also enables additional LUN performance, because more back-end bandwidth is available than a single SP or front-end port can serve.
For high availability, VNX uses EtherChannel and the Link Aggregation Control Protocol (LACP). EtherChannel combines multiple physical ports (two, four, or eight) into a single virtual device to provide fault tolerance for Ethernet ports and cabling. If a link is lost, traffic fails over to another link within the channel, and all traffic on the channel is then distributed across the remaining active links. The IEEE 802.3ad Link Aggregation Control Protocol (LACP) is an alternative to EtherChannel that also allows multiple Ethernet links to be combined into a single virtual device; LACP supports link aggregations of two or more ports. Although link aggregation provides more overall bandwidth than a single port, the connection to any single client runs through one physical port and is therefore limited by the bandwidth of that port.
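The last point – that a single client conversation is pinned to one physical port – can be illustrated with a small sketch of the hash-based member selection commonly used by link aggregation (the hash inputs and algorithm here are illustrative assumptions; actual switches and Data Movers may hash on different fields):

    # Illustrative hash-based distribution across an aggregated link (LACP-style).
    # Every frame of a given source/destination pair hashes to the same member
    # port, so one client never exceeds the bandwidth of a single physical link.
    import hashlib

    def select_port(src_mac, dst_mac, ports):
        digest = hashlib.md5(f"{src_mac}-{dst_mac}".encode()).hexdigest()
        return ports[int(digest, 16) % len(ports)]

    members = ["eth0", "eth1", "eth2", "eth3"]
    print(select_port("00:60:16:aa:aa:01", "00:60:16:bb:bb:01", members))
    print(select_port("00:60:16:aa:aa:02", "00:60:16:bb:bb:01", members))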
EMC VNX FailSafe Network (FSN) provides high availability in the event of an Ethernet switch failure by connecting the two components of the FSN to separate switches. Unlike EtherChannel and LACP, an FSN can maintain full bandwidth when failed over, given the same bandwidth on both the active and standby configurations, and it does not require any special switch configuration. FailSafe Networks are configured as sets of ports, EtherChannels, link aggregations, or combinations of these. Only one connection in an FSN is active at a time. If the FailSafe device detects that the active connection has failed, the Data Mover automatically switches to the surviving partner, which assumes the same identity as the failed connection.
VNX-CA (Continuous Availability) provides increased application availability options with the ability to transparently move application data between storage arrays or between data centers. It provides the ability to scale capacity, I/O throughput, and performance, all non-disruptively. VNX-CA is based on the VNX5400 and leverages EMC VPLEX technology. EMC VPLEX allows customers to move applications and data between VNX arrays within a data center or across data centers without impacting hosts or users. VNX-CA also supports a Fibre Channel block-only architecture. The VNX-CA use cases, in addition to the continuous availability examples, include:
• Application and data mobility
• Non-disruptive IT refreshes
• Load balancing within a VNX-CA4 or between VNX-CAs
• Enhanced VMware capabilities across VNX-CAs using vMotion, DRS, HA, and/or FT
• Extending Oracle RAC over distance
• Stretching other popular server clusters across VNX-CA arrays
VNX-CA is available in two configurations:
• VNX-CA2 (two storage controllers) is the entry solution, intended to be expanded non-disruptively with additional capacity, storage, and performance. The VNX-CA2 consists of one VNX5400 (block-only) with the appropriate capacity (up to 250 drives), one VPLEX engine, and two Fibre Channel switches (internal to the VNX-CA).
• VNX-CA4 (four storage controllers) is ideal for a continuously available storage architecture that allows more storage to be virtualized and enables cross-array mobility, workload balancing, and uninterrupted maintenance on the array. This option offers optimal performance and capacity. The VNX-CA4 consists of two VNX5400s (block-only) with the appropriate capacity (up to 500 drives), one VPLEX engine, and two Fibre Channel switches (internal to the VNX-CA).
This lesson covers the storage efficiency features including Virtual Provisioning, Block Deduplication and Compression, File Deduplication and Compression, File Level Retention (FLR), and User Quotas.
VNX Virtual Provisioning improves storage capacity utilization by allocating storage only as it is needed. File systems as well as LUNs can be logically sized to the required capacities and physically provisioned with less capacity, which means storage does not need to sit idle in a file system or LUN until it is used. VNX Virtual Provisioning safeguards allow users to keep track of thinly provisioned file systems and LUNs: by reporting on actual physical usage, total logical size, and available capacity, administrators can both predict and set alerts to avoid running out of physical capacity.
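A minimal sketch of the allocate-on-demand idea behind thin provisioning follows (conceptual only; the slice size and names used are assumptions for illustration, not the VNX allocation engine):

    # Minimal sketch of thin (virtual) provisioning: a LUN reports a large
    # logical size but consumes physical slices from the pool only as data
    # is written. The 256 MB slice size is an assumption for illustration.
    class ThinLun:
        SLICE = 256 * 2**20

        def __init__(self, logical_size, pool_free):
            self.logical_size = logical_size
            self.pool_free = pool_free
            self.allocated = {}               # slice index -> backing slice

        def write(self, offset, length):
            first = offset // self.SLICE
            last = (offset + length - 1) // self.SLICE
            for idx in range(first, last + 1):
                if idx not in self.allocated:
                    if self.pool_free < self.SLICE:
                        raise RuntimeError("pool out of space - capacity alert")
                    self.pool_free -= self.SLICE
                    self.allocated[idx] = bytearray()   # slice allocated on demand

        @property
        def consumed(self):
            return len(self.allocated) * self.SLICE

    lun = ThinLun(logical_size=2 * 2**40, pool_free=500 * 2**30)   # 2 TB logical, 500 GB free
    lun.write(0, 10 * 2**20)                                       # first 10 MB written
    print(f"logical {lun.logical_size >> 30} GB, physically consumed {lun.consumed >> 20} MB")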
VNX with MCx architecture platforms support block-level deduplication. Deduplication happens at an 8 KB block granularity and requires an 8 KB mapping. It can be set at the pool LUN level, and thin, thick, and deduplicated LUNs can coexist in a single Pool. The deduplication domain is the Pool itself; deduplication does not take place across Pools. Deduplication occurs out of band and is throttled to minimize the impact on host I/O.

The Block Compression feature provides a further capacity reduction beyond the savings initially provided by Virtual Provisioning (compression requires thin LUNs). Block Compression uses standard data compression algorithms to reduce the space allocated to LUNs and may be applied to Classic LUNs or Pool LUNs. Block Compression operates in the background while the LUNs remain available for host access. All compression and decompression processing is handled by the VNX, so no server cycles are consumed and no additional server software is required. When sufficient new data has been written to a compressed LUN, the system automatically attempts to compress the uncompressed data in the background.
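The following sketch illustrates fixed-block deduplication at the 8 KB granularity described above (a conceptual example of the general technique using a content hash; it is not the VNX deduplication implementation):

    # Conceptual fixed-block deduplication at 8 KB granularity (illustration of
    # the general technique, not the VNX dedup engine): identical 8 KB chunks
    # are stored once and each logical block keeps a pointer to the shared chunk.
    import hashlib

    CHUNK = 8 * 1024

    def dedupe(data):
        store = {}        # fingerprint -> chunk data, stored once per unique chunk
        mapping = []      # logical order of fingerprints (the 8 KB mapping)
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            fp = hashlib.sha256(chunk).hexdigest()
            store.setdefault(fp, chunk)
            mapping.append(fp)
        return store, mapping

    data = b"A" * CHUNK * 3 + b"B" * CHUNK     # three identical chunks plus one unique
    store, mapping = dedupe(data)
    print(f"logical chunks: {len(mapping)}, unique chunks stored: {len(store)}")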
VNX also supports file-level deduplication and compression. VNX for File performs all deduplication processing as a background, asynchronous operation that acts on file data after it has been written into the file system; it does not process active data or data as it is being written. Deduplication activity can be throttled to avoid impact on processes serving client I/O. Once candidate files have been identified for deduplication, two activities take place:
• Compression – accomplished using components similar to those used for VNX for Block LUN compression.
• Deduplication – file-level deduplication, or single instancing, is accomplished using components of another EMC product, Avamar; duplicate files are identified with a hashing algorithm from Avamar.
VNX File Level Retention (FLR) is a capability of VNX for File that protects files in a NAS environment from modification and deletion until a user-specified retention date. FLR enables organizations to create a permanent, unalterable set of files and directories and ensures the integrity of the data. At the NAS level this effectively provides what is traditionally known as Write Once Read Many (WORM) access within VNX for File, and it includes tools to help users manage FLR automatically.
A quota is a limit placed on the number of allocated disk blocks and/or files that a user, group, or tree can have on a production file system (PFS). In other words, quotas provide a way of controlling the amount of disk space and the number of files that a user, group, or tree can consume. Quotas can be managed via the VNX Control Station (CLI or GUI) or via Windows. Limiting usage is not the only application of quotas: the quota tracking capability can also simply track and report usage. There are three implementation choices:
• Hard quota – denies space on disk and generates an error when the quota is reached.
• Soft quota – offers a grace period before starting to deny space on disk.
• Tracking – disk usage is tracked, but no limits are imposed.
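A small sketch of these three policies, using invented names, might look like this (illustrative only; actual quota enforcement is performed by the Data Mover):

    # Sketch of the three quota policies: hard (deny at the limit), soft
    # (grace period before denying), and tracking (report only). Names and
    # defaults are invented for the example.
    import time

    class Quota:
        def __init__(self, policy, limit_blocks, grace_seconds=7 * 24 * 3600):
            self.policy, self.limit, self.grace = policy, limit_blocks, grace_seconds
            self.used, self.exceeded_since = 0, None

        def allocate(self, blocks):
            new_total = self.used + blocks
            if self.policy == "hard" and new_total > self.limit:
                raise OSError("EDQUOT: hard quota exceeded")
            if self.policy == "soft" and new_total > self.limit:
                self.exceeded_since = self.exceeded_since or time.time()
                if time.time() - self.exceeded_since > self.grace:
                    raise OSError("EDQUOT: soft quota grace period expired")
            self.used = new_total              # tracking (and in-limit) simply records usage

    q = Quota("soft", limit_blocks=1000)
    q.allocate(1200)                           # allowed for now; the grace period has started
    print(q.used, q.exceeded_since is not None)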
This lesson covers the VNX performance features including FAST VP and FAST Cache.
VNX storage systems support an optional FAST Cache, consisting of a storage pool of Flash disks configured to function as a cache. FAST Cache provides low latency and high I/O performance without requiring a large number of Flash disks, and it can be expanded while I/O to and from the storage system is occurring. FAST Cache is based on the locality of reference of the data set: by promoting a frequently referenced data set to the FAST Cache, the storage system services subsequent requests for that data faster from the Flash disks that make up the cache, reducing the load on the underlying disks in the LUNs that contain the data. FAST Cache consists of one or more pairs of mirrored disks (RAID 1) and provides both read and write caching. For reads, the FAST Cache driver copies data off the disks being accessed into the FAST Cache. For writes, FAST Cache effectively buffers the data waiting to be written to disk. In both cases, the workload is off-loaded from slow rotating disks to the faster Flash disks in FAST Cache. The performance boost provided by FAST Cache varies with the workload and the cache size.
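Conceptually, promotion based on locality of reference can be sketched as follows (an illustration only; the chunk tracking and promotion threshold shown here are assumptions, not the actual FAST Cache policy engine):

    # Conceptual promotion policy for a Flash read/write cache: data that is
    # referenced repeatedly is copied onto Flash so later requests are serviced
    # there. The chunk tracking and three-access threshold are illustrative.
    PROMOTE_THRESHOLD = 3

    class FlashCache:
        def __init__(self, capacity_chunks):
            self.capacity = capacity_chunks
            self.cache = {}                    # chunk id -> data held on Flash
            self.hits = {}                     # chunk id -> recent access count

        def read(self, chunk_id, read_from_disk):
            if chunk_id in self.cache:
                return self.cache[chunk_id]                    # hit: serviced from Flash
            data = read_from_disk(chunk_id)                    # miss: serviced from rotating disk
            self.hits[chunk_id] = self.hits.get(chunk_id, 0) + 1
            if self.hits[chunk_id] >= PROMOTE_THRESHOLD and len(self.cache) < self.capacity:
                self.cache[chunk_id] = data                    # promote hot data to Flash
            return data

    cache = FlashCache(capacity_chunks=2)
    for _ in range(4):
        cache.read(7, lambda cid: f"data-{cid}")
    print(7 in cache.cache)                                    # True: chunk 7 was promoted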
VNX FAST VP tracks data in a Pool at a granularity of 256 MB – a slice – and ranks slices according to their level of activity and how recently that activity took place. Slices that are heavily and frequently accessed are moved to the highest tier of storage, typically Flash drives, while the data that is accessed least is moved to lower-performing but higher-capacity storage – typically NL-SAS drives. This sub-LUN granularity makes the process more efficient and enhances the benefit gained from the addition of Flash drives. The ranking process is automatic and requires no user intervention. Relocation of slices occurs according to a user-configurable schedule, which defaults to a daily relocation; users can also start a manual relocation if desired.
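The ranking-and-relocation idea can be sketched in a few lines (conceptual only; the scoring and scheduling logic inside FAST VP is more sophisticated than this):

    # Sketch of slice ranking and relocation: slices are ordered by activity and
    # poured into tiers from highest-performing to highest-capacity. The scoring
    # and scheduling in the real feature are more sophisticated.
    def relocate(slices, tiers):
        """slices: {slice_id: activity score}; tiers: [(name, capacity_in_slices), ...]
        listed from the highest tier (Flash) down to the capacity tier (NL-SAS)."""
        placement, ranked = {}, sorted(slices, key=slices.get, reverse=True)
        for name, capacity in tiers:
            for slice_id in ranked[:capacity]:
                placement[slice_id] = name
            ranked = ranked[capacity:]
        return placement

    activity = {"s1": 980, "s2": 12, "s3": 450, "s4": 3}       # per-slice activity scores
    tiers = [("Flash", 1), ("SAS", 2), ("NL-SAS", 10)]
    print(relocate(activity, tiers))                           # hottest slice lands on Flash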
This lesson covers the VNX local protection features that are used for protecting and repurposing data. It includes an overview of VNX SnapSure (File), VNX Snapshot (Block), VNX SnapView (Block), and RecoverPoint/SE CDP.
VNX SnapSure is a Local Protection Suite software package that creates a point-in-time view of a file system. The SnapSure checkpoint "file system" is not a copy or mirror image of the original file system; rather, it is a view of what the production file system looked like at a particular point in time. VNX SnapSure saves disk space and time by allowing multiple snapshot versions of a VNX file system. These logical views are called snapshots, and SnapSure snapshots can be read-only or read/write.
VNX Snapshot is a Local Protection Suite software package that improves on the SnapView snapshot capabilities by integrating better with Pools. A VNX Snapshot is a point-in-time copy of a source LUN that uses a redirect-on-first-write methodology.
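A redirect-on-write snapshot can be sketched conceptually as follows (an illustration of the general technique, not VNX Snapshot internals):

    # Conceptual redirect-on-write snapshot: a snapshot freezes the map of
    # logical blocks to physical blocks, and later writes are redirected to
    # new physical blocks, so the original data is never overwritten or copied.
    class RowLun:
        def __init__(self, blocks):
            self.physical = list(blocks)                       # backing block store
            self.map = {i: i for i in range(len(blocks))}      # logical -> physical block
            self.snapshots = []

        def snapshot(self):
            view = dict(self.map)                              # copy only the pointers
            self.snapshots.append(view)
            return view

        def write(self, logical, data):
            self.physical.append(data)                         # redirect: write to a new block
            self.map[logical] = len(self.physical) - 1         # original block left untouched

        def read(self, view, logical):
            return self.physical[view[logical]]

    lun = RowLun(["a0", "b0", "c0"])
    snap = lun.snapshot()
    lun.write(1, "b1")
    print(lun.read(lun.map, 1), lun.read(snap, 1))             # b1 b0 - the point in time is preserved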
VNX SnapView is a software product that runs on the VNX. SnapView creates block-based, logical point-in-time views of production information using snapshots, and point-in-time copies using clones. Snapshots use only a fraction of the original disk space, while clones require the same amount of disk space as the source. SnapView allows multiple business processes to have concurrent, parallel access to information.
RecoverPoint allows users to recover applications using DVR-like rollback to any point in time, without impact to the production application or to ongoing protection. RecoverPoint CDP (Continuous Data Protection) is a continuous, synchronous product that mirrors SAN volumes in real time between one or more arrays at a local site. RecoverPoint CDP maintains a history journal of all changes that can be used to roll the mirrors back to any point in time. RecoverPoint/SE is appliance-based, which enables it to better support large amounts of information stored across a heterogeneous server and storage environment.
This lesson covers the VNX remote protection features that are used to protect data from localized failures, outages, and disasters. It includes an overview of SAN Copy, VNX MirrorView (Block), VNX Replicator (File), and RecoverPoint/SE CRR.
SAN Copy allows fast bulk transfer of data between EMC VNX storage systems, or between VNX and CLARiiON or Symmetrix systems (other vendors' systems are not discussed here). In full mode, the SAN Copy source LUN is likely to be a point-in-time copy, because of the way SAN Copy functions when full data copies are used. If incremental mode is used, the source LUN may be the production LUN and can remain online. SAN Copy requires no special software to be loaded on the peer storage systems.
MirrorView is storage system-based software that provides a replication solution for FC or iSCSI host environments. It can be implemented in a synchronous or an asynchronous mode, depending on RTO (Recovery Time Objective) and replication distance requirements. MirrorView mirroring is used for disaster recovery: after a disaster, MirrorView lets data processing operations resume with minimal overhead. MirrorView enables a quick recovery by creating and maintaining a copy of the data on another storage system.
VNX Replicator is an IP-based replication solution that produces a read-only, point-in-time copy of a source file system. The VNX Replicator service periodically updates this copy, making it consistent with the production file system. Replicator uses internal checkpoints to ensure availability of the most recent point-in-time copy; these internal checkpoints are based on SnapSure technology. The read-only replica can be used by a Data Mover in the same VNX cabinet (local replication) or by a Data Mover at a remote site (remote replication) for content distribution, backup, and application testing.
RecoverPoint/SE CRR (Continuous Remote Replication) is a comprehensive data protection solution that provides bi-directional synchronous and asynchronous replication. RecoverPoint/SE CRR allows users to recover applications remotely, to any significant point in time, without impact to production operations. When an application server issues a write to a LUN that is being protected by RecoverPoint, the write is duplicated by an array splitter running in the VNX storage processor or, optionally, on the application host. The VNX array intercepts all writes to LUNs protected by RecoverPoint and sends a copy to the RecoverPoint appliance; in all cases, the original write travels through its normal path to the production LUN. Once the appliance receives the write, redundant blocks are eliminated, and the writes are sequenced and stored with their corresponding timestamp information. The package is then compressed and sent across the IP network to the remote appliance. At the remote site the data is uncompressed and written to the journal volume. Once the data has been written to the journal volume, it is distributed to the remote volumes, ensuring that the write-order sequence is preserved.
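The splitting, journaling, and ordered distribution described above can be sketched conceptually (illustrative only; the names and structures are invented and do not reflect the RecoverPoint implementation):

    # Conceptual write splitter and journal: every host write goes to the
    # production LUN, and a sequenced, timestamped, compressed copy is queued
    # for the remote appliance so write order can be preserved on distribution.
    import time, zlib

    class Splitter:
        def __init__(self):
            self.production = []      # writes applied to the production LUN
            self.journal = []         # sequenced copies destined for the remote journal
            self.seq = 0

        def write(self, lun, offset, data):
            self.production.append((lun, offset, data))        # normal write path
            self.seq += 1
            self.journal.append({"seq": self.seq, "ts": time.time(),
                                 "lun": lun, "offset": offset,
                                 "data": zlib.compress(data)}) # compressed before the WAN hop

        def distribute(self):
            # Remote side: decompress and apply entries in sequence to keep write order.
            return [(e["lun"], e["offset"], zlib.decompress(e["data"]))
                    for e in sorted(self.journal, key=lambda e: e["seq"])]

    s = Splitter()
    s.write(0, 4096, b"first")
    s.write(0, 8192, b"second")
    print([d for _, _, d in s.distribute()])                   # [b'first', b'second']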
This lesson covers additional VNX features and functions, including VPLEX – Continuous Operations, Network Data Management Protocol (NDMP) backup, the EMC Common Event Enabler, and Host Encryption.
By leveraging VNX with VPLEX, which allows the administrator to have exactly the same information in two separate locations, accessible at the same time from both, companies can achieve improved levels of continuous operations, non-disruptive migrations and technology refresh, and higher availability of their infrastructure. VPLEX with VNX transparently relocates data and applications over distance, protects the data center against disaster, and enables efficient data mobility between sites. VPLEX allows load balancing across two VNX arrays, extends local VMware functionality beyond a single VNX array, and extends Oracle RAC and other clusters over distance (VPLEX is the only such solution certified by VMware and Oracle). VPLEX allows organizations to deliver on an active/active VNX data access strategy over distance.
VNX supports the Network Data Management Protocol (NDMP), an industry-standard TCP/IP-based protocol specifically designed for backup in a NAS environment. NDMP allows an administrator to control the backup and recovery of an NDMP server through a network backup application, without installing third-party software on the server. It communicates with several elements in the backup environment (NAS head, backup devices, backup server, and so on) for data transfer and enables vendors to use a common protocol for the backup architecture. Data can be backed up using NDMP regardless of the operating system or platform. On the VNX, the Data Mover functions as the NDMP server. NDMP separates the control and data transfer components of a backup or restore operation; the actual backups are handled by the Data Mover, which minimizes network traffic.
The Common Event Enabler (CEE) is a capability available to VNX for File that provides an integration point between best-of-breed third-party storage management applications and VNX for File (NAS). In essence, CEE provides an alerting facility so that third-party applications can take action on NAS client activities on the VNX. The EMC VNX Antivirus Agent provides an antivirus solution to clients using a VNX system. It uses the industry-standard CIFS protocol in a Microsoft Windows Server domain and supports Windows clients. The Antivirus Agent uses third-party antivirus software to identify and eliminate known viruses before they infect files on the storage system. The EMC Common Event Enabler (CEE) was formerly known as the VNX Event Enabler (VEE).
VNX Host Encryption provides data encryption at the storage-device level to protect data from unauthorized access or from the removal of a disk drive or array from a secured environment. It makes use of PowerPath Encryption technology and leverages RSA Key Manager to centrally manage and automate encryption keys. PowerPath Encryption is host-based data-at-rest encryption that utilizes PowerPath and RSA Key Manager for the Datacenter to protect customers against unauthorized access or the inadvertent loss of unprotected information.
This module covered the VNX features, capabilities, and software suites and packs.
This module provides an overview of basic VNX management options and lists the peripheral products associated with VNX.
This lesson covers the VNX management suite offerings. It also covers EMC ProSphere, cloud management software used to manage VNX in virtual and cloud environments.
EMC Unisphere provides a flexible, integrated experience for managing VNX storage systems. Unisphere's wizards help the user provision and manage storage while automatically implementing best practices for the configuration. Unisphere is completely web-enabled for remote management of the storage environment. The Unisphere Management Server runs on the SPs and the Control Station. Administrative users must authenticate to the VNX when using Unisphere. The VNX provides flexible options for administrative user accounts: for deployments where the VNX will be administered by multiple people, the VNX offers the ability to create multiple unique administrative accounts, and different administrative roles can be defined for the user accounts to distribute administrative tasks among the users.
Administration of the VNX system can also be performed with a command line interface (CLI); administrative users must authenticate to the VNX when using the CLI as well. Block-enabled systems have a host-based Secure CLI software option available for block administrative tasks, and the CLI can be used to automate management functions through shell scripts and batch files. Navisphere Secure CLI is a client application that allows simple operations on the EMC VNX Series platform and some other legacy storage systems. File-enabled VNX systems use a command line interface to the Control Station for file administrative tasks. If VNX for File or Unified is present, the Control Station can be reached via a serial connection or SSH to troubleshoot many VNX for File hardware components.
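Because the Secure CLI lends itself to scripting, a management host could wrap it from Python along these lines (a hedged sketch: it assumes naviseccli is installed and that credentials or a security file have already been configured; the SP address shown is a placeholder):

    # Minimal wrapper around Navisphere Secure CLI calls from Python.
    # Assumes naviseccli is installed on the management host and that a
    # security file or credentials have been configured; the SP management
    # address below is a placeholder.
    import subprocess

    def naviseccli(sp_address, *args):
        cmd = ["naviseccli", "-h", sp_address, *args]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # getagent returns basic identity and revision information for the SP.
        print(naviseccli("10.0.0.1", "getagent"))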
Unisphere Remote is software that provides centralized multi‐box monitoring of hundreds of VNX systems whether they reside in a data center or are deployed in remote and branch offices. It gives users the ability to monitor the health and alerts of large numbers of VNX systems from a central console. Unisphere Remote is a virtual appliance that runs in a VMware virtual environment.
Unisphere Analyzer is the VNX performance analysis tool. It helps identify bottlenecks and hotspots in VNX storage systems and enables users to evaluate and fine-tune the performance of their VNX systems. Data may be collected by the storage system or by a Windows host in the environment running the appropriate software. Different performance metrics are collected from disks, Storage Processors, LUNs, cache, and SnapView snapshot sessions. Data may be displayed in real time or saved as a .nar (Unisphere Archive) file for later analysis. For File-based detailed monitoring, users have access to the native monitoring capability of Unisphere for File; additional monitoring of VNX for File components is available in the separately licensed Data Protection Advisor for File Server offering, which provides detailed reporting not only on replication configurations but on VNX for File in general.
Unisphere Quality of Service Manager (UQM) measures, monitors, and controls application performance on the VNX storage system. UQM can be a powerful tool for evaluating the storage system to determine current service levels and to provide guidance on what service levels are possible in a given environment. UQM may be managed through the Unisphere GUI, Secure CLI, or Unisphere Client. Because UQM is array-resident, there is no host component to load and no performance impact on the host. UQM controls array performance by allocating resources to user-defined classes of I/O; this resource allocation allows the specified I/O classes to meet pre-defined performance goals.
VNX Monitoring and Reporting is a cost-effective software solution. Accessible from a web portal, it has a tree-view format that drills down into several summary or filtered views. VNX Monitoring and Reporting is a limited version of Watch4net for VNX and provides basic monitoring and reporting capabilities for VNX administrators. It automatically collects block and file storage statistics along with configuration data and stores them in a database that can be viewed from dashboards and reports. Watch4net offers a custom report development framework that extends the preconfigured VNX reports and creates new reports to meet specific reporting needs; it provides real-time, historical, and projected visibility into network, data center, storage, and cloud infrastructure performance. Users who have more complex environments or more complex needs can upgrade to the full Watch4net product.
EMC ProSphere is cloud storage management software designed to monitor and analyze storage services across the virtual infrastructure. ProSphere can collect resource and performance data for EMC Symmetrix, VNX and CLARiiON storage arrays. ProSphere's federated architecture aggregates information across sites to simplify the management between data centers from a single console. ProSphere is managed from a web browser to allow easy access over the Internet for remote management.
This lesson covers the integration between VNX management and the automation tools for virtualized and application environments.
EMC AppSync protects virtualized Microsoft applications in EMC VNX (block) environments. It is designed to provide better management of protection and replication for critical applications and databases. EMC's advanced technologies – snapshots and continuous data protection – enable instant, granular restores and zero-data-loss replication, and they can be managed from a central location using EMC AppSync. Replication Manager is a tool designed to automate the creation, management, and use of EMC point-in-time replicas (snapshots, clones, and mirrors); no scripting is required. Replication Manager auto-discovers the environment (application host, associated storage, and underlying replication technology) and enables easy point-and-click management by integrating the technology stack from the application to the storage and replicas.
EMC Storage Integrator (ESI) for Windows is a tool targeted at Windows and Microsoft application administrators. ESI for Windows provides storage viewing and provisioning capabilities. ESI also enables the user to create a file share and mount that file share as a network-attached drive in the Windows environment. ESI supports the EMC VNX series, the EMC VNXe series, EMC CLARiiON, the EMC Symmetrix VMAX, and the EMC Symmetrix VMAXe. ESI supports Hyper-V virtual disks as of release 1.3.
VMware vSphere Storage APIs (Application Programming Interfaces) for Array Integration (VAAI) is a storage integration feature that increases virtual machine scalability. VAAI consists of a set of APIs that allow vSphere to offload specific host operations to EMC storage arrays; these are supported with VMFS (Virtual Machine File System) and RDM (Raw Device Mapping) volumes. VAAI for SAN enables very tight integration between the VNX platform and VMware vSphere, while VAAI for NAS minimizes the impact of high-I/O virtualization tasks on ESXi hosts.
Storage Analytics for VNX is a standalone software product that comprises a light version of the VMware vCenter Operations Manager Enterprise tool and the EMC Adapter for VNX. VMware vCenter Operations Manager provides automated operations management using patented analytics and an integrated approach to performance, capacity, and configuration management. The EMC Adapter for VNX is bundled with a connector that enables vCenter Operations Manager to collect performance and capacity metrics, heat maps, and proactive information that help administrators make informed decisions. Virtual Storage Integrator (VSI) is a plug-in to the VMware vCenter management software. It allows the VMware administrator to provision, monitor, and manage VMware vSphere datastores on EMC storage arrays directly from vCenter, simplifying management of the virtualized environment. VSI empowers VM administrators to view storage and VMware performance metrics from a single view.
VMware vCenter Site Recovery Manager (SRM) automates VMware site failover. It is integrated with vCenter and EMC storage arrays and is managed through a vCenter client plug-in. SRM leverages the data replication capabilities of the underlying storage system through an interface called a Storage Replication Adapter (SRA). SRM supports SRAs for VNX Replicator, VNX MirrorView, and EMC RecoverPoint. Each EMC SRA is a software package that enables SRM to implement disaster recovery for virtual machines by using VNX storage arrays that run replication software. SRA-specific scripts support array discovery, replicated LUN discovery, test failover/failback, and actual failover. The disaster recovery plan provides the interface to define failover policies for virtual machines running on NFS, VMFS, and RDM storage.
This module covered the native VNX management options, the additional options for managing VNX and the peripheral products associated with VNX.
This course covered the VNX storage solution models, architecture, features, and management. This concludes the VNX Fundamentals training.