EMC Storage Integration with VMware vSphere Best Practices
Short Description
Storage integration of EMC arrays (VNX, VMAX, XtremIO) with VMware vSphere.
Description
Welcome to EMC Storage Integration with VMware vSphere Best Practices. Copyright ©2015 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The trademarks, logos, and service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation and other parties. Nothing contained in this publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party that owns the Trademark. EMC, EMC² AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker. CIO Connect, ClaimPack, ClaimsEditor, Claralert ,cLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge , Data Protection Suite. Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere, ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender , EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM. eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator , InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS,Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor , Metro, MetroPoint, MirrorView, Multi-Band Deduplication,Navisphere, Netstorage, NetWorker, nLayers, EMC OnCourse, OnAlert, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.
Revision Date: December 2015 Revision Number: MR-1WP-EMCSTORVMWBP.2.0
This course covers the integration of EMC arrays and add-on technologies into VMware virtualized environments. It includes an overview of VMware virtualized environments, infrastructure connectivity considerations, virtualization solutions such as local and remote replication options, monitoring and implementation of EMC plug-ins, and vSphere API enhancements.
Click any button to learn more about the integration of EMC storage into VMware environments. Click the Course Assessment button to proceed to the course assessment. Once you have entered the course assessment, you cannot return to the course material until you have completed the assessment.
This lesson covers some of the general environment considerations needed for implementing a VMware solution. It also introduces VMware storage consumption models, and generic storage and storage infrastructure considerations.
VMware offers a diverse product line, so it is important to clarify what is expected in a vSphere offering. VMware features are governed by the license obtained; if a VMware feature or product is not licensed, it will not be supported. In EMC engagements, it is common to expect the ESXi host to have an Enterprise Plus license and support all the features of this license. Some key features for consideration when deploying VMware:
• Licensing level determines the feature support available; Enterprise Plus provides the most feature support
• Enterprise environments should be managed by vCenter to take advantage of enhanced feature support:
– vMotion® and Storage vMotion
– High Availability and Fault Tolerance
– Distributed Resource Scheduler
– Storage APIs for Array Integration, Multipathing
– Distributed Switch™
– Storage DRS™ and Profile-Driven Storage
– I/O Controls (Network and Storage)
– Host Profiles and Auto Deploy
Before focusing on storage array presentation, it is important to consider the storage presentation models supported by VMware. Presenting a virtual machine with local storage is possible in most deployments, but it is not a focus of this course. When local storage is used, it can restrict some of the functionality expected in an enterprise-level deployment (e.g., a local storage datastore can only be accessed by one host and is usually a Single Point of Failure (SPOF) in the environment). Storage array presentation is the preferred method in an enterprise environment, as this model is typically designed to meet the specific needs of an application workload or SLA. However, the final configuration of a solution is not restricted to a single type of configuration; most environments comprise a mix of array-based and local storage, depending on what best meets the solution's expectations. Access to an array using Raw Device Mapping (RDM) volumes enables storage to be accessed directly by guest operating systems (VMs). The RDM contains metadata for managing and redirecting disk access to the physical device.
I/Os are the major metric by which storage arrays, applications, and interconnect infrastructures are evaluated. These metrics are often documented in Service Level Agreements (SLAs), which are goals that have been agreed to and must be achieved and maintained. Because storage plays a very large part in any computing solution, there are many aspects that must be considered when implementing these SLAs and defining the service requirements of the solution. Some of these considerations include, but are not restricted to:
• Connectivity infrastructure
– Physical cables
– Protocols
– Distance
• Array type
– Physical architecture
– Cache
– Buses
– Spinning or solid-state drives
– Software enhancements
• Disk connectivity interfaces
– FC
– SAS
– SATA
With any environment there are many factors affecting the infrastructure. Depending on the considerations and their importance, the design could change radically from the one originally envisioned. The goal of any infrastructure design is to achieve the highest possible success in meeting the majority of the demands expected. This means that compromise and segmentation of purpose are always factors in design. Additional considerations include:
• Application workload profiles
• Local server buses (PCI, PCI-X, PCI Express, etc.)
• Disk types
• Cache size
• Number of back-end buses
• RAID level implemented
• Stripe size
• Replication technologies and mechanisms
• VMware API support for integration and array offload
The key points for LUN utilization are:
• The choice of RAID level and disk type should match the specific workload proposed for the LUN.
• Each LUN should contain only a single VMFS datastore, to segment the workload characteristics of differing virtual machines and prevent resource contention. However, if multiple virtual machines do access the same VMFS datastore, the use of disk shares to prioritize virtual machine I/O is recommended.
Any solution can produce various combinations of LUN presentation models; both large and small LUNs can be presented. One reason to create fewer, larger LUNs is more flexibility to create VMs without storage administrator involvement. Other reasons are more flexibility for resizing virtual disks and snapshots, and fewer VMFS datastores to manage. A reason to create smaller LUNs is to waste less storage space: a large LUN builds in capacity overhead for growth and removes it from the global pool of storage in the array. Smaller LUNs may also be preferred if many differing performance profiles are required in the environment along with varied RAID level support.
One of the key concerns in any storage array architecture is latency. Latency always causes performance degradation and should be minimized as much as possible. Many areas introduce latency, but there are some general rules that can be applied to start reducing its impact. One of the first considerations is the use of flash drives with the vSphere Flash Infrastructure for host swap files and the use of vSphere Flash Read Cache (vFRC). Another key consideration is the use of storage arrays that support the vStorage APIs (vStorage APIs for Array Integration (VAAI), Storage Awareness (VASA), and Data Protection (VADP)), as off-loading operations to native array tools and functionality greatly enhances the performance of any infrastructure and frees up ESXi resources for other task processing.
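As an illustration (not part of the original course material), VAAI offload support can be verified per device from the ESXi shell; the device identifier below is a hypothetical placeholder.
# Show which VAAI primitives (ATS, Clone, Zero, Delete) a device supports
esxcli storage core device vaai status get -d naa.60060160a6b03200c8d3f5e2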
The overall solution bandwidth is possibly the most critical concern. This refers not only to interconnectivity bandwidth, but to internal array and server bus bandwidth as well. Any form of connectivity pipe contention or restriction will reduce performance and, in turn, reduce SLA compliance. Both traditional networking infrastructure and storage networking infrastructures need to be addressed. As a rule of thumb, workload segmentation is required to minimize resource contention, but other methods can be used when physical segmentation is not immediately possible, such as peak-time bandwidth throttling of non-critical applications or application scheduling. However, with global enterprises these options are not always viable alternatives to physical segmentation. When using VMware® Storage vMotion™, which enables live migration of running virtual machine disk files from one storage location to another with no downtime or service disruption, the available storage infrastructure bandwidth is of key importance.
Another source of resource contention is access to the actual disk by the server threads. This is generally governed by the HBA queue depth, which is the number of pending I/O requests to a volume. By ensuring that this is set to the maximum permissible limit, there should be no throttling of the I/O stream. However, the underlying infrastructure must be able to support this configured number; otherwise further congestion will occur at the volume level due to an overrun of the I/O stream. The local server architecture can also cause a bottleneck, and performance degradation can occur before the I/O even leaves the host. If this is the case, no amount of external environment tuning will improve performance. Knowing the relative expected I/O transfer rates of the server buses gives a base level from which other performance figures can be determined (e.g., PCI-X specifications allow different rates of data transfer, anywhere from 512 MB to 1 GB of data per second). Oracle's Sun StorageTek Enterprise 4 Gb/s Fibre Channel PCI-X Host Bus Adapter is a high-performance 4/2/1 Gb/s HBA capable of providing throughput rates up to 1.6 GB/s (dual port) in full-duplex mode. This would not challenge the physical limitations of the Fibre Channel medium until we try to push multiple connections of this type through a single connection; hence the segmentation of workload and the use of dual-port connectivity.
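As a hedged example, on an ESXi host with a QLogic FC HBA the queue depth is adjusted through the driver module parameters; the module name and value below are illustrative and vary by driver vendor and vSphere release.
# Check the current per-LUN queue depth parameter of the native QLogic FC driver
esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth
# Set the per-LUN queue depth to 64; a host reboot is required for the change to take effect
esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=64"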
This lesson covered an overview of a typical VMware enterprise infrastructure with VMware storage consumption models and generic storage and storage infrastructure considerations.
In this lesson we will examine the importance of ESXi storage connectivity. This lesson reviews specific SAN options, configurations and ramifications with an emphasis on storage connectivity recommendations and best practices.
The connectivity capability of an ESXi host plays a major role in presenting array-based storage. The protocols that you use to access storage do not all provide equal performance levels; cost, throughput, and distance all play a role in any solution. When a network infrastructure is used, it can provide valuable metrics when measuring array performance. If the network infrastructure becomes congested, the storage array performance will appear to suffer: the array remains capable of meeting SLA standards, but it is not being supplied enough data to process to maintain its SLA requirements. Therefore, it is important to measure and account for performance end-to-end rather than always focusing on just one component of the infrastructure. When presenting storage over a network, it is always recommended to isolate the storage traffic wherever possible. This guarantees known or expected performance levels from specific interconnects, which has increased importance with profile-based storage presentation.
Understanding what must be achieved and how that is measured is a goal of all solution deployments. There are several common metrics mentioned on this slide that should be gathered and analyzed to assist with the deployment and tuning of any storage integration with VMware. However, the tuning and maintenance of any solution is an ongoing process and should be constantly monitored for any change in conditions or SLA non-compliance. Goals:
• Workload is the key determination in any solution
• Throughput – IOPS and bandwidth
• Latency
• Capacity
• Proper assessment of the environment
ESXi general SAN considerations are very straightforward:
• As with any technology solution, the components should all be at a compatible software and hardware level to meet the proposed solution requirement.
• Diagnostic partitions should not be configured to use SAN LUNs; this 110 MB partition is used to collect core dumps for debugging and technical support, and in the event of a failure the data may not be successfully copied to an array (depending on the failure). If diskless servers are being used, then a shared diagnostic partition should be used, with sufficient space configured to contain the information for all connected servers (see the command sketch after this list).
• For multipathing to work correctly, a LUN should be presented to all the relevant ESXi servers with the same LUN ID.
• As discussed previously, the HBA queue depth should be configured to prevent any I/O congestion and throttling to connected volumes.
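A minimal sketch of checking and setting the diagnostic core dump partition from the ESXi shell; the device and partition identifier is a hypothetical placeholder and would normally point at a local device rather than a SAN LUN.
# Show which partition is currently configured for core dumps
esxcli system coredump partition get
# Point core dumps at a specific local partition (hypothetical device:partition) and enable it
esxcli system coredump partition set --partition=naa.5000c500a1b2c3d4:7
esxcli system coredump partition set --enable=true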
A couple of ESXi SAN restrictions are:
• Fibre Channel connected tape drives are not supported.
• Multipathing software at the guest OS level cannot be used to balance I/O to a single LUN.
Fibre Channel LUNs are presented to an ESXi host as block-level access devices, and can either be formatted with VMFS or used as an RDM. All devices presented to the ESXi host are discovered on boot or when a rescan is performed. VMware has predefined connection guidelines, but it is up to the storage vendor, EMC in this case, to determine the best practices for integration with the virtualized infrastructure. As a rule with Fibre Channel SANs, the requirements are to use single-initiator, single-target zoning, provide a minimum of two paths per initiator, and always ensure consistent speeds end to end.
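To confirm that each LUN actually ends up with the expected number of paths after zoning and masking, the paths can be listed from the ESXi shell; the device identifier below is illustrative.
# List all storage paths and their state (active, standby, dead)
esxcli storage core path list
# Show the multipathing detail for one specific device (hypothetical identifier)
esxcli storage nmp path list -d naa.60060160a6b03200c8d3f5e2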
NPIV (N-Port ID Virtualization) is a standard Fibre Channel facility that provides the ability to assign multiple FCIDs to a single N-Port. The N-Port may be an NPIV capable HBA port or a switch port. Each entity on the switch or server attempts a Fabric Login (FLOGI) to receive an FCID (Fibre Channel ID). The first login proceeds in the normal way. But subsequent logins that use the same N-Port are converted to FDISC (Fabric Discovery) commands. The switch or server maps the FCIDs to the appropriate entity. The switch with the F-Port must have NPIV enabled in order to support this functionality. Note: For a Fibre Channel switch to connect with an N-Port to an NPIV enabled switch, it must be in Access Gateway mode (Brocade) or NPV mode (Cisco).
NPIV application in a virtualized server environment allows each VM to be assigned its own logical Fibre Channel path to the fabric, while sharing an HBA port with other VMs. Each VM will have a unique WWN and FCID to communicate with the fabric and attached storage. The first time an HBA port logs into a fabric it goes through the normal FLOGI (Fabric Login) process. When additional VMs want to login to the fabric through the same HBA, the FLOGI is converted to an FDISC (Fabric Discovery). This is only supported if the Fibre Channel switch that the HBA is connected to is enabled for NPIV. The FLOGI and FDISC operations result in a unique FCID being assigned to a virtual port (VPORT) on the HBA. Each VPORT is mapped to a separate VM.
If you are going to consider an N-Port ID Virtualization (NPIV) environment, you will need to keep in mind:
• It can only be used by virtual machines with RDM disks.
• The HBAs must all be of the same type/manufacturer.
• The NPIV LUN number and target ID must be the same as the physical LUN number and target ID.
• Only vMotion is supported, not Storage vMotion.
The ideal Fibre Channel environment contains:
• No single point of failure
• An equal load distribution
• Devices presented that match the intended utilization profile
• Single-initiator, single-target fabric zoning, which reduces problems and configuration issues through its focused deployment
The enterprise connectivity protocol Fibre Channel over Ethernet (FCoE) is supported by two implementation methodologies. The first is a Converged Network Adapter (CNA), referred to as the hardware methodology, and the second is a software FCoE adapter. When using the CNA, the network adapter is presented to ESXi as a standard network adapter and the Fibre Channel adapter is presented as a host bus adapter. This allows the administrator to configure connectivity to both the network and the Fibre Channel infrastructures in the traditional way without any specialized configuration requirements, outside of the specialized networking infrastructure components (e.g., FCoE switches). The client will discover the networking component as a standard network adapter (vmnic) and the Fibre Channel component as an FCoE adapter (vmhba). The software adapter uses a specialized NIC that supports Data Center Bridging and I/O offload to communicate with the respective storage infrastructures via specialized switches. The networking environment must be properly configured before the software FCoE adapter is activated. When booting from a software FCoE adapter, only one host has access to the boot LUN. If using an Intel 10 Gigabit Ethernet Controller (Niantic) with a Cisco switch, the Spanning Tree Protocol (STP) should be enabled, and the switchport trunk native VLAN should be turned off for the FCoE VLAN. At present there is no FCoE pass-through to the guest OS level. ESXi supports a maximum of four software FCoE adapters on one host.
For software FCoE adapters to work with network adapters, specific considerations apply:
• Make sure that the latest microcode is installed on the FCoE network adapter.
• If the network adapter has multiple ports, add each port to a separate vSwitch when configuring networking. This practice helps you avoid an all-paths-down (APD) condition when a disruptive event, such as an MTU change, occurs.
• Do not move a network adapter port from one vSwitch to another when FCoE traffic is active. If you need to make this change, reboot your host afterwards.
• If you changed the vSwitch for a network adapter port and caused a failure, moving the port back to the original vSwitch resolves the problem.
When trying to ensure a solid Ethernet network connection, there are proactive strategies that can be followed to assist in this objective. Avoid contention between the VMkernel port and the virtual machine network. This can be done by placing them on separate virtual switches and ensuring each is connected to its own physical network adapter. Be aware of network physical constraints because logical segmentation does not solve the problem of physical oversubscription. Application workload profiles can be monitored to prevent excessive resource oversubscription with shared resources.
Storage I/O Control (SIOC) was designed to help alleviate many of the performance issues that can arise when many different types of virtual machines share the same VMFS volume on a large SCSI disk presented to VMware ESXi hosts. This technique of using a single large disk allows optimal use of the storage capacity. However, this approach can result in performance issues if the I/O demands of the virtual machines cannot be met by the large SCSI disk hosting the VMFS, regardless of whether SIOC is being used. To prevent these performance issues from developing, no matter the size of the LUN, EMC recommends using VMAX Virtual Provisioning, which spreads the I/O over a large pool of disks. Note: EMC does not recommend making changes to the Congestion Threshold.
Monitor CPU utilization of high-throughput workloads, as high CPU utilization can limit the maximum network throughput. Virtual machines that reside on the same ESXi host and communicate with each other should be connected to the same virtual switch. This avoids physical network traffic overhead because the VMkernel processes the transaction internally, avoiding unnecessary use of CPU resources associated with network communication. Ensure virtual machines that require low network latency are using the VMXNET3 virtual network adapter.
As a general rule, for optimal performance and configurability, any NIC used in the ESXi environment should have the feature set shown here in the slide. Supporting these features enables an administrator to configure and tune connectivity to produce optimal throughput, and to offload processing to the network interface card, thereby freeing up server cycles to perform other tasks. Using jumbo frames can be an important network consideration. Adjust the maximum transfer unit size to 9000 only if you are sure that this can be met end-to-end; if the same conditions are not met end-to-end, further challenges may be introduced into the network infrastructure (e.g., dropped packets, high retransmission rates, inefficient use of packet size, etc.). Possible considerations include:
• Virtual machine network adapter type
• Ethernet network devices such as switches and routers
• Virtual switch configuration
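A hedged sketch of enabling an end-to-end MTU of 9000 on a standard vSwitch and its VMkernel port from the ESXi shell; vSwitch1 and vmk1 are placeholder names, and the physical switch ports must be configured to match.
# Set the MTU on the virtual switch carrying storage traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# Set the MTU on the VMkernel interface used for IP storage
esxcli network ip interface set --interface-name=vmk1 --mtu=9000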
Being presented with the ideal Ethernet network environment can be a difficult goal to achieve. Ethernet networks carry traffic for many more types of communication than just storage. This slide contains ideal characteristics that reduce network-related challenges. Addressing these concerns is typically done with the network administrator inside a corporation.
• CAT 6 cables for copper networks – for high-speed and error-free transmissions
• Multi-Chassis Link Aggregation technology – used in multipathing NFS; a type of link aggregation group (LAG) with constituent ports that terminate on separate chassis
• Support for 10 GbE
• Network segmentation – dedicated network or private VLAN, to reduce traffic contention and congestion
• Network management – flow control, RSTP or STP with PortFast, restricted PDUs on storage network ports, and support for jumbo frames. However, most ESXi I/O is random, so the benefit of jumbo frames may be minimal.
iSCSI Storage Guidelines (a hedged command sketch follows this list):
• Supported initiators for iSCSI connectivity are: software, dependent hardware, and independent hardware (discussed in a later slide)
• Dynamic discovery, or static discovery addresses and the storage system target name, must be configured
• Challenge Handshake Authentication Protocol (CHAP) may be configured for additional security and initiator verification
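As a hedged sketch, dynamic discovery and CHAP can be configured on the software iSCSI adapter from the ESXi shell; the adapter name, target address, and credentials below are placeholders, and option names can vary by vSphere release.
# Add a dynamic discovery (send targets) address to the software iSCSI adapter
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.10.50:3260
# Require CHAP on the adapter with hypothetical credentials
esxcli iscsi adapter auth chap set -A vmhba33 --level=required --authname=iscsiuser --secret=ExampleSecret123
# Rescan the adapter to discover the presented LUNs
esxcli storage core adapter rescan -A vmhba33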
NFS Networking Guidelines:
• For network connectivity, the host requires a standard network adapter.
• ESXi supports Layer 2 and Layer 3 network switches. If you use Layer 3 switches, ESXi hosts and NFS storage arrays must be on different subnets and the network switch must handle the routing information.
• A VMkernel port group is required for NFS storage. You can create a new VMkernel port group for IP storage on an already existing virtual switch (vSwitch) or on a new vSwitch when it is configured. The vSwitch can be a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS).
• If you use multiple ports for NFS traffic, make sure that you correctly configure your virtual switches and physical switches. For information, see the vSphere Networking documentation.
• NFS 3 and non-Kerberos NFS 4.1 support IPv4 and IPv6.
IP-based storage devices are discovered through the Ethernet network. The ESXi host must be configured with a VMkernel port, which includes an IP address, to provide a communication path for iSCSI and NFS traffic, unless DirectPath I/O is being configured (see a later slide). While the VMkernel port serves many functions in an ESXi host, we will only discuss the IP storage role in this course. It is always recommended that the network segment that carries this traffic be private and not routed; VLAN tagging is a permitted alternative but not preferred. An IP storage transaction can be described by the following steps:
• Inside the virtual machine, the operating system sends a write request
• The write request is processed through the virtual machine monitor to the VMkernel
• The VMkernel then passes the request through the VMkernel port created by the administrator
• The request goes through the Ethernet network
• The write request arrives at the storage array
The response of the array follows the inverse of the process shown here.
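A minimal sketch of creating a VMkernel port for IP storage from the ESXi shell, assuming a port group named IPStorage already exists on the vSwitch; interface names and addresses are placeholders.
# Create a VMkernel interface on the existing IPStorage port group
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=IPStorage
# Assign a static address on the (ideally non-routed) storage subnet
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.10.11 -N 255.255.255.0 -t static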
To access iSCSI targets, your host needs iSCSI initiators. The job of the initiator is to transport SCSI requests and responses, encapsulated into the iSCSI protocol, between the host and the iSCSI target. ESXi supports two types of initiators: Software iSCSI and hardware iSCSI. A software iSCSI initiator is VMware code built in to the VMkernel. With the software iSCSI initiator, you can use iSCSI without purchasing any additional equipment. This initiator requires the configuration of VMware Ethernet networking, iSCSI target information, and management interfaces, and uses CPU resources for encapsulation / de-encapsulation. Hardware iSCSI initiators are divided into two categories: • Dependent hardware iSCSI • Independent hardware iSCSI A dependent hardware iSCSI initiator adapter depends on VMware networking and on iSCSI configuration and management interfaces that are provided by VMware. An independent hardware iSCSI adapter handles all iSCSI and network processing and management for your ESXi host.
To maximize the iSCSI configuration, iSCSI port binding is a valuable tool when configuring port redundancy and client connectivity. It can also make use of advanced networking features like NIC teaming and jumbo frames (recommended). In the case of VNX systems, as shown on this slide, the paths to the primary access SP are active until a failure occurs; the secondary paths (blue dashed line) are configured in standby mode. This configuration ensures that a failure will not impact client connectivity or bandwidth. It also enables the segmentation of iSCSI clients to different network address segments, and ensures failover to a secondary path in the event of any network connectivity failure. This example's context can be applied to any connectivity methodology, application workload, and desired SLA objective. By configuring multiple VMkernel ports, port binding, and cross-SP or cross-engine connectivity, greater redundancy, and possibly bandwidth, can be realized.
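A hedged sketch of binding two VMkernel ports to the software iSCSI adapter from the ESXi shell; vmhba33, vmk1, and vmk2 are placeholder names, and each VMkernel port is assumed to be backed by a single active uplink.
# Bind the storage VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba33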
Through NIOC, a network administrator can allocate I/O shares and limits to different traffic types based on their requirements. NIOC capabilities are enhanced such that administrators can now create user-defined traffic types and allocate shares and limits to them. Administrators can provide I/O resources to the vSphere Replication process by assigning shares to the vSphere Replication traffic type. Some NIOC features are:
• Can be used to ensure iSCSI and NFS receive adequate resources
• Requires a vSphere Distributed Switch (vDS) – not available on a standard virtual switch
• Allows allocation of network bandwidth to network resource pools
• Network bandwidth is allocated to resource pools through shares and limits
• Pre-defined resource pools – FT, iSCSI, vMotion, management, vSphere Replication (VR), NFS, and VM traffic
EMC recommends using "zeroedthick" instead of "thin" virtual disks when using VNX or Symmetrix Virtual Provisioning, because using thin provisioning on two separate layers (host and storage array) increases the risk of out-of-space conditions for the virtual machines. Thin on thin is, however, acceptable with vSphere 4.1 or later and Symmetrix VMAX Enginuity 5875 and later, as well as VNX systems (note that XtremIO systems are always thin provisioned). The use of thin virtual disks with VNX or Symmetrix Virtual Provisioning is facilitated by many new technologies in the vCenter server and Symmetrix VMAX Enginuity 5875 or VNX Block OE, such as vStorage APIs for Storage Awareness (VASA), Block Zero and Hardware-Assisted Locking (ATS), and Storage DRS. It is also important to remember that using "thin" rather than "zeroedthick" virtual disks does not provide increased space savings on the physical disks. A "zeroedthick" disk only writes data written by the guest OS; it does not consume more space in the VNX or Symmetrix pool than a similarly sized "thin" virtual disk. Because of the inline deduplication available on XtremIO systems, "eagerzeroedthick" is the preferred format there: XtremIO does not write blocks consisting only of zeroes, so no physical writes are performed.
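For illustration, a virtual disk of a specific allocation format can be created with vmkfstools from the ESXi shell; the size, datastore, and file names are placeholders, and the target folder is assumed to exist.
# Create a 40 GB eagerzeroedthick virtual disk (the preferred format on XtremIO)
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/Datastore01/vm01/vm01.vmdk
# Or create it as zeroedthick (recommended with VNX/Symmetrix Virtual Provisioning)
vmkfstools -c 40G -d zeroedthick /vmfs/volumes/Datastore01/vm01/vm01_data.vmdk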
In ESXi 5.5 and 6.0, dead space is now reclaimed in multiple iterations instead of all at once. A user can provide the number of blocks to be reclaimed during each iteration; the default is 200 blocks. This specified block count controls the size of each temporary file that VMware creates. Since vSphere 5.5 and 6.0 datastores default to a 1 MB block size, if no number is passed VMware will unmap 200 MB per iteration. VMware still issues UNMAP to all free blocks, even if those blocks have never been written to, and will iterate serially through the creation of the temp file as many times as needed to issue UNMAP to all free blocks in the datastore. The benefit now is that the risk of temporarily filling up available space on a datastore due to a large balloon file is essentially zero.
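A hedged example of triggering space reclamation on a VMFS5 datastore from the ESXi shell; the datastore label and reclaim unit below are placeholders.
# Reclaim dead space, 200 blocks (200 MB on a 1 MB-block VMFS) per iteration
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200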
Some general VNX storage guidelines are listed here; most are good general practices for any storage environment. Limiting datastores to 80% of their capacity reduces the possibility of out-of-space conditions and gives administrators time to extend datastores when high storage utilization alerts are triggered. It is also advisable not to keep more than three VM snapshots for extended periods, and to use VM clones instead for point-in-time copies to avoid the overhead of change tracking and logging activity. Enable Storage I/O Control (SIOC) to control periods of high I/O and, if response times are consistently high, redistribute the VMs to balance workloads. Utilize FAST Cache for frequently accessed random I/O workloads. Sequentially accessed workloads often require a longer time to warm FAST Cache, as they typically read or write data only once during an access operation; this means that sequentially accessed workloads are better serviced by SP cache. Monitoring data relocation on FAST VP LUNs enables administrators to increase the number of disks in the highest tier if a large percentage of data is constantly being rebalanced.
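To keep an eye on the 80% capacity guideline, datastore usage can be checked from the ESXi shell (or, more commonly, through vCenter alarms); the command lists total and free capacity per datastore.
# List VMFS/NFS datastores with their total and free capacity
esxcli storage filesystem list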
This slide illustrates the Fibre Channel SAN connectivity best practices for ESXi servers. With multipathing, this gives storage connectivity redundancy and High Availability.
For Symmetrix storage arrays there are several setting prerequisites. These are:
• Common serial number
• Auto negotiation enabled
• SCSI-3 set enabled
• Unique world wide name
• SPC-2 flag set (or SPC-3, which is set by default at Enginuity 5876 or later)
Most of these are on by default, but please consult the latest EMC Support Matrix for the latest port settings. NOTE: The ESXi host considers any LUNs from a Symmetrix storage system that have a capacity of 50MB or less as management LUNs. These LUNs are also known as pseudo or gatekeeper LUNs. These LUNs appear in the EMC Symmetrix Management Interface and should not be used to hold data.
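As an illustration, gatekeeper (pseudo) devices can be spotted from the ESXi shell because their reported size is 50 MB or less; the Size field in the output below is reported in MB.
# List all devices; any Symmetrix device whose Size is 50 MB or less is a gatekeeper and should not hold data
esxcli storage core device list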
This slide depicts the recommended port connectivity for ESXi servers and Symmetrix arrays. If there is only a single engine in the array then the recommended connectivity is that each HBA should be connected to odd and even directors within the engine (picture on the left). If there are multiple engines in the array, then each HBA should be connected to different directors on different engines (picture on the right).
Striped meta volumes perform better than concatenated meta volumes when there are enough spindles to support them. However, if the striping leads to the same physical spindle hosting two or more members of the meta volume, striping loses its effectiveness; in such a case, using concatenated meta volumes may be better. It is not a good idea to stripe on top of a stripe, so if host striping is planned and meta volumes are being used, concatenated meta volumes are better. In summary:
• Usually, striped meta volumes perform better than concatenated volumes because they reside on more spindles
• If there are not enough drives for all the meta members to be on separate drives, consider concatenated meta volumes
• If host striping is planned, concatenated meta volumes may be better
• Concatenated meta volumes can be placed on the same RAID group, and can create a reasonable emulation of a native large volume on Symmetrix DMX systems or Symmetrix VMAX systems
• Do not place striped meta volumes on the same RAID group (wrapped)
When you create volumes in XtremIO for a host, the default logical block (LB) size of a new XtremIO volume is 512B. This parameter can be adjusted to 4KB for supported hosts. When applications use a 4KB (or a multiple of 4KB) block size, it is recommended to present a volume with a 4KB block size to the host. In all other cases, use a 512B block size.
There are some very specific VMware configuration considerations for XtremIO environments. Most of these configuration steps are performed automatically if the VSI (Virtual Storage Integrator) plug-in is used in this environment.
Some further XtremIO VMware environment considerations are shown here. Integration with vSphere APIs assists greatly with the integration of this array as well.
Modifying the HBA settings is intended for advanced users, and operational familiarity is expected. For optimal operation, consider adjusting the queue depth and execution throttle of the FC HBA. The execution throttle setting controls the amount of outstanding I/O requests per HBA port. Queue depth settings control the amount of outstanding I/O requests per single path and are controlled at the driver module for the card at the OS level. When the execution throttle at the HBA level is set to a value lower than the queue depth, it may limit the effective queue depth to a lower value than configured.
Adjusting the execution throttle varies per host. This slide contains excerpts from the host configuration guide. Depending on the host, there can be many steps to perform.
Use caution when modifying the queue depth, as an incorrect setting can have a negative impact on performance. Adjusting the queue depth varies per host. This slide contains excerpts from the host configuration guide. Depending on the host, there can be many steps to perform.
EMC is responsible for providing steps to integrate a solution into any environment. Use caution when using EMC storage presentation best practices, as they are guidelines, not rules. Understanding all the variables in the solution prior to implementation is key to success. Technical documentation, guides, VMware configuration maximums, White Papers and EMC TechBooks are frequently updated; make sure to check sources often.
This lesson covered the supported ESXi storage connectivity protocols, both Fibre Channel and IP storage presentation to ESXi, and some generic networking considerations for ESXi environments.
This lesson covers several virtualization solution considerations, including replication, VMware Storage Distributed Resource Scheduler (SDRS), VMware datastore clusters, VMware Site Recovery Manager (SRM), and VMware Horizon Virtual Desktop Infrastructure storage environment integration with XtremIO.
In most cases the native snapshot and replication wizards within vCenter provide the best virtual machine replication option. They offer integrated vCenter functionality to automate and register the virtual machine replicas. EMC provides alternative replication options to create and register virtual machine replicas on NFS datastores, and create datastore replicas on EMC storage devices.
VNX provides the following features for virtual machine clones:
• VNX SnapView™ for block storage when using the FC, iSCSI, or FCoE protocols
• VNX SnapSure™ for file systems when using the NFS protocol
• EMC VNX Replicator
• EMC MirrorView™ and SAN Copy
• EMC RecoverPoint®
The TimeFinder local replication solutions include:
• TimeFinder/Clone – creates full-device and extent-level point-in-time copies
• TimeFinder/Snap – creates pointer-based logical copies that consume less storage space on physical drives
• TimeFinder VP Snap – provides the efficiency of Snap technology with improved cache utilization and simplified pool management
Using TimeFinder on a VMAX device containing a VMFS requires that all extents of the file system be replicated. Currently only SRDF, SRDFe, and Open Replicator for Symmetrix (ORS) are supported by the VASA Provider.
XtremIO features include snapshots for local replication (clones do not exist because of inline deduplication) and native remote replication with RecoverPoint (no splitter is required). RecoverPoint may be used for replication that requires a splitter by adding VPLEX to the environment.
VMware ESX/ESXi assigns a unique signature to all VMFS volumes when they are formatted with VMFS. The unique signature and the VMFS label are also stored on the device. Storage array replication technologies create exact replicas of the source volumes, and all information, including the unique signature, is copied. This causes problems if the replica is to be presented to the host(s) that own the source device.
VMware vSphere has the ability to individually re-signature and/or mount VMFS volume copies through the use of the vSphere Web Client or with the CLI utility esxcfg-volume (vicfg-volume for ESXi). Volume-specific re-signaturing allows for much greater control in the handling of snapshots. This is very useful when creating and managing volume copies created by EMC replication tools.
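A hedged sketch of handling a detected snapshot volume with the CLI utility mentioned above; the datastore label is a placeholder.
# List detected snapshot/replica VMFS volumes and their original labels
esxcfg-volume -l
# Resignature the copy so it can be mounted alongside the original volume
esxcfg-volume -r "Datastore01"
# Or mount it persistently with the existing signature (only if the original is not presented to this host)
esxcfg-volume -M "Datastore01"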
If the VMware environment frequently presents clone volumes back to the original owning ESXi server, the LVM.enableResignature flag can be set to minimize the administrative overhead of re-signaturing volumes. Setting this flag causes all snapshot LUNs to be automatically re-signatured. This is a host-wide setting, not a per-LUN setting, and care needs to be exercised when setting this flag. The CLI is used to set this flag, as shown in the sketch below.
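A minimal example of enabling the host-wide resignature flag from the ESXi shell; remember that this affects every snapshot LUN presented to the host.
# Check the current value of the advanced LVM option
esxcli system settings advanced list -o /LVM/EnableResignature
# Enable automatic resignaturing of all detected snapshot LUNs (host-wide)
esxcli system settings advanced set -o /LVM/EnableResignature -i 1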
This slide lists the tools available for cloning virtual machines and their supported protocols and technologies (e.g., Fibre Channel, FCoE, iSCSI, Copy On First Write (COFW), Redirect On Write (ROW), etc.).
• VNX SnapView
– Block storage clones using the FC, FCoE, and iSCSI protocols
– Block storage snapshots (COFW) using the FC, FCoE, and iSCSI protocols
• VNX SnapSure
– File system clones using the NFS protocol
• VNX Snapshots
– Block storage, Pool LUN only, Redirect On Write (ROW) snapshots
• VMware VAAI technology
– Block storage acceleration of native virtual machine clones
• NFS VAAI plug-in
– FAST virtual machine clones on NFS datastores
• VSI plug-in for Web Client
– Individual virtual machine clones
Shown here are the decision factors and tool options for Symmetrix Snap/Clone operations.
Snapshots are instantaneous copy images of volume data, with the state of the data captured exactly as it appeared at the specific point in time that the snapshot was created. This enables users to save the volume data state and then access the specific volume data whenever needed, including after the source volume has changed. Snapshots in XtremIO are regular volumes created as writeable snapshots. Creating snapshots in XtremIO does not affect system performance, and a snapshot can be taken either directly from a source volume or from other snapshots. XtremIO snapshots are inherently writeable, but can be created as read-only. When a snap is created, the following steps occur:
1. Two empty containers are created in memory
2. The snapshot SCSI personality points to the new snapshot sub-node
3. The SCSI personality which the host is using is linked to the second node in the internal data tree
This slide illustrates EMC tools for Remote Replication of Virtual Machines stored on VNX arrays and which volume technologies are supported by each of the tools.
SAN Copy is a VNX service that enables you to create copies of block storage devices on separate storage systems. SAN Copy propagates data from the production volume to a volume of equal or greater size on a remote storage array. SAN Copy performs replication at the LUN level and creates copies of LUNs that support VMFS datastores or RDM volumes on remote systems.
The VMAX-based replication technologies can generate a restartable or recoverable copy of the data. The difference between the two types of copies can be confusing; a clear understanding of the differences between the two is critical to ensure that the recovery goals for a vSphere environment can be met.
• A recoverable copy of the data is one in which the application (if it supports it) can apply logs and roll the data forward to a point in time after the copy was created.
• If a copy of a running virtual machine is created using EMC Consistency technology without any action inside the virtual machines, the copy is normally a restartable image of the virtual machine. This means that when the data is used on cloned virtual machines, the operating system and/or the application goes into crash recovery.
Symmetrix SRDF options:
• Synchronous SRDF (SRDF/S) is a method of replicating production data changes from locations less than 200 km apart. Synchronous replication takes writes that are inbound to the source VMAX and copies them to the target VMAX. The resources of the storage arrays are exclusively used for the copy. The write operation from the virtual machine is not acknowledged back to the host until both VMAX arrays have a copy of the data in their cache.
• SRDF/A, or asynchronous SRDF, is a method of replicating production data changes from one VMAX to another using delta set technology. Delta sets are the collection of changed blocks grouped together by a time interval that can be configured at the source site. The default time interval is 30 seconds. The delta sets are then transmitted from the source site to the target site in the order they were created. SRDF/A preserves the dependent-write consistency of the database at all times at the remote site.
• The SRDF/Star disaster recovery solution provides advanced multi-site business continuity protection for enterprise environments. It combines the power of Symmetrix Remote Data Facility (SRDF) synchronous and asynchronous replication, enabling the most advanced three-site business continuance solution available today. SRDF/Star enables concurrent SRDF/S and SRDF/A operations from the same source volumes, with the ability to incrementally establish an SRDF/A session between the two remote sites in the event of a primary site outage, a capability only available through SRDF/Star software.
• Concurrent SRDF allows the same source data to be copied concurrently to VMAX arrays at two remote locations. The capability of a concurrent R1 device to have one of its links synchronous and the other asynchronous is supported as an SRDF/Star topology. Additionally, SRDF/Star allows dynamic reconfiguration between concurrent and cascaded modes.
• Cascaded SRDF allows a device to be both a synchronous target (R2) and an asynchronous source (R1), creating an R21 device type. SRDF/Star supports the cascaded topology and allows dynamic reconfiguration between cascaded and concurrent modes.
• The SRDF/Extended Distance Protection (EDP) functionality is a licensed SRDF feature that offers a long-distance disaster recovery (DR) solution. This is achieved through a Cascaded SRDF setup, where a VMAX system at a secondary site uses DL R21 devices to capture only the differential data that would be owed to the tertiary site in the event of a primary site failure.
RecoverPoint native replication technology is implemented by leveraging the content-aware capabilities of XtremIO. This allows efficient replication by only replicating the changes since the last cycle. In addition, it leverages the mature and efficient bandwidth management of RecoverPoint to maximize the amount of I/O that the replication can support. When RecoverPoint replication is initiated, the data is fully replicated to the remote site: RecoverPoint creates a snapshot on the source and transfers it to the remote site. The first replication is done by first matching signatures between the local and remote copies, and only then replicating the required data to the target copy. For every subsequent cycle, a new snapshot is created, and RecoverPoint replicates just the changes between the snapshots to the target copy and stores the changes in a new snapshot at the target site (as shown on this slide).
Synchronous replication and CDP are supported with the VPLEX splitter solution. PowerPath, VPLEX, RecoverPoint, and XtremIO can be integrated together to offer a strong, robust, and high-performing block storage solution.
• PowerPath – installed on hosts to provide path failover, load balancing, and performance optimization to VPLEX engines (or directly to the XtremIO array if VPLEX is not used).
• VPLEX Metro – allows sharing storage services across distributed virtual volumes and enables simultaneous read and write access across metro sites and across array boundaries.
• VPLEX Local – used at the target site; virtualizes both EMC and non-EMC storage devices, leading to better asset utilization.
• RecoverPoint/EX – any device encapsulated by VPLEX (including XtremIO) can use RecoverPoint services for asynchronous, synchronous, or dynamic synchronous data replication.
The slide shows an example of XtremIO replication using the VPLEX splitter. An organization has three data centers, in New Jersey, New York City, and Iowa. Oracle RAC and VMware HA nodes are dispersed between the NJ and NYC sites, and data is moved frequently between all sites. The organization has adopted a multi-vendor strategy for its storage infrastructure:
• XtremIO storage is used for the organization's VDI and other high-performing applications.
• VPLEX Metro is used to achieve data mobility and access across both the NJ and NYC sites. VPLEX Metro provides the organization with AccessAnywhere capabilities, where virtual distributed volumes can be accessed in read/write at both sites.
• A disaster recovery solution is implemented by using RecoverPoint for asynchronous continuous remote replication between the metro sites and the Iowa site.
• VPLEX is used at the Iowa site to improve asset and resource utilization, while enabling replication from EMC to non-EMC storage.
VMware's Storage DRS (Distributed Resource Scheduler) automates moves via Storage vMotion. It leverages the datastore cluster construct, and it uses datastore capacity and I/O load to determine the optimal location for VM file placement. The more datastores a datastore cluster has, the more flexibility SDRS has to balance the cluster's load. It is recommended to monitor datastore I/O latency during peak hours to determine whether there are performance problems that can be or are being addressed. Make sure thin-provisioned devices do not run low on space: a VM that attempts to write to space that does not exist will be suspended.
When using the VMware vSphere Storage Distributed Resource Scheduler (SDRS) it is recommended that all devices in the datastore cluster have the same host I/O limit. Also, all datastores should come from the same array type. These recommendations are given because Host I/O Limit throttles I/O to those devices. If a datastore contains devices – whether from the same or a different array – that do not have a Host I/O Limit, there is always the possibility in the course of its balancing that SDRS will relocate virtual machines on those Host I/O limited devices to non I/O limited devices. Such a change might alter the desired quality of service or permit the applications on the virtual machines to exceed the desired throughput. It is therefore prudent to have device homogeneity when using EMC Host I/O Limit in conjunction with SDRS.
VMware Storage Distributed Resource Scheduler (SDRS) operates on a datastore cluster, a collection of datastores with shared resources. SDRS provides initial placement and ongoing balancing recommendations for the datastores in an SDRS-enabled datastore cluster. The aim is to minimize the risk of over-provisioning one datastore, storage I/O bottlenecks, and performance impact on virtual machines.
A datastore cluster can contain a mix of datastores with different sizes and I/O capacities, from different arrays and vendors. However, EMC does not recommend mixing datastores backed by devices with different properties (i.e., different RAID types or disk technologies) unless the devices are part of a FAST VP policy. Replicated datastores cannot be combined with non-replicated datastores in the SDRS cluster, and if SDRS is enabled, only manual mode is supported with replicated datastores.
When EMC FAST (DP or VP) is used in conjunction with SDRS, only capacity-based SDRS is recommended. Storage I/O load balancing is not recommended; simply uncheck the “Enable I/O metric for SDRS recommendations” box for the datastore cluster. Unlike FAST DP, which operates on thick devices at the whole-device level, FAST VP operates on thin devices at the far more granular extent level. Because FAST VP actively manages the data on disk, it is important to know the performance requirements of a VM (on a datastore under FAST VP control) before the VM is migrated from one datastore to another, because the thin pool distribution of the VM's data may not be the same after the move. Therefore, if a VM houses performance-sensitive applications, EMC advises not using SDRS with FAST VP for that VM. SDRS can be prevented from moving the VM by setting up a rule or by using Manual Mode for SDRS.
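As an illustration of the capacity-only recommendation, the following sketch (an assumption-laden example, not an EMC-provided script) uses pyVmomi to turn off the I/O metric for a datastore cluster, the programmatic equivalent of unchecking the “Enable I/O metric for SDRS recommendations” box. The cluster name and the connection setup (see the earlier sketch) are placeholders.

# Sketch: disable the SDRS I/O metric so only capacity-based balancing is used.
from pyVmomi import vim

def disable_io_metric(si, pod_name="DatastoreCluster01"):          # placeholder cluster name
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
    pod = next(p for p in view.view if p.name == pod_name)
    spec = vim.storageDrs.ConfigSpec(
        podConfigSpec=vim.storageDrs.PodConfigSpec(ioLoadBalanceEnabled=False))
    # modify=True changes only the fields that are set in the spec
    return content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=spec, modify=True)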
A VMware datastore cluster is a collection of datastores grouped together to present many storage resources as a single object. It is a key component in other VMware features like Storage Distributed Resource Scheduler (SDRS) and Storage Profiles. There are certain configuration guidelines that should be followed:
• Different arrays are supported, though device characteristics must match
• ESXi 5.0 or greater host required
• Must contain similar, interchangeable datastores
• You can mix VMFS3 and VMFS5, but it is not recommended
VMware vCenter Site Recovery Manager (SRM) provides a standardized framework to automate VMware site failover. SRM is integrated with vCenter and EMC storage systems and is managed through a vCenter client plug-in that provides configuration utilities and wizards to define, test, and execute failover processes called recovery plans. A recovery plan defines which assets are failed over and the order in which they are restored when the plan is executed. SRM includes capabilities to execute pre- and post-failover scripts to assist in preparing and restoring the environment.
Observe the following recommendations and cautions:
• Install VMware tools on the virtual machines targeted for failover. If the tools are not installed, an error event is generated in the recovery plan when SRM attempts to shut down the virtual machine. Click the History tab to view any errors.
• Create alarms to announce the creation of new virtual machines on the datastore so that the new virtual machines are added to the mirrors in the SRM protection scheme.
• Complete array replication configurations (local and remote replication) before installing SRM and SRA.
• Ensure that there is enough disk space configured for both the virtual machines and the swap file at the secondary site so that recovery plan tests run successfully.
• If SRM is used for failover, use SRM for simplified failback. Manual failback is a cumbersome process where each LUN is processed individually, including selecting the appropriate device signature option in vSphere on primary ESXi hosts. SRM automates these steps.
The Symmetrix Remote Data Facility Adapter (SRA) is a lightweight application that enables VMware Site Recovery Manager to interact with the remote data copies maintained on a Symmetrix array. The EMC Virtual Storage Integrator (VSI) can be used in conjunction with the vSphere Client to provide a GUI for configuring and customizing the SRA.
The typical storage infrastructure for a Horizon VDI environment consists of two major subdivisions: the storage used by the management infrastructure and the storage used by the virtual desktop infrastructure. The management infrastructure delivers, monitors, and manages the virtual desktops and provides the necessary services to the environment. This part of the environment has a more predictable I/O workload, and its storage requirements consist largely of the operating systems and applications required for management. The virtual desktop infrastructure consists of the virtual desktops themselves. This part of the VDI environment has a largely unpredictable I/O workload (though guidelines exist for different user classes), and its storage requirements consist largely of the operating systems, applications, and unique data for all of the desktops provided. Unique user data typically has lower performance requirements than the OS or application data and should be saved on lower-tier storage; VNX systems are ideal for this purpose.
With XtremIO, VDI is totally different. With its scale-out, truly N-way active-active architecture, XtremIO delivers the high performance at consistently low latency needed to scale a virtual desktop environment while always maintaining a great end-user experience. XtremIO's unique content-based, in-memory metadata engine, coupled with its inline, all-the-time data reduction technologies, vastly reduces the VDI storage capacity footprint. It is the only solution on the market today that can not only satisfy all the requirements of non-persistent and persistent desktops at scale, but also deliver on emerging VDI platform software technologies such as graphics-intensive VDI and Desktops-as-a-Service.
Multiple LUNs should be configured – each LUN has its own queue, and maximizing queue depth is important in VDI environments. The master VM image should be on a LUN by itself; this image will be duplicated many times as copies are presented to users. The image should contain the operating system and installed applications; user data and persona data should be stored on external NAS or lower-tier block storage, since their performance requirements are not as high. Data should be aligned on 8 kB boundaries, both for LUNs assigned to the hypervisor and for LUNs or virtual disks used by VMs. In some cases, the alignment is performed automatically.
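Because misalignment silently costs extra back-end I/O, a quick check is worthwhile. The sketch below is illustrative only; the partition start sectors are made-up examples taken from typical layouts, and real values would come from partedUtil or the guest partition table. It simply verifies that a partition's starting offset lands on an 8 kB boundary.

# Sketch: check whether partition start offsets fall on 8 kB boundaries.
ALIGNMENT = 8 * 1024          # 8 kB boundary recommended above
SECTOR = 512                  # bytes per logical sector

def is_aligned(start_sector: int) -> bool:
    return (start_sector * SECTOR) % ALIGNMENT == 0

# Example start sectors: 2048 (modern default, aligned) and 63 (legacy MBR, misaligned)
for name, start in [("master-image", 2048), ("legacy-vm", 63)]:
    state = "aligned" if is_aligned(start) else "MISALIGNED"
    print(f"{name}: starts at sector {start} -> {state}")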
SAN connectivity, whether FC or iSCSI, is important in a VDI environment. Zoning/IP connectivity should be as broad as possible, keeping in mind the limit on the number of paths to a single device. This restriction is significant in 6- and 8-X-Brick clusters, where the total number of front-end ports exceeds 16.
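The arithmetic behind that caution is simple: each host HBA port multiplied by the number of array front-end ports it is zoned to gives the path count per device. The sketch below is illustrative; the per-LUN path limit shown is a placeholder, so confirm the actual value in the vSphere Configuration Maximums guide for your ESXi release, and the port counts are example figures.

# Sketch: estimate paths per LUN for a zoning layout and compare to the host limit.
PATHS_PER_LUN_LIMIT = 32      # placeholder; verify against your ESXi version's maximums

def paths_per_lun(host_hba_ports: int, zoned_fe_ports: int) -> int:
    # With single-initiator zoning, each HBA port sees every front-end port it is zoned to
    return host_hba_ports * zoned_fe_ports

# Example: a host with 2 HBA ports against a large cluster exposing up to 32 FC front-end ports
for zoned in (4, 8, 16, 32):
    total = paths_per_lun(2, zoned)
    note = "OK" if total <= PATHS_PER_LUN_LIMIT else "exceeds limit - narrow the zoning"
    print(f"zoned to {zoned} front-end ports -> {total} paths per LUN ({note})")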
This lesson covered several virtualization solution considerations, including replication, VMware Storage Distributed Resource Scheduler (SDRS), VMware datastore clusters, VMware Site Recovery Manager (SRM), and VMware Horizon Virtual Desktop Infrastructure storage environment integration with XtremIO.
This lesson covers virtualization monitoring interfaces, EMC Virtual Storage Integrator (VSI) Plug-in advantages, and VMware vStorage APIs for Storage integration.
Various tools can be used to monitor VMware vSphere and vCenter environments; these tools are either command-line or graphical user interface based. Be cautious about using tools inside a virtual machine: the ESXi host adds a level of abstraction from the physical resources that can obscure the actual metrics of the environment and provide inaccurate data. This can skew expectations and make it harder to verify that performance goals are being met.
The VMware esxtop tool provides a real-time view (updated every five seconds by default) of ESX Server worlds sorted by CPU usage. The term world refers to processes running on the VMkernel. esxtop requires the operator to understand the different modes in which it provides data, and it gives the operator the insight needed to identify and isolate performance-related issues. Two examples of using esxtop to address a storage-related concern:
1. Check that the average latency of the storage device is not too high by verifying the GAVG/cmd metric.
• If Storage I/O Control (SIOC) is applied, the GAVG/cmd value must be below the SIOC setting.
• The default SIOC setting is 30 ms. On VNX storage with EFD or other SSD storage, the value might be reduced to accommodate the faster disk type.
2. Monitor QFULL/BUSY errors if Storage I/O Control (SIOC) is not used.
• Consider enabling and configuring queue depth throttling.
• Reduction of the number of commands returned from the array.
• Queue depth throttling is not compatible with Storage DRS.
resxtop is the remote version of the esxtop tool. Because VMware ESXi lacks a user-accessible Service Console where you can execute scripts, you cannot use "traditional" esxtop with VMware ESXi; instead, you use "remote" esxtop, or resxtop. The resxtop command is included with the vSphere Management Assistant (vMA), a special virtual appliance available from VMware that provides a command-line interface for managing both VMware ESX and VMware ESXi hosts.
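esxtop and resxtop can also run in batch mode (for example, esxtop -b -d 5 -n 60 > perf.csv), and the capture can be post-processed. The sketch below is illustrative; the column-name pattern used to find the GAVG/cmd metric is an assumption and should be adjusted to match your capture.

# Sketch: flag devices in an esxtop/resxtop batch capture whose guest latency is high.
import csv

THRESHOLD_MS = 30.0           # default SIOC congestion threshold; lower for EFD/SSD

with open("perf.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    # Assumed column naming for the GAVG/cmd metric in batch output
    gavg_cols = [i for i, name in enumerate(header)
                 if "Average Guest MilliSec/Command" in name]
    for row in reader:
        for i in gavg_cols:
            try:
                value = float(row[i])
            except (ValueError, IndexError):
                continue
            if value > THRESHOLD_MS:
                print(f"{header[i]}: GAVG {value:.1f} ms exceeds {THRESHOLD_MS} ms")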
vCenter provides the ability to monitor performance at many levels; these are common tasks for VMware administrators. The advanced chart options contain a wide array of metrics that can be sorted in many ways. Understand the variables inside this interface so you can correlate their possible impact on your solution. It is also a good tool for demonstrating that everything is functioning within expectations.
Overview mode displays the most common metrics
• Typically used to identify which advanced metrics require further investigation
Advanced mode adds granularity
• CPU – check CPU ready
• Memory – check ballooning and page file usage
• Network – check latency
• Storage – check VMkernel and physical device latency
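The same advanced metrics can be pulled programmatically through the vCenter performance manager. The following is a minimal pyVmomi sketch (connection setup as in the earlier sketches; the VM name is a placeholder) that retrieves recent cpu.ready.summation samples, the counter behind the "CPU ready" chart.

# Sketch: query recent CPU-ready samples for one VM via the performance manager.
from pyVmomi import vim

def cpu_ready_samples(si, vm_name="app-vm01"):         # placeholder VM name
    content = si.RetrieveContent()
    perf = content.perfManager
    # Build a "group.name.rollup" -> counter-ID map from the published counters
    counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
                for c in perf.perfCounter}
    ready_id = counters["cpu.ready.summation"]

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)

    spec = vim.PerformanceManager.QuerySpec(
        entity=vm, intervalId=20, maxSample=15,        # 20-second real-time samples
        metricId=[vim.PerformanceManager.MetricId(counterId=ready_id, instance="")])
    return perf.QueryPerf(querySpec=[spec])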
VMware vCenter Operations Manager is a component of the vCenter Operations Management Suite. It provides a simplified approach to operations management of the vSphere infrastructure. vCenter Operations Manager provides operations dashboards that give insight and visibility into health, risk, and efficiency, along with performance management and capacity optimization capabilities. This is an advanced component and represents an additional cost above vCenter.
EMC Virtual Storage Integrator (VSI) is targeted at the VMware administrator. VSI supports EMC storage provisioning from within vCenter, provides full visibility into the physical storage, and increases management efficiency. VMware administrators can use VSI to perform tasks such as creating VMFS and NFS datastores and RDM volumes, using the Access Control Utility, and many other storage management functions, all from their native VMware GUI.
The reference architecture is depicted on this slide and serves as a reminder of the plug-in installation location. The VSI plug-in enables VMware administrators to simplify administration of the following storage systems:
• EMC Celerra® network-attached storage (NAS)
• EMC CLARiiON® block
• EMC Symmetrix® VMAX®
• EMC VNX®, EMC VNX Next-Generation, and EMC VNXe®
• EMC VPLEX®
• EMC XtremIO™
Addition of the VSI Plug-in adds management capability to both the Symmetrix and VMAX environments as shown here:
EMC VMAX® storage systems
• View properties of VMFS datastores and RDM disks
• Provision VMFS datastores and RDM disks
• Set a read-only property on an array
• Restrict device size and thin pool size
• View detailed array properties
• Provision multiple RDM disks
EMC VMAX3™ storage systems
• View properties of VMFS datastores and RDM disks
• Provision VMFS datastores and RDM disks
• Set a read-only property on an array
• Restrict device size
• View detailed array properties
• Provision multiple RDM disks
EMC VPLEX® Storage Systems
• View properties of the VPLEX storage system, VMFS datastores, and RDM disks
• Provision VMFS datastores and RDM disks
The benefits of the VSI Plug-in addition to VNX environments are shown here.
EMC VNX® storage for ESX/ESXi hosts
• View properties of NFS and VMFS datastores and RDM disks
• Provision NFS and VMFS datastores and RDM disks
• Compress and decompress storage system objects on NFS and VMFS datastores
• Enable and disable block deduplication on VMFS datastores
• Create fast clones and full clones of virtual machines on NFS datastores
• Extend NFS and VMFS datastores
VNXe virtualized management environment enhancements are shown here.
EMC VNXe1600™ storage for ESX/ESXi hosts
• View properties of VMFS datastores and RDM disks
• Provision VMFS datastores and RDM disks
• Extend datastores on thick or thin LUNs
EMC VNXe3200™ storage for ESX/ESXi hosts
• View properties of NFS and VMFS datastores and RDM disks
• Provision NFS and VMFS datastores and RDM disks
• Bulk provision VMFS datastores and RDM disks
• Extend datastores on thick or thin LUNs and NFS file systems
• Compress and decompress virtual machines
Integration of the VSI Plug-in into a virtualized XtremIO environment enhances and improves management functionality by enabling a user to:
• View properties of ESX/ESXi datastores and RDM disks
• Provision VMFS datastores and RDM disks
• Create full clones using XtremIO native snapshots
• Integrate with VMware Horizon View and Citrix XenDesktop
• Set host parameters to recommended values
• Reclaim unused storage space
• Extend datastore capacity
• Bulk-provision datastores and RDM disks
• Schedule the Unmap operation for space reclamation
Further VSI Plug-in XtremIO management options:
• Set Block Size to 256 kB for XCOPY during cloning
• Create native snapshots
• View XtremIO snapshots generated for virtual machine restore
• Display clear capacity metrics (without “zeroed” space)
• Set properties on multiple hosts
EMC XtremIO 4.0 storage systems:
• Manage multiple clusters from a single XMS
• Create writable or read-only snapshots
• Create and manage snapshot schedules
• Restore virtual machines and datastores from XtremIO snapshots
The integration of the VSI Plug-in into software-defined storage environments like EMC ViPR and vVNX enables the management opportunities and features as shown here:
EMC ViPR® software-defined storage
• View properties of NFS and VMFS datastores and RDM disks
• Provision NFS and VMFS datastores and RDM disks
EMC vVNX™ software only storage
• View properties of VMFS datastores and RDM disks
• Provision VMFS datastores and RDM disks
• Extend datastores on thick or thin LUNs and NFS file systems
• Enable compression and deduplication on NFS datastores
• Provision and view properties of virtual volumes (VVols)
Integration of the VSI Plug-in into AppSync virtualized environments provides many administrative capabilities and opportunities:
• Manage AppSync server credentials
• Manage AppSync service plans (run on demand, subscribe, unsubscribe, create and subscribe, modify, and view subscriptions)
• Ignore selected virtual machine snapshots while protecting a datastore
• Manage AppSync datastore copies (restore, mount, unmount, expire, view event history)
Further VSI management enhancements and opportunities are:
• Restore a virtual machine from a virtual machine copy
• Manage protected virtual machines using AppSync at the datacenter level (restore, view copy event history)
• Subscribe to alerts at datacenter level
The current version of VSI supports the following functions of EMC RecoverPoint:
• Manage credentials
• Configure RecoverPoint to enable testing failover
• View and configure consistency groups
• Manage VMware vCenter Site Recovery Manager credentials
• View protection groups
The current version of VSI supports setting multipathing policies using the VMware Native Multipathing Plug-in (NMP) or EMC PowerPath/VE. Multipathing policies can be modified for datacenters, clusters, folders, and hosts.
vStorage APIs for Storage Awareness (VASA) are VMware-defined APIs that storage vendors can implement to surface storage information through vCenter. This visibility makes it easier for virtualization and storage administrators to make decisions about how datastores should be maintained – for example, choosing which disks should host a particular virtual machine (VM). VMware Aware Integration (VAI) allows end-to-end discovery of the VMware environment from the Unisphere GUI. The user can import and view VMware vCenters, ESXi servers, virtual machines, and VM disks and view their relationships. VAI also allows users to create, manage, and configure VMware datastores on ESXi servers from Unisphere.
VMware vStorage APIs for Array Integration (VAAI)
• For block connections:
– Hardware-assisted locking: provides an alternate method to protect the metadata of VMFS cluster file systems and improves the scalability of large ESXi servers sharing a VMFS datastore. Atomic Test & Set (ATS) allows locking at the block level of a logical unit (LU) instead of locking the whole LUN. Hardware-assisted locking provides a much more efficient way to avoid retries when obtaining a lock while many ESXi servers share the same datastore. It offloads the lock mechanism to the VNXe3200, and the array performs the lock at a very granular level. This permits significant scalability without compromising the integrity of the VMFS shared-storage pool metadata when a datastore is shared across a VMware cluster.
– Bulk Zero acceleration: enables the VNXe3200 to zero out a large number of blocks to speed up virtual machine provisioning. With Block Zero, the process of writing zeros is offloaded to the storage array; redundant and repetitive write commands are eliminated, reducing the server load and the I/O load between server and storage. The result is faster capacity allocation.
– Full Copy acceleration: enables the VNXe3200 to make full copies of data within the array without the need for the VMware ESXi server to read and write the data. Copy processing is faster, and the server workload and the I/O load between server and storage are reduced.
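As a quick health check, each of these primitives maps to a host advanced setting that can be read (or toggled) centrally. The sketch below is illustrative (pyVmomi, connection setup as in the earlier sketches); it reports whether the three block primitives are enabled on every host, where a value of 1 means enabled.

# Sketch: report the VAAI block-primitive advanced settings for each ESXi host.
from pyVmomi import vim

VAAI_OPTIONS = {
    "Hardware Assisted Locking (ATS)": "VMFS3.HardwareAcceleratedLocking",
    "Block Zero":                      "DataMover.HardwareAcceleratedInit",
    "Full Copy (XCOPY)":               "DataMover.HardwareAcceleratedMove",
}

def report_vaai(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opt_mgr = host.configManager.advancedOption
        for label, key in VAAI_OPTIONS.items():
            value = opt_mgr.QueryOptions(key)[0].value
            print(f"{host.name}: {label} = {value}")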
VMware vStorage APIs for Array Integration (VAAI)
– Thin Provisioning: with this VAAI feature, the host informs the storage device that blocks are no longer in use. This leads to more accurate reporting of disk space consumption and enables reclamation of the unused blocks on the thin LUN.
• For NFS connections, VAAI allows the VNXe series to be fully optimized for virtualized environments. This technology offloads VMware storage-related functions from the server to the storage system, enabling more efficient use of server and network resources for increased performance and consolidation.
vStorage APIs for Data Protection (VADP) is a vCenter interface used to create and manage virtual machine snapshots. It uses Changed Block Tracking (CBT) to facilitate backups and to reduce the amount of time and data transferred when backing up a virtual machine (after the initial full backup).
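CBT is a per-VM setting that backup applications depend on, so it is worth confirming it is turned on. The sketch below is illustrative (pyVmomi, connection setup as in the earlier sketches; the VM name is a placeholder); it enables CBT where it is missing, and the change takes effect at the next snapshot operation or power cycle.

# Sketch: enable Changed Block Tracking on a VM if it is not already enabled.
from pyVmomi import vim

def enable_cbt(si, vm_name="app-vm01"):                # placeholder VM name
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    if not vm.config.changeTrackingEnabled:
        spec = vim.vm.ConfigSpec(changeTrackingEnabled=True)
        return vm.ReconfigVM_Task(spec=spec)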
These demos cover examples of vStorage API integration. Click the Launch button to view the video.
This lesson covered virtualization monitoring interfaces, EMC Virtual Storage Integrator (VSI) Plug-in advantages, and VMware vStorage APIs for Storage integration.
This course covered the integration of EMC arrays and add-on technologies into VMware virtualized environments. It included an overview of VMware virtualized environments, infrastructure connectivity considerations, virtualization solutions such as local and remote replication options, monitoring, implementation of the EMC plug-ins, and vSphere API enhancements. This concludes the training.