VNX Local Protection Suite Fundamentals
Welcome to the VNX Local Protection Suite Fundamentals course.

Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology, Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz, DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, eInput, E-Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. © Copyright 2014 EMC Corporation. All rights reserved. Published in the USA. Revision Date: 10/2014 Revision Number: MR-1WP-VNXLPSFD
This course provides an introduction to the VNX Local Protection Suite solutions. We will discuss the architecture, features, and functionality of the VNX SnapView, VNX Snapshots, VNX SnapSure, and RecoverPoint/SE Local Protection products.
This module focuses on the solutions available in the VNX Local Protection Suite. During this module, we will also outline the various VNX software suites that are available.
Here we see all of the software suites available for the VNX series array. These various suites each contain a unique set of solutions to improve efficiency by simplifying and automating many storage tasks. This training focuses on the VNX Local Protection Suite. The VNX Local Protection Suite solutions are used for protecting and repurposing data by creating local file and block replicas.
The VNX Local Protection Suite offers local replication solutions that can significantly enhance a customer’s business and technical operations. It does this by providing access points to production data that enable parallel-processing activities such as backups. These access points can also be used for disk-based recovery after data corruption and for creating environments for application testing. The VNX Local Protection Suite contains four products: VNX SnapView, VNX Snapshots, VNX SnapSure, and RecoverPoint. VNX SnapView creates block-based logical point-in-time views of production information using snapshots and point-in-time copies using clones. Snapshots use only a fraction of the original disk space, while clones require the same amount of disk space as the source. VNX Snapshots create block-based logical point-in-time views of production information using snapshot technology. By using a different approach to the way new writes to the production LUN are handled, VNX Snapshots improve overall performance and consume less allocated storage space. VNX SnapSure creates logical point-in-time views of production file systems using snapshots. SnapSure uses only a fraction of the original disk space used by the source file system. RecoverPoint Local Protection is a synchronous product that mirrors volumes in real time between one or more arrays at a local site. RecoverPoint maintains a history journal of all changes that can be used to roll back the mirrors to any point in time. This course covers each of the products in the VNX Local Protection Suite in separate modules that describe and contrast their functionality and benefits.
This module focuses on the VNX SnapView block-based replication features. During this module we will take a look at the business uses for the SnapView product along with the product’s features and functionalities.
SnapView is a group of software products that run on the VNX. Since SnapView runs on the storage system, no host cycles are spent managing the replicas, giving SnapView an advantage over host-based products. SnapView creates block-based logical point-in-time views of production information using snapshots and point-in-time copies using clones. Snapshots use only a fraction of the original disk space, while clones require the same amount of disk space as the source. SnapView allows companies to make efficient use of their most valuable resource, information, by enabling parallel information access. SnapView allows multiple business processes to have concurrent, parallel access to information utilizing multiple point-in-time replicas. If there is a need to return the Source LUN to a previous data state, both clones and snapshots can restore data back to the Source LUN. When a restore is performed, the primary host sees the restored data immediately, causing minimal disruption to production. Snapshots and clones also have an optional consistency feature which allows data to be copied from multiple Source LUNs at exactly the same point in time. This ensures that data stored across several LUNs, such as database data and database logs, can be used to create a usable point-in-time copy. Unisphere allows easy management of SnapView features. Unisphere has been simplified by the addition of two wizards, one each for clones and snapshots. The wizards allow the user to create point-in-time copies without having to get involved in complex details.
SnapView snapshots and clones enable users to perform various tasks on a given data set, as required in a typical business environment, without compromising access to the data. SnapView technologies have a wide variety of business uses. The primary goal of SnapView is to allow system backups to run concurrently with application processing. When performing backups, consistent data must be written to the backup medium. If the application uses several related LUNs for storage, all of those LUNs must be in the same state when the backup is performed. SnapView snapshots and clones address this requirement with their data-consistency features. While backups are the primary use for SnapView, it is versatile enough to be used in other ways. For example, point-in-time copies can be taken every hour for critical applications, allowing easy recovery from corrupted or damaged files. Decision support systems can also use point-in-time copies, allowing them to work with real data with minimal effect on the application. Other uses for SnapView replicas include local data replication and making copies of data to be used as a source for remote data replication.
Now we’ll take a look at some of the different technologies associated with local replication. Pointer based technology uses pointers to indicate where data is currently located. Data can be located on the source LUN (Classic, Thin, or Thick), or may have been copied to the Save Area as a result of the technology used. When using pointer based technology, the pointer based LUN typically uses less space than when using full copy based technology. Though it appears to the host to be an actual LUN, it is a virtual LUN, with data located elsewhere. As a rough guide, pointer based technology uses about 20% of the space used by its Source LUN.
In contrast to pointer based technology, full copy based technology makes a full copy of the Source LUN data, and therefore uses additional disk space equal to 100% of the space used by the Source LUN. Because data can be copied back to the Source LUN, there is always a requirement that the Source LUN and Full Copy Based LUN be exactly the same size. When a Full Copy Based LUN is detached from the Source LUN, changes to both LUNs are tracked. This enables the LUNs to be resynchronized at a later time without having to perform another full copy. It also allows for the restoration of the source LUN back to the point in time when the LUN was detached.
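To make that space contrast concrete, here is a minimal Python sketch that applies the rough figures quoted in this course (about 20% of the Source LUN for a pointer based replica, 100% for a full copy). The function names, the 20% default ratio, and the 500 GB example LUN are illustrative assumptions, not part of any EMC tool.

```python
# Illustrative comparison of the rough space estimates described above.
# The 20% figure is the rule of thumb quoted in this course, not a guarantee.

def pointer_based_estimate(source_gb: float, change_ratio: float = 0.20) -> float:
    """Pointer-based replica: only changed chunks are preserved in the save area."""
    return source_gb * change_ratio

def full_copy_estimate(source_gb: float) -> float:
    """Full-copy replica: always equal in size to the Source LUN."""
    return source_gb

if __name__ == "__main__":
    source = 500.0  # GB, hypothetical Source LUN
    print(f"Pointer-based replica ~ {pointer_based_estimate(source):.0f} GB")
    print(f"Full-copy replica     = {full_copy_estimate(source):.0f} GB")
```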
SnapView Snapshots use pointer-based replication to indicate where data is currently located. Data may be on the source LUN (Classic, Thin, or Thick) or may have been copied to the Reserved LUN Pool, which is the private area used to contain Copy on First Write (CoFW) data chunks. SnapView Snapshots consist of three managed objects. The first is the Snapshot, which is a point-in-time copy of a Source LUN; the second is the Snapshot Session, which defines a point-in-time designation by invoking CoFW activity for updates to the Source LUN; and the third is the Reserved LUN Pool. As a result of the Copy on First Write technology used, SnapView snapshots may use appreciably less space than a full copy, such as a SnapView clone, would use. As a rough guide, a snapshot will use around 20% of the space occupied by its Source LUN.
To take advantage of the SnapView snapshot features, the VNX uses a Reserved LUN Pool. The Reserved LUN Pool is global to the VNX storage system. The LUNs that are used in the Reserved LUN Pool are created in the same manner as any other LUN; however, instead of being placed in Storage Groups and allocated to hosts, they are used internally by the storage system software. This slide shows the LUNs in the Reserved LUN Pool referred to as ‘private LUNs’; private in this case means that they cannot be used or seen by attached hosts. Each VNX system model has a maximum number of LUNs it will support and, therefore, each has a maximum number of LUNs that can be added to the Reserved LUN Pool. LUNs in the Reserved LUN Pool count toward the maximum number of LUNs allowed on the VNX. Those limits define the maximum number of snapshot Source LUNs on the VNX. The first step in configuring SnapView is the creation and assignment of LUNs to the Reserved LUN Pool. Only then are SnapView sessions allowed to start. Remember that as snapable LUNs are added to the VNX, the Reserved LUN Pool size may have to be reviewed and adjusted. If changes to the Reserved LUN Pool are needed, they can be made online.
The Copy on First Write mechanism involves saving an original data chunk into the Reserved LUN Pool when that data chunk is changed for the first time on the Source LUN. The chunk is saved only once per snapshot. This ensures that the view of the LUN is consistent and, unless writes are made to the snapshot, is always a true indication of what the Source LUN looked like at the time it was snapped. Saving only chunks that have been changed allows for efficient use of the available disk space, whereas a full copy of the LUN would use additional space equal in size to the Source LUN. On average, a snapshot uses about 20% of the space of its Source LUN.
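The following is a minimal Python sketch of the Copy on First Write bookkeeping just described. The chunk granularity, class name, and the dictionary standing in for the Reserved LUN Pool are illustrative assumptions, not the actual SnapView implementation.

```python
# Minimal Copy on First Write (CoFW) sketch. Chunk granularity, object names,
# and data types are illustrative, not the actual SnapView implementation.

class CoFWSnapshot:
    def __init__(self, source: dict):
        self.source = source            # chunk_id -> data on the Source LUN
        self.reserved = {}              # chunk_id -> original data ("Reserved LUN Pool")

    def write_to_source(self, chunk_id, new_data):
        # Preserve the original chunk only on the *first* write after the session started.
        if chunk_id not in self.reserved:
            self.reserved[chunk_id] = self.source[chunk_id]
        self.source[chunk_id] = new_data    # the write then proceeds normally

    def read_snapshot(self, chunk_id):
        # The snapshot view: preserved originals win, unchanged chunks come from the source.
        return self.reserved.get(chunk_id, self.source[chunk_id])

lun = {"A": "A", "B": "B", "C": "C"}
snap = CoFWSnapshot(lun)
snap.write_to_source("C", "C'")
assert snap.read_snapshot("C") == "C"       # point-in-time view preserved
assert lun["C"] == "C'"                     # production sees the new data
```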
In the example shown here, a snapshot session has been started and is active on the Source LUN. When the host writes new data to Chunk C on the Source LUN, the original Chunk C data is first copied to the Reserved LUN Pool, and then the write is processed against the Source LUN. This maintains a consistent, point-in-time copy of the data for the ongoing snapshot. After the write has been processed, the snapshot view seen by the secondary (reporting) host points to both the unchanged data on the Source LUN and the original data that was copied to the Reserved LUN Pool.
A SnapView clone is a complete copy of a Source LUN that uses copy-based technology. In order to start the copy process, a Source LUN and a clone LUN need to be placed into a clone group. While the clone is a part of the clone group and un-fractured, any production write requests made to the Source LUN are simultaneously copied to the clone. Once the clone contains the desired data, you can fracture the clone. Fracturing the clone separates it from its Source LUN, after which you can make it available to a secondary server. This technology allows users to perform additional storage management functions with minimal impact to the production host. While fractured, changes to both the Source LUN and the Clone are tracked using the Clone Private LUNs (discussed later in this module). By tracking the changes it allows for the clone to be incrementally synchronized with the Source LUN to obtain any updates that have been written to the Source LUN since the fracture. Alternately, a reverse synchronization can be performed from the Clone back to the Source LUN. This allows you to revert to an earlier copy of the Source LUN if the source becomes corrupted, or if new Source LUN writes are not desired.
Clone Private LUNs are at least 1 GB in size and two of them need to be created before any other clone operations can commence on the VNX. Clone Private LUNs record information about modified data extents on the Source LUN and clone LUN after the clone is fractured. A modified data extent is an extent of data that a production or secondary server changes by writing to the Source LUN or the clone LUN. A bitmap in the clone private LUN called the fracture log records this information, but no actual data is written to the clone private LUN. The log reduces the time it takes to synchronize or reverse synchronize a clone and its Source LUN as the software only copies modified extents.
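As a rough illustration of how a fracture log allows incremental resynchronization, here is a hedged Python sketch; the set-based bitmap, class name, and method names are invented for clarity and do not reflect SnapView internals.

```python
# Sketch of the fracture-log idea: track which extents were modified on each side
# after the fracture, so a later synchronization copies only what changed.

class FracturedClone:
    def __init__(self, extent_count: int):
        self.extent_count = extent_count
        self.source_dirty = set()   # extents written on the Source LUN since the fracture
        self.clone_dirty = set()    # extents written on the clone since the fracture

    def note_source_write(self, extent):
        self.source_dirty.add(extent)

    def note_clone_write(self, extent):
        self.clone_dirty.add(extent)

    def extents_to_copy_for_sync(self):
        # For a synchronization (source -> clone), every extent that may differ
        # must be refreshed: anything changed on either side since the fracture.
        return self.source_dirty | self.clone_dirty

log = FracturedClone(extent_count=1024)
log.note_source_write(7)
log.note_clone_write(42)
print(sorted(log.extents_to_copy_for_sync()))   # -> [7, 42], not all 1024 extents
```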
There are several differences between snapshots and clones. One such difference is how a secondary host accesses the snapshot or the clone. When a host accesses a snapshot, the data is available as soon as the snapshot session is started; whereas when a host accesses the data on a clone, it must wait until the clone is fully synchronized and fractured before it is allowed to access the data. On the other hand, there is no performance impact when using clones, as clones are independent of their Source LUNs once fractured. Since snapshots rely on the Source LUN, the Copy on First Write technology increases response times. There are several other differences between snapshots and clones; please take a moment to review them.
This module covered the VNX SnapView block-based replication product. We looked at the business uses for the SnapView product, the different technologies used in the product, and the differences between snapshots and clones.
This module focuses on the VNX Snapshots block-based replication product. We will take a look at the business uses for the VNX Snapshots product and discuss its features and functionality.
VNX Snapshots improve on the SnapView Snapshot capabilities by integrating better with pools. A VNX Snapshot is a point-in-time copy of a source LUN created using Redirect on First Write technology. This functionality differs significantly from the Copy on First Write technology used by SnapView. With Redirect on First Write, write performance is improved, and there is no impact to host I/O response time as the number of snapshots per primary LUN increases. Also, the use of Pool LUNs for both the primary LUN and snapshots provides a 90% increase in storage utilization efficiency. The Redirect on First Write technology addresses the limitations of Copy on First Write technology, allowing 256 writable snapshots per LUN. VNX Snapshots allow snapshots of snapshots, which provide point-in-time copies of snapshots that can be written to for such cases as code and application testing. Other uses are snapshots for backups, VMware VDI/VSI deployments, and reporting. VNX Snapshots are application consistent and can be used to quickly and efficiently provision copies of source data to Consistency Groups. VNX Snapshots also provide the ability to restore snapshots to Primary LUNs. VNX Snapshot operations can be performed via Unisphere, Navisphere Secure CLI, and snapCLI. The Secure CLI uses the same security features embodied in Unisphere. Users are authenticated via a username, password, and scope combination associated with each CLI command sent to the storage system. A host-based utility, snapCLI, can perform a subset of the VNX Snapshot management operations as well.
VNX Snapshots enable data access for several applications in a typical business environment without compromising the access to production data. VNX Snapshots are designed to provide point-in-time data copies for system backups, testing, decision support scenarios, fast local data recovery, and local data migration. For critical applications, point-in-time copies may be taken every hour to allow easy recovery from corrupted or damaged files. Decision support systems benefit from using point-in-time copies, as they are using real data, and have a minimal effect on the production application. In virtualized environments VNX Snapshots can provide local-level protection for VMware VDI/VSI deployments.
There are several differences between VNX Snapshots and SnapView snapshots. One of the main differences is that VNX Snapshots improve upon the SnapView Snapshot capabilities by integrating better with pools. Also, VNX Snapshots use Redirect on First Write technology. This functionality differs significantly from the Copy on First Write method used by SnapView. With Redirect on First Write, it is no longer necessary to read and write old data blocks to a reserved area when new writes from the application are processed to the source LUN. Other differences include the maximum number of snapshots per source LUN, the maximum number of snapshots per VNX, and the ability of VNX Snapshots to take a snapshot of a snapshot. There are several other differences between VNX Snapshots and SnapView snapshots; please take a moment to review them. Note: CGs (Consistency Groups) and SMP (Snapshot Mount Point).
The Redirect on First Write technology is significantly different from SnapView Copy on First Write. It is no longer necessary to write old data blocks to a reserved area when new writes from the application are processed to the source LUN. With Redirect on First Write, new writes to the primary LUN are stored in a new area within the same VNX Pool as the snapshot data. Reads to these data blocks are directed to the new location. In the same manner, writes to a snapshot are directed to the new location to preserve the snapshot, and reads to these modified data blocks are referenced to the new data location.
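Below is a small, hypothetical Python sketch of the Redirect on First Write idea: new writes land in a fresh location within the pool while existing snapshot block maps keep pointing at the original blocks. The block-map structure, class, and method names are illustrative only, not VNX pool metadata.

```python
# Minimal Redirect on First Write (RoFW) sketch. Block addressing and names are
# illustrative only; the real VNX pool metadata is far more involved.

class RoFWPool:
    def __init__(self, blocks: dict):
        self.blocks = blocks                              # pool address -> data
        self.primary_map = {addr: addr for addr in blocks}  # logical block -> pool address
        self.next_free = max(blocks) + 1

    def take_snapshot(self):
        # A snapshot is just a frozen copy of the primary LUN's block map.
        return dict(self.primary_map)

    def write_primary(self, logical_block, data):
        # The new data is *redirected* to a fresh pool location; the old block
        # stays where it is, so existing snapshot maps remain valid.
        self.blocks[self.next_free] = data
        self.primary_map[logical_block] = self.next_free
        self.next_free += 1

    def read(self, block_map, logical_block):
        return self.blocks[block_map[logical_block]]

pool = RoFWPool({0: "A", 1: "B"})
snap = pool.take_snapshot()
pool.write_primary(1, "B'")
assert pool.read(snap, 1) == "B"               # snapshot still sees the old block
assert pool.read(pool.primary_map, 1) == "B'"  # primary reads are redirected
```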
With Redirect on First Write, when a host writes to the source, the new data is written to a new area, and subsequent reads of that data reference the new location. When writes are performed to the snapshot, these writes are also sent to the new area. When reads are performed on the first snapshot, all data is read from the source LUN unless writes to the snapshot have occurred; in that case, the data is read from the new area. If a second snapshot is taken after writes to the Primary LUN have occurred, data is read from the original (Primary LUN) location as well as the new data area.
A Snapshot Mount Point is the mechanism that attaches the VNX Snapshot to the host. Hosts cannot attach to the snapshot unless the Snapshot Mount Point is created and added to the host’s storage group on the VNX. Once attached, the Snapshot Mount Point appears and can be used as a regular LUN. Snapshot Mount Points provide the ability for hosts to write to and change snapshots without the need to rescan the SCSI bus on the client.
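The sketch below illustrates the Snapshot Mount Point concept in Python: the host keeps a single device in its storage group while different snapshots are attached behind it, so no host-side SCSI rescan is needed. The class and attribute names are hypothetical.

```python
# Sketch of the Snapshot Mount Point (SMP) concept: the host keeps one device,
# while different snapshots are attached behind it. Purely illustrative names.

class SnapshotMountPoint:
    def __init__(self, name: str):
        self.name = name          # the device the host's storage group sees
        self.attached = None      # currently attached VNX Snapshot, if any

    def attach(self, snapshot_name: str):
        self.attached = snapshot_name   # no host-side SCSI rescan is needed

    def detach(self):
        self.attached = None

    def host_io(self):
        if self.attached is None:
            raise IOError(f"{self.name}: no snapshot attached")
        return f"I/O served from snapshot '{self.attached}'"

smp = SnapshotMountPoint("SMP_0")
smp.attach("DB_snap_0800")
print(smp.host_io())
smp.detach()
smp.attach("DB_snap_0900")      # same host device, different point in time
print(smp.host_io())
```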
Copying an unattached VNX Snapshot creates an identical replica except for the snapshot name and the “allowReadWrite” flag which is by default set to “no”. These copies reside in the same pool as the Primary LUN and the source VNX Snapshot and will preserve the properties of the Primary LUN. This process can be performed only on snapshots that are not attached to a Snapshot Mount Point. Attached Snapshots are branched by the process of snapping Snapshot Mount Points, which is called Cascading Snapshots. Copying a VNX Snapshot will increase the snapshot count by one and will observe the Max Snapshots per source limit of 256.
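Here is a hedged Python sketch of the copy rules just listed (unattached snapshots only, a new name, "allowReadWrite" reset to "no", and the 256-per-source limit); the record fields and helper function are invented for illustration.

```python
# Sketch of the copy rules described above (new name, allowReadWrite reset to "no",
# only unattached snapshots, 256-per-source limit). Field names are illustrative.

MAX_SNAPSHOTS_PER_SOURCE = 256

def copy_snapshot(snapshot: dict, new_name: str, existing_count: int) -> dict:
    if snapshot.get("attached_to_smp"):
        raise ValueError("attached snapshots are branched via cascading, not copied")
    if existing_count >= MAX_SNAPSHOTS_PER_SOURCE:
        raise ValueError("max snapshots per source reached")
    copy = dict(snapshot)
    copy["name"] = new_name
    copy["allowReadWrite"] = "no"    # copies always start read-only
    return copy

src_snap = {"name": "snap_1", "allowReadWrite": "yes", "attached_to_smp": False}
print(copy_snapshot(src_snap, "snap_1_copy", existing_count=12))
```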
A Consistency Group is an object that groups either a list of Primary LUNs or Snapshot Mount Points that are associated with a host and/or server application – Consistency Groups cannot mix Primary LUNs and Snapshot Mount Points. These LUNs are treated as a single entity for taking write-order consistent snapshots. Snapping the Consistency Group creates a snapshot set that represents the snapshots of the individual Consistency Group members. Typical uses are transactional databases and logs stored in separate LUNs that belong to an application server. When a snapshot of a Consistency Group is initiated, all writes to Primary LUNs are held until their snapshots have been created.
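To show the write-order-consistency idea, here is an illustrative Python sketch of the "hold writes, snap every member, release" sequence; the orchestration details and names are assumptions, not VNX internals.

```python
# Sketch of a write-order-consistent group snapshot: hold new writes, snapshot
# every member LUN at the same point in time, then release. Illustrative only.

class ConsistencyGroup:
    def __init__(self, member_luns):
        self.members = list(member_luns)   # Primary LUNs (or SMPs), never mixed
        self.writes_held = False

    def snap(self):
        self.writes_held = True            # hold incoming writes to every member
        try:
            # all member snapshots share the same point in time
            snapshot_set = {lun: f"{lun}@T0" for lun in self.members}
        finally:
            self.writes_held = False       # release writes even if snapping fails
        return snapshot_set

cg = ConsistencyGroup(["oracle_data_lun", "oracle_log_lun"])
print(cg.snap())   # e.g. {'oracle_data_lun': 'oracle_data_lun@T0', ...}
```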
Expired snapshots are destroyed at regular intervals. The VNX scans for expired snapshots once an hour. Keep in mind that the Auto-Delete process does not handle the destruction of expired snapshots; the destruction is handled by another software layer. When the expiration time is reached, the snapshot may not go away immediately; it is deleted by the process started at the next running interval. The destruction of a snapshot triggers a storage reclamation process to return consumed capacity back to the pool. The expiration field is a user-settable number in days/hours/months/years. When the expiration date is reached, the snapshot is marked for deletion but may not go away until the driver deletes expired snapshots or snapshot sets. Snapshots that are attached to a Snapshot Mount Point or that are involved in restores are not destroyed by the expiration process. Also, enabling Auto-Delete for a snapshot automatically clears the expiration timestamp for that snapshot, and the user is warned in Unisphere.
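The following Python sketch mimics the hourly expiration sweep described above, skipping snapshots that are attached or involved in a restore. The data structures, field names, and sample timestamps are illustrative assumptions.

```python
# Sketch of the hourly expiration sweep described above. The hourly cadence and
# the skip rules come from the text; everything else is illustrative.

from datetime import datetime

def expiration_sweep(snapshots, now=None):
    """Return the snapshots the sweep would destroy on this run."""
    now = now or datetime.now()
    doomed = []
    for snap in snapshots:
        if snap.get("attached") or snap.get("restore_in_progress"):
            continue                                  # never destroyed by expiration
        expires = snap.get("expires_at")
        if expires is not None and expires <= now:
            doomed.append(snap["name"])               # destroyed by this run
    return doomed

snaps = [
    {"name": "hourly_01", "expires_at": datetime(2014, 10, 1, 8), "attached": False},
    {"name": "mounted",   "expires_at": datetime(2014, 10, 1, 8), "attached": True},
]
print(expiration_sweep(snaps, now=datetime(2014, 10, 1, 12)))   # -> ['hourly_01']
```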
VNX Snapshots share the same pool as the Primary LUN. More storage space might be necessary to accommodate the changes to a Primary LUN and its point-in-time copies. Auto-delete is a mechanism that provides automated space management and capacity reclamation for a storage pool that is filling up, in order to protect the Thin LUNs. Auto-delete is a background process that scans the pool for eligible Primary LUNs, Snapshot Mount Points, Consistency Groups, snapshots, and snapshot sets. All eligible expired snapshots are deleted before snapshots with the ‘auto-delete’ option enabled are processed, and expired snapshots are deleted regardless of the pool auto-delete thresholds. Auto-delete is triggered using two independent thresholds: consumed pool space and consumed snapshot space. The process stops when the threshold conditions are met, when no eligible snapshots remain to be deleted, or when it is manually stopped. Storage reclamation is independent of auto-delete; therefore, free pool capacity may be realized slowly. Attached snapshots are excluded from auto-delete regardless of other settings. Also, enabling auto-delete on a snapshot with an expiration date set causes the expiration date to be cleared automatically.
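Here is a rough Python sketch of a two-threshold auto-delete pass (consumed pool space and consumed snapshot space), with expired snapshots handled first and attached snapshots excluded; the threshold values and snapshot records are made-up examples, not VNX defaults.

```python
# Sketch of the two-threshold auto-delete logic described above. Thresholds,
# snapshot records, and the space accounting are made-up numbers.

def auto_delete(snapshots, pool_used_pct, snap_used_pct,
                pool_high=85.0, snap_high=25.0):
    """Delete oldest eligible snapshots while either threshold is exceeded."""
    deleted = []
    # expired snapshots are removed first, regardless of the thresholds
    eligible = sorted((s for s in snapshots if not s["attached"]),
                      key=lambda s: (not s["expired"], s["created"]))
    for snap in eligible:
        if not snap["expired"] and pool_used_pct <= pool_high and snap_used_pct <= snap_high:
            break                       # both thresholds satisfied; stop the pass
        deleted.append(snap["name"])
        pool_used_pct -= snap["pool_pct"]
        snap_used_pct -= snap["pool_pct"]
    return deleted

snaps = [
    {"name": "old",  "created": 1, "expired": False, "attached": False, "pool_pct": 3.0},
    {"name": "new",  "created": 2, "expired": False, "attached": False, "pool_pct": 3.0},
    {"name": "gone", "created": 0, "expired": True,  "attached": False, "pool_pct": 2.0},
]
print(auto_delete(snaps, pool_used_pct=90.0, snap_used_pct=20.0))  # -> ['gone', 'old']
```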
This module covered the VNX Snapshots Block-based replication product. We discussed the business uses for the VNX Snapshots product, the technology used within the product, and the differences between the VNX snapshots and VNX SnapView products.
This module focuses on the VNX SnapSure File-based replication product. During this module we will explore the business uses for the VNX SnapSure product and review the features and functionality of the product.
VNX SnapSure provides read-only or read/write point-in-time views of a VNX file system. These logical views are called snapshots, and they save disk space and time by allowing multiple snapshot versions of a VNX file system. SnapSure is not a discrete copy product and does not maintain a mirror relationship between source and target volumes. It maintains pointers to track changes to the primary file system and reads data from either the primary file system or from a specified copy area. The copy area is referred to as a SavVol and is defined as a VNX metavolume. VNX SnapSure is fully supported in both Unisphere and Navisphere Secure CLI.
Read-only and writeable snapshots can serve as a direct data source for applications that require point-in-time data. Read-only snapshots can be used as a data source for automated backup engines performing backup to tape or disk. You can also use a read-only or a writeable snapshot to restore a Primary File System, or part of a file system such as a file or directory, to the state in which it existed when the snapshot was created. Snapshots are integrated with and can be used with the Windows Volume Snapshot Service. Writeable snapshots can be used for applications such as simulation testing and data warehouse population, and can also serve as a source for applications that require incremental, temporary changes. For example, a VNX file system may host an Oracle database on which a customer wants to test changes without actually applying those changes back to the production view. Another example is an administrator applying a database patch. It is first applied to the writeable snapshot and tested. If everything works, then a snapshot Restore is used to commit the changes from the writeable snapshot back to the Primary File System. Another use is when an administrator wants to perform sanity checks on the Primary File System without taking it offline; this can be done by running an fsck against a writeable snapshot instead of the Primary File System.
Let’s have a look at how the SavVol functionality works. We start with a Production File System with data blocks containing the letters A through F. When the first file system snapshot is created, a SavVol is also created on disk to hold the bitmap, the original data from the Production File System, and that particular snapshot’s blockmap. Each bit of the bitmap references a block on the Production File System. Next, a user or application makes some modification to the Production File System. In this example, we are writing an “H” in place of “B” and a “K” in place of “E”. Before these writes can take place, SnapSure places a hold on the I/Os and copies the “B” and “E” to the SavVol. Then the blockmap is updated with the location of the data in the SavVol. In this example, the first column of the blockmap refers to the block address in the Production File System, and the second column refers to the block address in the SavVol. Next, the bitmap is updated with a “1” wherever a block has changed in the Production File System; a “0” means that there were no changes to that block. After all of this takes place, SnapSure releases the hold on the I/Os and the writes can proceed. If these same two blocks are modified again, the writes go through and nothing more is saved in the SavVol. This is because of the Copy on First Write principle: we already saved the original data from that point in time, and anything after that is not Snapshot 1’s responsibility.
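The walkthrough above can be condensed into a short Python sketch of the bitmap, blockmap, and SavVol bookkeeping, reusing the same letters; the block numbering and data structures are illustrative, not the on-disk SnapSure layout.

```python
# Sketch of the SavVol bookkeeping from the walkthrough above (bitmap + blockmap
# + saved original blocks). Block numbering and structures are illustrative.

pfs      = ["A", "B", "C", "D", "E", "F"]   # Production File System blocks 0..5
bitmap   = [0] * len(pfs)                   # 1 = block changed since the snapshot
blockmap = {}                               # PFS block number -> SavVol slot
savvol   = []                               # preserved original blocks

def write_pfs(block, new_data):
    if bitmap[block] == 0:                  # Copy on First Write: save the original once
        blockmap[block] = len(savvol)
        savvol.append(pfs[block])
        bitmap[block] = 1
    pfs[block] = new_data                   # then let the write proceed

def read_snapshot(block):
    return savvol[blockmap[block]] if bitmap[block] else pfs[block]

write_pfs(1, "H")   # B -> H
write_pfs(4, "K")   # E -> K
write_pfs(1, "X")   # second overwrite: nothing more is saved
print(pfs)                                   # ['A', 'X', 'C', 'D', 'K', 'F']
print([read_snapshot(b) for b in range(6)])  # ['A', 'B', 'C', 'D', 'E', 'F']
```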
With VNX SnapSure, you can create, delete, and restore writeable snapshots. Writeable snapshots are branched from the “baseline” read-only snapshots and can be mounted and exported as read-write file systems. A baseline snapshot exists for the lifetime of the writeable snapshot. Any writeable snapshot must be deleted before the baseline is deleted. Writeable snapshots cannot be refreshed or be part of a snapshot schedule.
Writeable snapshots share the same SavVol with read-only snapshots. The amount of space used is proportional to the amount of data written to the writeable snapshot file system. Keep in mind that block overwrites do not consume more space. The SavVol will grow to accommodate a busy writeable snapshot file system. There is no SavVol shrink; the space cannot be returned to the cabinet until all snapshots of a file system are deleted. A deleted writeable snapshot returns its space to the SavVol.
The Checkpoint Virtual File System is a navigation feature that provides NFS and CIFS clients with automatic read-only access to all read-only snapshots from within an automatically created hidden directory in the Primary File System. This reduces or eliminates the need for help-desk involvement in user-level file and directory restore requests, and minimizes the need to use backups for these small restore requests. NFS clients can access the Checkpoint Virtual File System by mounting the Primary File System on a UNIX host and navigating to the hidden directory. CIFS clients access the Checkpoint Virtual File System via the ShadowCopyClient, a Microsoft Windows feature that allows Windows users to access previous versions of a file via the Microsoft Volume Shadow Copy Service. The ShadowCopyClient is supported by the VNX to enable Windows clients to list, view, copy, and restore files in snapshots created with VNX SnapSure. All read-only snapshots of the Primary File System are immediately visible to VSS clients as “previous versions”. Remember that writeable snapshots cannot be accessed through the Checkpoint Virtual File System.
This module covered the VNX SnapSure file-based replication product. We explored the business uses for the VNX SnapSure product and reviewed the features and functionality of the product.
This module focuses on the RecoverPoint/SE Local Protection block-based replication product. During this module we will explore the business uses for the RecoverPoint/SE Local Protection product and review the product’s features and functionality.
RecoverPoint/SE Local Protection does not use the WAN to replicate data. Instead of compressing the data and sending it over the WAN to a remote volume, it writes the data to a local volume. As there is no WAN involved, and hence no latency concern, RecoverPoint/SE Local Protection can synchronously track every write in the local Journal and distribute the write to the target volume, without impacting the application server’s performance. The process begins with the production host writing data to the production volumes. The write is intercepted by the splitter, and the splitter sends the write data to the RecoverPoint Appliance. Immediately upon receipt of the write data, the RecoverPoint Appliance returns an acknowledgement to the splitter. The splitter then writes the data to the production storage volume. Next, the storage system returns an acknowledgement to the splitter upon successfully writing the data to storage. Finally, the splitter sends an acknowledgement to the host acknowledging the write has been completed successfully.
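A minimal Python sketch of the local write-splitting sequence just described, preserving the step order from the text; the component classes, method names, and return values are illustrative placeholders rather than RecoverPoint interfaces.

```python
# Sketch of the local write-splitting sequence described above. The step order
# follows the text; component names and return values are illustrative.

def splitter_write(data, rpa, storage):
    rpa.receive(data)          # 1-2. splitter sends the write to the RecoverPoint Appliance
    rpa.ack()                  #      ...which acknowledges immediately upon receipt
    storage.write(data)        # 3.   splitter then writes to the production volume
    storage.ack()              # 4.   storage acknowledges the completed write
    return "ack_to_host"       # 5.   splitter acknowledges the host

class Component:
    def __init__(self, name): self.name = name
    def receive(self, data): print(f"{self.name}: received {data!r}")
    def write(self, data):   print(f"{self.name}: wrote {data!r}")
    def ack(self):           print(f"{self.name}: ack")

print(splitter_write("block 42", rpa=Component("RPA"), storage=Component("VNX SP")))
```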
RecoverPoint/SE Local Protection is an enterprise-scale solution designed to protect application data on heterogeneous servers and storage arrays. RecoverPoint/SE Local Protection protects and supports the replication of data within the local storage environment and uses the existing infrastructure to integrate seamlessly with existing host applications and data storage subsystems. RecoverPoint/SE Local Protection runs on an out-of-band appliance and combines industry-leading continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology. Point-in-time copies can be created for each write, or the user can choose the amount of data lag that can be tolerated for an application. This option is configurable for each group of volumes and can be edited at any time. The ability to access data from a copy allows for testing without sacrificing protection. This feature is also integrated with various applications, such as Exchange, SQL, and VMware, which allows for application-driven point-in-time copies. The RecoverPoint/SE Local Protection Management Application allows you to manage the RecoverPoint/SE Local Protection system. The application provides access to all appliances in the local RPA cluster. All of the information necessary for routine monitoring and configuration of the RecoverPoint/SE Local Protection system is included in the RecoverPoint/SE Local Protection Management Application.
RecoverPoint/SE Local Protection offers flexible recovery of all your data to any point in time. It does this by intercepting all writes to a source volume, and mirroring a copy of that write to the RecoverPoint appliance, which stores a compressed copy of the data in the Journal, then writes it to the replica LUN. Recovered images can be used for a variety of purposes such as backup and recovery, testing, development, and training, surgical recovery of files/folders, seeding a data mining farm, and cloning a federated environment.
The RecoverPoint Journal stores all changes to all LUNs in a Consistency Group. It also stores metadata that allows an Administrator to quickly identify the correct image to be used for recovery. The Journal provides time-stamped recovery points with application-consistent bookmarks. It also correlates system-wide events (port failure, system error, etc.) with potential corruption events, very useful when performing root-cause analysis. These application and system bookmarks are automatic, but users can also enter their own bookmarks into the system. The Journal also has the ability for application-specific annotations for Microsoft Exchange Server, Microsoft SQL Server, and has Oracle awareness for Oracle 9i running on Solaris.
Each copy of data in a consistency group must contain one or more volumes that are dedicated to holding the point-in-time history of the data. The type and amount of information contained in the journal differs according to the journal type. There are two types of journal volumes: copy journals and production journals. The journal volumes hold snapshots of data to be replicated. Each journal volume holds as many point-in-time images as its capacity allows, after which the oldest image is removed to make space for the newest. Journals consist of one or more volumes presented to all of the RecoverPoint Appliances in the cluster. Space can be added, to allow a longer history to be stored, without affecting replication. The size of a journal volume is based on several factors: the change rate of the data being protected, the amount of time between point-in-time images, and the number of point-in-time images that are kept.
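As a back-of-the-envelope example of how those three factors combine, here is an illustrative Python calculation; the change rate, image interval, retention period, and 20% overhead factor are assumed numbers for the sketch, not an EMC sizing formula.

```python
# Rough journal-sizing arithmetic using the three factors named above. The
# numbers and the simple multiplication are illustrative, not an EMC sizing tool.

def journal_size_gb(change_rate_mb_per_s: float,
                    seconds_between_images: float,
                    images_kept: int,
                    overhead: float = 1.2) -> float:
    """Estimate required journal capacity in GB for a consistency group copy."""
    data_per_image_mb = change_rate_mb_per_s * seconds_between_images
    return data_per_image_mb * images_kept * overhead / 1024.0

# e.g. 5 MB/s of change, an image every 60 s, 24 hours of images retained
images = 24 * 60
print(f"~{journal_size_gb(5.0, 60.0, images):.0f} GB of journal")
```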
Replication volumes or Replicas are the production storage volumes and their matching target volumes which are used during replication. Target volumes must be the same size or larger than the source volumes. Any excess size will not be replicated or visible to the host. This is an important design consideration for heterogeneous storage environments.
A special volume called the Repository volume must be dedicated on the SAN-attached storage for each RecoverPoint Appliance cluster. This volume stores configuration information about the RecoverPoint Appliances, the cluster, and the consistency groups. This enables a properly functioning RecoverPoint Appliance to seamlessly assume the replication activities of a failing RecoverPoint Appliance from the same RecoverPoint Appliance cluster. There is a Repository volume for every RecoverPoint cluster. The volume is presented to each RecoverPoint Appliance, either via the SAN or using iSCSI for virtual RecoverPoint Appliances.
The array-based splitter runs in each storage processor of a VNX array and splits all writes to a volume, sending one copy to the original target and the other copy to the RecoverPoint CDP appliance. The Array-based splitter is supported on VNX arrays.
Consistency groups define protection for a set of volumes. If two data sets are dependent on one another such as a database and a database log, they should be part of the same consistency group. Consistency groups maintain write order between the data sets. Settings and policies for data protection are defined for each consistency group. Examples of these parameters are: compression, bandwidth limits, and maximum lag. As an example, imagine a motion picture film. The video frames are saved on one volume, the audio on another. Neither volume will make sense without the other. The saves must be coordinated so that they will always be consistent with one another. In other words, the volumes must be replicated together in one consistency group to guarantee that at any point in time, the saved data will represent a true state of the film. The consistency group ensures that updates to the production volumes are also written to the copies in consistent and correct write-order so the copy can always be used to continue working from, or to restore the production source.
RecoverPoint supports three replication modes. In synchronous replication, the replica must acknowledge the write transaction before an acknowledgement can be returned to the host application that initiated the write. Replication in synchronous mode produces a replica that is 100% up to date with the production source. Synchronous communication is efficient within the local SAN environment for RecoverPoint. Asynchronous replication does not conserve bandwidth. Furthermore, and particularly as volumes increase, it is increasingly susceptible to data loss, as more and more data that has been acknowledged at the source may not have been delivered to the target. RecoverPoint can limit the use of asynchronous replication to those situations in which it enables superior host performance but does not result in an unacceptable level of potential data loss. Snap-based replication is an alternative asynchronous replication mode available for VNX arrays. VNX users can leverage and replicate array snaps during high-load periods or periodically. It reduces the amount of traffic sent from source to target, saving bandwidth and consuming less journal space.
This module covered the RecoverPoint/SE Local Protection block-based replication product. During this module we explored the business uses for the RecoverPoint/SE Local Protection product and reviewed the product’s features and functionality.
As we have seen in this course, the VNX Local Protection Suite comprises four products: VNX SnapView, VNX Snapshots, VNX SnapSure, and RecoverPoint/SE Local Protection. Each of these products has a unique set of features for protecting and repurposing data by creating local file and block replicas. This table contrasts the various products contained in the VNX Local Protection Suite. Please take a moment to review them.
This course covered the various technologies that make up VNX Local Protection Suite solutions. This included an overview of the architecture, features, and functionality of VNX SnapView, VNX Snapshots, VNX SnapSure, and RecoverPoint/SE Local Protection.