3PAR Presentation 14apr11-2
3PAR TechCircle HP Dübendorf 14. April 2011
• Reto Dorigo Business Unit Manager Storage
• Serge Bourgnon 3PAR Business Development Manager
• Peter Mattei Senior Storage Consultant
• Peter Reichmuth Senior Storage Consultant
Agenda
09:00 – 09:15  Welcome (Begrüssung) – Serge Bourgnon, Hewlett-Packard Schweiz
09:15 – 10:15  HP 3PAR Architecture – Peter Reichmuth / Peter Mattei
10:15 – 10:45  Break
10:45 – 11:45  HP 3PAR Software + Features – Peter Mattei / Peter Reichmuth
11:45 – 12:15  Live Demo – Peter Mattei / Peter Reichmuth
3PAR background
• Founded by server engineers
• Funded by leading infrastructure providers
• Commercial shipments since 2002
• Initial Public Offering, November 2007 (NYSE: PAR)
• Profitable with a strong balance sheet
• Expanding presence in the US, Canada, Europe, Asia and Africa
• Acquired by HP in September 2010
The HP Storage Portfolio
[Portfolio diagram, summarized:]
• Online: P2000, P4000, EVA, P9500, 3PAR disk arrays; X1000, X3000, X9000 network storage
• Nearline: RDX, tape drives & tape autoloaders; MSL, EML and ESL tape libraries; D2D backup systems; VLS virtual library systems
• Infrastructure: SAN connection portfolio; B, C & H Series FC switches/directors; ProCurve wired, wireless, data center, security & management; ProCurve enterprise switches
• Software: Business Copy, Continuous Access, Data Protector, Data Protector Express, Storage Mirroring, Storage Array Software, Storage Essentials, Cluster Extension
• Services: Proactive Select, SAN Assessment, Proactive 24, Backup & Recovery, Critical Service, SupportPlus 24, Entry Data Migration, SAN Implementation, Installation & Start-up, Data Migration, Storage Performance Analysis, Data Protection, Remote Support, and consulting services (consolidation, virtualization, SAN design)
HP StorageWorks Portfolio – leading the next storage wave
[Positioning diagram, summarized – customer segments range from large enterprise, federal and cloud/hosting service providers through corporate and mid-size down to small/remote and branch office:]
• Block-level storage: P9000 (XP), 3PAR, P6000 (EVA), P4000 (LeftHand), P2000 (MSA)
• File-level storage: X9000 (IBRIX), X3000 (MS WSS), X1000 (MS WSS)
• Backup/recovery: StoreOnce
Architecture for Cloud Services
• Multi-tenant clustering: performance and capacity scalability for multiple apps; high utilization with high performance/service levels; handling of diverse and unpredictable workloads; security among tenants; resiliency – acceptable service levels even with a major component failure
• Thin technologies: eliminate capacity reservations; allow fat-to-thin volume migrations without disruption or post-processing; continual, intelligent re-thinning without disruption; fast implementations of low-overhead RAID levels
• Autonomic management: autonomic configuration (including for server clusters), capacity provisioning, data movement, performance optimization and storage tiering

3PAR leads in all 3 categories – built-in, not bolt-on
• Multi-tenant clustering: Mesh-Active, cache-coherent cluster; ASIC-based mixed workload; Virtual Private Array security; Tier-1 HA and DR; failure-resistant performance and QoS
• Thin technologies: reservation-less, dedicate-on-write; Thin Engine and Thin-API-based reclamation; ASIC-based zero detection; ASIC-based fast RAID
• Autonomic management: Autonomic Groups; autonomic capacity provisioning for thin technologies; Dynamic Optimization; Adaptive Optimization; System Tuner, Policy Advisor; wide striping, sub-disk RAID
HP 3PAR Industry Leadership – best new technology in the market
• 3PAR Thin Provisioning – industry-leading technology to maximize storage utilization
• 3PAR Autonomic Storage Tiering – automatically optimizes using multiple classes of storage
• 3PAR Virtual Domains – multi-tenancy for service providers and private clouds
• 3PAR Dynamic Optimization – workload management and load balancing
• 3PAR Full-Mesh Architecture – advanced shared-memory architecture
HP 3PAR InServ Storage Servers
Same OS, same management console, same replication software.

Model | F200 | F400 | T400 | T800
Controller Nodes | 2 | 2–4 | 2–4 | 2–8
FC Host Ports | 0–12 | 0–24 | 0–48 | 0–96
Optional iSCSI Host Ports | 0–8 | 0–16 | 0–16 | 0–32
Built-in Remote Copy Ports | 2 | 2 | 2 | 2
Control/Data Cache (GB) | 8/12 | 8–16/12–24 | 8–16/24–48 | 8–32/24–96
Disk Drives | 16–192 | 16–384 | 16–640 | 16–1,280
Drive Types | 50GB SSD*, 300/600GB FC and/or 1/2TB NL (all models)
Max Capacity | 128TB | 384TB | 400TB | 800TB
Throughput (MB/s) | 1,300 | 2,600 | 3,800 | 5,600
IOPS (from disk) | 46,800 | 93,600 | 156,000 | 312,000
SPC-1 Benchmark Results | – | 93,050 | – | 224,990

* max. 32 SSDs per node pair
Array Comparison – Maximum Values

Metric | EVA8400 | 3PAR T800 | P9500
Internal Disks | 324 | 1,280 | 2,048
Internal Capacity (TB) | 194/324 ¹ | 800 | 1,226/2,040 ³
Subsystem Capacity (TB) | 324 | 800 | 247,000
FC Host Ports | 8 | 128/32 ² | 192
# of LUNs | 2,048 | NA | 65,280
Cache (GB) | 22 | 32+96 | 512
Sequential Disk Performance (GB/s) | 1.57 | 6.4 | >15
Random Disk Performance (IOPS) | 78,000 | >300,000 | >350,000
Internal Bandwidth (GB/s) | NA | 44.8 | 192

¹ with 600GB FC / 1TB FATA disks
² optional iSCSI host ports
³ with 600GB SAS / 1TB near-SAS disks
HP 3PAR Scalable Performance: SPC-1 Comparison
Transaction-intensive applications typically demand response times below 10 ms.
[Chart: response time (ms, 0–30) vs. SPC-1 IOPS™ (0–225,000). Midrange arrays plotted: EMC CLARiiON CX3-40, NetApp FAS3170, HDS AMS 2500, 3PAR InServ F400. High-end arrays: IBM DS8300 Turbo, IBM DS5300, HDS USP V / HP XP24000, 3PAR InServ T800.]
Legacy vs. HP 3PAR Hardware Architecture – Traditional Tradeoffs
• Traditional modular storage: cost-efficient, but scalability and resiliency are limited by the dual-controller design
• Traditional monolithic storage (host connectivity, cache, switched backplane, disk connectivity): scalable and resilient but costly; does not meet multi-tenant requirements efficiently
• HP 3PAR meshed, active architecture with distributed controller functions: cost-effective, scalable and resilient; meets cloud-computing requirements for efficiency, multi-tenancy and autonomic management
HP 3PAR – Four Simple Building Blocks (F200/F400, T400/T800)
• Controller Nodes: performance and connectivity building block; CPU, cache and 3PAR ASIC; system management; RAID and thin calculations
• Node Mid-Plane: cache-coherent interconnect at 1.6 GB/s per node; completely passive, encased in steel; defines scalability
• Drive Chassis: capacity building block; F chassis: 3U, 16 disks; T chassis: 4U, 40 disks
• Service Processor: one 1U SP per system, for service and monitoring
HP 3PAR Architectural Differentiation – purpose-built on native virtualization (HP 3PAR Utility Storage, F-Class / T-Class)
• Utilization: Thin Provisioning, Thin Conversion, Thin Persistence
• Manageability: Dynamic Optimization, Adaptive Optimization, Virtual Domains, Virtual Lock, Recovery Managers, System Reporter, Virtual Copy, Remote Copy
• Autonomic policy management: self-configuring, self-optimizing, self-healing, self-monitoring
• Performance: InForm fine-grained OS, Mesh-Active, Fast RAID 5/6, mixed workload, instrumentation, Gen3 ASIC, zero detection
Mixed workload support – multi-tenant performance
• Traditional storage: control information (metadata) and data share a unified processor and/or memory on the path from host interface to disk interface. When a heavy throughput workload is applied alongside a heavy transaction workload, small I/Os wait for large I/Os to be processed.
• 3PAR controller node: control information is handled by the control processor & memory, while data moves through the 3PAR ASIC & memory. Control information and data are pathed and processed separately, so heavy transaction and heavy throughput workloads are sustained simultaneously.
HP 3PAR High Availability: spare disk drives vs. distributed sparing
• Traditional arrays: a dedicated spare drive; few-to-one rebuilds cause hotspots and long rebuild exposure
• 3PAR InServ: spare chunklets distributed across all drives; many-to-many parallel rebuilds complete in less time
HP 3PAR High Availability: guaranteed drive-shelf availability
• Traditional arrays: shelf-dependent RAID groups; a shelf failure means no access to data
• 3PAR InServ: shelf-independent raidlet groups striped across shelves; data access is preserved despite a shelf failure
HP 3PAR High Availability: write-cache re-mirroring
• Traditional arrays: on a controller failure, write-cache mirroring is switched off, causing poor performance due to write-through mode
• 3PAR InServ: persistent write-cache mirroring re-mirrors to the remaining nodes – no write-through mode and consistent performance; works with 4 or more nodes (F400, T400, T800)
HP 3PAR virtualization advantage
Traditional array:
• Each RAID level requires dedicated disks
• Dedicated spare disks are required
• Limited single-LUN performance
[Diagram: traditional controllers with separate RAID 1, RAID 5 and RAID 6 sets plus spare disks; each LUN is bound to its own physical disks]
HP 3PAR:
• All RAID levels can reside on the same disks
• Distributed sparing
• Built-in wide striping based on chunklets
[Diagram: 3PAR InServ controllers with R1, R5 and R6 chunklets intermixed across all physical disks 0–7]
HP 3PAR F-Class InServ Components (3PAR 40U 19" cabinet or customer-provided)
• 16-slot Drive Chassis (3U): capacity building block with 4-disk drive magazines; add non-disruptively; industry-leading density
• Controller Nodes (4U): performance and connectivity building block with adapter cards; add non-disruptively; each runs an independent OS instance
• Full-mesh backplane: post-switch architecture; high-performance, tightly coupled; completely passive
• Service Processor (1U): remote error detection; supports diagnostics and maintenance; reporting to 3PAR Central
HP 3PAR F-Class Node Configuration Options
• One quad-core Xeon 2.33GHz CPU and one 3PAR Gen3 ASIC per node
• 4GB control cache and 6GB data cache per node
• Built-in I/O ports per node: 10/100/1000 Ethernet management port & RS-232; Gigabit Ethernet port for IP Remote Copy; 4 x 4Gb/s FC ports (2 built-in FC disk ports, 2 built-in FC disk-or-host ports)
• Optional I/O per node: up to 4 more FC or iSCSI ports (mixable); slots 0 and 1 each take 2 FC ports for host, disk or FC replication, or 2 GbE iSCSI ports
• Preferred slot usage (in order), depending on customer requirements:
  - Disk connections: slot 0 (ports 1,2), then 0, 1 – higher back-end connectivity and performance
  - Host connections: slot 0 (ports 3,4), then 1, 0 – higher front-end connectivity and performance
  - RCFC connections: slot 1 or 0 – enables FC-based Remote Copy (first node pair only)
  - iSCSI connections: slot 1, 0 – adds iSCSI connectivity
HP 3PAR InSpire Architecture: F-Class Controller Node
• Quad-core Xeon 2.33 GHz
• Cache per node: 4GB control cache (2 x 2048MB DIMMs) and 6GB data cache (3 x 2048MB DIMMs)
• SATA: local boot disk
• Gen3 ASIC: data movement, XOR RAID processing, built-in thin provisioning
• I/O per node: 3 PCI-X buses, 2 PCI-X slots and one onboard 4-port FC HBA
[Diagram: multifunction controller with LAN, serial and SATA interfaces, control and data cache, and high-speed data links to the other nodes]
F-Class DC3 Drive Chassis
• The drive chassis ("cage") contains 4 drive bays that accommodate 4 drive magazines; each magazine holds four disks, and each disk is individually accessible
• Maximum of 16 drives per drive chassis; drives must be populated 4 at a time (one magazine)
• 2 x 4Gb FC interfaces connected to 2 controller nodes (node 0 and node 1)
• Chassis can be daisy-chained for 32 drives per loop, doubling the capacity behind a node pair
• Minimum configuration is 4 drive chassis; upgrades must increment by 4 drive chassis
• 4 drive magazines (16 drives) must be deployed at a time, spread across all 4 drive chassis (1 magazine per chassis)
Connectivity Options per F-Class Node Pair

Ports 0–1 | Ports 2–3 | PCI Slot 1 | PCI Slot 2 | FC Host Ports | iSCSI Ports | Remote Copy FC Ports | Drive Chassis | Max Disks
Disk | Host | – | – | 4 | – | – | 4 | 64
Disk | Host | Host | – | 8 | – | – | 4 | 64
Disk | Host | Host | Host | 12 | – | – | 4 | 64
Disk | Host | Host | iSCSI | 8 | 4 | – | 4 | 64
Disk | Host | iSCSI | RCFC | 4 | 4 | 2 | 4 | 64
Disk | Host | Disk | – | 4 | – | – | 8 | 128
Disk | Host | Disk | Host | 8 | – | – | 8 | 128
Disk | Host | Disk | iSCSI | 4 | 4 | – | 8 | 128
Disk | Host | Disk | RCFC | 4 | – | 2 | 8 | 128
Disk | Host | Disk | Disk | 4 | – | – | 12 | 192
HP 3PAR T-Class InServ Components (3PAR 40U 19" cabinet with built-in cable management)
• Drive Chassis (4U): capacity building block with drive magazines; add non-disruptively; industry-leading density
• Controller Nodes (4U): performance and connectivity building block with adapter cards; add non-disruptively; each runs an independent OS instance
• Full-mesh backplane: post-switch architecture; high-performance, tightly coupled; completely passive
• Service Processor (1U)
The 3PAR Evolution: bus to switch to full-mesh progression
3PAR InServ full-mesh backplane:
• High performance / low latency; passive circuit board
• Slots for controller nodes
• Links every controller to every other controller (full mesh), single hop
• 1.6 GB/s per link (4 times 4Gb FC); 28 links on the T800
3PAR InServ T800 with 8 nodes (pictured with 640 disks):
• 8 ASICs with 44.8 GB/s of bandwidth
• 16 Intel® dual-core processors
• 32 GB of control cache and 96GB of total data cache
• 24 I/O buses, totaling 19.2 GB/s of peak I/O bandwidth
• 123 GB/s peak memory bandwidth
HP 3PAR T-Class Controller Node
• 2 to 8 per system, installed in pairs
• 2 Intel dual-core 2.33 GHz CPUs per node
• 16GB cache per node: 4GB control / 12GB data
• Gen3 ASIC: data movement, ThP and XOR RAID processing
• Scalable connectivity per node: 3 PCI-X buses, 6 PCI-X slots (0–5)
• Preferred slot usage (in order): 2 slots for 8 FC disk ports; up to 3 slots for 24 FC host ports; 1 slot with 1 FC port for Remote Copy (first node pair only); up to 2 slots for 8 x 1GbE iSCSI host ports
• Node ports: console port C0, Remote Copy Ethernet port E1, management Ethernet port E0, host FC/iSCSI/RC FC ports, disk FC ports
HP 3PAR InSpire Architecture: T-Class Controller Node – scalable performance per node
• 2 to 8 nodes per system
• Gen3 ASIC: data movement, XOR RAID processing, built-in thin provisioning
• 2 Intel dual-core 2.33 GHz CPUs for control processing
• SATA: local boot disk
• Max host-facing adapters: up to 3 (3 FC / 2 iSCSI)
• Scalable connectivity per node: 3 PCI-X buses / 6 PCI-X slots
T-Class DC04 Drive Chassis
• From 2 to 10 drive magazines
• (1+1) redundant power supplies
• Redundant dual FC paths and redundant dual switches
• Each magazine always holds 4 disks of the same drive type
• Magazines within a chassis can have different drive types, for example: 3 magazines of FC, 1 magazine of SSD and 6 magazines of SATA
T400 Configuration Examples
[Rack diagrams: two T400 configurations mixing 600GB FC and 2TB NL drive magazines across the drive chassis, with controller nodes, redundant power and the 3PAR Service Processor]
• A T400 minimum configuration is 2 nodes and 4 drive chassis with 2 magazines per chassis
• Upgrades are done as columns of magazines down the drive chassis
T800 Fully Configured – 224,000 SPC-1 IOPS
[Rack diagram: a fully populated T800 spanning multiple cabinets of 600GB FC drive magazines, with redundant power and the 3PAR Service Processor]
• 8 nodes
• 32 drive chassis
• 1,280 drives
• 768TB raw capacity with 600GB drives
• 224,000 SPC-1 IOPS
• Nodes and chassis are FC-connected and can be up to 100 meters apart
T-Class redundant power Controller Nodes and Disk Chassis (shelves) are powered by (1+1) redundant power supplies. The Controller Nodes are backed up by a string of two batteries.
HP 3PAR InForm OS™ Virtualization Concepts
HP 3PAR Virtualization Concept – example: 4-node T-Class with 8 drive chassis
• Nodes are added in pairs for cache redundancy
• An InServ with 4 or more nodes supports "Cache Persistence", which enables maintenance windows and upgrades without performance penalties
• Drive chassis are point-to-point connected to controller nodes in the T-Class, providing "cage-level" availability: the system withstands the loss of an entire drive enclosure without losing access to data
[Diagram: controller nodes connected through the 3PAR mid-plane]
• T-Class drive magazines hold 4 identical drives – same type (SSD, FC or SATA), size and speed
• SSD, FC and SATA drive magazines can be mixed
• A minimum configuration has 2 magazines per enclosure
• Each physical drive is divided into 256MB "chunklets"
• RAID sets are built across enclosures and massively striped to form logical disks (LDs)
• LDs are equally allocated to controller nodes
• Logical disks are bound together to build virtual volumes
• Each virtual volume is automatically wide-striped across chunklets on all disk spindles of the same type, creating a massively parallel system
• Virtual volumes can then be exported as LUNs to servers
[Diagram: LDs from all nodes combined into virtual volumes, one of which is shown as an exported LUN]
Chunklets – the 3PAR Virtualization Basis
• Each physical disk in a 3PAR array is initialized with data chunklets (DC) and spare chunklets (SC) of 256MB each
• Chunklets are automatically grouped by drive rotational speed

Device Type | Total # of Chunklets
50GB SSD | 185
147GB FC 15K | 545
300GB FC 15K | 1,115
450GB FC 15K | 1,675
600GB FC 15K | 2,234
1TB NL 7.2K | 3,724
2TB NL 7.2K | 7,225
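As a quick sanity check on the table (our arithmetic, not from the original slide): a 600GB drive offers about 600 × 10⁹ bytes ≈ 558.8 GiB, and 2,234 chunklets × 256 MiB ≈ 558.5 GiB, so the chunklet count accounts for essentially the whole drive, divided between data (DC) and spare (SC) chunklets.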
Why are chunklets so important?
Ease of use and drive utilization:
• The same drive spindle can service many different LUNs and different RAID types at the same time
• Allows the array to be managed by policy, not by administrative planning
• Enables easy mobility between physical disks, RAID types and service levels via Dynamic or Adaptive Optimization
Performance:
• Enables wide striping across hundreds of disks
• Avoids hot spots
• Allows data restriping after disk installations
High availability:
• HA Cage – protect against a cage (disk tray) failure
• HA Magazine – protect against a magazine failure
[Diagram: 3PAR InServ controllers wide-striping R1/R5/R6 chunklets across physical disks 0–7]
Common Provisioning Groups (CPG)
CPGs are policies that define service and availability level by:
• Drive type (SSD, FC, SATA)
• Number of drives
• RAID level (R10; R50 2D1P to 8D1P; R60 6D2P or 14D2P)
Multiple CPGs can be configured and can optionally overlap the same drives – e.g. a system with 200 drives can have one CPG containing all 200 drives and other CPGs with overlapping subsets of those drives.
CPGs have many functions:
• They are the policies by which free chunklets are assembled into logical disks
• They are a container for existing volumes and are used for reporting
• They are the basis for service levels and the optimization products
HP 3PAR Virtualization – the Logical View
Create CPG(s) – easy and straightforward
In the "Create CPG" wizard, select and define:
• 3PAR system
• Residing domain (if any)
• Disk type: SSD (solid-state disk), FC (Fibre Channel disk), NL (near-line SATA disk)
• Disk speed
• RAID type
By selecting advanced options, more granular settings can be defined:
• Availability level
• Step size
• Preferred chunklets
• Dedicated disks
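The same CPG can also be created from the InForm CLI. A minimal sketch follows; the CPG name is our example and the option spellings are from memory, so verify them with clihelp createcpg on your system:

    # Create a RAID 5 (3+1) CPG on FC disks with cage-level availability
    createcpg -t r5 -ssz 4 -ha cage -p -devtype FC FC_R5_CPG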
Create Virtual Volume(s) – easy and straightforward
In the "Create Virtual Volume" wizard, define:
• Virtual volume name
• Size
• Provisioning type: fat or thin
• CPG to be used
• Allocation warning
• Number of virtual volumes
By selecting advanced options, further settings can be defined:
• Copy space settings
• Virtual volume geometry
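A matching CLI sketch (the volume name is our example, the CPG is the one created above; verify the options with clihelp createvv):

    # Create a 500GiB thin-provisioned virtual volume (TPVV) in the CPG
    createvv -tpvv FC_R5_CPG vmware_ds01 500G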
Export Virtual Volume(s) – easy and straightforward
In the "Export Virtual Volume" wizard, define:
• Host or host set to present the volume to
Optionally:
• Select specific array host ports
• Specify the LUN ID
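And the corresponding CLI sketch (the host name is our example; verify with clihelp createvlun):

    # Export the volume to a host as LUN 0
    createvlun vmware_ds01 0 esx-host01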
HP 3PAR Autonomic Groups simplify provisioning
Traditional storage – individual volumes (e.g. 10 volumes V1–V10 for a cluster of 5 VMware ESX servers):
• Initial provisioning of the cluster requires 50 provisioning actions (1 per host–volume relationship)
• Adding another host requires 10 provisioning actions (1 per volume)
• Adding another volume requires 5 provisioning actions (1 per host)
Autonomic HP 3PAR storage – autonomic host group and autonomic volume group (see the CLI sketch below):
• Initial provisioning of the cluster: add the hosts to the host group, add the volumes to the volume group, export the volume group to the host group
• Adding another host: just add it to the host group
• Adding another volume: just add it to the volume group – volumes are exported automatically
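A CLI sketch of the autonomic-groups workflow above, with the set commands as we recall them from the InForm CLI (all names are our examples; verify with clihelp createhostset):

    # Build a host set and a volume set, then export with a single action
    createhostset esx_cluster esx-host01 esx-host02 esx-host03 esx-host04 esx-host05
    createvvset esx_vols vmware_ds01 vmware_ds02 vmware_ds03
    createvlun set:esx_vols 0 set:esx_cluster   # LUN IDs increment from the start LUN (verify)
    # A host added to the set sees all volumes automatically
    createhostset -add esx_cluster esx-host06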
HP 3PAR InForm Software and Features
HP 3PAR Software and Licensing
Four license models: consumption-based, spindle-based, frame-based, and free* (* support fee associated)

InForm Operating System (base): Full Copy, Access Guard, Autonomic Groups, Thin Copy Reclamation, RAID MP (Multi-Parity), Rapid Provisioning, LDAP, InForm Administration Tools, Scheduler, Host Personas

InForm additional software: Thin Provisioning, Thin Conversion, Thin Persistence, Virtual Domains, Virtual Lock, Virtual Copy, Remote Copy, Dynamic Optimization, Adaptive Optimization, System Reporter, System Tuner

InForm host software: Host Explorer, 3PAR Manager for VMware vCenter, Multi-Path IO for IBM AIX, Multi-Path IO for Windows 2003, Recovery Manager for SQL, Recovery Manager for VMware, Recovery Manager for Oracle, Recovery Manager for Exchange
HP 3PAR Thin Technologies
HP 3PAR Thin Technologies Leadership Overview
Start thin – Thin Provisioning:
• No pool management or reservations; fine capacity-allocation units
• No professional services; variable QoS for snapshots
• Buy up to 75% less storage capacity
Get thin – Thin Conversion:
• Eliminates the time and complexity of getting thin
• Open, heterogeneous migrations from any array to 3PAR
• Service levels preserved during inline conversion
• Reduce tech-refresh costs by up to 60%
Stay thin – Thin Persistence:
• Frees stranded capacity
• Automated reclamation for 3PAR offered by Symantec and Oracle
• Snapshots and remote copies stay thin
• Thin deployments stay thin over time
HP 3PAR Thin Technologies Leadership Overview
• Built-in – HP 3PAR Utility Storage is built from the ground up to support Thin Provisioning (ThP), eliminating the diminished performance and functional limitations that plague bolt-on thin solutions.
• In-band – Sequences of zeroes are detected by the 3PAR ASIC and not written to disk. Most other vendors' ThP implementations write zeroes to disk; some can reclaim space only as a post-process.
• Reservation-less – HP 3PAR ThP draws fine-grained increments from a single free-space reservoir without pre-dedication of any kind. Other vendors' ThP implementations require a separate, pre-dedicated pool for each data service level.
• Integrated – API for direct ThP integration with the Symantec File System, VMware, Oracle ASM and others.
HP 3PAR Thin Provisioning – start thin: dedicate on write only
[Diagram: a traditional array dedicates physical capacity on allocation, so the installed disks must match the full server-presented LUN capacity; an HP 3PAR array dedicates on write only, so the required net array capacity tracks the actually written data, drawn from free chunklets]
HP 3PAR Thin Conversion – get thin: thin your online SAN storage by up to 75%
A practical and effective solution to eliminate costs associated with:
• Storage arrays and capacity
• Software licensing and support
• Power, cooling and floor space
The unique 3PAR Gen3 ASIC with built-in zero detection delivers:
• Simplicity and speed – eliminates the time and complexity of getting thin
• Choice – open and heterogeneous any-to-3PAR migrations
• Preserved service levels – high performance during migrations
[Diagram: zero blocks detected in-line by the ASIC at speed; before/after capacity footprint]
HP 3PAR Thin Conversion – how to get there
1. Defragment the source data:
   a) For a block-level migration via an appliance or host volume manager (mirroring), defragment the filesystem before zeroing the free space
   b) For filesystem copies, the copy defragments the files as it goes, eliminating the need to defragment the source filesystem
2. Zero the existing volumes via host tools (see the sketch below):
   a) On Windows, use sdelete -c *
   b) On UNIX/Linux, use a dd script
* sdelete is a free utility available from Microsoft
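A sketch of the kind of UNIX/Linux "dd script" the slide refers to (our example, not a 3PAR-supplied tool). It fills the free space of a mounted filesystem with zeroes so the Gen3 ASIC can detect and skip those blocks, then removes the fill file:

    # Zero the free space on /mnt/data, then clean up
    dd if=/dev/zero of=/mnt/data/zerofile bs=1M   # ends when the filesystem is full
    sync                                          # flush the zeroes out to the array
    rm -f /mnt/data/zerofile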
HP 3PAR Thin Conversion at a global bank
• No budget for additional storage; the bank had recently had huge layoffs
• Moved 271TB from EMC DMX to 3PAR: online and non-disruptive, no professional services, large capacity savings
• "The results shown within this document demonstrate a highly efficient migration process which removes the unused storage"
• "No special host software components or professional services are required to utilise this functionality"
• Capacity requirements reduced by more than 50%; reduced power & cooling costs; $3 million savings in upfront capacity purchases
[Chart: sample volume migrations on different OSs – capacity in GB before (EMC) and after (3PAR) for Unix (VxVM), ESX (VMotion) and Windows (SmartMove)]
HP 3PAR Thin Persistence – stay thin: keep your array thin over time
• Non-disruptive, application-transparent "re-thinning" of thin-provisioned volumes
• Thin "insurance" against unexpected or thin-hostile application behavior
• Returns space to thin-provisioned volumes and to the free pool for reuse
• The unique 3PAR Gen3 ASIC with built-in zero detection delivers: simplicity (no special host software required – leverage standard filesystem tools/scripts to write zero blocks) and preserved service levels (zeroes detected and unmapped at line speed)
• Integrated automated reclamation with Symantec and Oracle
[Diagram: zero blocks detected by the ASIC; before/after re-thinning]
HP 3PAR Thin Persistence – manual thin reclaim
Remember: deleted files still occupy disk space.
1. Initial state: LUN 1 and LUN 2 are ThP virtual volumes; Data 1 and Data 2 are actually written data, the rest are free chunklets
2. After a while: files deleted by the servers/filesystems still occupy space on the storage
3. Zero out the unused space – Windows: sdelete*; UNIX/Linux: dd script
4. Run thin reclamation: compact the CPG and logical disks; freed-up space is returned to the free chunklets (see the CLI sketch below)
* sdelete is a free utility available from Microsoft
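The reclamation step from the InForm CLI, as a sketch (command name from memory; verify with clihelp compactcpg):

    # After zeroing inside the hosts, compact the CPG so that freed LD space
    # returns to the free chunklet pool
    compactcpg FC_R5_CPG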
HP 3PAR Thin Persistence and VMware ESX
• Without 3PAR Thin Persistence: all zeroes of a 100GB eager-zeroed thick VMDK must be written to disk, impacting the performance of the storage; capacity used = 100GB
• With 3PAR Thin Persistence: hardware zero detection in the 3PAR Gen3 ASIC means no physical disk I/O is required; capacity used = 0GB
VMware and HP 3PAR Thin Provisioning Options
[Diagram: VMs with thin virtual disks (VMDKs) on a VMware VMFS datastore, backed either by a 200GB thick LUN or a 200GB thin LUN on the 3PAR array. With over-provisioned VMs totaling 250GB: the thick LUN physically allocates 200GB, saving 50GB; the thin LUN physically allocates only the ~40GB actually written, saving 210GB]
HP 3PAR Thin Provisioning positioning – built-in, not bolt-on
• No upfront allocation of storage for thin volumes
• No performance impact when using thin volumes, unlike competing storage products
• No restrictions on where 3PAR thin volumes should be used, unlike many other storage arrays
• Allocation size of 16KB – much smaller than most ThP implementations
• Thin-provisioned volumes can be created in under 30 seconds, without any disk layout or configuration planning
• Thin volumes are autonomically wide-striped over all drives within that tier of storage
HP 3PAR Virtual Copy
HP 3PAR Virtual Copy – snapshots at their best
• Smart: promotable snapshots; individually deletable snapshots; scheduled creation/deletion; consistency groups
• Thin: no reservations needed; non-duplicative snapshots; thin-provisioning aware; variable QoS
• Ready: instantly readable or writeable snapshots; snapshots of snapshots; control given to the end user for snapshot management; Virtual Lock for retention of read-only snaps
• Up to 8,192 snapshots per array; hundreds of snapshots per base volume, but just one copy-on-write
• Integration with Oracle, SQL, Exchange and VMware
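A minimal CLI sketch for taking a virtual copy (names are our examples; verify with clihelp createsv):

    # Create a read-only snapshot of the base volume
    createsv -ro ds01_snap1 vmware_ds01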
HP 3PAR Virtual Copy – snapshots at their best
• Base volumes and virtual copies can be mapped to different CPGs, meaning they can have different quality-of-service characteristics. For example, the base volume space can be derived from a RAID 1 CPG on FC disks and the virtual copy space from a RAID 5 CPG on Nearline disks.
• The base volume space and the virtual copy space can grow independently without impacting each other (each space has its own allocation warning and limit).
• Dynamic Optimization can tune the base volume space and the virtual copy space independently.
HP 3PAR Virtual Copy relationships
[Screenshot: a complex snapshot relationship scenario]
Creating a virtual copy using the GUI: right-click a volume and select "Create Virtual Copy".
The InForm GUI gives an easy-to-read graphical view of virtual copies.
HP 3PAR Remote Copy
HP 3PAR Remote Copy – protect and share data
Smart:
• Initial setup in minutes; simple and intuitive commands; no consulting services; VMware SRM integration
Complete:
• Native IP-based or FC transport; no extra copies or infrastructure needed
• Thin-provisioning aware; thin conversion
• Synchronous, Asynchronous Periodic or Synchronous Long Distance (SLD) modes
• Mirror between any InServ size or model; many-to-one and one-to-many
[Diagrams: primary to secondary with sync or async periodic replication; a 1:N configuration; an SLD configuration with a synchronous secondary (S1) and an async-periodic standby link to a tertiary (S2)]
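A minimal synchronous setup from the CLI, as a sketch (it assumes the remote-copy links and the target definition for "array2" already exist; command names are from memory, so verify with clihelp creatercopygroup):

    # Mirror a volume synchronously to the target array "array2"
    creatercopygroup rcg_prod array2:sync
    admitrcopyvv vmware_ds01 rcg_prod array2:vmware_ds01_r
    startrcopygroup rcg_prod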
HP 3PAR Remote Copy – Synchronous
• Real-time mirror with lock-step data consistency; highest I/O currency
• Space-efficient: thin-provisioning aware
• Targeted use: campus-wide business continuity
Write sequence:
1. The host server writes the I/O to the primary array's cache
2. The InServ writes the I/O to the secondary array's cache
3. The remote system acknowledges receipt of the I/O
4. The I/O-complete signal is communicated back to the primary host
HP 3PAR Remote Copy – assured data integrity
• Single volume: all writes to the secondary volume are completed in the same order as they were written on the primary volume
• Multi-volume consistency groups: volumes can be grouped together to maintain write ordering across the set – useful for databases and other applications that make dependent writes to more than one volume
HP 3PAR Remote Copy – Asynchronous Periodic: the replication solution for long-distance implementations
• Efficient even over high-latency replication links: host writes are acknowledged as soon as the data is written into the cache of the primary array
• Bandwidth-friendly: the primary and secondary volumes are resynchronized periodically, either on schedule or manually; if the same area of a volume is written repeatedly between resyncs, only the last update needs to be resynced
• Space-efficient: copy-on-write snapshots instead of full point-in-time copies; thin-provisioning aware
• Guaranteed consistency: enabled by volume groups; before a resync starts, a snapshot of the secondary volume or volume group is created
Remote Copy Asynchronous Periodic – resynchronization sequence
1. Initial copy: snapshot A of the primary base volume is created and copied to the remote site (snapshot SA)
2. Resynchronization starts with a new snapshot B; the B–A delta is copied to the remote site (snapshot SB)
3. Upon completion, the old snapshot is deleted; snapshot B is ready for the next resynchronization
[Diagram: primary-site base volume with snapshots A and B; remote-site base volume with snapshots SA and SB]
HP 3PAR Remote Copy – many-to-one / one-to-many
• Asynchronous Periodic only
• Distance limit and performance characteristics are the same as for asynchronous periodic mode: ~4,800km / 3,000 miles and 150ms
• Requires 2 Gigabit Ethernet adapters per array
• InServ requirements: maximum fan-in is 4 to 1, and one of the 4 can mirror bi-directionally; requires a minimum of 2 controllers per array per site; the target site requires 4 or more controller nodes in the array
[Diagram: primary sites A, B and C replicating to a single target site; site D acts as both primary and target]
HP 3PAR Remote Copy – supported distances and latencies

Remote Copy Type | Max Supported Distance | Max Supported Latency
Synchronous IP | 210 km / 130 miles | 1.3ms
Synchronous FC | 210 km / 130 miles | 1.3ms
Asynchronous Periodic IP | N/A | 150ms round trip
Asynchronous Periodic FC | 210 km / 130 miles | 1.3ms
Asynchronous Periodic FCIP | N/A | 60ms round trip
VMware ESX DR with SRM – automated ESX disaster recovery
What does it do?
• Simplifies DR and increases reliability
• Integrates VMware Infrastructure with HP 3PAR Remote Copy and Virtual Copy
• Makes DR protection a property of the VM, allowing you to pre-program your disaster response
• Enables non-disruptive DR testing
Requirements:
• VMware vSphere™, VMware vCenter™ and VMware vCenter Site Recovery Manager™
• HP 3PAR Replication Adapter for VMware vCenter Site Recovery Manager
• HP 3PAR Remote Copy Software
• HP 3PAR Virtual Copy Software (for DR failover testing)
[Diagram: production and recovery sites, each with vCenter + Site Recovery Manager and VMs on VMware Infrastructure over HP 3PAR arrays; production LUNs are replicated via Remote Copy to DR LUNs, with Virtual Copy test LUNs for failover testing]
Local cluster – HA solution with shared disk resource
• What does it do? Provides application failover between servers within one data center
• Advantages: no manual intervention required in case of a server failure; can fail over automatically or manually
• Disadvantages: no protection against storage or data-center failures
Campus cluster – using server/volume-manager-based mirroring
• What does it do? Provides very high availability of applications/services; provides failover between servers, storage and data centers (up to 100km apart, with a quorum in a third data center)
• Advantages: data is replicated by the OS/volume manager, so no array-based replication is needed; a storage failure does not require a restart of the application/service; can fail over automatically or manually
• Disadvantages: high risk of split-brain if no arbitration node or service is deployed; risk of rolling disaster/data inconsistency
Stretch cluster – using storage-array-based mirroring
• What does it do? Data is replicated by the storage array (Remote Copy); on failover: swap the Remote Copy direction, mount the volume and restart the application (data centers up to several 100km apart)
• Advantages: data consistency can be assured
• Disadvantages: manual failover; array-based replication needed
Cluster Extension Geocluster for Windows – an end-to-end clustering solution to protect against site failure
• What does it do? Provides manual or automated site failover for server and storage resources; allows transparent live migration of Hyper-V VMs between data centers (up to 500km apart, with a file-share witness in a third data center)
• Supported environments: Microsoft Windows Server
• Requirements: 3PAR disk arrays, synchronous Remote Copy, Microsoft Cluster, Cluster Extension Geocluster, max 20ms network round-trip delay
HP 3PAR Dynamic and Adaptive Optimization
HP 3PAR Dynamic and Adaptive Optimization – manual or automatic tiering
• 3PAR Dynamic Optimization: autonomic data movement of entire volumes between tiers
• 3PAR Adaptive Optimization: autonomic tiering and data movement at region level
[Diagram: Tier 0 SSD, Tier 1 FC, Tier 2 SATA]
Storage tiers – HP 3PAR Dynamic Optimization
[Chart: performance vs. cost per usable TB for SSD, FC and Nearline tiers, each at RAID 1, RAID 5 (2+1, 3+1, 7+1) and RAID 6 (6+2, 14+2)]
In a single command, non-disruptively optimize and adapt cost, performance, efficiency and resiliency.
HP 3PAR Dynamic Optimization – use cases
Deliver the required service levels for the lowest possible cost throughout the data lifecycle:
• 10TB net on RAID 10, 300GB FC drives
• ~50% savings: 10TB net on RAID 50 (3+1), 600GB FC drives
• ~80% savings: 10TB net on RAID 50 (7+1), 2TB SATA-class drives
Accommodate rapid or unexpected application growth on demand by freeing raw capacity:
• 10TB net consumes 20TB raw as RAID 10; re-striped to RAID 50, the same 20TB raw holds the 10TB net plus 7.5TB net of free space – 7.5TB of net capacity freed on demand
How to use Dynamic Optimization
[GUI walkthrough – three screenshots; a CLI sketch follows]
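The CLI equivalent is a single command, sketched here (the CPG and volume names are our examples; verify with clihelp tunevv):

    # Non-disruptively move a volume's user space to another CPG,
    # e.g. from an FC-based CPG to a Nearline RAID 50 CPG
    tunevv usr_cpg NL_R5_CPG vmware_ds01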
Performance example with Dynamic Optimization
[Chart: a volume tuned from RAID 5 (7+1) on SATA to RAID 5 (3+1) on 10K FC]
HP 3PAR Dynamic Optimization at a customer
[Charts: used vs. free chunklets per physical disk (disks 1–96). Before Dynamic Optimization: an uneven data layout after a series of capacity upgrades. After Dynamic Optimization: chunklets evenly re-striped across all disks, non-disruptively]
HP 3PAR Adaptive Optimization – improve storage utilization
Traditional deployment:
• A single pool of one disk type, speed, capacity and RAID level
• The number and type of disks are dictated by the max IOPS plus capacity requirements, wasting space in a single pool of high-speed media
Deployment with HP 3PAR AO:
• An AO virtual volume draws space from 2 or 3 different tiers/CPGs
• Each tier/CPG can be built on different disk types, RAID levels and numbers of disks
[Charts: I/O distribution vs. required capacity – a single high-speed pool with wasted space, versus high-, medium- and low-speed pools matched to the I/O distribution]
A new optimization strategy for SSDs
• Declining flash prices have made SSDs a viable storage tier, but data placement is difficult on a per-LUN basis: an SSD-only, non-tiered volume is the non-optimized approach
• A new form of autonomic data placement and cost/performance optimization is required: HP 3PAR Adaptive Optimization, which spreads a multi-tiered volume across Tier 0 SSD, Tier 1 FC and Tier 2 NL – the optimized approach for leveraging SSDs
I/O density differences across applications
[Chart: cumulative access rate % vs. cumulative space % for various workloads (Exchange 2007 DB and log, Oracle, Oracle staging, Windows, Unix and VMware CPGs) – a small fraction of the space serves most of the I/O]
HP 3PAR Adaptive Optimization – improve storage utilization
• One tier without Adaptive Optimization: the System Reporter chart (used space in GiB vs. accesses/GiB/min) shows that most of the capacity has very low I/O activity; adding Nearline disks would lower cost without compromising overall performance
• Two tiers with Adaptive Optimization running: a Nearline tier has been added and Adaptive Optimization enabled; AO has moved the least-used chunklets to the Nearline tier
HP 3PAR Virtual Domains
What are HP 3PAR Virtual Domains?
• Multi-tenancy with traditional storage: separate, physically secured storage per admin, application, department or customer (A, B, C)
• Multi-tenancy with 3PAR Virtual Domains: shared, logically secured storage – domains A, B and C on one array
What are the benefits of Virtual Domains?
• Traditional storage: centralized storage administration sits between the end users and their provisioned storage on consolidated physical storage
• 3PAR Virtual Domains: self-service storage administration – end users (departments, customers) provision within their own domains on consolidated physical storage
3PAR domain types and privileges
• Super users manage domains, users and provisioning policies across the "All" domain
• Edit users (set to the "All" domain) manage provisioning policies
• Named domains (e.g. an "Engineering" domain set containing domain "A" for Dev and domain "B" for Test) contain their own CPGs, hosts, users with their respective user levels, VLUNs, VVs & TPVVs, virtual/full/remote copies, chunklets and LDs
• The "No" domain holds unassigned elements
HP 3PAR Virtual Domains overview
• Requires a license
• Allows fine-grained access control on a 3PAR array
• Up to 1,024 domains or spaces per array
• Each user may have privileges over one, up to 32 selected, or all domains
• Each domain can be dedicated to a specific application
• The system grants different privileges to different users for domain objects, with no limit on the number of users per domain
Also see the analyst report and product brief at http://www.3par.com/litmedia.html
Authentication and authorization – LDAP login
1. The user initiates a login to the 3PAR InServ via the 3PAR CLI/GUI or SSH
2. The InServ searches local user entries first; upon a mismatch, the configured LDAP server is checked
3. The LDAP server authenticates the user
4. The InServ requests the user's group information
5. The LDAP server provides the LDAP group information for the user
6. The InServ authorizes the user for a privilege level based on the user's group-to-role mapping
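A configuration sketch for the array side (parameter names are from memory of the setauthparam command, and the server address and group DN are hypothetical; verify with clihelp setauthparam):

    # Point the array at the LDAP server and map an LDAP group to a role
    setauthparam ldap-server 10.0.0.50
    setauthparam binding simple
    setauthparam super-map "CN=StorageAdmins,OU=Groups,DC=example,DC=com"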
HP 3PAR Virtual Lock
HP 3PAR Virtual Lock
• HP 3PAR Virtual Lock Software prevents alteration and deletion of selected virtual volumes for a specified period of time
• Supported with fat and thin virtual volumes, Full Copy, Virtual Copy and Remote Copy
• Locked virtual volumes cannot be overwritten
• Locked virtual volumes cannot be deleted, even by an HP 3PAR storage system administrator with the highest level of privileges
• Because it is tamper-proof, it is also a way to avoid administrative mistakes
Also see the product brief at http://www.3par.com/litmedia.html
HP 3PAR Virtual Lock
• Easily set by defining a retention and/or expiration time in a volume policy (see the CLI sketch below)
• Remember: locked virtual volumes cannot be deleted, even by a user with the highest level of privileges
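A CLI sketch of such a volume policy (flag names are from memory and the time values are our examples; verify with clihelp setvv):

    # Lock a snapshot against deletion for 30 days, and expire it after 90
    setvv -retain 30d ds01_snap1
    setvv -exp 90d ds01_snap1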
HP 3PAR System Reporter
HP 3PAR System Reporter
• Allows performance monitoring, chargeback reporting and storage resource planning
• Enables metering of all physical and logical objects, including Virtual Domains
• Provides custom thresholds and e-mail notifications
• Run or schedule canned or customized reports at your convenience; export data to a CSV file
• Controls Adaptive Optimization
• Use the DB of your choice: SQLite, MySQL or Oracle
• DB access: clients (Windows IE, Mozilla, Excel) or directly via the published DB schema
[Screenshot: example histogram of VLUN performance, with CSV export]
• Historical performance information is kept at 3 levels: daily, hourly, and high-resolution (default 5 min, can be set to 1 min); all logical and physical objects are instrumented
[Screenshots: front-end statistics; back-end statistics (IOPS and bandwidth should be the same on all back-end ports); CPU statistics (thanks to the 3PAR ASIC, the CPUs are barely used, even during I/O peaks); capacity planning – physical disk vs. virtual volume usage]
HP 3PAR VMware Integration
3PAR Management Plug-In for vCenter – enhanced visibility into storage resources
• Improved visibility: VM-to-datastore-to-LUN mapping
• Storage properties: view LUN properties, including thin versus fat, and see capacity utilized
• Integration with 3PAR Recovery Manager: seamless, rapid online recovery
Also see the whitepapers, analyst reports and brochures at http://www.3par.com/litmedia.html
3PAR Recovery Manager for VMware – array-based snapshots for rapid online recovery
• Solution composed of 3PAR Recovery Manager for VMware, 3PAR Virtual Copy and VMware vCenter
• Use cases: expedite provisioning of new virtual machines from VM copies; snapshot copies for testing and development
• Benefits: hundreds of VM snapshots for granular, rapid online recovery – reservation-less and non-duplicative, without agents; vCenter integration for superior ease of use
vStorage API for Array Integration (VAAI): hardware-assisted Full Copy
• Optimized data movement within the SAN: Storage VMotion, deploy from template, clone
• Significantly lower CPU and network overhead, and quicker migrations
HP 3PAR VMware VAAI support – example
[Charts: front-end I/O and back-end disk I/O during a VMware Storage VMotion with VAAI enabled (DataMover.HardwareAcceleratedMove=1) and disabled (DataMover.HardwareAcceleratedMove=0)]
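The advanced setting named in the chart can be toggled per ESX/ESXi 4.1 host; a sketch follows (the setting path is as we recall it – verify on your host):

    # Disable / enable the VAAI full-copy primitive, then read it back
    esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove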
Virtual infrastructure I/Os are random
• In a virtual infrastructure, multiple VMs and applications share the same I/O queue. Even applications doing sequential I/O end up generating random I/O at the physical server because the streams intermesh.
• Random I/Os typically miss cache and are served by the physical disks, so the performance of a VM store is directly linked to the number of physical disks that compose that LUN.
vStorage API for Array Integration (VAAI): hardware-assisted locking
• Increases I/O performance and scalability by offloading the block-locking mechanism
• Used when: moving a VM with VMotion; creating a new VM or deploying a VM from a template; powering a VM on or off; creating a template; creating or deleting a file, including snapshots
• Without VAAI, a SCSI reservation locks the entire LUN; with VAAI, locking happens at the block level
vStorage API for Array Integration (VAAI): hardware-assisted Block Zero
• Offloads large, block-level write operations of zeroes to the storage hardware, reducing the ESX server workload
[Diagram: without VAAI the host writes every zero block itself; with VAAI a single command zeroes the range on the array]
VMware vStorage VAAI – are there any caveats to be aware of?
The VMFS data mover does not leverage hardware offloads, and instead uses software data movement, if:
• The source and destination VMFS volumes have different block sizes
• The source file type is RDM and the destination file type is non-RDM (regular file)
• The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
• The source or destination VMDK is any sort of sparse or hosted format
• The source virtual machine has a snapshot
• The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
• The VMFS has multiple LUNs/extents and they are on different arrays
• Hardware cloning between arrays (even within the same VMFS volume) does not work
vStorage APIs for Array Integration FAQ: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976
Also see the analyst report and brochure at http://www.3par.com/litmedia.html
HP 3PAR Recovery Managers
3PAR Recovery Manager for VMware – see the slide in the VMware integration section above.
Recovery Manager for Microsoft – Exchange & SQL aware
• Automatic discovery of Exchange and SQL servers and their associated databases
• VSS integration for application-consistent snapshots
• Supports Microsoft® Exchange Server 2003, 2007 and 2010
• Supports Microsoft® SQL Server™ 2005 and 2008
• Database verification using Microsoft tools
Built upon 3PAR thin-copy technology:
• Fast point-in-time snapshot backups of Exchange and SQL databases
• Hundreds of copy-on-write snapshots with just-in-time, granular snapshot space allocation
• Fast recovery from snapshot, regardless of size
• 3PAR Remote Copy integration; export backed-up databases to other hosts
Also see the brochure at http://www.3par.com/litmedia.html
3PAR Recovery Manager for Oracle
• Allows point-in-time copies of Oracle databases: non-disruptive, eliminating production downtime; uses 3PAR Virtual Copy technology
• Allows rapid recovery of Oracle databases: increases the efficiency of recoveries; allows cloning and exporting of new databases
• Integrated high availability with disaster-recovery sites: integrated 3PAR Remote Copy replication for array-to-array DR
Also see the brochure at http://www.3par.com/litmedia.html
HP 3PAR – the right choice! Thank you.
Serving Information®. Simply.
Questions?
Further information: 3PAR whitepapers, reports, videos, datasheets etc. at http://www.3par.com/litmedia.html