Virtualization For Dummies


Virtualization For Dummies, 2nd Stratus Special Edition

by Linda Hammer and Ken Donoghue

These materials are the copyright of John Wiley & Sons, Inc. and any dissemination, distribution, or unauthorized use is strictly prohibited.

Virtualization For Dummies®, 2nd Stratus Special Edition

Published by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, www.wiley.com

Copyright © 2012 by John Wiley & Sons, Inc. Published by John Wiley & Sons, Inc., Hoboken, NJ No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions. Trademarks: Wiley, the Wiley logo, For Dummies, the Dummies Man logo, A Reference for the Rest of Us!, The Dummies Way, Dummies.com, Making Everything Easier, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries, and may not be used without written permission. Stratus and ftServer are registered trademarks of Stratus Technologies. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc., is not associated with any product or vendor mentioned in this book. LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND THE AUTHOR MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS WORK AND SPECIFICALLY DISCLAIM ALL WARRANTIES, INCLUDING WITHOUT LIMITATION WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES OR PROMOTIONAL MATERIALS. THE ADVICE AND STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY SITUATION. THIS WORK IS SOLD WITH THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING LEGAL, ACCOUNTING, OR OTHER PROFESSIONAL SERVICES. IF PROFESSIONAL ASSISTANCE IS REQUIRED, THE SERVICES OF A COMPETENT PROFESSIONAL PERSON SHOULD BE SOUGHT. 
NEITHER THE PUBLISHER NOR THE AUTHOR SHALL BE LIABLE FOR DAMAGES ARISING HEREFROM. THE FACT THAT AN ORGANIZATION OR WEBSITE IS REFERRED TO IN THIS WORK AS A CITATION AND/OR A POTENTIAL SOURCE OF FURTHER INFORMATION DOES NOT MEAN THAT THE AUTHOR OR THE PUBLISHER ENDORSES THE INFORMATION THE ORGANIZATION OR WEBSITE MAY PROVIDE OR RECOMMENDATIONS IT MAY MAKE. FURTHER, READERS SHOULD BE AWARE THAT INTERNET WEBSITES LISTED IN THIS WORK MAY HAVE CHANGED OR DISAPPEARED BETWEEN WHEN THIS WORK WAS WRITTEN AND WHEN IT IS READ. For general information on our other products and services, please contact our Business Development Department in the U.S. at 317-572-3205. For details on how to create a custom For Dummies book for your business or organization, contact [email protected] . For information about licensing the For Dummies brand for products or services, contact [email protected] . ISBN 978-1-118-22819-7 (pbk); ISBN 978-1-118-29622-6 (ebk) Manufactured in the United States of America 10 9 8 7 6 5 4 3 2 1

Publisher’s Acknowledgments

Some of the people who helped bring this book to market include the following:

Acquisitions, Editorial, and Media Development
Project Editor: Jennifer Bingham
Editorial Manager: Rev Mengle
Business Development Representative: Sue Blessing

Composition Services
Senior Project Coordinator: Kristie Rees
Layout and Graphics: Carrie A. Cesavice, Samantha K. Cherolis
Proofreader: Dwight Ramsey

Custom Publishing Project Specialist: Michael Sullivan


Introduction



Virtualization technology is being widely applied today with excellent operational and financial results. It’s become a matter of course for most businesses to work with some aspect of virtualization. This book provides you with a brief introduction to the subject, discusses cloud technology, and helps you understand the various options regarding availability. Knowing all this can help you create an action plan as you move forward with the next phase of your virtualization infrastructure.

About This Book

Virtualization deserves a closer look. This edition of the book focuses in particular on how hardware can help you succeed in your quest for a virtualized system. Think about it: In a virtualized world, where a single server may support many virtual machines, hardware becomes more important, not less. It now has an effect on multiple operating systems and many more applications. It’s essentially putting all your eggs in one basket. This book was written for and with Stratus and discusses technology advantages from Stratus Technologies that can keep your operations running at “five nines” or better.

Icons Used in This Book

Throughout this book, you find a series of icons in the margins that help to flag special information. This information is important. Keep it in mind, and file it away in your brain.



This icon flags a shortcut or some information that’s really useful.

This icon flags technical information that is helpful to know, but not really essential.


Chapter 1

Virtualization: More Than a Technology Fad!

In This Chapter
▶ Virtualization — what’s old is new again
▶ Virtualization is mainstream
▶ Do you trust your hardware?



When your boss comes to you and starts talking about virtualization, you know that it’s reached the mainstream. But did you know that this technology has been around since man first walked on the moon? That’s right. The hot topic of today has been around since the time of the Apollo program.

Most books begin by defining virtualization and telling you it’s a good thing. Then they spend the entire book explaining why. I’ll cut to the chase, and start by explaining why virtualization is good: It saves money. But you still want to know what it is? A simple way to define virtualization is: “The use of an additional software layer that enables multiple operating systems and software applications to interact with a piece of hardware as though each had complete control of the hardware.”

Computers used virtualization concepts for years before the recent trendiness, but it’s become especially important in the current economic climate. Being able to share hardware allows for greater utilization without the significant configuration issues that can occur from installing many applications on a system at once. The virtualization layer keeps everything separate and isolated. Segmentation also helps businesses maintain compliance with industry and regulatory standards.

What Got Us Here?

Computing has had its share of fads. Some of these fads were actually important technology breakthroughs that just didn’t seem like it at the time. Client-server computing, for example, was new to the users of mainframe computers. Client-server topologies have grown so much that it seems every new client application that an organization deploys has its own corresponding server. These applications often can’t all reside on a single operating system, so you begin to see server proliferation, commonly known as “server sprawl.”

Today, server sprawl has spun out of control, and so has the amount of electronic data that these systems need to process. With so many systems required for all the applications and data, the amount of physical space required for the servers has also grown considerably. Data centers are packed — between the massive storage arrays and the servers, there is no space! These systems also use electricity, lots of it! And the majority of these servers are idle up to 85 percent of the time. This equates to a lot of wasted electricity being converted into heat. A server running at full tilt uses more electricity than an idling server, but six servers each running at 10 percent use much more electricity than a single server running at 60 percent.

So, virtualization on industry-standard servers came at an opportune time; just when the demand was there, the server capabilities were there. Plus the external environmental pressures, such as power consumption and physical space constraints, make virtualization an attractive option. Today’s more powerful servers enable you to consolidate multiple applications on multiple operating systems on a single hardware platform, something that would not have been possible even five years ago.
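The consolidation arithmetic above can be sketched with a simple power model. The wattage figures and the linear power curve below are illustrative assumptions, not measurements from any real server:

```python
# Toy model: a server's power draw grows roughly linearly from an idle
# baseline to a peak wattage. Both figures are illustrative assumptions.

IDLE_WATTS = 120.0   # assumed draw of an idle server
PEAK_WATTS = 300.0   # assumed draw at 100% utilization

def server_watts(utilization: float) -> float:
    """Estimate power draw at a given utilization (0.0 to 1.0)."""
    return IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization

# Six lightly loaded servers, each mostly paying the idle baseline...
six_servers = 6 * server_watts(0.10)
# ...versus one consolidated server handling the same total work.
one_server = server_watts(0.60)

print(f"Six servers at 10%: {six_servers:.0f} W")
print(f"One server at 60%:  {one_server:.0f} W")
```

Because the idle baseline dominates, the six lightly loaded servers draw several times the power of the single consolidated one, which is the point the text makes.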



Virtualization: What’s old is new again

Although it might seem that virtualization is the latest and greatest hot thing in technology, it is actually a venerable and well-established computing concept. As long ago as the 1960s, IBM was providing the ability to “timeshare” its machines. This concept, radical in its day, provided the ability to subdivide a machine to allow multiple programs to execute within an operating system and share the operating system’s resources (if this sounds a lot like Windows or Linux, you’re right — not much new exists under the computing sun).

It wasn’t long before IBM created the ability to run multiple virtual machines on its mainframes. Each virtual machine (also referred to as a VM) looked exactly like the underlying machine, and each VM could have its own operating system installed. Perhaps the most famous of these early virtualization systems was VM/370, which ran on the System/370, IBM’s direct successor to its revolutionary System/360 mainframe. The virtualization layer in these virtualization-enabled machines was referred to as the Virtual Machine Monitor (or VMM), which is a phrase you’ll hear applied to today’s virtualization products from companies like VMware, Microsoft, and Citrix.

So, even though it seems like virtualization is a brand-new phenomenon, it has an honorable history. It’s just that virtualization is no longer limited to mainframes; it’s been turned loose into today’s data center, ready to run on everyday servers. This is made possible by the extremely powerful processors that come in servers nowadays. Sizing a server usually means that you must size for the peaks, but the peaks are rare. Additionally, you may have many servers, but they don’t peak at the same time.

Hey, This Stuff Works

Early on in the rise of x86 virtualization there was significant doubt. Although showing the savings of making one server perform the function of several was very compelling, there was always the concern that if the virtualization layer (called a hypervisor) didn’t work, then many systems would fail. Despite this, organizations big and small embraced virtualization for their nonproduction applications. Development, testing, and other systems were virtualized, and many organizations saw benefits in stability and performance.





These successes led to virtualization of other applications, including smaller or less important production workloads. By abstracting the operating system installation from the hardware, organizations also saw benefits in their hardware procurement flexibility and lifecycle.

Next Stage: Front-Line Applications

The benefits of virtualization were significant, so it wasn’t long before IT organizations wanted to bring them to their most important applications. This task wasn’t difficult, but the risks were substantial. It meant putting a lot more faith in the hardware.

Do you trust your hardware? How much? Enough to consolidate 20 servers onto one? If that one fails and takes out 20 virtual machines running your business-critical applications and databases, what will that do to your organization? And, how long will it take to fix the problem?

This book focuses on virtualization as well as how important the hardware becomes when using virtualization. The commodity hardware that has empowered virtualization has also become virtualization’s biggest liability. Fortunately, Stratus Technologies has a solution that allows the ideal consolidation ratios for your organization without the risk of server failure.


Chapter 2

The Forecast Is Cloudy with a Chance of Confusion

In This Chapter
▶ Is your head in the clouds?
▶ IT loves to recycle ideas
▶ Public and private clouds

Many of the technologies related to virtualization seem confusing, especially when companies recycle the same ideas with new names. Sometimes, they even modify established nomenclature with new terms to better suit the times. Even so, it’s important to try to break through the fog and see how cloud computing matters to you.

You Say Cloud, I Say Timesharing?

It seems like a law was recently passed prohibiting any discussion, presentation, or book on IT that doesn’t mention cloud computing. Cloud computing is a concept that treats computing as a utility, as opposed to computing done locally in a data center or on a server. This means that in cloud computing, you don’t care where the processing is done or on what hardware. You trust that the underlying hardware can do it, and you would usually have some sort of contract that defines the level of service you get, similar to a utility.




Virtualization has been around for a while, and originally it was called timesharing. Cloud computing takes timesharing to a whole new level, because the pools of resources that are timeshared are much bigger than they could have been in the early days of computing. Organizations can make use of cloud computing to do complex calculations that they would never be able to do in-house.

Old Concept, New Technologies

Just like virtualization, cloud computing has its roots in the 1960s. Some computer scientists in the 1960s predicted that computation would be similar to an electric or telephone grid, but it took the Internet’s ubiquity to make this practical. Using high-speed connections over long distances, it can seem like cloud computing has almost limitless resources.

Many older IT diagrams included an image of a cloud — the cloud was the network. This is where cloud computing gets its name, and that image fits with the public utility paradigm. But, did you know you can bring the cloud down into your data center? This private cloud concept is new, and represents the idea that an organization can offer cloud-based services to its internal customers.

The Role Virtualization Plays in the Cloud

Virtualization and cloud computing are separate things, but almost all cloud computing utilizes virtualization to make efficient use of resources. Whether the solution is a public cloud, private cloud, or a hybrid, virtualization is the abstraction engine that moves things around in the cloud without impacting the applications or servers. In a hybrid model, some elements of the solution are in a public cloud, and other parts are in a private cloud or not in a cloud at all.


Chapter 3

Virtually Everything

In This Chapter
▶ Virtualization — more than just servers
▶ VDI demands more reliable hardware, not less
▶ Virtualization outside of the server



Virtualization’s time has come. The exponential power growth of computers, the substitution of automated processes for manual work, the increasing cost of powering a multitude of computers, and the need to protect against business disruption all cry out for a less expensive way to run data centers. In fact, it’s probably not an exaggeration to say that a more efficient method of running data centers is critical, because the traditional methods of delivering computing are becoming cost-prohibitive. Virtualization is a solution for many of these problems.

Not Just for Servers Anymore

Desktops have seen great gains in hardware capabilities too; so, like servers, much of a desktop is underutilized. Why not create a bunch of desktops on the virtualization platform? Many organizations did just that, and virtual desktop infrastructure (VDI) was born. VDI usually involves taking desktop computing, virtualizing it, and running it on the servers. A connection broker is required to act as the traffic cop, directing requests from users throughout the enterprise to the right place. Businesses are turning to VDI to improve security, meet compliance standards, and regain enterprise-wide IT control while offering flexibility to users.
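The broker’s “traffic cop” role can be sketched as a lookup service that hands each connecting user an available desktop VM. This is a deliberately simplified toy; the class and names are invented for illustration, and real brokers also handle authentication, load balancing, and session pooling:

```python
# Toy sketch of a VDI connection broker: it tracks which desktop VMs are
# free and directs each user to an existing session or a free desktop.

class ConnectionBroker:
    def __init__(self, vm_names):
        self.free_vms = list(vm_names)   # desktops ready for a session
        self.sessions = {}               # user -> assigned VM

    def connect(self, user):
        """Direct a user to an existing session or a free desktop VM."""
        if user in self.sessions:        # reconnect to the same session
            return self.sessions[user]
        if not self.free_vms:
            raise RuntimeError("no desktops available")
        vm = self.free_vms.pop(0)
        self.sessions[user] = vm
        return vm

    def disconnect(self, user):
        """Return the user's desktop VM to the free pool."""
        vm = self.sessions.pop(user)
        self.free_vms.append(vm)

broker = ConnectionBroker(["desktop-01", "desktop-02"])
print(broker.connect("alice"))   # alice gets the first free desktop
print(broker.connect("bob"))     # bob gets the next one
```

Note that the broker itself is a single point of contact for every user, which is exactly why the text calls it business-critical.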



Core VDI components such as the broker and management solution are business-critical. If the host servers supporting these modules fail, many users are affected and work comes to a standstill. A resilient hardware platform built to prevent downtime should be part of your VDI deployment plan.

The virtualization craze hasn’t spared storage either. Many storage arrays and storage networks include virtualization. The common thread is the abstraction of the hardware and the way virtualization places the focus on the object — the data, server, or desktop, rather than the infrastructure that it runs on.

Servers: Still the Hub of the Action

Whether you’re talking about the desktop, storage, or anything else, all those things share the server as a hub. In fact, with a virtual desktop, the applications and the majority of your data reside on a server. Storage is critical, but not very useful unless it can be accessed and processed through a server. Applications running in the cloud may seem like they’re magic, but the reality is that somewhere, a server is still running the applications to make the magic appear.

This is why the hardware that you choose for the server is critical. Most virtualization solutions have ways to handle server failures, such as failing over and recovering or restarting on another server. But these mechanisms do nothing to prevent downtime from happening in the first place.

Understanding Fault Tolerance

Telephone connections. Emergency services. Financial transactions. Pharmaceutical manufacturing. Electronic health records. Utilities services. Applications where downtime can cause stiff fines to be imposed by regulatory authorities. All these are critical systems that can’t afford unplanned downtime — in fact, just a few seconds of unplanned downtime can cost thousands — even hundreds of thousands of dollars — or put lives on the line.




In these environments, the harsh consequences of hardware failure drove the development of an extra-robust class of servers known as fault-tolerant systems. The design principle behind fault-tolerant servers is that hardware will fail, so it’s critical to be able to continue processing data when something breaks. Fault-tolerant hardware architecture is based on redundant components — literally two of everything — so that if one component fails, its companion component is already on the job.

There’s more to it than just hardware duplication, however. Fault-tolerant machines integrate the hardware with clever software that perfectly synchronizes all those duplicate components, so that they operate as a single machine, and both sets of CPUs are processing the same instructions at the same time. This is called lockstep processing. These machines deliver in excess of 99.999 percent uptime, which translates to less than five minutes of unplanned downtime a year!

If you’ve got applications your business can’t be without or that lives depend upon, fault-tolerant systems are the most appropriate solution for preventing downtime. They also bring important advantages to virtualized environments.
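The “two of everything” principle has simple probability behind it: an independent redundant pair is down only when both halves are down at once. The sketch below uses a made-up round number for component availability, and the idealized formula ignores common-mode failures and repair time:

```python
# If a single component is available a fraction `a` of the time, an
# independently failing redundant pair is unavailable only when both
# halves fail at once:
#     pair availability = 1 - (1 - a)**2
# (Idealized: ignores common-mode failures and repair windows.)

def pair_availability(a: float) -> float:
    return 1 - (1 - a) ** 2

single = 0.99                      # an assumed 99%-available component
pair = pair_availability(single)   # the duplicated pair

print(f"single component: {single:.2%}")
print(f"redundant pair:   {pair:.4%}")
```

Under these assumptions, duplicating a two-nines component yields roughly four nines for the pair, which is the intuition behind squeezing extra nines out of redundancy.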

The Virtualized Environment

Virtualization changes the traditional hardware expectations of IT shops: It used to be that most companies owned a few mission-critical applications that necessitated highly reliable hardware, along with a myriad of less-important applications that ran on commodity servers. In a virtualized infrastructure, collections of less-important applications become business-critical because they’re aggregated on fewer servers supporting many virtual machines. It is important to have a hardware strategy appropriate to the environment. And don’t forget just how vital it is to keep your virtualization management solution up and running all the time.

Virtualization environments are heterogeneous, with a range of applications of varying criticality. Infrastructure must be properly aligned with the application mix to deliver the corresponding service levels that each application category requires. This environment is illustrated in Figure 3-1.

Figure 3-1: The virtualization environment. (The figure shows virtual machines, each an application on its own operating system, running on a virtualization layer beneath a virtualization management and services tier. The underlying hardware spans three minimum service levels: physical servers for noncritical workloads at 99%, high availability at 99.9% or greater, and fault-tolerant servers with SAN storage at 99.999% or greater.)

As you can see, evaluating your applications’ uptime requirements will allow you to understand what type of hardware to apply within your virtualized infrastructure. Noncritical virtual machines don’t require the redundancy of fault-tolerant hardware. However, important virtual machines need the uptime assurance that fault tolerance guarantees — as does the virtualization management software itself.

Remote offices and branch offices (ROBO) should also consider the advantages of consolidating both critical and noncritical applications on a single virtualization platform. This reduces total cost of ownership (TCO), simplifies deployment and maintenance procedures, and provides a platform to run legacy applications on the latest server technologies. Doing this on a single fault-tolerant machine can guarantee server uptime and provide “peace of mind” in locations where IT personnel are nonexistent or in short supply.


Chapter 4

Uptime All the Time

In This Chapter
▶ So you think 99.9 percent is good?
▶ That fifth nine — the Holy Grail
▶ Aren’t high availability and fault tolerance the same thing?



The paradox of virtualization is that it removes hardware dependencies while making hardware more important. A collection of virtual machines relies even more heavily on reliable hardware, because fewer physical servers now support a large collection of virtual machines. Business-critical applications are vital to the operation of a company. When an individual server must support many workloads, even noncritical applications become critical when viewed as part of the collective business process. Although a number of solutions can improve the reliability of applications, fault tolerance provides a hardware-based method that delivers continuous uptime assurance.

The Nines

If 100 percent is perfection, then 99.999+ percent availability is the Holy Grail. What do most solutions attain? Try plain old 99 percent! That’s right, an average x86 server will often deliver 99 percent availability for the service running on it. This may seem pretty good, until you consider what this means to your organization. Two nines of availability means a system has unplanned outages totaling 87.6 hours a year — and you don’t get to choose those hours! Now consider that an hour of downtime costs the average company between $100,000 and $150,000. You do the math.
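Every downtime figure in this chapter comes from the same conversion: the fraction of an 8,760-hour year that a given availability percentage allows a system to be down. A quick sketch, using the chapter’s own $100,000-per-hour cost assumption:

```python
# Convert an availability percentage into allowed unplanned downtime
# per year, plus its cost at an assumed $100,000 per hour of downtime.

HOURS_PER_YEAR = 365 * 24          # 8,760 hours
COST_PER_HOUR = 100_000            # cost assumption used in this chapter

def downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.95, 99.99, 99.999):
    hrs = downtime_hours(pct)
    print(f"{pct:>7}% -> {hrs:8.2f} hr/yr  (~${hrs * COST_PER_HOUR:,.0f})")
```

Running this reproduces the chapter’s figures: 87.6 hours for two nines, 8.76 hours for three, about 52 minutes for four, and a bit over five minutes for five nines.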



It’s relatively easy to go to three nines: 99.9 percent. All it takes is a good server with redundant power supplies, fans, and a RAID array. Coupled with good practices, you can get to three nines, which equals about 8.76 hours of unplanned downtime a year. That might seem like a big jump, but a day’s downtime during peak processing periods can still wreak havoc on your bottom line.

Climbing up the scale to 99.95 percent uptime often involves cluster technology. Typically called a high availability (HA) solution, clusters restart an application on a healthy system after failure. Some cluster solutions claim 99.99 percent, but offering only 52 minutes of downtime a year requires a really well-built cluster with an application that can fail over very quickly. Many commonly clustered applications such as databases can’t fail over that quickly because they must check file integrity and replay transaction logs after the failure.

So the Holy Grail for any system is five nines: 99.999 percent availability, which converts to a bit more than five minutes of downtime a year! In order to hit this number, you need to avoid system failure in the first place, rather than try to recover from it. Take a look at Figure 4-1 to get a visual.

Figure 4-1: The bottom line on nines. (Annual cost is based on $100,000 per hour of unplanned downtime; the figure marks the two Stratus rows as the “Stratus Zone.”)

Solution                       Uptime Level   Downtime/Yr.   Annual Cost
Conventional, unmanaged        99%            87.6 hr.       $8,760,000
Typical cloud service level    99.9%          8.76 hr.       $876,000
Conventional clusters, VMs     99.95%         4.38 hr.       $438,000
Stratus Avance® Software       99.99%         57 min.        $83,200
Stratus ftServer® Systems      99.999%        5 1/2 min.     $8,000

So You Think You’re Fault Tolerant

The terms high availability and fault tolerance get used interchangeably all the time, leading to confusion. Traditional HA solutions usually involve data replication or clustering that aims to recover from a failure. However, in these scenarios, a system failure does occur. To recover from failure, applications are restarted on a healthy system. In most cases this requires applications that are cluster aware and may involve scripting by your IT staff.

In a fault-tolerant server, every component is duplicated and runs in hardware lockstep. That means the components are processing the same instructions on the same CPU clock cycle. If a part fails, its partner simply keeps right on processing. That’s why a fault-tolerant ftServer system doesn’t fail over or require a restart. Fault tolerance also ensures that all your data is available, even data that was being written out to disk or was in memory (known as in-flight data) when the hardware component failed.

All fault tolerance isn’t equal. Some virtualization solutions emulate fault tolerance with software, which has several drawbacks. For one, it essentially creates another shadow virtual machine (VM) that processes the instructions in a software-based lockstep. Software emulation incurs lots of overhead. This can have a big impact on performance, because the CPU has to handle this load. There are also limitations regarding the ability to scale past a single CPU core, certainly not ideal for those power-hungry business applications and databases.

By contrast, the Stratus architecture is based on full-function hardware fault tolerance. ftServer systems are designed from the beginning as fault-tolerant platforms. Applications are able to take full advantage of multicore symmetric multiprocessing. Hardware fault tolerance ensures maximum performance, uptime assurance, and comprehensive data protection.
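The lockstep idea, two replicas executing the same work and being checked against each other, can be illustrated very loosely in software. This toy helper is invented for illustration; real lockstep happens in hardware on every clock cycle, with no per-call overhead:

```python
# Loose software illustration of lockstep checking: run the same work on
# two "replicas" and compare the results. In a real fault-tolerant server
# this comparison happens in hardware, cycle by cycle.

def run_in_lockstep(fn, *args):
    """Execute fn twice (replica A and replica B) and compare results."""
    result_a = fn(*args)   # replica A
    result_b = fn(*args)   # replica B
    if result_a != result_b:
        # A real system would isolate the faulty replica and continue on
        # the healthy one; this sketch just signals the divergence.
        raise RuntimeError("replica divergence detected")
    return result_a

total = run_in_lockstep(sum, [1, 2, 3])
print(total)  # 6
```

The per-call duplication here also hints at why the text says software-emulated lockstep incurs overhead: the same instructions are executed twice on the same CPU.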

Stratus Equals Uptime

Stratus products and services are designed to automatically prevent downtime and data loss. Their advanced uptime assurance technologies leverage the unique intellectual property they’ve refined over 30 years of keeping critical applications up and running.



Today, Stratus customers benefit from plug-and-play uptime assurance and worry-free computing for physical servers, virtual servers, or cloud deployments (see Figure 4-2).

Figure 4-2: Uptime assurance. (The figure shows three layers: proactive availability management (24/7 monitoring by people and practices), the Automated Uptime Layer (detects, isolates, and handles faults before they cause downtime), and lockstep hardware that withstands faults that would cause other servers to crash.)

These integrated uptime technologies are embedded into every Stratus ftServer product and service to ensure uptime, all the time:

✓ Resilient fault-tolerant server hardware: Duplicated lockstep hardware withstands faults that would cause other servers to crash.

✓ Automated Uptime Layer: Predictive technology constantly monitors hundreds of system components and sensors to identify, isolate, handle, and report issues automatically — before they cause downtime or data loss.

✓ Proactive availability monitoring and management: Stratus uptime experts remotely monitor the system over a secure global network. Leveraging information provided by the Automated Uptime Layer, these experts are available 24/7 to diagnose and remediate more complex issues remotely.


Chapter 5

Service: A Key Ingredient in the Fault-Tolerant Pie

In This Chapter
▶ Service is everything
▶ Designed to make service easy
▶ Virtualizing the technician

No matter how well designed any component is, occasionally it will fail. Almost everyone in IT understands that, but the real measuring stick is how well you deal with it. Stratus’ first line of defense against downtime is embedded into every ftServer system they make. The resilient server rides through many errors. If a part malfunctions, the system keeps on running and automatically “phones home” to Stratus to report the issue and request a replacement component.

Following the Sun, 24/7/365

Most people are at their best when they're wide awake. With a new virtualization host running many business-critical VMs, you probably aren't sleeping much anyway. But if you want the good night's sleep that comes from peace of mind, seek out a fault-tolerant server.

With a Stratus ftServer system, you can rest easy. These servers are monitored 24/7/365 over a secure global ActiveService network. Leveraging information provided by the Automated Uptime Layer, Stratus service experts can resolve nearly every issue online, while your system keeps running. There's no waiting for a service technician to arrive and get your business back online.

Fixing It before It Breaks

Although it may sometimes seem that computers fail out of the blue, there are usually warning signs first. Indicators such as component temperature, fan rotational speed, and the number of errors a hard drive logs can all predict a failure. The key is noticing and tracking these indicators and then putting them together. Most humans aren't very good at this, because it takes a lot of detail work. A temperature rise of 1 degree in a CPU may not seem like an issue when the ambient temperature is acceptable and the system isn't overtaxed, yet if the CPU temperature keeps creeping up, there may be a problem brewing.

Every Stratus server has a built-in Automated Uptime Layer that acts as your first line of defense against downtime. It tracks, and alerts your team to, a multitude of important details that mere mortals might overlook. The Automated Uptime Layer constantly monitors more than 500 system components and sensors to identify, handle, and report faults — before they impact your business applications. It's like having a dedicated technician monitoring the server. This virtual technician never gets tired, never gives up, and is always looking at the big picture, providing root-cause analysis data.
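The trend-watching idea described above can be sketched in a few lines. This is a hypothetical illustration, not Stratus code; the function name, thresholds, and sample values are invented for the example:

```python
# Hypothetical sketch of trend-based health monitoring: flag a component
# when its readings drift steadily upward over time, even if every
# individual reading is still inside the "acceptable" range.

def trend_alert(readings, max_ok, min_rise):
    """Return True if readings exceed the hard limit, or stay in range
    but climb steadily.

    readings -- chronological sensor samples (e.g. CPU temperature in C)
    max_ok   -- hard limit that triggers an immediate alarm
    min_rise -- total upward drift that should be treated as a warning
    """
    if max(readings) > max_ok:
        return True                      # outright out-of-range reading
    drift = readings[-1] - readings[0]
    rising = all(b >= a for a, b in zip(readings, readings[1:]))
    return rising and drift >= min_rise  # creeping upward: worth a look

# A CPU that never exceeds its limit but keeps warming up:
samples = [61, 62, 63, 65, 66, 68]
print(trend_alert(samples, max_ok=85, min_rise=5))  # True
```

Each sample here is well below the 85-degree alarm threshold, yet the steady 7-degree climb is exactly the kind of pattern a human scanning logs would miss.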

Phone Home

Even the most skilled technicians reach their technical limits once in a while, and the best ones ask for help. As with people, asking for help is a sign of skill and maturity, not weakness. Who better to ask for help on a server than the folks who designed it? That's why Stratus servers automatically "phone home" to Stratus' Customer Assistance Centers (CACs) to report hardware and software issues. When bad things occur, the information is immediately sent to the people who can fix it. Even if a component fails, a Stratus ftServer system keeps going without performance degradation, data loss, or even the most minuscule amount of downtime.
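Conceptually, a "phone home" report is just a machine-generated incident ticket. The sketch below is purely illustrative — the field names, serial numbers, and error codes are invented, and a real system would send the report over a secure, authenticated channel rather than just build it:

```python
# Hypothetical sketch of an automated "phone home" fault report.
import json
from datetime import datetime, timezone

def build_fault_report(serial, component, error_code, details):
    """Assemble a machine-generated incident report as a JSON string."""
    report = {
        "server_serial": serial,
        "component": component,          # which part reported the fault
        "error_code": error_code,
        "details": details,
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "auto_generated": True,          # no human filed this ticket
    }
    return json.dumps(report)

msg = build_fault_report("FT-12345", "power-supply-1", "PSU_FAN_FAIL",
                         "fan speed below threshold for 10 minutes")
print(json.loads(msg)["component"])  # power-supply-1
```

The point of the structure is that the report carries everything a remote support engineer needs — which machine, which part, what went wrong, and when — before anyone on site has even noticed a problem.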



All the Pieces Make for a Fault-Tolerant Pie

One of the best and worst things about a fault-tolerant system is that after a component fails, the system keeps working and no one notices. That makes for happy customers, but in case you're not paying close attention, your Stratus server notifies you, too. That's why a Stratus ftServer with full-featured fault tolerance goes way beyond commodity servers and clusters.

Stratus Avance high-availability software

Small to medium businesses that want to virtualize but haven't, because they lack funding or specialized IT staff, now have an option. Stratus Avance software is an integrated, easy-to-use, high-availability solution that comes equipped with built-in virtualization. In a matter of minutes, Avance software transforms a pair of standard x86 servers into a highly available, highly reliable virtualization platform that shields small to medium businesses and remote locations from the high costs of downtime, data loss, and reduced productivity. The built-in virtualization wizard makes it easy to set up a highly available virtual environment in minutes — without any specialized knowledge of these technologies.

Designed to configure and manage its own operation, Avance software makes it easy and economical to benefit from virtualization. The results are also far superior in performance and cost to clusters or standby solutions. Key benefits of Stratus Avance high-availability virtualization software include:

✓ Automatic high availability. Avance offers greater than 99.99 percent uptime (less than one hour of downtime per year) on industry-standard x86 servers, including Dell, HP, IBM, and Intel x86 servers.

✓ The ability to ride through most causes of system interruption, with automatic recovery in seconds.

✓ No need for specialized knowledge or certification in high availability or virtualization to set up, use, and manage Avance software.

✓ A single web-based management console that simplifies administration.

For more information on Avance software, visit www.stratus.com/avance.



The Stratus fault-tolerant architecture protects organizations from software faults and from the failure of individual hardware components. Redundancy is one aspect, but there is much more to true fault tolerance than hardware design alone: it encompasses hardware, software, and service technologies working in concert to prevent downtime and data loss. With a Stratus ftServer system, you don't need to change careers if you're one of those people who need to be able to sleep. Virtualizing your business-critical applications and databases on a full-featured fault-tolerant server is good for you and your organization!


Chapter 6

Ten Things to Keep in Mind when Virtualizing Those Business Apps

In This Chapter
▶ Don't build a house of cards
▶ Service is key



Here are ten key things to keep in mind when virtualizing business-critical applications:

✓ Remember that virtualization makes hardware more important, not less. Even though virtualization is a software technology, it enables you to run more systems on a single server — so the server becomes more important. If the server crashes, all the operating systems and applications (VMs) on that server come tumbling down as well.

✓ Evaluate your current computing environment. Your computing infrastructure is made up of a wide variety of applications — some mission-critical, some important to daily work tasks, and some that are called on only during peak periods. Virtualization is a good opportunity to take inventory and figure out how many of each type you have.

✓ Recognize that virtualization creates business-critical applications. Fewer physical servers host many virtual machines. Collectively, those machines become critical to the smooth operation of your business.

✓ Keep your nines in mind. When it comes to servers, the important number is nine. With standard availability of 99 percent, you can expect days of downtime every year — instead of the five minutes or less you may get with 99.999 percent uptime assurance from Stratus.

✓ How much uptime is right for you? Clusters recover after a failure has already occurred; their tragic flaw is that they don't prevent the failure from happening in the first place. Stratus hardware resources are duplicated, and each resource works on every computing instruction, so hardware failures don't cause downtime — if one resource fails, another is already working on the same instruction and transparently continues operating as normal.

✓ No modifications required. With the powerful capabilities of a fully fault-tolerant server, it's easy to think you would have to modify your applications to take advantage of them. Not so with a Stratus ftServer system, where you can immediately take advantage of Stratus' full range of fault-tolerant features.

✓ Don't forget the role service plays! Resilient hardware is just one piece of the fault-tolerant pie. If a problem does arise, it's nice to know that 99 percent of the time it can be resolved remotely. Sleep soundly, because you won't even notice anything went wrong until you get a call from Stratus.

✓ Don't overlook virtualization management. The ease of creating new virtual machines has led to a new term, VM sprawl, referring to the fact that virtualization can make new systems multiply like rabbits. That's why it's important to use an appropriate virtualization management tool, and to run it on a platform that ensures the management capability is always available.

✓ Even small businesses can play the virtualization game. Stratus Avance high-availability software is the answer for small and medium businesses (SMBs). This reliable and affordable solution lets you easily set up VMs and keep them up and running without any expertise in virtualization or HA technologies.

✓ Don't forget your end-of-project party! This phase of your virtualization project is the easiest one of all. Use the time and money you've saved by virtualizing on Stratus solutions to hire the best caterer, bartender, and entertainment.
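If you want to check the "nines" arithmetic for yourself, it's a one-liner. This back-of-the-envelope sketch assumes a 365-day year (525,600 minutes):

```python
# Back-of-the-envelope downtime math for availability "nines",
# assuming a 365-day year (525,600 minutes).

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability_pct):
    """Expected downtime per year for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes(pct):.1f} min/year "
          f"(~{downtime_minutes(pct) / 60:.1f} hours)")
```

At 99 percent you get roughly 5,256 minutes of downtime a year — more than three and a half days — while five nines works out to about 5.3 minutes, which is where the "five minutes or less" figure comes from.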

