Capacity
The capacity of a system is the total workload it can handle without violating predetermined key performance acceptance criteria.

Capacity test
A capacity test complements load testing by determining your server's ultimate failure point, whereas load testing monitors results at various levels of load and traffic patterns. You perform capacity testing in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory usage, disk capacity, or network bandwidth) are necessary to support future usage levels. Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.

Component test
A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, and storage devices.

Endurance test
An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time. Endurance testing is a subset of load testing.

Investigation
Investigation is an activity based on collecting information related to the speed, scalability, and/or stability characteristics of the product under test that may have value in determining or improving product quality. Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.
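To make the endurance-test idea concrete, here is a minimal sketch (all names are hypothetical; run_scenario stands in for whatever transaction mix your tooling drives, and a real endurance run would last hours or days rather than seconds) that repeats a scenario for a fixed duration and samples response time and Python-allocated memory so that drift, such as a slow leak, becomes visible:

    # Minimal endurance-style loop: repeat a scenario for a fixed duration and
    # sample response time and traced memory to spot drift (e.g., a slow leak).
    # "run_scenario" is a placeholder for a real transaction mix.
    import time
    import tracemalloc

    def run_scenario():
        # Stand-in for a real user scenario (e.g., search, add to cart, check out).
        time.sleep(0.01)

    def endurance_test(duration_s=10.0, sample_every=100):
        tracemalloc.start()
        samples = []
        start = time.perf_counter()
        iterations = 0
        while time.perf_counter() - start < duration_s:
            t0 = time.perf_counter()
            run_scenario()
            response_s = time.perf_counter() - t0
            iterations += 1
            if iterations % sample_every == 0:
                current_bytes, _peak = tracemalloc.get_traced_memory()
                samples.append((iterations, response_s, current_bytes))
        tracemalloc.stop()
        return samples

    if __name__ == "__main__":
        for iteration, response_s, mem_bytes in endurance_test():
            print(f"iter {iteration:5d}  response {response_s * 1000:6.1f} ms  "
                  f"traced memory {mem_bytes / 1024:8.1f} KiB")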

Latency
Latency is a measure of responsiveness that represents the time it takes to complete the execution of a request. Latency may also represent the sum of several latencies or subtasks.

Metrics
Metrics are measurements obtained by running performance tests, expressed on a commonly understood scale. Some metrics commonly obtained through performance tests include processor utilization over time and memory usage by load.

Performance
Performance refers to information regarding your application's response times, throughput, and resource utilization levels.

Performance test
A performance test is a technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance testing is the superset containing all other subcategories of performance testing described in this chapter.

Performance budgets or allocations
Performance budgets (or allocations) are constraints placed on developers regarding allowable resource consumption for their component.

Performance goals
Performance goals are the criteria that your team wants to meet before product release, although these criteria may be negotiable under certain circumstances. For example, if a response time goal of three seconds is set for a particular transaction but the actual response time is 3.3 seconds, it is likely that the stakeholders will choose to release the application and defer performance tuning of that transaction to a future release.

Performance objectives
Performance objectives are usually specified in terms of response times, throughput (transactions per second), and resource-utilization levels, and typically focus on metrics that can be directly related to user satisfaction.

Performance requirements
Performance requirements are those criteria that are absolutely non-negotiable due to contractual obligations, service level agreements (SLAs), or fixed business needs. Any performance criterion that will not unquestionably lead to a decision to delay a release until the criterion passes is not absolutely required, and therefore not a requirement.
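As a small illustration of latency and metrics (send_request is a hypothetical stand-in for a real HTTP call or RPC), the sketch below times individual requests and summarizes them on a commonly understood scale, mean and 90th-percentile latency:

    # Measure per-request latency and summarize it as metrics (mean, p90).
    # "send_request" is a placeholder for a real HTTP call or RPC.
    import random
    import statistics
    import time

    def send_request():
        # Stand-in for real work; replace with an actual request.
        time.sleep(random.uniform(0.005, 0.030))

    def measure_latencies(n=200):
        latencies = []
        for _ in range(n):
            t0 = time.perf_counter()
            send_request()
            latencies.append(time.perf_counter() - t0)
        return latencies

    if __name__ == "__main__":
        lat = measure_latencies()
        p90 = statistics.quantiles(lat, n=10)[-1]  # last cut point = 90th percentile
        print(f"requests:     {len(lat)}")
        print(f"mean latency: {statistics.mean(lat) * 1000:.1f} ms")
        print(f"p90 latency:  {p90 * 1000:.1f} ms")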

Performance targets
Performance targets are the desired values for the metrics identified for your project under a particular set of conditions, usually specified in terms of response time, throughput, and resource-utilization levels. Resource-utilization levels include the amount of processor capacity, memory, disk I/O, and network I/O that your application consumes. Performance targets typically equate to project goals.

Performance testing objectives
Performance testing objectives refer to data collected through the performance-testing process that is anticipated to have value in determining or improving product quality. However, these objectives are not necessarily quantitative or directly related to a performance requirement, goal, or stated quality-of-service (QoS) specification.

Performance thresholds
Performance thresholds are the maximum acceptable values for the metrics identified for your project, usually specified in terms of response time, throughput (transactions per second), and resource-utilization levels. Resource-utilization levels include the amount of processor capacity, memory, disk I/O, and network I/O that your application consumes. Performance thresholds typically equate to requirements.

Resource utilization
Resource utilization is the cost of the project in terms of system resources. The primary resources are processor, memory, disk I/O, and network I/O.

Response time
Response time is a measure of how responsive an application or subsystem is to a client request.

Saturation
Saturation refers to the point at which a resource has reached full utilization.

Scalability
Scalability refers to an application's ability to handle additional workload, without adversely affecting performance, by adding resources such as processor, memory, and storage capacity.
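Since targets typically equate to goals and thresholds to requirements, a results check can treat them as two distinct bars. The sketch below (all numbers invented) flags each measured metric as meeting its target, merely staying within its threshold, or failing:

    # Compare measured metrics against performance targets (desired values)
    # and thresholds (maximum acceptable values). All numbers are illustrative.
    TARGETS = {"response_time_s": 2.0, "cpu_utilization_pct": 60.0}
    THRESHOLDS = {"response_time_s": 3.0, "cpu_utilization_pct": 75.0}

    def evaluate(measured):
        verdicts = {}
        for metric, value in measured.items():
            if value <= TARGETS[metric]:
                verdicts[metric] = "meets target"
            elif value <= THRESHOLDS[metric]:
                verdicts[metric] = "within threshold, misses target"
            else:
                verdicts[metric] = "exceeds threshold (fail)"
        return verdicts

    if __name__ == "__main__":
        measured = {"response_time_s": 2.4, "cpu_utilization_pct": 81.0}
        for metric, verdict in evaluate(measured).items():
            print(f"{metric}: {measured[metric]} -> {verdict}")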

Scenarios
In the context of performance testing, a scenario is a sequence of steps in your application. A scenario can represent a use case or a business function such as searching a product catalog, adding an item to a shopping cart, or placing an order.

Smoke test
A smoke test is the initial run of a performance test to see if your application can perform its operations under a normal load.

Spike test
A spike test is a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time. Spike testing is a subset of stress testing.

Stability
In the context of performance testing, stability refers to the overall reliability, robustness, functional and data integrity, availability, and/or consistency of responsiveness for your system under a variety of conditions.

Stress test
A stress test is a type of performance test designed to evaluate an application's behavior when it is pushed beyond normal or peak load conditions. The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application's weak points, and shows how the application behaves under extreme load conditions.

Throughput
Throughput is the number of units of work that can be handled per unit of time; for instance, requests per second, calls per day, hits per second, reports per year, etc.

Unit test
In the context of performance testing, a unit test is any test that targets a module of code where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested.

Utilization
In the context of performance testing, utilization is the percentage of time that a resource is busy servicing user requests. The remaining percentage of time is considered idle time.

Validation test
A validation test compares the speed, scalability, and/or stability characteristics of the product under test against the expectations that have been set or presumed for that product.

Workload
Workload is the stimulus applied to a system, application, or component to simulate a usage pattern, in regard to concurrency and/or data inputs. The workload includes the total number of users, concurrent active users, data volumes, and transaction volumes, along with the transaction mix. For performance modeling, you associate a workload with an individual scenario.

Performance testing helps to identify bottlenecks in a system, establish a baseline for future testing, support a performance-tuning effort, and determine compliance with performance goals and requirements. Including performance testing very early in your development life cycle tends to add significant value to the project. For a performance-testing project to be successful, the testing must be relevant to the context of the project, which helps you to focus on the items that are truly important. If the performance characteristics are unacceptable, you will typically want to shift the focus from performance testing to performance tuning in order to make the application perform acceptably. You will likely also focus on tuning if you want to reduce the amount of resources being used and/or further improve system performance. Performance, load, and stress tests are subcategories of performance testing, each intended for a different purpose. Creating a baseline against which to evaluate the effectiveness of subsequent performance-improving changes to the system or application will generally increase project efficiency.
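As a rough sketch of applying a workload and measuring throughput (concurrent users are just threads here, and run_scenario is again a hypothetical stand-in for a real scenario), the example below drives a scenario with a small pool of concurrent workers and reports requests per second:

    # Apply a simple workload (N concurrent "users", each running a scenario
    # repeatedly) and report throughput in requests per second.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def run_scenario():
        # Stand-in for a real user scenario.
        time.sleep(0.02)

    def user(iterations):
        for _ in range(iterations):
            run_scenario()
        return iterations

    def load_test(concurrent_users=10, iterations_per_user=50):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            completed = sum(pool.map(user, [iterations_per_user] * concurrent_users))
        elapsed = time.perf_counter() - start
        return completed, elapsed

    if __name__ == "__main__":
        completed, elapsed = load_test()
        print(f"{completed} requests in {elapsed:.1f} s "
              f"-> throughput {completed / elapsed:.1f} req/s")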

Performance test
Purpose: To determine or validate speed, scalability, and/or stability.
Notes:
- A performance test is a technical investigation done to determine or validate the responsiveness, speed, scalability, and/or stability characteristics of the product under test.

Load test
Purpose: To verify application behavior under normal and peak load conditions.
Notes:
- Load testing is conducted to verify that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA). A load test enables you to measure response times, throughput rates, and resource-utilization levels, and to identify your application's breaking point, assuming that the breaking point occurs below the peak load condition.
- Endurance testing is a subset of load testing. An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time. Endurance testing may be used to calculate Mean Time Between Failure (MTBF), Mean Time To Failure (MTTF), and similar metrics.

Stress test
Purpose: To determine or validate an application's behavior when it is pushed beyond normal or peak load conditions.
Notes:
- The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application's weak points, and shows how the application behaves under extreme load conditions.
- Spike testing is a subset of stress testing. A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.
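The load-test notes above mention deriving MTBF and MTTF from endurance runs; as a minimal, invented-data illustration, the calculation can be as simple as dividing total operating time by the number of failures observed:

    # Estimate Mean Time Between Failures (MTBF) from failure timestamps
    # observed during an endurance run. All figures are invented.
    failure_times_h = [5.2, 11.8, 19.1, 30.4]  # hours into the run when failures occurred
    total_run_h = 36.0                         # total duration of the endurance run

    # MTBF: total operating time divided by the number of failures observed.
    mtbf_h = total_run_h / len(failure_times_h)

    # Gaps between consecutive failures, for a sense of the spread.
    gaps_h = [b - a for a, b in zip([0.0] + failure_times_h[:-1], failure_times_h)]

    print(f"failures observed: {len(failure_times_h)}")
    print(f"MTBF: {mtbf_h:.1f} h")
    print(f"gaps between failures (h): {[round(g, 1) for g in gaps_h]}")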

Summary Matrix of Benefits by Key Performance Test Types

Performance test
Benefits:
- Determines the speed, scalability, and stability characteristics of an application, thereby providing an input to making sound business decisions.
- Focuses on determining whether the user of the system will be satisfied with the performance characteristics of the application.
- Identifies mismatches between performance-related expectations and reality.
- Supports tuning, capacity planning, and optimization efforts.
Challenges and areas not addressed:
- May not detect some functional defects that only appear under load.
- If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.
- Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

Load test
Benefits:
- Determines the throughput required to support the anticipated peak production load.
- Determines the adequacy of a hardware environment.
- Evaluates the adequacy of a load balancer.
- Detects concurrency issues.
- Detects functionality errors under load.
- Collects data for scalability and capacity-planning purposes.
- Helps to determine how many users the application can handle before performance is compromised.
- Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.
Challenges and areas not addressed:
- Is not designed to primarily focus on speed of response.
- Results should only be used for comparison with other related load tests.

Stress test
Benefits:
- Determines if data can be corrupted by overstressing the system.
- Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.
- Allows you to establish application-monitoring triggers to warn of impending failures.
- Ensures that security vulnerabilities are not opened up by stressful conditions.
- Determines the side effects of common hardware or supporting application failures.
- Helps to determine what kinds of failures are most valuable to plan for.
Challenges and areas not addressed:
- Because stress tests are unrealistic by design, some stakeholders may dismiss test results.
- It is often difficult to know how much stress is worth applying.
- It is possible to cause application and/or network failures that may result in significant disruption if not isolated to the test environment.

Capacity test
Benefits:
- Provides information about how workload can be handled to meet business requirements.
- Provides actual data that capacity planners can use to validate or enhance their models and/or predictions.
- Enables you to conduct various tests to compare capacity-planning models and/or predictions.
- Determines the current usage and capacity of the existing system to aid in capacity planning.
- Provides the usage and capacity trends of the existing system to aid in capacity planning.
Challenges and areas not addressed:
- Capacity model validation tests are complex to create.
- Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.
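The capacity-test benefits above feed directly into capacity planning; as a back-of-the-envelope sketch with invented figures, a scale-out estimate might look like this:

    # Back-of-the-envelope scale-out estimate from capacity-test results.
    # All figures are illustrative, not measurements.
    import math

    capacity_per_node_rps = 420.0  # peak throughput one node sustained in the capacity test
    anticipated_peak_rps = 2600.0  # forecast peak production load
    headroom = 0.7                 # plan to run nodes at 70% of tested capacity

    usable_per_node = capacity_per_node_rps * headroom
    nodes_needed = math.ceil(anticipated_peak_rps / usable_per_node)

    print(f"usable capacity per node: {usable_per_node:.0f} req/s")
    print(f"nodes needed for {anticipated_peak_rps:.0f} req/s peak: {nodes_needed}")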

Summary Table of Core Performance-Testing Activities

The following table summarizes the seven core performance-testing activities along with the most common input and output for each activity. Note that project context is not listed, although it is a critical input item for each activity.

Activity 1. Identify the Test Environment
Input:
- Logical and physical production architecture
- Logical and physical test architecture
- Available tools
Output:
- Comparison of test and production environments
- Environment-related concerns
- Determination of whether additional tools are required

Activity 2. Identify Performance Acceptance Criteria
Input:
- Client expectations
- Risks to be mitigated
- Business requirements
- Contractual obligations
Output:
- Performance-testing success criteria
- Performance goals and requirements
- Key areas of investigation
- Key performance indicators
- Key business indicators

Activity 3. Plan and Design Tests
Input:
- Available application features and/or components
- Application usage scenarios
- Unit tests
- Performance acceptance criteria
Output:
- Conceptual strategy
- Test execution prerequisites
- Tools and resources required
- Application usage models to be simulated
- Test data required to implement tests
- Tests ready to be implemented

Activity 4. Configure the Test Environment
Input:
- Conceptual strategy
- Available tools
- Designed tests
Output:
- Configured load-generation and resource-monitoring tools
- Environment ready for performance testing

Activity 5. Implement the Test Design
Input:
- Conceptual strategy
- Available tools/environment
- Available application features and/or components
- Designed tests
Output:
- Validated, executable tests
- Validated resource monitoring
- Validated data collection

Activity 6. Execute the Test
Input:
- Task execution plan
- Available tools/environment
- Available application features and/or components
- Validated, executable tests
Output:
- Test execution results

Activity 7. Analyze Results, Report, and Retest
Input:
- Task execution results
- Performance acceptance criteria
- Risks, concerns, and issues
Output:
- Results analysis
- Recommendations
- Reports
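One simple form of the results analysis called for in Activity 7 is to compare the latest run against the baseline and flag regressions beyond a tolerance; the sketch below uses invented numbers:

    # Compare a test run against a baseline and flag metrics that regressed
    # by more than a tolerance. All numbers are illustrative.
    TOLERANCE = 0.10  # flag changes worse than 10%

    baseline = {"p90_response_s": 1.8, "throughput_rps": 950.0, "cpu_pct": 62.0}
    latest   = {"p90_response_s": 2.1, "throughput_rps": 910.0, "cpu_pct": 71.0}

    # For response time and CPU, higher is worse; for throughput, lower is worse.
    higher_is_worse = {"p90_response_s": True, "throughput_rps": False, "cpu_pct": True}

    for metric, base in baseline.items():
        new = latest[metric]
        change = (new - base) / base
        regressed = change > TOLERANCE if higher_is_worse[metric] else change < -TOLERANCE
        status = "REGRESSION" if regressed else "ok"
        print(f"{metric:15s} baseline={base:7.2f} latest={new:7.2f} change={change:+.1%} {status}")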
