Software Testing Material


Software Testing: Testing is a process of executing a program with the intent of finding errors.

Software Engineering: Software engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines. Software engineering draws on computer science, management science, economics, communication skills and an engineering approach.

What should be done during testing? Confirm the product:
• has been developed according to the specifications
• works correctly
• satisfies the customer requirements

Why should we do testing?
• To deliver an error-free, superior product
• To give quality assurance to the client
• For competitive advantage
• To cut down costs

How to test? Testing can be done in the following ways:
• Manually
• With automation (using tools like WinRunner, LoadRunner, TestDirector, ...)
• With a combination of manual and automated testing

Software Project: A problem solved by some people through a process is called a project. The phases Information Gathering – Requirements Analysis – Design – Coding – Testing – Maintenance together constitute a software project.

Software Project: Problem → Process → Product

Software Development Phases:

Information Gathering: Encompasses requirements gathering at the strategic business level.

Planning: Provides a framework that enables management to make reasonable estimates of:
• Resources
• Cost
• Schedules
• Size

Requirements Analysis: Data, functional and behavioral requirements are identified.
• Data Modeling: Defines data objects, attributes, and relationships.
• Functional Modeling: Indicates how data are transformed in the system.
• Behavioral Modeling: Depicts the impact of events.

Design: Design is the engineering representation of the product that is to be built.
• Data Design: Transforms the information-domain model into the data structures required to implement the software.
• Architectural Design: Defines the relationships between the major structural elements of the software; represents the structure of data and program components required to build a computer-based system.
• Interface Design: Creates an effective communication medium between a human and a computer.
• Component-level Design: Transforms structural elements of the software architecture into a procedural description of software components.

Coding: Translation of the design into source code (machine-readable form).

Testing: Testing is a process of executing a program with the intent of finding errors.
• Unit Testing: Concentrates on each unit (module, component, ...) of the software as implemented in source code.
• Integration Testing: Putting the modules together and constructing the software architecture.
• System and Functional Testing: The product is validated together with the other system elements and tested as a whole.
• User Acceptance Testing: Testing by the user to collect feedback.

Maintenance: Change associated with error correction, adaptation and enhancement.
• Correction: Changes the software to correct defects.
• Adaptation: Modifies the software to accommodate changes in its external environment.
• Enhancement: Extends the software beyond its original functional requirements.
• Prevention: Changes the software so that it can be more easily corrected, adapted and enhanced.

Business Requirements Specification (BRS): Consists of definitions of the customer requirements. Also called CRS/URS (Customer Requirements Specification / User Requirements Specification).


Software Requirements Specification (S/wRS): Consists of the functional requirements to develop and the system requirements (S/w & H/w) to use.

Review: A verification method to estimate the completeness and correctness of documents.

High Level Design Document (HLDD): Consists of the overall hierarchy of the system in terms of modules.

Low Level Design Document (LLDD): Consists of every sub-module in terms of structural logic (ERD) and back-end logic (DFD).

Prototype: A sample model of an application without functionality (screens only).

White Box Testing: A coding-level testing technique to verify the completeness and correctness of the programs with respect to the design. Also called Glass Box Testing or Clear Box Testing.

Black Box Testing: A .exe-level testing technique to validate the functionality of an application with respect to the customer requirements. During this test the engineer validates internal processing through the external interface.

Grey Box Testing: A combination of white box and black box testing.

Build: A .exe form of an integrated module set is called a build.

Verification: Are we building the system right?

Validation: Are we building the right system?

Software Quality Assurance (SQA): SQA concepts monitor and measure the strength of the development process. Ex: LCT (Life Cycle Testing).

Quality:
• Meets customer requirements
• Meets customer expectations (cost to use, speed of processing or performance, security)
• Possible cost
• Time to market

For developing quality software we need LCD and LCT.

LCD (Life Cycle Development): Development in multiple stages, where every stage is verified for completeness.

V-Model: Pairs every development stage with a testing stage (see the figure below).

Build: When coding-level testing is over and the modules are completely integration tested, the resulting integrated module set (.exe) is called a build. A build is produced after integration testing.


Test Management: Testers maintain documents related to every project and refer to these documents for future modifications.

Fig: V-Model. The development stages (Information Gathering & Analysis, Design and Coding, Install Build, Maintenance) are paired with the testing activities (Assessment of Development Plan, Prepare Test Plan, Requirements Phase Testing, Design Phase Testing, Program Phase Testing (WBT), Functional & System Testing, User Acceptance Testing, Test Environment Process, Port Testing, Test Software Changes, Test Efficiency).

Port Testing: Tests the installation process.

Change Request: A request made by the customer to modify the software.

Defect Removal Efficiency: DRE = a / (a + b), where a = the total number of defects found by testers during testing and b = the total number of defects found by the customer during maintenance. DRE is also called DD (Defect Deficiency).
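As a quick illustration, the DRE formula can be computed as in the Python sketch below (the function name is ours, not from any tool):

    # Sketch of the DRE formula defined above.
    def defect_removal_efficiency(found_in_testing, found_in_maintenance):
        """DRE = a / (a + b): a = defects found by testers, b = defects found by the customer."""
        a, b = found_in_testing, found_in_maintenance
        return a / (a + b) if (a + b) else 0.0

    print(defect_removal_efficiency(90, 10))  # 0.9: testing caught 90% of all defects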

BBT, UAT and the test management process are the stages where independent testers or the testing team are involved.

Refinement Form of V-Model: From a cost and time point of view, the full V-model is not applicable to small- and medium-scale companies. These organizations maintain a refinement form of the V-model, pairing each development document with a test phase:

BRS/URS/CRS → User Acceptance Testing
S/wRS → Functional & System Testing
HLDD → Integration Testing
LLDD → Unit Testing
Code

Fig: Refinement Form of V-Model

Development starts with information gathering. After requirements gathering the BRS/CRS/URS is prepared; this is done by the Business Analyst. During requirements analysis all the requirements are analyzed; at the end of this phase the S/wRS is prepared by the System Analyst. It consists of the functional (customer) requirements plus the system requirements (H/w + S/w).

During the design phase two types of designs are done, HLDD and LLDD, with Tech Leads involved. During the coding phase programs are developed by programmers. During unit testing they conduct program-level testing with the help of WBT techniques. During integration testing, the testers and programmers (or test programmers) integrate the modules and test with respect to the HLDD. During system and functional testing the actual testers are involved and conduct tests based on the S/wRS. During UAT, customer-site people are also involved, and they perform tests based on the BRS.

With the above model, small- and medium-scale organizations also conduct life cycle testing, but they maintain a separate team only for functional and system testing.

Reviews during Analysis: After the completion of information gathering and analysis, a review meeting is conducted in which the Quality Analyst decides on the following 5 factors:

1. Are they complete?
2. Are they correct? (Are they the right requirements?)
3. Are they achievable?
4. Are they reasonable? (with respect to cost & time)
5. Are they testable?

Reviews during Design: After the completion of the analysis of customer requirements and their reviews, technical support people (Tech Leads) concentrate on the logical design of the system. In this stage they develop the HLDD and LLDD. After completing these design documents, the Tech Leads review them for correctness and completeness, applying the factors below.
• Is the design good? (understandable and easy to refer to)
• Are they complete? (are all the customer requirements satisfied?)
• Are they correct? (is the design flow correct?)
• Are they followable? (is the design logic correct?)
• Do they handle error handling? (the design should specify the negative flow as well as the positive flow)

Fig: Example flow: a user submits user information to the Login screen; a valid user reaches the Inbox, an invalid user is rejected.

Unit Testing: After the completion of design and design reviews, programmers concentrate on coding. During this stage they conduct program-level testing with the help of the WBT techniques. WBT is also known as glass box or clear box testing. WBT is based on the code, is applied at the module level, and is conducted by the senior programmers. There are two types of WBT techniques:

1. Execution Testing
• Basis path coverage (correctness of every statement's execution)
• Loops coverage (correctness of loop termination)

• Program technique coverage (fewer memory cycles and CPU cycles during execution)

2. Operations Testing: Checks whether the software runs on the customer-expected environment platforms (system software such as OSs, compilers and browsers).

Integration Testing: After the completion of unit testing of the dependent modules, development people concentrate on integration testing. During this test, programmers verify the integration of the modules with respect to the HLDD (which contains the hierarchy of modules). There are two approaches to conducting integration testing:
• Top-down approach
• Bottom-up approach

Stub: A called program; it is a temporary stand-in for a sub-module and sends control back to the main module.

Driver: A calling program; it invokes a sub-module in place of the main module.

Top-down: This approach starts testing from the root (main module), using stubs in place of sub-modules that are not yet ready.

Fig: Top-down integration: Main calls Sub Module1, with a Stub standing in for Sub Module2.

Bottom-Up: This approach starts testing from the lower-level modules; drivers are used to invoke the sub-modules. (Ex: for login, create a driver that supplies a default user id and password.)

Fig: Bottom-up integration: a Driver invokes Sub Module1 and Sub Module2 in place of Main.

Sandwich: This approach combines the top-down and bottom-up approaches to integration testing; the middle-level modules are tested using drivers and stubs.

Fig: Sandwich integration: a Driver stands in above the middle-level modules (Sub Module1, Sub Module2, Sub Module3) and a Stub stands in below them. A code sketch of both helpers follows.
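To make stubs and drivers concrete, here is a minimal Python sketch (the module and function names are hypothetical, not from any project above): the stub stands in for a missing lower-level module, and the driver invokes the sub-module in place of the main module.

    def login(uid, pwd, authenticate):
        # sub-module under test; 'authenticate' is the lower-level dependency
        return "inbox" if authenticate(uid, pwd) else "invalid user"

    def stub_authenticate(uid, pwd):
        # stub: returns a fixed result instead of calling the real sub-module
        return True

    def driver():
        # driver: invokes the sub-module directly instead of going through Main
        assert login("user1", "pwd1", stub_authenticate) == "inbox"
        print("integration check passed")

    driver()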

System Testing:
• Conducted by a separate testing team
• Follows black box testing techniques
• Depends on the S/wRS
• Build-level testing that validates internal processing through the external interface
• This phase is divided into 4 divisions

After the completion of coding and the corresponding tests (unit & integration), the development team releases the fully integrated module set as a build. After receiving a stable build from the development team, the separate testing team concentrates on functional and system testing with the help of BBT. This testing is classified into 4 divisions:
• Usability Testing (is it easy to use; low priority in testing)
• Functional Testing (is the functionality correct; medium priority in testing)
• Performance Testing (speed of processing; medium priority in testing)
• Security Testing (trying to break the security of the system; high priority in testing)

Usability and functional testing are called core testing, while the performance and security testing techniques are called advanced testing. Usability testing is static testing; functional testing is dynamic testing. From the tester's point of view, the functional and usability tests are the most important.

Usability Testing: Checks the user-friendliness of the application or build (WYSIWYG). Usability testing consists of the following subtests.


User Interface Testing:
• Ease of use (understandable to end users)
• Look & feel (pleasantness or attractiveness of the screens)
• Speed of interface (fewer events needed to complete a task)

Manual Support Testing: In general, technical writers prepare the user manuals after all possible test execution and the resulting modifications are complete. Nowadays, help documentation is released along with the main application.

Fig: System testing order: the development team releases the build; user interface testing comes first, then the remaining system testing techniques (functionality, performance and security tests), and manual support testing last.

Help documentation is also called the user manual, but user manuals are actually prepared after the completion of all other system test techniques and after all the bugs are resolved.

Functional Testing: During this stage of testing, the testing team concentrates on "meeting customer requirements": whether the system performs the functionality for which it was developed. For every project, functionality testing is the most important; most of the testing tools available in the market are of this type. Functional testing consists of the following subtests.

Fig: Functional testing makes up about 80% of system testing, and functionality/requirements testing makes up about 80% of functional testing.

Functionality or Requirements Testing: During this subtest, the test engineer validates the correctness of every functionality in the application build through the coverages below. If the team has too little time for full system testing, it does functionality testing only.

Functionality or Requirements Testing has the following coverages:
• Behavioral coverage (object properties checking)
• Input domain coverage (correctness of the size and type of every input object)
• Error handling coverage (preventing negative navigation)
• Calculations coverage (correctness of output values)
• Backend coverage (data validation & data integrity of database tables)
• URL coverage (link execution in web pages)
• Service levels (order of functionality or services)
• Successful functionality (combination of all the above)

All the above coverages are mandatory.

Input Domain Testing: During this test, the test engineer validates the size and type of every input object. For this coverage, the test engineer prepares boundary values and equivalence classes for every input object. Ex: a login process accepts a user id and password; the user id allows alphanumerics 4-16 characters long, and the password allows alphabets 4-8 characters long.

Boundary Value Analysis (BVA): Boundary values are used for testing the size and range of an object.

Equivalence Class Partitioning (ECP): Equivalence classes are used for testing the type of an object.
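For the login example above, BVA and ECP test data could be generated as in this Python sketch (the sizes follow the example: user id 4-16 alphanumerics, password 4-8 alphabets; the helper names are ours):

    import random
    import string

    def bva_sizes(lo, hi):
        # boundary value analysis on size: min-1, min, min+1, max-1, max, max+1
        return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

    def sample(chars, n):
        return "".join(random.choice(chars) for _ in range(max(n, 0)))

    user_id_tests = [sample(string.ascii_letters + string.digits, n) for n in bva_sizes(4, 16)]
    password_tests = [sample(string.ascii_letters, n) for n in bva_sizes(4, 8)]

    # equivalence class partitions on type: one valid and one invalid class each
    user_id_classes = {"valid": "abc123", "invalid": "ab!@#"}   # special characters invalid
    password_classes = {"valid": "abcd", "invalid": "ab12"}     # digits invalid here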

Recovery Testing: This test is also known as reliability testing. During this test, test engineers validate whether the application build can recover from abnormal situations. Ex: power failure during processing, network disconnection, server down, database disconnected, etc.

Fig: Recovery: an abnormal state is brought back to the normal state through backup & recovery procedures.

Recovery testing is an extension of error handling testing.

Compatibility Testing: This test is also known as portability testing. During this test, the test engineer validates that the application continues to execute on the customer-expected platforms (OSs, compilers, browsers, etc.). Two types of compatibility problems arise:
1. Forward compatibility
2. Backward compatibility

Forward compatibility: The application is ready to run, but the technology or environment (such as the OS) does not yet support it.

Backward compatibility: The application is not ready to run on the existing technology or environment.

Configuration Testing: This test is also known as hardware compatibility testing. During this test, the test engineer validates whether the application build supports different technologies, i.e. hardware devices.

Inter Systems Testing: This test is also known as end-to-end testing. During this test, the test engineer validates whether the application build coexists with other existing software at the customer site in order to share resources (H/w or S/w).

Fig: Inter-systems example: a local eSeva center where the Water Bill (WBAS), Electricity Bill (EBAS), Telephone Bill (TPBAS) and newly added Income Tax Bill (ITBAS) automation systems share a local database server (the sharable resource) and connect to remote servers, including a new server for the new component.

Fig: A second inter-systems example: a Banking Information System coexisting with its Bank Loans component.

In the first example, one system is our application and the other systems are sharable; the second example is the same system but different components.

System software level: compatibility testing. Hardware level: configuration testing. Application software level: inter-systems testing.

Installation Testing: Testing the application's installation process in the customer-specified environment and conditions.

Fig: Installation testing: the build plus the required S/w components to run the application are installed from the server onto the test engineers' systems in a customer-site-like environment.

The following conditions are tested during the installation process:
• Setup program: Does the setup start?
• Easy interface: Does the installation provide an easy interface?
• Occupied disk space: How much disk space is occupied after the installation?
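The occupied-disk-space check can be approximated with the Python standard library, as in this sketch (the installation step itself is a hypothetical placeholder):

    import shutil

    def free_bytes(path="/"):
        return shutil.disk_usage(path).free

    before = free_bytes()
    # ... run the setup program here ...
    after = free_bytes()
    print(f"installation occupied {(before - after) / 2**20:.1f} MiB")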

Sanitation Testing: This test is also known as garbage testing. During this test, the test engineer finds extra features in the application build with respect to the S/wRS. (Ex: the S/wRS specifies only a login with user id and password, but the build also offers a "Forgot Password" feature.) Most testers may not encounter this type of problem.

Parallel or Comparative Testing: During this test, the test engineer compares the application build with similar applications or with old versions of the same application to find its competitiveness. The comparison can be done in two ways:
• Against similar applications in the market
• Against older versions of the upgraded application

Performance Testing: An advanced testing technique, and expensive to apply. During this test, the testing team concentrates on the speed of processing. Performance testing is classified into the subtests below:
1. Load Testing
2. Stress Testing
3. Data Volume Testing
4. Storage Testing

Load Testing: This test is also known as scalability testing. During this test, the test engineer executes the application under the customer-expected configuration and load to estimate performance. Load: the number of users trying to access the system at a time. This test can be done in two ways: (1) manually, or (2) using a tool such as LoadRunner.

Stress Testing: During this test, the test engineer executes the application build under the customer-expected configuration and peak load to estimate performance.

Data Volume Testing: A tester conducts this test to find the maximum size of data that the application build can allow or maintain.
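In practice load tests are run with a tool such as LoadRunner, but the idea can be sketched with plain Python threads; the URL below is a hypothetical endpoint, not a real one.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/login"   # hypothetical endpoint under test

    def one_user(_):
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
            return time.time() - start    # response time in seconds
        except OSError:
            return None                   # request failed under load

    def run_load(users=100):
        # 'users' = number of users accessing the system at a time
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = [t for t in pool.map(one_user, range(users)) if t is not None]
        if times:
            print(f"{len(times)}/{users} ok, avg {sum(times) / len(times):.3f}s")

    run_load(100)   # customer-expected load; raise towards peak load for a stress test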


Storage Testing: Execution of the application with huge amounts of resources, to estimate the storage limitations the application can handle.

Fig: Performance vs. resources: performance rises as resources are added, up to the point where thrashing sets in.

Security Testing: An advanced testing technique that is complex to apply; conducting it requires highly skilled people with security domain knowledge. This test is divided into three subtests:

Authorization: Verifies the user's identity to check whether he or she is an authorized user.

Access Control: Also called privilege testing; checks the rights given to a user to perform a system task.

Encryption / Decryption: Encryption converts actual data into a secret code that is not understandable to others; decryption converts the secret code back into the actual data.

Fig: Client-server encryption: on the client, the source data is encrypted before transmission and the server decrypts it back at the destination; for the response, the server encrypts and the client decrypts.
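A round-trip sketch of encryption and decryption, assuming the third-party Python 'cryptography' package is installed (any real cipher would do; this is only an illustration, not a recommendation from the original material):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # secret shared between client and server
    f = Fernet(key)

    token = f.encrypt(b"account=1234;balance=66.7")   # actual data -> secret code
    print(token)                                      # unreadable to others
    print(f.decrypt(token))                           # secret code -> actual data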

User Acceptance Testing: After the completion of all possible system test execution, the organization concentrates on user acceptance testing to collect feedback. There are two approaches: Alpha (α) testing and Beta (β) testing.

Note: Software development efforts are of two kinds: software applications (also called projects) and products.

Software Application (Project): The requirements come from a client and the software is developed for that one company; there is a specific customer. For this, alpha testing is done.


Product: The requirements are gathered from the market and the software may be sold to more than one company; there is no specific customer. For this, a β-version or trial version is released in the market for beta testing.

Alpha Testing: For software applications with a specific customer. Done by the real customer, at the development site, in a virtual environment. Feedback is collected.

Beta Testing: For software products. Done by customer-site-like people, in a customer-site-like (real) environment. Feedback is collected.

Testing during Maintenance: After the completion of UA testing, the organization forms a Release Team (RT). This team conducts port testing at the customer site to estimate the completeness and correctness of the application installation. During port testing, the release team validates the following factors at the customer site:
• Compact installation (fully and correctly installed or not)
• On-screen displays
• Overall functionality
• Input device handling
• Output device handling
• Secondary storage handling
• OS error handling
• Co-existence with other software

After the completion of the above testing, the release team gives training and application support at the customer site for a period. While the customer-site people use the application, they send Change Requests (CRs) to the company. A received CR is of one of two types:
1. Enhancement
2. Missed Defect


Fig: Change request handling: for an enhancement, the CCB performs impact analysis, the developers perform the change, and the testers test that software change; for a missed defect, impact analysis is followed by performing and testing the change, and the old test process capability is reviewed to improve it.

Change Control Board (CCB): The team that handles customer requests for enhancement changes.

Testing Stages vs. Roles:
Reviews in Analysis - Business Analyst / Functional Lead
Reviews in Design - Technical Support / Technical Lead
Unit Testing - Senior Programmer
Integration Testing - Developer / Test Engineer
Functional & System Testing - Test Engineer
User Acceptance Testing - Customer-site people, with involvement of the testing team
Port Testing - Release Team
Testing during Maintenance / Test Software Changes - Change Control Board

Testing Team: Following the refinement form of the V-model, small- and medium-scale companies maintain a separate testing team for some of the stages in LCT. Within these teams the organisation maintains the roles below.

Quality Control: Defines the objectives of testing.
Quality Assurance: Defines the approach; done by the Test Manager.


Test Manager: Schedules that approach.
Test Lead: Maintains the testing team with respect to the test plan.
Test Engineer: Conducts testing to find defects.

Fig: Organisation hierarchy: Quality Control and Quality Assurance at the top; on the development side, Project Manager → Project Lead → Programmers; on the testing side, Test Manager → Test Lead → Test Engineer / QA Engineer.

In short: Quality Control defines the objectives of testing; Quality Assurance defines the approach; the Test Manager schedules and plans; the Test Lead applies the plan; the Test Engineer follows it.

Testing Terminology:

Monkey / Chimpanzee Testing: Covering only the main activities of the application during testing (when time is short).

Guerrilla Testing: Covering a single functionality with multiple possibilities is called a guerrilla ride or guerrilla testing. (There are no rules or regulations for testing an issue.)

Exploratory Testing: Level-by-level coverage of the activities in the application: covering the main activities first and the other activities next.

Sanity Testing: This test is also known as the Tester Acceptance Test (TAT). It checks whether the build released by the development team is stable enough for complete testing.

Fig: The development team releases the build → sanity test / tester acceptance test → functional & system testing.


Smoke Testing: An extra shakeup of sanity testing: the testing team rejects a build back to the development team, with reasons, before starting testing.

Bebugging: The development team releases a build with known bugs, to test the testers.

Bigbang Testing: A single stage of testing after the completion of all module development. Also known as informal testing.

Incremental Testing: A testing process in multiple stages. Also known as formal testing.

Static Testing: Conducting a test without running the application. Ex: user interface testing.

Dynamic Testing: Conducting a test by running the application. Ex: functional testing, load testing, compatibility testing.

Manual vs. Automation: When a tester conducts a test on an application without using any third-party testing tool, the process is called manual testing. When a tester conducts a test with the help of a software testing tool, the process is called automation.

Fig: Automation typically covers 40%-60% of the testing, selected by impact and criticality.

Need for Automation: When tools are not available, teams do manual testing only; if the company already has testing tools, they may follow automation. To verify the need for automation, they consider two factors.

Impact of the test: Indicates test repetition. (Ex: a small multiplication screen, No1 x No2 = Result, tested over and over with new data.)


Criticality: Indicates that a test is too complex to apply manually. Ex: load testing for 1000 users. (Impact indicates test repetition; criticality indicates manual complexity.)

Retesting: Re-executing the same test on the application with multiple test data.

Regression Testing: Re-executing tests on a modified build to ensure that the bug fix works and that no side effects have appeared. Any dependent modules may also show side effects.
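Retesting maps naturally onto data-driven test frameworks. As a sketch, here is the multiply example from above re-executed with multiple test data using pytest (assumed available; the function is hypothetical):

    import pytest

    def multiply(a, b):
        return a * b   # hypothetical unit under test (the No1 x No2 screen)

    @pytest.mark.parametrize("a,b,expected", [
        (2, 3, 6),
        (-4, 5, -20),
        (0, 9, 0),
    ])
    def test_multiply(a, b, expected):
        # the same test, re-executed with multiple test data = retesting
        assert multiply(a, b) == expected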

Fig: Regression flow: failed tests are reported to development; on the modified build, the failed tests and the impacted passed tests are re-executed.

Selection of Automation: Before starting project-level testing with a separate testing team, the project manager, test manager or quality analyst decides on the need for test automation for that project based on the factors below.

Type of external interface: GUI - automation; CUI - manual.
Size of external interface: large - automation; small - manual.
Expected number of releases: several releases - automation; few releases - manual.
Maturity between expected releases: more maturity - manual; less maturity - automation.
Tester efficiency: test engineers have knowledge of automation tools - automation; no such knowledge - manual.
Support from senior management:

Management accepts - automation; management rejects - manual.

Fig: Test documentation hierarchy:
• Testing Policy: company level, from the C.E.O.
• Test Strategy: company level, from the Test Manager / QA / PM.
• Test Methodology and Test Plan: project level, from the Test Lead.
• Test Cases, Test Procedure, Test Script, Test Log and Defect Report: from the Test Lead and Test Engineers.
• Test Summary Report: from the Test Lead.

Testing Policy: A company-level document, developed by the QC people, that defines the testing objectives for developing quality software. It addresses:
Testing Definition: Verification & validation of software.
Testing Process: Proper test planning before starting testing.
Testing Standard: 1 defect per 250 LOC / 1 defect per 10 FP.
Testing Measurements: QAM, TMM, PCM.
(Signed by the CEO.)

QAM: Quality Assessment Measurements.
TMM: Test Management Measurements.
PCM: Process Capability Measurements.

Note: The test policy document indicates the trend of the organization.

Test Strategy:
1. Scope & Objective: The definition, need and purpose of testing in the organization.
2. Business Issues: Budget control for testing.
3. Test Approach: Defines the mapping between development stages and test factors. The TRM (Test Responsibility Matrix, or Test Matrix) defines the mapping between test factors and development stages.
4. Test Environment Specifications: The test documents the testing team must develop during testing.
5. Roles and Responsibilities: The names of the jobs in the testing team, with their responsibilities.
6. Communication & Status Reporting: The required negotiation between two consecutive roles in testing.
7. Testing Measurements and Metrics: To estimate work completion in terms of quality assessment, test management and process capability.
8. Test Automation: The possibility of test automation with respect to the project requirements and the testing facilities/tools available (either complete or selective automation).
9. Defect Tracking System: The required negotiation between the development and testing teams to fix and resolve defects.
10. Change and Configuration Management: The strategies required to handle change requests from the customer site.
11. Risk Analysis and Mitigations: Analysis of common problems that may appear during testing, and possible solutions for recovering from them.
12. Training Plan: The training needed before testing can start, be conducted and be applied.

Test Factor: A test factor defines a testing issue. There are 15 common test factors in software testing. Ex: QC defines quality; the PM/QA/TM selects test factors (e.g. ease of use, portability); the TL selects the testing techniques (e.g. UI testing, compatibility testing); the TE writes the test cases (e.g. MS 6 rules, run on different OSs).


Test Factors:
1. Authorization: Validation of users connecting to the application. (Security testing; functionality/requirements testing)
2. Access Control: Permission for a valid user to use a specific service. (Security testing; functionality/requirements testing)
3. Audit Trail: Maintains metadata about operations. (Error handling testing; functionality/requirements testing)
4. Correctness: Meets the customer requirements in terms of functionality. (All black box testing techniques)
5. Continuity in Processing: Inter-process communication. (Execution testing; operations testing)
6. Coupling: Co-existence with other applications at the customer site. (Inter-systems testing)
7. Ease of Use: User-friendliness. (User interface testing; manual support testing)
8. Ease of Operation: Ease of operations. (Installation testing)
9. File Integrity: Creation of internal files or backup files. (Recovery testing; functionality/requirements testing)
10. Reliability: Whether the application recovers from abnormal situations and uses its backup files. (Recovery testing; stress testing)
11. Portability: Runs on the customer-expected platforms. (Compatibility testing; configuration testing)
12. Performance: Speed of processing. (Load testing; stress testing; data volume testing; storage testing)
13. Service Levels: Order of functionalities. (Stress testing; functionality/requirements testing)
14. Methodology: Whether a standard methodology is followed during testing. (Compliance testing)
15. Maintainability: Whether the application is serviceable to customers over a long time. (Compliance testing: mapping the quality factors to testing)

Quality Gap: A conceptual gap between the quality factors and the testing process.

Test Methodology: The test strategy defines the overall approach. To convert the overall approach into the corresponding project-level approach, the quality analyst / PM defines the test methodology.

Step 1: Collect the test strategy.


Step 2: Identify the project type.

Project Type  | Info Gathering & Analysis | Design | Coding | System Testing | Maintenance
Traditional   | Y                         | Y      | Y      | Y              | Y
Off-the-Shelf | X                         | X      | X      | Y              | X
Maintenance   | X                         | X      | X      | X              | Y

Step 3: Determine the application type. Depending on the application type and requirements, the QA decreases the number of columns in the TRM.
Step 4: Identify risks. Depending on the tactical risks, the QA decreases the number of factors (rows) in the TRM.
Step 5: Determine the scope of the application. Depending on future requirements/enhancements, the QA tries to add back some of the deleted factors (rows in the TRM).
Step 6: Finalize the TRM for the current project.
Step 7: Prepare the test plan for work allocation.

Testing Process:

Fig: Testing process: Test Initiation → Test Planning → Test Design → Test Execution (with defect reporting and regression testing) → Test Closure → Test Report.

PET (Process Experts Tools and Technology): An advanced testing process developed by HCL, Chennai, and approved by the QA Forum of India. It is a refinement form of the V-model.

Fig: PET process: Information gathering (BRS), analysis (S/wRS) and design (HLDD & LLDD) are followed by coding, unit testing and integration testing on the development side. In parallel, the PM/QA drives test initiation and the test lead drives test planning; after studying the S/wRS and design documents, the team does test design. On the initial build, Level-0 (sanity/smoke/TAT) testing and test automation are performed and test batches are created. A batch is selected and executed (Level-1); on any mismatch that batch is suspended and an independent defect report is sent for defect fixing; after bug resolving, each modified build gets regression testing (Level-2) and the next batch is taken up. Otherwise the team proceeds to test closure, final regression / pre-acceptance / release / post-mortem / Level-3 testing, user acceptance testing and sign-off.

Test Planning: After the completion of test initiation, the test plan author concentrates on the test plan:
What to test - Development Plan
How to test - S/wRS
When to test - Design Documents
Who to test - Team Formation

Fig: Test planning flow: the development plan, S/wRS, design documents and TRM, together with team formation and the identified tactical risks, feed into preparing the test plan, which is then reviewed.

1. Team Formation: In general, the test planning process starts with testing team formation, which depends on the factors below:
• Availability of testers
• Test duration
• Availability of test environment resources
These three are dependent factors.

Test Duration: Common market test durations for various types of projects:
• C/S, Web, ERP projects (SAP, VB, Java): small, 3-5 months
• System software (C, C++): medium, 7-9 months
• Machine-critical software (Prolog, LISP): big, 12-15 months
System software projects: networking, embedded systems, compilers, etc. Machine-critical software: robotics, games, knowledge bases, satellites, air traffic.

2. Identify Tactical Risks: After the completion of team formation, the test plan author concentrates on risk analysis and mitigation:
1) Lack of knowledge of the domain
2) Lack of budget
3) Lack of resources (H/w or tools)
4) Lack of test data (amount)
5) Delays in deliveries (server down)
6) Lack of development process rigor
7) Lack of communication (ego problems)

3. Prepare Test Plan


Format:
1) Test Plan Id: A unique number or name.
2) Introduction: About the project.
3) Test Items: Modules.
4) Features to be tested: The responsible modules to test.
5) Features not to be tested: Which ones, and why not.
6) Feature pass/fail criteria: When a feature above is pass/fail.
7) Suspension criteria: Abnormal situations during the testing of the above features.
8) Test environment specifications: The documents to prepare during testing.
9) Test environment: Required H/w and S/w.
10) Testing tasks: The necessary tasks to do before starting testing.
11) Approach: The list of testing techniques to apply.
12) Staff and training needs: The names of the selected testing team.
13) Responsibilities: Work allocation to the selected members.
14) Schedule: Dates and timings.
15) Risks and mitigations: Common non-technical problems.
16) Approvals: Signatures of the PM/QA and the test plan author.

4. Review Test Plan: After completing the test plan document, the test plan author reviews it for completeness and correctness; the selected testers are also involved, to give feedback. In this review meeting the testing team conducts coverage analysis:
• S/wRS-based coverage (what to test)
• Risk-based coverage (from the risk analysis point of view)
• TRM-based coverage (whether this plan covers all the tests given in the TRM)

Test Design: After the completion of the test plan and the required training days, every selected test engineer concentrates on test design for his or her responsible modules. In this phase the test engineer prepares the list of test cases for conducting the defined testing on those modules. There are three basic methods for preparing test cases for core-level testing:
• Business logic based test case design
• Input domain based test case design
• User interface based test case design

Business Logic Based Test Case Design: In general, test engineers write the list of test cases based on the use cases / functional specifications in the S/wRS. A use case in the S/wRS defines how a user uses a specific functionality in the application.


Fig: Test cases are derived from the use cases + functional specifications in the S/wRS (which in turn comes from the BRS); the HLDD, LLDD and coding lead to the .exe under test.

To prepare test cases from use cases, we can follow the approach below:
Step 1: Collect the use cases of the responsible modules.
Step 2: Select a use case and its dependencies (dependent & determinant):
  Step 2-1: Identify the entry condition.
  Step 2-2: Identify the input required.
  Step 2-3: Identify the exit condition.
  Step 2-4: Identify the output / outcome.
  Step 2-5: Study the normal flow.
  Step 2-6: Study the alternative flows and exceptions.
Step 3: Prepare the list of test cases based on the above study.
Step 4: Review the test cases for completeness and correctness.

TestCase Format: After completing test case selection for the responsible modules, the test engineer prepares an IEEE-format entry for every test condition:
TestCase Id: A unique number or name.
TestCase Name: The name of the test condition.
Feature to be tested: Module / feature / service.
TestSuite Id: The id of the parent batch in which this case participates as a member.
Priority: The importance of the test case. P0 - basic functionality; P1 - general functionality (input domain, error handling, ...); P2 - cosmetic test cases. (Ex: P0 - runs on one OS; P1 - runs on different OSs; P2 - look & feel.)
Test Environment: The required H/w and S/w to execute the test case.
Test Effort: The time to execute this test case, in person-hours (e.g. 20 minutes).
Test Duration: The date of execution.
Test Setup: The necessary tasks to do before starting execution of this case.
Test Procedure: The step-by-step procedure to execute this test case.
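One possible way to hold these fields in code is a small data structure; this sketch follows the field list above, and the values are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        case_id: str
        name: str
        feature: str
        suite_id: str
        priority: str              # "P0" | "P1" | "P2"
        environment: str
        effort_minutes: int
        duration: str              # planned date of execution
        setup: str
        procedure: list = field(default_factory=list)   # step-by-step actions

    tc = TestCase("TC_LOGIN_001", "Valid login reaches inbox", "Login", "TS_AUTH",
                  "P0", "Windows + IE", 20, "2004-06-01",
                  "Create a valid user account",
                  ["Open login page", "Enter valid uid/pwd", "Click Login"])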


The test procedure is recorded as a table. The columns Step No., Action, I/p Required and Expected are filled in during test design; the columns Result, Defect Id and Comments are filled in during test execution.

TestCase Pass/Fail Criteria: States when the test case passes and when it fails.

Input Domain Based TestCase Design: To prepare functionality and error handling test cases, test engineers use the use cases or functional specifications in the S/wRS. To prepare input domain test cases, they depend on the data model of the project (ERD & LLDD).
Step 1: Identify the input attributes in terms of size, type and constraints (size: range; type: int, float; constraint: primary key).
Step 2: Identify the critical attributes in that list: those that participate in data retrievals and manipulations.
Step 3: Identify the non-critical attributes: those that are merely input/output.
Step 4: Prepare BVA & ECP for every attribute.

Fig: Data Matrix: one row per input attribute, with ECP (type) columns for the valid and invalid classes, and BVA (size/range) columns for the minimum and maximum values.

User Interface Based TestCase Design: To conduct UI testing, the test engineer writes a list of test cases based on organization-level UI rules and global UI conventions. For preparing these UI test cases they do not study the S/wRS, LLDD, etc. (Functionality test cases source: S/wRS. Input domain test cases source: LLDD.)

UI test cases applicable to all projects:
TestCase 1: Spelling checking.
TestCase 2: Graphics checking (alignment, font, style, text, size, Microsoft 6 rules).
TestCase 3: Are the error messages meaningful? (Error handling testing checks that the related message appears; here we test that the message is easy to understand.)
TestCase 4: Accuracy of the data displayed (WYSIWYG; e.g. amount, date of birth).
TestCase 5: Accuracy of the data in the database as a result of user input. (TC4 is at screen level; TC5 is at database level. Ex: the form shows a balance of 66.666 while the table behind the DSN stores 66.7.)


TestCase 6: Accuracy of the data in the database as a result of external factors. (Ex: a mail with a .gif attachment passes through a mail server that compresses and then decompresses the image; the imported data must still be accurate.)

TestCase 7: Are the help messages meaningful? (The first six test cases belong to UI testing; the seventh belongs to manual support testing.)

Review Testcases: After completing the test case design with the required [IEEE] documentation for the responsible modules, the testing team along with the test lead reviews the test cases for completeness and correctness. In this review the testing team conducts coverage analysis:
1. Business requirements based coverage
2. Use cases based coverage
3. Data model based coverage
4. User interface based coverage
5. TRM based coverage

Fig: Requirements Validation / Traceability Matrix: each business requirement is traced to its sources (use cases, data model, ...), and each source is traced to the test cases that cover it.


Test Execution:

Fig: Test execution flow: the development site releases the initial build to the testing site for Level-0 (sanity/smoke/TAT) testing and test automation; on the stable build, Level-1 (comprehensive) testing produces defect reports; after defect fixing and bug resolving, each modified build (typically 8-9 times) goes through Level-2 (regression) testing; at the end comes Level-3 (final regression) testing.

Test Execution Levels vs. Test Cases:
Level 0: P0 test cases.
Level 1: P0, P1 and P2 test cases, executed as batches.
Level 2: Selected P0, P1 and P2 test cases, with respect to the modifications.
Level 3: Selected P0, P1 and P2 test cases on the final build.

Test Harness = Test Environment + Test Bed.

Build Version Control: A unique numbering system for builds (delivered over FTP or SMTP).

Fig: The build is kept in the softbase on the server and transferred to the test environment via FTP.

After defect reporting, the testing team may receive:
• a modified build, or
• modified programs

To maintain the original and modified builds, the development team uses version control software.

Fig: The server delivers either (1) a modified build, or (2) modified programs that the testing team embeds into the old build in the test environment.

Level 0 (Sanity / Smoke / TAT): After receiving the initial build from the development team, the testing team installs it into the test environment. After the installation, the testing team checks the basic functionality of the build to decide whether complete test execution is possible. During this testing, the team observes the following factors in the initial build:
1. Understandable: The functionality is understandable to the test engineer.
2. Operable: The build works without runtime errors in the test environment.
3. Observable: The tester can estimate process completion and continuation in the build.
4. Controllable: Processes can be started/stopped explicitly.
5. Consistent: Stable navigation.
6. Maintainable: No reinstallation is needed.
7. Simplicity: Short navigation to complete a task.
8. Automatable: The interfaces support automated test script creation.

This Level-0 testing is also called testability or octangle testing (because it is based on 8 factors).

Test Automation: After receiving a stable build from the development team, the testing team concentrates on test automation. Test automation is of two types: complete, and selective (all P0 and carefully selected P1 test cases).


Level-1 (Comprehensive Testing): After receiving the stable build from the development team and completing automation, the testing team starts executing its test cases as batches. A test batch is also known as a test suite or test set; within a batch, the base state of one test case is the end state of the previous test case. During batch execution, test engineers prepare a test log with three types of entries:
1. Passed: All expected values are equal to the actual values.
2. Failed: Some expected value differs from the actual value.
3. Blocked: The test cases this one depends on have failed.

Fig: Test case execution statuses: In Queue → In Progress → Passed / Failed / Blocked / Skip / Partial Pass-Fail → Closed.

Level-2 (Regression Testing): Regression testing is really part of Level-1 testing. During comprehensive test execution, the testing team reports mismatches to the development team as defects. The development team then modifies the code to resolve the accepted defects. When it releases the modified build, the testing team performs regression testing before continuing the remaining comprehensive testing.

Severity: The seriousness of a defect, defined by the tester through impact and criticality; it drives how much regression testing to do. Organizations use three severity levels:
High: Without this mismatch being resolved, the tester is not able to continue the remaining testing (show stopper).
Medium: The tester is able to continue testing, but the defect must be resolved.
Low: The defect may or may not be resolved.
Ex: High: the database is not connecting. Medium: the input domain is wrong (wrong values are also accepted). Low: a spelling mistake.
Ex: X, Y, Z are three dependent modules. A bug in Z that also affects its dependent modules: high. A bug in the full Z module only: medium. A bug in part of the Z module: low.


Regression testing on the modified build, by severity of the resolved bug:
High: all P0, all P1 and selected P2 test cases.
Medium: all P0, selected P1 and some P2 test cases.
Low: some P0, some P1 and some P2 test cases.

Possible ways to do regression testing:
Case 1: If the development team resolved a bug of high severity, the testing team re-executes all P0, all P1 and carefully selected P2 test cases with respect to that modification.
Case 2: If the development team resolved a bug of medium severity, the testing team re-executes all P0, selected P1 [80-90%] and some P2 test cases with respect to that modification.
Case 3: If the development team resolved a bug of low severity, the testing team re-executes some of the P0, P1 and P2 test cases with respect to that modification.
Case 4: If the development team performs modifications due to project requirement changes, the testing team re-executes all P0 and selected other test cases.

Severity vs. Priority: Severity is with respect to functionality; priority is with respect to the customer. Not all defects have the same severity, and not all defects have the same priority. Severity is the seriousness of the defect; priority is the importance of the defect. Severity matters from the project functionality point of view; priority matters from the customer's point of view.
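The case-selection rules above can be summarised as a lookup table; this Python sketch (ours, not a standard tool) maps the severity of a resolved bug to the priority groups to re-execute:

    REGRESSION_SELECTION = {
        "high":   {"P0": "all",  "P1": "all",    "P2": "selected"},
        "medium": {"P0": "all",  "P1": "80-90%", "P2": "some"},
        "low":    {"P0": "some", "P1": "some",   "P2": "some"},
    }

    def cases_to_rerun(severity):
        return REGRESSION_SELECTION[severity.lower()]

    print(cases_to_rerun("High"))   # rerun all P0 and P1, plus selected P2 cases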


Defect Reporting and Tracking: During comprehensive test execution, test engineers report mismatches to the development team as defect reports in IEEE format:
1. Defect Id: A unique number or name.
2. Defect Description: A summary of the defect.
3. Build Version Id: The parent build version number.
4. Feature: Module / functionality.
5. TestCase Name and Description: The name of the failed test case, with a description.
6. Reproducible: Yes / No.
7. If yes: attach the test procedure.
8. If no: attach snapshots and strong reasons.
9. Severity: High / Medium / Low.
10. Priority.
11. Status: New / Reopen (after 3 reopens, new programs are written).
12. Reported by: The name of the test engineer.
13. Reported on: The date of submission.
14. Suggested fix: Optional.
15. Assigned to: The name of the PM.
16. Fixed by: The PM or Team Lead.
17. Resolved by: The name of the developer.
18. Resolved on: The date of solving.
19. Resolution type.
20. Approved by: The signature of the PM.

Defect Age: The time gap between "resolved on" and "reported on".

Defect Submission:

Fig: Large-scale organizations: the test engineer submits defects to the test lead, the test lead to the QA/Test Manager, and from there to the project manager, team lead and developers, exchanging transmittal reports along the way.

Fig: Small-scale organizations: the test engineer submits defects to the test lead, the test lead to the project manager, and the project manager routes them through the team lead to the developers, exchanging transmittal reports along the way.

Defect Status Cycle:

New → Fixed (Open, Reject, Deferred) → Closed; a closed defect can be reopened (Reopen), after which it goes back to Fixed.
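The status cycle can be expressed as a table of allowed transitions; a minimal sketch following the cycle above:

    TRANSITIONS = {
        "new":    {"fixed"},            # the developer opens/rejects/defers, then fixes
        "fixed":  {"closed", "reopen"},
        "reopen": {"fixed"},
        "closed": set(),                # terminal state
    }

    def move(state, nxt):
        if nxt not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition: {state} -> {nxt}")
        return nxt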

Bug Life Cycle: detect defect → reproduce defect → report defect → fix bug → resolve bug → close bug.

Resolution Type:

Fig: The testing team sends the defect report to development; development answers with a resolution type.

There are 12 resolution types:
1. Duplicate: Rejected because the defect is the same as a previously reported defect.
2. Enhancement: Rejected because the defect relates to a future requirement of the customer.
3. H/w Limitation: Rejected because the defect is caused by a hardware limitation.
4. S/w Limitation: Rejected because the defect is caused by a limitation of the software technology.
5. Functions as Designed: Rejected because the coding is correct with respect to the design documents.
6. Not Applicable: Rejected due to a lack of correctness in the defect.
7. No Plan to Fix It: Postponed for the time being (neither accepted nor rejected).
8. Need More Information: The developers want more information before they can fix it (neither accepted nor rejected).
9. Not Reproducible: The developer wants more information because the problem is not reproducible (neither accepted nor rejected).
10. User Misunderstanding: Each side argues the other is wrong; extra negotiation between tester and developer is needed.
11. Fixed: The bug is opened for resolution (accepted).
12. Fixed Indirectly: Deferred for resolution (accepted).

Types of Bugs:

UI bugs (low severity): spelling mistake - high priority; wrong alignment - low priority.
Input domain bugs (medium severity): an object not taking expected values - high priority; an object taking unexpected values - low priority.
Error handling bugs (medium severity): the error message does not appear - high priority; the error message appears but is not understandable - low priority.
Calculation bugs (high severity): intermediate results failure - high priority; final outputs wrong - low priority.
Service level bugs (high severity): deadlock - high priority; improper order of services - low priority.
Load condition bugs (high severity): memory leakage under load - high priority; does not allow the customer-expected load - low priority.
Hardware bugs (high severity): printer not connecting - high priority; invalid printout - low priority.
Boundary-related bugs (medium severity).
Id control bugs (medium severity): wrong version number or logo.
Version control bugs (medium severity): differences between two consecutive versions.
Source bugs (medium severity): mismatches in the help documents.

Test Closure: After the completion of all possible test case execution and the related defect reporting and tracking, the test lead conducts a test execution closure review along with the test engineers. In this review the test lead relies on coverage analysis:
• BRS-based coverage
• Use cases based coverage (modules)
• Data model based coverage (inputs and outputs)
• UI-based coverage (rules and regulations)
• TRM-based coverage (whether the PM-specified tests are covered)


Analysis of the deferred bugs: Whether the deferred bugs are really postponable or not. The testing team also tries to execute the high-priority test cases once again, to confirm the correctness of the master build.

Final Regression Process (a cycle): gather requirements → effort estimation (person/hr) → plan regression → execute regression → report regression.

User Acceptance Testing: After the completion of the test execution closure review and the final regression, the organization concentrates on UAT to collect feedback from the customer or from customer-site-like people. There are two approaches: (1) alpha testing and (2) beta testing.

Sign-Off: After the completion of UAT and the resulting modifications, the test lead creates the Test Summary Report (TSR), which is part of the software release note. The TSR consists of:
1. Test strategy / methodology (what tests)
2. System test plan (schedule)
3. Traceability matrix (mapping requirements to test cases)
4. Automated test scripts (TSL + GUI map entries)
5. Final bug summary report


The final bug summary report is a table with the columns: Bug Id, Description, Found By, Status (Closed / Deferred), Severity, Module / Functionality, Comments.

Case Study (schedule for 5 months):

Deliverable                               | Responsibility                            | Completion Time
TestCase Selection                        | Test Engineer                             | 20-30 days
TestCase Review                           | Test Lead, Test Engineer                  | 4-5 days
RVM / RTM                                 | Test Lead                                 | 1 day
Sanity & Test Automation                  | Test Engineer                             | 20-30 days
Test Execution as Batches                 | Test Engineer                             | 40-60 days
Test Reporting                            | Test Engineer & Test Lead                 | Ongoing during test execution
Communication and Status Reporting        | Everyone in the testing team              | Weekly twice
Final Regression Testing & Closure Review | Test Engineer and Test Lead               | 4-5 days
User Acceptance Testing                   | Customer-site people (with testing team)  | 5-10 days
Test Summary Report (Sign-Off)            | Test Lead                                 | 1-2 days

Recommended reading:
Testing Computer Software - Cem Kaner
Effective Methods for Software Testing - William E. Perry
Software Testing Tools - Dr. K.V.K.K. Prasad
[email protected]

Common interview questions: What are you doing? What type of testing process is used in your company? What test documentation does your organization prepare? What test documentation do you prepare, and what is your involvement in it? What are the key components of your company's test plan? What format do you use for test cases? How does your PM select the types of tests needed for your project? When do you go for automation? What is regression testing, and when do you do it? How do you report defects to the development team? How do you know whether a defect was accepted or rejected? What do you do when your defect is rejected? How do you learn a project without documentation? What is the difference between defect age and build interval period? How do you test without documents? What do you mean by green box testing? Experience on WinRunner; exposure to TestDirector. WinRunner 8/10, LoadRunner 7/10.


Auditing: During testing and maintenance, the testing team conducts audit meetings to estimate status and required improvements. In this auditing process they use three types of measurements and metrics.

Quality Measurement Metrics: These measurements are used by the QA or PM to estimate the achievement of quality in the current project's testing (monthly).

Product Stability: Fig: number of bugs vs. duration: the first 20% of the testing finds 80% of the bugs; the remaining 80% of the testing finds the last 20%.

Sufficiency:
• Requirements coverage
• Type-trigger analysis (mapping between covered requirements and applied tests)

Defect Severity Distribution:
• Organisation trend limit check

Test Management Measurements: These measurements are used by the test lead during test execution of the current project (weekly twice).

Test Status:
• Executed tests
• In progress
• Yet to execute

Delays in Delivery:
• Defect arrival rate
• Defect resolution rate
• Defect aging

Test Effort:
• Cost of finding a defect (e.g. 4 defects per person-day)

Process Capability Measurements: These measurements are used by the quality analyst and test management to improve the capability of the testing process for upcoming projects. (They depend on maintenance-level feedback from earlier projects.)

Test Efficiency:
• Type-Trigger Analysis
• Requirements Coverage

Defect Escapes:
• Type-Phase Analysis (what type of defects the testing team missed, and in which phase of testing)

Test Effort:
• Cost of finding a defect (Ex: 4 defects / person-day)

Static Testing
This topic looks at static testing techniques. These techniques are referred to as "static" because the software is not executed; rather, the specifications, documentation and source code that comprise the software are examined in varying degrees of detail. There are two basic types of static testing: one is people-based and the other is tool-based. People-based techniques are generally known as "reviews", but there are a variety of different ways in which reviews can be performed. The tool-based techniques examine source code and are known as "static analysis". Both of these basic types are described in separate sections below.

What are Reviews? "Reviews" is the generic name given to people-based static techniques. More or less any activity that involves one or more people examining something could be called a review. Reviews are carried out in a variety of different ways across different organisations, and often in several ways within a single organisation. Some are very formal, some are very informal, and many lie somewhere between the two. The chances are that you have been involved in reviews of one form or another. One person can review his or her own work or someone else's. However, it is generally recognised that reviews performed by only one person are not as effective as reviews conducted by a group of people all examining the same document (or whatever it is that is being reviewed).

Review techniques for individuals: Desk checking and proof reading are two techniques that individuals can use to review a document such as a specification or a piece of source code. They are basically the same process: the reviewer double-checks the document or source code on their own. Data stepping is a slightly different process for reviewing source code: the reviewer follows a set of data values through the source code to ensure that the values are correct at each step of the processing.

Review techniques for groups: The static techniques that involve groups of people are generically referred to as reviews. Reviews can vary a lot, from very informal to highly formal, as discussed in more detail shortly. Two examples of types of review are walkthroughs and Inspections. A walkthrough is a form of review that is typically used to educate a group of people about a technical document. Typically the author "walks" the group through the ideas to explain them, so that the attendees understand the content. Inspection is the most formal of all the review techniques. Its main focus is to find faults, and it is the most effective review technique at finding them (although the other types of review also find some faults). Inspection is discussed in more detail below.

Reviews and the test process
Benefits of reviews: There are many benefits from reviews in general. They can improve software development productivity and reduce development timescales. They can also reduce testing time and cost. They can lead to lifetime cost reductions throughout the maintenance of a system over its useful life. All this is achieved (where it is achieved) by finding and fixing faults in the products of development phases before they are used in subsequent phases. In other words, reviews find faults in specifications and other documents (including source code) which can then be fixed before those specifications are used in the next phase of development. Reviews generally reduce fault levels and lead to increased quality. This can also result in improved customer relations.

Reviews are cost-effective: There are a number of published figures to substantiate the cost-effectiveness of reviews. Freedman and Weinberg quote a ten-fold reduction in the faults that come into testing, with a 50% to 80% reduction in testing cost. Yourdon, in his book on Structured Walkthroughs, found that faults were reduced by a factor of ten. Gilb and Graham give a number of documented benefits for software Inspection, including a 25% reduction in schedules, a 28-fold reduction in maintenance cost, and finding 80% of defects in a single pass (with a mature Inspection process) and 95% in multiple passes.

What can be Inspected? Anything written down can be Inspected. Many people have the impression that Inspection applies mainly to code (probably because Fagan's original article was on "Design and code inspection"). However, although Inspection can be performed on code, it gives more value when performed on more "upstream" documents in the software development process. It can be applied to contracts, budgets, and even marketing material, as well as to policies, strategies, business plans, user manuals, procedures and training material. Inspection also applies to all types of system development documentation, such as requirements, feasibility studies and designs. It is also very appropriate for all types of test documentation, such as test plans, test designs and test cases. In fact, even with Fagan's original method, Inspection was found to be very effective when applied to testware.

What can be reviewed? Anything that can be Inspected can also be reviewed, but reviews can apply to more things than just ideas that are written down. Reviews can be done on visions, strategic plans and "big picture" ideas. Project progress can be reviewed to assess whether work is proceeding according to plan. A review is also the place where major decisions may be made, for example about whether or not to develop a given feature. Reviews and Inspections are complementary. Inspection excludes discussion and solution optimising, but these activities are often very important. Any type of review that tries to combine more than one objective tends not to work as well as those with a single focus. It works better to use Inspection to find faults and to use reviews to discuss, come to a consensus and make decisions.


What to review / Inspect? Looking at the 'V' life cycle diagram discussed in Session 2, reviews and Inspections apply to everything on the left-hand side of the V-model. Note that reviews apply not only to the products of development but also to the test documentation that is produced early in the life cycle. We have found that reviewing the business needs alongside the Acceptance Tests works really well: it clarifies issues that might otherwise have been overlooked. This is yet another way to find faults as early as possible in the life cycle, so that they can be removed at the least cost.

Costs of reviews: You cannot gain the benefits of reviews without investing in doing them, and this does have a cost. As a rough guide, something between 5% and 15% of project effort would typically be spent on reviews. If Inspections are being introduced into an organisation, then 15% is a recommended guideline. Once the Inspection process is mature, this may go down to around 5%. Note that 10% is half a day a week. Remember that the cost of reviews always needs to be balanced against the cost of not doing them and finding the faults (which are already there) much later, when they will be much more expensive to fix.

The costs of reviews are mainly in people's time, i.e. it is an effort cost, but the cost varies depending on the type of review. The leader or moderator of the review may need to spend time planning the review (this would not be done for an informal review, but is required for Inspection). The studying of the documents to be reviewed by each participant on their own is normally the main cost (although in practice this may not be done as thoroughly as it should be). If a meeting is held, its cost is the length of the meeting times the number of people present. The fixing of any faults found, or the resolution of issues raised, may or may not be followed up by the leader. In the more formal review techniques, metrics or statistics are recorded and analysed to ensure the continued effectiveness and efficiency of the review process. Process improvement should also be a part of any review process, so that lessons learned in a review can be folded back into the development and testing processes. (Inspection formally includes process improvement; most other forms of review do not.)

Types of review
We have now established that reviews are an important part of software testing. Testers should be involved in reviewing the development documents that tests are based on, and should also review their own test documentation. In this section we will look at different types of review, and at the activities that are done to a greater or lesser extent in all of them. We will also look at the Inspection process in a bit more detail, as it is the most effective of all review types.

Characteristics of different review types
Informal review: As its name implies, this is very much an ad hoc process. Normally it simply consists of someone giving their document to someone else and asking them to look it over. A document may be distributed to a number of people, and the author would hope to receive back some helpful comments. It is a very cheap form of review because there is no monitoring of metrics, no meeting and no follow-up. It is generally perceived to be useful, and compared to not doing any reviews at all, it is. However, it is probably the least effective form of review (although no one can prove that, since no measurements are ever taken!).


Technical review or peer review: A technical review may have varying degrees of formality. This type of review focuses on technical issues and technical documents. A peer review would exclude managers from the review. The success of this type of review typically depends on the individuals involved: they can be very effective and useful, but sometimes they are very wasteful (especially if the meetings are not well disciplined), and they can be rather subjective. Often this level of review will have some documentation, even if just a list of issues raised. Sometimes metrics will be kept. This type of review can find important faults, but it can also be used to resolve difficult technical problems, for example deciding on the best way to implement a design.

Decision-making review: This type of review is closely related to the previous one (in fact the syllabus does not distinguish them). In this type of review, which may be technical or managerial, the focus is on discussing the issues, coming to a consensus and making decisions, for example about whether a given feature should be included in the next release or not.

Walkthrough: A walkthrough is typically led by the author of a document, for the purpose of educating the participants about the content so that everyone understands the same thing. A walkthrough may include "dry runs" of business scenarios to show how the system would handle certain specific situations. For technical documents, it is often a peer group technique.

Inspection: An Inspection is the most formal of the formal review techniques. There are strict entry and exit criteria to the Inspection process; it is led by a trained Leader or moderator (not the author); and there are defined roles for searching for faults, based on defined rules and checklists. Metrics are a required part of the process.

Characteristics of reviews in general
Objectives and goals: The objectives and goals of reviews normally include the verification and validation of documents against specifications and standards. Some types of review also have the objective of achieving a consensus among the attendees (but not Inspection). Some types of review have process improvement as a goal (this is formally included in Inspection).

Activities: There are a number of activities that may take place for any review. The planning stage is part of all except informal reviews. In Inspection (and possibly other reviews), an overview or kickoff meeting is held to put everyone "in the picture" about what is to be reviewed and how the review is to be conducted. This pre-meeting may be a walkthrough in its own right. The preparation or individual checking is usually where the greatest value is gained from a review process. Each person spends time on the review document (and related documents), becoming familiar with it and/or looking for faults. In some reviews this part of the process is optional (at least in practice); in Inspection it is required. Most reviews include a meeting of the reviewers. Informal reviews probably do not, and Inspection does not hold a meeting if it would not add economic value to the process. Sometimes the meeting time is the only time people actually look at the document.

Sometimes the meetings run on for hours and discuss trivial issues. The best reviews (of any level of formality) ensure that value is gained from the meeting. The more formal review techniques include follow-up of the faults or issues found, to ensure that action has been taken on everything raised (Inspection does, as do some forms of technical or peer review). The more formal review techniques also collect metrics on cost (time spent) and benefits achieved.

Roles and responsibilities: For any of the formal reviews (i.e. not informal reviews), someone is responsible for the review of a document (the individual review cycle). This may be the author of the document (walkthrough) or an independent Leader or moderator (formal reviews and Inspection). The responsibility of the Leader is to ensure that the review process works. He or she may distribute documents, choose reviewers, mentor the reviewers, call and lead the meeting, perform follow-up and record relevant metrics. The author of the document being reviewed or Inspected is generally included in the review, although there are some variants that exclude the author. The author actually has the most to gain from the review, in terms of learning how to do their work better (if the review is conducted in the right spirit!). The reviewers or Inspectors are the people who bring added value to the process by helping the author to improve his or her document. In some types of review, individual checkers are given specific types of fault to look for, to make the process more effective. Managers have an important role to play in reviews. Even if they are excluded from some types of peer review, they can (and should) review management-level documents with their peers. They also need to understand the economics of reviews and the value that they bring, and they need to ensure that reviews are done properly, i.e. that adequate time is allowed for them in project schedules. There may be other roles in addition to these, for example an organisation-wide co-ordinator who would keep and monitor metrics, or someone to "own" the review process itself - this person would be responsible for updating forms, checklists, etc.

Deliverables: The main deliverable from a review is the set of changes to the document that was reviewed. The author of the document normally makes these edits. For Inspection, the changes would be limited to faults found as violations of accepted rules. In other types of review, the reviewers suggest improvements to the document itself. Generally the author can either accept or reject the changes suggested. If the author does not have the authority to change a related document (e.g. if the review found that a correct design conflicted with an incorrect requirement specification), then a change request may be raised to change the other document(s). For Inspection, and possibly other types of review, process improvement suggestions are a deliverable. These include improvements to the review or Inspection process itself and also improvements to the development process that produced the document just reviewed. (Note that these are improvements to processes, not to the reviewed documents.) The final deliverable (for the more formal types of review, including Inspection) is the set of metrics about the costs, faults found, and benefits achieved by the review or Inspection process.


Pitfalls: Reviews are not always successful. They are sometimes not very effective, so faults that could have been found slip through the net. They are sometimes very inefficient, so that people feel they are wasting their time. Often insufficient thought has gone into the definition of the review process itself - it just evolves over time. One of the most common causes of poor quality in the review process is lack of training, and this is more critical the more formal the review. Another problem with reviews is having to deal with documents of poor quality. Entry criteria to the review or Inspection process can ensure that reviewers' time is not wasted on documents that are not worthy of the review effort. A lack of management support is a frequent problem: if managers say that they want reviews to take place but don't allow any time in the schedules for them, this is only "lip service", not commitment to quality. Long-term, it can be disheartening to become expert at detecting faults if the same faults keep being injected into every newly written document. Process improvements are the key to long-term effectiveness and efficiency.

Inspection
Typical reviews versus Inspection: There are a number of differences between the way most people practise reviews and the Inspection process as described in Software Inspection by Gilb and Graham (Addison-Wesley, 1993). In a typical review, the document is given out in advance, there are typically dozens of pages to review, and the instructions are simply "Please review this." In Inspection, it is not just the document under review that is given out in advance, but also its source or predecessor documents. The number of pages on which to focus the Inspection is closely controlled, so that Inspectors (checkers) check a limited area in depth - a chunk or sample of the whole document. The instructions given to checkers are designed so that each individual checker will find the maximum number of unique faults. Special defect-hunting roles are defined, and Inspectors are trained in how to be most effective at finding faults.

In typical reviews, some of the reviewers have time to look through the document before the meeting and some do not. The meeting is often difficult to arrange and may last for hours. In Inspection, it is an entry criterion to the meeting that each checker has done the individual checking. The meeting is highly focused and efficient; if it would not be economic, a meeting may not be held at all, and when held it is limited to two hours.

In a typical review there is often a lot of discussion, some about technical issues but much about trivia. Comments are often mainly subjective, along the lines of "I don't like the way you did this" or "Why didn't you do it this way?" In Inspection, the process is objective. The only thing that may be raised as an issue is a potential violation of an agreed Rule (the Rulesets are what the document should conform to). Discussion is severely curtailed in an Inspection meeting or postponed until the end. The Leader's role is very important in keeping the meetings on track and focused, and in pulling people away from trivia and pointless discussion. Many people keep on doing reviews without knowing whether they are worthwhile or not. Every activity in the Inspection process is done only if its economic value is continuously proven.

Inspection is more: Inspection contains many mechanisms that are additional to those found in other formal reviews. These include the following:
• Entry criteria, to ensure that we don't waste time Inspecting an unworthy document;
• Training, for maximum effectiveness and efficiency;
• An optimum checking rate, to get the greatest value out of the time spent by looking deep;
• Prioritising the words: Inspect the most important documents and their most important parts;
• Standards: standards are used in the Inspection process, and there are a number of Inspection standards as well;
• Process improvement, which is built in to the Inspection process;
• Exit criteria, to ensure that the document is worthy and that the Inspection process was carried out correctly.
One of the most powerful exit criteria is the quantified estimate of the remaining defects per page. This may be, say, 3 per page initially, but it can be brought down by orders of magnitude over time.

Inspection is better: Typical reviews are probably only 10% to 20% effective at detecting existing faults. The return on investment is usually not known, because no one keeps track even of their cost. When Inspection is still being learned, its effectiveness is around 30% to 40% (this is demonstrated in Inspection training courses). Once Inspection is well established and mature, the process can find up to 80% of faults in a single pass and 95% in multiple passes. The return on investment ranges from 6 to 30 hours for every hour spent.

The Inspection process: The diagram shows a product document infected with faults. The document must pass through the entry gate before it is allowed to start the Inspection process. The Inspection Leader performs the planning activities. A kickoff meeting is held to "set the scene" about the documents and the process. The individual checking is where most of the benefits are gained: 80% or more of the faults found will be found in this stage. A meeting is held (if economic). The editing of the document is done by the author or the person now responsible for the document. This involves redoing some of the activities that produced the document initially, and it may also require change requests to documents not under the control of the editor. Process improvement suggestions may be raised at any time, for improvements either to the Inspection process or to the development process. The document must pass through the exit gate before it is allowed to leave the Inspection process. There are two aspects to investigate here: is the product document now ready (e.g. has some action been taken on all issues logged), and was the Inspection process carried out properly? For example, if the checking rate was too fast, then the checking has not been done properly. A gleaming new improved document is the result of the process, but there is still a "blob" on it: it is not economic to be 100% effective in Inspection. At least with Inspection you consciously predict the level of remaining faults rather than fallaciously assuming that they have all been found!

How the checking rate enables deep checking in Inspection: There is a dramatic difference between Inspection and normal reviews, and that is in the depth of checking. This is illustrated by the picture of a document; initially no faults are visible. Typically in reviews, the time available and the size of the document determine the checking rate. For example, if you have 2 hours available for a review and the document is 100 pages long, then the checking rate will be 50 pages per hour. (Any two of these three factors determine the third.) This is equivalent to "skimming the surface" of the document. We will find some faults - in this example we found one major and two minor faults. Our typical reaction is then to think: "This review was worthwhile, wasn't it - it found a major fault. Now we can fix that and the two minor faults, and the document will be OK." Think: are we missing anything here?

Inspection is different. We do not take any more time, but it is the optimum rate for the type of document that determines the size of the portion that will be checked in detail. So if the optimum rate is one page per hour and we have two hours, then the size of the sample or chunk will be 2 pages. (Note that the optimum rate needs to be established over time for different types of document and depends on a number of factors, and it is based on prioritised words - a logical page rather than a physical page. Of course it doesn't take an hour just to read a single page, but the checking done in Inspection includes comparing each paragraph or sentence on the target page with all source documents, checking each paragraph or phrase against the relevant rule sets, both generic and specific, working through checklists for different role assignments, as well as the time to read around the target page to set the context. If checking is done to this level of thoroughness, it is not at all difficult to spend an hour on one page!)

How does this depth-oriented approach affect the faults found? In the picture, we have gone deep in the Inspection on a limited number of pages. We have found the major fault found in the other review plus two (other) minors, but we have also found a deep-seated major fault which we would never have seen or even suspected if we had not spent the time to go deep. There is no guarantee that the most dangerous faults are lying near the surface! When the author comes to fix this deep-seated fault, he or she can look through the rest of the document for similar faults, and all of them can then be corrected. So in this example we will have corrected 5 major faults instead of one. This gives tremendous leverage to the Inspection process - you can fix faults you didn't find!

Inspection surprises: To summarise the Inspection process, there are a number of things about Inspection that surprise people. The fundamental importance of the Rules is what makes Inspection objective rather than a subjective review. The Rules are democratically agreed as applying (this helps to defuse author defensiveness), and by definition a fault is a Rule violation. The slow checking rates are surprising, but the value gained by depth gives far greater long-term benefit than surface-skimming reviews that miss major deep-seated problems. The strict entry and exit criteria help to ensure that Inspection gives value for money.
The logging rates are much faster than in typical reviews (one item every 30 to 60 seconds; typical reviews log one item every 3 to 10 minutes). This ensures that the meeting is very efficient. One reason this works is that final responsibility for all changes is given fully to the author, who has total responsibility for the final classification of faults as well as the content of all fixes.


More information on Inspection can be found in Software Inspection, Tom Gilb and Dorothy Graham, Addison-Wesley, 1993, ISBN 0-201-63181-4.

Static analysis
What can static analysis do? Static analysis is a form of automated testing. It can check for violations of standards and can find things that may or may not be faults. Static analysis is descended from compiler technology; in fact, many compilers have static analysis facilities available for developers to use if they wish. There are also a number of stand-alone static analysis tools for various programming languages. Like a compiler, a static analysis tool analyses the code without executing it, and it can alert the developer to various things such as unreachable code and undeclared variables. Static analysis tools can also compute various metrics about the code, such as cyclomatic complexity.

Data flow analysis: Data flow analysis is the study of program variables. A variable is basically a location in the computer's memory that has a name, so that the programmer can refer to it conveniently in the source code. When a value is put into this location, we say that the variable is "defined". When that value is accessed, we say that it is "used". For example, in the statement "x = y + z", the variables y and z are used, because the values they contain are being accessed and added together. The result of the addition is then put into the memory location called "x", so x is defined. The significance of this is that static analysis tools can perform a number of simple checks. One such check is to ensure that every variable is defined before it is used. If a variable is not defined before it is used, the value it contains may be different every time the program is executed, and in any case it is unlikely to contain the correct value. This is an example of a data flow fault. Another check a static analysis tool can make is to ensure that every time a variable is defined it is used somewhere later in the program. If it isn't, why was it defined in the first place? This is known as a data flow anomaly, and although it can be perfectly harmless, it can also indicate that something more serious is at fault.
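To make both checks concrete, here is a small C-like fragment (written in TSL-style syntax, since TSL is the scripting language used throughout this material; the variable names are purely illustrative):

    total = subtotal + tax;   # data flow fault if subtotal or tax was never defined before this line
    rate = 0.15;              # rate is defined here...
    result = price * 2;       # ...but rate is never used afterwards - a data flow anomaly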
Control flow analysis: Control flow analysis can find infinite loops, inaccessible code and many other suspicious aspects. However, not everything found is necessarily a fault; defensive programming may result in code that is technically unreachable.

Cyclomatic complexity: Cyclomatic complexity is related to the number of decisions in a program or control flow graph. The easiest way to compute it is to count the number of decisions (diamond-shaped boxes) on a control flow graph and add 1. Working from code, count the total number of IFs and loop constructs (DO, FOR, WHILE, REPEAT) and add 1. Cyclomatic complexity does reflect to some extent how complex a code fragment is, but it is not the whole story.

Other static metrics: Lines of code (LOC, or KLOC for thousands of LOC) is a measure of the size of a code module. Operands and operators is a very detailed measurement devised by Halstead, but it is not much used now. Fan-in is related to the number of modules that call (in to) a given module. Modules with high fan-in are found at the bottom of hierarchies, or in libraries where they are frequently called.

Modules with high fan-out are typically at the top of hierarchies, because they call out to many modules (e.g. the main menu). Any module with both high fan-in and high fan-out probably needs re-designing. Nesting levels relate to how deeply statements are nested within other IF statements. This is a good metric to have in addition to cyclomatic complexity, since highly nested code is harder to understand than linear code, but cyclomatic complexity does not distinguish them. Other metrics include the number of function calls, and there are a number of metrics specific to object-oriented code.

Limitations and advantages: Static analysis has its limitations. It cannot distinguish "fail-safe" code from real faults or anomalies, and it may create a lot of spurious failure messages. Static analysis tools do not execute the code, so they are not a substitute for dynamic testing and are not related to real operating conditions. However, static analysis tools can find faults that are difficult to see, and they give objective quality information about the code. We feel that all developers should use static analysis tools, since the information they give can reveal faults very early, when they are very cheap to fix.

WinRunner 7.0
• Developed by Mercury Interactive
• A functionality testing tool (not suitable for performance, usability or security testing)
• Supports client/server and web technologies (VB, VC++, Java, D2K, PowerBuilder, Delphi, HTML etc.)
• WinRunner does not support .NET, XML, SAP, PeopleSoft, Maya, Flash, Oracle Applications etc.
• To support .NET, XML, SAP, PeopleSoft, Maya, Flash, Oracle Applications etc. we can use QTP (Quick Test Professional)
• QTP is an extension of WinRunner.

WinRunner Recording Process:


Learning -> Recording -> Edit Script -> Run Script -> Analyze Test Results

Learning: Recognition of the objects and windows in your application by the testing tool is called learning.
Recording: The test engineer records the manual process in WinRunner in order to automate it.
Edit Script: The test engineer inserts the required checkpoints into the recorded test script.
Run Script: The test engineer executes the automated test script to get results.
Analyze Results: The test engineer analyzes the test results to concentrate on defect tracking.

[Figure: a login window with User Id and Password text boxes and an Ok button.]
Exp: Ok is enabled only after entering the user id and password.

Note: WinRunner 7.0 provides an auto-learning facility to recognize the objects and windows in your project without your interaction. Every statement ends with ; as in C.

Test Script: A test script consists of navigational statements and checkpoints. The WinRunner scripting language is called TSL (Test Script Language) and is C-like.
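A context-sensitive recording of the login example above might produce a script along these lines (a minimal sketch; the window and object names are assumptions, since WinRunner records whatever names your build exposes):

    set_window ("Login", 5);        # focus to the Login window
    edit_set ("User Id", "naga");   # type the user id
    password_edit_set ("Password", password_encrypt ("mercury"));  # password is recorded encrypted
    button_press ("Ok");            # press the Ok button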


Add-in Manager: This window provides the list of WinRunner-supported technologies with respect to the purchased license.
Note: If all the options in the Add-in Manager are off, by default WinRunner supports the VB and VC++ interfaces (Win32 API).

Recording Modes: To record business operations (navigations) in WinRunner we can use two recording modes:
1. Context Sensitive mode (the default mode)
2. Analog mode

Analog Mode: To record mouse pointer movements on the desktop, we can use this mode. In analog mode the tester maintains a constant monitor resolution and a constant application position during recording and running. Application areas: digital signatures, drawing graphs, image movements.
Note:
1. In analog mode, WinRunner records mouse pointer movements with respect to desktop co-ordinates. For this reason, the test engineer keeps the corresponding context-sensitive window in the same position during recording and running.
2. If you want to use analog mode for recording, keep the monitor resolution constant during recording and running.

move_locator_track(): WinRunner uses this function to record the mouse pointer movements on the desktop in one unit of time.
Syntax: move_locator_track(track number);
By default the track number starts at 1. It is based not on time but on operations. The track number identifies the desktop co-ordinates in which you operate the mouse; it is effectively a memory location that stores the mouse co-ordinates.

mtype(): WinRunner uses this function to record mouse button operations on the desktop.
Syntax: mtype("<key on the mouse used> + / -");
Ex: mtype ("+");

type(): We can use this function to record keyboard operations in analog mode.
Syntax: type("typed characters" / "ASCII notation");
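Putting these together, an analog recording might replay roughly as follows (a sketch only; per the syntax above, "+" is taken as a button press and "-" as a release, and the pointer track data itself is stored internally by WinRunner):

    move_locator_track (1);  # replay the recorded pointer movement for track 1
    mtype ("+");             # mouse button press, as recorded
    mtype ("-");             # mouse button release
    type ("hello");          # keyboard input recorded in analog mode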


Context Sensitive Mode: To record mouse and keyboard operations on the application build, we can use this mode; it is the default mode. In general, the functionality test engineer creates automation test scripts in context-sensitive mode with the required checkpoints. In this mode WinRunner records the application operations with respect to objects and windows.
Ex:
Focus to a window: set_window("Window Name", time);
Text box: edit_set("Edit Name", "Typed Characters");
Password text box: password_edit_set("Pwd Object", "Encrypted Pwd");
Push button: button_press("Button Name");
Radio button: button_set("Button Name", ON); or button_set("Button Name", OFF);
Check box: button_set("Button Name", ON); or button_set("Button Name", OFF);
List / combo box: list_select_item("List1", "Selected Item");
Menu: menu_select_item("Menu Name; Option Name");

Base State: The application state in which a test starts is called the base state.
End State: The application state in which a test stops is called the end state.
Call State: An intermediate state of the application between the base state and the end state is called a call state.

Functionality Testing Techniques:
• Behavioral Coverage (checking object properties)
• Input Domain Coverage (correctness of the size and type of every input object)
• Error Handling Coverage (preventing negative navigation)
• Calculations Coverage (correctness of output values)
• Backend Coverage (data validation and data integrity of database tables)
• Service Levels (order of functionality or services)
• Successful Functionality (a combination of all of the above)

Check Points: WinRunner is a functionality testing tool and provides a set of facilities to cover the above sub-tests. To automate them, we can use 4 checkpoints in WinRunner:
1. GUI checkpoints
2. Bitmap checkpoints
3. Database checkpoints
4. Text checkpoints

GUI Check Point: To automate checks on the behavior of objects we can use this checkpoint. It consists of three sub-options:
1. For Single Property
2. For Object/Window
3. For Multiple Properties

For Single Property: To test a single property of an object we can use this option.
Navigation: select a position in the script, Create menu, GUI checkpoint, For Single Property, select the testable object (double click), select the required property with its expected value, click Paste.
Ex: the Update Order button:
Focus to Window -> Update Order disabled
Open a Record -> Update Order disabled
Perform a Change -> Update Order enabled

Syntax: object_check_info("Object Name", "Property", Expected value);
Ex: button_check_info("Update Order", "enabled", 0);
If the expected value is numeric there is no need for double quotes; if it is a string, place it between double quotes. By default WinRunner takes any value as a string in double quotes.

Problem: On focus to the window, Item No should be focused, and Ok should be enabled only after filling Item No and Quantity.
[Figure: "NagaRaju Shopping" window with Item No and Quantity text boxes and an Ok button.]

Problem: When you select an item in Fly From, the number of items in Fly To should equal the number of items in Fly From minus 1 (i.e. if you select an item in one list box, the item count of the next list box decreases by 1).
[Figure: "NagaRaju Journey" window with Fly From and Fly To list boxes and an Ok button.]

Problem: On focus to the window, Ok should be disabled; after entering only Roll No, only Name, or only Class, Ok should remain disabled.
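A sketch of how the Fly From / Fly To expectation in the second problem above might be automated (the window, list names and selected item are assumptions; tl_step(), described later in this material, reports the result):

    set_window ("NagaRaju Journey", 5);
    list_get_info ("Fly From", "count", from_count);   # items in Fly From
    list_select_item ("Fly From", "Bombay");           # pick any item (illustrative)
    list_get_info ("Fly To", "count", to_count);       # items in Fly To
    if (to_count == from_count - 1)
        tl_step ("fly check", 0, "Fly To count is Fly From count minus 1 - pass");
    else
        tl_step ("fly check", 1, "Fly To count mismatch - fail");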

[Figures: the "NagaRaju Shopping" window (Item No, Quantity, Ok) and a window with three dependent list boxes (List1, List2, List3) and an Ok button.]

Problem: If Type is A, Age is focused; if Type is B, Gender is focused; if Type is C, Qualification is focused; otherwise Others is focused. (Use a switch statement.)
[Figure: a window with a Type field followed by Age, Gender, Qualification and Others fields.]

switch (x)
{
    case "A":
        edit_check_info ("Age", "focused", 1);
        break;
}
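Extending the fragment above to all four branches might look like this (a sketch; the object names follow the problem statement):

    switch (x)
    {
        case "A":
            edit_check_info ("Age", "focused", 1);
            break;
        case "B":
            edit_check_info ("Gender", "focused", 1);
            break;
        case "C":
            edit_check_info ("Qualification", "focused", 1);
            break;
        default:
            edit_check_info ("Others", "focused", 1);
    }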

[Figure: a window with a list box (List), a text object (Text) and an Ok button.]

Exp: The item selected in the list box appears in the text box after clicking the Ok button.
Exp: The item selected in List1 of the Sample1 window appears in the text object of the Sample2 window after clicking the Display button.
[Figures: Sample1 window (List1, Ok) and Sample2 window (Display, Text).]

[Figure: "NagaRaju Employee" window with Emp No, Dept No, BSal and Comm fields and an Ok button.]
Problem: If basic salary >= 10000 then commission = 10% of basic salary; else if basic salary is between 5000 and 10000 then commission = 5% of basic salary; else if basic salary < 5000 then commission = Rs. 200.

Problem: If Total >= 800 then Grade = A; else if Total is between 700 and 800 then Grade = B; else Grade = C.
[Figure: a window with Roll No and Grade fields and an Ok button.]
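A sketch of how the commission rules above might be verified (the window and object names are assumptions; obj_get_text(), covered under text checkpoints below, captures the displayed values and tl_step() reports the result):

    set_window ("NagaRaju Employee", 5);
    obj_get_text ("B Sal", bsal);      # capture the basic salary shown on screen
    obj_get_text ("Comm", comm);       # capture the commission shown on screen
    if (bsal >= 10000)
        expected = bsal * 0.10;
    else if (bsal >= 5000)
        expected = bsal * 0.05;
    else
        expected = 200;
    if (comm == expected)
        tl_step ("comm check", 0, "commission is correct - pass");
    else
        tl_step ("comm check", 1, "expected " & expected & " but got " & comm);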


For Object/Window: To test more than one property of a single object, we can use this option.
Ex: the Update Order button:
Focus to Window -> disabled
Open a Record -> disabled
Perform a Change -> enabled and focused

Syntax: obj_check_gui("Object Name", "Check List File.ckl", "Expected Values File", time to create);
In the above syntax the checklist file specifies the list of properties of the single object to test; its extension is .ckl. The expected values file specifies the expected values for the selected properties; its extension is .txt.
Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);

For Multiple Objects: To test more than one property of more than one object in a single checkpoint, we can use this option. To create this checkpoint the tester selects multiple objects in a single window.
Ex:
                Focus to Window    Open a Record    Perform a Change
Insert Order    disabled           disabled         disabled
Update Order    disabled           disabled         enabled & focused
Delete Order    disabled           enabled          enabled

Navigation: select a position in the script, Create menu, GUI checkpoint, For Multiple Objects, click Add, select the testable objects, right-click to release, specify the expected values for the required properties of every selected object, click OK.
Syntax: win_check_gui("Window Name", "Check List File.ckl", "Expected Values File", time to create);
Ex: win_check_gui("Flight Reservation", "list3.ckl", "gui3", 1);

Case Study: Which properties do you check for which objects?
Object Type        Properties
Push Button        Enabled, Focused
Radio Button       Status (On, Off)
Check Box          Status (On, Off)
List Box           Count (number of items in the list box), Value (currently selected value)
Table Grid         Rows, Columns, Table Content
Text / Edit Box    Enabled, Focused, Value, Range, Regular Expression, Date Format, Time Format

Changing Check Points: WinRunner allows us to change existing checkpoints. There are two kinds of change, needed because of sudden project changes or tester mistakes:
1. Changing expected values
2. Adding new properties to test

Changing expected values: WinRunner allows you to change the expected values in existing checkpoints.
Navigation: execute the test script, click Results, change the expected values in the results window as required, click OK, and re-execute the test script to get the right result.

Adding new properties to test: Sometimes the test engineer adds extra properties to an existing checkpoint because the test was incomplete.
Navigation: Create menu, Edit GUI Checklist, select the checklist file name, click OK, select the new properties to test, click OK to overwrite, change the run mode to Update and click Run (the current values are captured as expected values), then run in Verify mode to get the results, and change the results if required.
[Figure: a checklist dialog showing the properties Enabled, Focused and Value with the values ON, OFF and the default value.]

Running Modes in WinRunner:
Verify mode: WinRunner compares the expected values with the actual values.
Update mode: the current values are captured as expected values.
Debug mode: runs test scripts line by line.

During GUI checkpoint creation WinRunner creates the checklist files and expected values files on the hard disk. WinRunner keeps test scripts by default in the tmp folder:
Script: c:\program files\mi\wr\tmp\testname\script
Checklists: c:\program files\mi\wr\tmp\testname\chklist\list1.ckl
Expected values: c:\program files\mi\wr\tmp\testname\exp\gui1

Input Domain Coverage: Range and Size



Navigation: Create menu, GUI checkpoint, For Object/Window, select the object, select the Range property, enter the From and To values, click OK.
Syntax: obj_check_gui("Object Name", "Check List File.ckl", "Expected Values File", time to create);
Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);
[Figure: "NagaRaju Sample" window with an Age text box.]

Input Domain Coverage: Valid and Invalid Classes
Navigation: Create menu, GUI checkpoint, For Object/Window, select the object, select the Regular Expression property, enter the expected expression (e.g. []*), click OK.
Syntax: obj_check_gui("Object Name", "Check List File.ckl", "Expected Values File", time to create);
Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);

Problem: The Name text box should allow only the following classes of input:
[Figure: "NagaRaju Sample" window with a Name text box.]
1. Alphabets in lower case with an initial capital only
2. Alphanumeric, starting and ending with alphabets only
3. Alphabets in lower case, starting with R and ending with o only
4. Alphabets in lower case with an underscore in the middle
5. Alphabets in lower case with a space or an underscore in the middle
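Possible regular expressions for the five classes above (suggested patterns only; the exact metacharacters accepted by the Regular Expression property may depend on the WinRunner version):

    1. [A-Z][a-z]*
    2. [a-zA-Z][a-zA-Z0-9]*[a-zA-Z]
    3. R[a-z]*o
    4. [a-z]+_[a-z]+
    5. [a-z]+[ _][a-z]+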


Bitmap Check Point: This is an optional checkpoint in a functionality testing tool. The tester can use it to compare images, logos, graphs and other graphical objects (such as signatures). This checkpoint has two sub-options:
1. For Object/Window (entire image testing)
2. For Screen Area (part-of-image testing)
These options support testing on static images only; WinRunner does not support dynamic images developed using Flash, Maya, etc.

For Object/Window: To compare an expected image with the actual image in the application build, we can use this option.
Navigation: select a position in the script, Create menu, Bitmap checkpoint, For Object/Window, select the image object.
Syntax: obj_check_bitmap("Image Object Name", "Expected image file.bmp", time to create the checkpoint);
Ex: win_check_bitmap("About Flight Reservation System", "Img1", 1);
Run on different versions: the expected image is captured at record time, the actual at run time, and the result shows the differences.

For Screen Area (part of image testing): To compare an expected image region with the actual image in the application build, we can use this option.
Navigation: select a position in the script, Create menu, Bitmap checkpoint, For Screen Area, select the required region of the testable image, right-click to release.
Syntax: obj_check_bitmap("Image Object Name", "Image file.bmp", time to create the checkpoint, x, y, width, height);
Ex: win_check_bitmap("About Flight Reservation System", "Img2", 1, 191, 29, 122, 71);

Note: TSL supports a variable number of parameters per function, like function overloading. GUI checkpoints are obligatory in every project's functionality testing; whether bitmap checkpoints are used depends on the requirements.

Database Check Point: To conduct backend testing using WinRunner we can use this option.



Back End Testing: Validating the completeness and correctness of the impact of front-end operations on the backend tables. This process is also known as database testing; in general, backend testing means validating the data validation and data integrity of the database. To automate this test, the database checkpoint provides three sub-options:
1. Default Check (depends on content)
2. Custom Check (depends on row count, column count and content)
3. Runtime Record Check (new option in WinRunner 7.0)

[Figure: the application's front end connects to the back-end database through a DSN.]

Default Check: To check data validation and data integrity in the database depending on content, we can use this option.
DSN: Data Source Name. It is a connection string between the front end and the back end, and it maintains the connection process.
Steps:
1. Connect to the database
2. Execute the select statement
3. Return the results in an Excel sheet
4. Analyze the results manually

[Figure: the database checkpoint wizard sits between the front end and the back end - (1) connect through the DSN, (2) run the select statement against the database, (3) return the results.]

In bitmap checking you test between two versions of images.


In GUI checking you test the same application, but against its expected behavior. In database checking you test twice on the original data. To conduct this testing, the test engineer collects some information from the development team:
• Connection string or DSN
• Table definitions or data dictionary
• Mapping between the front-end forms and the backend tables

Database Testing Process:
1. Create the database checkpoint (the current content of the database is captured as expected).
2. Perform insert / delete / update operations through the front end.
3. Execute the database checkpoint (the current content of the database is captured as actual).

Navigation: as with GUI and bitmap checkpoints, start by selecting a position in the script. Create menu, Database checkpoint, Default check, specify the connection to the database (ODBC / Data Junction), select the SQL statement (c:\PF\MI\WR\temp\testname\msqr1.sql), click Next, click Create to select the DSN, write the select statement (select * from orders), click Finish.
Syntax: db_check("Check List File.cdl", "Query Result File.xls" (an Excel file));
Ex: db_check("list5.cdl", "dbvf5");
Criteria: an expected difference is a pass; a wrong difference is a fail. What was updated indicates data validation; who updated, and when, indicates data integrity. A new record is shown in green; a modified record in yellow.

Custom Check: The test engineer uses this option to conduct backend testing depending on the row count, the column count, the table content, or a combination of these three properties. In a default checkpoint the property is Content and the expected value is the content; in a custom checkpoint the property can be, for example, Rows Count, with the number of rows as the expected value. During custom checkpoint creation WinRunner provides a facility to select these properties, but in general test engineers mostly use the default check option, because the content is also sufficient to derive the number of rows and columns.
Syntax: db_check("Check List File.cdl", "Query Result File.xls");
Ex: db_check("list11.cdl", "dbvf8");
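A sketch of the database testing process above against the Flight Reservation sample application (the checkpoint files come from the recorded checkpoint; the menu path and field name are assumptions):

    # the database checkpoint was created earlier, capturing the expected table content
    set_window ("Flight Reservation", 5);
    menu_select_item ("File;New Order");       # perform an insert through the front end
    edit_set ("Name", "Naga");                 # fill in the order (field name assumed)
    button_press ("Insert Order");
    db_check ("list5.cdl", "dbvf5");           # compare the current table content against expected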


[Figure: an example query result sheet with columns A and B holding rows (1, 1), (2, 2), (3, 3); the expected difference is a new row (X, Y).]

Front end - programmers (programming division); back end - database administrators (DB division). The front-end object names should be understandable to the end user (WYSIWYG).

Runtime Record Checkpoint: Sometimes the test engineer uses this option to find the mapping between front-end objects and backend columns; it is an optional checkpoint.
Navigation: Create menu, Database checkpoint, Runtime Record Check, specify the SQL statement, click Next, click Create to select the DSN, write a select statement with the doubtful columns (select orders.order_number, orders.customername from orders), select the doubtful front-end objects for those columns, click Next, select any of the options below:
• Exactly one match
• One or more matches
• No match record
Click Finish.
Note: For custom and default checkpoints you have to end the SQL statement with ; but in a runtime record checkpoint you must not.
Syntax: db_record_check("Check List File Name.cvr", DVR_ONE_MATCH / DVR_ONE_MORE_MATCH / DVR_NO_MATCH, Variable);
Ex: db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);
In the above syntax the checklist specifies the expected mapping to test, and the variable stores the number of records matched. If the mapping is correct, the matching values will be presented. The runtime record checkpoint also allows you to change the existing mapping, through the navigation below.


Create menu, Edit Runtime Record Checklist, select the checklist file name, click Next, change the query (if you want to test new columns), click Next, change the object selection for the new objects, click Finish.

Synchronization: To define the time mapping between the testing tool and the application, we use synchronization point concepts.

wait(): To define a fixed waiting time during test execution, the test engineer uses this function.
Syntax: wait(time in seconds);
Ex: wait(10);
Drawback: This function defines a fixed waiting time, but applications take variable times to respond depending on the test environment.

Change Runtime Settings: During test script execution WinRunner does not depend on the recording-time parameters. To maintain a waiting state in WinRunner we can use the wait() function or change the runtime settings. The runtime settings hold two time parameters:
Delay: the time to wait between window focusing (window synchronization).
Timeout: how long to wait for context-sensitive statements and checkpoints to execute.
A window-based statement that cannot yet execute waits up to delay + timeout; an object-based statement waits up to timeout.
Navigation: Settings, General Options, Run tab, change Delay and Timeout as required, click Apply, click OK.
Ex (delay = 1 sec, timeout = 10 sec):
1. set_window("...", 6); - maximum wait = delay + timeout = 11 sec
2. button_press("Ok"); - maximum wait = timeout = 10 sec
3. button_check_info("Ok", "enabled", 1); - maximum wait = timeout = 10 sec


Drawbacks of changing the settings: once changed, the settings apply to each and every test, with no per-test control. For this reason the runtime-settings option is rarely used; nowadays most test engineers use the For Object/Window Property synchronization point to avoid time-mismatch problems.

For Object/Window Property:
Navigation: select a position in the script, Create menu, Synchronization Point, For Object/Window Property, select the object, specify the property with its expected value (Ex: a status/progress bar that is "100% complete" and enabled), specify the maximum time to wait, click OK.
Syntax: obj_wait_info("Object Name", "Property", Expected Value, Maximum time to wait);
Ex: obj_wait_info("Insert Done...", "enabled", 1, 10);

For Object/Window Bitmap: Sometimes the test engineer defines the time mapping between the tool and the project depending on an image in the application.
Navigation: select a position in the script, Create menu, Synchronization Point, For Object/Window Bitmap, select the required image.
Syntax: obj_wait_bitmap("Object Name", "image1.bmp", Maximum time to wait);

For Screen Area Bitmap: Sometimes the test engineer defines the time mapping between the tool and the project depending on an image area in the application.
Navigation: select a position in the script, Create menu, Synchronization Point, For Screen Area Bitmap, select the required image region, right-click to release.
Syntax: obj_wait_bitmap("Object Name", "image1.bmp", Maximum time to wait, x, y, width, height);

Text Check Point: To cover calculations and other text-based tests, we can use this option in WinRunner. To create this type of checkpoint we use the "Get Text" option from the Create menu. It has two sub-options:
1. From object / window
2. From screen area


From object / window: To capture an object's value into a variable we can use this option.
Navigation: Create menu, Get Text, From Object/Window, select the required object (double click).
Syntax: obj_get_text("Object Name", Variable);
Ex: obj_get_text("Flight No:", text);
Syntax: obj_get_info("Object Name", "Property", Variable);
Ex: obj_get_info("ThunderTextBox_3", "value", v1);

From Screen Area: To capture static text from a screen of the application build we can use this option.
Navigation: Create menu, Get Text, From Screen Area, select the required region to capture the value [+ sign], right-click to release.
Syntax: obj_get_text("Object Name", Variable, x1, y1, x2, y2);
Ex: obj_get_text("Flight No:", text, 2, 3, 50, 60);

[Figure: "NagaRaju Sample" window - inputs Item No and Quantity with an Ok button; outputs Price ($) and Total ($).]

Retesting: Re-execution of a test on the same application build with multiple test data is called retesting. In WinRunner, retesting is also called Data Driven Testing (DDT): the data is driven, or changed, to test the application.


In WinRunner, test engineers conduct retesting in 4 ways:
1. Dynamic test data submission
2. Through a flat file (Notepad)
3. From front-end grids (list boxes)
4. Through an Excel sheet
In the first type, the tester supplies values during test execution and the test completes based on those values (like scanf() in C); the remaining three types run without tester interaction.

Dynamic test data submission: To conduct retesting and validate functionality, the test engineer submits the required test data to the tool dynamically. To read keyboard values during test execution, the test engineer uses the TSL statement below; it displays a dialog box with the given message and returns the value entered.
Syntax: create_input_dialog("Message");
Ex: x = create_input_dialog("Enter Your Account Number:");
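A sketch of the multiply example pictured below using dynamic data submission (the window and object names are assumptions):

    no1 = create_input_dialog ("Enter the first number:");
    no2 = create_input_dialog ("Enter the second number:");
    set_window ("Sample", 5);
    edit_set ("No1", no1);
    edit_set ("No2", no2);
    button_press ("Multiply");
    obj_get_text ("Result", res);    # capture the displayed result
    if (res == no1 * no2)            # TSL converts the strings for arithmetic
        tl_step ("multiply", 0, "result is correct - pass");
    else
        tl_step ("multiply", 1, "result is wrong - fail");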

[Figures: dynamic test data submission - keyboard input feeds the test script, which drives the build. Example windows: No1, No2, a Multiply button and Result (Exp: res = no1 * no2); and Item No, Quantity, Ok with Price ($) and Total ($).]



tl_step(): "tl" stands for test log, i.e. the test result. We can use this function to define a user-defined pass or fail message: a status of 0 is reported as a pass (green) and a non-zero status as a fail (red).

To enter an encrypted password through script:
password_edit_set("pwd", password_encrypt(y));

[Figures: a login window (User Id, Password, Login, Next); Sample1 window (Text1, Ok) and Sample2 window (Display, Text2).]

Problem: First enter EmpNo and click the Ok button; the window then displays bsal, comm and gsal.
Exp: gsal = bsal + comm, where:
If bsal >= 15000 then comm is 15% of bsal.
If bsal is between 8000 and 15000 then comm is 5% of bsal.
If bsal < 8000 then comm is 200.



Through a flat file (Notepad): Sometimes the test engineer conducts data driven testing depending on multiple test data kept in flat files (such as Notepad .txt files). To manipulate file data for testing, the test engineer uses the TSL functions below.

file_open(): To load the required flat file into RAM with the specified permissions.
Syntax: file_open("Path of the File", FO_MODE_READ / FO_MODE_WRITE / FO_MODE_APPEND);

file_getline(): To read a line from an opened file.
Syntax: file_getline("Path of the File", Variable);
As with a file pointer in C, the position is incremented automatically.

file_close(): To swap an opened file out of RAM.
Syntax: file_close("Path of the File");

file_printf(): To write the specified text into a file opened in WRITE or APPEND mode.
Syntax: file_printf("Path of the File", "Format", values or variables to write);
%d - integer, %s - string, %f - floating point, \n - new line, \t - tab, \r - carriage return.

substr(): To extract a substring from a given string.
Syntax: substr(main string, start position, length of substring);

split(): To divide a string into fields.
Syntax: split(main string, array name, separator);
In the above syntax the separator must be a single character.

file_compare(): To compare the contents of two files.
Syntax: file_compare("path of file1", "path of file2", "path of file3");
File3 is optional; it receives the concatenated content of both files.
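A sketch of flat-file data driven testing for the multiply example (the data file, which holds one "no1,no2" pair per line, and the window and object names are assumptions; the loop assumes file_getline() returns 0 for success until the end of the file):

    f = "c:\\testdata\\nums.txt";          # hypothetical data file
    file_open (f, FO_MODE_READ);
    while (file_getline (f, line) == 0)    # read one line per iteration
    {
        split (line, arr, ",");            # arr[1] = no1, arr[2] = no2
        set_window ("Sample", 5);
        edit_set ("No1", arr[1]);
        edit_set ("No2", arr[2]);
        button_press ("Multiply");
        obj_get_text ("Result", res);
        if (res == arr[1] * arr[2])
            tl_step ("multiply " & arr[1] & "x" & arr[2], 0, "pass");
        else
            tl_step ("multiply " & arr[1] & "x" & arr[2], 1, "fail");
    }
    file_close (f);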



[Figures: flat-file data driven testing - values from a .txt file feed the test script, which drives the build. Example windows: No1, No2, Multiply, Result (Exp: res = no1 * no2); Item No, Quantity, Ok with Price ($) and Total ($); and User Id, Password, Login, Next.]



From front-end grids (list box): Sometimes the test engineer conducts retesting depending on multiple test data held in front-end objects (like list boxes). To manipulate that data the test engineer uses the TSL functions below.

list_get_item(): To capture a specified list box item by its item number.
Syntax: list_get_item("ListBox Name", Item Number, Variable);

list_select_item(): To select the list box item given in a variable.
Syntax: list_select_item("ListBox Name", Variable);

list_get_info(): To get information about a specified property of a list box (like enabled, focused, count) into a variable.
Syntax: list_get_info("ListBox Name", "Property", Variable);
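A sketch that walks every item of the Fly From list and selects it in turn (the window and list names are assumptions, and item numbering is assumed to start at 0):

    set_window ("NagaRaju Journey", 5);
    list_get_info ("Fly From", "count", n);       # number of items in the list
    for (i = 0; i < n; i++)
    {
        list_get_item ("Fly From", i, item);      # capture item i into the variable
        list_select_item ("Fly From", item);
        tl_step ("select " & item, 0, "item selected");
    }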

[Figures: data driven tests reading from front-end lists - the "NagaRaju Journey" window (Fly From, Fly To, Ok); Sample1/Sample2 windows (Text1, Ok; Display, Text2; List1, Text, Ok); and a window with Type, Age, Gender, Qualification and Others fields.]

Data Driven Testing: In general, test engineers create data driven tests depending on Excel sheet data.


Software Testing Material

[Diagram: excel data-driven flow — the test script loops over rows of excel sheet test data and drives the build.]

From Excel Sheet: In general, test engineers create retest scripts that depend on multiple test data in an excel sheet. To generate this type of script, the test engineer uses the data driven test wizard. In this type of retesting, the test engineer fills the excel sheet with test data in two ways: 1. From database tables using a select statement (back end) 2. Our own test data. Navigation: Create a test script for one input, tools menu, data driven wizard, click next, browse the path of the excel sheet, specify a variable name to hold the path of the excel sheet (by default "table"), select add statements to create a data-driven test, select import data from database, choose parameterization 1. line by line 2. automatically, click next, specify the connection to the database (ODBC / Data Junction), select specify SQL statement (mssql1.sql), click next, click create to select the DSN (machine data source – flight32), write the select statement to capture database content into the excel sheet for testing, specify which values in your test script to replace with excel sheet columns, select show data table now, click finish.

[Diagram: an excel sheet with columns Col1, Col2, Col3 drives the test script; expected c3 = c1 + c2.]

Problems: 1. Prepare a data-driven program to find the factorial of a given number and write the result into the same excel sheet (see the sketch after the function reference below). 2. Prepare a TSL script to write list box items into an excel sheet one by one.


ddt_open(): We can use this function to open an excel sheet into RAM in the specified mode. This function returns E_FILE_OPEN when the file is opened into RAM; otherwise it returns E_FILE_NOT_OPEN.
Syntax: ddt_open("path of excel file", DDT_MODE_READ / DDT_MODE_READWRITE);

ddt_update_from_db(): To extend the excel sheet data depending on dynamic changes in the database (insert, delete, update). The variable stores how many rows were newly altered.
Syntax: ddt_update_from_db("path of excel sheet", "path of query file", variable);

ddt_save(): To save recent modifications in the excel sheet.
Syntax: ddt_save("path of excel sheet");

ddt_get_row_count(): To find the number of rows in the excel sheet; the variable stores the row count.
Syntax: ddt_get_row_count("path of excel sheet", variable);

ddt_set_row(): To point to a row in the excel sheet.
Syntax: ddt_set_row("path of excel file", row no);

ddt_val(): To read a value from the specified column in the pointed row.
Syntax: ddt_val("path of excel file", "col name");

ddt_set_val(): To write a value into the specified column in the pointed row.
Syntax: ddt_set_val("path of excel file", "col name", value or variable);

ddt_close(): To swap the excel sheet out of RAM.
Syntax: ddt_close("path of excel file");

Exercise: Write a program to write list box items into an excel sheet one by one.

Test Suite / Test Batch: Arranging all tests in a proper order based on their functionality, so that the output of one test serves as the input to the next.

Batch Testing: In general, test engineers execute their scripts as batches. Every batch consists of a set of dependent tests: in every batch, the end state of one test is the base state of the next. When you execute tests as batches you get a chance to increase the probability of defect detection.
Syntax: call test_name(); or call "path of the test"();
We can use the first form when the calling and called tests are in the same folder, and the second form when they are in different folders.
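As a worked sketch for problem 1 above, a minimal data-driven test that reads a number from each row of an excel sheet, computes its factorial, and writes the result back into the same sheet (the sheet path and the column names "Number" and "Factorial" are assumptions for illustration):

table = "c:\\testdata\\default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)   # the safe check either way
    pause("cannot open the excel sheet");
ddt_get_row_count(table, rows);
for (i = 1; i <= rows; i++)
{
    ddt_set_row(table, i);
    n = ddt_val(table, "Number");
    fact = 1;
    for (j = 2; j <= n; j++)
        fact = fact * j;
    ddt_set_val(table, "Factorial", fact);   # write the result into the same row
}
ddt_save(table);
ddt_close(table);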


[Diagram: the main (calling) test contains the statement call TestName1(); the called test TestName1 is the subtest.]

Parameter Passing: WinRunner allows you to pass arguments from a calling test to a called test (from main test to subtest). Navigation: Open the subtest, file menu, test properties, select the parameters tab, click add to create parameters, click apply, click ok, then use those parameters in the required places in the test script.

In this model, the main test passes values to the subtest. To receive those values, the subtest maintains a set of parameter variables. Data-Driven Batch Test: WinRunner allows you to execute batches with multiple test data.

[Diagram: the main (calling) test executes call TestName(n); in the subtest the parameter n arrives with the value 10.]

texit(): Sometimes test engineers use this statement in a test script to stop test execution in the middle of the process.
treturn(): We can use this statement to return a value from a called test to a calling test. Syntax: treturn(variable or value); Ex: treturn(10);


[Diagram: the main test calls the subtest and branches on the returned value.]

Main (calling) test:
temp = call TestName(n);
if (temp == 1)
    printf("subtest reported failure");
else
    printf("subtest reported success");

Subtest (TestName), with parameter n = 10:
edit_set("", n);
if (condition)
    treturn(0);
else
    treturn(1);

Silent Mode: In general, WinRunner raises a pause message when any standard checkpoint fails during test execution. If you want to execute test scripts without any interruption when a checkpoint fails, you can follow the navigation below to define silent mode. Navigation: Settings, general options, run tab, select the "run in batch mode" option, click apply, click ok.

[Diagram: two batch flows — in the first, Test1 proceeds to Test2, Test3 and Test4 only if the Next button is enabled, otherwise the batch fails; in the second, Test2 proceeds to Test3 and Test4 only if the Sample window appears, otherwise the batch fails.]


win_exists(): We can use this function to find the existence of a window on the desktop, whether minimized, maximized or hidden. The time argument is optional. To check that a window has appeared: if (win_exists("sample") == E_OK) ...
Syntax: win_exists("window name", time);

Homework: Log in after 5 seconds. If Next is enabled, go to the next window; else try another user. Shopping: Prepare the above batch test for ten users whose information is available in an excel sheet; during this batch execution the tester passes item no and quantity as parameters.

User-Defined Functions: Like programming languages, WinRunner also provides a facility to create user-defined functions. In TSL, user-defined functions are created by the test engineer to initiate repeatable navigation. In the above example, the test engineer creates four automation test scripts to test four different functionalities, depending on functionality dependency; test engineers call this login process the base state.

public / static function function_name(in/out/inout argument name, ...)
{
Repeatable Navigation
return (value or variable);
}

If you want a user-defined function where the end state of one execution is the base state of the next execution, you can use a static function: static maintains constant locations for its internal variables during the current test execution, so the output of one execution is the input to the next.
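For example, a minimal sketch of a reusable login function for the base state described above (the window and field names are assumptions for illustration):

public function login(in user, in pwd_enc)
{
    set_window("Login", 5);
    edit_set("User Id", user);
    password_edit_set("pwd", pwd_enc);   # pwd_enc is an already-encrypted password
    button_press("Login");
    if (win_exists("Main", 5) == E_OK)
        return 0;   # base state reached
    else
        return 1;
}

Each of the four functionality tests can then start by calling login(...) instead of repeating the recorded navigation.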


[Diagram: inside the function, static a = 0 keeps its location between calls, so a value such as a = 100 set in one execution is still there for the next.]

Note1: User-defined functions allow only context sensitive statements and control statements; they do not allow checkpoints or analog statements.
Note2: In batch testing, one test calls another test through its saved test name, whereas a test invokes a function through the function name; to call a function in a test, that function's compiled module must reside in RAM.
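As an illustration of the static behaviour, a minimal sketch (the function name run_counter is hypothetical) whose internal counter keeps its location between calls within one test run:

static function run_counter()
{
    static count;   # keeps its value between calls; starts empty (treated as 0)
    count++;
    return count;
}

Calling run_counter() twice in succession returns 1 and then 2, whereas an auto variable would start fresh on every call.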

public function add(in a, in b, out c)
{
    c = a + b;
}

Calling test:
x = 6;
y = 6;
add(x, y, z);
printf(z);


Returning a value instead of using an out parameter (note that variables inside a TSL function must be declared, e.g. with auto):

public function add(in a, in b)
{
    auto c;
    c = a + b;
    return c;
}

Calling test:
x = 6;
y = 6;
z = add(x, y);
printf(z);

Using an inout parameter:

public function add(in a, inout b)
{
    b = a + b;
}

Calling test:
x = 6;
y = 6;
add(x, y);
printf(y);

in – general arguments; out – return values; inout – both; return – to return one value.

Note: User-defined functions allow only context sensitive statements and control statements; they do not allow checkpoints or analog statements.

Compiled Module: Open WinRunner and the build, click new in WinRunner, record repeatable navigations as user-defined functions, save that test in the dat folder, file menu, test properties, general tab, change the test type to compiled module, click apply, click ok, then write a load() statement for that compiled module in the startup script of WinRunner.

Note: WinRunner maintains a default program as a startup script. This script is executed automatically when you launch WinRunner. In this script we can write a load() statement to load your functions.


load(): To load a compiled module into RAM.
Syntax: load("Name of the compiled Module", 0/1, 0/1);
First flag: 0 – user-defined compiled module, 1 – system-defined compiled module. Second flag: 0 – path appears in the WinRunner window menu, 1 – hides the path.

unload(): We can use this function to unload unwanted functions from RAM.
Syntax: unload("Path of the Compiled Module", "Unwanted Function Name");

reload(): We can use this function to reload previously unloaded functions.
Syntax: reload("Path of the compiled Module", 0/1, 0/1);
The flags have the same meaning as for load().

Predefined Functions: These functions are also known as built-in functions or system-defined functions. WinRunner provides a facility to search for a required TSL function in a library, called the function generator. Navigation: Create menu, insert function, from function generator, select the required category, select the required function depending on its description, enter arguments, click paste.

invoke_application(): WinRunner allows you to open a project automatically.
Syntax: invoke_application("Path of .exe", "commands", "working directory", SW_SHOW / SW_HIDE / SW_MINIMIZE / SW_RESTORE / SW_SHOWMAXIMIZED / SW_SHOWMINIMIZED / SW_SHOWMINNOACTIVE / SW_SHOWNOACTIVE);
Commands – used in XRunner for the Unix OS. Working directory – temporary files created at run time are stored in this directory; if you do not specify a directory, it defaults to the c:\windows\temp folder.

Executing a Prepared Query:
db_connect(): We can use this function to connect to a database using an existing DSN or connection.
Syntax: db_connect("Session Name", "DSN=*******");
Ex: db_connect("Query1","DSN=Flight32");


db_execute_query(): We can use this function to execute a required select statement on the connected database; the variable stores the number of rows in the result.
Syntax: db_execute_query("Session Name", "select statement", variable);
Ex: db_execute_query("Query1", "select * from orders where order_number ...", rows);
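Putting the two together, a minimal sketch that connects to the Flight32 sample DSN, runs a query, and reads the first field of the first row (the query text and the column position are assumptions for illustration; db_get_field_value and db_disconnect are the standard TSL companions of these functions):

db_connect("Query1", "DSN=Flight32");
db_execute_query("Query1", "select * from orders", rows);   # rows receives the record count
printf("the query returned " & rows & " rows");
val = db_get_field_value("Query1", "#0", "#0");              # row #0, column #0 of the result set
printf("first field of the first row: " & val);
db_disconnect("Query1");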