Software Testing Made Easy
Software Testing – made easy
Prepared By
K. Muthuvel, B.Com.,M.C.A. E.P.G.D.S.T * (Software Testing)
K. Muthuvel
Page 1 of 127
Version History

Version: 1.0
Description / Changes: Baseline version
Author: K. Muthuvel
Approver:
Effective Date: 10th Aug. 2005
For “Maveric Systems” Internal Use Only. No part of this volume may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, except as may be expressly permitted.
This book is dedicated to
Lord Vignesh
Table of Contents

1. Testing Fundamentals ..... 9
  1.1. Definition ..... 9
  1.2. Objective ..... 9
  1.3. Benefits of Testing ..... 9
2. Quality Assurance, Quality Control, Verification & Validation ..... 10
  2.1. Quality Assurance ..... 10
  2.2. Quality Control ..... 10
  2.3. Verification ..... 10
  2.4. Validation ..... 10
3. SDLC & STLC ..... 11
  3.1. STLC – Software Testing Life Cycle ..... 11
  3.2. Models of SDLC & STLC ..... 11
    3.2.1. V-Model ..... 11
    3.2.2. W-Model ..... 12
    3.2.3. Waterfall Model ..... 13
    3.2.4. Extreme Programming Model ..... 13
    3.2.5. Spiral Model ..... 14
4. Testing Standards ..... 15
  4.1. SW-CMM ..... 15
  4.2. SW-TMM ..... 16
    4.2.1. Levels of SW-TMM ..... 16
      4.2.1.1. Level 1: Initial ..... 16
      4.2.1.2. Level 2: Phase Definition ..... 16
      4.2.1.3. Level 3: Integration ..... 16
      4.2.1.4. Level 4: Management and Measurement ..... 16
      4.2.1.5. Level 5: Optimization / Defect Prevention and Quality Control ..... 16
    4.2.2. Need to use SW-TMM ..... 17
    4.2.3. SW-TMM Assessment Process ..... 17
    4.2.4. SW-TMM Summary ..... 17
  4.3. ISO: International Organisation for Standardisation ..... 18
  4.4. ANSI / IEEE Standards ..... 18
  4.5. BCS - SIGIST ..... 18
5. Testing Techniques ..... 19
  5.1. Static Testing Techniques ..... 19
    5.1.1. Review - Definition ..... 19
    5.1.2. Types of Reviews ..... 19
      5.1.2.1. Walkthrough ..... 19
      5.1.2.2. Inspection ..... 19
      5.1.2.3. Informal Review ..... 20
      5.1.2.4. Technical Review ..... 20
    5.1.3. Activities performed during review ..... 20
    5.1.4. Review of the Specification / Planning and Preparing System Test ..... 21
    5.1.5. Roles and Responsibilities ..... 22
  5.2. Dynamic Testing Techniques ..... 23
    5.2.1. Black Box Testing ..... 23
      5.2.1.1. Equivalence Class Partitioning ..... 24
      5.2.1.2. Boundary Value Analysis ..... 24
      5.2.1.3. Cause and Effect Graphs ..... 25
      5.2.1.4. Comparison Testing ..... 26
    5.2.2. White-Box Testing ..... 27
      5.2.2.1. Statement Coverage ..... 27
      5.2.2.2. Branch Coverage ..... 28
      5.2.2.3. Condition Coverage ..... 28
      5.2.2.4. Path Coverage ..... 29
      5.2.2.5. Data Flow-Based Testing ..... 31
      5.2.2.6. Mutation Testing ..... 32
    5.2.3. Grey Box Testing ..... 32
6. Difference Tables ..... 33
  6.1. Quality Vs Testing ..... 33
  6.2. Testing Vs Debugging ..... 33
  6.3. Quality Assurance Vs Quality Control ..... 33
  6.4. Verification & Validation ..... 34
  6.5. Black Box Testing & White Box Testing ..... 34
  6.6. IST & UAT ..... 34
  6.7. SIT & IST ..... 35
  6.8. Alpha Testing & Beta Testing ..... 35
  6.9. Test Bed and Test Environment ..... 35
  6.10. Re-testing and Regression Testing ..... 35
7. Levels of Testing ..... 36
  7.1. Unit Testing ..... 36
    7.1.1. Benefits of Unit Testing ..... 36
    7.1.2. Pre-requisites ..... 36
  7.2. Integration Testing ..... 38
    7.2.1. Incremental Integration Testing ..... 39
      7.2.1.1. Top Down Integration ..... 39
      7.2.1.2. Bottom up Integration ..... 40
      7.2.1.3. Stub and Drivers ..... 41
    7.2.2. Non-Incremental Testing ..... 42
      7.2.2.1. Big Bang Integration ..... 42
      7.2.2.2. Validation Testing ..... 42
      7.2.2.3. Configuration review ..... 42
  7.3. System Testing ..... 42
    7.3.1. Functional Testing ..... 43
      7.3.1.1. Requirement based Testing ..... 43
    7.3.2. Business-Process based Non-Functional Testing ..... 44
      7.3.2.1. Recovery testing ..... 44
      7.3.2.2. Security testing ..... 45
      7.3.2.3. Stress testing ..... 45
      7.3.2.4. Performance testing ..... 46
    7.3.3. Alpha and Beta testing ..... 46
  7.4. User Acceptance Testing ..... 47
    7.4.1. Entry Criteria ..... 47
    7.4.2. Exit Criteria ..... 47
  7.5. Regression Testing and Re-testing ..... 48
    7.5.1. Factors favour Automation of Regression Testing ..... 48
    7.5.2. Tools used in Regression testing ..... 48
8. Types of Testing ..... 49
  8.1. Compliance Testing ..... 49
  8.2. Intersystem Testing / Interface Testing ..... 49
  8.3. Parallel Testing ..... 49
  8.4. Database Testing ..... 49
  8.5. Manual support Testing ..... 50
  8.6. Ad-hoc Testing ..... 50
  8.7. Configuration Testing ..... 50
  8.8. Pilot Testing ..... 50
  8.9. Automated Testing ..... 50
  8.10. Load Testing ..... 51
  8.11. Stress and Volume Testing ..... 51
  8.12. Usability Testing ..... 51
  8.13. Environmental Testing ..... 51
9. Roles & Responsibilities ..... 52
  9.1. Test Associate ..... 52
  9.2. Test Engineer ..... 52
  9.3. Senior Test Engineer ..... 52
  9.4. Test Lead ..... 53
  9.5. Test Manager ..... 53
10. Test Preparation & Design Process ..... 54
  10.1. Baseline Documents ..... 54
    10.1.1. Business Requirement ..... 54
    10.1.2. Functional Specification ..... 54
    10.1.3. Design Specification ..... 54
    10.1.4. System Specification ..... 54
  10.2. Traceability ..... 54
    10.2.1. BR and FS ..... 54
    10.2.2. FS and Test conditions ..... 55
  10.3. Gap Analysis ..... 55
  10.4. Choosing Testing Techniques ..... 55
  10.5. Error Guessing ..... 56
  10.6. Error Seeding ..... 56
  10.7. Test Plan ..... 56
    10.7.1. Test Plan Identifier ..... 56
    10.7.2. Introduction ..... 56
    10.7.3. Test Items ..... 56
    10.7.4. Features to be Tested ..... 57
    10.7.5. Features Not to Be Tested ..... 57
    10.7.6. Approach ..... 57
    10.7.7. Item Pass/Fail Criteria ..... 57
    10.7.8. Suspension Criteria and Resumption Requirements ..... 57
    10.7.9. Test Deliverables ..... 57
    10.7.10. Testing Tasks ..... 58
    10.7.11. Environmental Needs ..... 58
    10.7.12. Responsibilities ..... 58
    10.7.13. Staffing and Training Needs ..... 58
    10.7.14. Schedule ..... 58
    10.7.15. Risks and Contingencies ..... 59
    10.7.16. Approvals ..... 59
  10.8. High Level Test Conditions / Scenario ..... 59
    10.8.1. Processing logic ..... 59
    10.8.2. Data Definition ..... 59
    10.8.3. Feeds Analysis ..... 60
  10.9. Test Case ..... 61
    10.9.1. Expected Results ..... 61
      10.9.1.1. Single Expected Result ..... 61
      10.9.1.2. Multiple Expected Result ..... 61
    10.9.2. Pre-requirements ..... 62
    10.9.3. Data definition ..... 62
11. Test Execution Process ..... 63
  11.1. Pre-Requirements ..... 63
    11.1.1. Version Identification Values ..... 63
    11.1.2. Interfaces for the application ..... 63
    11.1.3. Unit testing sign off ..... 63
    11.1.4. Test Case Allocation ..... 64
  11.2. Stages of Testing ..... 64
    11.2.1. Comprehensive Testing - Round I ..... 64
    11.2.2. Discrepancy Testing - Round II ..... 64
    11.2.3. Sanity Testing - Round III ..... 64
12. Defect Management ..... 65
  12.1. Defect – Definition ..... 65
  12.2. Types of Defects ..... 66
  12.3. Defect Reporting ..... 66
  12.4. Tools Used ..... 66
    12.4.1. ClearQuest (CQ) ..... 66
    12.4.2. TestDirector (TD) ..... 67
    12.4.3. Defect Tracker ..... 67
  12.5. Defects Meetings ..... 67
  12.6. Defects Publishing ..... 67
  12.7. Defect Life Cycle ..... 68
13. Test Closure Process ..... 69
  13.1. Sign Off ..... 69
  13.2. Authorities ..... 69
  13.3. Deliverables ..... 69
  13.4. Metrics ..... 69
    13.4.1. Defect Metrics ..... 69
    13.4.2. Defect age ..... 69
    13.4.3. Defect Analysis ..... 69
    13.4.4. Test Management Metrics ..... 70
    13.4.5. Debriefs With Test Team ..... 70
14. Testing Activities & Deliverables ..... 71
  14.1. Test Initiation Phase ..... 71
  14.2. Test Planning Phase ..... 71
  14.3. Test Design Phase ..... 71
  14.4. Test Execution & Defect Management Phase ..... 72
  14.5. Test Closure Phase ..... 72
15. Maveric Systems Limited ..... 73
  15.1. Overview ..... 73
  15.2. Leadership Team ..... 73
  15.3. Quality Policy ..... 73
  15.4. Testing Process / Methodology ..... 74
    15.4.1. Test Initiation Phase ..... 75
    15.4.2. Test Planning Phase ..... 76
    15.4.3. Test Design Phase ..... 78
    15.4.4. Execution and Defect Management Phase ..... 79
      15.4.4.1. Test Execution Process ..... 79
      15.4.4.2. Defect Management Process ..... 79
    15.4.5. Test Closure Phase ..... 81
  15.5. Test Deliverables Template ..... 82
    15.5.1. Project Details Form ..... 82
    15.5.2. Minutes of Meeting ..... 84
    15.5.3. Top Level Project Checklist ..... 85
    15.5.4. Test Strategy Document ..... 86
    15.5.5. Configuration Management and Quality Plan ..... 86
    15.5.6. Test Environment Request ..... 87
    15.5.7. Risk Analysis Document ..... 88
    15.5.8. Clarification Document ..... 88
    15.5.9. Test condition / Test Case Document ..... 88
    15.5.10. Test Script Document ..... 88
    15.5.11. Traceability Matrix ..... 89
    15.5.12. Daily Status Report ..... 89
    15.5.13. Weekly Status Report ..... 91
    15.5.14. Defect Report ..... 92
    15.5.15. Final Test Checklist ..... 93
    15.5.16. Final Test Report ..... 94
    15.5.17. Project De-brief Form ..... 94
16. Q & A ..... 95
  16.1. General ..... 95
  16.2. G.E. – Interview ..... 99
17. Glossary ..... 119
1. Testing Fundamentals
1.1. Definition

“The process of exercising software to verify that it satisfies specified requirements and to detect errors.”
… BS7925-1

“Testing is the process of executing a program with the intent of finding errors.”
… Glen Myers

Testing identifies faults whose removal increases the software quality by increasing the software’s potential reliability. Testing is the measurement of software quality: we measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability and testability.
1.2. Objective

· Testing is a process of executing a program with the intent of finding an error.
· A good test is one that has a high probability of finding an as-yet-undiscovered error.
· A successful test is one that uncovers an as-yet-undiscovered error.
· Testing should also aim at suggesting changes or modifications where required, thus adding value to the entire process.
· The objective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort.
· Demonstrating that the software application appears to be working as required by the specification.
· Verifying that performance requirements are met.
· Estimating software reliability and software quality based on the data collected during testing.
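The second and third objectives above can be illustrated with a small sketch (the function and its bug are invented for this example, not taken from the text): a “happy path” test that passes tells us little, while a well-chosen boundary test is the “successful” test because it uncovers a hidden defect.

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths.
    Deliberate bug: '>=' should be '>', so degenerate 'triangles'
    such as (1, 2, 3) are wrongly accepted."""
    if a + b >= c and a + c >= b and b + c >= a:
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"
    return "not a triangle"

# A happy-path test passes and reveals nothing about the bug:
assert classify_triangle(3, 4, 5) == "scalene"

# A boundary test is the 'successful' test: (1, 2, 3) lies exactly on
# the a + b == c boundary, and the buggy code calls it "scalene"
# instead of "not a triangle" -- the as-yet-undiscovered error is found.
print(classify_triangle(1, 2, 3))
```

This is why techniques such as boundary value analysis (Section 5.2.1.2) concentrate test cases at the edges of input ranges, where errors of this kind cluster.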
1.3. Benefits of Testing

· Increased accountability and control
· Cost reduction
· Time reduction
· Defect reduction
· Increased productivity of the software developers
· Quantitative management of software delivery
2. Quality Assurance, Quality Control, Verification & Validation
2.1. Quality Assurance

“A planned and systematic pattern for all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements.”
2.2. Quality Control

“QC is a process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.”

“Quality Control is defined as a set of activities or techniques whose purpose is to ensure that all quality requirements are being met. In order to achieve this purpose, processes are monitored and performance problems are solved.”
2.3. Verification

“The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.”
… [IEEE]
2.4. Validation

“Determination of the correctness of the products of software development with respect to the user needs and requirements.”
… BS7925-1
Difference Table:

Quality Assurance: Study of the process followed in project development.
Quality Control: Study of the project for its function and specification.

Verification: Process of determining whether the output of one phase of development conforms to its previous phase.
Validation: Process of determining whether a fully developed system conforms to its SRS document.

Verification is concerned with phase containment of errors.
Validation is concerned with the final product being error free.
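The distinction in the table can be sketched in code (the requirement and design data here are invented for illustration): verification checks that one phase’s output covers the previous phase’s output without executing the product, while validation executes the finished product against the user’s needs.

```python
# Hypothetical artifacts from two adjacent development phases.
requirements = {"R1": "User can deposit funds", "R2": "User can withdraw funds"}
design = {"R1": "deposit() module", "R2": "withdraw() module"}

# Verification: phase containment -- every requirement must be carried
# forward into the design. No code is executed; documents are compared.
uncovered = [r for r in requirements if r not in design]
assert uncovered == []  # no requirement lost between phases

# Validation: execute the developed product against the user's need.
def withdraw(balance, amount):
    return balance - amount

assert withdraw(100, 30) == 70  # the final product behaves as required
```

In short, verification asks “are we building the product right?” at each phase boundary, while validation asks “did we build the right product?” at the end.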
3. SDLC & STLC
3.1. STLC – Software Testing Life Cycle

· Preparation of the Testing Project Plan, which includes the Test Strategy.
· Preparation of Test Scripts, which contain the Test Scenarios.
· Preparation of the Test Bed, i.e. setting up the Test Environment.
· Executing the Test Scripts (automated as well as manual tests).
· Defect tracking with a bug-tracking tool.
· Preparation of the Test Completion Report and Test Incident Report.
· Preparation of Test Metrics for continuous process improvement.
3.2. Models of SDLC & STLC

There are a number of different models of the software development life cycle. One thing all models have in common is that at some point in the life cycle, the software has to be tested. This section outlines some of the more commonly used software development life cycle models, with particular emphasis on the testing activities in each model.
3.2.1. V-Model
The figure gives a brief description of the V-model. Every phase of the STLC in this model corresponds to some activity in the SDLC. Requirement analysis correspondingly has an acceptance testing activity at the end; the design has Integration Testing (IT) and System Integration Testing (SIT), and so on.
· The V-model is a model in which testing is done in parallel with development. The left side of the V reflects the development activities that provide input for the corresponding testing activities.
· The V-model is the classic software development model. It encapsulates the steps in the Verification and Validation phases for each step in the SDLC. For each phase, the subsequent phase becomes the verification (QA) phase, and the corresponding testing phase in the other arm of the V becomes the validation (testing) phase.
· In the software development life cycle, both the development activity and the testing activities start almost at the same time with the same information in hand. The development team applies "do" procedures to achieve the goals and the testing team applies "check" procedures to verify them. It is a parallel process that finally arrives at a product with almost no bugs or errors.
· The V-model is one of the SDLC/STLC models; it includes testing from the unit level to the business level. After completing the coding, the tester tests the code against the design-phase documents to verify that all the modules have been integrated, then verifies that the system conforms to the requirements, and at last moves to the business scenarios, which the customer can validate through alpha and beta testing, deciding at the end that the product is complete and stable.
· The V-model maps the development cycle stages to testing cycles, but it fails to address how to start all these test levels in parallel with development. Testing as a parallel activity gives the tester domain knowledge and allows more value-added, high-quality testing with greater efficiency. It also reduces time, since the test plans, test cases and test strategy are prepared during the development stage itself.
3.2.2. W-Model
From the view of testing, all of the models presented previously are deficient in various ways:
· The test activities first start after the implementation
· The connection between the various test stages and the basis for the test is not clear
· The tight link between test, debug and change tasks during the test phase is not clear
In the following, the W-model is presented. It is based on the general V-model, and the disadvantages previously mentioned are removed.
3.2.3. Waterfall Model
One of the first models for software development is the so-called waterfall-model by B.W.Boehm. The individual phases i.e. activities that were defined here are to be found in nearly all models proposed since. In this it was set out that each of the activities in the software development must be completed before the next phase begins. A return in the development process was only possible to an immediate previous phase. In the waterfall-model, testing directly follows the implementation. By this model it was suggested that activities for testing could first be started after the implementation. Preparatory tasks for the testing were not clear. A further disadvantage is that testing, as the last activity before release, could be relatively easily shortened or omitted altogether. This, in practice, is unfortunately all too common. In this model, the expense of the removal of faults and defects found is only recognizable through a return to the implementation phase.
3.2.4.Extreme Programming Model
3.2.5.Spiral Model
In the spiral model a cyclical and prototyping view of software development was shown. Tests were explicitly mentioned (risk analysis, validation of requirements and of the development) and the test phase was divided into stages. The test activities included module, integration and acceptance tests. However, in this model the testing also follows the coding. The exception to this is that the test plan should be constructed after the design of the system. The spiral model also identifies no activities associated with the removal of defects.
4. Testing Standards
Testing of software is defined very differently by different people and different corporations. Process standards bodies like ISO, SPICE and IEEE attempt to impose a process on whatever types of development projects you do (be it hardware, software, embedded systems, etc.), and some of that will, by proxy, speak to testing. However, these exist more to guide the process than the testing itself. So, for example, IEEE will give you ideas for templates for such things as test case specifications, test plans, etc. That may help you out. On the other hand, those IEEE templates tell you nothing about actually testing the product itself; they basically just show you how to document that you are testing the product. The same thing pretty much applies to ISO. ISO is the standard for international projects and yet it, like IEEE, does not really force or even advocate a certain "testing standard." You also have other process- and project-oriented concepts out there like the Capability Maturity Model (CMM). Some of the organizations that define testing standards are:
· BS – British Standards
· ISO – International Organization for Standardization
· CMM – Capability Maturity Model
· SPICE – Software Process Improvement and Capability Determination
· NIST – National Institute of Standards and Technology
· DoD – Department of Defense
4.1. SW-CMM
SEI – Software Engineering Institute, Carnegie Mellon University. CMM – Capability Maturity Model.

Software Process: A software process can be defined as a set of activities, methods, practices, and transformations that people use to develop and maintain software and the associated products.

Software Process Capability: Software process capability describes the range of expected results that can be achieved by following a software process. The software process capability of an organization provides one means of predicting the most likely outcomes to be expected from the next software project the organization undertakes.

Software Process Maturity: Software process maturity is the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective. Maturity implies a potential for growth in capability and indicates both the richness of an organization's software process and the consistency with which it is applied in projects throughout the organization.

The five levels of SW-CMM:
Level 1: Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimizing
4.2. SW-TMM
SW-TMM is a testing process improvement tool that can be used either in conjunction with the SW-CMM or as a stand-alone tool.
4.2.1. Levels of SW-TMM
4.2.1.1. Level 1: Initial
· A chaotic process
· Testing is not distinguished from debugging and is ill defined
· Tests are developed ad hoc after coding is complete
· Usually lacks a trained professional testing staff and testing tools
· The objective of testing is to show that the system and software work
4.2.1.2. Level 2: Phase Definition
· Testing is identified as a separate function from debugging
· Testing becomes a defined phase following coding
· The process is standardized to the point where basic testing techniques and methods are in place
· The objective of testing is to show that the system and software meet specifications
4.2.1.3. Level 3: Integration
· Testing is integrated into the entire life cycle
· A formal testing organization is established
· Formal testing technical training is established
· The testing process is controlled and monitored
· The use of automated test tools begins to be considered
· The objective of testing is based on system requirements
· Major milestone reached at this level: management recognizes testing as a professional activity
4.2.1.4. Level 4: Management and Measurement
· Testing is a measured and quantified process
· Development products are now tested for quality attributes such as reliability, usability and maintainability
· Test cases are collected and recorded in a test database for reuse and regression testing
· Defects found during testing are now logged, given a severity level, and assigned a priority for correction
4.2.1.5. Level 5: Optimization / Defect Prevention and Quality Control
· Testing is institutionalized within the organization
· The testing process is well defined and managed
· Testing costs and effectiveness are monitored
· Automated tools are a primary part of the testing process
· There is an established procedure for selecting and evaluating testing tools
4.2.2. Need to use SW-TMM
· Easy to understand and use
· Provides a methodology to baseline the current test process maturity
· Designed to guide organizations in selecting process improvement strategies and identifying critical issues to test process maturity
· Provides a road map for continuous test process improvement
· Provides a method for measuring progress
· Allows organizations to perform their own assessment
Organizations that are using SW-CMM:
· SW-TMM fulfills the design objective of being an excellent companion to SW-CMM
· SW-TMM is just another assessment tool and is easily incorporated into the software process assessment
Organizations that are not using SW-CMM:
· Provides an unbiased assessment of the current testing process
· Provides a road map for incremental improvements
· Saves testing cost as the testing process moves up the maturity levels
4.2.3. SW-TMM Assessment Process
· Prepare for the assessment
· Choose the team leader and members
· Choose evaluation tools (e.g. a questionnaire)
· Conduct training and briefing
· Conduct the assessment
· Document the findings
· Analyze the findings
· Develop the action plan
· Write the final report
· Implement the improvements
It is best to implement the improvements either in a pilot project or in phases, tracking progress and achievements prior to expanding organization-wide. A limited application is also good in that it is easier to fine-tune the new process prior to expanded implementation.
4.2.4. SW-TMM Summary
· Baseline the current testing process level of maturity
· Identify areas that can be improved
· Identify testing processes that can be adopted organization-wide
· Provide a road map for implementing the improvements
· Provide a method for measuring the improvement results
· Provide a companion tool to be used in conjunction with the SW-CMM
4.3. ISO: International Organisation for Standardisation
· Q9001 – 2000 – Quality Management System: Requirements
· Q9000 – 2000 – Quality Management System: Fundamentals and Vocabulary
· Q9004 – 2000 – Quality Management System: Guidelines for performance improvements
4.4. ANSI / IEEE Standards
ANSI – American National Standards Institute
IEEE – Institute of Electrical and Electronics Engineers (founded in 1884)
IEEE has an entire set of standards devoted to software, and testers should be familiar with the standards mentioned in IEEE:
1. 610.12-1990 IEEE Standard Glossary of Software Engineering Terminology
2. 730-1998 IEEE Standard for Software Quality Assurance Plans
3. 828-1998 IEEE Standard for Software Configuration Management
4. 829-1998 IEEE Standard for Software Test Documentation
5. 830-1998 IEEE Recommended Practice for Software Requirement Specifications
6. 1008-1987 (R1993) IEEE Standard for Software Unit Testing
7. 1012-1998 IEEE Standard for Software Verification and Validation
8. 1012a-1998 IEEE Standard for Software Verification and Validation – Supplement to 1012-1998 Content
9. 1016-1998 IEEE Recommended Practice for Software Design Description
10. 1028-1997 IEEE Standard for Software Reviews
11. 1044-1993 IEEE Standard Classification for Software Anomalies
12. 1045-1992 IEEE Standard for Software Productivity Metrics
13. 1058-1998 IEEE Standard for Software Project Management Plans
14. 1058.1-1987 IEEE Standard for Software Management
15. 1061-1998 IEEE Standard for Software Quality Metrics Methodology
4.5.BCS - SIGIST A meeting of the Specialist Interest Group on Software Testing was held in January 1989 (this group was later to affiliate with the British Computer Society). This meeting agreed that existing testing standards are generally good standards within the scope which they cover, but they describe the importance of good test case selection, without being specific about how to choose and develop test cases. The SIG formed a subgroup to develop a standard which addresses the quality of testing performed. Draft 1.2 was completed by November 1990 and this was made a semi-public release for comment. A few members of the subgroup trialed this draft of the standard within their own organisations. Draft 1.3 was circulated in July 1992 (it contained only the main clauses) to about 20 reviewers outside of the subgroup. Much of the feedback from this review suggested that the approach to the standard needed re-consideration.
5. Testing Techniques
5.1. Static Testing Techniques
“Analysis of a program carried out without executing the program.” … BS 7925-1

5.1.1. Review - Definition
Review is a process or meeting during which a work product or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. [IEEE]
5.1.2. Types of Reviews
There are three general classes of reviews:
· Informal / peer reviews
· Semiformal / walk-throughs
· Formal / inspections
5.1.2.1. Walkthrough
“A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.” [BS 7925-1]
A walkthrough is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. Walkthroughs are led by the author of the document and are educational in nature; communication is therefore predominantly one-way. Typically they entail dry runs of designs, code and scenarios / test cases.
5.1.2.2. Inspection
“A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).” [BS 7925-1]
An inspection is more formalized than a walkthrough, typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what is missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality.
An inspection is led by a trained moderator (not the author), has defined roles, and includes metrics and a formal process based on rules and checklists with entry and exit criteria.
5.1.2.3. Informal Review
· Unplanned and undocumented
· Useful, cheap and widely used
· In contrast with walkthroughs, communication is very much two-way in nature
5.1.2.4. Technical Review
Technical reviews are also known as peer reviews, as it is vital that participants are drawn from the 'peer group' rather than including managers.
· Documented
· Defined fault detection process
· Includes peers and technical experts
· No management participation

Comparison of review types:

Review type      | Primary purpose                        | Led by      | Participants                        | Degree of formality
-----------------|----------------------------------------|-------------|-------------------------------------|-----------------------------------
Walkthrough      | Education                              | Author      | Peers                               | Presentational
Inspection       | Finding faults and process improvement | Moderator   | Reader, recorder, author, inspector | Formal defined inspection process
Informal review  | Find problems quickly and cheaply      | Not defined | Not defined                         | Largely unplanned and undocumented
Technical review | Finding faults                         | Chairperson | Peers, technical experts            | Formal fault detection process
5.1.3. Activities performed during review
Activities in a review: planning, overview meeting, review meeting and follow-up.
Deliverables of a review: product changes, source document changes and improvements.
Factors in the failure of a review: lack of training, documentation and management support.

Review of the Requirements / Planning and Preparing the Acceptance Test
Test activities must start at the beginning of the project. These first activities are:
· Fixing the test strategy and test concept
· Risk analysis: determining criticality, expense of testing and test intensity
· Drawing up the test plan
· Organizing the test team
· Training the test team, if necessary
· Establishing monitoring and reporting
· Providing required hardware resources (PC, database, …)
· Providing required software resources (software version, test tools, …)
These activities lay the foundations for a manageable and high-quality test process. A test strategy is determined after a risk evaluation, a cost estimate and test plan are developed, and progress monitoring and reporting are established. During the development process all plans must be updated and completed, and all decisions must be checked for validity.

In a mature development process, reviews and inspections are carried out throughout the whole process. The review of the requirements document answers questions like: Are all customers' requirements fulfilled? Are the requirements complete and consistent? And so on. It is a look back to fix problems before going on in development. But just as important is a look forward. Ask questions like: Are the requirements testable? Are they testable with defensible expenditure? If the answer is no, then there will be problems implementing these requirements. If you have no idea how to test some requirements, then it is likely that you have no idea how to implement them.

At this stage of the development process all the knowledge for the acceptance tests is available and to hand, so this is the best place for doing all the planning and preparation for acceptance testing. For example, one can:
· Establish priorities of the tests depending on criticality
· Specify (functional and non-functional) test cases
· Specify and, if possible, provide the required infrastructure
At this stage all of the acceptance test preparation is finished and can be achieved.
5.1.4. Review of the Specification / Planning and Preparing the System Test
In the review meeting of the specification documents, ask questions like: Is the specification testable? Is it testable with defensible expenditure? Only such specifications can realistically be implemented and used for the next steps in the development process. There must be a re-work of the specifications if the answers to these questions are no. Here all the knowledge for the system tests is available and to hand. Tasks in planning and preparing for system testing include:
· Establishing priorities of the tests depending on criticality
· Specifying (functional / non-functional) system test cases
· Defining and establishing the required infra-structure
As with the acceptance test preparation, all of the system test preparation is finished at this early development stage.

Review of the Architectural Design / Detailed Design / Planning and Preparing Integration and Unit Tests
During the review of the architectural design one can look forward and ask questions like: What about the testability of the design? Are the components and interfaces testable? Are they testable with defensible expenditure? If the components are too expensive to test, a re-work of the architectural design has to be done before going further in the development process. Also at this stage all the knowledge for integration testing is available, and all preparation, like specifying control-flow and data-flow integration test cases, can be achieved. The corresponding activities of the review of the architectural design and the integration tests can be done here at the level of unit tests.
5.1.5. Roles and Responsibilities
In order to conduct an effective review, everyone has a role to play. More specifically, there are certain roles that must be played, and reviewers cannot switch roles easily. The basic roles in a review are:
· The moderator
· The recorder
· The presenter
· Reviewers

Moderator: The moderator makes sure that the review follows its agenda and stays focused on the topic at hand. The moderator ensures that side discussions do not derail the review, and that all reviewers participate equally.

Recorder: The recorder is an often overlooked, but essential, part of the review team. Keeping track of what was discussed and documenting actions to be taken is a full-time task. Assigning this task to one of the reviewers essentially keeps them out of the discussion. Worse yet, failing to document what was decided will likely lead to the issue coming up again in the future. Make sure to have a recorder, and make sure that this is the only role the person plays.

Presenter: The presenter is often the author of the artifact under review. The presenter explains the artifact and any background information needed to understand it (although if the artifact was not self-explanatory, it probably needs some work). It is important that reviews do not become “trials”: the focus should be on the artifact, not on the presenter. It is the moderator's role to make sure that participants (including the presenter) keep this in mind. The presenter is there to kick off the discussion, to answer questions and to offer clarification.

Reviewer: Reviewers raise issues. It is important to stay focused on this, and not get drawn into side discussions of how to address the issue. Focus on results, not the means.
5.2. Dynamic Testing Techniques
“The process of evaluating a system or component based upon its behaviour during execution.” … [IEEE]
5.2.1. Black Box Testing
“Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.” …BS7925-1
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component. It attempts to find:
· Incorrect or missing functions
· Interface errors
· Errors in data structures or external database access
· Performance errors
· Initialization and termination errors
Black-box test design treats the system as a "black box", so it does not explicitly use knowledge of the internal structure. Black box testing is based solely on knowledge of the system requirements. Black-box test design is usually described as focusing on testing functional requirements. In comparison, white-box testing allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Black box testing focuses on testing the function of the program or application against its specifications. Specifically, this technique determines whether combinations of inputs and operations produce expected results.
Test case design techniques under black box testing:
· Equivalence class partitioning
· Boundary value analysis
· Comparison testing
· Orthogonal array testing
· Decision table based testing
· Cause-effect graphing
5.2.1.1. Equivalence Class Partitioning
Equivalence class: “A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.” …BS7925-1
Equivalence partition testing: “A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.” …BS7925-1

Determination of equivalence classes:
· Examine the input data. Only a few general guidelines for determining the equivalence classes can be given.
· If the input data to the program is specified by a range of values (e.g. numbers between 1 and 5000), one valid and two invalid equivalence classes are defined.
· If the input is an enumerated set of values (e.g. {a, b, c}), one equivalence class for valid input values and another equivalence class for invalid input values should be defined.

Example: A program reads an input value in the range of 1 to 5000 and computes the square root of the input number. There are three equivalence classes: the set of negative integers, the set of integers in the range of 1 to 5000, and the set of integers larger than 5000. The test suite must include representatives from each of the three equivalence classes; a possible test suite is {-5, 500, 6000}.
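The partitioning in the example above can be written down directly in C. This is an illustrative sketch only: the `classify` function and the class names are invented here to show one representative value being drawn from each equivalence class of the square-root example.

```c
/* Illustrative sketch: classify an input for the square-root example,
   whose valid equivalence class is the range 1..5000. */
enum input_class { CLASS_TOO_SMALL, CLASS_VALID, CLASS_TOO_LARGE };

enum input_class classify(int n) {
    if (n < 1)    return CLASS_TOO_SMALL;  /* invalid class: below the range */
    if (n > 5000) return CLASS_TOO_LARGE;  /* invalid class: above the range */
    return CLASS_VALID;                    /* valid class: 1..5000           */
}
```

A test suite with one representative per class, such as {-5, 500, 6000}, then exercises each of the three outcomes of `classify` exactly once.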
5.2.1.2. Boundary Value Analysis
Boundary value: “An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.” …BS7925-1
Boundary value analysis: “A test case design technique for a component in which test cases are designed which include representatives of boundary values.” …BS7925-1
Example: For a function that computes the square root of an integer in the range of 1 to 5000, test cases must include the values {0, 1, 5000, 5001}.
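Boundary values for a range can be derived mechanically. The helper below is an invented illustration, not code from the text; for the range [1, 5000] it yields exactly the {0, 1, 5000, 5001} set of the example above.

```c
/* Illustrative sketch: derive boundary-value test inputs for an
   integer range [lo, hi]. */
int boundary_values(int lo, int hi, int out[4]) {
    out[0] = lo - 1;  /* just below the lower boundary (invalid) */
    out[1] = lo;      /* on the lower boundary (valid)           */
    out[2] = hi;      /* on the upper boundary (valid)           */
    out[3] = hi + 1;  /* just above the upper boundary (invalid) */
    return 4;         /* number of test inputs produced          */
}
```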
5.2.1.3. Cause and Effect Graphs
“A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.” …BS7925-1
Cause-effect graphing attempts to provide a concise representation of logical combinations and corresponding actions. A cause-and-effect diagram is a tool that helps identify, sort, and display possible causes of a specific problem or quality characteristic. It graphically illustrates the relationship between a given outcome and all the factors that influence that outcome. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each. Then:
· A cause-effect graph is developed.
· The graph is converted to a decision table.
· The decision table rules are converted to test cases.
The C&E diagram is also known as the fishbone / Ishikawa diagram because it is drawn to resemble the skeleton of a fish, with the main causal categories drawn as "bones" attached to the spine of the fish.
Example C&E diagram for a server crash issue:
Advantages:
· Helps determine root causes
· Encourages group participation
· Indicates possible causes of variation
· Increases process knowledge
· Identifies areas for collecting data
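The graph-to-decision-table-to-test-cases steps above can be sketched as a table of rules driven against the module under test. The causes, the effect, and the `server_start` stand-in below are all invented for illustration, loosely echoing the server-crash example; each decision-table rule becomes one test case.

```c
#include <stdbool.h>

typedef struct {
    bool disk_full;       /* cause 1 */
    bool config_invalid;  /* cause 2 */
    int  expected_effect; /* effect: 0 = starts OK, 1 = refuses to start */
} rule;

/* Stand-in for the module under test: a server start-up check. */
int server_start(bool disk_full, bool config_invalid) {
    return (disk_full || config_invalid) ? 1 : 0;
}

/* Run every decision-table rule as a test case; return the number of
   rules whose observed effect differs from the expected effect. */
int run_decision_table(const rule *rules, int n) {
    int failures = 0;
    for (int i = 0; i < n; i++)
        if (server_start(rules[i].disk_full, rules[i].config_invalid)
                != rules[i].expected_effect)
            failures++;
    return failures;
}
```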
5.2.1.4. Comparison Testing
· In some applications, reliability is critical, and redundant hardware and software may be used.
· For redundant software, separate teams develop independent versions of the software.
· Each version is tested with the same test data to ensure that all provide identical output.
· All versions are run in parallel with a real-time comparison of results.
· Even if only one version will run in the final system, for some critical applications one can develop independent versions and use comparison testing or back-to-back testing.
· When the outputs of the versions differ, each is investigated to determine whether there is a defect.
· The method does not catch errors in the specification.
· Exercise on a live application.
5.2.2. White Box Testing
“Test case selection that is based on an analysis of the internal structure of the component.” …BS7925-1
Testing based on an analysis of the internal workings and structure of a piece of software. Also known as structural testing / glass box testing / clear box testing. Tests are based on coverage of code statements, branches, paths and conditions.
• Aims to establish that the code works as designed
• Examines the internal structure and implementation of the program
• Targets specific paths through the program
• Needs accurate knowledge of the design, implementation and code
Test case design techniques under white box testing:
· Statement coverage
· Branch coverage
· Condition coverage
· Path coverage
· Data flow-based testing
· Mutation testing
5.2.2.1. Statement Coverage
“A test case design technique for a component in which test cases are designed to execute statements.” … BS7925-1
Design test cases so that every statement in a program is executed at least once. Unless a statement is executed, we have no way of knowing whether an error exists in that statement.
Example: Euclid's GCD computation algorithm:

    int f1(int x, int y) {
        while (x != y) {
            if (x > y)
                x = x - y;
            else
                y = y - x;
        }
        return x;
    }

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, all statements are executed at least once.
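The claim about the test set can be checked with a small driver. The sketch below repeats the GCD function in standard C syntax and notes which test case first reaches each statement.

```c
/* The GCD function from the example above. */
int f1(int x, int y) {
    while (x != y) {      /* loop body entered by (4,3) and (3,4); skipped by (3,3) */
        if (x > y)
            x = x - y;    /* executed by (4,3) */
        else
            y = y - x;    /* executed by (3,4) */
    }
    return x;             /* executed by every test case */
}
```

Running the statement-coverage test set gives f1(3,3) = 3, f1(4,3) = 1 and f1(3,4) = 1, and together these three calls execute every statement at least once.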
5.2.2.2. Branch Coverage
Branch: “A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or, when a component has more than one entry point, a transfer of control to an entry point of the component.” … BS7925-1
Branch testing: “A test case design technique for a component in which test cases are designed to execute branch outcomes.” … BS7925-1
Branch testing guarantees statement coverage.
Example: Test cases for branch coverage of the GCD function above can be {(x=3, y=3), (x=4, y=3), (x=3, y=4)}.
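One way to check branch coverage concretely is to instrument each branch outcome. The sketch below assumes the GCD function from the statement-coverage example; the flag names and the instrumented copy are invented for illustration.

```c
#include <stdbool.h>

/* Branch-outcome flags for the GCD example; all four must become true
   for 100% branch coverage. */
static bool took_then, took_else, loop_entered, loop_skipped;

int f1_instrumented(int x, int y) {
    if (x == y) loop_skipped = true;   /* while condition false at entry */
    while (x != y) {
        loop_entered = true;           /* while condition true */
        if (x > y) { took_then = true; x = x - y; }
        else       { took_else = true; y = y - x; }
    }
    return x;
}

/* True once every branch outcome has been exercised. */
bool all_branches_covered(void) {
    return took_then && took_else && loop_entered && loop_skipped;
}
```

Running the three test cases {(3,3), (4,3), (3,4)} through `f1_instrumented` sets all four flags, confirming that the set achieves branch coverage.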
5.2.2.3. Condition Coverage
Condition: “A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not.” … BS7925-1
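Condition coverage requires each atomic condition within a decision to evaluate to both true and false across the test set, regardless of the overall decision outcome. A minimal illustrative sketch (the decision, inputs and flag names are invented, not from the text):

```c
#include <stdbool.h>

/* A decision with two atomic conditions: (a < b) and (c > 0). */
bool decision(int a, int b, int c) {
    return (a < b) && (c > 0);
}

/* Track each condition's outcomes across a test set. */
static bool c1_true, c1_false, c2_true, c2_false;

bool run_case(int a, int b, int c) {
    if (a < b) c1_true = true; else c1_false = true;
    if (c > 0) c2_true = true; else c2_false = true;
    return decision(a, b, c);
}

/* True once each atomic condition has been both true and false. */
bool condition_coverage_met(void) {
    return c1_true && c1_false && c2_true && c2_false;
}
```

Two test cases suffice here: (1, 2, 1) makes both conditions true, and (3, 2, -1) makes both false.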