ST Module 1 Notes
Software Testing - 15IS63 | Dept. of ISE | Module 1
Krupa K S, Asst. Professor, Global Academy of Technology
Module 1 - BASICS OF SOFTWARE TESTING

BASIC DEFINITIONS:

Error: When people make mistakes while coding, we call these mistakes bugs. Errors tend to propagate; a requirements error may be magnified during design and amplified still more during coding.

Fault: A fault is the result of an error. It is more precise to say that a fault is the representation of an error, and it can be represented as narrative text, data flow diagrams, hierarchy charts, source code, and so on. Defect is a good synonym for fault, as is bug. When a designer makes an error of omission, the resulting fault is that something is missing that should be present in the representation.

Types of Fault:
Fault of Omission: A fault of omission occurs when we fail to enter correct information.

Fault of Commission: A fault of commission occurs when we enter something into a representation that is incorrect, e.g., subtracting a figure that should have been added.

Of these two types, faults of omission are more difficult to detect and resolve.
"ailure' a failure occurs when a fault e"ecutes and is usually represented to be source code, or loaded ob#ect. In(ident' when a failure occurs, it may or may not be readily apparent to the user $or customer or tester%. !n incident is the symptom associated with a failure that alerts the user to the occurrence of a failure. Test' testing is obiously concerned with errors, faults, failures, and incidents. ! test is the act of e"ercis e"ercising ing softwa software re with with test test cases. cases. ! test test has two disti distinct nct goals& goals& to find failures or to demonstrate correct execution. Test ase' test case has an identity and is associated with a program behaior. ! test case also has a set of inputs and a list of e"pected outputs. Software Testing' Definition' !ccording to the definition gien by 'ae (elperin and William ). *et+el -oftware testing can be stated as the process of alidating and erifying that a software& eets the requirements that guided its design and deelopment. Works as e"pected. -atisfies the needs of stakeholders. -oftware -oftware testing is the process of analy+ing analy+ing a software software item to detect the differences differences between e"isting and required conditions $that is, bugs% and to ealuate the features of the software item. One of the test Krupa K S, Asst. Professor
1 Global Academy of Technology echnolog y
Soft Softwa warre Testi esting ng-1 -15I 5IS6 S63 3 I
Dept Dept., ., of ISE ISE
Modu Module le -
techniques includes the process of e"ecuting a program or application with the intent of finding software bugs $errors or other defects%.
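To make the error/fault/failure/test case vocabulary concrete, here is a minimal sketch (not from the notes; the function and the test values are hypothetical). The fault of commission is a subtraction where an addition was intended; the test case supplies inputs and an expected output, and executing it exposes the failure.

```python
# Hypothetical example: a fault of commission and a test case that exposes it.

def add(a, b):
    # Fault of commission: the developer subtracted instead of adding.
    return a - b          # should be: return a + b

# A test case has an identity, a set of inputs, and a list of expected outputs.
test_case = {"id": "TC-01", "inputs": (2, 3), "expected": 5}

actual = add(*test_case["inputs"])          # the fault executes here
if actual != test_case["expected"]:
    # The wrong result is the failure; the symptom the tester observes
    # (the printed mismatch) is the incident.
    print(f"{test_case['id']} FAILED: expected "
          f"{test_case['expected']}, got {actual}")
else:
    print(f"{test_case['id']} passed")
```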
Necessity:
- To determine a set of test cases.
- To improve the quality of software.
- To build confidence in the software.
- To demonstrate to the developer and the customer that the software meets its requirements.
- To discover faults or defects in the software.
Software Testing Life Cycle:
The significance of the testing life cycle is that it depicts how and when bugs are introduced, tested, and fixed in a software development life cycle (SDLC).

Phases putting Bugs IN:
Requirement specification: Errors are made during requirement collection and analysis.

Design: Testers work with developers in determining what aspects of a design are testable and under what parameters those testers work. Errors introduced in the previous phase result in faults, and more errors are made.

Coding: The design is implemented. Faults and errors introduced in previous phases propagate, and more errors are made.
ase "I%DI%& Bugs'
Testing: Test strategy or planning is done to generate test cases or scenarios, which are executed to find bugs or errors. Bugs found are reported in error logs.
Phases getting Bugs OUT:
Fault classification: Reported faults are assigned to different severity classes such as catastrophic, serious, mild, etc.
Fault isolation: Causes and locations of different faults are pinpointed.
Fault resolution: Patch work is done to fix the faults, which gives another opportunity for errors and new faults to be made.
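As an illustration of how a reported fault might be tracked through classification, isolation, and resolution, here is a small sketch; the record fields follow the severity classes mentioned above, while everything else (names, methods, example values) is a hypothetical choice, not part of the notes.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    # Severity classes mentioned in the notes.
    CATASTROPHIC = 1
    SERIOUS = 2
    MILD = 3

@dataclass
class FaultReport:
    # Hypothetical defect-tracking record for the "bugs OUT" phases.
    fault_id: str
    description: str
    severity: Severity                  # fault classification
    location: str = "unknown"           # filled in during fault isolation
    resolved: bool = False              # set during fault resolution
    history: list = field(default_factory=list)

    def isolate(self, location):
        self.location = location
        self.history.append(f"isolated to {location}")

    def resolve(self, patch_note):
        self.resolved = True
        self.history.append(f"fixed: {patch_note}")

# Example usage:
report = FaultReport("F-101", "sort ignores request character", Severity.SERIOUS)
report.isolate("sort routine: handling of the 'D' flag")
report.resolve("added descending-order branch")
print(report.history)
```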
S!"T#A$E /A+IT0 '-
Software quality is a multidimensional quantity and is measurable.

Quality Attributes: There exist several measures of software quality. These can be divided into static and dynamic quality attributes.
-tatic quality attributes refer to the actual code and related documentation.
Dynamic quality attributes relate to the behavior of the application while in use.
Static quality attributes include structured, maintainable, testable code along with the availability of correct and complete documentation.
You might have come across complaints such as "Product X is excellent, I like the features it offers, but its user manual stinks!" In this case, the user manual brings down the overall product quality. As a part of corrective maintenance on any application code, we need to understand portions of the code before we make any changes to it. This is where attributes such as code documentation, understandability, and structure come into play. A poorly structured and poorly documented piece of code will be harder to understand, modify, and test.
Dynamic quality attributes include software reliability, correctness, completeness, consistency, usability, and performance. Dynamic quality attributes are generally determined through multiple executions of a program.
Reliability refers to the probability of failure-free operation.

Correctness refers to the correct operation of an application and is always with reference to some artifact. For a tester, correctness is with respect to the requirements; for a user, it is often with respect to a user manual.

Completeness refers to the availability of all features listed in the requirements or user manual. An incomplete software is one that does not fully implement all required features. The implemented features might themselves be a subset of a larger set of features that are to be implemented in some future version of the application. Therefore, every piece of software that is correct is also complete with respect to some feature set.

Consistency refers to adherence to a common set of conventions and assumptions, for example, following a common color-coding convention in a user interface. An example of inconsistency is displaying the date of birth of a person in different formats, without any regard for the user's preferences.

Usability refers to the ease with which an application can be used. Usability testing refers to the testing of a product by its potential users. Users in turn test for ease of use, functionality as expected, performance, safety, and security. Users thus serve as an important source of tests that developers or testers within the organization might not have conceived. Usability testing is sometimes referred to as user-centric testing.
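As a small illustration of the consistency attribute described above, the sketch below routes all date display through one shared helper so that every screen uses the same format; the helper name and the chosen format are hypothetical, picked only for the example.

```python
from datetime import date

# Hypothetical convention: all user-visible dates use one shared format.
DATE_FORMAT = "%d %b %Y"   # e.g. "05 Mar 1998"

def format_date(d: date) -> str:
    # Every part of the UI calls this helper, so date display stays consistent.
    return d.strftime(DATE_FORMAT)

dob = date(1998, 3, 5)
print("Date of birth:", format_date(dob))     # consistent: "05 Mar 1998"
# Inconsistent alternative (what the notes warn against): one screen printing
# dob.strftime("%m/%d/%Y") while another prints dob.isoformat().
```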
Performance refers to the time the application takes to perform a requested task and is considered a non-functional requirement. It is specified in terms such as "This task must be performed at the rate of X units of activity in one second on a machine running at speed Y, having Z gigabytes of memory." For example, the performance requirement for a compiler might be stated in terms of the minimum average time to compile a set of numerical applications.

Reliability can be considered as a statistical measure of correctness.
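To show how such a performance statement might be checked, here is a minimal timing sketch; the task, the rate target, and the numbers are hypothetical assumptions, not figures from the notes, and a real measurement would also pin down the machine configuration.

```python
import time

def task(n):
    # Hypothetical unit of activity: sum the first n integers.
    return sum(range(n))

REQUIRED_UNITS_PER_SECOND = 100     # "X units of activity in one second"

start = time.perf_counter()
units_done = 0
while time.perf_counter() - start < 1.0:
    task(10_000)
    units_done += 1

print(f"completed {units_done} units in one second")
print("meets requirement" if units_done >= REQUIRED_UNITS_PER_SECOND
      else "does not meet requirement")
```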
Software reliability is the probability of failure-free operation of software over a given time interval and under given conditions. This definition requires the user operational profile, which is difficult or impossible to estimate accurately. However, if an operational profile can be estimated for a given class of users, then an accurate estimate of the reliability can be found for this class of users.

Software reliability is also defined as the probability of failure-free operation of software in its intended environment. The term "environment" refers to the software and hardware elements needed to execute the application, such as the operating system, hardware requirements, and any other applications needed for communication.
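A crude way to turn this definition into a number is to run the application on inputs drawn from an assumed operational profile and count the failure-free runs; the sketch below is a hypothetical illustration of that idea, not a method given in the notes.

```python
import random

def run_application(x):
    # Hypothetical application under test; fails on a small region of inputs.
    if 0.95 < x < 0.97:
        raise RuntimeError("failure")
    return x * x

# Assumed operational profile: inputs uniformly distributed in [0, 1).
random.seed(1)
trials = 10_000
failure_free = 0
for _ in range(trials):
    try:
        run_application(random.random())
        failure_free += 1
    except RuntimeError:
        pass

reliability = failure_free / trials
print(f"estimated reliability for this operational profile: {reliability:.3f}")
```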
REQUIREMENTS, BEHAVIOR, AND CORRECTNESS:
Products, software in particular, are designed in response to requirements.
Requirements specify the functions that a product is expected to perform.
Once the product is ready, it is the requirements that determine the expected behavior.
During the development phase, the requirements might change from what was stated originally.
Testers test the expected behavior by understanding the requirements.
Example: Here are the two requirements, each of which leads to a different program.
1equirement
Example inputs to sort: <D, 23, 78, ...> and <A, ...>
Correctness: A program is considered correct if it behaves as expected on each element of its input domain.

Valid and invalid inputs:
In the e"amples aboe, the input domains are deried from the requirements. *oweer, the requirements are incomplete. /he requirement mentions that the request characters can be 5!8 or 5'8, but it fails to answer the question 5What if the user types a different character 8 other than 5!8 or 5'8 which is considered as inalid input to sort. /he requirement for sort does not specify what action it should take when an inalid input is encountered. Identifying the set of inalid inputs and testing the program against these inputs is an important part of the testing actiity. /esting a program against inalid inputs might reeal errors in the program. Eaple 1& -uppose that we are testing the sort program against the input& G H B access Wrong flag > inde" alue Incorrect packing > unpacking Wrong ariable used Wrong data reference -caling or unit errors
Krupa K S, Asst. Professor
27 Global Academy of Technology
Software Testing-15IS63 I
Dept., of ISE
Module -
Incorrect data dimension Incorrect -ubscript , Incorrect type Incorrect data scope -ensor data out of limits
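Continuing the sort example, the sketch below shows a sort routine whose behavior is defined only for the request characters "A" and "D", together with a test that probes it with an invalid request character; the function name and the chosen input values are hypothetical, used only to illustrate testing against invalid input.

```python
def sort_sequence(request, values):
    # Behavior is specified only for request characters "A" and "D".
    if request == "A":
        return sorted(values)
    if request == "D":
        return sorted(values, reverse=True)
    # The requirement says nothing about other characters; here we choose
    # to reject them explicitly rather than behave unpredictably.
    raise ValueError(f"invalid request character: {request!r}")

# Test cases: two valid inputs and one invalid input.
print(sort_sequence("A", [23, 7, 78]))   # [7, 23, 78]
print(sort_sequence("D", [23, 7, 78]))   # [78, 23, 7]
try:
    sort_sequence("E", [23, 7, 78])      # invalid request character
except ValueError as e:
    print("invalid input detected:", e)
```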
LEVELS OF TESTING:
The figure below shows the levels of abstraction and testing in the waterfall model.
The levels are as follows:
- Requirement specification: This step involves taking down requirements from the customer and making complete sense of the specified requirements. It corresponds to System Testing.
- Preliminary design: A rough estimate (replica or sketch) of the product required by the customer is made in this phase, taking into account the requirements specified in the previous level. It corresponds to Integration Testing.
- Detailed design: A complete, detailed design of the product is made in this phase, taking into account all the requirements specified in the first phase and the sketch made in the second phase. It corresponds to Unit Testing.
- Coding: The detailed design developed in the previous phase is next coded and a product is developed (in segments or sub-units).
- Unit Testing: The developed sub-units or segments are then tested for their specific functionality. Structural testing is more appropriate at the unit level.
- Integration Testing: The sub-units developed in the coding phase are then integrated into a specific module, which is again tested to check the interactions between the integrated sub-units. Functional testing is more appropriate at the integration level.
- System Testing: The integrated modules are then finally assembled into a finished product, which is tested again to check whether the product performs to the required standards as stated by the customer. Functional testing is more appropriate at the system level.
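As an illustration of the unit-testing level described above, the following sketch exercises a single sub-unit in isolation using Python's built-in unittest module; the unit under test and its test values are hypothetical.

```python
import unittest

def maximum(a, b):
    # A small sub-unit whose specific functionality is tested in isolation.
    return a if a >= b else b

class TestMaximum(unittest.TestCase):
    def test_first_larger(self):
        self.assertEqual(maximum(9, 4), 9)

    def test_second_larger(self):
        self.assertEqual(maximum(-3, 2), 2)

    def test_equal_values(self):
        self.assertEqual(maximum(5, 5), 5)

if __name__ == "__main__":
    unittest.main()
```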
TESTING AND VERIFICATION:
Testing aims at uncovering errors in a program. Program verification aims at proving the correctness of programs by showing that they contain no errors. Verification also aims at showing that a given program works for all possible inputs that satisfy a set of conditions. Program verification and testing are best considered as complementary techniques. In the development of critical applications, such as smart cards or the control of nuclear plants, one often makes use of verification techniques to prove the correctness of some artifact created during the development cycle, not necessarily the complete program. Regardless of such proofs, testing is used invariably to obtain confidence in the correctness of the application. Verification might appear to be a perfect process as it promises to verify that a program is free from errors. However, a close look at verification reveals that it has its own weaknesses: the person who verified a program might have made mistakes in the verification process, or there might be an incorrect assumption about the input conditions. Thus, neither verification nor testing is a perfect technique for proving the correctness of programs.
STATIC TESTING:
Static testing is carried out without executing the application under test. In contrast, dynamic testing requires one or more executions of the application under test. Static testing is useful in that it may lead to the discovery of faults in the application, as well as ambiguities and errors in requirements and other application-related documents, at a relatively low cost.
This is especially so when dynamic testing is expensive.
Static testing is complementary to dynamic testing.
Organizations often sacrifice static testing in favor of dynamic testing, though this is not considered a good practice. Static testing is best carried out by an individual who did not write the code, or by a team of individuals. A sample process of static testing is illustrated in the figure. The test team responsible for static testing has access to the requirements documents, the application, and all associated documents such as the design document and user manuals. The team also has access to one or more static testing tools. A static testing tool takes the application code as input and generates a variety of data useful in the test process.
Figure: Elements of static testing.
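As a toy illustration of a static testing tool that takes code as input and reports data useful to the test team, the sketch below parses a source file with Python's ast module and flags functions that have no docstring or are unusually long; the checks and the threshold are arbitrary assumptions for the example, not rules from the notes.

```python
import ast
import sys

MAX_FUNCTION_LINES = 50   # assumed threshold for the example

def analyze(source_path):
    # Parse the source without executing it: this is static analysis.
    with open(source_path) as f:
        tree = ast.parse(f.read(), filename=source_path)

    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            if ast.get_docstring(node) is None:
                findings.append(f"{node.name}: missing docstring")
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                findings.append(f"{node.name}: {length} lines (too long)")
    return findings

if __name__ == "__main__":
    for finding in analyze(sys.argv[1]):
        print(finding)
```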
1.11.1 Walkthroughs
Walkthroughs and inspections are an integral part of static testing.
A walkthrough is an informal process to review any application-related document. For example, requirements are reviewed using a process termed a requirements walkthrough.
Code walkthrough, also known as peer code review, is used to review code and may be considered a static testing technique.
A walkthrough begins with a review plan agreed upon by all members of the team. Each item of the document, for example a source code module, is reviewed with clearly stated objectives in view. A detailed report is generated that lists items of concern regarding the document reviewed.
In a requirements walkthrough, the test team must review the requirements document to ensure that the requirements match user needs and are free from ambiguities and inconsistencies.
1.11.2 Inspections
Inspection is a more formally defined process than a walkthrough. Several organizations consider formal code inspections as a tool to improve code quality at a lower cost than incurred when dynamic testing is used. Organizations have reported significant increases in productivity and software quality due to the use of code inspections.
Code inspection is a rigorous process for assessing the quality of code. It is carried out by a team that works according to an inspection plan consisting of the following elements: (a) statement of purpose; (b) work product to be inspected, which includes the code and associated documents needed for inspection; (c) team formation, roles, and tasks to be performed; (d) rate at which the inspection task is to be completed; and (e) data collection forms where the team will record its findings, such as defects discovered, coding standard violations, and time spent on each task.
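To make element (e) concrete, here is a hypothetical sketch of a simple data collection record for one inspection session; the field names are assumptions chosen to match the findings listed above, not a prescribed form.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InspectionRecord:
    # Data collection form for one inspection session (element (e) above).
    work_product: str
    defects_found: List[str] = field(default_factory=list)
    standard_violations: List[str] = field(default_factory=list)
    minutes_spent: int = 0

record = InspectionRecord(work_product="sort module")
record.defects_found.append("no action defined for invalid request character")
record.standard_violations.append("function longer than agreed limit")
record.minutes_spent = 45
print(record)
```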
Members of the inspection team are assigned the roles of moderator, reader, recorder, and author.
The moderator is in charge of the process and leads the review.
The actual code is read by the reader, perhaps with the help of a code browser and with monitors for all in the team to view the code.
The recorder records any errors discovered or issues to be looked into.
The author is the actual developer whose primary task is to help others understand the code.
1.11.3 Software complexity and static testing
Often a team must decide which of several modules should be inspected first. Several parameters enter this decision-making process, one of these being module complexity. A more complex module is likely to have more errors and must be accorded higher priority for inspection than a module with lower complexity.
Static analysis tools often compute complexity metrics; one or more of these complexity metrics could be used as a parameter in deciding which modules to inspect first. Certainly, the criticality of the function a module serves in an application could override the complexity metric while prioritizing modules.
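As an illustration of computing a crude complexity metric statically, the sketch below approximates McCabe's cyclomatic complexity for each function in a file by counting decision points with the ast module; treating each branching construct as adding one to a base of one is a common approximation, and the particular node types counted here are an assumption made for the example.

```python
import ast
import sys

# Node types treated as decision points for this approximation.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func_node):
    # Approximate cyclomatic complexity: 1 + number of decision points.
    return 1 + sum(isinstance(n, DECISION_NODES)
                   for n in ast.walk(func_node))

def rank_functions(source_path):
    with open(source_path) as f:
        tree = ast.parse(f.read(), filename=source_path)
    scores = [(cyclomatic_complexity(n), n.name)
              for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    # Highest complexity first: candidates to inspect earliest.
    return sorted(scores, reverse=True)

if __name__ == "__main__":
    for score, name in rank_functions(sys.argv[1]):
        print(f"{name}: complexity {score}")
```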
PROBLEM STATEMENTS:
GENERALI