ST Module 1 Notes


Software Testing - 15IS63 | Dept. of ISE | Module 1

Module1- BASIS !" S!"T#A$E S!"T#A$E TESTI%& BASI DE"I%ITI!%S' Error' When people make mistakes while coding, we call these mistakes bugs. Errors tend to propagate; a requirements error may be magnified during du ring design and amplified still more during coding.   "ault'  a fault is the result of an error. It is more precise to say that a fault is the representation of an error, and can be represented as narrative text, data flow diagrams, hierarchy charts, source code, and so on. Defe(t synonym for fault fault,, as is )ug. When a designer makes an error of omission, the Defe(t is a good synonym resulting fault is that something is missing that should be present in the representation. T*pes of "ault' 



Fault of Omission: Faults of omission occur when we fail to enter correct information. Of the two types of fault, faults of omission are more difficult to detect and resolve.

Fault of Commission: A fault of commission occurs when we enter something into a representation that is incorrect. E.g., subtracting a figure that should have been added.
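As a hedged illustration (the balance function and the transaction figures below are hypothetical, not from the notes), the sketch contrasts the two fault types: a commission fault is something wrong that was entered into the code, while an omission fault is something required that was never entered at all.

```python
# Hypothetical example: computing an account balance from a list of transactions,
# where deposits are positive amounts and withdrawals are negative amounts.

def balance_with_commission_fault(transactions):
    total = 0
    for amount in transactions:
        total -= amount   # fault of commission: '-' was entered where '+' was intended
    return total

def balance_with_omission_fault(transactions):
    # Assume the requirement says an empty statement must be rejected with an error.
    total = 0
    for amount in transactions:
        total += amount   # the arithmetic entered here is correct ...
    return total          # ... but the required empty-statement check was never entered:
                          # a fault of omission

print(balance_with_commission_fault([100, -40]))  # prints -60 instead of the expected 60
print(balance_with_omission_fault([]))            # silently prints 0; the missing check never fires
```

The commission fault is visible in the code itself; the omission fault shows up only as absent behaviour, which is why the notes describe faults of omission as harder to detect and resolve.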

"ailure' a failure occurs when a fault e"ecutes and is usually represented to be source code, or loaded ob#ect. In(ident'  when a failure occurs, it may or may not be readily apparent to the user $or customer or tester%. !n incident is the symptom associated with a failure that alerts the user to the occurrence of a failure. Test'  testing is obiously concerned with errors, faults, failures, and incidents. ! test is the act of  e"ercis e"ercising ing softwa software re with with test test cases. cases. ! test test has two disti distinct nct goals& goals& to find failures or to demonstrate correct execution. Test ase'  test case has an identity and is associated with a program behaior. ! test case also has a set of inputs and a list of e"pected outputs. Software Testing' Definition'  !ccording to the definition gien by 'ae (elperin and William ). *et+el  -oftware testing can be stated as the process of alidating and erifying that a software&  eets the requirements that guided its design and deelopment.  Works as e"pected.  -atisfies the needs of stakeholders. -oftware -oftware testing is the process of analy+ing analy+ing a software software item to detect the differences differences between e"isting and required conditions $that is, bugs% and to ealuate the features of the software item. One of the test Krupa K S, Asst. Professor


One testing technique is the process of executing a program or application with the intent of finding software bugs (errors or other defects).
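A minimal sketch of a test case as defined above: an identity, an associated program behaviour, a set of inputs, and the expected output. The function under test and the values are hypothetical, chosen only for illustration.

```python
# Hypothetical test case for a max_of_two(x, y) function that returns the larger input.

def max_of_two(x, y):
    return x if x >= y else y

test_case = {
    "id": "TC-01",                    # identity of the test case
    "behaviour": "max of two ints",   # program behaviour it is associated with
    "inputs": (3, 7),
    "expected_output": 7,
}

actual = max_of_two(*test_case["inputs"])
result = "PASS" if actual == test_case["expected_output"] else "FAIL"
print(test_case["id"], result)
```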

Necessity:

- To determine a set of test cases.
- To improve the quality of software.
- To build confidence in the software.
- To demonstrate to the developer and the customer that the software meets its requirements.
- To discover faults or defects in the software.

Software Testing Life Cycle:

The significance of the testing life cycle is that it depicts how and when bugs are introduced, tested, and fixed in a software development life cycle (SDLC).

Phases putting Bugs IN:





1equirement specification& Errors are made during requirement collection and analysis. 'esign& /esters work with deelopers in determining what aspects of a design are testable and under what parameter parameter those testers testers work. Errors Errors introduced in preious preious phase results results into faults faults and more errors are made. )oding& 'esign is implemented. Faults and errors introduced in preious phases propagate and more errors are made.

ase "I%DI%& Bugs' 

/esting& /est strategy or planning is done to generate test cases or scenarios, which are e"ecuted to find bugs or errors. 2ugs found are reported in error logs.

Phases getting Bugs OUT:

Fault classification& 1eported faults are assigned to different seerity classes like catastrophic, serious, mild etc.



Fault isolation& )auses and location of different faults are pinpointed.



Fault resolution& 3atch work is done to fi" the faults which gie another opportunity for error and new faults to be made.


S!"T#A$E /A+IT0 '-

Software quality is a multidimensional quantity and is measurable.

Quality Attributes: There exist several measures of software quality. These can be divided into static and dynamic quality attributes.

- Static quality attributes refer to the actual code and related documentation.



- Dynamic quality attributes relate to the behavior of the application while in use.

Static quality attributes include structured, maintainable, testable code along with the availability of correct and complete documentation.


4ou might hae come across complaints such as 53roduct 6 is e"cellent, I like the features it offers, but its user manual stinks78 In this case, the user manual brings down do wn the oerall product quality. !s a part of correctie maintenance on any application code, we need to understand portions of the code  before we make any changes to it. /his is where attributes such as code documentation, understandability, understandability, and structure come into play. ! poorly structured and poorly documented piece of code will be harder to understand, modify and difficult to test.

Dynamic quality attributes include software reliability, correctness, completeness, consistency, usability, and performance. Dynamic quality attributes are generally determined through multiple executions of a program.











- Reliability refers to the probability of failure-free operation.
- Correctness refers to the correct operation of an application and is always with reference to some artifact. For a tester, correctness is with respect to the requirements; for a user, it is often with respect to a user manual.
- Completeness refers to the availability of all features listed in the requirements or user manual. An incomplete software is one that does not fully implement all required features. The implemented features might themselves be a subset of a larger set of features that are to be implemented in some future version of the application. Therefore, every piece of software that is correct is also complete with respect to some feature set.
- Consistency refers to adherence to a common set of conventions and assumptions, for example, following a common color-coding convention in a user interface. An example of inconsistency is displaying the date of birth of a person in different formats, without any regard for the user's preferences.
- Usability refers to the ease with which an application can be used. Usability testing refers to the testing of a product by its potential users. Users in turn test for ease of use, functionality as expected, performance, safety, and security. Users thus serve as an important source of tests that developers or testers within the organization might not have conceived. Usability testing is sometimes referred to as user-centric testing.

- Performance refers to the time the application takes to perform a requested task and is considered a non-functional requirement. It is specified in terms such as "This task must be performed at the rate of X units of activity in one second on a machine running at speed Y, having Z gigabytes of memory." For example, the performance requirement for a compiler might be stated in terms of the minimum average time to compile a set of numerical applications.
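A hedged sketch of checking a rate-based performance requirement of the kind quoted above. The task, the required rate of 100 units per second, and the measurement approach are illustrative assumptions rather than anything stated in the notes.

```python
import time

def process_unit(x):
    # Hypothetical unit of activity; stands in for whatever the requirement measures.
    return x * x

def units_per_second(n_units=10_000):
    start = time.perf_counter()
    for i in range(n_units):
        process_unit(i)
    elapsed = time.perf_counter() - start
    return n_units / elapsed

REQUIRED_RATE = 100  # illustrative requirement: 100 units of activity per second

rate = units_per_second()
print(f"measured rate: {rate:.0f} units/s -> "
      f"{'meets' if rate >= REQUIRED_RATE else 'fails'} the stated requirement")
```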


Reliability can be considered as a statistical measure of correctness.

- Software reliability is the probability of failure-free operation of software over a given time interval and under given conditions. This definition requires knowledge of the user's operational profile, which is difficult or impossible to estimate accurately. However, if an operational profile can be estimated for a given class of users, then an accurate estimate of the reliability can be found for this class of users.
- Software reliability can also be defined as the probability of failure-free operation of software in its intended environment. The term "environment" refers to the software and hardware elements needed to execute the application, such as the operating system, hardware requirements, and any other applications needed for communication.
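As a hedged illustration of reliability as a statistical measure of correctness, the sketch below estimates it as the fraction of failure-free runs over inputs drawn from an assumed operational profile. The program under test, the profile, and the number of runs are all illustrative assumptions.

```python
import random

def program_under_test(x):
    # Hypothetical application; raises an exception (a failure) for some inputs.
    if x < 0:
        raise ValueError("unexpected negative input")
    return x ** 0.5

def draw_from_operational_profile():
    # Assumed profile: most users supply small non-negative values,
    # but a small fraction supply negative ones.
    return random.gauss(mu=10, sigma=5)

def estimate_reliability(runs=10_000):
    failure_free = 0
    for _ in range(runs):
        try:
            program_under_test(draw_from_operational_profile())
            failure_free += 1
        except Exception:
            pass
    return failure_free / runs

print(f"estimated reliability for this profile: {estimate_reliability():.4f}")
```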

$E/I$EME%TS, BEA4I!$, A%D !$$ET%ESS'

Products, software in particular, are designed in response to requirements.



- Requirements specify the functions that a product is expected to perform.



- Once the product is ready, it is the requirements that determine its expected behavior.



'uring deelopment phase, the requirements might get changed from what was stated originally.



- Testers test the expected behavior by understanding the requirements.

Eaple *ere are the two requirements, each of which leads to a different program.

Requirement 1:


< D 23 78.> < A .>

Correctness: A program is considered correct if it behaves as expected on each element of its input domain.

Valid and invalid inputs:

In the e"amples aboe, the input domains are deried from the requirements. *oweer, the requirements are incomplete. /he requirement mentions that the request characters can be 5!8 or 5'8, but it fails to answer the question 5What if the user types a different character 8 other than 5!8 or 5'8 which is considered as inalid input to sort. /he requirement for sort does not specify what action it should take when an inalid input is encountered. Identifying the set of inalid inputs and testing the program against these inputs is an important part of the testing actiity. /esting a program against inalid inputs might reeal errors in the program. Eaple 1& -uppose that we are testing the sort program against the input& G H B access Wrong flag > inde" alue Incorrect packing > unpacking Wrong ariable used Wrong data reference -caling or unit errors       


Examples of data faults:
- Incorrect access
- Wrong flag or index value
- Incorrect packing/unpacking
- Wrong variable used
- Wrong data reference
- Scaling or unit errors
- Incorrect data dimension
- Incorrect subscript
- Incorrect type
- Incorrect data scope
- Sensor data out of limits

+E4E+S !" TESTI%&'-

The figure below shows the levels of abstraction and testing in the waterfall model.

Te leels are as follows' ; $e2uireent spe(ifi(ation'   /his step inoles taking down requirements from the customer and making a complete sense of the specified requirements. It corresponds to -ystem /esting. ; reliinar* design'  ! rough estimate $replica or sketch% of the product required by the customer is made in this phase taking into account the requirements specified in the preious leel. It corresponds to Integration /esting ; Detailed design'  ! complete detailed design of the product is made in this phase taking into account all the requirements specified in the first phase and the sketch made in the second phase. . It corresponds to :nit /esting. ; oding' /he detailed design deeloped in the preious phase $i.e. detail design% is ne"t coded and a  product is deeloped $in segments or subunits%. ; nit Testing'   /he deeloped subunits or segments are then tested for their specific functionality. -tructural testing is more appropriate at unit leel testing.


- Integration Testing: The subunits developed in the coding phase are then integrated into a specific module, which is again tested to check the interactions between the integrated subunits. Functional testing is more appropriate at the integration level (see the sketch after this list).
- System Testing: The integrated modules are then finally assembled into a finished product, which is tested again to check whether the product performs to the required standards mentioned by the customer. Functional testing is more appropriate at the system level.
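A hedged sketch of the difference between the unit and integration levels, using two hypothetical subunits (a parser and a formatter) that are later combined into one module.

```python
# Two hypothetical subunits.
def parse_amount(text):
    """Parse a decimal string such as '12.50' into an integer number of cents."""
    rupees, _, paise = text.partition('.')
    return int(rupees) * 100 + int(paise or 0)

def format_amount(cents):
    """Format an integer number of cents back into a decimal string."""
    return f"{cents // 100}.{cents % 100:02d}"

# Unit testing: each subunit is exercised for its specific functionality in isolation.
assert parse_amount("12.50") == 1250
assert format_amount(1250) == "12.50"

# Integration testing: the subunits are combined into one module and their interaction checked.
def reformat(text):
    return format_amount(parse_amount(text))

result = reformat("7.5")
print(result)  # '7.05' rather than the intended '7.50': an interface mismatch between subunits
```

Each subunit passes its own unit tests, but integrating them exposes a mismatch in how the two interpret a one-digit fraction, which is exactly the kind of interaction that integration testing is meant to check.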

TESTI%& A%D 4E$I"IATI!% ' 













- Testing aims at uncovering errors in a program.
- Program verification aims at proving the correctness of a program by showing that it contains no errors. Verification also aims at showing that a given program works for all possible inputs that satisfy a set of conditions.
- Program verification and testing are best considered as complementary techniques.
- In the development of critical applications, such as smart cards or the control of nuclear plants, verification techniques are often used to prove the correctness of some artifact created during the development cycle, not necessarily the complete program. Regardless of such proofs, testing is invariably used to obtain confidence in the correctness of the application.
- Verification might appear to be a perfect process, as it promises to verify that a program is free from errors. However, a close look reveals that verification has its own weaknesses: the person who verified a program might have made mistakes in the verification process, or there might be an incorrect assumption about the input conditions.
- Thus, neither verification nor testing is a perfect technique for proving the correctness of programs.
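A hedged illustration of the contrast: a test exercises a few chosen inputs and looks for failures, while verification argues that the program works for all inputs satisfying a stated condition. True verification is a proof; the sketch below only approximates it by exhausting a small bounded domain, and the function and bound are assumptions.

```python
def absolute_value(x):
    # Hypothetical program under study.
    return x if x >= 0 else -x

# Testing: exercise a handful of test cases and look for failures.
for value, expected in [(3, 3), (-5, 5), (0, 0)]:
    assert absolute_value(value) == expected

# Verification-flavoured check: show the property holds for *all* inputs satisfying a
# stated condition. A real proof covers the whole domain; here we can only approximate
# it by exhausting a small bounded set of integers.
assert all(absolute_value(x) >= 0 and absolute_value(x) in (x, -x)
           for x in range(-1000, 1001))
print("spot tests passed; property holds on the bounded domain")
```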

STATI TESTI%&' 



- Static testing is carried out without executing the application under test. In contrast, dynamic testing requires one or more executions of the application under test.
- Static testing is useful in that it may lead to the discovery of faults in the application, as well as ambiguities and errors in requirements and other application-related documents, at a relatively low cost.




- This is especially so when dynamic testing is expensive.



- Static testing is complementary to dynamic testing.








- Organizations often sacrifice static testing in favor of dynamic testing, though this is not considered a good practice.
- Static testing is best carried out by an individual who did not write the code, or by a team of individuals.
- A sample process of static testing is illustrated in the figure. The test team responsible for static testing has access to the requirements documents, the application, and all associated documents such as the design document and user manuals. The team also has access to one or more static testing tools. A static testing tool takes the application code as input and generates a variety of data useful in the test process (a minimal sketch of such a check appears after the figure caption below).

Figure: Elements of static testing.
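A hedged sketch of the kind of data a simple static testing tool could generate from the application code without executing it, here using Python's ast module to flag functions with overly long parameter lists. The threshold and the file name are illustrative assumptions, not part of the notes.

```python
import ast

MAX_PARAMS = 5  # illustrative coding-standard limit

def long_parameter_lists(source_path):
    """Report functions whose parameter count exceeds MAX_PARAMS, without running the code."""
    with open(source_path) as f:
        tree = ast.parse(f.read(), filename=source_path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            n_params = len(node.args.args)
            if n_params > MAX_PARAMS:
                findings.append((node.name, node.lineno, n_params))
    return findings

if __name__ == "__main__":
    # "application.py" is a hypothetical module under static test.
    for name, line, count in long_parameter_lists("application.py"):
        print(f"{name} (line {line}): {count} parameters exceeds the limit of {MAX_PARAMS}")
```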

1.11.1 Walkthroughs 

Walkthroughs and inspections are an integral part of static testing.



- A walkthrough is an informal process to review any application-related document. For example, requirements are reviewed using a process termed requirements walkthrough.



- Code walkthrough, also known as peer code review, is used to review code and may be considered a static testing technique.



- A walkthrough begins with a review plan agreed upon by all members of the team. Each item of the document, for example a source code module, is reviewed with clearly stated objectives in view. A detailed report is generated that lists items of concern regarding the document reviewed.


- In a requirements walkthrough, the test team must review the requirements document to ensure that the requirements match the user needs and are free from ambiguities and inconsistencies.

1.11.2 Inspections



 

- Inspection is a more formally defined process than a walkthrough. Several organizations consider formal code inspections as a tool to improve code quality at a lower cost than that incurred when dynamic testing is used. Organizations have reported significant increases in productivity and software quality due to the use of code inspections.

- Code inspection is a rigorous process for assessing the quality of code. It is carried out by a team that works according to an inspection plan consisting of the following elements: (a) a statement of purpose; (b) the work product to be inspected, which includes the code and associated documents needed for inspection; (c) team formation, roles, and tasks to be performed; (d) the rate at which the inspection task is to be completed; and (e) data collection forms on which the team records its findings, such as defects discovered, coding standard violations, and time spent on each task.



- Members of the inspection team are assigned the roles of moderator, reader, recorder, and author.



- The moderator is in charge of the process and leads the review.



- The actual code is read by the reader, perhaps with the help of a code browser and with monitors for everyone in the team to view the code.



- The recorder records any errors discovered or issues to be looked into.



- The author is the actual developer, whose primary task is to help others understand the code.

1.11.3 Software complexity and static testing



- Often a team must decide which of several modules should be inspected first. Several parameters enter this decision-making process, one of these being module complexity.
- A more complex module is likely to have more errors and must be accorded higher priority for inspection than a module with lower complexity.


- Static analysis tools often compute complexity using one or more complexity metrics; such a metric could be used as a parameter in deciding which modules to inspect first. Certainly, the criticality of the function a module serves in an application could override the complexity metric while prioritizing modules.
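A hedged sketch of a crude complexity metric such a tool might compute: counting decision points per function with Python's ast module as a rough proxy for cyclomatic complexity. The chosen node types and the file name are assumptions made for illustration.

```python
import ast

# Node types treated as decision points in this rough proxy for cyclomatic complexity.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def complexity_per_function(source_path):
    """Return {function name: 1 + number of decision points}, computed without executing the code."""
    with open(source_path) as f:
        tree = ast.parse(f.read(), filename=source_path)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISION_NODES) for n in ast.walk(node))
            scores[node.name] = 1 + decisions
    return scores

if __name__ == "__main__":
    scores = complexity_per_function("application.py")  # hypothetical module to prioritize
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: complexity {score}")  # inspect the most complex functions first
```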

PROBLEM STATEMENTS:

&E%E$A+I