Domain Testing

Slide Content

Domain Testing

Domain Testing
Domain testing is based on knowledge of and expertise in the domain of the application. This type of testing requires business domain knowledge rather than knowledge of what the software specification contains or how the software is written. Thus domain testing can be considered an extension of black box testing. Since depth in the business domain is a prerequisite for this type of testing, it is often easier to hire testers from the domain area and train them in testing than to train software professionals in the business domain. This reduces the effort and time required for training the testers and also increases the effectiveness of domain testing.

Domain testing is the ability to design and execute test cases that relate to the people who will buy and use the software. It helps in understanding the problems they are trying to solve and the ways in which they use the software to solve them. It is also characterized by how well an individual test engineer understands the operation of the system and the business processes the system is supposed to support. If a tester does not understand the system or the business processes, it would be very difficult for them to test the application effectively. Domain testing exercises the product by following the end user's business flow rather than the logic built into the product: the business flow determines the steps, not the software under test. This is also called "business vertical testing".

Consider the example of the cash withdrawal functionality of an ATM. The user performs the following actions:

Step 1: Go to the ATM
Step 2: Insert the ATM card
Step 3: Enter the correct PIN
Step 4: Choose cash withdrawal
Step 5: Enter the amount
Step 6: Take the cash
Step 7: Exit and retrieve the card

In this example, a domain tester is not concerned with testing everything in the design; there are several steps in the design logic that are not necessarily exercised by the above flow. When a test case is written for domain testing, intermediate steps may be missed. These "missed steps" are expected to be working before domain testing starts.
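One way to capture this business flow as an executable check is sketched below in Python. This is only a minimal illustration: the AtmSession object and its methods (insert_card, enter_pin, select, withdraw, dispensed_amount, card_returned) are hypothetical names invented for the example, not part of the slides or of any real ATM test harness.

# Minimal sketch of a domain (business-flow) test for cash withdrawal.
# All names below (AtmSession, insert_card, enter_pin, select, withdraw,
# dispensed_amount, card_returned) are hypothetical, for illustration only.

def test_cash_withdrawal_business_flow(atm: "AtmSession"):
    atm.insert_card("4000-1234-5678-9010")  # Step 2: insert the ATM card
    atm.enter_pin("4321")                   # Step 3: enter the correct PIN
    atm.select("cash withdrawal")           # Step 4: choose cash withdrawal
    atm.withdraw(amount=100)                # Step 5: enter the amount
    # Steps 6 and 7: the user takes the cash and retrieves the card.
    assert atm.dispensed_amount == 100      # cash was dispensed
    assert atm.card_returned is True        # card was returned on exit

The point of the sketch is that the steps come from the business flow, not from the internal design logic; any intermediate design steps are assumed to be working before this test is run.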

Context of white box, black box and domain testing

Requirements based testing
Requirements testing deals with validating the requirements given in the Software Requirements Specification (SRS) of the software system. Not all requirements are explicitly stated; some are implied or implicit. Explicit requirements are stated and documented as part of the requirements specification. Implied or implicit requirements are not documented but are assumed to be incorporated in the system.

The precondition for requirements testing is a detailed review of the requirements specification. The review ensures that the requirements are consistent, correct, complete and testable. This process also ensures that some implied requirements are converted and documented as explicit requirements. All explicit and implied requirements are collected and documented in a "Test Requirements Specification" (TRS).

Requirements based testing can also be conducted from such a TRS, as it captures the tester's perspective as well.

A requirements specification for the lock and key example

Once test case creation is complete, the Requirements Traceability Matrix (RTM) helps in identifying the relationship between requirements and test cases. The following combinations are possible:

One to one – for each requirement there is one test case
One to many – for each requirement there are many test cases
Many to one – a set of requirements is tested by one test case
Many to many – many requirements can be tested by many test cases
One to none – a set of requirements can have no test cases

The RTM provides a wealth of information on various test metrics. Some of the metrics that can be collected or inferred from this matrix are as follows:

Requirements addressed priority-wise – this metric helps in knowing the test coverage based on the requirements.
Number of test cases requirement-wise – for each requirement, the total number of test cases created.
Total number of test cases prepared – the total of all test cases prepared for all requirements.

Once the test cases are executed, the test results can be used to collect metrics such as:

Total number of test cases passed
Total number of test cases failed
Total number of defects in requirements
Number of requirements completed
Number of requirements pending
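The relationships and metrics described above can be pictured with a small, illustrative traceability structure. The Python sketch below is not from the slides; the requirement and test case identifiers are invented purely to show how the mapping combinations and the execution metrics fit together.

# Illustrative RTM: each requirement maps to the test cases written for it,
# and each test case records the result of its latest execution.
rtm = {
    "REQ-01": ["TC-01"],                    # one to one
    "REQ-02": ["TC-02", "TC-03", "TC-04"],  # one to many
    "REQ-03": ["TC-05"],                    # many to one: REQ-03 and REQ-04
    "REQ-04": ["TC-05"],                    #   are both tested by TC-05
    "REQ-05": [],                           # one to none (no test case yet)
}
results = {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail",
           "TC-04": "pass", "TC-05": "pass"}

# Metrics inferred from the matrix and from the execution results.
total_test_cases = len({tc for tcs in rtm.values() for tc in tcs})
passed = sum(1 for r in results.values() if r == "pass")
failed = sum(1 for r in results.values() if r == "fail")
completed = [req for req, tcs in rtm.items()
             if tcs and all(results[tc] == "pass" for tc in tcs)]
pending = [req for req in rtm if req not in completed]

print(f"test cases prepared: {total_test_cases}")  # 5
print(f"passed: {passed}, failed: {failed}")       # 4, 1
print(f"requirements completed: {completed}")      # REQ-01, REQ-03, REQ-04
print(f"requirements pending: {pending}")          # REQ-02, REQ-05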

USING WHITE BOX APPROACH TO TEST DESIGN
The tester's goal is to determine if all the logical and data elements in the software unit are functioning properly. This is called the white box, or glass box, approach to test case design. A black box test design strategy can be used for both small and large software components, whereas white box based test design is most useful when testing small components.

White Box Testing is the testing of a software solution's internal coding and infrastructure. It focuses primarily on strengthening security, on the flow of inputs and outputs through the application, and on improving design and usability. White box testing is also known as Clear Box testing, Open Box testing, Structural testing, Transparent Box testing, Code-Based testing, and Glass Box testing.

What do you verify in White Box Testing?
White box testing involves testing the software code for the following:

Internal security holes
Broken or poorly structured paths in the coding processes
The flow of specific inputs through the code
Expected output
The functionality of conditional loops
Testing of each statement, object and function on an individual basis

How do you perform White Box Testing?
STEP 1) UNDERSTAND THE SOURCE CODE
The first thing a tester will often do is learn and understand the source code of the application. Since white box testing involves testing the inner workings of an application, the tester must be very knowledgeable in the programming languages used in the application under test.

Step 2) CREATE TEST CASES AND EXECUTE
The second basic step in white box testing involves testing the application's source code for proper flow and structure. One way is to write more code to test the application's source code; the tester develops small tests for each process, or series of processes, in the application.
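As a concrete illustration of "writing more code to test the application's source code", the short Python sketch below exercises an invented function directly against its internal decision logic. Both the function and the tests are assumptions made for this example only.

# Unit under test: a small function with an internal branch.
def apply_discount(price: float, is_member: bool) -> float:
    """Members get 10% off; non-members pay full price."""
    if is_member:
        return round(price * 0.9, 2)
    return price

# White box tests: one test case per outcome of the if statement.
def test_member_branch():
    assert apply_discount(100.0, is_member=True) == 90.0

def test_non_member_branch():
    assert apply_discount(100.0, is_member=False) == 100.0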

TEST ADEQUACY CRITERIA
The goal of white box testing is to ensure that the internal components of a program are working properly. A common focus is on structural elements such as statements and branches. The tester develops test cases that exercise these structural elements to determine if defects exist in the program structure.

Testers need a framework for deciding which structural elements to select as the focus of testing, for choosing the appropriate test data, and for deciding when the testing effort is adequate enough to terminate the process with confidence that the software is working properly. Such a framework exists in the form of test adequacy criteria. Formally, a test data adequacy criterion is a stopping rule: it determines whether or not sufficient testing has been carried out.

The application scope of adequacy criteria also includes:

helping testers to select properties of a program to focus on during test;
helping testers to select a test data set for a program based on the selected properties;
supporting testers with the development of quantitative objectives for testing;
indicating to testers whether or not testing can be stopped for that program.

A program is said to be adequately tested with respect to a given criterion if all of the target structural elements have been exercised according to that criterion. Using the selected adequacy criterion, a tester can terminate testing when he or she has exercised the target structures, and have some confidence that the software will function in a manner acceptable to the user.
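Read as a stopping rule, an adequacy criterion boils down to comparing the measured coverage of the target structures against the degree of coverage planned for the criterion. The small Python sketch below is only a schematic illustration of that idea, not a description of any real tool.

def testing_can_stop(exercised: int, total: int, planned_degree: float = 1.0) -> bool:
    # A test data adequacy criterion acts as a stopping rule: stop when the
    # fraction of target structural elements (e.g. statements or branches)
    # exercised by the test set reaches the planned degree of coverage.
    if total == 0:
        return True  # nothing to cover
    return exercised / total >= planned_degree

print(testing_can_stop(exercised=2, total=4))  # False: only 50% covered, keep testing
print(testing_can_stop(exercised=4, total=4))  # True: the criterion is satisfied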

If a test data adequacy criterion focuses on the structural properties of a program, it is said to be a program-based adequacy criterion. Program-based adequacy criteria are commonly applied in white box testing. They use either logic and control structures, data flow, program text, or faults as the focal point of an adequacy evaluation.

Other types of test data adequacy criteria focus on program specifications. These are called specification-based test data adequacy criteria. Finally, some test data adequacy criteria ignore both program structure and specification in the selection and evaluation of test data. An example is the random selection criterion.

Adequacy criteria are usually expressed as statements that describe the property or feature of interest and the conditions under which testing can be stopped (the criterion is satisfied). For example, an adequacy criterion that focuses on statement/branch properties is expressed as follows: a test set T for program P is statement (or branch) adequate if it causes all of the statements (or branches) in P to be executed.

Other types of program-based test data adequacy criteria are in use; for example, those based on exercising program paths from entry to exit, and those based on execution of specific path segments derived from data flow combinations such as definitions and uses of variables.

The concept of test data adequacy criteria, and the requirement that certain features or properties of the code are exercised by test cases, leads to an approach called coverage analysis, which in practice is used to set testing goals and to develop and evaluate test data. In the context of coverage analysis, testers often refer to test adequacy criteria as coverage criteria. For example, if a tester sets a goal for a unit specifying that the tests should be statement adequate, this goal is often expressed as a requirement for complete, or 100%, statement coverage. It follows from this requirement that the test cases developed must ensure that all the statements in the unit are executed at least once.

When a coverage-related testing goal is expressed as a percentage, it is often called the degree of coverage. The planned degree of coverage is specified in the test plan and then measured by a coverage tool when the tests are actually executed. The planned degree of coverage is usually specified as 100% if the tester wants to completely satisfy the commonly applied test adequacy, or coverage, criteria. Under some circumstances, the planned degree of coverage may be less than 100%, possibly due to the following:

• The nature of the unit
  - Some statements/branches may not be reachable.
  - The unit may be simple, and not mission or safety critical, so complete coverage is thought to be unnecessary.
• The lack of resources
  - The time set aside for testing is not adequate to achieve 100% coverage.
  - There are not enough trained testers to achieve complete coverage for all of the units.
  - There is a lack of tools to support complete coverage.
• Other project-related issues such as timing, scheduling, and marketing constraints.

Example
Suppose a tester specifies branches as the target property for a series of tests. A reasonable testing goal would be satisfaction of the branch adequacy criterion. This could be specified in the test plan as a requirement for 100% branch coverage for the software unit under test. In this case the tester must develop a set of test data that ensures that all of the branches (true/false outcomes of each condition) in the unit will be executed at least once by the test cases. When the planned test cases are executed under the control of a coverage tool, the actual degree of coverage is measured.

If there are, for example, four branches in the software unit, and only two are executed by the planned set of test cases, then the degree of branch coverage is 50%. All four of the branches must be executed by a test set in order to achieve the planned testing goal. When a coverage goal is not met, as in this example, the tester develops additional test cases and re-executes the code. This cycle continues until the desired level of coverage is achieved. The greater the degree of coverage, the more adequate the test set.
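The situation in this example can be made concrete with a unit containing four branches (two if statements, each with a true and a false outcome). The Python sketch below is invented for illustration; in practice the actual degree of branch coverage would be measured by a coverage tool (for Python code, for instance, coverage.py run with branch measurement enabled) rather than counted by hand.

# Unit under test: two if statements give four branches (a true and a false
# outcome for each condition). Function and tests are invented for illustration.
def classify(order_total: float, is_member: bool) -> str:
    label = "standard"
    if order_total > 100:      # branch pair 1
        label = "large"
    if is_member:              # branch pair 2
        label += "-member"
    return label

# Planned test set: exercises only the two "true" branches -> 50% branch coverage.
def test_large_member_order():
    assert classify(150.0, is_member=True) == "large-member"

# Additional test case written after the coverage tool reports 50%: it takes the
# two "false" branches, raising branch coverage for the unit to 100%.
def test_small_non_member_order():
    assert classify(20.0, is_member=False) == "standard"

Running the first test alone corresponds to the 50% situation described above; adding the second test and re-executing the code satisfies the 100% branch coverage goal.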

When the tester achieves 100% coverage according to the selected criterion, the test data has satisfied that criterion; it is said to be adequate for that criterion. An implication of this process is that a higher degree of coverage will lead to a greater number of detected defects. It should be mentioned that the concept of coverage is not only associated with white box testing. Coverage can also be applied to testing with usage profiles, where the testers want to ensure that all usage patterns have been covered by the tests. Testers also use coverage concepts to support black box testing; for example, a testing goal might be to exercise, or cover, all functional requirements, all equivalence classes, or all system features. In contrast to black box approaches, however, white box based coverage goals have stronger theoretical and practical support.