SOFTWARE TESTING unit 1 types of software testing.pptx
dishamasane
75 slides
Sep 22, 2024
SOFTWARE TESTING UNIT 1 Basics of Software Testing and Testing Methods Presented by M S RATHOD LIF, GPA
COURSE OUTCOMES (COs)
At the end of this course, the student will be able to:
1) Apply various software testing methods.
2) Prepare test cases for different types and levels of testing.
3) Prepare a test plan for an application.
4) Identify bugs to create a defect report for a given application.
5) Test software for performance measures using automated testing tools.
6) Apply different testing tools for a given application.
Basics of Software Testing and Testing Methods
- Identify errors and bugs in a given program.
- Prepare test cases for a given application.
- Describe the entry and exit criteria for a given test application.
- Validate a given application using the V-model in relation to quality assurance.
- Describe the features of a given testing method.
What is Software Testing? Software testing is the process of evaluating the correctness of software by considering all of its attributes (reliability, scalability, portability, usability, reusability) and by executing its components to find bugs, errors, or defects. It provides an objective view of the software and gives assurance of its fitness for use. It involves testing all components under the required conditions to confirm whether they satisfy the specified requirements. The process also provides the client with information about the quality of the software.
Objectives of Testing
- Verification: allows testers to confirm that the software meets the various business and technical requirements stated by the client before the inception of the whole project.
- Validation: confirms that the software performs as expected and as per the requirements of the clients.
- Defects: to find defects in the software in order to prevent its failure or crash during implementation or go-live of the project.
- Providing information: during the process of software testing, testers can accumulate a variety of information related to the software and the steps taken to prevent its failure.
Objectives of Testing
- Quality analysis: testing helps improve the quality of the software by constantly measuring and verifying its design and coding.
- Compatibility: it helps validate the application's compatibility with the implementation environment, various devices, operating systems, and user requirements, among other things.
- Verifying performance and functionality: it ensures that the software has superior performance and functionality.
Software Development Life Cycle (SDLC)
Some Terminologies
- Bug: an informal name for a defect. Bugs are submitted by test engineers.
- Defect: the difference between the actual outcome and the expected output. Defects are identified by testers and resolved by developers in the development phase.
- Error: a mistake made in the code, which is why the code cannot be compiled or executed. Errors are raised by developers and automation test engineers.
- Fault: a state that causes the software to fail to accomplish its essential function. Faults are caused by human mistakes.
- Failure: the result when the software has many defects; accumulated defects lead to failure. Failures are found by manual test engineers through the development cycle.
Test, Test Case and Test Suite Test and test case terms are synonyms and may be used interchangeably. A test case consists of inputs given to the program and its expected outputs. Every test case will have a unique identification number. The set of test cases is called a test suite. All test suites should be preserved as we preserve source code and other documents.
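As an illustration (the `add` function and the field names below are hypothetical, not from the slides), a test case with a unique ID, inputs, and expected output, grouped into a test suite, could be sketched in Python as:

```python
# Minimal sketch of test cases and a test suite.
# The function under test and the field names are illustrative.

def add(a, b):
    """Program under test."""
    return a + b

# Each test case has a unique ID, its inputs, and the expected output.
test_suite = [
    {"id": "TC-001", "inputs": (2, 3), "expected": 5},
    {"id": "TC-002", "inputs": (-1, 1), "expected": 0},
]

for tc in test_suite:
    actual = add(*tc["inputs"])
    print(tc["id"], "PASS" if actual == tc["expected"] else "FAIL")
```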
Software Testing Life Cycle
What are Entry and Exit Criteria in the STLC?
- Entry criteria: the prerequisite items that must be completed before testing can begin.
- Exit criteria: the items that must be completed before testing can be concluded.
Software Testing Life Cycle
Requirement Analysis: the tester analyses the requirement document of the SDLC to examine the requirements stated by the client. After examining the requirements, the tester makes a test plan to check whether the software meets the requirements.
Entry criteria: the requirement specification, application architecture document, and well-defined acceptance criteria should be available for planning the test plan.
Activities: prepare the list of all requirements and queries, and get them resolved by the Technical Manager/Lead, System Architect, Business Analyst, and client; make a list of all types of tests (performance, functional, and security) to be performed; make a list of test environment details, which should contain all the necessary tools to execute the test cases.
Exit criteria: a list of all the necessary tests for the testable requirements, and the test environment details.
Software Testing Life Cycle
Test Plan Creation: the tester determines the estimated effort and cost of the entire project. Test case execution can start after the successful completion of test plan creation.
Entry criteria: the requirement document.
Activities: define the objective and scope of the software; list the methods involved in testing; give an overview of the testing process; settle on the testing environment; prepare the test schedules and control procedures; determine roles and responsibilities; list the testing deliverables and define risks, if any.
Exit criteria: the test strategy document and testing effort estimation documents are the deliverables of this phase.
Software Testing Life Cycle
Environment Setup: setup of the testing environment is an independent activity and can be started along with test case development.
Entry criteria: the test strategy and test plan documents, the test case document, and testing data.
Activities: prepare the list of software and hardware by analyzing the requirement specification; after the setup of the test environment, execute the smoke test cases to check the readiness of the test environment.
Exit criteria: a successful smoke test; the environment setup must work as per the plan and checklist.
Software Testing Life Cycle
Test Case Execution: the testing team starts test case development and execution. The team writes detailed test cases and also prepares test data, if required.
Entry criteria: the requirement document.
Activities: creation of test cases; execution of test cases; mapping of test cases to requirements.
Exit criteria: the test execution result, and a list of functions with a detailed explanation of defects.
Software Testing Life Cycle
Defect Logging: testers and developers evaluate the completion criteria of the software based on test coverage, quality, time consumption, cost, and critical business objectives. This phase determines the characteristics and drawbacks of the software. Test cases and bug reports are analyzed in depth to detect the type of each defect and its severity.
Entry criteria: the test case execution report and the defect report.
Activities: evaluate the completion criteria of the software based on test coverage, quality, time consumption, cost, and critical business objectives; defect logging analysis finds the defect distribution by categorizing defects by type and severity.
Exit criteria: the closure report and test metrics.
Software Testing Life Cycle
Test Cycle Closure: the test cycle closure report includes all the documentation related to software design, development, testing results, and defect reports.
Entry criteria: all documents and reports related to the software.
Activities: prepare test metrics based on the parameters of the last phase; document the learning from the project; prepare the test closure report; give qualitative and quantitative reporting of the quality of the work product to the customer; analyze test results to find the defect distribution by type and severity.
Exit criteria: the test closure report.
V-Model
The V-Model is also referred to as the Verification and Validation Model. In it, each phase of the SDLC must complete before the next phase starts. It follows a sequential design process, the same as the waterfall model. Testing of the product is planned in parallel with the corresponding stage of development.
V-Model
Verification: involves a static analysis method (review) done without executing code. It is the process of evaluating the product development process to find whether the specified requirements are met.
Validation: involves dynamic analysis methods (functional, non-functional); testing is done by executing the code. Validation is the process of determining whether the software meets the customer's expectations and requirements.
The V-Model therefore places the verification phases on one side and the validation phases on the other. The verification and validation processes are joined by the coding phase at the bottom of the V shape, which is why it is known as the V-Model.
V-Model
The phases of the verification side of the V-Model are:
- Business requirement analysis: the first step, where the product requirements are understood from the customer's side.
- System design: system engineers analyze and interpret the business of the proposed system by studying the user requirements document.
- Architecture design: the baseline for selecting the architecture is that it should realize everything the design requires, which typically consists of the list of modules, the brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. Integration testing is planned in this phase.
- Module design: the system is broken down into small modules, and the detailed design of the modules is specified; this is known as the low-level design.
- Coding phase: after designing, the coding phase is started. Based on the requirements, a suitable programming language is chosen.
V-Model
The phases of the validation side of the V-Model are:
- Unit testing: in the V-Model, unit test plans (UTPs) are developed during the module design phase. These UTPs are executed to eliminate errors at the code or unit level. A unit is the smallest entity that can exist independently, e.g., a program module. Unit testing verifies that the smallest entity functions correctly when isolated from the rest of the code/units.
- Integration testing: integration test plans are developed during the architecture design phase. These tests verify that groups created and tested independently can coexist and communicate among themselves.
V-Model
- System testing: system test plans are developed during the system design phase. Unlike unit and integration test plans, system test plans are composed by the client's business team. System testing ensures that the expectations from the application under development are met.
- Acceptance testing: acceptance testing is related to the business requirement analysis part. It includes testing the software product in the user environment. Acceptance tests reveal compatibility problems with the other systems available in the user environment. They also discover non-functional problems, such as load and performance defects, in the real user environment.
V-Model
When to use the V-Model?
- When the requirements are well defined and unambiguous.
- For small to medium-sized projects where requirements are clearly defined and fixed.
- When ample technical resources with the essential technical expertise are available.
V-Model
Advantages (pros) of the V-Model:
- Easy to understand.
- Testing activities such as planning and test design happen well before coding. This saves a lot of time, hence a higher chance of success than the waterfall model.
- Avoids the downward flow of defects.
- Works well for small projects where requirements are easily understood.
V-Model
Disadvantages (cons) of the V-Model:
- Very rigid and the least flexible.
- Not good for complex projects.
- Software is developed during the implementation stage, so no early prototypes of the software are produced.
- If any changes happen midway, then the test documents, along with the requirement documents, have to be updated.
Methods of Testing: Static and Dynamic Testing
Static testing checks the application without executing the code. It is a verification process. Some of the essential activities done under static testing are business requirement review, design review, code walkthroughs, and test documentation review. Static testing is performed in the white box testing phase, where the programmer checks every line of the code before handing it over to the test engineer. Static testing can be done manually or with the help of tools to improve the quality of the application by finding errors at an early stage of development; that is why it is also called the verification process. Document reviews, high- and low-level design reviews, and code walkthroughs take place in the verification process.
Methods of Testing: Static and Dynamic Testing
Dynamic testing is done by executing the code in a run-time environment. It is a validation process in which functional testing (unit, integration, and system testing) and non-functional testing (such as user acceptance testing) are performed. Dynamic testing checks whether the application or software works correctly during and after installation, without any error.
Difference
- Static testing is performed in the early stages of software development; dynamic testing is performed at a later stage.
- In static testing the code is not executed; in dynamic testing the code is executed.
- Static testing prevents defects; dynamic testing finds and fixes defects.
- Static testing is performed before code deployment; dynamic testing is performed after code deployment.
- Static testing is less costly; dynamic testing is more costly.
- Static testing involves a checklist for the testing process; dynamic testing involves test cases for the testing process.
- Static testing includes walkthroughs, code reviews, inspections, etc.; dynamic testing involves functional and non-functional testing.
- Static testing generally takes less time; dynamic testing usually takes longer, as it involves running several test cases.
- Static testing can discover a variety of bugs; dynamic testing exposes only the bugs that are explorable through execution, hence it discovers a more limited range of bugs.
- Static testing may achieve 100% statement coverage in comparably less time; dynamic testing typically achieves less than 50% statement coverage.
Black Box Testing
- A method in which the functionality of a software application is tested without knowledge of the internal code structure, implementation details, or internal paths.
- Mainly focuses on the inputs and outputs of the software application.
- Entirely based on software requirements and specifications.
- Also known as behavioral testing.
Black Box Testing
1. Initially, the requirements and specifications of the system are examined.
2. The tester chooses valid inputs (positive test scenarios) to check whether the SUT (system under test) processes them correctly. Some invalid inputs (negative test scenarios) are also chosen to verify that the SUT is able to detect them.
3. The tester determines the expected outputs for all those inputs.
4. The tester constructs test cases with the selected inputs.
5. The test cases are executed.
6. The tester compares the actual outputs with the expected outputs.
7. Defects, if any, are fixed and re-tested.
Black Box Testing Methods: Equivalence Partitioning and Boundary Value Analysis
Boundary value analysis is the process of testing at the extreme ends, or boundaries, between partitions of the input values. These extreme ends, such as start/end, lower/upper, minimum/maximum, and just-inside/just-outside values, are called boundary values, and the testing is called "boundary testing".
Boundary Analysis
The basic idea in normal boundary value testing is to select input variable values at:
- the minimum
- just above the minimum
- a nominal value
- just below the maximum
- the maximum
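As a sketch (the `boundary_values` helper below is hypothetical), these five values can be derived mechanically from a field's valid range:

```python
# Derive the five normal boundary-value inputs for a variable
# whose valid range is [minimum, maximum]. Hypothetical helper.

def boundary_values(minimum, maximum):
    nominal = (minimum + maximum) // 2  # any interior value works
    return [minimum, minimum + 1, nominal, maximum - 1, maximum]

# For an input field that accepts 1 to 10:
print(boundary_values(1, 10))  # [1, 2, 5, 9, 10]
```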
Equivalence Partitioning
Equivalence class partitioning is a black box testing technique that can be applied at all levels of software testing: unit, integration, system, etc. In this technique, input data are divided into equivalent partitions from which test cases can be derived, reducing the time required for testing thanks to the small number of test cases. It divides the input data of the software into different equivalence data classes. You can apply this technique wherever there is a range in the input field.
Example 1: Equivalence and Boundary Value
Consider the behavior of an Order Pizza text box:
- Pizza values 1 to 10 are considered valid; a success message is shown.
- Values 11 to 99 are considered invalid, and an error message appears: "Only 10 pizzas can be ordered."
Here are the test conditions:
- Any number greater than 10 entered in the Order Pizza field (say 11) is considered invalid.
- Any number less than 1, i.e., 0 or below, is considered invalid.
- Numbers 1 to 10 are considered valid.
- Any 3-digit number, say -100, is invalid.
Example 1: Equivalence and Boundary Value
We divide the possible order values into groups or sets, as shown below, where the system behavior can be considered the same.
Example 1: Equivalence and Boundary Value
The divided sets are called equivalence partitions or equivalence classes. Then we pick only one value from each partition for testing. The hypothesis behind this technique is that if one condition/value in a partition passes, all others will also pass. Likewise, if one condition in a partition fails, all other conditions in that partition will fail.
Example 1: Equivalence and Boundary Value
Boundary value analysis tests the boundaries between equivalence partitions. In our earlier equivalence partitioning example, instead of checking one value from each partition, you check the values at the partition boundaries: 0, 1, 10, 11, and so on. As you may observe, you test values at both valid and invalid boundaries. Boundary value analysis is also called range checking. Equivalence partitioning and boundary value analysis (BVA) are closely related and can be used together at all levels of testing.
Example 2: Equivalence Partitioning
The following password field accepts a minimum of 6 characters and a maximum of 10 characters, so results for values in the partitions 0-5, 6-10, and 11-14 should be equivalent within each partition.
- Test scenario 1: enter 0 to 5 characters in the password field; the system should not accept.
- Test scenario 2: enter 6 to 10 characters in the password field; the system should accept.
- Test scenario 3: enter 11 to 14 characters in the password field; the system should not accept.
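A sketch of this example in Python, assuming a hypothetical acceptance rule based only on length, with one representative value from each partition:

```python
# Hypothetical password rule: only the length (6-10 chars) matters.
def accepts_password(pw):
    return 6 <= len(pw) <= 10

# One representative value per equivalence partition:
cases = [
    ("abcd", False),           # 4 chars  -> partition 0-5, reject
    ("abcdefg", True),         # 7 chars  -> partition 6-10, accept
    ("abcdefghijkl", False),   # 12 chars -> partition 11-14, reject
]
for pw, expected in cases:
    assert accepts_password(pw) == expected
print("one value per partition suffices under the EP hypothesis")
```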
Example 3: An input box should accept the numbers 1 to 10
Here are the boundary value test cases:
- Boundary value = 0: the system should NOT accept.
- Boundary value = 1: the system should accept.
- Boundary value = 2: the system should accept.
- Boundary value = 9: the system should accept.
- Boundary value = 10: the system should accept.
- Boundary value = 11: the system should NOT accept.
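Assuming a hypothetical validator for the 1-to-10 rule, the boundary test cases above translate directly into executable checks:

```python
# Hypothetical validator for an input box accepting integers 1 to 10.
def accepts_count(n):
    return 1 <= n <= 10

# Boundary values around both edges of the valid range:
expected = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}
for value, ok in expected.items():
    assert accepts_count(value) == ok, value
print("all six boundary checks behave as specified")
```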
Why Equivalence & Boundary Analysis Testing
- Used to reduce a very large number of test cases to manageable chunks.
- Gives very clear guidelines for determining test cases without compromising the effectiveness of testing.
- Appropriate for calculation-intensive applications with a large number of variables/inputs.
Summary
- Boundary analysis testing is used when it is practically impossible to test a large pool of test cases individually.
- Two techniques are used: boundary value analysis and equivalence partitioning.
- In equivalence partitioning, you first divide the set of test conditions into partitions. In boundary value analysis, you then test the boundaries between the equivalence partitions.
- Both are appropriate for calculation-intensive applications with variables that represent physical quantities.
White Box Testing
A software testing technique in which the internal structure, design, and coding of the software are tested to verify the flow of inputs and outputs and to improve design, usability, and security. In white box testing, the code is visible to testers, so it is also called clear box testing, open box testing, transparent box testing, code-based testing, and glass box testing. The "clear box" or "white box" name symbolizes the ability to see through the software's outer shell (or "box") into its inner workings.
What do you verify in White Box Testing?
- Internal security holes.
- Broken or poorly structured paths in the coding process.
- The flow of specific inputs through the code.
- Expected outputs.
- The functionality of conditional loops.
- Each statement, object, and function on an individual basis.
White Box Testing One of the basic goals of white box testing is to verify a working flow for an application. It involves testing a series of predefined inputs against expected or desired outputs so that when a specific input does not result in the expected output, you have encountered a bug.
Example
Consider the following piece of code (the slide's pseudocode, shown here as runnable Python):

def print_me(a, b):
    result = a + b
    if result > 0:
        print("Positive", result)
    else:
        print("Negative", result)
Example
The goal of white box testing in software engineering is to verify all the decision branches, loops, and statements in the code. To exercise the statements in the above white box testing example, the white box test cases would be:
- a = 1, b = 1 (takes the positive branch)
- a = -1, b = -3 (takes the negative branch)
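Running these two test cases (sketched below, with the example function rewritten in Python) confirms that each branch is taken:

```python
import io
import contextlib

# The example function from the slide, in Python.
def print_me(a, b):
    result = a + b
    if result > 0:
        print("Positive", result)
    else:
        print("Negative", result)

def run(a, b):
    """Capture what print_me writes to stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        print_me(a, b)
    return buf.getvalue().strip()

assert run(1, 1) == "Positive 2"     # exercises the if-branch
assert run(-1, -3) == "Negative -4"  # exercises the else-branch
print("both decision branches covered")
```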
White Box Testing Techniques
Technical Review: a static white box testing technique conducted to spot, early in the life cycle, defects that cannot be detected by black box testing techniques. Technical reviews are documented and use a defect detection process that includes peers and technical specialists in the review. The review process does not involve management participation. It is usually led by a trained moderator who is NOT the author. A report is prepared with the list of issues that need to be addressed.
White Box Testing Techniques
Walkthrough: a method of conducting an informal group or individual review. In a walkthrough, the author describes and explains the work product in an informal meeting to peers or a supervisor to get feedback. Here, the validity of the proposed solution for the work product is checked. It is cheaper to make changes while the design is on paper rather than at conversion time. A walkthrough is a static method of quality assurance. Walkthroughs are informal meetings, but with a purpose.
White Box Testing Techniques
Inspection: a formal, rigorous, in-depth group review designed to identify problems as close to their point of origin as possible. Inspections improve the reliability, availability, and maintainability of a software product. Anything readable that is produced during software development can be inspected. Inspections can be combined with structured, systematic testing to provide a powerful tool for creating defect-free programs. Inspection activity follows a specified process, and participants play well-defined roles. An inspection team consists of three to eight members who play the roles of moderator, author, reader, recorder, and inspector.
White Box Testing Techniques A major White box testing technique is Code Coverage analysis. Code Coverage analysis eliminates gaps in a Test Case suite. It identifies areas of a program that are not exercised by a set of test cases. Once gaps are identified, you create test cases to verify untested parts of the code, thereby increasing the quality of the software product
White Box Testing Techniques
Below are a few coverage analysis techniques a white box tester can use:
- Statement coverage: requires every possible statement in the code to be tested at least once during the testing process.
- Branch coverage: checks every possible path (if-else and other conditional branches) of a software application.
Apart from the above, there are numerous coverage types, such as condition coverage, multiple condition coverage, path coverage, and function coverage. Each technique has its own merits and attempts to test (cover) all parts of the software code. Using statement and branch coverage you generally attain 80-90% code coverage, which is usually considered sufficient.
Working process of white box testing
- Input: requirements, functional specifications, design documents, source code.
- Processing: performing risk analysis to guide the entire process; proper test planning; designing test cases so as to cover the entire code; executing and repeating until error-free software is reached; communicating the results.
- Output: preparing the final report of the entire testing process.
Techniques: Statement coverage
In this technique, the aim is to traverse every statement at least once. Hence, each line of code is tested. In terms of a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps point out faulty code.
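A short, hypothetical example of why statement coverage alone can be weak: with no else-branch, a single test executes every line, yet one path is never exercised.

```python
# Hypothetical function: an if without an else.
def apply_discount(price, is_member):
    discount = 0
    if is_member:
        discount = 10
    return price - discount

# A single test executes every statement (100% statement coverage):
assert apply_discount(100, True) == 90
# ...but the False path was never taken; a second test is needed
# before every outcome of the decision has been observed:
assert apply_discount(100, False) == 100
print("statement coverage reached with one test; both paths need two")
```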
Techniques: Branch coverage
In this technique, test cases are designed so that each branch from every decision point is traversed at least once. In terms of a flowchart, every edge must be traversed at least once.
Techniques: Branch coverage
In the flowchart for this example, 4 test cases are required so that all branches of all decisions are covered, i.e., all edges of the flowchart are covered.
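The flowchart itself is not reproduced in this text, but the idea can be sketched with a hypothetical function containing two decision points; branch coverage requires every outgoing edge of each decision to be taken at least once:

```python
# Hypothetical function with two decisions (four branches total).
def classify(x, y):
    label = "x-positive" if x > 0 else "x-nonpositive"
    label += ", y-positive" if y > 0 else ", y-nonpositive"
    return label

# How many tests are needed depends on the graph; here two
# well-chosen tests already take all four branches:
assert classify(1, 1) == "x-positive, y-positive"
assert classify(-1, -1) == "x-nonpositive, y-nonpositive"
print("every branch of both decisions taken")
```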
Techniques: Condition coverage
In this technique, all individual conditions must be covered, as in the following example:

READ X, Y
IF (X == 0 || Y == 0)
    PRINT '0'

In this example, there are two conditions: X == 0 and Y == 0. Test cases must make each condition evaluate to both TRUE and FALSE. One possible set is:
#TC1 - X = 0, Y = 55
#TC2 - X = 5, Y = 0
Techniques: Multiple Condition coverage
In this technique, all possible combinations of the outcomes of the conditions are tested at least once. Consider the same example:

READ X, Y
IF (X == 0 || Y == 0)
    PRINT '0'

#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5

Hence, four test cases are required for two individual conditions. In general, if there are n conditions, then 2^n test cases are required.
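This enumeration can be generated mechanically; the sketch below builds the 2^n combinations for the two conditions of the example:

```python
import itertools

# The example's decision: PRINT '0' when X == 0 or Y == 0.
def prints_zero(x, y):
    return x == 0 or y == 0

# Representative inputs that force each condition True or False:
x_for = {True: 0, False: 55}
y_for = {True: 0, False: 5}

cases = [
    (x_for[cx], y_for[cy], prints_zero(x_for[cx], y_for[cy]))
    for cx, cy in itertools.product([True, False], repeat=2)
]
for x, y, result in cases:
    print(f"X={x}, Y={y} -> prints '0': {result}")
assert len(cases) == 4  # 2^n combinations for n = 2 conditions
```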
Techniques: Basis Path Testing
In this technique, a control flow graph is made from the code or flowchart, and then the cyclomatic complexity is calculated. The cyclomatic complexity defines the number of independent paths, so that a minimal number of test cases can be designed, one for each independent path.
Techniques: Basis Path Testing
Steps:
1. Make the corresponding control flow graph.
2. Calculate the cyclomatic complexity.
3. Find the independent paths.
4. Design test cases corresponding to each independent path.
Control Flow graph It is a directed graph consisting of nodes and edges. Each node represents a sequence of statements, or a decision point. A predicate node is the one that represents a decision point that contains a condition after which the graph splits. Regions are bounded by nodes and edges.
Flow graph notation
Cyclomatic Complexity
It is a measure of the logical complexity of the software and is used to define the number of independent paths. For a graph G, V(G) is its cyclomatic complexity. V(G) can be calculated in three equivalent ways:
- V(G) = P + 1, where P is the number of predicate nodes in the flow graph
- V(G) = E - N + 2, where E is the number of edges and N is the total number of nodes
- V(G) = the number of non-overlapping regions in the graph
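A small sketch of the E - N + 2 formula on a hypothetical flow graph (one decision whose two paths rejoin):

```python
# Hypothetical control flow graph:
# start -> decision -> {then, else} -> end
edges = [
    ("start", "decision"),
    ("decision", "then"),
    ("decision", "else"),
    ("then", "end"),
    ("else", "end"),
]
nodes = {n for edge in edges for n in edge}

E, N = len(edges), len(nodes)
v_g = E - N + 2
print("V(G) =", v_g)  # 5 - 5 + 2 = 2 independent paths

# Cross-check against V(G) = P + 1 (one predicate node here):
P = 1
assert v_g == P + 1
```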
Use of Cyclomatic Complexity
- Determining the independent path executions has proven to be very helpful for developers and testers.
- It can ensure that every path has been tested at least once, and thus helps focus on uncovered paths.
- Code coverage can be improved.
- Using this metric early in the program helps reduce risk.
Loop Testing
Loops are widely used and are fundamental to many algorithms, hence their testing is very important. Errors often occur at the beginnings and ends of loops.
Simple loops: for a simple loop of size n, test cases are designed that:
- skip the loop entirely
- make only one pass through the loop
- make 2 passes
- make m passes, where m < n
- make n-1, n, and n+1 passes
Loop Testing
Nested loops: for nested loops, all the loops are set to their minimum count and we start from the innermost loop. Simple loop tests are conducted for the innermost loop, and this is worked outwards until all the loops have been tested.
Concatenated loops: independent loops, one after another. Simple loop tests are applied to each. If they are not independent, treat them like nested loops.
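The simple-loop schedule above can be sketched against a hypothetical loop that makes up to `limit` passes:

```python
# Hypothetical loop under test: sums at most `limit` leading values.
def sum_first(values, limit):
    total = 0
    for i, v in enumerate(values):
        if i >= limit:
            break  # loop pass count is min(len(values), limit)
        total += v
    return total

data = [1, 2, 3, 4, 5]
n = len(data)
# Skip entirely, one pass, two passes, m < n, then n-1, n, n+1 passes:
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    assert sum_first(data, passes) == sum(data[:passes])
print("simple-loop pass counts 0, 1, 2, m, n-1, n, n+1 all verified")
```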
Advantages of WBT
- White box testing is very thorough, as the entire code and structure are tested.
- It results in the optimization of code, removing errors and helping remove extra lines of code.
- It can start at an earlier stage, as it does not require any interface, unlike black box testing.
- Easy to automate.
Disadvantages of WBT
- The main disadvantage is that it is very expensive.
- Redesigning and rewriting code requires the test cases to be written again.
- Testers are required to have in-depth knowledge of the code and programming language, as opposed to black box testing.
- Missing functionality cannot be detected, as only the code that exists is tested.
- Very complex and at times not realistic.
Quality Assurance
A method of building the software application with fewer defects and mistakes by the time it is finally released to the end users. Quality assurance is defined as an activity that ensures the approaches, techniques, methods, and processes designed for a project are implemented correctly. It recognizes defects in the process. Quality assurance is completed before quality control.
Quality Control
A software engineering process used to ensure that the approaches, techniques, methods, and processes designed for the project are followed correctly. Quality control activities operate on and verify that the application meets the defined quality standards. It focuses on examining the quality of the end product and the final outcome, rather than on the processes used to create the product.
QA Vs QC
- QA focuses on providing assurance that the requested quality will be achieved; QC focuses on fulfilling the requested quality.
- QA is the technique of managing quality; QC is the technique of verifying quality.
- QA is involved during the development phase; QC is involved during the testing phase.
- QA does not include execution of the program; QC always includes execution of the program.
- The aim of QA is to prevent defects; the aim of QC is to identify and fix defects.
- QA is a preventive technique; QC is a corrective technique.
- In QA, all team members of the project are involved; in QC, generally the testing team of the project is involved.
- Example of QA: verification. Example of QC: validation.