Lecture 04: Software Testing (.pptx)

ZainabNoor83 6 views 40 slides Oct 31, 2025



Slide Content

Software Testing

Software Testing
- Goals
- Types of tests
- Levels of tests
- Test measures
- Test plan

Goals
- Verification: Have we built the software right? (bug-free, meets specs)
- Validation: Have we built the right software? (meets customers' needs)
Are verification and validation really different, or variations of the same idea? My argument: "meeting specs" should equal "meeting customer needs"... but that is not generally true (my kitchen, satellite systems).

Agenda recap: Goals, Types of tests, Levels of tests, Test measures, Test plan. Most of this discussion focuses on verification (more specifically, bug testing).

Types of tests
- Black box
- White box
- Gray box

Black box tests (input to output, through the interface)
1. Does it perform the specified functions?
2. Does it handle obvious errors in input?
Ariane 5: lousy error handling. Classic examples: ints vs. floats, yards vs. meters. Black box testing should catch these if there is adequate "test coverage".

Example: ordered list of ints
class OrdInts: create, getFirst, getNext, insert, delete, print

    L = create()
    L.insert(5)
    L.insert(-1)
    L.insert(-1)
    p = L.getFirst(); print(p)   // -1
    L.delete(p)
    p = L.getFirst(); print(p)   // -1
    p = L.getNext(p); print(p)   // 5
    p = L.getNext(p)             // error

Black box tests
Advantage: the black-box tester (≠ developer) is unbiased by implementation details. E.g., Use Case testing: just work through all the Use Cases.
Disadvantage: the black-box tester is uninformed about implementation details, leading to:
- unnecessary tests: testing the same thing in different ways
- insufficient tests: missing the extremes, especially if actual use follows a different pattern

Black box tests (diagram: inputs mapped onto code regions): choose a good distribution of input and hope a good distribution of the code gets tested.

Unnecessary tests (diagram): a large range of input may exercise only a small part of the code. E.g., the operator test of the satellite control stations ran through every input and output light/key option, testing the same functions repeatedly, whereas no one had a test for my map function.

Insufficient tests (diagram): a small range of input may exercise a large range of code, but can you "know" this without knowing the code? Did we miss the 20%?

Sufficient tests (diagram): a small range of input may exercise a small but important, error-prone region of complex code.

White box tests (diagram): tests designed based on the code itself, with each test (test 1, test 2) targeting a specific code region.

Example: ordered list of ints

    class OrdInts {
    public:
        ...
    private:
        int vals[1000];
        int maxElements = 1000;
        ...
    };

Example: ordered list of ints, white-box test of the capacity boundary

    bool testMax() {
        L = create();
        num = maxElements;                // known from the code: 1000
        for (int i = 0; i <= num; i++) {  // deliberately one past capacity
            print(i);
            L.insert(i);
        }
        print(maxElements);
    }

White box tests
Advantage: design tests to achieve good code coverage and avoid duplication; can stress complicated, error-prone code; can stress boundary values (fault injection).
Disadvantage: tester = developer, so tests may carry the developer's bias; if the code changes, tests may have to be redesigned (is this bad?).

Gray box tests: look at the code to design tests, but test through the interface. The best of both worlds, maybe.

Example: ordered list of ints (gray box)
class OrdInts: create, getFirst, getNext, insert, delete, print

    L = create()
    L.insert(1)
    L.insert(2)
    L.insert(3)
    L.insert(1001)               // past the capacity we saw in the code
    p = L.getFirst(); print(p)   // 1
    p = L.getNext(p); print(p)   // 2

    expected output: 1 2 3 ... ... ...

Types of tests
- Black box: test based on the interface, through the interface
- Gray box: test based on the code, through the interface
- White box: test based on the code, through the code
A testing strategy can, and should, include all approaches!
My experience:
- Black box = non-developer, outside testing ("idiots"). In your case, who?
- White box = developer, part of the development process. In your case, who?
- Gray box = hated: a non-developer within the developer organization. In your case, who?

Levels of tests (diagram: shading from white box toward black box)
- Unit
- Integration
- System
- System integration

Testing measures (white box)
- Code coverage: individual modules
- Path coverage: sequence diagrams
- Code coverage weighted by complexity: test the risky, tricky parts of the code (e.g., the Unix "you are not expected to understand this" code)

Code coverage
How much of the code is "tested" by the tests? Measured manually or with profiling tools. Design new tests to extend coverage. Is 100% good?

Path coverage
How many execution paths have been exercised by the tests? 100% path coverage is usually impossible, so:
- aim to cover common paths and error-prone (complex) paths
- aim to break the code with tests; good testers are not liked by developers...

Conventional wisdom: 95% of errors are in 5% of the code... or maybe 80% of the errors are in 20% of the code...

Code complexity measures
- Cyclomatic complexity (McCabe): measures the number of linearly independent paths through a program; ignores asynchronous interaction, fallibility of services, etc.
- Developer's best guess: which problems are likely, and which will be hardest to diagnose.
This line of work started with Knuth and others who gathered statistics on programs.

Test plan
- Collection of tests: unit, integration, system
- Rationale for the tests: why these tests?
- Strategy for developing/performing the tests (e.g., test units as developed, test integration at each build, etc.)

Testing strategy
- TDD (test-driven development)
- Regression
- Test harness
- Bug tracking
- User tests
TDD: write the test BEFORE you write the code!

TDD (test driven): unit tests are written first by the software engineers. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and are integrated with the regression tests. Unit tests are maintained like software and integrated into the build process. The goal is to achieve continuous deployment with frequent updates.

Testing strategy: regression
- Automate
- Build up the tests over time
- Run the tests before you check in code (my mess-up...)

Testing strategy: test harness
- Useful for interactive graphics applications
- An automated test framework; I built one for a satellite console to run through all the console interactions

How is a test harness done in software testing? A test harness is a process that does all the testing work, such as executing tests via test libraries and generating reports. For that, developers and testers have to write specific test scripts to handle particular test scenarios and test data.

Test harness (diagram): the harness wraps "everything else", injecting test input, querying state, and checking output. You need a way to evaluate the output.

Testing strategy: bug tracking
- Design a system for tracking bugs
- Scrupulously record problems, their context, their effects, ideas on the cause, and attempts to fix them
- Create new tests as bugs are uncovered

Testing strategy: user tests
- Good for validation and for nonfunctional requirements
- Not the best way to discover bugs: you need to know how the user generated the bug

Test plan (reprise)
- Collection of tests: unit, integration, system
- Rationale for the tests: why these tests?
- Strategy for developing/performing the tests
Be thoughtful in designing; be diligent in executing.

Risk-based testing (RBT) is a type of software testing based on the probability of risk. It involves assessing risk from software complexity, criticality to the business, frequency of use, likely defect-prone areas, etc. Risk-based testing prioritizes testing of the features and functions of the application that are more impactful and more likely to have defects.

Risks can be positive or negative. Positive risks are referred to as opportunities and help business sustainability; examples include investing in a new project, changing business processes, or developing new products. Negative risks are referred to as threats, and recommendations to minimize or eliminate them must be implemented for project success.

When to implement risk-based testing:
- Projects with time, resource, or budget constraints
- Projects where risk-based analysis can be used to detect vulnerabilities, e.g., to SQL injection attacks
- Security testing in cloud computing environments
- New projects with high risk factors, such as lack of experience with the technologies used or lack of business-domain knowledge
- Incremental and iterative models, etc.