Understanding Key Concepts and Applications in Week 11: A Comprehensive Overview of Critical Topics, Practical Insights, and Real-World Examples for Effective Learning and Mastery of Essential Skills in the Course Curriculum

bahay78365 · 40 slides · Feb 28, 2025



Slide Content

Introduction to Software Testing (Week 11). Presentation by Abdur Rakib and Tim Brailsford, CSCT, December 2022.

Today in POP…
- Custom software development
- Software's chronic crisis
- Software testing
- Key terms: fault, failure, error
- Testing strategies

Custom Software Development
- Gather as much information as possible about the details and specifications of the desired software from the client
- Refine the analysis model with the goal of creating a design model
- Design a software structure that realises the specification
- Code the software
- Test the software to verify that it operates as per the specifications provided by the client
(Diagram: Requirements, Design, Implementation, Testing, Maintenance)

Waterfall Method: Requirements, Design, Implementation, Testing, Deployment & Maintenance

Iterative Software Life Cycle (Agile Development)
Each phase repeats the full cycle of Requirements, Design, Implementation, Testing and Maintenance.
(Diagram: Phase 1, Phase 2, Phase 3, each an iteration of the cycle)

Testing in life cycle models
- There are numerous software development models
- The development model selected for a project depends on the aims and goals of that project
- Testing is always vitally important
- Testing is not a stand-alone activity: it forms part of the overall development model chosen for the project

Software V&V Life Cycle

Software Engineering: Ian Sommerville, Software Engineering (10th edition, 2015), Pearson.

Types of Software Errors
- Syntax errors
- Runtime errors
- Logic errors ("bugs")
- Requirements errors
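As a sketch of the difference between these categories (the function names and values here are illustrative, not from the slides), the two kinds of error that surface at run time can be shown in a few lines of Python:

```python
def divide(a, b):
    # Runtime error: executes fine until b == 0, then raises ZeroDivisionError.
    return a / b

def average(values):
    # Logic error ("bug"): runs without crashing but returns the wrong answer,
    # because of the off-by-one in the divisor (should be len(values)).
    return sum(values) / (len(values) + 1)

# A syntax error, by contrast, stops the program from parsing at all:
# a line such as `def broken(:` would be rejected before anything runs.
# Requirements errors are not visible in code at all; the code faithfully
# implements the wrong specification.
```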

Waterfall Method (recap): Requirements, Design, Implementation, Testing, Deployment & Maintenance

Software V&V Life Cycle

What is this?

Harvard University Mark II

The first computer "bug"
- Grace Hopper (1906–1992), working on the Mark II in 1947
- Log entry reads: "moth trapped between the points of Relay #70, in Panel F… First actual case of bug being found"
- Credited with coining the term "debugging"

Software's Chronic Crisis
Large software systems often:
- Fail to provide desired functionality
- Fall behind schedule
- Run over budget
- Cannot evolve to meet changing needs
For every 6 large software projects that become operational, 2 are cancelled. On average, software projects overshoot their schedule by half. Three quarters of large systems do not provide required functionality.

Software Failures
- There is a very long list of failed software projects and software failures
- These are extremely expensive, and some have killed people
- On average, software projects overshoot their schedule by half
- Three quarters of large systems do not provide required functionality

Famous software horror stories
- Ariane 5 (1996): exploded 40 seconds into flight; cost $500 million
- Mars Climate Orbiter (1998): software errors caused the probe to miss Mars; cost $327 million
- Boeing 737 Max (2018–2019): software problems contributed to TWO crashes (Lion Air and Ethiopian Airlines); these cost 346 lives and cost Boeing at least $60 billion
- London Ambulance Dispatching (1992): cost cutting in testing; lack of stress testing

Software Testing
Goal of testing:
- Find faults in the software
- Demonstrate that there are no faults (for the test cases used during testing); it is never possible to prove that there are no faults
- Testing should help locate errors, not just detect their presence: a yes/no answer to the question "does the program work?" is not very helpful
- Testing should be repeatable; this can be difficult for concurrent or distributed software, where we need to consider the effect of the environment and uninitialized variables

Faults, Errors and Failures
- Software Fault: a static defect in the software (equivalent to design mistakes in hardware)
- Software Error: an incorrect internal state that is the manifestation of some fault; a fault may lead to an error (i.e. the error causes the fault to become apparent)
- Software Failure: unexpected or incorrect behaviour with respect to the requirements or other specifications

A medical analogy
- A patient gives a doctor a list of symptoms: these are the failures
- The doctor must trace the symptoms back to an incorrect internal state (the error) and ultimately to the underlying root cause (the fault)

An Example of a fault

def numOfZero(array):
    # initialise zero counter
    zeroCount = 0
    n = 1
    while n < len(array):
        if array[n] == 0:
            zeroCount += 1
        n += 1
    return zeroCount

Fault: the search should start at index 0, not 1

Test 1 (pass)
Using the faulty numOfZero above with input [2, 7, 0]:
- Expected: 1; Actual: 1
- Error: n is 1, not 0, on the first iteration
- Failure: none (the error does not propagate to the output)

Test 2 (fail)
Using the same faulty numOfZero with input [0, 2, 7]:
- Expected: 1; Actual: 0
- Error: n is 1, not 0, so the zero at index 0 is never examined; the error propagates to the variable zeroCount
- Failure: zeroCount is 0 at the return statement
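A minimal sketch of the repaired function, applying exactly the fix the slides identify (start the search at index 0):

```python
def numOfZero(array):
    # initialise zero counter
    zeroCount = 0
    n = 0  # fixed: start searching at index 0, not 1
    while n < len(array):
        if array[n] == 0:
            zeroCount += 1
        n += 1
    return zeroCount

# Both of the slides' test cases now pass:
# Test 1: numOfZero([2, 7, 0]) -> 1
# Test 2: numOfZero([0, 2, 7]) -> 1
```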

Testing: partial verification
- Verification locates problems; it doesn't explain them
- Testing checks only the values we select
- Even small systems have millions (of millions) of possible tests: the number of test cases increases exponentially with the number of input/output variables
- Testing software is hard and can never be complete

Drilling into your code
- Once you have found a problem, you need to understand it
- You need to consider the state of the program as it runs: contents of variables, inputs and outputs
- You can do this on paper ("desk tracing")
- You can add print statements to write the contents of variables to the console
- You can use a debugger
(Example file: numOfZero.py)
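As a sketch of the print-statement approach, applied to the faulty numOfZero from the earlier slides: a trace line inside the loop exposes the state on every iteration, and Python's built-in breakpoint() would drop you into the pdb debugger at the same spot.

```python
def numOfZero(array):
    zeroCount = 0
    n = 1  # the fault from the slides, left in place so the trace reveals it
    while n < len(array):
        # Trace the program state each time round the loop:
        print(f"n={n}, array[n]={array[n]}, zeroCount={zeroCount}")
        # breakpoint()  # uncomment to step through interactively in pdb
        if array[n] == 0:
            zeroCount += 1
        n += 1
    return zeroCount

result = numOfZero([0, 2, 7])
# The trace shows n starting at 1: index 0 is never inspected,
# which points straight at the fault.
```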

Testing Strategies
Offline (static):
- Syntax checking and "lint" testers
- Walkthroughs ("dry runs")
- Inspections
Online (live):
- Black box testing
- White box testing

Syntax checking
- Detecting errors before the program is run is preferable to having them occur in a running program
- Syntax checking will determine whether a program "looks" acceptable
- "Lint" programs do deeper tests on code, for example: detecting lines that will never be executed, or detecting variables that have not been initialised
- Compilers do a lot of this as "warnings"
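A minimal sketch of offline syntax checking using Python's built-in compile() (the helper name is mine; real lint tools such as pyflakes or pylint go further, flagging unreachable code and uninitialised variables):

```python
def syntax_ok(source):
    """Return True if the source text parses as Python, False otherwise."""
    try:
        compile(source, "<example>", "exec")
        return True
    except SyntaxError:
        return False

# A parse check catches errors before the program ever runs:
# syntax_ok("x = 1")    -> True
# syntax_ok("def f(:")  -> False
```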

Inspections
- A team of programmers read the code and consider what it does
- The inspectors play "devil's advocate", trying to break it!
- Very time consuming (and therefore expensive)
- Often only used for critical code

Walkthroughs
- Similar to inspections, but the inspectors "execute" the code using simple test data
- Effectively formalised desk tracing
- Expensive and time consuming; not always possible, especially for large systems
- Inspections and walkthroughs will typically find 30–70% of errors

Black Box Testing
- Generate test cases from the specification, i.e. don't look at the code
Advantages:
- Avoids making the same assumptions as the programmers
- Test data is independent of the implementation
- Results can be interpreted without knowing implementation details

Consider this method

def largestElement(array):
    largest = array[0]
    for n in array:
        if n > largest:
            largest = n
    return largest

(Example file: largestElement.py)

A Test Set

Input                      Expected Output   OK?
3 16 4 32 9                32                YES
9 32 4 16 3                32                YES
22 32 59 17 88 1           88                YES
1 88 17 59 32 22           88                YES
1 3 5 7 9 1 3 5 7          9                 YES
7 5 3 1 9 7 5 3 1          9                 YES
9 6 7 11 5                 11                YES
5 11 7 6 9                 11                YES
561 13 1024 79 86 222 97   1024              YES
97 222 86 79 1024 13 561   1024              YES

Is this enough testing? Automated testing frameworks exist (JUnit for Java; PyUnit/unittest for Python).
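A sketch of how this test set could be automated with Python's unittest module (the PyUnit framework the slide mentions), plus one off-nominal case the table above never tries:

```python
import unittest

def largestElement(array):
    largest = array[0]
    for n in array:
        if n > largest:
            largest = n
    return largest

class TestLargestElement(unittest.TestCase):
    def test_largest_at_end(self):
        self.assertEqual(largestElement([3, 16, 4, 32, 9]), 32)

    def test_largest_at_start(self):
        self.assertEqual(largestElement([88, 22, 32, 59, 17]), 88)

    def test_all_negative(self):
        self.assertEqual(largestElement([-5, -2, -9]), -2)

    def test_empty_list(self):
        # Off-nominal: an empty input makes array[0] raise IndexError,
        # a behaviour the table above never exercises.
        with self.assertRaises(IndexError):
            largestElement([])

# Run with: python -m unittest <this_file>
```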

Choosing your test set
Test sets should be chosen using a knowledge of the data that is most likely to cause problems:
- Equivalence partitioning
- Boundaries
- Off-nominal (extremes)

Equivalence Partitioning
- Suppose the system asks for "a number between 100 and 999"
- This gives three equivalence classes of input: less than 100; 100 to 999; greater than 999
- We then test characteristic values from each equivalence class
- Example: 50 (invalid), 500 (valid), 1500 (invalid)
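The three classes can be sketched as one representative test per partition (inRange is a hypothetical validator for the slides' 100–999 specification):

```python
def inRange(number):
    # The slides' specification: valid iff 100 <= number <= 999.
    return 100 <= number <= 999

# One characteristic value per equivalence class:
partition_tests = [
    (50, False),    # class 1: less than 100
    (500, True),    # class 2: 100 to 999
    (1500, False),  # class 3: greater than 999
]

for value, expected in partition_tests:
    assert inRange(value) == expected
```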

Boundary Analysis
- Arises from the observation that most programs fail at input boundaries
- Suppose the system asks for "a number between 100 and 999": the boundaries are 100 and 999
- We therefore test the values 99, 100, 101 (lower boundary) and 998, 999, 1000 (upper boundary)
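The six boundary values can be checked in the same style (again assuming a hypothetical inRange validator for the 100–999 specification):

```python
def inRange(number):
    # Valid iff 100 <= number <= 999, per the slides' specification.
    return 100 <= number <= 999

# Just below, on, and just above each boundary:
boundary_tests = [
    (99, False), (100, True), (101, True),    # lower boundary
    (998, True), (999, True), (1000, False),  # upper boundary
]

for value, expected in boundary_tests:
    assert inRange(value) == expected
```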

Off-nominal testing
Extreme data:
- Largest possible number
- Smallest possible number
- Negative numbers
- Zero
- Large strings
- Empty strings
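A sketch of an off-nominal test set for a numeric routine (the routine under test here is a corrected zero-counter; the cases are illustrative, not from the slides):

```python
import sys

def numOfZero(array):
    # Correct zero-counter used as the routine under test.
    return sum(1 for x in array if x == 0)

# Off-nominal inputs: extremes that typical test data rarely covers.
off_nominal_cases = [
    ([], 0),                                  # empty input
    ([0], 1),                                 # single element, the target itself
    ([sys.maxsize, -sys.maxsize - 1, 0], 1),  # very large and very small ints
    ([0] * 100_000, 100_000),                 # very large input
]

for data, expected in off_nominal_cases:
    assert numOfZero(data) == expected
```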

White (clear) box testing
- Use knowledge of the program structure to guide the development of tests
- Aim to test every statement at least once, and to test all paths through the code
- A test set is path complete if each possible path through the code is exercised at least once by a test case

Simple white box example
There are two possible paths through this code (signal > 5 and signal <= 5); both should be executed by the test set:

if signal > 5:
    print("Hello")
else:
    print("Goodbye")
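Wrapping the slides' two-path fragment in a function that returns rather than prints (a hypothetical name, for testability), a path-complete test set needs just two cases, one per branch:

```python
def greet(signal):
    # Hypothetical wrapper around the slides' two-path example.
    if signal > 5:
        return "Hello"
    else:
        return "Goodbye"

# Path-complete test set: both branches are exercised at least once.
assert greet(10) == "Hello"    # the signal > 5 path
assert greet(3) == "Goodbye"   # the signal <= 5 path
```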

Overall Goal
- Establish confidence that the software is fit for purpose
- This does NOT mean completely free of defects
- It means good enough for intended use, and the type of use will determine the degree of confidence that is needed

Tips for debugging coursework
- Learn how to use a debugger and get into the habit of routinely using it
- Test your code by predicting its behaviour for a test set of data
- When you find unexpected behaviour (a bug), try to repeat it
- Think like a detective!