Something about Software Testing: taxonomies and more
Something about software testing
"the defining characteristic of quality in code is our ability to change it" (Dave Farley)
"legacy code is code without tests" (Michael Feathers)
Observations about Testing
•“Testing is the process of executing a program with the intention of finding errors.” – Myers
•“Testing can show the presence of bugs but never their absence.” – Dijkstra
•“A quality control activity aimed at evaluating a software item against the given system requirements. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs.”
Verification vs. Validation

Definition
•Verification: the process of evaluating work-products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase. "Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase."¹
•Validation: the process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements. "Confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled."¹

Objective
•Verification: to ensure that the product is being built according to the requirements and design specifications; in other words, to ensure that work products meet their specified requirements.
•Validation: to ensure that the product actually meets the user's needs and that the specifications were correct in the first place; in other words, to demonstrate that the product fulfils its intended use when placed in its intended environment.

Question
•Verification: Are we building the product right?
•Validation: Are we building the right product?

Evaluation Items
•Verification: plans, requirement specs, design specs, code, test cases.
•Validation: the actual product/software.

¹ U.S. Food and Drug Administration. General Principles of Software Validation; Final Guidance for Industry and FDA Staff; pp. 47, 2002.
Levels of Testing
•Unit Testing
•Integration Testing
•System Testing
•Acceptance Testing
Is this correct? Is this complete?
https://www.edureka.co/blog/software-testing-levels/
http://www.professionalqa.com/v-model
A taxonomy of tests
Is this correct? Is this complete?
Why? Can't unit tests be black box tests?
BTW: have you ever developed a white box unit test?
another…
Why here? Can't unit/integration tests be regression tests too?
Is this correct? Is this complete?
How does this taxonomy compare to the previous one?
Why? Can't integration (or system) tests be black/white box tests?
BTW: how do you describe your integration (or system) tests?
another…
Is this better?
Is this complete?
How to use this?
A thorough taxonomy
•16 Categories/Dimensions of Testing Types answering the 5W+2H questions:
•What? When? Where? Who? Why? How? How Well?
•These supertypes are not disjoint (think multiple inheritance)!
https://resources.sei.cmu.edu/asset_files/Webinar/2016_018_101_450425.pdf
3 dimensions model
A simpler, though incomplete, model that effectively captures the multi-dimensional nature of tests. This model proposes that every test can be characterized along three key dimensions:
1. Level or phase (e.g., unit testing)
2. Test case design strategy/technique (e.g., black-box testing)
3. Quality attributes (e.g., functionality)
The terms "transparent box" and "opaque box" are preferred over "white box" and "black box" due to the semantics they convey!
6 dimensions model
1. Levels: different phases at which tests are performed, such as:
•Unit Testing
•Integration Testing
•System Testing
•Acceptance Testing
2. Techniques: covers the strategies used to design test cases, such as:
•Opaque-box Testing
•Transparent-box Testing
•Semi-transparent-box Testing
3. Quality Attributes: addresses the aspects of quality that the tests focus on,
including:
•Performance
•Security
•Usability
•… (other quality attributes)
4. Automation Level: describes whether the tests are:
•Manual
•Automated
•Semi-Automated
5. Environment: the various environments in which tests are conducted, such as:
•Development
•Staging
•Production
6. Data Type: represents the types of data used in testing, such as:
•Real Data
•Synthetic Data
•Boundary Values
This model is not widely recognized, but it serves
as an example of multi-dimensional
characterization in testing.
Note that additional dimensions can also be
incorporated.
According to this model, every test can/must be
characterized along 6 key dimensions:
1. Level (e.g., unit testing)
2. Technique (e.g., black-box testing)
3. Quality attributes (e.g., functionality)
4. Automation Level (e.g., automated)
5. Environment (e.g., development)
6. Data Type (e.g., boundary values)
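To make this multi-dimensional characterization concrete, here is a minimal sketch in TypeScript; the type and the example value are illustrative assumptions, not part of the original model.

```typescript
// Hypothetical encoding of the 6-dimension characterization as a type.
type TestCharacterization = {
  level: 'unit' | 'integration' | 'system' | 'acceptance';
  technique: 'opaque-box' | 'transparent-box' | 'semi-transparent-box';
  qualityAttribute: 'functionality' | 'performance' | 'security' | 'usability';
  automation: 'manual' | 'automated' | 'semi-automated';
  environment: 'development' | 'staging' | 'production';
  dataType: 'real' | 'synthetic' | 'boundary-values';
};

// Example: one possible characterization of an automated unit test.
const exampleTest: TestCharacterization = {
  level: 'unit',
  technique: 'opaque-box',
  qualityAttribute: 'functionality',
  automation: 'automated',
  environment: 'development',
  dataType: 'boundary-values',
};
```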
Methods (or Accessibility)
•White (Transparent) Box Testing: Structural Testing
•the internal structure/design/implementation of the item being tested is known to the tester
•the internals are checked, e.g.:
•Algorithms and logic
•Data structures
•Global
•Local
•Interfaces
•Independent paths
•Boundary conditions
•Error handling
•Black (Opaque) Box Testing: Behavioral Testing
•the internal structure/design/implementation of the item being tested is not known to the tester
•only the result is checked
•Gray (Semi-transparent) Box Testing
•the internal structure is partially known
Comparison of test methods (accessibility)
•White/Transparent Box: tests have access to, and verify, all the parts relevant to the artifact under test.
•Gray/Semi-Transparent Box: input and output data are apparent, but data emitted and recorded by constituent components can also be collected for analysis.
•Black/Opaque Box: nothing but the input and output of the artifact under test is apparent and subject to verification.
Cf. https://www.mabl.com/blog/the-fundamentals-of-integration-testing
Opaque-box vs. Transparent-box testing (i)

Definition
•Opaque-box: used to test the software without knowledge of the internal structure of the program or application.
•Transparent-box: the internal structure is known to the tester.

Alias
•Opaque-box: also known as data-driven or functional testing.
•Transparent-box: also called structural testing, clear box testing, code-based testing, or glass box testing.

Base of Testing
•Opaque-box: testing is based on external expectations; the internal behavior of the application is unknown.
•Transparent-box: the internal working is known, and the tester can test accordingly.

Objective
•Opaque-box: the main objective is to check the functionality (output) of the SUT.
•Transparent-box: the main objective is to check the quality of the code.

Basis for test cases
•Opaque-box: testing can start after preparing the requirement specification document.
•Transparent-box: testing can start after preparing the detailed design document.

https://www.guru99.com/back-box-vs-white-box-testing.html
Opaque-box vs. Transparent-box testing (ii)

Tested by
•Opaque-box: performed by the end user, developer, and tester.
•Transparent-box: usually done by testers and developers.

Granularity
•Opaque-box: coarse-grained.
•Transparent-box: fine-grained.

Usage
•Opaque-box: ideal for higher levels of testing, like system testing and acceptance testing.
•Transparent-box: best suited for lower levels of testing, like unit testing and integration testing.

Automation
•Opaque-box: test and programmer are dependent on each other, so it is tough to automate.
•Transparent-box: easy to automate.

Programming knowledge
•Opaque-box: not needed.
•Transparent-box: required.

Implementation knowledge
•Opaque-box: not required.
•Transparent-box: a complete understanding of the implementation is needed.

https://www.guru99.com/back-box-vs-white-box-testing.html
I don't agree!
I don't agree!
Opaque-box vs. Transparent-box testing (iii)

Testing method
•Opaque-box: based on trial and error.
•Transparent-box: data domain and internal boundaries can be tested.

Time
•Opaque-box: less exhaustive and less time-consuming.
•Transparent-box: exhaustive and time-consuming.

Algorithm test
•Opaque-box: not the best method for algorithm testing.
•Transparent-box: best suited for algorithm testing.

Code access
•Opaque-box: code access is not required.
•Transparent-box: requires code access; thereby, the code could be stolen if testing is outsourced.

Benefit
•Opaque-box: well suited and efficient for large code segments.
•Transparent-box: allows removing extra lines of code, which can bring in hidden defects.

https://www.guru99.com/back-box-vs-white-box-testing.html
I don't agree!
Opaque-box vs. Transparent-box testing (iv)

Skill level
•Opaque-box: low-skilled testers can test the application with no knowledge of the implementation, programming language, or operating system.
•Transparent-box: needs an expert tester with vast experience.

Techniques (illustrated in the sketch below)
•Opaque-box: Equivalence partitioning divides input values into valid and invalid partitions and selects representative values from each partition as test data. Boundary value analysis checks the boundaries of input values.
•Transparent-box: Statement coverage validates whether every line of code is executed at least once. Branch coverage validates whether each branch is executed at least once. Path coverage tests all the paths of the program.

Drawbacks
•Opaque-box: updates to automated test scripts are essential if the application is modified frequently.
•Transparent-box: automated test cases can become useless if the code base is rapidly changing.

https://www.guru99.com/back-box-vs-white-box-testing.html
Depends on the rest of the characterization of tests.
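A minimal Jest/TypeScript sketch of the opaque-box design techniques named above; the function isValidAge and its 18..65 valid range are assumptions made for illustration.

```typescript
// Hypothetical function under test: accepts ages in the range 18..65.
function isValidAge(age: number): boolean {
  return age >= 18 && age <= 65;
}

describe('isValidAge (opaque-box test design)', () => {
  // Equivalence partitioning: one representative value per partition.
  it('rejects the below-range partition', () => expect(isValidAge(10)).toBe(false));
  it('accepts the valid partition', () => expect(isValidAge(40)).toBe(true));
  it('rejects the above-range partition', () => expect(isValidAge(80)).toBe(false));

  // Boundary value analysis: values on and around each boundary.
  it.each([
    [17, false],
    [18, true],
    [65, true],
    [66, false],
  ])('isValidAge(%i) is %s', (age, expected) => {
    expect(isValidAge(age)).toBe(expected);
  });
});
```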
Levels of tests
•Unit
•Integration
•System
•Acceptance
The SDLC V-model emphasizes the relationship between development and testing activities. (Figure: in the V-model, each testing level is annotated as either verification or validation.)
Unit Testing
•Tests each module individually, i.e. isolated from collaborators (see the sketch below)
•SUT: System Under Test
•MUT: Module Under Test
•OUT: Object Under Test
•Examples of units:
•Procedure
•Object/Class
•Component
•Library
•Framework
•System
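A minimal sketch of a unit test in Jest/TypeScript; the Greeter class is a hypothetical unit with no collaborators, so no isolation machinery is needed.

```typescript
// Minimal sketch (Jest): a unit test of a module with no collaborators.
// `Greeter` is a hypothetical class, not from the slides' projects.
class Greeter {
  greet(name: string): string {
    return `Hello, ${name}!`;
  }
}

describe('Greeter (unit test)', () => {
  it('greets by name', () => {
    expect(new Greeter().greet('Ada')).toBe('Hello, Ada!');
  });
});
```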
Integration Testing
•Individual units are combined and tested as a group
•The purpose is to expose faults in the interaction between integrated units:
•One module can have an adverse effect on another
•Subfunctions, when combined, may not produce the desired major function
•Individually acceptable imprecision in calculations may be magnified to unacceptable levels
•Interfacing errors not detected in unit testing may appear
•Timing problems (in real-time systems) are not detectable by unit testing
•Resource contention (concurrency) problems are not detectable by unit testing
System Testing
•Part of Validation (vs. Verification)
•Determines if the software meets all the requirements defined in the software requirements specification (SRS)
Acceptance Testing
•Similar to system testing, except that customers are present or directly involved
•Usually, the tests are developed and run by the customer
•Two phases: Alpha and Beta
https://usersnap.com/blog/types-user-acceptance-tests-frameworks/
Acceptance Testing - Alpha Testing
•Performed to identify all possible issues/bugs before releasing the product
•The focus is to simulate real users by using black box and white box techniques
•Alpha testing is carried out in a lab environment, and usually the testers are internal employees of the organization
•It is called alpha because it is done early on, near the end of the development of the software
Acceptance Testing - Beta Testing
•Performed by "real users" of the software application in a "real environment"
•Can be considered a form of external User Acceptance Testing
•A beta version of the software is released to a limited number of end-users of the product to obtain feedback on product quality
•Beta testing reduces product failure risks and increases product quality through customer validation
•Direct feedback from customers is a major advantage of beta testing; it helps to test the product in the customer's environment
Alpha and Beta Testing
•It's best to provide customers with an outline of the things you would like them to focus on, plus specific test scenarios for them to execute
•Provide customers who are actively involved with a commitment to fix the defects they discover
https://www.guru99.com/alpha-beta-testing-demystified.htm
Regression Testing
•A type of software testing that confirms that a recent program or code change has not adversely affected existing features
•Regression testing is nothing but a full or partial selection of already-executed test cases which are re-executed to ensure existing functionality still works
•Regression testing is required when there is a:
•Change in requirements, with code modified accordingly
•New feature added to the software
•Defect fix
•Performance issue fix
•Coded, automated tests are regression tests
NB: automated tests are regression tests!

Unit vs. Integration Testing: Example
Unit vs. Integration tests
•A unit test tests a logical unit: a single isolated component, a function, or a feature. A unit test isolates this component to test it without any dependencies, like I did in the last post. First, I tested the actions of a controller without testing the actual service behind it. Then I tested the service methods in an isolated way, with a faked DbContext. Why? Because unit tests shouldn't break because of a failing dependency. A unit test should be fast in development and in execution. It is a development tool, so it shouldn't cost a lot of time to write one. And, in fact, setting up a unit test is much cheaper than setting up an integration test. Usually, you write a unit test during or immediately after implementing the logic. In the best case, you'll write a unit test before implementing the logic. This is the TDD way: test-driven development, or test-driven design.
•An integration test does a lot more. It tests the composition of all units. It ensures that all units are working together in the right way. This means it may need a lot more effort to set up a test, because you need to set up the dependencies. An integration test can test a feature from the UI to the database; it integrates all the dependencies. On the other hand, an integration test can be isolated to a hot path of a feature. It is also legitimate to fake or mock aspects that don't need to be tested in this special case. For example, if you test a user input from the UI to the database, you don't need to test the logging. Also, an integration test shouldn't fail because of an error outside its context. This also means isolating an integration test as much as possible, maybe by using an in-memory database instead of a real one.
https://dzone.com/articles/integration-testing-data-access-in-aspnet-core
Unit vs. Integration tests
(Figure: unit tests vs. integration tests at two granularities. What is a unit? E.g. an Entity, an Aggregate, or an Entity that is an Aggregate root.)

Some common misunderstandings
(some) Definitions
•Functional testing
•checking the behavior of the software
•checking the correctness of the outputs
•Acceptance testing (validation)
•a formal test determining whether a system satisfies its acceptance criteria, i.e., whether the software can be used by the end user to perform the functions for which it was built
•some are performed by final users
•Regression testing
•performed to determine if the software still meets all its requirements considering the changes and modifications to the software
•ensures that no degradation of baseline functionality has occurred with modifications
•involves selectively repeating existing tests, not developing new tests
Better to use automated (programmed) tests.
Having written requirements is essential. Some of them can be written as software programs and executed by the testing software, but others have to be executed by humans.
Unit tests vs. Functional tests
•Unit Testingis a subtypeof Functional Testing.
•There are other types of functional tests rather than unit tests
38
Acceptance testing
•Some acceptance tests are non-functional
•e.g. does the system respond in 10 ms on average, as requested?
Unit vs. Performance tests
•Performance tests are not functional tests:
•Functional tests answer if a thing works
•Performance tests answer how efficiently a thing works
•Performance testing is challenging to set up and measure properly. While unit tests will run the same in any environment, performance tests are inherently sensitive to the environment.
•Performance checks in unit tests make the build process more vulnerable to environmental issues.
•Proper performance tests require lots of setup beyond basic unit test support.
•Performance tests yield metrics that should not be shoehorned into a binary pass/fail status (see the sketch below).
https://automationpanda.com/2017/05/18/can-performance-tests-be-unit-test/
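A minimal sketch of the last point, assuming a Node/TypeScript environment: the measurement is reported as a metric rather than asserted as pass/fail. The timeIt helper and the workload are hypothetical.

```typescript
// Sketch: a performance check that reports a metric instead of a binary
// pass/fail. Uses Node's monotonic clock; the workload is hypothetical.
function timeIt(fn: () => void): number {
  const start = process.hrtime.bigint();
  fn();
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6; // elapsed milliseconds
}

const elapsedMs = timeIt(() => {
  let acc = 0;
  for (let i = 0; i < 1_000_000; i++) acc += i; // stand-in workload
});
console.log(`workload took ${elapsedMs.toFixed(2)} ms`); // a metric, not pass/fail
```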
Isolation with doubles

Scenario/Application (example)
Y has no collaborators; hence the tests of Y are naturally isolated.
Unit tests (example)
For each unit test, one must isolate the method/object under test with doubles/simulations of the collaborating objects/methods.
Because X collaborates with Y, to unit test X, X must be isolated from Y with a double of Y, i.e. a simulation or non-production version of Y (see the sketch below).
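A minimal Jest/TypeScript sketch of this isolation; the interfaces X and Y mirror the slides' naming, but their members and values are hypothetical.

```typescript
// Sketch (Jest): X depends on Y, so the unit test of X replaces Y with a
// double. The interface members and values are hypothetical.
interface Y {
  compute(input: number): number;
}

class X {
  constructor(private readonly y: Y) {}
  doubleOfCompute(input: number): number {
    return 2 * this.y.compute(input);
  }
}

it('tests X in isolation, against a double of Y', () => {
  const yDouble: Y = { compute: jest.fn().mockReturnValue(21) }; // canned answer
  expect(new X(yDouble).doubleOfCompute(5)).toBe(42);
});
```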
Unit tests (example)
Idem for Repository and X.

Unit tests (example)
Idem for Service and Repository.

Unit tests (example)
Idem for Controller and Service.
Integration tests (example)
In order to test the integration of all participating parts, no doubles should be used, so that every implemented part of the system participates in the test.
Accordingly, a de facto instance of Repository should be used instead of an instance of RepositoryDouble (see the sketch below).
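A minimal Jest/TypeScript sketch of such an integration test; Service and Repository follow the slides' naming, while these tiny implementations are hypothetical.

```typescript
// Sketch (Jest): an integration test wires real collaborators together.
class Repository {
  private items: string[] = [];
  save(item: string): void { this.items.push(item); }
  count(): number { return this.items.length; }
}

class Service {
  constructor(private readonly repo: Repository) {}
  register(item: string): void { this.repo.save(item.trim()); }
}

it('integrates Service with the real Repository (no doubles)', () => {
  const repo = new Repository(); // the real thing, not a RepositoryDouble
  const service = new Service(repo);
  service.register('  widget  ');
  expect(repo.count()).toBe(1);
});
```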
Types of Test Doubles

Test doubles
•Dummies, fakes, stubs, spies, and mocks all belong to the category of test doubles.
•A test double is any object or system you use in a test instead of something else.
•Most automated software testing involves the use of test doubles of some kind or another.
•Rather than focusing on how these things are different, I think it's more enlightening to focus on why these are distinct concepts. Each exists for a different purpose.
•cf. https://martinfowler.com/bliki/TestDouble.html
In https://stackoverflow.com/a/55030455
Definitions
•Dummy objects are passed around but never actually used. Usually, they are just used to fill parameter lists.
•Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example).
•Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
•A stub is an object that holds predefined data and uses it to answer calls during tests. It is used when we cannot or don't want to involve objects that would answer with real data or have undesirable side effects.
•Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
•Mocks are pre-programmed with expectations which form a specification of the calls [and order] they are expected to receive. They can throw an exception if they receive a call they don't expect, and are checked during verification to ensure they got all the calls they were expecting.
In https://martinfowler.com/bliki/TestDouble.html
Fake
•Fake objects actually have working implementations, but usually take some shortcut that makes them not suitable for production. A good example of this is the in-memory database.
•A fake is an implementation that behaves "naturally", but is not "real". These are fuzzy concepts, and so different people have different understandings of what makes things a fake.
•One example of a fake is an in-memory database (e.g. using SQLite with the :memory: store). You would never use this for production (since the data is not persisted), but it's perfectly adequate as a database to use in a testing environment. It's also much more lightweight than a "real" database.
•As another example, perhaps you use some kind of object store (e.g. Amazon S3) in production, but in a test you can simply save objects to files on disk; then your "save to disk" implementation would be a fake. (Or you could even fake the "save to disk" operation by using an in-memory filesystem instead.)
•As a third example, imagine an object that provides a cache API; an object that implements the correct interface but simply performs no caching at all, always returning a cache miss, would be a kind of fake.
•The purpose of a fake is not to affect the behavior of the system under test, but rather to simplify the implementation of the test (by removing unnecessary or heavyweight dependencies).
In https://stackoverflow.com/a/55030455

Fake
•Fakes are objects that have working implementations, but not the same as the production one. Usually, they take some shortcut and have a simplified version of the production code (see the sketch below).
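A minimal TypeScript sketch of a fake, in the spirit of the in-memory database example above; UserRepository and InMemoryUserRepository are hypothetical names.

```typescript
// Sketch: a fake is a working but simplified implementation, e.g. an
// in-memory repository standing in for a database-backed one.
interface UserRepository {
  add(id: string, name: string): void;
  find(id: string): string | undefined;
}

class InMemoryUserRepository implements UserRepository {
  private store = new Map<string, string>();
  add(id: string, name: string): void { this.store.set(id, name); }
  find(id: string): string | undefined { return this.store.get(id); }
  // Shortcut: nothing is persisted, fine for tests but not for production.
}
```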
Stub
•Stubs provide canned answers to the calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
•A stub is an implementation that behaves "unnaturally". It is preconfigured (usually by the test set-up) to respond to specific inputs with specific outputs.
•The purpose of a stub is to get your system under test into a specific state. For example, if you are writing a test for some code that interacts with a REST API, you could stub out the REST API with an API that always returns a canned (predefined) response, or that responds to an API request with a specific error. This way you could write tests that make assertions about how the system reacts to these states; for example, testing the response your users get if the API returns a 404 error.
•A stub is usually implemented to only respond to the exact interactions you've told it to respond to. But the key feature that makes something a stub is its purpose: a stub is all about setting up your test case.
In https://stackoverflow.com/a/55030455

Stub
•A stub is an object that holds predefined data and uses it to answer calls during tests. It is used when we cannot or don't want to involve objects that would answer with real data or have undesirable side effects (see the sketch below).
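A minimal Jest/TypeScript sketch of a stub driving the SUT into a specific state; WeatherApi and ClothingAdvisor are hypothetical names.

```typescript
// Sketch (Jest): a stub returns canned answers to drive the SUT into a state.
interface WeatherApi {
  currentTempC(city: string): Promise<number>;
}

class ClothingAdvisor {
  constructor(private readonly api: WeatherApi) {}
  async advise(city: string): Promise<string> {
    return (await this.api.currentTempC(city)) < 0 ? 'coat' : 'no coat';
  }
}

it('advises a coat when the stubbed API reports freezing weather', async () => {
  const stub: WeatherApi = { currentTempC: async () => -5 }; // canned answer
  expect(await new ClothingAdvisor(stub).advise('Porto')).toBe('coat');
});
```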
Mock
•Mocks are preprogrammed with expectations that form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect, and are checked during verification to ensure they got all the calls they were expecting.
•A mock is similar to a stub, but with verification added in. The purpose of a mock is to make assertions about how your system under test interacted with the dependency.
•For example, if you are writing a test for a system that uploads files to a website, you could build a mock that accepts a file and that you can use to assert that the uploaded file was correct. Or, on a smaller scale, it's common to use a mock of an object to verify that the system under test calls specific methods of the mocked object.
•Mocks are tied to interaction testing, which is a specific testing methodology. People who prefer to test system state rather than system interactions will use mocks sparingly, if at all.
In https://stackoverflow.com/a/55030455

Mock
•Mocks are objects that register the calls they receive.
•In the test assertion, we can verify on mocks that all expected actions were performed (see the sketch below).
More at http://xunitpatterns.com/Mock%20Object.html
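A minimal Jest/TypeScript sketch of interaction verification with a mock; Mailer and the inlined signUp are hypothetical.

```typescript
// Sketch (Jest): a mock records calls so the test can verify the SUT's
// interaction with the dependency.
interface Mailer {
  send(to: string, body: string): void;
}

it('sends exactly one welcome mail on sign-up', () => {
  const mailer: Mailer = { send: jest.fn() };
  const signUp = (email: string) => mailer.send(email, 'Welcome!'); // stand-in SUT
  signUp('ada@example.com');
  expect(mailer.send).toHaveBeenCalledTimes(1);
  expect(mailer.send).toHaveBeenCalledWith('ada@example.com', 'Welcome!');
});
```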
Spy
•Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent (see the sketch below).
•A stub that uses behaviour verification.
•It is a simple form of mock.
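A minimal TypeScript sketch of a hand-written spy, matching the email-service example above; EmailServiceSpy is a hypothetical name.

```typescript
// Sketch: a spy is a stub that also records how it was called, here an
// email-service double that counts the messages it was asked to send.
class EmailServiceSpy {
  sent: string[] = []; // the recorded information
  send(message: string): void {
    this.sent.push(message); // stubbed behaviour: record instead of sending
  }
}

const spy = new EmailServiceSpy();
spy.send('hello');
spy.send('world');
console.log(spy.sent.length); // 2; a test would assert on this
```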
Mocks vs. Stubs vs. Spies
•Mock objects always use behaviour verification
•A stub can go either way
•Meszaros refers to stubs that use behaviour verification as a Test Spy
https://newbedev.com/what-is-wrong-with-stubs-for-unit-testing
End2End Tests
aka e2e tests

Automatic end-to-end tests
•No isolation of parts exists → doubles are not used
•How to characterize e2e tests?
Usability is partially testable by automatic tests; e.g. ARIA (https://www.w3.org/TR/wai-aria-1.1/) text should be [descriptive text]. Yet usability is a human decision, and therefore should be ultimately validated by users.
Introduction to E2E tests
•These tests make expectations about what the user sees and reads in the browser.
•They simulate a user interacting with the application:
•Navigating to an address
•Reading text
•Clicking on a link or button
•Filling out a form
•Moving the mouse
•Typing on the keyboard

Introduction to E2E tests (ii)
•From the user's perspective, it does not matter that your application is implemented in Angular.
•Technical details like the inner structure of your code are not relevant.
•There is no distinction between front-end and back-end, between parts of your code. The full experience is tested.
•These tests are called end-to-end (E2E) tests since they integrate all parts of the application from one end (the user) to the other end (the darkest corners of the back-end).
•End-to-end tests also form the automated part of acceptance tests, since they tell whether the application works for the user (see the sketch below).
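A minimal Cypress/TypeScript sketch of such a test; the route, selectors, credentials, and texts are hypothetical.

```typescript
// Sketch (Cypress): an E2E test drives a real browser through the UI.
describe('login flow (E2E)', () => {
  it('logs the user in and shows the dashboard', () => {
    cy.visit('/login');
    cy.get('input[name="email"]').type('ada@example.com');
    cy.get('input[name="password"]').type('s3cret');
    cy.get('button[type="submit"]').click();
    cy.contains('Welcome back'); // asserts on what the user actually sees
  });
});
```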
Introduction to E2E tests (iii)
•Speed: expensive and slow
•Reliability: unreliable and non-deterministic
•It is often hard to find the root cause of a problem
•Often they fail for no apparent reason, and the next execution suddenly passes
•Setup:
•Uses a real browser, even if there is no visible UI (headless)
•Needs frontend, backend, databases, caches, network, etc. deployed in nodes
End-to-end testing frameworks
•Since version 12 (?), Angular does not provide a default e2e testing framework
•Protractor
•Angular-specific e2e testing framework
•Not recommended because it is deprecated (after Angular version 12)
•Free
•Cypress
•Generic e2e testing framework
•Free, but some features (e.g. the dashboard for CI/CD) are a paid service
Good testing practices

Good Testing Practices
•Having written requirements is essential
•Testing, like almost every other activity, must start with objectives
•It is impossible to test your own program
•This collides with many development paradigms, e.g. eXtreme Programming.
•A good test case is one that has a high probability of detecting an undiscovered defect, not one that shows that the program works correctly
•A necessary part of every test case is a description of the expected result
Good Testing Practices
•Assign your best people to testing
•Write test cases for valid as well as invalid input conditions
•Thoroughly inspect the results of each test
•Ensure that testability is a key objective in your software design
•It typically leads to better software design; untestability is the "U" in STUPID code
•Never alter the program to make testing easier
•But testing may drive or suggest better software design
Good Testing Practices (cont'd)
•Avoid non-reproducible or on-the-fly testing; better to use automated (programmed) tests
•As the number of detected defects in a piece of software increases, the probability of the existence of more undetected defects also increases
•Avoid knowing:
•a lot about a few types of testing
•a little about some additional types of testing
•very little about a sizable number of testing types
Myth vs. Fact

Myth: Quality Control = Testing.
Fact: Testing is just one component of software quality control. Quality control includes other activities, such as reviews.

Myth: The objective of testing is to ensure a 100% defect-free product.
Fact: The objective of testing is to uncover as many defects as possible while ensuring that the software meets the requirements. Identifying and getting rid of all defects is impossible.

Myth: Testing is easy.
Fact: Testing can be difficult and challenging (sometimes even more so than coding).

Myth: Anyone can test.
Fact: Testing is a rigorous discipline and requires many kinds of skills.

Myth: There is no creativity in testing.
Fact: Creativity can be applied when formulating test approaches, when designing tests, and even when executing tests.

Myth: Automated testing eliminates the need for manual testing.
Fact: 100% test automation cannot be achieved. Manual testing, to some level, is always necessary.

Myth: When a defect slips through, it is the fault of the testers.
Fact: Quality is the responsibility of all members/stakeholders of a project, including developers.

Myth: Software testing does not offer opportunities for career growth.
Fact: Gone are the days when users had to accept whatever product was dished out to them, no matter what the quality. With the abundance of competing software and increasingly demanding users, the need for software testers to ensure high quality will continue to grow. Software testing jobs are hot now.
Quiz
True or false?
☐ Functional tests are never performance tests
☐ Functional tests are never usability tests
☐ Functional tests are never integration tests
☐ Functional tests are never unit tests
☐ Functional tests are always unit tests
☐ Unit tests are never regression tests
☐ Unit tests are never functional tests
☐ Unit tests are never performance tests
☐ Functional tests are always opaque-box tests
☐ Unit tests are always opaque-box tests
☐ Unit tests are always transparent-box tests
☐ Acceptance tests are always opaque-box tests
☐ …
True or false?
☐ The adoption of doubles is common in unit tests of parts of software that adopt DI
☐ Mock-type doubles are fundamentally used in opaque-box tests
☐ Mock-type doubles record the inner workings of the SUT during test execution
☐ Integration tests test parts made up of parts already tested in isolation (unit-tested)
☐ Acceptance tests replace all other tests, and are automatable
☐ Not all regression tests are automated, but all automated tests can be regression tests
☐ …
Bibliography and References
•Cf. references in slides
•https://www.youtube.com/watch?v=x-29vnDLP4Q
•https://www.youtube.com/watch?v=gnrBqLbj1_Q
•https://www.youtube.com/watch?v=SQUI9Ixb790
•http://softwaretestingfundamentals.com
•https://www.edureka.co/blog/software-testing-levels/
•Simple example projects with tests:
•https://bitbucket.org/nunopsilva/incenzzo
•https://bitbucket.org/nunopsilva/smarthomee
•https://bitbucket.org/nunopsilva/absantee
•https://bitbucket.org/nunopsilva/springbootdemo