SOFTWARE TESTING

nehalsiddiqui7 · 28 slides · Mar 05, 2025


Types and Levels of Testing
Software Testing, BCA VI
Faculty of Science and Technology

Competency
Competency: Levels of Testing; Unit Testing: Driver, Stub; Integration Testing: Top-Down Integration, Bottom-Up Integration, Bi-Directional Integration.
CO addressed: CO 2

Specific Learning Objectives
At the end of this class, students should be able to describe:
• Levels of Testing
• Unit Testing: Driver, Stub
• Integration Testing: Top-Down Integration, Bottom-Up Integration, Bi-Directional Integration
• Testing on Web Applications: Performance Testing, Load Testing, Stress Testing, Security Testing, Client-Server Testing
• Acceptance Testing: Alpha Testing, Beta Testing
• Special Tests: Regression Testing, GUI Testing

Types of Testing

1. Unit Testing
Definition: Testing individual components or modules of a software application.
Purpose: To verify that each unit functions correctly in isolation.
Tools: JUnit, NUnit, PHPUnit.

2. Integration Testing
Definition: Testing the interaction between integrated units/modules.
Purpose: To detect issues in the interaction between components.
Approaches: Top-down, bottom-up, sandwich (hybrid).

3. System Testing
Definition: Testing the complete and integrated software application.
Purpose: To evaluate the system's compliance with specified requirements.
Types: Functional, non-functional.

4. Acceptance Testing
Definition: Testing the system for acceptability.
Purpose: To ensure the software meets the business requirements and is ready for deployment.
Types: User Acceptance Testing (UAT), Operational Acceptance Testing (OAT).
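
The unit-testing idea above can be sketched with Python's built-in unittest framework, an analogue of the JUnit/NUnit tools listed; the `add` function is a hypothetical example, not from the slides:

```python
import unittest

# Unit under test: a hypothetical helper, used here only for illustration.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Each test method checks one behavior of the unit in isolation, which is exactly the "verify each unit in isolation" purpose stated above.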

Types of Testing

5. Performance Testing
Definition: Testing to determine the system's performance under load.
Purpose: To ensure the system performs well under expected or higher load conditions.
Types: Load testing, stress testing, endurance testing, spike testing.
Tools: JMeter, LoadRunner.

6. Security Testing
Definition: Testing to identify vulnerabilities in the software.
Purpose: To ensure that the software is protected against threats.
Types: Penetration testing, vulnerability scanning.
Tools: OWASP ZAP, Burp Suite.

7. Usability Testing
Definition: Testing the user interface and user experience.
Purpose: To ensure the application is user-friendly and intuitive.
Methods: Surveys, A/B testing, user observations.

Types of Testing

8. Compatibility Testing
Definition: Testing how well the software performs in different environments.
Purpose: To ensure compatibility across various browsers, devices, and operating systems.
Tools: BrowserStack, CrossBrowserTesting.

9. Regression Testing
Definition: Testing after modifications to ensure that new changes have not affected existing functionality.
Purpose: To verify that previously developed and tested software still performs after a change.
Tools: Selenium, TestComplete.

10. Alpha and Beta Testing
Alpha Testing: Conducted by the developers and internal testers before the product is released to external users.
Beta Testing: Conducted by external users in a real-world environment before the final release.


Levels of Testing

1. Unit Testing
Focus: Individual units or components.
Purpose: To validate that each unit performs as expected.
Performed by: Developers.

2. Integration Testing
Focus: Combined units/modules.
Purpose: To identify issues in the interaction between integrated units.
Performed by: Developers or dedicated testers.

3. System Testing
Focus: The entire integrated system.
Purpose: To validate the system's compliance with specified requirements.
Performed by: QA team.

4. Acceptance Testing
Focus: The complete system from an end-user perspective.
Purpose: To ensure the system is ready for production use.
Performed by: End-users, clients, or stakeholders.

Unit Testing: Driver and Stub
In unit testing, drivers and stubs are a valuable technique for isolating and effectively testing individual units (functions, modules) of code.

Driver: A temporary program or code snippet designed to call the unit under test and provide it with different inputs (data, parameters) to simulate various scenarios. The driver observes the unit's behavior and verifies its outputs against expected results.

Stub: A simplified substitute for a module that the unit under test normally interacts with. Stubs provide the basic functionality the unit under test needs to proceed, but they do not replicate the full functionality of the actual module. They can be pre-programmed to return specific values or perform limited actions.

Key characteristics of a driver:
• Focused: Designed to test a specific unit in isolation.
• Input provider: Supplies a variety of test cases with different inputs to exercise the unit's logic thoroughly.
• Output verifier: Compares the unit's outputs with expected results based on the provided inputs.
• Simple and maintainable: Should be easy to understand and modify for different test cases.

Benefits of the Driver/Stub Technique
• Isolation: Allows testing a unit independently of its dependencies, simplifying the testing process and reducing external factors that could influence the results.
• Control: The driver provides complete control over the inputs given to the unit under test, enabling the testing of edge cases and error conditions.
• Focus: Facilitates a concentrated focus on the unit's internal logic and ensures it functions correctly in isolation.

Example: Imagine you are testing a function that calculates the area of a rectangle. The function takes two arguments: length and width.
Driver: Your driver code would call the area calculation function with different combinations of length and width values (e.g., positive values, zero values, negative values) and compare the returned area with the expected results.
Stub: If the area calculation function relies on another function to retrieve data (e.g., fetching length and width from a database), you might create a stub that simply returns pre-defined values for these measurements. This isolates the area calculation logic from external data-retrieval concerns.
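
The rectangle example can be sketched in Python; the function names and the non-negativity check are illustrative assumptions, not from the slides:

```python
# Unit under test: computes an area from dimensions supplied by a
# collaborator (in a real system this might be a database lookup).
def rectangle_area(fetch_dimensions):
    length, width = fetch_dimensions()
    if length < 0 or width < 0:
        raise ValueError("dimensions must be non-negative")
    return length * width

# Stub: replaces the real data-retrieval module with fixed values.
def stub_dimensions():
    return 4, 5

# Driver: feeds the unit a range of inputs and checks every output.
def run_driver():
    cases = [((4, 5), 20), ((0, 7), 0), ((2.5, 2), 5.0)]
    for dims, expected in cases:
        assert rectangle_area(lambda: dims) == expected
    # Negative input must raise, not return a nonsense area.
    try:
        rectangle_area(lambda: (-1, 3))
    except ValueError:
        return "all driver cases passed"
    raise AssertionError("negative dimensions were not rejected")

print(run_driver())
```

The stub keeps the test independent of any real data source; the driver owns the inputs and the expected outputs, matching the "control" and "isolation" benefits above.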

Integration Testing
In integration testing, we verify how individual software modules work together after being integrated into a larger system. There are three main approaches:
1. Top-Down Integration
2. Bottom-Up Integration
3. Bi-Directional Integration

Integration Testing: Top-Down Integration
1. Top-Down Integration
Concept: Testing starts from the highest-level module in the system hierarchy. This module typically interacts with several lower-level modules.
Process:
• Use stubs (simulated replacements) for the lower-level modules initially.
• Test the functionality of the high-level module using the stubs.
• Progressively integrate and test lower-level modules, replacing stubs with actual modules as you move down the hierarchy.
Benefits:
• Early feedback on interface and integration issues.
• Focus on overall system functionality from a user's perspective.
• Modular testing approach for easier debugging.
Challenges:
• Time-consuming to create and maintain stubs for all lower-level modules.
• Limited initial testing of lower-level modules.
• Debugging can be difficult due to reliance on incomplete modules.
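
A minimal sketch of the top-down process in Python, assuming a hypothetical `build_invoice` top-level module whose lower-level collaborators are not yet integrated:

```python
# High-level module under test: builds an invoice total from two
# lower-level services it depends on.
def build_invoice(get_price, get_tax_rate, quantity):
    price = get_price()
    return round(price * quantity * (1 + get_tax_rate()), 2)

# Stubs standing in for the lower-level modules during early testing.
def stub_price():
    return 10.0        # pre-programmed value, not a real price lookup

def stub_tax_rate():
    return 0.2         # pre-programmed value, not a real tax service

# Top-down step 1: verify the top module against the stubs.
assert build_invoice(stub_price, stub_tax_rate, quantity=3) == 36.0
# Later steps replace stub_price / stub_tax_rate with the real modules,
# moving down the hierarchy one level at a time.
```

The stubs let the high-level logic be validated before the pricing and tax modules exist, which is the early-feedback benefit listed above.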

Integration Testing: Bottom-Up Integration
2. Bottom-Up Integration
Concept: Testing begins with the most basic, independent modules at the lowest level.
Process:
• Test the functionality of individual low-level modules.
• Gradually integrate and test small groups of modules with well-defined interfaces.
• Use driver programs to simulate the behavior of higher-level modules that the integrated modules interact with.
Benefits:
• Early verification of individual module functionality.
• Reusable driver programs for regression testing.
• Focus on testing module interactions and data flow.
Challenges:
• System-level functionality is not fully tested until later stages.
• Creating and maintaining driver programs can be time-consuming.
• Limited top-down validation from the user's perspective initially.
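
A minimal bottom-up sketch in Python; the `normalize` module and its driver are hypothetical illustrations:

```python
# Low-level module: implemented and verified first, before any of the
# higher-level modules that will eventually call it exist.
def normalize(text):
    """Collapse whitespace and lowercase the input."""
    return " ".join(text.lower().split())

# Driver: simulates the missing higher-level module by feeding the
# low-level unit representative inputs and checking each result.
def driver():
    cases = {
        "  Hello   World ": "hello world",
        "ONE two": "one two",
        "": "",
    }
    for raw, expected in cases.items():
        assert normalize(raw) == expected
    return len(cases)

print(driver(), "driver cases passed")
```

Because the driver is a plain script, it can be kept and re-run as a regression check once the real callers are integrated, matching the "reusable driver programs" benefit above.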

Integration Testing: Bi-Directional Integration
3. Bi-Directional Integration
Concept: Combines elements of both the top-down and bottom-up approaches.
Process:
• Modules are integrated and tested from both the top and bottom levels simultaneously.
• Does not require integrating all modules at the lowest level first.
• Grouping is based on functionality and dependencies.
Benefits:
• Comprehensive testing approach that addresses both module interactions and overall system functionality.
• Faster feedback on integration issues across the system.
• Flexibility in the order of integration.
Challenges:
• Increased complexity in managing both top-down and bottom-up testing.
• Requires more resources for testing and coordination between teams.

Testing on Web Applications
Testing a web application involves various types of testing to ensure it functions correctly, performs well under different conditions, and is secure. Here is an in-depth look at performance testing, load testing, stress testing, security testing, and client-server testing.

1. Performance Testing
Definition: Performance testing measures how well a web application performs under different conditions. It evaluates the speed, responsiveness, and stability of the application.
Objectives:
• Ensure the application meets performance criteria.
• Identify performance bottlenecks.
• Validate the reliability of the system.
Key metrics:
• Response time: Time taken to respond to a user request.
• Throughput: Number of transactions processed within a given time frame.
• Resource utilization: CPU, memory, and network usage.
Tools: JMeter, LoadRunner, ApacheBench (ab).
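
The response-time metric can be illustrated with a small Python sketch; the workload and the 10 ms budget are assumptions for demonstration, not a real request handler:

```python
import time

def average_response_time(fn, runs=200):
    """Mean wall-clock time of fn over several runs, in seconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Workload under test: a placeholder standing in for a request handler.
def handle_request():
    sum(range(1000))

mean = average_response_time(handle_request)
assert mean < 0.01, f"budget exceeded: {mean:.6f}s per request"
```

Real tools like JMeter report the same metric (plus percentiles and throughput) for actual HTTP traffic; this sketch only shows the measurement idea.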

Testing on Web Applications
2. Load Testing
Definition: Load testing evaluates how the web application behaves under expected user loads. It checks the system's performance under normal and peak conditions.
Objectives:
• Determine the maximum operating capacity of the application.
• Identify performance degradation points.
• Ensure the application can handle the expected user load.
Approach:
• Simulate multiple users accessing the application simultaneously.
• Monitor system behavior and resource utilization.
Tools: JMeter, LoadRunner, BlazeMeter.
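
The approach of simulating many simultaneous users can be sketched with Python's thread pool; `send_request` is a stand-in for a real HTTP call, and the sleep simulates server latency:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def send_request(user_id):
    """Stand-in for one HTTP request to the application under test."""
    time.sleep(0.01)   # simulated server processing time
    return 200         # simulated status code

def load_test(users, requests_per_user):
    """Run users concurrent workers, each issuing several requests."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        statuses = list(pool.map(send_request,
                                 range(users * requests_per_user)))
    elapsed = time.perf_counter() - start
    return statuses.count(200), len(statuses), elapsed

ok, total, elapsed = load_test(users=10, requests_per_user=5)
print(f"{ok}/{total} requests succeeded in {elapsed:.2f}s")
```

A real load test would replace `send_request` with genuine HTTP calls and monitor server-side resource usage alongside the client-side success rate.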

Testing on Web Applications
3. Stress Testing
Definition: Stress testing assesses the application's performance under extreme or peak load conditions. It determines the application's robustness and identifies its breaking point.
Objectives:
• Identify how the application behaves under high load.
• Determine the system's capacity to handle extreme conditions.
• Identify potential points of failure.
Approach:
• Gradually increase the load on the system until it fails.
• Monitor response times, error rates, and resource usage.
Tools: JMeter, LoadRunner, Gatling.
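
The "increase load until failure" approach can be illustrated with a toy Python model; the fixed capacity of 500 users is an invented stand-in for a real system's limit:

```python
def server_capacity_ok(concurrent_users, capacity=500):
    """Toy model of a server that fails beyond a fixed capacity."""
    return concurrent_users <= capacity

def find_breaking_point(start=10):
    """Double the simulated load until the 'server' fails, and report
    the first load level at which it did."""
    load = start
    while server_capacity_ok(load):
        load *= 2
    return load

print("breaking point:", find_breaking_point())
```

In a real stress test the ramp-up would drive actual traffic (e.g. via JMeter or Gatling) and the failure signal would be rising error rates or timeouts rather than a hard-coded threshold.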

Testing on Web Applications
4. Security Testing
Definition: Security testing identifies vulnerabilities in the web application and ensures it is protected against threats such as unauthorized access, data breaches, and other malicious attacks.
Objectives:
• Identify security vulnerabilities and weaknesses.
• Ensure data protection and privacy.
• Prevent unauthorized access and breaches.
Types:
• Penetration testing: Simulate attacks to find vulnerabilities.
• Vulnerability scanning: Scan the application for known vulnerabilities.
• Security audits: Review code and configurations for security flaws.
Tools: OWASP ZAP, Burp Suite, Nessus.
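
One concrete vulnerability class, SQL injection, can be demonstrated with Python's built-in sqlite3 module; the table and the malicious input are contrived for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so the OR clause matches every row.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: a parameterized query treats the input as plain data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)).fetchall()

print("unsafe rows leaked:", len(unsafe))  # injection succeeded
print("safe rows returned:", len(safe))    # injection blocked
```

Scanners like OWASP ZAP probe a running application for exactly this kind of flaw; the code-level fix is the parameterized query shown in the safe branch.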

Testing on Web Applications
5. Client-Server Testing
Definition: Client-server testing focuses on the interaction between the client and server components of the web application. It ensures the communication and data exchange between the client (front end) and server (back end) work correctly.
Objectives:
• Verify the client-server interaction.
• Ensure data integrity and consistency.
• Validate the performance and reliability of client-server communication.
Approach:
• Test the application on different client devices and browsers.
• Monitor requests and responses between the client and server.
• Verify the correctness of data transactions and error handling.
Tools: Postman (API testing), Fiddler (network traffic analysis), Selenium (automated functional testing).
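
A minimal client-server round trip can be sketched entirely in the Python standard library: a throwaway HTTP server plays the back end and `urllib` plays the client; the `/health` endpoint is a hypothetical example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: issue a request and verify status and payload.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()

assert status == 200 and payload == {"path": "/health"}
print("client-server round trip verified")
```

Tools like Postman automate exactly this pattern against real APIs: send a request, then assert on the status code, headers, and response body.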

Acceptance Testing
Acceptance testing ensures that a software application meets the business requirements and is ready for deployment. It validates that the end product is ready for use by the intended audience. The following slides cover acceptance testing, including alpha testing, beta testing, and special tests such as regression testing and GUI testing.

Definition: Acceptance testing is conducted to determine whether the software is ready for release. It is usually the final phase of testing before the application goes live.
Objectives:
• Validate the end-to-end business flow.
• Ensure the application meets the requirements.
• Identify any critical issues before the production release.

Acceptance Testing: Alpha Testing
Definition: Alpha testing is an internal form of acceptance testing conducted by the developers and QA team within the organization.
Objectives:
• Identify bugs and issues in the early stages.
• Ensure all major functionalities work correctly.
• Gather feedback from internal users.
Approach:
• Conducted in a controlled environment.
• Involves both white-box and black-box testing.
• Performed in stages: first by developers, then by QA testers.
Benefits:
• Early detection of bugs.
• Immediate feedback for developers.
• Ensures a stable version for beta testing.

Acceptance Testing: Beta Testing
Definition: Beta testing is an external form of acceptance testing conducted by a select group of end-users in a real-world environment.
Objectives:
• Validate the application in a real-world environment.
• Gather feedback from actual users.
• Identify any issues that were not found during alpha testing.
Approach:
• Conducted in the user's environment.
• Involves black-box testing.
• Users provide feedback on usability, functionality, and performance.
Benefits:
• Real-world usage reveals unexpected issues.
• Direct feedback from end-users.
• Enhances product reliability and user satisfaction.

Special Tests: Regression Testing
Definition: Regression testing ensures that new changes or enhancements do not adversely affect the existing functionality of the application.
Objectives:
• Verify that recent changes have not introduced new bugs.
• Ensure that previously fixed issues remain resolved.
• Confirm that the application still meets its requirements.
Approach:
• Re-run previously executed test cases.
• Automate test cases for efficiency.
• Focus on critical areas and impacted functionality.
Benefits:
• Maintains software stability.
• Prevents the reintroduction of bugs.
• Ensures continuous quality assurance.
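
The re-run-and-extend approach can be sketched with Python's unittest; the `discount` function and its "fixed defect" are invented for illustration:

```python
import unittest

# Unit after a hypothetical bug fix: invalid percentages now raise
# instead of silently producing a wrong price.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # Pre-existing cases: must still pass after any change.
    def test_existing_behaviour(self):
        self.assertEqual(discount(100, 25), 75.0)
        self.assertEqual(discount(80, 0), 80.0)

    # Case added when the defect was fixed: keeps the fix from
    # regressing in future changes.
    def test_fixed_defect(self):
        with self.assertRaises(ValueError):
            discount(100, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The old tests pin down existing behavior while the new test pins down the fix; re-running the whole suite after every change is the regression-testing discipline described above, and tools like Selenium apply the same idea at the UI level.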

Special Tests: GUI Testing
Definition: GUI testing focuses on validating the graphical user interface of the application to ensure it meets design specifications and user-experience standards.
Objectives:
• Verify that the GUI functions as expected.
• Ensure that the interface is user-friendly and accessible.
• Check for visual consistency and correctness.
Approach:
• Manual testing by interacting with the application.
• Automated testing using GUI testing tools.
• Focus on visual elements, usability, and accessibility.
Benefits:
• Enhances user experience.
• Ensures visual and functional correctness.
• Identifies interface-related issues early.

Summary
This lecture covered the following concepts:
• Levels of Testing
• Unit Testing: Driver, Stub
• Integration Testing: Top-Down Integration, Bottom-Up Integration, Bi-Directional Integration
• Testing on Web Applications: Performance Testing, Load Testing, Stress Testing, Security Testing, Client-Server Testing
• Acceptance Testing: Alpha Testing, Beta Testing
• Special Tests: Regression Testing, GUI Testing

Expected Questions

BAQ:
• Define Bi-Directional Integration Testing.
• Define Security Testing and its importance in web applications.
• Define Regression Testing and its role in software maintenance.
• Define Testing Tasks and explain their role in the test plan.

SAQ:
• Explain Top-Down Integration Testing.
• Describe Load Testing and its purpose.
• Explain Regression Testing and its role in software maintenance.
• Explain the importance of effective Test Management in software projects.
• Describe the importance of Test Reporting in software testing.
• Describe the criteria for selecting a suitable Testing Tool.

LAQ:
• Explain the challenges and benefits of each approach in Integration Testing.
• Describe Client-Server Testing and its importance in ensuring seamless communication between components.
• Elaborate on how to decide on a Test Approach based on project requirements and constraints.
• Explain the process of baselining a Test Plan and why it is essential in software testing.
• Elaborate on the stages involved in the Defect Management Process and their significance.
• Explain the criteria for selecting an appropriate Testing Tool for a software testing project.

References
• Srinivasan Desikan and Gopalaswamy Ramesh, Software Testing: Principles and Practices, Pearson India, 2005. ISBN 9788177581218.
• M. G. Limaye, Software Testing: Principles, Techniques and Tools, Tata McGraw Hill Education, New Delhi, 2007. ISBN-13 9780070139909.
• Naresh Chauhan, Software Testing: Principles and Practices, Oxford University Press, Noida.
• Yogesh Singh, Software Testing, Cambridge University Press, Bengaluru. ISBN 978-1-107-65278-1.

Thank you