Learning Objectives
- Understand core definitions and purpose of software testing
- Learn each phase of the Software Testing Life Cycle (STLC)
- Distinguish testing scopes and select appropriate coverage
- Compare testing approaches and pick techniques for projects
Agenda
- Introduction & key terms
- Software Testing Life Cycle (STLC): phases
- Testing scopes (what to test & when)
- Testing approaches & techniques
- Sample test plan, lab exercises, best practices
What is Software Testing?
- A process to evaluate whether software meets its requirements
- Detect defects, ensure quality, and validate behavior
- Includes verification (are we building it right?) and validation (are we building the right thing?)
Why Testing Matters
- Prevents defects from reaching users (reduces cost)
- Improves user confidence and product reliability
- Ensures compliance with requirements and standards
Key Terms (Glossary)
- Test case: inputs, actions, expected results
- Defect (bug): difference between expected & actual behavior
- Test plan, test suite, test run, test environment
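To make the "test case" term concrete, here is a minimal sketch in pytest; the add function and its expected behavior are invented purely for illustration:

```python
# Minimal sketch of a test case: inputs, action, expected result.
# The add() function under test is a hypothetical example.
def add(a, b):
    return a + b

def test_add_two_positive_numbers():
    a, b = 2, 3            # inputs / preconditions
    result = add(a, b)     # action
    assert result == 5     # expected result
```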
High-level Types of Testing
- Functional testing: verifies features against requirements
- Non-functional testing: performance, security, usability
- Manual vs. automated testing
Software Testing Life Cycle (STLC): Overview
- A sequence of activities to plan, design, execute, and close testing
- Typical phases: Requirement Analysis → Planning → Design → Environment Setup → Execution → Closure
- The STLC complements the SDLC (Software Development Life Cycle)
STLC: Requirement Analysis
- Understand functional & non-functional requirements
- Identify testable items and clarify ambiguities
- Prepare a requirement traceability matrix (RTM)
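A requirement traceability matrix can start as nothing more than a mapping from requirement IDs to the test cases that cover them; a minimal sketch in Python, with made-up requirement and test-case IDs:

```python
# Minimal sketch of a requirement traceability matrix (RTM).
# Requirement and test-case IDs are illustrative placeholders.
rtm = {
    "REQ-001 Login with valid credentials": ["TC-001", "TC-002"],
    "REQ-002 Lock account after repeated failures": ["TC-003"],
    "REQ-003 Password reset via email": [],  # no coverage yet
}

# Flag requirements with no covering test case.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test coverage:", uncovered)
```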
STLC: Test Planning
- Define scope, objectives, resources, and schedule
- Choose entry/exit criteria and testing tools
- Estimate effort and assign responsibilities (roles)
Test Case Design & Test Data
- Create clear test cases: preconditions, steps, expected result
- Use test data for positive/negative and boundary checks
- Apply design techniques: equivalence partitioning, boundary value analysis (BVA), decision tables (see the sketch below)
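As an illustration of equivalence partitioning and boundary value analysis, a hedged sketch in pytest; the validate_age function and its 18-65 rule are assumptions made for the example:

```python
import pytest

# Hypothetical function under test: accepts ages 18-65 inclusive.
def validate_age(age):
    return 18 <= age <= 65

# Boundary values around both limits, plus one representative value
# from each equivalence partition (too low, valid, too high).
@pytest.mark.parametrize("age, expected", [
    (17, False),   # just below lower boundary
    (18, True),    # lower boundary
    (19, True),    # just above lower boundary
    (40, True),    # representative of the valid partition
    (64, True),    # just below upper boundary
    (65, True),    # upper boundary
    (66, False),   # just above upper boundary
    (-1, False),   # representative of the invalid low partition
    (120, False),  # representative of the invalid high partition
])
def test_validate_age(age, expected):
    assert validate_age(age) == expected
```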
Test Environment Setup
- Configure hardware, software, networks, and test tools
- Use production-like data (anonymized) and seeded fixtures
- Ensure CI/CD integration for automated tests
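One common way to wire seeded, production-like data into automated tests is a fixture; a minimal pytest sketch in which the in-memory "database" and its anonymized records are hypothetical stand-ins:

```python
import pytest

@pytest.fixture
def seeded_db():
    # Hypothetical in-memory stand-in for a production-like database,
    # seeded with anonymized records before each test.
    db = {"users": [
        {"id": 1, "email": "user1@example.test", "active": True},
        {"id": 2, "email": "user2@example.test", "active": False},
    ]}
    yield db
    db.clear()  # teardown: leave the environment clean

def test_active_user_count(seeded_db):
    active = [u for u in seeded_db["users"] if u["active"]]
    assert len(active) == 1
```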
Test Execution & Defect Reporting
- Run test cases, log results, capture evidence (screenshots, logs)
- Record defects: steps to reproduce, severity, priority, environment
- Triage defects and track them until closure
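A defect report can be captured as a structured record; a sketch using a Python dataclass, where the field names and sample values are illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Defect:
    # Illustrative defect-report fields; adapt to your tracker's schema.
    summary: str
    steps_to_reproduce: list[str]
    expected: str
    actual: str
    severity: str     # e.g. "critical", "major", "minor"
    priority: str     # e.g. "P1", "P2", "P3"
    environment: str  # e.g. "staging, Chrome 129, Windows 11"
    status: str = "open"
    attachments: list[str] = field(default_factory=list)

bug = Defect(
    summary="Login fails with valid credentials",
    steps_to_reproduce=["Open /login", "Enter valid user/password", "Click Sign in"],
    expected="User is redirected to the dashboard",
    actual="HTTP 500 error page is shown",
    severity="critical",
    priority="P1",
    environment="staging, Chrome 129, Windows 11",
)
```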
Test Closure & Metrics
- Collect exit-criteria evidence and lessons learned
- Report metrics: test coverage, pass rate, defect density, MTTR (mean time to repair)
- Archive artifacts: test cases, data, environment configs
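A short sketch of how some of these metrics might be computed; the counts are made-up sample numbers, and defect density is expressed per KLOC, one common convention:

```python
# Sample numbers for illustration only.
tests_executed = 200
tests_passed = 184
defects_found = 12
lines_of_code = 15_000
total_repair_hours = 36.0
defects_fixed = 9

pass_rate = tests_passed / tests_executed * 100          # percent
defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
mttr_hours = total_repair_hours / defects_fixed          # mean time to repair

print(f"Pass rate:      {pass_rate:.1f}%")
print(f"Defect density: {defect_density:.2f} per KLOC")
print(f"MTTR:           {mttr_hours:.1f} hours")
```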
Testing Scopes: Levels
- Unit testing: individual functions/modules (developer-driven)
- Integration testing: how modules work together
- System testing: full product behavior in the target environment
- Acceptance testing: user/business validation (UAT)
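To make the unit vs. integration distinction concrete, a small sketch; both functions are hypothetical examples:

```python
# Hypothetical units under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def total_with_discount(prices, percent):
    return apply_discount(sum(prices), percent)

# Unit test: one function exercised in isolation.
def test_apply_discount_unit():
    assert apply_discount(100.0, 10) == 90.0

# Integration-style test: the two functions working together.
def test_total_with_discount_integration():
    assert total_with_discount([40.0, 60.0], 10) == 90.0
```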
Testing Scopes: Types & Focus
- Smoke testing: broad, shallow checks of critical paths in a new build
- Sanity testing: quick verification after minor changes
- Regression testing: ensure existing features still work after changes
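In an automated suite, these categories are often expressed as markers so that subsets can be run on demand; a pytest sketch in which the marker names and placeholder tests are assumptions:

```python
import pytest

# Run only the smoke subset with:  pytest -m smoke
# (register custom markers in pytest.ini to silence warnings)

@pytest.mark.smoke
def test_homepage_responds():
    assert True  # placeholder for a critical-path check

@pytest.mark.regression
def test_legacy_export_format_still_supported():
    assert True  # placeholder for a previously working feature
```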
Scope Decisions: Risk-based Testing
- Prioritize testing by business impact and likelihood of failure
- Define measurable acceptance criteria for features
- Use traceability to ensure high-risk items get more coverage
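A lightweight way to rank what to test most heavily is a risk score of impact times likelihood; a sketch with invented features and 1-5 ratings:

```python
# Hypothetical features with impact and likelihood rated 1 (low) to 5 (high).
features = [
    {"name": "payment processing", "impact": 5, "likelihood": 4},
    {"name": "profile avatar upload", "impact": 2, "likelihood": 3},
    {"name": "password reset", "impact": 4, "likelihood": 2},
]

# Risk score = impact x likelihood; test the riskiest items most heavily.
for f in sorted(features, key=lambda f: f["impact"] * f["likelihood"], reverse=True):
    print(f["name"], "risk =", f["impact"] * f["likelihood"])
```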
Testing Approaches: Static vs Dynamic
- Static testing: reviews, walkthroughs, static analysis (no code execution)
- Dynamic testing: executing the code with test inputs
- The two are complementary and catch different classes of defects
White-box / Black-box / Grey-box
- White-box: tests designed with knowledge of the internal code/structure (e.g., unit tests)
- Black-box: tests based only on requirements and observable behavior (e.g., functional tests)
- Grey-box: tests with partial internal knowledge (e.g., integration/service-level tests)
Modern Approaches: Shift-left & Shift-right
- Shift-left: move testing earlier in development (unit tests, TDD, static analysis)
- Shift-right: test in production (canary releases, chaos engineering, monitoring)
- DevTestOps: integrate testing into the CI/CD pipeline for continuous quality
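Shift-left in its simplest form means writing the test before the code; a minimal test-first sketch in which the slugify function is a hypothetical example (in practice the test and the implementation would live in separate files):

```python
# Step 1 (red): this test is written before slugify() exists, so it fails first.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the simplest implementation that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")
```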
Test Design Techniques
- Equivalence partitioning: reduce test count by grouping similar inputs
- Boundary value analysis: test edge values around limits
- Decision tables & state transition testing for complex rules (see the sketch below)
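A decision table maps combinations of conditions to expected outcomes and translates naturally into a parameterized test; a sketch assuming a hypothetical shipping-fee rule (free shipping only for members with orders of 50 or more):

```python
import pytest

# Hypothetical rule under test: shipping is free only for members
# whose order total is at least 50.
def shipping_fee(is_member, order_total):
    return 0 if (is_member and order_total >= 50) else 5

# Decision table: each row is one combination of conditions -> expected outcome.
@pytest.mark.parametrize("is_member, order_total, expected_fee", [
    (True,  60, 0),  # member,     total >= 50 -> free
    (True,  40, 5),  # member,     total <  50 -> charged
    (False, 60, 5),  # non-member, total >= 50 -> charged
    (False, 40, 5),  # non-member, total <  50 -> charged
])
def test_shipping_fee_decision_table(is_member, order_total, expected_fee):
    assert shipping_fee(is_member, order_total) == expected_fee
```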
Test Automation: Strategy & ROI
- Automate repetitive, stable, high-value tests (regression, smoke)
- Select tools based on your stack (Selenium, Playwright, JUnit, pytest)
- Measure ROI: maintenance cost vs. time saved and speed of failure detection
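A rough, back-of-the-envelope comparison of automation cost against manual effort saved; all hour figures are invented for illustration:

```python
# Illustrative numbers only; substitute your own estimates.
build_hours = 40              # one-time cost to automate the suite
maintenance_hours_per_run = 0.5
manual_hours_per_run = 6      # manual effort the suite replaces
runs_per_year = 50

automation_cost = build_hours + maintenance_hours_per_run * runs_per_year
manual_cost = manual_hours_per_run * runs_per_year

savings = manual_cost - automation_cost
print(f"Estimated hours saved per year: {savings:.0f}")  # 300 - 65 = 235
```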
Reporting & Metrics
- Key metrics: test coverage, pass/fail rate, defect density, escaped defects
- Dashboards: give stakeholders a real-time view of test health
- Use metrics to spot trends, not as absolute 'quality' scores
Sample Test Plan (Template)
- Scope & objectives | test items | features not to be tested
- Resources, schedule, risks, entry & exit criteria
- Test deliverables: cases, reports, environments
Common Pitfalls & Best Practices
- Pitfalls: unclear requirements, lack of environment parity, skipping regression testing
- Best practices: define acceptance criteria, automate smartly, keep tests small and fast
- Collaborate with product & dev; tests should enable velocity, not block it
Hands-on Lab Tasks
- Beginner: write 5 test cases for a login page; run them manually and log defects
- Intermediate: design test cases using equivalence partitioning & BVA; automate a smoke test
- Advanced: create a CI pipeline stage that runs unit & integration tests; measure the metrics
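As a starting point for the beginner and intermediate tasks, a skeleton of automated login checks; the login function here is a hypothetical stand-in to be replaced with calls to the real application in the lab:

```python
import pytest

# Hypothetical stand-in for the login feature under test; in the lab,
# replace this with calls to the real application (UI or API).
def login(username, password):
    return username == "student" and password == "correct-horse"

def test_login_succeeds_with_valid_credentials():
    assert login("student", "correct-horse") is True

@pytest.mark.parametrize("username, password", [
    ("student", "wrong-password"),      # wrong password
    ("unknown-user", "correct-horse"),  # unknown user
    ("", ""),                           # empty fields
])
def test_login_fails_with_invalid_credentials(username, password):
    assert login(username, password) is False
```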
Summary & Next Steps
- Testing spans verification, validation, and continuous quality practices
- Apply the STLC, choose scope by risk, and pick approaches that suit your lifecycle
- References: ISTQB syllabus, IEEE 829 test plan guidelines, books & online labs