Software Testing Concepts: STLC, Bug Life Cycle & Metrics
Input Domain Testing
The input domain of a program is the set of all possible inputs that it can accept. Since testing every single input is impossible, input domain testing is used to partition the input space into manageable subsets, ensuring effective test coverage.
Explanation:
- Input domain testing is a black-box testing technique.
- Inputs are divided into equivalence classes (valid and invalid).
- Testers choose representative values from each class.
- Special focus is given to boundary values, since most errors occur at limits.
- It reduces redundant test cases by avoiding repetition within the same equivalence class.
- Helps in early bug detection by targeting likely error-prone areas.
- Supports systematic and logical test case design (a short sketch follows this list).
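As a minimal sketch, assume a hypothetical `validate_age` function that accepts ages from 18 to 60 (the function and range are illustrative, not taken from any particular system). The input domain splits into one valid and two invalid equivalence classes; one representative value is picked from each, with extra attention to the boundaries:

```python
def validate_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 18..60 inclusive."""
    return 18 <= age <= 60

# Equivalence classes for the input domain of `age`:
#   invalid-low : age < 18
#   valid       : 18 <= age <= 60
#   invalid-high: age > 60
representatives = {
    "invalid-low": 5,      # one value stands in for the whole class
    "valid": 35,
    "invalid-high": 75,
}
boundaries = [17, 18, 60, 61]  # just outside / exactly on each boundary

for name, value in representatives.items():
    print(name, value, "->", validate_age(value))
for value in boundaries:
    print("boundary", value, "->", validate_age(value))
```

Any other value from the same class would exercise the same logic, which is why one representative per class is usually enough.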
Exhaustive Testing
Exhaustive testing means testing the software with all possible inputs and execution paths. Although it would guarantee correctness in theory, in practice it is impossible because of the sheer number of input combinations, time constraints, and resource limitations.
Explanation:
- Input space is often infinite or extremely large.
- The number of execution paths explodes due to loops and nested conditions.
- Testing all combinations requires impractical time and cost.
- For example, an 8-character password field where each character can be any of 95 printable ASCII characters has 95^8 possible values, which is over 6 quadrillion test cases.
- Even at 1 test case per millisecond, executing them all would take roughly 200,000 years (a quick calculation follows this list).
- Resources such as manpower, hardware, and budgets are limited in real-world projects.
- Hence, risk-based, domain-based, and coverage-based testing techniques are adopted instead.
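The password arithmetic above can be checked with a back-of-the-envelope calculation, assuming one test case per millisecond:

```python
# Back-of-the-envelope check of the password example above.
combinations = 95 ** 8                    # 8 characters, 95 printable ASCII choices each
print(f"combinations: {combinations:,}")  # 6,634,204,312,890,625 (~6.6 quadrillion)

seconds = combinations / 1000             # assuming 1 test case per millisecond
years = seconds / (60 * 60 * 24 * 365)
print(f"years at 1 test/ms: {years:,.0f}")  # roughly 210,000 years
```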
Conclusion: Exhaustive testing is practically impossible, which is why testers rely on systematic test design techniques such as Boundary Value Analysis, Equivalence Partitioning, and Decision Table Testing to maximize effectiveness with fewer test cases.
Software Testing Life Cycle (STLC)
Software Testing Life Cycle (STLC) defines the sequential stages followed to conduct testing in a systematic manner.
Explanation:
- Requirement Analysis: Identify what needs to be tested.
- Test Planning: Define scope, strategy, resources, and schedules.
- Test Case Development: Write test cases, scripts, and prepare test data.
- Test Environment Setup: Configure required hardware and software.
- Test Execution: Run the test cases and record results.
- Defect Tracking: Log and monitor defects until resolved.
- Test Closure: Summarize testing activities, coverage achieved, and lessons learned.
Each phase has defined entry and exit criteria. For example, in a banking application, STLC ensures that a requirement such as “fund transfer” is planned, test cases are created, the environment is prepared, execution is carried out, and the results are documented.
Conclusion: STLC ensures a well-organized and repeatable testing process, improving efficiency and reliability of software products.
Bugs in the Software Development Life Cycle (SDLC)
Bugs can arise at different stages of the Software Development Life Cycle (SDLC). Identifying their types by stage helps in early prevention and efficient debugging.
Requirement Phase Bugs: Missing or ambiguous requirements. e.g., Login requires “strong password” but no clear definition is given.
Design Phase Bugs: Errors in architecture or algorithm design. e.g., Choosing an inefficient sorting algorithm for large data sets.
Coding Phase Bugs: Syntax errors, logical mistakes, incorrect data handling, memory leaks. e.g., Using = instead of == in a condition.
Testing Phase Bugs: Integration errors, system misconfigurations, performance issues. e.g., Module A works alone but fails when integrated with Module B.
Maintenance Phase Bugs: Regression issues due to updates or environment changes. e.g., A bug appears in the billing module after patching the login module.
Bug Life Cycle (Defect Life Cycle)
The Bug Life Cycle (also called Defect Life Cycle) refers to the stages a bug goes through from discovery to closure. It ensures proper tracking and accountability of defect resolution.
- New: A tester discovers a defect and logs it.
- Assigned: The bug is assigned to a developer.
- Open: Developer begins work on the fix.
- Fixed: Code is corrected by the developer.
- Retest: Tester re-executes the test to confirm the fix.
- Closed: If retest passes, the bug is marked closed.
- Reopened: If still failing, the bug is reopened.
- Deferred: If low priority, it is postponed for a future release.
- Rejected: If invalid or duplicate, the bug is rejected.
Example: In an e-commerce app, a tester finds “discount not applied at checkout.” Logged → Assigned → Fixed by developer → Retested → Closed.
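One way to make the life cycle concrete is to model it as a small state machine. The sketch below is an assumed transition table based on the stages listed above; real defect trackers define their own variants of these states and transitions.

```python
from enum import Enum

class BugState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    RETEST = "Retest"
    CLOSED = "Closed"
    REOPENED = "Reopened"
    DEFERRED = "Deferred"
    REJECTED = "Rejected"

# Allowed transitions, following the stages described above (illustrative).
TRANSITIONS = {
    BugState.NEW:      {BugState.ASSIGNED, BugState.REJECTED, BugState.DEFERRED},
    BugState.ASSIGNED: {BugState.OPEN, BugState.DEFERRED, BugState.REJECTED},
    BugState.OPEN:     {BugState.FIXED, BugState.DEFERRED},
    BugState.FIXED:    {BugState.RETEST},
    BugState.RETEST:   {BugState.CLOSED, BugState.REOPENED},
    BugState.REOPENED: {BugState.ASSIGNED},
    BugState.DEFERRED: {BugState.ASSIGNED},
    BugState.CLOSED:   set(),
    BugState.REJECTED: set(),
}

def move(current: BugState, target: BugState) -> BugState:
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

# The e-commerce example: New -> Assigned -> Open -> Fixed -> Retest -> Closed.
state = BugState.NEW
for nxt in (BugState.ASSIGNED, BugState.OPEN, BugState.FIXED,
            BugState.RETEST, BugState.CLOSED):
    state = move(state, nxt)
    print(state.value)
```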
Testing Group Responsibilities
The testing group in an organization performs multiple key activities to ensure product quality. Naresh Chauhan (pg. 275) defines these responsibilities clearly.
- Define & Apply Test Policies & Standards: Establish guidelines for consistent testing.
- Participate in Reviews: Requirements, design, and code reviews to detect early defects.
- Test Planning & Execution: Designing test cases, setting objectives, and executing them systematically.
- Measurement & Monitoring: Track test coverage, effort, and defect density.
- Defect Tracking & Reporting: Logging bugs, tracking severity, and preparing reports.
- Tool Acquisition: Choosing and managing automation and defect-tracking tools.
- Training & Mentoring: Developing skills of junior testers and ensuring team growth.
Cost of Fixing Bugs
The cost of fixing a bug increases exponentially the later it is detected in the SDLC. Early detection is therefore crucial.
- Requirement Stage: Fixing here is cheapest (clarifying documents).
- Design Stage: Fixing requires changes in design documents and models, costlier than requirements.
- Coding Stage: Developers must change and recompile code, medium-high cost.
- Testing Stage: Fixing requires rework in code, retesting, regression testing.
- Post-release Stage: Fixing requires patches, hotfixes, or recalls → very expensive.
Example: A requirement defect costing ₹100 to fix at analysis can cost ₹10,000 or more after release. Studies commonly report that the cost multiplies by roughly 10× for each later phase in which the defect is found.
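The escalation is easy to see by applying the tenfold rule of thumb phase by phase. The base cost and multiplier in the sketch below are illustrative assumptions; real studies report multipliers that vary by project and domain.

```python
# Illustrative only: apply a 10x-per-phase rule of thumb to a ₹100 base cost.
phases = ["Requirement", "Design", "Coding", "Testing", "Post-release"]
base_cost, multiplier = 100, 10

for i, phase in enumerate(phases):
    print(f"{phase:<13} ₹{base_cost * multiplier ** i:>10,}")
# Requirement ₹100, Design ₹1,000, Coding ₹10,000, and so on up the scale.
```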
Boundary Value Analysis (BVA) and Limitations
Boundary Value Analysis (BVA) is a widely used black-box technique that focuses test values on the boundaries of input ranges (a basic value-selection sketch follows the list below). Despite its effectiveness, it has limitations:
- Covers only boundaries, not the entire domain — may miss defects inside ranges.
- Not effective for Boolean or categorical variables (e.g., Male/Female).
- Works best for independent numeric inputs but fails if inputs are interdependent.
- May ignore invalid combinations of inputs beyond simple boundaries.
- Cannot ensure logical correctness, only input correctness.
- Limited coverage of business rules, unlike decision tables; not suitable for complex workflows with multiple decision points.
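For a single independent numeric input, BVA typically selects the values on each boundary plus the values just inside and just outside it. The sketch below generates these picks for an assumed range of 18 to 60 (the range is illustrative):

```python
def bva_values(minimum: int, maximum: int) -> list[int]:
    """Classic boundary-value picks for one independent numeric input."""
    return [
        minimum - 1,  # just below the lower boundary (invalid)
        minimum,      # lower boundary
        minimum + 1,  # just above the lower boundary
        maximum - 1,  # just below the upper boundary
        maximum,      # upper boundary
        maximum + 1,  # just above the upper boundary (invalid)
    ]

print(bva_values(18, 60))   # [17, 18, 19, 59, 60, 61]
```

Note how all six values cluster at the two boundaries, which is exactly why defects in the middle of the range, or in interactions between inputs, can slip through.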
Verification vs Validation
Verification: “Are we building the product right?”
Nature: Static, no code execution.
Activities: Requirement reviews, design inspections, walkthroughs, static analysis.
Goal: Detect defects early, cost-effective.
Validation: “Are we building the right product?”
Nature: Dynamic, involves executing code.
Activities: Unit testing, integration testing, system testing, acceptance testing.
Goal: Ensure software meets user expectations and real-world needs.
Analogy: Verification = checking the blueprint of a house before building. Validation = living in the house to confirm it meets comfort and safety expectations.
Cyclomatic Complexity (CC)
Cyclomatic Complexity (CC) is a metric that measures the number of independent paths through a program’s source code.
Why Important:
- Test Coverage: CC gives the minimum number of test cases needed to achieve branch coverage.
- Maintainability: High CC means code is complex, harder to understand, and maintain.
- Risk Identification: Modules with higher CC are more error-prone.
- Resource Planning: Helps project managers allocate extra review/testing effort to complex modules.
- Industry Practice: NASA and safety-critical industries often limit CC ≤ 10 to reduce risks.
Analogy: CC is like the number of traffic intersections in a city. More intersections → more routes → more chances of accidents → more testing needed. A small worked sketch follows.
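As a rough sketch of how the number is obtained: for a control-flow graph, V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components), and for structured code a common shortcut is "number of decision points + 1". The function below and its node/edge counts are illustrative.

```python
def classify(score: int) -> str:
    """Illustrative function with three decision points (if / elif / elif)."""
    if score < 0:           # decision 1
        return "invalid"
    elif score < 40:        # decision 2
        return "fail"
    elif score < 75:        # decision 3
        return "pass"
    return "distinction"

# Shortcut for structured code: CC = decisions + 1.
decisions = 3
cc_shortcut = decisions + 1            # 4 -> at least 4 test cases for branch coverage

# Graph formula V(G) = E - N + 2P for the same function's control-flow graph
# (node/edge counts below come from an assumed hand-drawn graph of `classify`).
edges, nodes, components = 11, 9, 1
cc_graph = edges - nodes + 2 * components
print(cc_shortcut, cc_graph)           # both 4
```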
Testing Group Hierarchy
Need: Software testing is teamwork requiring different levels of responsibility. Hierarchy ensures coordination, accountability, and specialization.
Hierarchy Levels:
- Test Manager: Defines policies and strategy; allocates resources and manages risks.
- Test Lead / Coordinator: Prepares detailed plans; supervises team execution.
- Senior Test Engineer: Designs complex test cases; reviews others’ work; mentors juniors.
- Test Engineer: Executes test cases; logs defects and prepares reports.
- Junior Tester / Trainee: Assists in regression tests; handles automation scripts and test environment setup.
Importance: Provides role clarity and prevents overlap/duplication of efforts. Similar to an organizational chain of command in companies.
Test Plan
A Test Plan is a formal document that defines the testing scope, strategy, approach, resources, schedule, and risks for a project.
Contents:
- Test Items: Identifies modules, features, or software versions to be tested. e.g., Login module, Payment gateway, Report generation.
- Features to be Tested: Specifies functional (e.g., data validation, calculations) and non-functional features (e.g., performance, usability).
- Testing Tasks: Defines activities such as test case design, execution, defect logging, regression testing; assigns responsibilities.
- Schedule & Resources: Defines timeline, effort estimates, required tools, and tester allocation. e.g., Selenium automation for regression; JMeter for performance.
- Risks & Mitigation: Identifies risks like schedule slippage, tool failures, environment issues and provides backup strategies.
Importance: Acts as a blueprint for testing. Aligns testing with business priorities, reduces confusion, and improves accountability.
Test Management
Test management is the process of planning, controlling, and monitoring software testing activities to ensure software quality. According to Naresh Chauhan (STQA Mod 3), test management includes five key elements.
- Test Organization: Defines the structure and hierarchy of the testing team. Specifies roles such as Test Manager, Test Lead, Test Engineers, and required skills. Ensures smooth communication and accountability.
- Test Planning: Documents the scope, objectives, risks, resources, and schedule for testing. Aligns testing activities with business goals. Includes what to test, how to test, and who will test.
- Detailed Test Design & Specifications: Converts high-level plans into detailed test cases. Uses a traceability matrix to map requirements → test cases → design/code (see the sketch after this list). Ensures no requirement is left untested.
- Test Monitoring & Assessment: Tracks execution progress, defect density, and coverage metrics. Uses dashboards and reports to provide feedback and help managers identify risks and adjust schedules.
- Product Quality Assurance: Ensures the product meets quality standards and customer expectations. Involves audits, reviews, and compliance with ISO and CMMI standards.
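A traceability matrix can be as simple as a mapping from requirement IDs to the test cases that cover them; the IDs in the sketch below are hypothetical and serve only to show how an untested requirement is detected.

```python
# Hypothetical requirement and test-case IDs, for illustration only.
requirements = ["REQ-01 login", "REQ-02 fund transfer", "REQ-03 statement download"]

traceability = {
    "REQ-01 login": ["TC-001", "TC-002"],
    "REQ-02 fund transfer": ["TC-010"],
    # REQ-03 has no test cases yet.
}

uncovered = [r for r in requirements if not traceability.get(r)]
print("Requirements without test coverage:", uncovered)
```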
Software Metrics
Software metrics are quantitative measures used to evaluate the quality of a software product, the efficiency of processes, or the progress of a project.
Classification of Metrics:
- Product Metrics (focus on software attributes): Measure size, complexity, quality, performance. e.g., Lines of Code (LOC), Cyclomatic Complexity, Defect Density, Response Time. Use: Helps compare product quality across different versions.
- Process Metrics (focus on efficiency of processes): Measure effectiveness of SDLC and testing processes. e.g., Defect Removal Efficiency, Test Execution Productivity, Review Efficiency. Use: Supports continuous improvement (Kaizen).
- Project Metrics (focus on management & progress): Track resource usage, schedule, and cost. e.g., Cost Variance, Schedule Variance, Team Productivity, Effort Variance. Use: Helps managers forecast risks and adjust planning.
Importance: Provides objective insight into development and testing, supports estimation of future projects, enables risk prediction and timely corrective action, and ensures compliance with CMMI, ISO, and Six Sigma quality models.
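Two frequently used metrics can be computed directly: defect density (defects per KLOC) and Defect Removal Efficiency (the share of defects caught before release). The figures in the sketch below are made up for illustration.

```python
# Illustrative figures only.
defects_found_in_testing = 45
defects_found_after_release = 5
lines_of_code = 25_000

defect_density = defects_found_in_testing / (lines_of_code / 1000)   # defects per KLOC
dre = defects_found_in_testing / (defects_found_in_testing + defects_found_after_release)

print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 1.80
print(f"Defect Removal Efficiency: {dre:.0%}")               # 90%
```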
Glossary and Terms
| Term | Meaning | Example |
|---|---|---|
| Error | Human mistake in coding/design | Wrong formula written |
| Bug | Informal term for defect | Login button not working |
| Defect | Deviation from expected result found in testing | Wrong discount applied |
| Fault | Root cause in code logic | Divide by zero |
| Failure | System behaves incorrectly at runtime | App crash on upload |
| Testware | Artifacts for testing | Test cases, scripts |
| Incident | Unexpected event during testing | Network timeout |
Comparison: Black Box vs White Box
| Aspect | Black Box Testing | White Box Testing |
|---|---|---|
| Focus | Tests functionality (what system does) | Tests code logic (how system works) |
| Knowledge | No knowledge of code needed | Requires programming knowledge |
| Techniques | BVA, Equivalence Partitioning, Decision Tables | Statement, Branch, Path Coverage |
| Defects found | Missing features, wrong outputs | Logic errors, unreachable code |
| Example | Test login by entering inputs | Check if both if and else paths of login run |
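The contrast can be shown on a tiny login check: a black-box test asserts only the observable output for given inputs, while a white-box view deliberately chooses inputs so that both the if-branch and the else-branch execute. The function and credentials below are hypothetical.

```python
def login(username: str, password: str) -> str:
    """Hypothetical system under test."""
    if username == "admin" and password == "secret":   # if-branch
        return "welcome"
    else:                                               # else-branch
        return "access denied"

# Black-box view: inputs and expected outputs only, no knowledge of the code.
assert login("admin", "secret") == "welcome"
assert login("admin", "wrong") == "access denied"

# White-box view: the same two cases, chosen deliberately so that both the
# if-branch and the else-branch are executed (full branch coverage here).
print("both branches covered")
```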
