Software Quality Assurance & Testing Fundamentals
Software Quality: Definitions & QA vs QC
Quality: The degree to which software meets specified requirements and customer expectations.
Quality Assurance (QA): A preventive, process-oriented activity that ensures quality in the processes by which products are developed.
Quality Control (QC): A corrective, product-oriented activity that involves identifying defects in actual products using inspections and testing.
Difference between QA & QC:
| Aspect | QA | QC |
| --- | --- | --- |
| Focus | Process | Product |
| Goal | Prevent defects | Detect defects |
| Type | Proactive | Reactive |
| Responsibility | Everyone (esp. QA team) | Testing/inspection team |
SQA Challenges:
Managing frequent requirement changes
Integration of tools and environments
Time/budget constraints
Achieving full test coverage
Maintaining test data and documentation
SQA Planning & ISO 9000 Standards
SQA: A systematic approach that applies defined quality standards and practices to the software development lifecycle to ensure high-quality software.
SQA Planning includes:
Setting quality objectives
Identifying SQA activities
Roles and responsibilities
Tools and resources
Risk management
ISO 9000:
Family of standards for quality management systems (QMS)
Focus on process consistency and customer satisfaction
Requires documentation of procedures, internal audits, corrective actions
Key Software Quality Assurance Activities
Requirement Analysis: Ensuring requirements are testable and clear
Design Reviews: Verifying that architecture/design meets requirements
Code Reviews/Inspections: Peer reviewing code for errors
Test Planning: Defining test strategy, cases, tools, metrics
Configuration Management: Version control and change tracking
Defect Management: Logging, analyzing, and fixing bugs
Process Audits: Verifying process adherence
Metrics Collection: Tracking defect density, test coverage, etc.
Core Building Blocks of SQA
Software Engineering Standards: ISO, IEEE, CMMI
Formal Technical Reviews: Peer review of requirements, code, etc.
Testing Strategy: Unit, Integration, System, Acceptance testing
Change Control Management: Tracks changes and approvals
Measurement & Metrics: Quantitative analysis (e.g., defect trends)
Record Keeping & Documentation: SRS, plans, review logs, reports
Training and Certification: ISTQB, Six Sigma, etc.
McCall’s Software Quality Factors
Divided into 3 categories:
Product Operation:
Correctness, Reliability, Efficiency, Integrity, Usability
Product Revision:
Maintainability, Flexibility, Testability
Product Transition:
Portability, Reusability, Interoperability
Explanation:
Correctness: Functionality meets the user’s needs.
Reliability: Performs consistently under expected conditions.
Efficiency: Uses minimal system resources.
Integrity: Protects against unauthorized access.
Usability: User-friendly and easy to learn.
Maintainability: Easy to fix bugs or add features.
Flexibility: Adapts to changes easily.
Testability: Ease of testing the software.
Portability: Runs on different platforms.
Reusability: Code/components can be reused.
Interoperability: Works with other systems.
Software Reliability & Key Metrics
Software Reliability: The probability of failure-free software operation for a specified time in a specified environment.
Key Metrics:
| Metric | Description |
| --- | --- |
| ROCOF (Rate of Occurrence of Failure) | Number of failures over time |
| MTTF (Mean Time To Failure) | Average time before the system fails |
| MTTR (Mean Time To Repair) | Average time taken to repair after a failure |
| MTBF (Mean Time Between Failures) | MTTF + MTTR |
| POFOD (Probability of Failure on Demand) | Probability that the software fails when required |
| Availability | Proportion of time the system is functioning = MTTF / (MTTF + MTTR) |
Example:
If a system runs for 1000 hours (MTTF) and takes 2 hours to repair each failure (MTTR),
Availability = 1000 / (1000 + 2) = 0.998 or 99.8%
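A quick sketch of these formulas in Java, using the example's own figures:

```java
// Computes MTBF and availability from the MTTF/MTTR definitions above.
public class ReliabilityMetrics {
    public static void main(String[] args) {
        double mttf = 1000.0; // mean time to failure, in hours
        double mttr = 2.0;    // mean time to repair, in hours

        double mtbf = mttf + mttr;                  // MTBF = MTTF + MTTR
        double availability = mttf / (mttf + mttr); // proportion of uptime

        System.out.printf("MTBF: %.1f hours%n", mtbf);           // 1002.0 hours
        System.out.printf("Availability: %.3f%n", availability); // 0.998 (99.8%)
    }
}
```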
Software Testing: Definition & Objectives
Testing: The process of executing software to identify defects and ensure the product meets user requirements.
Objectives:
Find defects
Validate correctness
Improve quality
Ensure requirements are met
Gain confidence in software performance
Testing’s Role & Impact on Software Quality
Acts as a quality gate to identify and eliminate defects.
Ensures conformance to requirements, increases customer satisfaction, and reduces maintenance cost.
Helps in risk mitigation by identifying failure-prone areas early.
Common Causes of Software Failure
Error: A human mistake in code or design.
Bug/Defect: A flaw in the code or logic introduced by an error.
Fault: A synonym for defect; an internal condition that can cause a failure when executed.
Failure: Deviation of the software from expected behavior during execution.
The Economics of Software Testing
Early detection of defects is cheaper.
Cost of fixing bugs increases over time (requirements < design < testing < maintenance).
Quality assurance reduces rework, improves customer trust, and lowers Total Cost of Ownership (TCO).
Seven Fundamental Testing Principles
Testing shows presence of defects
Exhaustive testing is impossible
Early testing saves time and money
Defects cluster together
Pesticide paradox – repetitive tests lose effectiveness
Testing is context-dependent
Absence of errors ≠ useful software
Software Testing Life Cycle (STLC)
Requirement Analysis
Test Planning
Test Case Development
Environment Setup
Test Execution
Test Cycle Closure
Verification & Validation (V&V) Concepts
Verification: “Are we building the product right?” (Static)
Validation: “Are we building the right product?” (Dynamic)
V Model:
Development & testing activities run in parallel.
Each dev stage has a corresponding test phase.
W Model:
Emphasizes verification and validation at every stage, including test planning with development.
Agile Testing & Test-Driven Development (TDD)
Testing is continuous in Agile.
TDD (Test Driven Development):
Write test
Write minimal code to pass test
Refactor
Promotes better design and test coverage.
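A minimal red-green-refactor sketch with JUnit 5; the Calculator class and add() method are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    // Step 1 (red): write the test first; it fails because add() does not exist yet.
    @Test
    void addsTwoNumbers() {
        assertEquals(5, Calculator.add(2, 3));
    }
}

class Calculator {
    // Step 2 (green): write the minimal code that makes the test pass.
    static int add(int a, int b) {
        return a + b;
    }
    // Step 3 (refactor): clean up the design with the passing test as a safety net.
}
```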
Levels of Software Testing
Unit Testing:
Tests individual components/functions.
Done by developers.
Integration Testing:
Verifies interaction between modules.
Approaches: Top-down, Bottom-up, Big Bang.
System Testing:
Tests the entire system against requirements.
User Acceptance Testing (UAT):
Done by end-users to ensure product is acceptable.
Software Test Types
Functional Testing (Black-box):
Based on specifications; tests what the system does.
Non-functional Testing:
Tests performance, usability, security, etc.
Structural Testing (White-box):
Tests internal structure, logic, code paths.
Change-related Testing:
Confirmation (Re-testing): Ensure bug fix works.
Regression Testing: Ensure new changes don’t affect existing features.
Non-Functional Testing Categories
Performance Testing:
Load: Normal load behavior.
Stress: Beyond capacity behavior.
Usability Testing:
Checks UI friendliness and accessibility.
Maintainability:
Ease of fixing defects and updating software.
Portability:
Ability to operate on different platforms.
Security Testing:
Identifies vulnerabilities and protection strength.
Localization & Internationalization:
Localization: Region-specific UI and content.
Internationalization: Global adaptability (e.g., multi-language support).
Smoke Testing vs. Sanity Testing Comparison
| Type | Smoke Testing | Sanity Testing |
| --- | --- | --- |
| Purpose | Checks basic functionalities work | Checks new bug fixes or features |
| Level | Shallow, wide coverage | Deep, narrow coverage |
| Performed when | After build release | After bug fix or minor change |
| Automation | Often automated | Often manual |
Static Testing: Review Techniques
Static testing involves examining the software without executing the code. It focuses on preventing defects through reviews and analysis.
Review Process: Informal & Formal
Informal Reviews: Casual, no documentation required. Example: peer discussions.
Formal Reviews: Structured, documented, and include roles like moderator, scribe, reviewers. Example: Inspections.
Technical & Peer Reviews
Review by fellow developers or testers.
Aimed at identifying defects in documents, design, or code before testing.
Focuses on correctness, logic, and adherence to standards.
Walkthroughs in Software Review
Author presents the document/code to a group.
Participants ask questions and comment.
Objective: Gain understanding and identify defects early.
Software Inspection Process
Most formal type of review.
Includes planning, preparation, inspection meetings, rework, and follow-up.
Roles: Moderator, Author, Scribe, Reviewer.
Finds defects, improves quality, and ensures standard adherence.
Static Analysis Techniques
Analyzing code without executing it using tools or manual techniques.
Static Analysis Tools
Tools analyze source code or compiled code to find:
Syntax violations
Coding standard violations
Security vulnerabilities
Dead code, unused variables
Examples: SonarQube, Checkstyle, PMD.
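An illustrative snippet containing the kinds of issues such tools flag; the comments describe typical findings, not any specific tool's output:

```java
public class ReportFormatter {
    public String status(int code) {
        int retries = 3;            // typical finding: unused local variable
        if (code >= 0) {
            return "OK";
        } else if (code >= 0) {     // typical finding: duplicated condition
            return "unreachable";   // typical finding: dead code (branch can never run)
        }
        return "ERROR";
    }
}
```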
Black Box Test Design Techniques
Tests software functionality without knowing internal code structure.
Equivalence Partitioning
Divides input into valid and invalid partitions.
Test one value from each partition to reduce the number of test cases.
Example: For age input 1–100, partitions can be: <1 (invalid), 1–100 (valid), >100 (invalid).
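A sketch of one test per partition for this age example, using a hypothetical AgeValidator.isValidAge() that accepts 1–100 (JUnit 5):

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class AgeValidator {
    static boolean isValidAge(int age) { return age >= 1 && age <= 100; }
}

class AgePartitionTest {
    // One representative value per partition is enough under this technique.
    @Test void belowRangeRejected() { assertFalse(AgeValidator.isValidAge(-5));  } // partition < 1
    @Test void inRangeAccepted()    { assertTrue(AgeValidator.isValidAge(50));   } // partition 1-100
    @Test void aboveRangeRejected() { assertFalse(AgeValidator.isValidAge(150)); } // partition > 100
}
```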
Boundary Value Analysis
Tests at the edges of input ranges.
Example: For 1–100, test 0, 1, 2 and 99, 100, 101.
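Reusing the hypothetical AgeValidator from the previous sketch, the boundary tests are:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class AgeBoundaryTest {
    // Lower boundary of 1-100: just below (0), on (1), just above (2).
    @Test void lowerBoundary() {
        assertFalse(AgeValidator.isValidAge(0));
        assertTrue(AgeValidator.isValidAge(1));
        assertTrue(AgeValidator.isValidAge(2));
    }

    // Upper boundary of 1-100: just below (99), on (100), just above (101).
    @Test void upperBoundary() {
        assertTrue(AgeValidator.isValidAge(99));
        assertTrue(AgeValidator.isValidAge(100));
        assertFalse(AgeValidator.isValidAge(101));
    }
}
```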
Decision Table Testing
Uses rules and conditions to identify actions.
Useful when combinations of inputs lead to different outputs.
A decision table lists conditions and actions in rows and rules (combinations of conditions) in columns.
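An illustrative two-condition table for login, encoded in code with the rules as comments (names hypothetical):

```java
public class LoginDecision {
    // Decision table (conditions and actions in rows, rules in columns):
    //                       R1   R2   R3   R4
    // valid username?        T    T    F    F
    // valid password?        T    F    T    F
    // action: grant login    X    -    -    -
    // action: show error     -    X    X    X
    static String decide(boolean validUser, boolean validPassword) {
        if (validUser && validPassword) return "grant login"; // rule R1
        return "show error";                                  // rules R2-R4
    }
}
```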
State Transition Testing
Models the software behavior using states and transitions.
Useful when the system has various states (e.g., login status).
Tests valid and invalid transitions.
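A minimal sketch of a login state machine with one valid and one invalid transition; states and events are illustrative:

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class LoginStateTest {
    enum State { LOGGED_OUT, LOGGED_IN }

    // Transition function: returns the next state, or null when the
    // event is not allowed in the current state (invalid transition).
    static State next(State from, String event) {
        if (from == State.LOGGED_OUT && event.equals("loginOk")) return State.LOGGED_IN;
        if (from == State.LOGGED_IN  && event.equals("logout"))  return State.LOGGED_OUT;
        return null;
    }

    @Test void validTransitions() {
        assertEquals(State.LOGGED_IN,  next(State.LOGGED_OUT, "loginOk"));
        assertEquals(State.LOGGED_OUT, next(State.LOGGED_IN,  "logout"));
    }

    @Test void invalidTransitionRejected() {
        assertNull(next(State.LOGGED_IN, "loginOk")); // already logged in
    }
}
```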
White Box Test Design Techniques
Tests the internal logic and structure of the code.
Statement Coverage
Ensures every statement is executed at least once.
Branch & Decision Coverage
Ensures every decision (if/else) leads to true and false outcomes.
Path Coverage
Ensures all possible paths in the code are tested.
More comprehensive than branch coverage.
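One illustrative function contrasting the three criteria; the inputs named in the comments are examples only:

```java
public class CoverageExample {
    static String classify(int x, int y) {
        String result = "none";
        if (x > 0) result = "x positive";    // decision D1
        if (y > 0) result += ", y positive"; // decision D2
        return result;
    }
    // Statement coverage: classify(1, 1) alone executes every statement.
    // Branch coverage: both outcomes of D1 and D2 are needed,
    //   e.g. classify(1, 1) and classify(-1, -1).
    // Path coverage: all 4 combinations of D1 x D2 are needed:
    //   (1, 1), (1, -1), (-1, 1), (-1, -1).
}
```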
McCabe’s Cyclomatic Complexity
Measures the number of linearly independent paths in a program.
Formula: V(G) = E − N + 2P
where E = edges, N = nodes, P = connected components.
High complexity means more test cases are needed.
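A worked example on an illustrative method:

```java
public class ComplexityExample {
    // Two decision points (while, if), so V(G) = 2 + 1 = 3.
    // Equivalently, the control-flow graph has E = 7 edges, N = 6 nodes,
    // and P = 1 connected component: V(G) = 7 - 6 + 2(1) = 3.
    // At least 3 test cases are needed to cover the independent paths.
    static int countPositives(int[] values) {
        int count = 0;
        int i = 0;
        while (i < values.length) {
            if (values[i] > 0) {
                count++;
            }
            i++;
        }
        return count;
    }
}
```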
Data Flow Testing
Focuses on data variables and their flow:
Definitions (when value is assigned)
Uses (when value is used)
Detects anomalies like use-before-initialization.
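An annotated sketch of definitions and uses for a single variable (names hypothetical):

```java
public class DataFlowExample {
    static int finalPrice(int base, boolean isMember) {
        int discount = 0;       // def 1: discount is defined
        if (isMember) {
            discount = 10;      // def 2: redefined on the member path
        }
        return base - discount; // use: two def-use pairs to cover,
                                // (def 1 -> use) and (def 2 -> use)
    }
    // Anomaly: deleting "int discount = 0;" would leave the non-member path
    // using discount before any definition. Java's compiler rejects that for
    // locals, but data flow testing catches the same anomaly where compilers don't.
}
```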
Mutation Testing
Introduces small changes (mutants) in code.
Tests check if these changes are detected.
If not, test cases are weak.
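A sketch of a representative mutant and the test that kills it; the boundary mutation shown is the kind a Java mutation tool such as PIT generates:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class MutationExampleTest {
    static boolean isAdult(int age) { return age >= 18; }
    // A mutation tool could generate the mutant:
    //   return age > 18;   // ">=" mutated to ">"

    @Test void clearlyAdult() {
        assertTrue(isAdult(30)); // passes on original AND mutant: does not kill it
    }

    @Test void boundaryAdult() {
        assertTrue(isAdult(18)); // fails on the mutant: this test kills it
    }
}
```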
Experience-Based Test Design
Relies on the tester's knowledge and intuition.
Error Guessing
Based on experience and historical defects.
Example: guessing division by zero or null pointer.
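A sketch of tests derived from exactly those guesses; the methods under test are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class ErrorGuessingTest {
    static int divide(int a, int b) { return a / b; }
    static int lengthOf(String s)   { return s.length(); }

    // Guess 1: division by zero is a historically common defect.
    @Test void divisionByZero() {
        assertThrows(ArithmeticException.class, () -> divide(10, 0));
    }

    // Guess 2: null input is another classic failure source.
    @Test void nullInput() {
        assertThrows(NullPointerException.class, () -> lengthOf(null));
    }
}
```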
Exploratory Testing
Simultaneous learning, test design, and execution.
No predefined test cases; tester explores the system actively.
Useful when documentation is lacking.
Test Organization: Roles & Skills
Tester Role & Skills
Role: Execute test cases, report bugs, document results.
Skills: Attention to detail, basic coding knowledge, analytical thinking, tool familiarity.
Test Lead Role & Skills
Role: Manages test team, prepares test plans, allocates tasks.
Skills: Leadership, communication, defect tracking, estimation skills.
Test Manager Role & Skills
Role: Defines test strategy, oversees quality goals, ensures deadlines.
Skills: Risk management, project tracking, resource planning, stakeholder communication.
Test Planning: IEEE 829 Standard
A Test Plan defines the scope, approach, resources, and schedule of testing activities.
IEEE 829 Standard Test Plan Template Includes:
Test Plan Identifier
Introduction
Test Items
Features to be Tested
Features Not to be Tested
Approach
Item Pass/Fail Criteria
Suspension/Resumption Criteria
Test Deliverables
Testing Tasks
Environmental Needs
Responsibilities
Staffing and Training Needs
Schedule
Risks and Contingencies
Approvals
Test Process Monitoring & Control
Test Monitoring
Test Log (IEEE 829): Chronological record of test execution.
Includes date/time, test item, environment, tester, outcome.
Defect Density = Number of Defects / Size of Software (e.g., per KLOC)
Helps assess software quality.
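Example: 30 defects found in a 15 KLOC module give a defect density of 30 / 15 = 2 defects per KLOC.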
Reporting Test Status
IEEE 829 Test Summary Report Template includes:
Summary of testing activities
Variances from the plan
Defects discovered
Assessment of software
Recommendations
Test Control
Taking actions based on monitoring results:
Re-planning
Changing scope or resources
Updating schedules
Test Scenarios, Suites, & Cases
Test Scenario: High-level test objective.
E.g., “Verify user login functionality.”
Test Suite: A group of related test cases.
Test Case:
Positive Case: Valid input → expected output.
Negative Case: Invalid input → error handling.
IEEE 829 Test Case Specification Template:
Test Case Identifier
Description
Input Specification
Output Specification
Environmental Needs
Special Procedures
Dependencies
Configuration Management in Testing
Purpose: Track and control changes in software components.
Ensures correct version of test artifacts, test cases, and code.
Tools: Git, SVN, Azure DevOps
Benefits:
Version control
Reproducibility
Traceability
Coordination across teams
Risk Management in Testing
Project Risk
Related to test planning and management.
Examples: delay in delivery, budget issues, team attrition.
Product Risk
Related to the software quality.
Examples: Security flaw, functional failure, performance issue.
Risk-Based Testing:
Prioritize testing of high-risk areas first.
Incident & Defect Management
Defect Life Cycle
New
Assigned
Open
Fixed
Retest
Reopened (if failed)
Verified (if passed)
Closed
IEEE 829 Incident Report Template
Defect ID
Summary
Description
Steps to Reproduce
Severity & Priority
Detected by
Status
Associated Test Case
Practical Case Studies
Case Study 1: E-Commerce Test Plan
Test Items: Login, Cart, Checkout
Features Tested: Login security, cart calculation, payment
Risks: Payment gateway failure
Environment: Chrome, Firefox, Mobile
Case Study 2: Login Feature Test Cases
| Test Case ID | Input | Expected Output | Type |
| --- | --- | --- | --- |
| TC_001 | Valid user/pass | Login successful | Positive |
| TC_002 | Invalid pass | Error message displayed | Negative |
| TC_003 | Empty fields | Validation errors | Negative |
Types of Test Tools (CAST: Computer-Aided Software Testing)
Categories, Purposes, Benefits & Risks:
| Tool Type | Purpose | Benefits | Risks |
| --- | --- | --- | --- |
| Test Management Tools | Plan, schedule, track test activities | Centralized control, traceability | Initial setup cost, learning curve |
| Test Execution Tools | Automate test cases (e.g., Selenium) | Saves time, repeatable tests | Maintenance effort, false positives |
| Defect Tracking Tools | Report and manage bugs (e.g., JIRA) | Easy communication, accountability | Improper usage leads to confusion |
| Performance Testing Tools | Measure load and stress (e.g., JMeter) | Simulate users, find bottlenecks | Requires skill to interpret results |
| Unit Testing Tools | Automate developer-level tests | Early bug detection | May not test integration properly |
| ETL Testing Tools | Validate data in data warehousing | Ensures data accuracy | Complex setup |
| API Testing Tools | Test REST/SOAP APIs (e.g., Postman) | Easy to use, test automation | Not suited for UI or end-to-end tests |
Introducing Testing Tools to an Organization
Steps to successfully introduce a testing tool:
Assessment of Needs: Understand what the team needs (manual → automation, performance, etc.).
Tool Evaluation: Compare available tools based on features, compatibility, cost.
Pilot Project: Test the tool on a small project before full rollout.
Training & Skills Development: Conduct training for team members.
Integration with Existing Systems: Ensure compatibility with current CI/CD pipelines.
Monitoring & Feedback: Regularly check tool performance and adoption.
Popular Software Testing Tools
Selenium (WebDriver + TestNG)
Selenium WebDriver: Open-source tool for automating browser actions.
Supports multiple browsers and languages (Java, Python, etc.).
Tests web UI by simulating user actions.
TestNG: Testing framework used with Selenium.
Manages test suites, supports annotations, parallel testing, and detailed reporting.
Benefits: Free, open-source, cross-browser testing
Risks: No built-in reporting, manual effort to maintain scripts
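A minimal WebDriver + TestNG sketch; the URL and element IDs are hypothetical, and a ChromeDriver installation is assumed:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginUiTest {
    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new ChromeDriver(); // assumes chromedriver is on the PATH
    }

    @Test
    public void titleContainsLogin() {
        driver.get("https://example.com/login"); // hypothetical URL
        Assert.assertTrue(driver.getTitle().contains("Login"));
    }

    @Test
    public void rejectsInvalidPassword() {
        driver.get("https://example.com/login");
        driver.findElement(By.id("username")).sendKeys("user1"); // hypothetical IDs
        driver.findElement(By.id("password")).sendKeys("wrong");
        driver.findElement(By.id("submit")).click();
        Assert.assertTrue(driver.findElement(By.id("error")).isDisplayed());
    }

    @AfterClass
    public void tearDown() {
        driver.quit(); // close the browser after the suite
    }
}
```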
JMeter for Performance Testing
Purpose: Load and performance testing.
Features:
Simulates thousands of users.
Supports HTTP, FTP, JDBC, SOAP, etc.
Use Cases: Web app load testing, API performance analysis
Benefits: Open-source, graphical interface
Risks: High memory usage for complex tests, results need interpretation
Postman for API Testing
Purpose: API testing (manual & automated)
Features:
Send HTTP requests (GET, POST, PUT, DELETE)
Test APIs with different data sets
Generate documentation and test collections
Benefits: Easy to use, great for REST APIs
Risks: Not ideal for complex test flows or UI testing
ETL Testing Tools
Purpose: Verify data movement from source to destination in data warehouses.
Features:
Check for data integrity, accuracy, and completeness
Detect data loss, transformation logic issues
Popular Tools: Informatica, Talend, QuerySurge
Benefits: Automates large data set verification
Risks: Requires database knowledge, expensive tools
JIRA: Project & Defect Management
Purpose: Manage tasks, bugs, and testing in Agile teams.
Features:
Track defects and assign to developers
Create Scrum or Kanban boards
Link issues to test cases and user stories
Dashboards and reporting
Benefits: Customizable workflows, integrates with Confluence, Bitbucket
Risks: Costly for large teams, complex for beginners