Software Engineering: SDLC, SRS, and Project Estimation

1. Software Requirements Engineering and SRS

Definition: Requirements Engineering (RE) is the systematic process of eliciting, analyzing, specifying, validating, and managing software requirements.

Software Requirements Specification (SRS): A formal document stating what the system shall do (functional requirements) and the constraints or quality expectations (non-functional requirements).

The RE Process Model

Stakeholders → Elicitation → Analysis/Negotiation → Specification (SRS) → Validation → Requirements Management

Importance of SRS

  • Acts as a formal contract between the customer and the developer.
  • Provides a baseline for design, coding, testing, and acceptance.
  • Reduces ambiguity, rework, and scope creep.
  • Helps in the estimation of effort, cost, and schedule.
  • Supports maintenance by preserving the original intent.

Characteristics of a High-Quality SRS

  • Correct, complete, unambiguous, and consistent.
  • Verifiable/Testable: Each requirement must be measurable.
  • Traceable: Source → Design → Test Cases.
  • Modifiable: Easy to update without breaking the structure.
  • Ranked/Prioritized: Categorized as Must, Should, or Could.

Functional vs. Non-Functional Requirements

  • Functional Requirements (FR): Describe the services or behavior the system must provide. Example: “The system shall allow fund transfers only after OTP verification.”
  • Non-Functional Requirements (NFR): Quality constraints on operation. Example (Performance): “95% of requests must respond within 2 seconds.” Example (Security): “All sensitive data shall be protected in transit using TLS.”
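
An NFR like the performance example above is verifiable only if it is measurable. A minimal sketch of such a check in Python, using the nearest-rank percentile method (the function name and sample data are invented for illustration):

```python
import math

def p95(latencies_ms):
    """95th-percentile latency via the nearest-rank method.

    Backs a requirement such as "95% of requests must respond
    within 2 seconds" with a concrete, testable measurement.
    """
    ordered = sorted(latencies_ms)
    k = math.ceil(0.95 * len(ordered)) - 1  # nearest-rank index (0-based)
    return ordered[k]
```

A monitoring job could assert `p95(observed_latencies) <= 2000` (milliseconds) to check the stated NFR.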

Requirement Elicitation Techniques

  • Interviews (structured or unstructured).
  • Questionnaires.
  • Observation and shadowing.
  • Document analysis.
  • Workshops and JAD sessions.
  • Prototyping.

Typical SRS Contents Outline

  • Introduction: Purpose, scope, and definitions.
  • Overall Description: Users, environment, assumptions, and constraints.
  • System Features: Functional requirements.
  • External Interfaces: UI, hardware, software, and communication.
  • Non-Functional Requirements: Performance, security, and reliability.
  • Appendices: Glossary and references.

Conclusion: SRS and RE reduce misunderstandings and enable correct, testable, and controlled development.

2. Estimation, COCOMO, and Putnam Models

Definition: Software estimation predicts size, effort, schedule, and cost before development to support planning and control.

Estimation Flow

Project Scope → Size Estimate (LOC/FP) → Effort (Person-Months) → Schedule (Months) → Cost → Staffing

Key Estimation Metrics

  • Size: Lines of Code (LOC) or Function Points (FP).
  • Effort: Person-Months (PM).
  • Schedule: Calendar time.
  • Cost: Effort multiplied by the cost per PM.

Estimation Approaches

  • Expert Judgment / Delphi Technique.
  • Analogous Estimation (comparison with past projects).
  • Decomposition (WBS-based).
  • Algorithmic Models (COCOMO, Putnam).
  • Top-down and bottom-up estimation.

Basic COCOMO Model

  • Effort (PM) = a × (KLOC)^b
  • Time (Months) = c × (Effort)^d
  • People = Effort / Time

COCOMO Project Classes

  • Organic: Small, simple projects with stable requirements and an experienced team.
  • Semi-detached: Medium-sized projects with mixed team experience.
  • Embedded: Tight constraints and complex environments (often hardware or real-time).

COCOMO Levels

Basic COCOMO (size only) → Intermediate COCOMO (size + cost drivers) → Detailed COCOMO (phase-wise effort + cost drivers)

Typical Constants

  • Organic: a=2.4, b=1.05, c=2.5, d=0.38
  • Semi-detached: a=3.0, b=1.12, c=2.5, d=0.35
  • Embedded: a=3.6, b=1.20, c=2.5, d=0.32

Mini Worked Example (Organic, 10 KLOC)

  • Effort: 2.4 × (10)^1.05 ≈ 2.4 × 11.22 ≈ 26.9 PM.
  • Time: 2.5 × (26.9)^0.38 ≈ 2.5 × 3.49 ≈ 8.7 months.
  • People: 26.9 / 8.7 ≈ 3 persons.
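
The constants and the worked example above can be checked with a short Python helper (a direct transcription of the Basic COCOMO formulas; the function and table names are my own):

```python
# Basic COCOMO constants per project class: (a, b, c, d)
COCOMO = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class="organic"):
    """Return (effort in PM, time in months, average staffing)."""
    a, b, c, d = COCOMO[project_class]
    effort = a * kloc ** b      # Effort (PM) = a x (KLOC)^b
    time = c * effort ** d      # Time (Months) = c x (Effort)^d
    people = effort / time      # People = Effort / Time
    return effort, time, people
```

For the 10 KLOC organic project this reproduces the ≈26.9 PM, ≈8.7 months, and ≈3 persons figures above.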

Putnam Model Concept

  • Based on the Rayleigh staffing curve.
  • Relates size, effort, and delivery time.
  • Key Result: Compressing the schedule increases effort sharply (non-linear rise).
  • Used for realistic trade-offs between time and manpower.
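
The sharp effort rise can be made concrete with the Putnam software equation, Size = C_k × K^(1/3) × t_d^(4/3), solved for the total effort K (C_k is the technology constant; the numbers used below are illustrative only):

```python
def putnam_effort(size_loc, ck, td_years):
    """Total effort K (person-years) from the Putnam software equation.

    Size = Ck * K**(1/3) * td**(4/3)
    =>  K = (Size / (Ck * td**(4/3)))**3
    so effort grows as 1 / td**4 when the schedule is compressed.
    """
    return (size_loc / (ck * td_years ** (4 / 3))) ** 3
```

For example, compressing a 2-year schedule to 1.8 years (a 10% cut) multiplies effort by (1/0.9)^4 ≈ 1.52, a 52% increase.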

Rayleigh Staffing Curve

Staffing builds up from a small team at the start, peaks near the middle of the project, and tapers off toward the end, tracing a Rayleigh distribution over time.

Conclusion: Use historical data and algorithmic models to produce defendable estimates for cost, schedule, and staffing.

3. SDLC and Process Models

Waterfall Model

Definition: A linear sequential model where each phase must be completed before the next begins.

Flow: Requirements → System Design → Coding → Testing → Deployment → Maintenance

  • Merits: Simple to manage with clear milestones; strong documentation control.
  • Limitations: Changes are costly; working software appears late; not suitable for uncertain requirements.

Spiral Model

Definition: An iterative model with explicit risk analysis in each cycle.

Cycle: Objectives → Risk Analysis → Development/Test → Plan Next Iteration.

  • Key Points: Risk-driven; customer feedback is available each iteration; suitable for large, complex projects.

Prototyping Model

Definition: Building a quick prototype to clarify and validate requirements with users.

Flow: Initial Req → Quick Design → Build Prototype → User Feedback → Refine Req → Final System

  • Merits: Reduces ambiguity; improves user involvement.
  • Limitations: Prototype may be mistaken for the final product; can harm architecture if converted directly.

Cleanroom Software Engineering

Definition: A defect prevention approach using formal specification, correctness verification, and statistical quality control.

Process: Formal Spec → Incremental Design → Correctness Verification → Statistical Testing → Certification

  • Key Points: Focuses on preventing defects rather than debugging; uses formal methods and statistical testing.

Conclusion: The selection of a process model depends on requirement stability, project size, risk, and constraints.

4. Testing, Verification, and Validation

Definitions

  • Verification: “Are we building the product right?” (Process-oriented; reviews, inspections, static checks).
  • Validation: “Are we building the right product?” (Product-oriented; testing against requirements).

The V-Model

  • Requirements → Acceptance Testing
  • System Design → System Testing
  • Architecture/Module Design → Integration Testing
  • Coding → Unit Testing

Levels of Testing

  • Unit Testing: Individual modules or functions.
  • Integration Testing: Interfaces and interactions among modules.
  • System Testing: Complete system vs. SRS.
  • Acceptance Testing: Customer validation in real or simulated environments.

Black-Box vs. White-Box Testing

  • Black-Box: Specification-based; no code knowledge. Techniques: Equivalence partitioning, Boundary Value Analysis (BVA), decision tables.
  • White-Box: Code-based. Techniques: Statement, branch, and path coverage; Control Flow Graphs (CFG).

Alpha vs. Beta Testing

  • Alpha: Conducted at the developer site in a controlled environment by internal users.
  • Beta: Conducted at the customer site in a real environment by external users.

Boundary Value Analysis (BVA) Example

  • Rule: Test at boundaries: Min, Min+1, Nominal, Max-1, Max, and invalid values just outside.
  • Example (Range 1–100): 1, 2, 50, 99, 100; Invalid: 0, 101.
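
The rule above can be mechanized; a small helper (the function name is my own) that emits the same values as the worked range:

```python
def bva_values(min_v, max_v, nominal=None):
    """Boundary-value test inputs for an integer range [min_v, max_v].

    Returns (valid, invalid): Min, Min+1, Nominal, Max-1, Max,
    plus the two invalid values just outside the range.
    """
    if nominal is None:
        nominal = (min_v + max_v) // 2
    valid = [min_v, min_v + 1, nominal, max_v - 1, max_v]
    invalid = [min_v - 1, max_v + 1]
    return valid, invalid
```

`bva_values(1, 100)` yields the valid set {1, 2, 50, 99, 100} and the invalid set {0, 101} from the example.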

Conclusion: Verification reduces defects early, while validation confirms the software meets user needs.

5. Software Design Fundamentals

Definition: Software design converts the SRS into a blueprint describing architecture, modules, data, interfaces, and algorithms.

The Design Process

  1. Study SRS, constraints, and quality requirements.
  2. Architectural Design: Define major components and interfaces.
  3. Modular Design: Define internal logic (data structures, algorithms).
  4. Evaluate design quality (maintainability, security, testability).
  5. Prepare design documentation.

Architectural vs. Modular Design

  • Architectural Design: High-level structure and subsystem decomposition.
  • Modular Design: Detailed internal logic and interface definitions.

Layered Architecture Example

[Presentation/UI] → [Business/Service Layer] → [Data Access Layer] ↔ [Database]
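
A toy Python sketch of the layer discipline above (all class names and the in-memory "database" are invented): each layer depends only on the layer directly below it.

```python
class DataAccess:
    """Data Access Layer: the only layer that touches storage."""
    def __init__(self):
        self._db = {1: {"name": "Alice"}}  # stand-in for a real database
    def find_user(self, user_id):
        return self._db.get(user_id)

class UserService:
    """Business/Service Layer: rules and logic, no storage details."""
    def __init__(self, dao):
        self._dao = dao
    def display_name(self, user_id):
        user = self._dao.find_user(user_id)
        return user["name"].title() if user else "Unknown"

class Presentation:
    """Presentation/UI Layer: formatting only, no business rules."""
    def __init__(self, service):
        self._service = service
    def render(self, user_id):
        return f"User: {self._service.display_name(user_id)}"
```

Because the UI never calls DataAccess directly, the storage layer can be swapped (e.g., for a real database) without touching presentation code.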

Coupling and Cohesion

  • Coupling (Aim: Low): The degree of interdependence between modules. Types (Worst to Best): Content, Common, Control, Stamp, Data.
  • Cohesion (Aim: High): How strongly related the responsibilities inside a module are. Types (Worst to Best): Coincidental, Logical, Temporal, Procedural, Communicational, Sequential, Functional.
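
The two ends of the coupling spectrum can be shown in a few lines (a hypothetical tax-calculation example): common coupling shares mutable global state, while data coupling passes only the values a module needs.

```python
# Common coupling (near the worst end): modules read and write a shared
# global, so any module can silently affect every other one.
TAX_RATE = 0.18

def total_with_global(price):
    return price * (1 + TAX_RATE)

# Data coupling (the best end): the module receives exactly the data it
# needs through its parameters and nothing more.
def total_with_param(price, tax_rate):
    return price * (1 + tax_rate)
```

Both functions compute the same total, but only the second can be tested and reused without setting up global state first.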

Conclusion: Good design aims for high cohesion, low coupling, and maintainable architecture.

6. Software Quality and McCall’s Quality Factors

Definition: Software quality is the degree to which software satisfies stated requirements and user expectations.

McCall’s Quality Factors

  • Product Operation: Correctness, Reliability, Efficiency, Integrity, Usability.
  • Product Revision: Maintainability, Flexibility, Testability.
  • Product Transition: Portability, Reusability, Interoperability.

QA vs. QC

  • Quality Assurance (QA): Process-focused; prevents defects (standards, audits).
  • Quality Control (QC): Product-focused; detects defects (testing, inspections).

Conclusion: Quality models provide measurable factors for evaluation and continuous improvement.

7. Configuration Management and Change Control

Definition: SCM is the discipline of identifying, organizing, controlling, and tracking changes to software items.

  • Configuration Item (CI): Any item under control (SRS, code, test cases).
  • Baseline: An approved version of a set of CIs serving as a reference.

Core SCM Activities

  • Configuration Identification.
  • Configuration Control (Change requests).
  • Status Accounting (History).
  • Configuration Auditing.

Change Control Process

Change Request (CR) → Impact Analysis → CCB Decision → Implement → Test → New Baseline

Conclusion: SCM ensures controlled evolution, traceability, and reliable releases.

8. Reverse Engineering and Re-engineering

Definitions

  • Reverse Engineering: Analyzing an existing system to derive higher-level specifications from code.
  • Re-engineering: Transforming an existing system to improve maintainability or migrate to a new platform.

Key Differences

  • Reverse engineering focuses on understanding; re-engineering focuses on improvement.
  • Reverse engineering may not change code; re-engineering always does.

Conclusion: Re-engineering extends system life and reduces maintenance costs.

9. Data Flow Diagrams and Data Dictionaries

DFD Definition: A graphical representation of data movement through processes, stores, and entities.

DFD Levels

  • Level-0 (Context): The whole system as one process.
  • Level-1: Major sub-processes.
  • Level-2: Detailed decomposition.

Data Dictionary

A repository defining data elements, structures, and flows. Example: Member_ID = numeric(10).
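
A dictionary entry such as Member_ID = numeric(10) can back an automated check; a sketch (reading "numeric(10)" as exactly ten digits, which is my assumption):

```python
import re

# Derived from the data dictionary entry: Member_ID = numeric(10)
MEMBER_ID = re.compile(r"\d{10}")

def valid_member_id(value):
    """True if value is exactly ten decimal digits."""
    return bool(MEMBER_ID.fullmatch(value))
```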

Conclusion: DFDs explain data movement, while the data dictionary standardizes definitions.

10. Risk Management in Software Projects

Definition: Risk is an uncertain event that can cause loss in cost, schedule, scope, or quality.

Risk Management Process

Identify → Analyze (Probability/Impact) → Prioritize → Plan Response → Monitor & Control
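
The Analyze and Prioritize steps are commonly quantified as exposure = probability × impact; a minimal sketch (the example risk register is invented):

```python
def prioritize(risks):
    """Rank risks by exposure = probability x impact, highest first."""
    return sorted(risks,
                  key=lambda r: r["probability"] * r["impact"],
                  reverse=True)

# Invented example register: probability in [0, 1], impact on a 1-10 scale.
RISKS = [
    {"name": "key developer leaves", "probability": 0.3, "impact": 8},
    {"name": "requirements churn",   "probability": 0.7, "impact": 5},
    {"name": "server outage",        "probability": 0.1, "impact": 9},
]
```

Here requirements churn tops the list (exposure 3.5) even though the server outage has the higher impact, which is exactly the trade-off the probability/impact analysis captures.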

Handling Strategies

  • Avoid: Remove the risk.
  • Mitigate: Reduce probability or impact.
  • Transfer: Shift the risk to a third party (e.g., insurance or outsourcing).
  • Accept: Maintain a contingency buffer.

Conclusion: Risk management reduces uncertainty and improves delivery predictability.

11. Mobile and Web Software Engineering

Development Approaches

  • Native: Platform-specific; best performance.
  • Hybrid: Web tech inside a native container.
  • Cross-platform: Single codebase for multiple platforms.
  • PWA: Web apps with offline behavior.

Common Issues

  • Frequent requirement changes and short cycles.
  • Browser and device compatibility.
  • Security risks (SQLi, XSS, session hijacking).
  • Deployment complexity (CI/CD).

Conclusion: Mobile and web engineering demand rapid delivery with a focus on usability and security.