MCS-213 Software Engineering: Priority Topics Summary
MCS-213 (Software Engineering) – High-Yield Topics (Priority A + B)
Condensed exam-ready answers with diagrams, in a compact pointwise format.
1) SRS and Requirements Engineering (10 Marks)
Definition: Requirements Engineering (RE) is the process of eliciting, analyzing, specifying, validating, and managing requirements. The Software Requirements Specification (SRS) is the formal document stating what the system shall do (Functional Requirements, FR) and constraints/quality expectations (Non-Functional Requirements, NFR).
RE Process Model:
Stakeholders
|
v
Elicitation → Analysis/Negotiation → Specification (SRS) → Validation → Requirements Management
Importance of SRS:
- Contract between customer and developer.
- Baseline for design, coding, testing, and acceptance.
- Reduces ambiguity, rework, and scope creep.
- Supports effort, cost, and schedule estimation.
- Preserves system intent for maintenance.
Characteristics of a Good SRS:
- Correct, complete, unambiguous, and consistent.
- Verifiable/testable (measurable).
- Traceable (source → design → test).
- Modifiable and Prioritized (must/should/could).
FR vs NFR Examples:
- FR: System shall allow fund transfer only after OTP verification.
- NFR-Performance: 95% of requests respond within 2 seconds.
- NFR-Security: Sensitive data protected in transit using TLS.
- NFR-Availability: 99.5% monthly uptime.
Elicitation Techniques: Interviews, Questionnaires, Observation, Document Analysis, Workshops/JAD, Prototyping.
Typical SRS Contents: Introduction (purpose/scope), Overall description (users/environment), Functional requirements, External interfaces, Non-functional requirements, Appendices.
Conclusion: SRS and RE provide a testable baseline, reduce misunderstandings, and enable controlled change.
2) Estimation, COCOMO, and Putnam Model (10 Marks)
Definition: Software estimation predicts size, effort, schedule, and cost before development for planning and control.
Estimation Flow:
Project Scope → Size (LOC/FP) → Effort (PM) → Schedule (months) → Cost → Staffing
Approaches: Expert judgment/Delphi, Analogy (past projects), Decomposition (WBS), Algorithmic models (COCOMO/Putnam), Top-down and bottom-up.
Basic COCOMO Equations:
- Effort (PM) = a × (KLOC)^b
- Time (months) = c × (Effort)^d
- People = Effort / Time
COCOMO Project Classes:
- Organic: Small, simple, stable requirements, experienced team.
- Semi-detached: Medium size, mixed experience.
- Embedded: Tight constraints, complex environment.
COCOMO Levels:
Basic (size only)
↓
Intermediate (size + cost drivers)
↓
Detailed (phase-wise effort + cost drivers)
Mini Example (Organic, 10 KLOC; a=2.4, b=1.05, c=2.5, d=0.38): Effort = 2.4 × 10^1.05 ≈ 26.9 PM; Time = 2.5 × 26.9^0.38 ≈ 8.7 months; People ≈ 26.9 / 8.7 ≈ 3 persons.
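The mini example can be checked with a short sketch using the standard Basic COCOMO coefficient table:

```python
# Basic COCOMO estimation sketch (standard coefficient table: a, b, c, d).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, time in months, average staffing)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b      # Effort (PM) = a * (KLOC)^b
    time = c * effort ** d      # Time (months) = c * (Effort)^d
    people = effort / time      # average staffing level
    return effort, time, people

effort, time, people = basic_cocomo(10, "organic")
print(f"Effort ≈ {effort:.1f} PM, Time ≈ {time:.1f} months, People ≈ {people:.0f}")
```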
Putnam Model Concept: Based on the Rayleigh staffing curve; relates size, effort, and time. Schedule compression sharply increases effort.
Rayleigh Curve (Staffing vs. Time): Staffing rises to a peak (peak effort) and then declines as the project nears completion.
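The schedule-compression effect follows from the Putnam software equation, Size = Ck × K^(1/3) × td^(4/3), which solved for effort K gives K ∝ td^(-4). A sketch (the size and Ck values here are purely illustrative, not calibrated):

```python
# Putnam software equation: Size = Ck * K^(1/3) * td^(4/3)
# Solved for effort: K = (Size / (Ck * td^(4/3)))^3, so K is proportional to td^-4.

def putnam_effort(size_loc: float, ck: float, td_years: float) -> float:
    """Effort K (person-years) for a given size, technology constant Ck, and schedule td."""
    return (size_loc / (ck * td_years ** (4 / 3))) ** 3

# Illustrative numbers: 100 KLOC system, Ck = 5000.
normal = putnam_effort(100_000, 5000, td_years=2.0)
rushed = putnam_effort(100_000, 5000, td_years=1.0)
# Halving the schedule multiplies effort by 2^4 = 16.
print(f"Compression factor: {rushed / normal:.0f}x effort")
```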
Conclusion: COCOMO and Putnam provide defendable estimates for staffing, cost, and schedule.
3) SDLC / Process Models: Waterfall, Spiral, Prototyping, Cleanroom (10 Marks)
Waterfall Model
Definition: Linear sequential model where each phase completes before the next begins.
Flow: Requirements → System Design → Coding → Testing → Deployment → Maintenance.
Merits: Simple management, strong documentation, suitable for stable requirements.
Limitations: Changes are costly after requirements; working software appears late.
Spiral Model
Definition: Iterative model with explicit risk analysis in every cycle.
Cycle: Objectives → Risk Analysis → Development/Test → Plan Next Iteration (loop).
Points: Risk-driven; reduces major risks early; suitable for large/high-risk projects.
Prototyping Model
Definition: Build a quick prototype to clarify requirements with users.
Flow: Initial Req → Quick Design → Build Prototype → User Feedback → Refine Req → Final System.
Points: Best when requirements are unclear; risk is that the prototype might be treated as the final product.
Cleanroom Approach
Definition: Defect prevention approach using formal specification, correctness verification, and statistical quality control.
Process: Formal Spec → Incremental Design → Correctness Verification → Statistical Testing → Certification.
Points: Focuses on preventing defects rather than debugging; reliability assessed statistically.
Conclusion: Choose the model based on requirement stability, risk, size, and constraints.
4) Testing, Verification, and Validation (10 Marks)
Definitions:
- Verification: “Are we building the product right?” (Static checks, reviews).
- Validation: “Are we building the right product?” (Execution, testing against requirements).
V-Model Mapping:
Requirements → Acceptance Testing
System Design → System Testing
Architecture/Module Design → Integration Testing
Coding → Unit Testing
Levels of Testing: Unit, Integration, System, Acceptance.
Black-box vs White-box:
- Black-box: Specification-based (e.g., Equivalence Partitioning (EP), Boundary Value Analysis (BVA)).
- White-box: Code-based (e.g., statement/branch coverage, Control Flow Graph (CFG)).
Alpha vs Beta Testing:
- Alpha: Performed at the developer site with controlled users.
- Beta: Performed by external users in the real environment.
BVA Example: For a range 1–100, test values 1, 2, 50, 99, 100, and invalid values 0, 101.
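The BVA example can be written as a small test sketch (`in_range` is a hypothetical validator standing in for the unit under test):

```python
def in_range(value: int, low: int = 1, high: int = 100) -> bool:
    """Hypothetical unit under test: accepts values in the range [1, 100]."""
    return low <= value <= high

# Boundary Value Analysis: values at and just inside the boundaries, plus a nominal one,
# and invalid values just outside each boundary.
valid = [1, 2, 50, 99, 100]
invalid = [0, 101]

assert all(in_range(v) for v in valid)
assert not any(in_range(v) for v in invalid)
print("All BVA cases pass")
```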
Conclusion: Verification and Validation ensure conformance to the SRS and fitness for user needs.
5) Software Design: Architectural vs Modular; Coupling and Cohesion (10 Marks)
Definition: Software design converts the SRS into a blueprint describing architecture, modules, interfaces, data, and algorithms.
Design Process Steps: Study SRS → Architectural design (subsystems) → Modular/detailed design (module internals) → Evaluate quality → Prepare documentation.
Architectural vs Modular Design:
- Architectural: High-level structure and connections between subsystems. Drives global qualities (e.g., performance).
- Modular: Internal logic of each module and precise interfaces. Drives clarity and correctness.
Sample Layered Architecture:
[Presentation/UI]
|
v
[Business/Service Layer] → [Auth] [Reporting]
|
v
[Data Access Layer] ↔ [Database]
Structure Chart Example (Library System): Shows hierarchy: Library System controls Search, Issue, Return, where Issue might control Fine Calculation.
Coupling (Aim Low): Degree of interdependence between modules. (Worst to Best: Content, Common, Control, Stamp, Data).
Cohesion (Aim High): Degree of relatedness within a module. (Worst to Best: Coincidental, Logical, Temporal, Procedural, Communicational, Sequential, Functional).
Example: An Authentication module handling only login/OTP/session shows high functional cohesion. Passing only required parameters improves data coupling.
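The coupling contrast can be sketched in code (names are illustrative): the first version reads shared global state (common coupling); the second receives only the parameters it needs (data coupling):

```python
# Common coupling (avoid): modules communicate through shared global state.
session = {"user": "alice", "otp_verified": False}  # global touched by many modules

def transfer_funds_global(amount: float) -> bool:
    # Hidden dependency: behavior changes whenever any module mutates `session`.
    return session["otp_verified"] and amount > 0

# Data coupling (prefer): pass only the data the module actually needs.
def transfer_funds(amount: float, otp_verified: bool) -> bool:
    # Explicit interface: easy to test and reason about in isolation.
    return otp_verified and amount > 0

assert transfer_funds(100.0, otp_verified=True)
assert not transfer_funds(100.0, otp_verified=False)
```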
Conclusion: Good design targets high cohesion, low coupling, and clear interfaces.
6) Software Quality and McCall’s Quality Factors (10 Marks)
Definition: Software quality is the degree to which software satisfies stated requirements and user expectations.
Key Quality Attributes: Correctness, Reliability, Efficiency, Integrity/Security, Usability, Maintainability, Testability, Portability, Reusability.
McCall’s Factors Grouped:
- Product Operation: Correctness, reliability, efficiency, integrity, usability.
- Product Revision: Maintainability, flexibility, testability.
- Product Transition: Portability, reusability, interoperability.
QA vs QC:
- Quality Assurance (QA): Process-focused; prevents defects (e.g., audits, standards enforcement).
- Quality Control (QC): Product-focused; detects defects (e.g., testing, inspections).
Conclusion: Quality models provide measurable factors for evaluation and improvement.
7) Software Configuration Management (SCM) and Change Control (10 Marks)
Definition: SCM identifies, organizes, controls, and tracks changes to software items across the lifecycle.
Key Terms:
- Configuration Item (CI): Any controlled item (SRS, code, test cases, build scripts).
- Baseline: An approved version of CIs used as a reference point for future work.
SCM Activities: Identification (versioning), Control (change approval), Status Accounting (history tracking), Auditing (verification).
Change Control Process:
Change Request → Impact Analysis → CCB Decision → Implement → Test → Update Docs → New Baseline
Version Control (Branch/Merge Example): Development on a feature branch (E, F) diverges from the main line (A, B, C, D) before being merged back (G).
Example: A Change Request (CR) to enforce password length 12 impacts UI validation, authentication logic, database schema, and test cases.
Conclusion: SCM ensures controlled evolution, traceability, repeatable builds, and reliable releases.
8) Reverse Engineering and Re-engineering (10 Marks)
Definitions:
- Reverse Engineering (RE): Derive design or specification knowledge from existing code or artifacts (understanding).
- Re-engineering (RE-E): Transform or improve an existing system for maintainability, performance, or migration (improvement).
Differences: RE focuses on understanding (output: recovered design); RE-E focuses on improvement (output: improved system, often involving code changes).
Re-engineering Process:
Inventory Analysis → Document Restructuring → Reverse Engineering → Code Restructuring → Data Restructuring → Forward Engineering
Example: Understanding legacy payroll modules (RE), then restructuring the code and migrating the database to a modern stack (RE-E).
Conclusion: Re-engineering extends system life and reduces maintenance costs while preserving essential business logic.
9) DFD and Data Dictionary (10 Marks)
DFD Definition: Data Flow Diagram (DFD) shows how data moves through processes, data stores, and external entities.
DFD Symbols: Process, Data Flow, Data Store, External Entity.
Levels: Level-0 (Context), Level-1 (major processes), Level-2 (detailed decomposition).
Context DFD Example (Library System):
[Member] ↔ Issue/Return Req ↔ (Library System) ↔ Book/Receipt ↔ [Member]
[Librarian] ↔ Update Books/Members ↔ (Library System)
(Library System) ↔ ||Book/Member Data Store||
Data Dictionary Definition: A repository defining data elements, structures, flows, and stores used in the system.
Sample Data Dictionary Entries:
- Member_ID = numeric(10)
- Book_ID = alphanumeric(12)
- Issue_Record = {Member_ID, Book_ID, Issue_Date, Due_Date}
- Fine = max(0, Days_Late × Rate)
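The dictionary entries above can be expressed as a typed record (field names taken from the entries; the Python types are a sketch of the declared formats):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IssueRecord:
    # Issue_Record = {Member_ID, Book_ID, Issue_Date, Due_Date}
    member_id: int    # Member_ID = numeric(10)
    book_id: str      # Book_ID = alphanumeric(12)
    issue_date: date
    due_date: date

def fine(days_late: int, rate: float) -> float:
    """Fine = max(0, Days_Late x Rate): no negative fine for early returns."""
    return max(0.0, days_late * rate)

rec = IssueRecord(1234567890, "BK2024000017", date(2024, 1, 5), date(2024, 1, 19))
print(fine(5, 2.0))   # 5 days late at rate 2.0
print(fine(-3, 2.0))  # returned early: no fine
```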
Conclusion: DFD explains data movement; the data dictionary standardizes and documents data definitions.
10) Risk Management in Software Projects (10 Marks)
Definition: Risk is an uncertain event that may cause loss in cost, schedule, scope, or quality.
Risk Management Process:
Identify → Analyze (probability/impact) → Prioritize → Plan response → Monitor & Control
Risk Exposure (RE): RE = Probability × Loss (Impact).
Response Strategies: Avoid, Mitigate, Transfer, Accept.
Risk Register Sample Entry:
Risk: Requirement changes late
Prob/Impact: High / High
Mitigation: Baseline SRS + strict change control + prototype critical screens.
Contingency: Add schedule buffer and prioritize mandatory features.
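The RE = Probability × Loss formula can be used to rank a risk register; the probabilities and loss figures below are illustrative only:

```python
# Risk Exposure: RE = Probability x Loss (loss in cost units).
risks = [
    ("Requirement changes late", 0.7, 50_000),
    ("Key developer leaves",     0.3, 80_000),
    ("Third-party API delay",    0.5, 20_000),
]

# Prioritize: highest exposure first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, prob, loss in ranked:
    print(f"{name}: RE = {prob * loss:,.0f}")
```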
Conclusion: Risk management reduces uncertainty and improves predictability and quality.
11) Mobile/Web-Oriented Software Engineering (10 Marks)
Mobile Development Approaches:
- Native: Best performance and device access.
- Hybrid: Web content within a native container.
- Cross-platform: Single codebase for multiple platforms.
- PWA (Progressive Web Apps): Offline caching and install-like behavior.
Typical Web Architecture:
Browser/UI ↔ Web Server ↔ Application Server ↔ Database
Common Web/Mobile Issues:
- Frequent requirement changes and short cycles.
- Browser/device compatibility challenges.
- Performance, scalability, and security (input validation, session management).
- Deployment complexity (requiring CI/CD).
Conclusion: Mobile/web engineering demands rapid delivery with strong focus on usability, performance, security, and continuous deployment.
