Software Engineering Process Models and Concepts Summary

Q1: Generic Process Model Phases

  • Communication – Elicit & analyze requirements; produce the SRS.
  • Planning – Feasibility, estimates, schedule, risk plan.
  • Modeling/Design – Architecture + detailed design (UML, DB).
  • Construction – Coding + unit testing.
  • Integration & Testing – Integration, system, acceptance tests.
  • Deployment – Install, configure, data migration, training.
  • Maintenance – Corrective, adaptive, perfective, preventive.

Agile & Cost of Change

Short iterations, customer in loop, prioritized backlog, evolving design + refactoring, heavy automation (CI + tests) → changes cheap at any time.

Q2: Requirements and Behaviour Modelling

Elicitation Steps

  • Identify stakeholders & goals.
  • Define scope & information needed.
  • Choose techniques.
  • Conduct sessions (interview, workshop, observe, prototype).
  • Document (SRS, use‑cases).
  • Validate & resolve conflicts.
  • Manage & trace changes.

Behaviour Modelling Role

  • Shows how system reacts over time.
  • Clarifies flows, states, concurrency, exceptions.
  • Main diagrams: use‑case, sequence, activity, state machine.
  • Helps find missing/conflicting requirements + derive test cases.
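A state machine such as the ones listed above can be captured directly in code; here is a minimal sketch for a hypothetical "Order" lifecycle (the states and events are invented for illustration):

```python
# Minimal state-machine sketch for a hypothetical "Order" lifecycle.
# (state, event) -> next state; anything absent is an invalid transition.
TRANSITIONS = {
    ("created", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
    ("created", "cancel"): "cancelled",
    ("paid", "cancel"): "cancelled",
}

def next_state(state, event):
    """Return the next state, or raise if the event is invalid in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
```

Walking the transition table like this also yields test cases directly: every entry is a required behaviour, and every missing entry is a forbidden one.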

Q3: Risk & Project Management

Risk Management Steps

  • Identify risks (tech, people, business, external).
  • Assess: probability × impact → priority (risk exposure).
  • Plan: avoid / mitigate / transfer / accept + contingency.
  • Monitor: review, update, track indicators.
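The assess step is often quantified as risk exposure = probability × impact; a minimal sketch (the 0–1 probability and 1–10 impact scales are assumptions):

```python
def risk_exposure(probability, impact):
    """Risk exposure = probability (0..1) * impact (assumed 1-10 scale)."""
    return probability * impact

def prioritize(risks):
    """Sort (name, probability, impact) tuples by descending exposure."""
    return sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
```

For example, `prioritize([("key dev leaves", 0.3, 8), ("DB vendor change", 0.1, 9), ("scope creep", 0.6, 5)])` ranks scope creep first (exposure 3.0) despite its lower impact, which is exactly the point of the assess step.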

Software Project Management – Why

  • Plan and track time, cost, scope, quality.
  • Coordinate people & communication.
  • Manage risks & changes (scope creep).
  • Ensure stakeholder satisfaction + process improvement.

Q4: Design / OOD

OOD & UML

  • OOD: design using classes, objects, encapsulation, inheritance, polymorphism.
  • Main UML structure diagrams:
    • Class (core), object, package, component, deployment.
  • Behavioural: use‑case, sequence, activity, state.

Design Concepts

  • Abstraction – Show essentials, hide details.
  • Modularity – Split system into small, cohesive, loosely coupled modules.
  • Refactoring – Improve internal code structure, same behaviour.

Q5: Testing

Core Principles

  • Testing shows presence, not absence, of bugs.
  • Exhaustive testing impossible ⇒ select tests by risk.
  • Test early to save cost.
  • Defect clustering (few modules, most bugs).
  • Pesticide paradox: refresh test cases.
  • Context dependent (domain‑specific).
  • Absence‑of‑errors fallacy: right product matters.
  • Independence + traceability to requirements.

Web Testing Types

  • Functional (links, forms, logic).
  • Usability (UI, navigation).
  • Compatibility (browsers, devices).
  • Performance (load, stress).
  • Security (OWASP issues).
  • DB testing, localization, accessibility, regression.

Q6: Re‑engineering & Maintenance

Software Re‑engineering Steps

  • Inventory analysis (choose systems).
  • Reverse engineering (recover design/requirements).
  • Program restructuring (clean/refactor code).
  • Data re‑engineering (clean/migrate DB).
  • Forward engineering (new design/impl).
  • Integration + regression testing; deploy.

Maintenance Types

  • Corrective – Fix bugs.
  • Adaptive – Fit new env/regs/API.
  • Perfective – New features, better perf/UX.
  • Preventive – Refactor, upgrades to avoid future issues.

Compare Waterfall Model and V‑Model with Example

Waterfall and the V‑model are both linear, phase‑driven life cycle models, but they differ in how they treat testing. In the waterfall model the phases flow sequentially (requirements, design, implementation, testing, deployment, maintenance), with testing treated as a single phase that comes after implementation. Defects that surface late are costly to fix because returning to earlier phases is difficult.

The V‑model extends waterfall by explicitly associating each development phase with a corresponding testing phase, forming a “V” shape. On the left side are requirements, system design and detailed design; on the right side are acceptance testing, system testing and integration/unit testing matched to those phases. This encourages early test‑planning and validation of each level.

For example, in developing a banking transaction system, the waterfall model would design and code the whole system, then test it at the end. In the V‑model, acceptance tests are derived from user requirements, system tests from system design and integration tests from detailed design, so each level is verified systematically.

Process of Building a Use Case and Capturing Functional Requirements

To build a use case, first identify the actors, that is, external entities that interact with the system such as users, other systems or hardware devices. Then, for each actor, determine the goals they want to achieve with the system, which become candidate use cases like “Place Order” or “Withdraw Cash”. Each use case is then described using a structured template containing preconditions, main flow of events, alternative flows, postconditions and any business rules.

Use‑case diagrams are then drawn to show the relationships between actors and use cases, including «include» and «extend» relationships. Use cases are refined iteratively based on stakeholder feedback until they are complete and consistent.

Use cases help capture functional requirements because they describe the system behavior from the user’s perspective as observable actions and responses. Each step of the flow corresponds to one or more functional requirements, so the collection of all use cases provides a clear, testable specification of what the system must do in different scenarios.
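To make the step-to-requirement mapping concrete, here is a sketch for the "Withdraw Cash" flows mentioned above (the account model and rules are invented for illustration):

```python
def withdraw(balance, amount):
    """Main flow: dispense cash and debit the account.
    Alternative flow: insufficient funds -> reject, balance unchanged."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        return balance, "insufficient funds"   # alternative flow
    return balance - amount, "dispensed"       # main flow

# Each use-case flow yields a functional requirement and hence a test:
# main flow        -> withdraw(100, 40) debits the account to 60;
# alternative flow -> withdraw(100, 500) leaves the balance intact.
```

This is why use cases are directly testable: each flow, main or alternative, becomes at least one acceptance-level test case.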

Analysis Model: Bridging Requirements and Design

An analysis model is a set of structured representations that describe what the system must do without deciding how it will be implemented. It typically includes elements such as use‑case models, data models (like entity‑relationship diagrams), class diagrams, state diagrams and interaction diagrams. These models focus on the problem domain concepts, relationships and behaviors.

The analysis model is important because it acts as a bridge between informal requirements and detailed design. Requirements are often written in natural language and can be ambiguous or incomplete. By transforming them into formal or semi‑formal models, ambiguities are removed and missing information is revealed.

Designers then use the analysis model as a foundation to make architectural and design decisions, such as defining subsystems, interfaces and data structures. This reduces the risk that the implemented system diverges from user needs, since the analysis model provides a validated and traceable link from requirements to final design.

Software Project Management Definition and Essential Role

Software project management is the application of knowledge, skills, tools and techniques to plan, execute, monitor, control and close software projects. It involves activities such as scope management, time and cost estimation, scheduling, risk management, quality assurance, communication and resource allocation. The goal is to deliver software that meets requirements within budget and schedule constraints.

It is essential for successful software development because software projects are complex, involve many stakeholders and face changing requirements and technological risks. Without proper project management, projects can suffer from scope creep, schedule slippage, cost overruns and poor quality outcomes.

Effective project management provides clear objectives, realistic plans and continuous tracking of progress so that problems are detected early and corrective actions can be taken. It also ensures better coordination among team members, improved communication with customers and systematic handling of risks, leading to higher chances of project success.

Factors Influencing Project Approach Selection

The selection of a project approach (such as waterfall, incremental, iterative, agile) depends on several factors. These include stability and clarity of requirements, project size and complexity, risk level, customer involvement, regulatory constraints and organizational culture. When requirements are well understood and unlikely to change, a predictive approach like waterfall can be considered.

If requirements are expected to evolve, or early delivery of partial functionality is valuable, incremental or agile approaches are more suitable. Team size and experience also matter: small, co‑located and skilled teams often work well with agile methods, while large distributed teams may prefer more formal processes.

A traditional waterfall approach emphasizes upfront documentation and sequential phases, which improves control but handles change poorly. Incremental models deliver the system in small increments, allowing feedback and partial deployment. Agile approaches like Scrum focus on short iterations, continuous customer collaboration and embracing change, which can increase customer satisfaction but require strong communication and disciplined teams.

Q4(a): Design Concepts: Abstraction, Modularity, Refactoring

Abstraction is the design concept of focusing on essential characteristics of an object or system while hiding irrelevant details. In software, abstraction can be data abstraction (e.g., defining an abstract data type) or procedural abstraction (e.g., specifying what a function does without exposing its internal steps). Abstraction helps manage complexity by enabling developers to reason about high‑level behavior rather than low‑level implementation.
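Data abstraction can be sketched with Python's `abc` module: the abstract class fixes *what* a stack does, hiding *how* (the list-backed implementation is one arbitrary choice):

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """Abstract data type: only the essential operations are exposed."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    """One concrete implementation; clients depend only on Stack."""
    def __init__(self):
        self._items = []          # hidden representation
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
```

Clients written against `Stack` keep working if `ListStack` is later swapped for, say, a linked-list version, which is the complexity-management payoff described above.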

Modularity is the division of a software system into separate modules or components, each responsible for a specific set of related tasks. Good modules have high cohesion (their elements are closely related) and low coupling (minimal dependencies on other modules). Modularity supports easier development, testing, maintenance and reuse because changes in one module typically have limited impact on others.

Refactoring is the process of improving the internal structure of existing code without changing its external behavior. Common refactoring activities include renaming variables for clarity, extracting methods, removing code duplication and reorganizing classes. Refactoring increases code readability, maintainability and extensibility, and often reduces defect rates by simplifying complex logic.
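An extract-method refactoring in miniature (the invoice/tax example is invented): behaviour is unchanged, but the duplicated, unclear expression is removed:

```python
# Before: the subtotal expression is duplicated and the 0.2 is unexplained.
def invoice_total_before(items):
    return sum(q * p for q, p in items) + sum(q * p for q, p in items) * 0.2

# After: the repeated expression is extracted into a named helper.
TAX_RATE = 0.2

def subtotal(items):
    return sum(qty * price for qty, price in items)

def invoice_total(items):
    return subtotal(items) * (1 + TAX_RATE)
```

Both versions return the same totals, which is the defining property of a refactoring: same external behaviour, better internal structure.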

Q4(b): Design Concepts: Pattern, Modularity, Refinement

A design pattern is a general reusable solution to a commonly occurring design problem in a particular context. It is not finished code but a template describing classes, objects and interactions that solve a design issue, such as the Singleton, Observer or Factory patterns. Using patterns improves design quality, provides a shared vocabulary among developers and leverages proven solutions.
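As one concrete instance, a minimal Observer sketch (the class and method names are arbitrary choices for this illustration):

```python
class Subject:
    """Observer pattern: the subject notifies registered observers of events
    without knowing anything about them beyond their callable interface."""
    def __init__(self):
        self._observers = []

    def attach(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)
```

The subject stays decoupled from its observers: new listeners can be attached without modifying `Subject`, which is the reuse benefit patterns are meant to deliver.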

Modularity, in this context, again refers to decomposing a system into loosely coupled, highly cohesive modules. Each module presents a clear interface and hides its internal details, enabling independent development and replacement. Modularity supports parallel work by different teams and makes large systems more understandable.

Refinement (or stepwise refinement) is the process of starting with a high‑level description of a system and repeatedly decomposing it into more detailed designs. At each step, designers add more specifics about algorithms, data structures and interfaces while preserving the correctness of the higher level description. Refinement leads from abstract specifications to concrete implementations in a systematic way.
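Stepwise refinement in miniature: a high-level "produce report" description is decomposed into named sub-steps, each then refined into a concrete definition (the data and steps are illustrative):

```python
# Level 1: abstract description of the task in terms of sub-steps.
def produce_report(records):
    cleaned = clean(records)        # refined below
    summary = summarize(cleaned)    # refined below
    return format_report(summary)   # refined below

# Level 2: each sub-step refined into a concrete definition.
def clean(records):
    return [r for r in records if r is not None]

def summarize(records):
    return {"count": len(records), "total": sum(records)}

def format_report(summary):
    return f"{summary['count']} items, total {summary['total']}"
```

Each level preserves the meaning of the level above it, so the abstract specification and the concrete implementation stay consistent throughout.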

Q5(a): Difference between White Box and Black Box Testing

White box testing, also called structural or glass‑box testing, examines the internal structure and implementation of the code. Test cases are designed using knowledge of control flow, loops and paths, with techniques such as statement coverage, branch coverage and path coverage. This type of testing is usually performed by developers and aims to verify that the code logic is correctly implemented.
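White-box test cases come from the code's structure; for this hypothetical function, branch coverage demands one test per outcome of the condition:

```python
def classify(score):
    """Hypothetical function under test: two branches to cover."""
    if score >= 50:          # branch 1: taken when score passes
        return "pass"
    return "fail"            # branch 2: taken otherwise

# Branch coverage: one test drives each branch.
# classify(75) exercises branch 1; classify(10) exercises branch 2.
```

With both branches executed, statement coverage is also 100% here; path-coverage techniques extend the same idea to combinations of branches.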

Black box testing, also called functional testing, focuses only on the external behavior of the software without considering internal code details. Testers design cases based on requirements and specifications, using techniques like equivalence partitioning, boundary value analysis and cause‑effect graphs. This testing is often performed by independent testers or QA engineers.
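Black-box boundary value analysis, by contrast, works purely from the specification: for a hypothetical pass mark of 50, the interesting inputs sit just below, at, and just above the boundary:

```python
def is_pass(score):
    """Specification (assumed): scores of 50 or more pass."""
    return score >= 50

# Boundary value analysis: test 49, 50, 51 rather than arbitrary values,
# because off-by-one defects cluster at specification boundaries.
boundary_cases = {49: False, 50: True, 51: True}
```

Note that these cases were chosen without looking at the implementation at all, which is exactly what distinguishes them from the white-box tests.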

The key difference is that white box tests how something is done internally, while black box tests what the system does from the user’s or specification’s point of view. In practice, both techniques are complementary and used together to achieve better test coverage.

Q5(b): What is Software Testing? Explain Different Types

Software testing is the process of executing a program with the intent of finding defects and verifying that the software meets specified requirements. It provides information about the quality of the product and helps ensure reliability, performance and security before release. Testing can be done at various levels and using different strategies.

Major types include unit testing, where individual components or functions are tested in isolation; integration testing, which checks the interaction between combined modules; and system testing, which validates the complete integrated system against requirements. Acceptance testing is performed with user involvement to decide whether the software is ready for deployment.

There are also specialized types such as regression testing (retesting after changes), performance testing (measuring speed, scalability and resource usage), usability testing (evaluating user friendliness) and security testing (checking vulnerabilities). A good test strategy selects appropriate types based on project risk and quality goals.
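A unit-level example with Python's `unittest` (the function under test is invented); re-running the same assertions after every change is what turns a unit suite into a regression suite:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` from the project directory; integration and system tests follow the same arrange-act-assert shape, just with larger units under test.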

Q6(a): The 4 Types of Maintenance in Software Engineering

The four main types of software maintenance are corrective, adaptive, perfective and preventive maintenance. Corrective maintenance involves fixing faults discovered after the software has been deployed, such as logic errors, design defects or documentation mistakes. Its aim is to restore correct behavior when failures occur in operation.

Adaptive maintenance modifies the software so it remains usable in a changed environment, for example when the operating system, hardware platform, database or external interfaces are upgraded. This ensures that the system continues to work under new conditions. Perfective maintenance enhances existing functionality or adds new features to improve performance, usability or other quality attributes based on user feedback.

Preventive maintenance focuses on making changes to reduce the probability of future failures, such as code refactoring, updating libraries and improving documentation. It is done even when the software currently works correctly, in order to increase maintainability and reliability over the long term. Together, these four types cover most post‑delivery changes that a software system undergoes during its life cycle.

Q6(b): Difference between Software Maintenance and Reengineering

Software maintenance is the ongoing process of modifying a software system after its initial delivery to correct faults, adapt it to changes in the environment, improve performance or add new features. It deals with incremental changes and routine updates that keep the system useful and reliable over time. Maintenance activities typically operate on the existing structure of the system with limited redesign.

Software reengineering, on the other hand, is a more radical process that examines and transforms an existing system to improve its structure, understanding or quality without changing its overall functionality. It may involve reverse engineering, restructuring, data reengineering and forward engineering to migrate to a new architecture or technology. Reengineering is chosen when a system has become too difficult or expensive to maintain in its current form.

Maintenance is important because software has a long operational life and user needs, business rules, technologies and environments continually change. Without regular maintenance, software degrades in quality, becomes insecure or incompatible with new platforms and may no longer support critical business processes. Effective maintenance protects the investment made in developing the system and ensures continued value to users.