Web Application Security Testing and Secure SDLC Practices

PART 1 – FOUNDATIONS (Week 9)

  • SDLC Phases (exact order): Planning → Requirements → Architecture & Design → Coding → Testing → Release → Maintenance. Definition: SDLC (Software Development Life Cycle): a structured framework for building and maintaining software to ensure quality and efficiency. (How: sequential or iterative like Agile; why: prevents chaos and integrates security early to avoid costly rework.)
  • Shift-Left Principle: Security from commit #1 → 60–100× cheaper than fixing in production. Definition: Shift-Left: embedding security practices earlier in the SDLC. (How: run SAST on code commits; why: bugs found early (e.g., design phase) cost less; post-release fixes disrupt users and reputation, as in the Equifax breach.)
  • V-Model – Full Mapping (memorize this): Requirements ↔ Acceptance Testing; System Design ↔ System Testing; Architecture Design ↔ Integration Testing; Detailed Design ↔ Component Testing; Code ↔ Unit Testing. Definition: V-Model: an SDLC variant emphasizing verification and validation, with development phases mirroring test phases. (How: left side builds down, right verifies up; why: ensures each development step has a corresponding test, catching mismatches early – e.g., requirements validated in acceptance testing to confirm the “right product”.)
  • Verification: “Are we building it right?” → static techniques, specifications, code reviews, SAST. Definition: Verification: confirms the product adheres to specifications without execution. (How: static analysis scans code for compliance; why: prevents process errors; static methods are efficient for large codebases.)
  • Validation: “Are we building the right thing?” → dynamic testing, UAT, penetration testing. Definition: Validation: ensures the product meets user needs via execution. (How: run the app in real scenarios; why: catches usability and security gaps that specs miss; dynamic tests simulate attacks for realism.)
  • Vulnerability Assessment (VA): automated, broad, finds known CVEs, no exploitation. Definition: VA: a systematic scan identifying known vulnerabilities. (How: tools like Nessus sweep networks and applications; why: broad coverage for compliance; automated for speed, but misses zero-days — use quarterly for a baseline.)
  • Penetration Testing (Pen Test): manual plus automated, goal = exploit and prove impact, requires ROE (Rules of Engagement). Definition: Pen Test: a simulated cyberattack to uncover and exploit flaws. (How: ethical hackers use Metasploit for exploit chains; ROE defines scope to avoid damage; why: proves real risk (e.g., data breach simulation), beyond VA’s list — annual for high-stakes applications.)
  • Threat Modeling Tools: STRIDE (Microsoft), DREAD (risk rating), MITRE ATT&CK, PASTA. STRIDE = Spoofing • Tampering • Repudiation • Information Disclosure • Denial of Service • Elevation of Privilege. Definition: Threat Modeling: proactive analysis of potential attacks and assets. (How: apply STRIDE to component diagrams (e.g., spoofing in authentication); DREAD scores risks; why: prioritizes mitigations and prevents oversights like in the SolarWinds supply chain attack.)

PART 2 – WEB VULNERABILITIES (Week 10)

OWASP Top 10 2025 RC + mechanics (updated from 2021): this update reflects evolving threats like supply chain incidents after 2023. The examples below explain key changes and practical mitigations.

For each entry: rank and name, brief description (definition), key change from 2021, classic example (how), and key mitigation (why / how).

  • A01 – Broken Access Control. Flaws allowing unauthorized access or bypass. Change from 2021: SSRF consolidated into the access-control category. Example: IDOR (change the ID in a URL), forced browsing. Mitigation: server-side enforcement, ReBAC. (Why: client-side controls are easy to tamper with; how: check user permissions per request.)
  • A02 – Security Misconfiguration. Weak defaults, exposed services, inconsistent controls. Change: moved up (was A05:2021). Example: debug mode enabled, directory listing exposed. Mitigation: hardening guides, auto-scanners. (Why: defaults are exploited easily; how: use CIS benchmarks in CI/CD.)
  • A03 – Software Supply Chain Failures. Vulnerabilities in dependencies, CI/CD, and distribution; includes malware and tampering. Change: expanded from Vulnerable Components (A06:2021). Example: Log4Shell, malicious npm packages exfiltrating credentials. Mitigation: SCA + SBOM + automated updates. (Why: dependencies constitute the majority of code; how: verify packages pre-install and use dependency graphs to block compromises.)
  • A04 – Cryptographic Failures. Insecure or outdated encryption exposing data. Change: from A02:2021. Example: MD5-hashed passwords, TLS 1.0 usage. Mitigation: Argon2 or bcrypt for hashing, TLS 1.3, and HSTS. (Why: weak crypto leads to breaches; how: enforce minimum standards.)
  • A05 – Injection. Input flaws such as SQL, OS, or template injection. Change: from A03:2021. Example: SQLi payload ' OR 1=1 -- ; CMDi payload ; rm -rf /. Mitigation: prepared statements and allow-lists (see the sketch after this list). (Why: interpreters can execute malicious input; how: bind parameters to separate data from code.)
  • A06 – Insecure Design. Risks from poor architecture or absence of threat modeling. Change: from A04:2021. Example: no rate limits, weak password recovery flows. Mitigation: threat modeling and secure design patterns. (Why: design flaws are hard to fix later; how: model threats during design.)
  • A07 – Authentication Failures. Issues in logins, passwords, or sessions that allow unauthorized entry. Change: updated from A07:2021 (Identification & Authentication). Example: credential stuffing, missing MFA. Mitigation: implement MFA and secure credential storage. (Why: weak auth is a primary entry point; how: hash passwords and enforce complexity.)
  • A08 – Software/Data Integrity Failures. Tampering in code, data, updates, or pipelines. Change: from A08:2021. Example: insecure CI/CD pipelines. Mitigation: code signing and subresource integrity. (Why: prevents injection in transit; how: verify hashes on artifacts.)
  • A09 – Logging & Alerting Failures. Gaps that allow attacks to go undetected. Change: from A09:2021. Example: no alerts on repeated failed logins. Mitigation: SIEM and centralized logging. (Why: detection is key to response; how: log anomalies and integrate alerts.)
  • A10 – Mishandling of Exceptional Conditions. Unsafe error handling or resilience issues that reveal information or fail open. Change: new category derived from poor code quality. Example: leaky error messages, unhandled exceptions. Mitigation: consistent error handling and fail-safe defaults. (Why: errors can be exploited; how: use try/catch and mask internals.)
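  • Example – parameterized query (minimal sketch, assuming Python with the standard-library sqlite3 module; the users table and find_user helper are hypothetical): binding parameters keeps input as data, which is the A05 mitigation above.

      import sqlite3

      def find_user(conn: sqlite3.Connection, username: str):
          # Vulnerable pattern: "SELECT ... WHERE name = '" + username + "'"
          # lets an input like  ' OR 1=1 --  rewrite the query logic.
          # The ? placeholder binds the value as data, never as SQL code.
          cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
          return cur.fetchone()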

XSS Types (Definition: Cross-Site Scripting: script injection into trusted sites.)

  • Reflected → non-persistent (How: payload in request echoes immediately; Why: tricks users via crafted links).
  • Stored → persistent (wormable) (How: saved in database, served to all users; Why: amplifies impact across users).
  • DOM-based → client-side sink (innerHTML, document.write, eval) (How: JavaScript modifies DOM with input; Why: bypasses server-side filters).
  • CSRF (Definition: Cross-Site Request Forgery: tricks a browser into performing unauthorized actions.) → SameSite=Lax (default safe), Strict, or None + Secure + token. (How: token per request; Why: validates intent and blocks forged requests from other sites.)
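  • Example – output encoding against reflected XSS (minimal sketch using Python's standard-library html module; render_search_results is a hypothetical handler): encoding the echoed input is what stops the reflected payload from executing.

      import html

      def render_search_results(query: str) -> str:
          # A raw echo would run payloads like <script>alert(1)</script>.
          # html.escape converts <, >, &, and quotes to entities, so the
          # browser renders the payload as text instead of executing it.
          safe = html.escape(query, quote=True)
          return f"<p>Results for: {safe}</p>"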

PART 3 – TESTING TOOLS MATRIX (Week 11 + extras)

For each tool type: box (white/black/gray), pipeline stage, false-positive rate, whether it pinpoints the exact line, example tools, and definition (how/why).

  • SAST – white-box; commit / PR; false positives: high; exact line: yes; tools: Checkmarx, Fortify, SonarQube. Static Analysis: scans code at rest. (How: pattern matching; why: early bug location, but high false positives that need triage.)
  • DAST – black-box; staging / QA; false positives: low; exact line: no; tools: OWASP ZAP, Burp Scanner. Dynamic Analysis: tests the running application. (How: sends payloads; why: finds runtime issues like misconfigurations, with fewer false positives.)
  • IAST – gray-box; runtime (test & prod); false positives: very low; exact line: yes; tools: Contrast, Synopsys Seeker. Interactive Analysis: instruments during runtime. (How: tracks data flows; why: combines SAST and DAST accuracy.)
  • RASP – gray-box; production; pinpoints findings and actively blocks attacks rather than only reporting; tools: Imperva, Signal Sciences, Sqreen. Runtime Protection: self-defends the application. (How: blocks attacks in real time; why: virtual patching for zero-days.)
  • SCA – runs at dependency install; finds known CVEs; tools: Snyk, Dependabot, Black Duck. Software Composition Analysis: scans third-party libraries. (How: CVE matching; why: dependencies are often vulnerable, as with Log4Shell.)
  • Fuzzing – CI or nightly; finds crashes; tools: AFL++, libFuzzer, boofuzz. Input Testing: random or malformed data. (How: mutate or generate inputs; why: uncovers edge cases that other tools miss.)

PART 4 – CODE REVIEW & FUZZING (Week 12)

  • Fuzzing Types (Definition: Fuzzing: automated invalid input testing to find bugs and crashes.)
    • Mutation-based (dumb) → AFL, radamsa (How: alter valid data; Why: simple and quick for basics; a minimal sketch appears at the end of this part).
    • Generation-based (smart) → Peach, boofuzz (How: spec-based creation; Why: targeted for protocols).
    • Coverage-guided → libFuzzer, Honggfuzz (How: feedback optimizes input selection; Why: efficient deep coverage).
  • Secure Code Review Top 10 Checklist (Definition: Code Review: manual security inspection. How: peers use checklists; Why: catches logic flaws automation misses.)
    1. All inputs validated (whitelist). (Why: blocks injection; How: regexes and allow-lists.)
    2. All outputs encoded for context. (Why: prevents XSS; How: HTML and JS encoders.)
    3. Authentication on every state-changing endpoint. (Why: enforces identity.)
    4. Authorization checked after authentication. (Why: prevents privilege escalation.)
    5. No hard-coded secrets or keys. (Why: prevents leaks in repositories.)
    6. Crypto uses approved algorithms only. (Why: avoid weak algorithms like MD5.)
    7. Errors never leak stack traces. (Why: prevents information disclosure.)
    8. Sessions regenerated on login and privilege change. (Why: prevents session fixation.)
    9. Secure + HttpOnly + SameSite cookie flags. (Why: protects cookies.)
    10. Third-party libraries pinned and monitored. (Why: allows controlled updates and monitoring for CVEs.)
  • Swiss Cheese Model = defense in depth – multiple imperfect layers produce strong overall protection. Definition: layered-security analogy. (How: stack tools and reviews; Why: holes in one layer are covered by the others, reducing single-point-of-failure risk.)
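  • Example – mutation-based fuzzing harness (minimal sketch in Python; parse stands in for whatever target function is under test, and seed is assumed to be a valid, non-empty sample): flip random bits in valid input and treat any unhandled exception as a finding.

      import random

      def mutate(data: bytes, flips: int = 8) -> bytes:
          # Mutation-based ("dumb") fuzzing: start from valid data and flip bits.
          buf = bytearray(data)
          for _ in range(flips):
              i = random.randrange(len(buf))
              buf[i] ^= 1 << random.randrange(8)
          return bytes(buf)

      def fuzz(parse, seed: bytes, rounds: int = 10_000) -> None:
          for _ in range(rounds):
              sample = mutate(seed)
              try:
                  parse(sample)
              except Exception as exc:      # a crash or unhandled error is a finding
                  print("crashing input:", sample, "->", exc)

    Coverage-guided tools such as AFL++ add instrumentation feedback on top of this idea, keeping inputs that reach new code paths.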

PART 5 – AUTHN / AUTHZ / SESSIONS (Week 13)

  • Authentication (AuthN) = “Who are you?” → passwords, MFA, WebAuthn, OAuth 2.0, OpenID Connect. Definition: Authentication: identity verification. (How: multi-factor methods; Why: single passwords are weak against credential stuffing.)
  • Authorization (AuthZ) = “What can you do?” → RBAC, ABAC, ReBAC (relations). Definition: Authorization: granting permissions. (How: role and attribute checks; Why: enforce least privilege.)
  • Session Attacks & Fixes (Definition: Session Management: tracking authenticated state.)
    • Hijacking → steal cookie via XSS or MITM → use HttpOnly and Secure flags. (Why: blocks direct JS access and forces TLS.)
    • Fixation → attacker sets session ID before login → regenerate on login. (Why: breaks preset session IDs.)
    • Prediction → weak randomness → use a cryptographically secure PRNG. (Why: makes IDs hard to guess.)
  • Cookie Flags (exact order) Set-Cookie: sid=abc123; Path=/; Domain=example.com; Secure; HttpOnly; SameSite=Lax; Max-Age=3600. Definition: Flags: attributes securing cookies. (How: set in response headers; Why: restrict exposure.)
  • JWT Pitfalls (Definition: JSON Web Tokens: compact authentication claims.)
    • Accepting “alg: none” — validate algorithm.
    • RS256 vs HS256 key confusion — avoid mixing asymmetric and symmetric validation keys.
    • No expiration or overly long exp — use reasonable expirations to avoid infinite sessions.
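  • Example – defensive JWT validation (minimal sketch, assuming the PyJWT library; key handling and claims are illustrative): pinning the algorithm and requiring exp addresses the three pitfalls above.

      import jwt  # PyJWT

      def verify_token(token: str, secret: str) -> dict:
          # Never let the token's own header pick the algorithm; that is what
          # makes "alg: none" and RS256/HS256 key confusion exploitable.
          # Requiring exp prevents effectively infinite sessions.
          return jwt.decode(
              token,
              secret,
              algorithms=["HS256"],
              options={"require": ["exp"]},
          )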

PART 6 – ADVANCED (Week 14)

  • IAST → an agent inside the app server that observes tainted data flow in real time. Definition: Interactive Application Security Testing: runtime code analysis. (How: instrument JVM or other runtimes; Why: contextual accuracy and low false positives.)
  • RASP → terminates attacks inside the app (virtual patching). Definition: Runtime Application Self-Protection. (How: blocks SQLi or dangerous flows; Why: immediate response without restarting services.)
  • SCA + SBOM → mandatory in several regulations and executive orders (e.g., CISA / EO 14028). Definition: SCA: dependency scanning; SBOM: software bill of materials (component inventory). (How: generate manifests and inventories; Why: transparency for supply chain risk management.)
  • Container and Cloud Additions
    • Image scanning → Trivy, Grype, Clair. (How: scan container images for CVEs; Why: containers are ephemeral but still vulnerable.)
    • Infrastructure-as-Code misconfig scanning → tfsec, Checkov. (How: scan IaC templates; Why: prevent misconfig propagation.)
    • Runtime defense → Falco, Tracee. (How: monitor behavior; Why: detect anomalies in running workloads.)

PART 7 – SYNTHESIS SCENARIO ANSWER TEMPLATE (Week 15)

(Use this exact five-step structure for every essay or case study.) Definition: Synthesis: holistic risk and tool application. (How: analyze the scenario; Why: ties concepts to real-world defense.)

  1. Data & Compliance — PII / PHI / PCI / GDPR / HIPAA? CIA triad impact? (How: classify sensitivity; Why: guides priorities, e.g., HIPAA fines.)
  2. Threat Model — apply STRIDE to each major component. (How: diagram threats; Why: uncovers vectors.)
  3. Tool Selection & Justification — Phase | Tools | Why
    • Commit: SAST + SCA — early detection; fail build on critical issues (cost savings)
    • PR: Code review + Semgrep — human review for logic flaws
    • Build: Unit + mutation fuzzing — crash detection for edge cases
    • Staging: DAST + IAST + ZAP baseline — runtime and configuration issues
    • Pre-release: Manual pen test + bug bounty — emulate real attackers for proof
    • Prod: RASP + WAF + continuous SCA — runtime protection and continuous dependency monitoring
  4. Remediation Priority (CVSS v3.1) — 9.0–10.0 → 24 hours; 7.0–8.9 → 7 days; 4.0–6.9 → 30 days; 0.1–3.9 → next sprint. Definition: CVSS: vulnerability scoring. (How: calculate exploitability and impact; Why: triage resources effectively.)
  5. Secure the Pipeline Itself — sign commits, lock dependency versions, run secret scanning (e.g., git-secrets). (Why: pipelines are often targeted too.)

EXTRA PROFESSOR FAVORITES (almost guaranteed to appear)

  • Real Breaches & Lessons — (Why add? provides applied examples)
    • Equifax → missed VA patching (how: unpatched Struts); lesson: regular scans and patching.
    • SolarWinds → supply chain compromise (how: compromised build); lesson: SCA and SBOM essential.
    • Capital One → SSRF in cloud metadata access (how: metadata exfiltration); lesson: tighten access controls.
    • Twitter 2020 → social engineering and weak AuthN (how: phishing); lesson: MFA and staff training.
  • Secure Headers (one-liner) — Strict-Transport-Security, Content-Security-Policy, X-Frame-Options: DENY, X-Content-Type-Options: nosniff, Referrer-Policy: strict-origin-when-cross-origin. Definition: Headers: browser security instructions. (How: set in responses; Why: block common attacks like clickjacking.)
  • Zero Trust Mantra: “Never trust, always verify.” Definition: Zero Trust: continuous verification model. (How: micro-segmentation; Why: assume breach, modern perimeter-less environments.)
  • OWASP Cheat Sheet Series — know these exist for every major topic. (How: reference for best practices; Why: standardized guidance.)
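  • Example – setting the secure headers above (minimal sketch, assuming a Flask app; the values are starting points, and CSP in particular must be tuned per application).

      from flask import Flask

      app = Flask(__name__)

      @app.after_request
      def set_security_headers(resp):
          # Values mirror the one-liners listed above.
          resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
          resp.headers["Content-Security-Policy"] = "default-src 'self'"
          resp.headers["X-Frame-Options"] = "DENY"
          resp.headers["X-Content-Type-Options"] = "nosniff"
          resp.headers["Referrer-Policy"] = "strict-origin-when-cross-origin"
          return resp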


PART 1 – FOUNDATIONS OF SECURITY TESTING (Week 9 – Deep Dive)

  • Software Development Life Cycle (SDLC): a structured process for building software; phases: Planning / Requirements → Design → Implementation / Coding → Testing / Verification → Deployment → Maintenance. Security must integrate throughout to avoid retrofits.
  • Shift-Left Security: incorporating security early (e.g., requirements phase) vs. late; reduces fix costs (1× in design vs. 100× post-release). The professor might test: “Explain cost benefits with example.”
  • V-Model: an extension of waterfall; V-shaped with development on the left and testing on the right for parallel verification and validation.
    • Mappings (Expanded)
      • Requirements → User Acceptance Testing (UAT): validate business and security requirements (e.g., “App must encrypt PII” tested via scenarios).
      • High-Level Design → System Testing: find architecture flaws (e.g., secure communications between modules).
      • Low-Level Design → Integration Testing: verify component interfaces (e.g., API authentication).
      • Implementation → Unit Testing: code-level checks (e.g., function for password hashing).
    • Exam Trap: “Map a SQLi test to the V-Model phase” — answers could include Integration/Unit for code issues and System for end-to-end.
  • Verification: definition and methods — ensures the product is built correctly per documented specs. Methods: inspections and static analysis.
  • Validation: definition and methods — ensures the product meets user and business needs via dynamic tests and simulations.
    • Comparison Table (Deep):
      • Focus: Verification = internal consistency (specs compliance); Validation = external effectiveness (user satisfaction).
      • Timing: Verification = early / ongoing (design and code); Validation = late (integration / UAT).
      • Tools / Methods: Verification = inspections, static analyzers (e.g., linters); Validation = functional tests, simulations (e.g., load testing for DoS).
      • Security Example: Verification = verify the encryption algorithm matches policy; Validation = validate that encryption protects data in a breach simulation.
  • Vulnerability Assessment (VA): definition and characteristics — automated, non-intrusive scans for known weaknesses.
    • Vs. Penetration Testing (Pen Test): penetration testing uses ethical hacking to simulate attacks, exploit vulnerabilities, and demonstrate impact.
      • Deep Comparison:
        • Automation: VA = high (tools like Nessus); Pen Test = low (manual + tools like Metasploit).
        • Risk: VA = low (scans only); Pen Test = high (exploits; needs ROE).
        • Output: VA = vulnerability list and scores; Pen Test = proof-of-concept exploits and remediation roadmaps.
        • Frequency: VA = regular (weekly / monthly); Pen Test = periodic (annual / after major changes).
      • Added Prof Test: Rules of Engagement (ROE) for pen tests — define scope, permissions, and prohibit DoS on production unless explicitly allowed.
  • Threat Modeling: definition and methods — proactive identification of threats, assets, and attack vectors; e.g., STRIDE or PASTA.
    • STRIDE (Added Detail): Spoofing (impersonation), Tampering (data modification), Repudiation (deniability), Information Disclosure (leaks), Denial of Service (availability), Elevation of Privilege (privilege escalation).
    • DREAD Risk Rating: Damage potential, Reproducibility, Exploitability, Affected users, Discoverability (score 1–10 each).
  • Real-World Addition: Equifax breach (2017): poor VA process led to an unpatched Struts vulnerability and 147 million records exposed. Professor question: “How could VA have prevented this?”
  • Potential Exam Q: “Integrate security in Agile SDLC vs. Waterfall.”
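  • Example – DREAD scoring (toy sketch in Python; averaging the five 1–10 factors is one common convention, and the threat and ratings below are hypothetical).

      def dread_score(damage: int, reproducibility: int, exploitability: int,
                      affected_users: int, discoverability: int) -> float:
          # Each factor is rated 1-10; the average gives a comparable risk number.
          return (damage + reproducibility + exploitability
                  + affected_users + discoverability) / 5

      # Hypothetical rating for "SQL injection on the login form":
      print(dread_score(9, 8, 7, 9, 6))   # 7.8 -> treat as high priority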

PART 2 – WEB APPLICATION VULNERABILITIES (Week 10 – Deep Dive + OWASP Full)

  • OWASP Top 10: definition and significance — community-curated list of critical web application risks; know all ten deeply.
    • A01: Broken Access Control: bypassing permissions; e.g., IDOR. Impact: unauthorized access. Mitigation: enforce server-side checks (e.g., if user.id != resource.owner).
    • A02: Cryptographic Failures: weak or missing crypto; e.g., MD5 hashing. Impact: data exposure. Mitigation: use bcrypt or Argon2 and TLS 1.3.
    • A03: Injection: untrusted input passed to interpreters.
      • SQLi: mechanism: ' OR 1=1 -- bypasses the WHERE clause. Impact: database dump. Mitigation: prepared statements (e.g., PreparedStatement in Java).
      • Command Injection: mechanism: ; ls -la appended to a shell command. Impact: remote code execution. Mitigation: shell escaping or argument lists (e.g., Python: shlex.quote; see the sketch at the end of this part).
      • Added: LDAP injection and related directory traversal risks.
    • A04: Insecure Design: flawed architecture; e.g., no rate limiting. Impact: brute-force attacks. Mitigation: secure defaults and design patterns.
    • A05: Security Misconfiguration: improper setup; e.g., exposed .git directory. Impact: leaks. Mitigation: automated hardening with tools like Ansible.
    • A06: Vulnerable/Outdated Components: unpatched libraries. Impact: exploits such as Log4Shell. Mitigation: SCA tools and timely updates.
    • A07: Identification & Authentication Failures: weak credentials and session management. Impact: impersonation. Mitigation: password policies and MFA.
    • A08: Software/Data Integrity Failures: unverified updates. Impact: supply chain attacks. Mitigation: code signing.
    • A09: Security Logging/Monitoring Failures: inadequate logs or alerts. Impact: undetected breaches. Mitigation: integrate with SIEM.
    • A10: Server-Side Request Forgery (SSRF): application fetches attacker-controlled URLs. Impact: internal network scanning. Mitigation: URL whitelisting and request validation.
  • XSS (Deep): definition and mitigation — script injection that exploits the trust in a site.
    • Reflected: immediate echo via input in the request (e.g., search parameter).
    • Stored: persistent payload stored in DB (e.g., comments).
    • DOM-based: client-side sources and sinks (e.g., eval(location.hash)).
    • Impact: cookie theft (document.cookie). Mitigation: context-aware output encoding (HTML-entity encoding for HTML bodies; encodeURIComponent for URL contexts) and CSP (Content-Security-Policy: script-src 'self').
    • Added: Blind XSS — payload that triggers in admin consoles later.
  • CSRF: definition and mitigation — exploits browser’s automatic authentication. Mechanism: hidden form or image load. Impact: state-changing actions executed without user intent. Mitigation: anti-CSRF tokens and SameSite cookies.
  • Real-World: Heartbleed (2014) – an OpenSSL buffer over-read in the Heartbeat extension that leaked server memory, including private keys. Professor question: “Mitigate injection in Node.js code.”
  • Added Professor Topics: API security (OWASP API Top 10: e.g., Broken Object Level Authorization), mobile vulnerabilities (e.g., insecure storage on Android).
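  • Example – command injection mitigation (minimal sketch in Python; list_directory is a hypothetical handler and assumes a Unix ls): passing an argument list with no shell, or quoting with shlex.quote, is the escaping referenced under A03 above.

      import shlex
      import subprocess

      def list_directory(user_path: str) -> str:
          # Interpolating input into a shell string (os.system("ls -la " + user_path))
          # lets "; rm -rf /" run as a second command. An argument list with
          # shell=False (the default) means no shell ever parses the input.
          result = subprocess.run(["ls", "-la", "--", user_path],
                                  capture_output=True, text=True, check=True)
          return result.stdout

      # If a shell string is unavoidable, quote the untrusted argument first:
      # cmd = "ls -la -- " + shlex.quote(user_path)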

PART 3 – SAST & DAST (Week 11 – Deep Comparisons + Tools)

  • SAST: definition — static code analysis for vulnerabilities without running the app; white-box testing.
  • DAST: definition — dynamic testing on the running application; black-box testing that simulates external attacks.
    • Trade-offs (Deep Table):
      • Language support: SAST = language-specific (tools target particular languages); DAST = language-agnostic.
      • Environment needs: SAST = none (code only); DAST = full runtime setup required.
      • Integration: SAST = CI (e.g., GitHub Actions); DAST = CD / staging (e.g., Selenium-driven tests).
      • Examples: SAST finds SQLi in string-concatenated code; DAST detects XSS via HTTP responses.
    • Tools: SAST — SonarQube, Veracode. DAST — OWASP ZAP (proxy), Burp Suite (scanner).
    • Added: false positive mitigation — triage with context and integrate findings into IDEs for faster fixes.
    • Professor Question: “Why use both in a pipeline? Provide the recommended sequence.”
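  • Example – how SAST-style pattern matching pinpoints a line (toy sketch in Python; real tools such as SonarQube or Semgrep use parsers and data-flow analysis, not a single regex).

      import re

      # Flag SQL built by string concatenation or formatting inside execute(...).
      RULE = re.compile(r'execute\s*\(\s*(f["\']|["\'].*(\+|%|\.format\())')

      def scan(source: str) -> None:
          for lineno, line in enumerate(source.splitlines(), start=1):
              if RULE.search(line):
                  print(f"line {lineno}: possible SQL built from string concatenation")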

PART 4 – CODE REVIEW & FUZZING (Week 12 – Deep Methods)

  • Fuzzing: definition and methods — automated invalid input testing to uncover crashes and memory leaks.
    • Mutation: alter valid data (e.g., bit-flip file).
    • Generation: spec-based inputs (e.g., grammar for JSON).
    • Grey-box / coverage-guided: e.g., AFL++.
  • Secure Code Review: definition and checklist — human-led inspection for security issues.
    • Checklist (Deep): 1) Sanitization? 2) Authentication everywhere? 3) Crypto best practices? 4) Safe logging? 5) Dependencies secured?
    • Swiss Cheese: layered defenses — e.g., review + SAST + runtime checks.
  • Added: automated review tools such as Semgrep for custom pattern detection.
  • Professor Question: “Describe fuzzing a web form and expected findings.”
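  • Example – allow-list input validation from the checklist (minimal sketch in Python; the username rules are illustrative and should match your own requirements).

      import re

      USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")   # allow-list, not block-list

      def validate_username(value: str) -> str:
          # Reject anything outside the expected character set and length,
          # rather than trying to strip "bad" characters after the fact.
          if not USERNAME_RE.fullmatch(value):
              raise ValueError("invalid username")
          return value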

PART 5 – AUTHN, AUTHZ, & SESSION MGMT (Week 13 – Deep Risks)

  • Authentication (AuthN): factors — Knowledge (password), Possession (OTP), Inherence (biometrics).
  • Authorization (AuthZ): models — RBAC (roles), ABAC (attributes), PBAC (policies).
  • Session Management: tracking user state after authentication.
    • Hijacking: session ID theft (e.g., via unencrypted Wi-Fi).
    • Fixation: preset session ID before login — mitigation: regenerate on login.
    • Mitigation: always use HTTPS, regenerate session IDs on login and privilege change (see the sketch at the end of this part).
  • Cookie Flags (Deep): Secure (TLS only), HttpOnly (no JS access), SameSite (CSRF protection), Path/Domain (scope limit).
  • Added Vulnerabilities: OAuth misconfiguration (e.g., open redirects), JWT issues (alg=none, key confusion).
  • Real-World: Yahoo breach (2013) involved session cookie theft.
  • Professor Question: “Provide a code example for secure session handling in PHP.”
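  • Example – fixation-resistant session handling (minimal sketch in Python for consistency with the other examples; in PHP the equivalent step is session_regenerate_id() on login, and the in-memory store below stands in for a real session backend).

      import secrets

      SESSIONS: dict[str, dict] = {}   # stand-in for a server-side session store

      def login(old_session_id: str | None, user_id: int) -> tuple[str, str]:
          # Fixation defence: discard any pre-login session ID and mint a fresh one.
          # secrets uses a CSPRNG, which also defeats session-ID prediction.
          if old_session_id:
              SESSIONS.pop(old_session_id, None)
          sid = secrets.token_urlsafe(32)
          SESSIONS[sid] = {"user_id": user_id}
          cookie = f"sid={sid}; Path=/; Secure; HttpOnly; SameSite=Lax; Max-Age=3600"
          return sid, cookie   # cookie string goes in the Set-Cookie response header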

PART 6 – ADVANCED TECHNIQUES (Week 14 – Deep Hybrids)

  • IAST: runtime instrumentation to detect vulnerabilities; gray-box with low false positives.
  • RASP: application self-defense, blocking attacks at runtime and performing virtual patching.
  • Supply Chain Security: protect dependencies; use SCA to scan for CVEs.
    • SBOM: maintain an inventory of components for transparency and incident response.
  • Added: WAF (Web Application Firewall) provides signature-based blocking, while RASP offers contextual, in-app blocking.
  • Emerging: AI in testing (e.g., ML for anomaly detection).
  • Professor Question: “Compare IAST and RASP in production environments.”

PART 7 – SYNTHESIS (Week 15 – Deep Framework + Compliance)

  • Scenario Steps (Deep):
    1. Risk Identification: classify data (PII/PHI) and apply the CIA triad (confidentiality, integrity, availability).
    2. Tools: SAST early (code), DAST mid (runtime), Pen test late (proof); add SCA for dependencies.
    3. Schedule: pipeline sequence — Commit (SAST), Build (Unit / Fuzz), Stage (DAST), Release (Pen); recurring scans afterwards.
    4. Remediation: prioritize using CVSS (Base: exploitability / impact; Temporal: maturity; Environmental: asset criticality).
  • Added: compliance mapping — HIPAA for health data, GDPR for EU personal data; bug bounty programs for crowdsourced testing.
  • Real-World: SolarWinds (2020) supply chain compromise — missed checks in code reviews and build processes.
  • Professor Essay Question: “For a fintech app, outline a full security pipeline with justifications.”

EXTRA INFERRED EXAM TOPICS (Prof Likely Adds)

  • Zero Trust: assume breach and verify every request (e.g., micro-segmentation).
  • Compliance & Standards: GDPR (data protection), PCI-DSS (payment card security), ISO 27001 (information security management).
  • Bug Bounty Programs: reward ethical hackers (e.g., HackerOne).
  • Zero-Day Vulnerabilities: unknown exploits — mitigations include heuristics and rapid patching.
  • Mobile Security Additions: insecure data storage and reverse engineering; tools: MobSF.
  • API Vulnerabilities: missing rate limiting, broken object-level authorization; see OWASP API Top 10.
  • Breaches to Know: Target (2013: misconfiguration), Capital One (2019: SSRF).
  • General Tips: CVSS calculation example (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N = 7.5 High). Secure headers: HSTS, Content-Security-Policy.
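  • Example – reproducing the 7.5 score above (sketch in Python of the CVSS v3.1 base-score formula; only the metric weights needed for this vector are included, and Scope is assumed Unchanged).

      import math

      def cvss31_base(av, ac, pr, ui, c, i, a):
          AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}[av]
          AC = {"L": 0.77, "H": 0.44}[ac]
          PR = {"N": 0.85, "L": 0.62, "H": 0.27}[pr]   # weights for Scope: Unchanged
          UI = {"N": 0.85, "R": 0.62}[ui]
          CIA = {"H": 0.56, "L": 0.22, "N": 0.0}
          iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
          impact = 6.42 * iss
          exploitability = 8.22 * AV * AC * PR * UI
          if impact <= 0:
              return 0.0
          # Round up to one decimal place, per the spec's Roundup function.
          return math.ceil(min(impact + exploitability, 10) * 10) / 10

      # AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
      print(cvss31_base("N", "L", "N", "N", "H", "N", "N"))   # 7.5 (High)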

Mnemonics: OWASP Top 10 (2021 order): “Broken Crypto Injections Insecurely Misconfigure Vulnerable IDs Software Logs SSRF”.