Core Information Security Concepts and Models
CIA Triad from a Data Protection Perspective
The CIA Triad is fundamental to data protection:
- Confidentiality: Ensures data is protected from unauthorized disclosure.
- Integrity: Ensures data is accurate and reliable.
- Availability: Ensures data is accessible when and where it is needed.
Confidentiality Examples
- Information Existence Confidentiality: Even the fact that the information exists must be kept secret (e.g., juvenile records).
- Information Confidentiality: E.g., encrypting a medical record database and granting access only with proper authorization.
Integrity Requirements
Must protect against changes to information by:
- Unauthorized agents (e.g., unauthorized change to a bank balance by a hacker).
- Authorized agents in unauthorized ways (e.g., a bank programmer changing the round() function to transfer funds to an offshore account).
Availability Requirements
Must be able to provide information:
- When being attacked (e.g., use redundancy control to ensure online services remain available during a Denial of Service attack).
- After being attacked (e.g., use data backup controls to ensure data availability even if the primary data center is compromised).
Security Functions Perspective
These concepts define how access and actions are managed; a short sketch follows the list:
- Identification: Be able to identify yourself to someone else (e.g., user name, user ID).
- Authentication: Be able to prove your identity to someone else (e.g., password, driver's license).
- Authorization: Be able to control who can perform what action (e.g., Access Control List (ACL), role-based access control).
- Accountability/Nonrepudiation: Be able to track who performed what action (e.g., logs, digitally signed documents).
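A minimal sketch in Python, assuming a hypothetical in-memory user store and ACL, of how the four functions fit together (the unsalted password hash is used only to keep the example short):

```python
import hashlib, hmac, logging

logging.basicConfig(level=logging.INFO)  # accountability: keep an audit trail

# Hypothetical stores for illustration only.
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}  # identification -> credential
ACL = {"alice": {"report.txt": {"read"}}}                 # authorization: who may do what

def authenticate(user_id: str, password: str) -> bool:
    """Authentication: prove the claimed identity with a credential."""
    stored = USERS.get(user_id)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return stored is not None and hmac.compare_digest(stored, supplied)

def authorize(user_id: str, resource: str, action: str) -> bool:
    """Authorization: consult the ACL for the requested action."""
    return action in ACL.get(user_id, {}).get(resource, set())

def access(user_id: str, password: str, resource: str, action: str) -> bool:
    if not authenticate(user_id, password):
        logging.info("AUTH FAILURE for %s", user_id)       # accountability
        return False
    allowed = authorize(user_id, resource, action)
    logging.info("%s %s %s -> %s", user_id, action, resource, allowed)
    return allowed

print(access("alice", "s3cret", "report.txt", "read"))   # True
print(access("alice", "s3cret", "report.txt", "write"))  # False
```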
Common Security Terms and Definitions
- Vulnerability: An absence of a countermeasure or a weakness in a system.
- Threat: Any potential danger associated with the exploitation of a vulnerability.
- Exposure: An instance of a threat exploiting a vulnerability and causing losses.
- Exploit: A program or process used to cause an exposure.
- Control/Countermeasure/Safeguard: A mechanism put in place to reduce the potential risk.
- Risk: The evaluation of the severity of a potential exposure, usually based on two factors: the probability that the exposure will occur and the impact it would cause (see the sketch below).
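As a rough illustration of combining the two factors into a single score (the probability and dollar figures below are invented):

```python
def risk_score(probability: float, impact: float) -> float:
    """Combine likelihood (0..1) with impact (e.g., expected loss in dollars)."""
    return probability * impact

# A control is usually only worth deploying if it costs less than the risk it reduces.
annualized_risk = risk_score(probability=0.05, impact=100_000)  # $5,000 expected loss/year
print(annualized_risk)
```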
Common Security Control Types and Functionalities
Control Types
- Administrative
- Technical
- Physical
Control Functionalities
- Deterrent: Discourage a potential attacker.
- Preventive: Stop an incident from occurring.
- Corrective: Fix items after an incident has occurred.
- Recovery: Restore necessary components to return to normal operations.
- Detective: Identify an incident’s activities after it took place.
- Compensating: Alternative control that provides similar protection as the original control.
Common Approaches to Provide Security
- Fortress Model: Original Approach – Build a series of defenses.
- Operation Model: Modern Approach – Protection = Prevention + (Detection + Response).
Where:
- Prevention = firewalls, access controls, encryption.
- Detection = intrusion detection systems, honeypots, audit logs.
- Response = backups, incident response teams, computer forensics.
Know The “Enemy”
Attackers vary in skill:
- Vast majority are so-called script kiddies; they leverage existing tools. Keeping them out is feasible for the average organization.
- Some are script writers; they exploit known vulnerabilities. Keeping them out is more of a challenge.
- A select few are the elite; they discover new vulnerabilities (Zero-Day Vulnerability).
Many attacks today come from the organized criminal world, and in some cases, nation-states are involved in information warfare. Fending off nation-state sponsored attacks is in another league. Remember: Not all threats are on the outside; insiders can be more dangerous!
Security Dilemma
A defender must protect against all kinds of attacks, known and unknown, while an attacker only needs to find one weak spot. No absolutely secure system exists! Security issues can stem from deficiencies in technology, people management, and process management.
Information Security is about Risk Management
Does it make sense to buy a $1000 safe to protect $100 worth of goods? Information Security is not about protecting against all potential risks but about taking proper actions against known risks.
Risk Management Actions
- Risk Mitigation: Use more or better controls to lower the risk level (e.g., replace single-factor authentication with 2-factor authentication).
- Risk Transfer: Transfer the risk to other parties (e.g., buy insurance).
- Risk Elimination: Get rid of the risk (e.g., remove the services that introduce the risk).
- Risk Acceptance: Do nothing, just accept it (e.g., if the risk has a minor impact on the business).
Information Security Domains (From CISSP)
Security and Risk Management
This domain covers information systems management, including confidentiality, integrity, and availability (CIA); security governance principles; compliance requirements; legal and regulatory issues; IT policies and procedures; and risk-based management concepts.
Asset Security
This domain addresses the general requirements of information security: data privacy, data classification, and data-handling security controls.
Security Architecture and Engineering
This domain covers secure design principles, fundamental security models, assessing and mitigating vulnerabilities, cryptography, and designing/implementing physical security.
Communications and Network Security
This domain covers the design and protection of an organization’s networks, including secure design principles for network architecture, secure network components, and secure communication channels.
Identity and Access Management
This domain helps professionals control user access to data, covering physical and logical access, the identity and access provisioning lifecycle, identification, authentication, authorization mechanisms, and accountability mechanisms.
Security Assessment and Testing
This domain focuses on the design, performance, and analysis of security testing, including designing and validating assessment strategies, security testing techniques, and internal/third-party security audits.
Security Operations
This domain addresses operational activities: foundational concepts, logging and monitoring, incident management, disaster recovery, and business continuity.
Software Development Security
This domain addresses security concerns in software development, covering security in the Software Development Life Cycle (SDLC), security controls in development environments, and secure coding guidelines and standards.
Security Design Principles
Principle of Complete Mediation
All access to objects must be checked to ensure they are allowed. All access means every single access request.
Good Example: Web applications check each individual web request using session tokens.
Failure Example: Earlier UNIX versions checked access rights only upon file opening, not during subsequent reads. If a user lost rights while the file was open, the system would not enforce the revocation until the next open.
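A minimal sketch of per-request mediation, assuming a hypothetical handler layer rather than any real web framework:

```python
import secrets

SESSIONS = {}  # token -> user, kept server-side

def login(user: str) -> str:
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = user
    return token

def mediated(handler):
    """Check the session token on every single request, not just the first one."""
    def wrapper(token, *args, **kwargs):
        user = SESSIONS.get(token)
        if user is None:                      # revoked or never issued
            raise PermissionError("access denied")
        return handler(user, *args, **kwargs)
    return wrapper

@mediated
def read_profile(user):
    return f"profile of {user}"

t = login("alice")
print(read_profile(t))      # allowed
SESSIONS.pop(t)             # revoke the session...
# read_profile(t)           # ...and the very next request is refused
```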
Principle of Economy of Mechanism
Security mechanisms should be as simple and small as possible. Simple designs are easier to analyze, leading to fewer errors.
Failure Example: Windows OS provides hundreds of security-related policies, making it hard to understand and increasing the likelihood of misconfiguration.
Principle of Fail-Secure Defaults
By default, no access should be granted in abnormal system states; access must be granted explicitly.
Example: In a fire hazard, a door to a restricted area should be locked (Fail-secure). When there is a system error, users should be logged off.
Fail-Secure Vs. Fail-Safe: Fail-secure protects information assets; fail-safe protects people's safety (e.g., opening the door during an emergency).
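A sketch of the default-deny idea, assuming the policy lookup itself can fail at runtime: anything not explicitly granted, and any abnormal state, results in no access:

```python
def load_policy():
    """Stand-in for a policy store lookup that could fail at runtime."""
    return {("alice", "door_42"): True}          # explicit grants only

def may_enter(user: str, door: str) -> bool:
    try:
        policy = load_policy()
    except Exception:
        return False                             # abnormal state -> fail secure, deny
    return policy.get((user, door), False)       # not explicitly granted -> deny

print(may_enter("alice", "door_42"))    # True
print(may_enter("mallory", "door_42"))  # False: no rule means no access
```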
Principle of Least Common Mechanism
Avoid having multiple subjects share the same mechanisms to grant access to a resource. Every shared mechanism is a potential information path.
Good Examples: Avoid sharing variables between processes; use sandboxing to isolate processes.
Security Vs. Convenience: Sharing may be preferred for convenience but usually increases security risks.
Principle of Least Privilege
Only the minimum necessary rights should be assigned to a subject requesting access to a resource. Users and programs should operate using the least set of privileges necessary to complete their job.
Good Examples: Review user access periodically; remove access not used for over 30 days.
Failure Example: Using a super user account all the time for everything.
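A sketch of role-scoped permissions plus a periodic-review helper; the role names are invented, and the 30-day window mirrors the example above:

```python
from datetime import datetime, timedelta

# Role-scoped permissions: each role gets only what the job requires.
ROLE_PERMS = {
    "teller":  {"account:read", "account:deposit"},
    "auditor": {"account:read", "log:read"},
}

LAST_USED = {}  # (user, permission) -> datetime of last use

def can(role: str, permission: str) -> bool:
    return permission in ROLE_PERMS.get(role, set())

def stale_grants(now: datetime, max_idle_days: int = 30):
    """Periodic review: flag permissions unused for more than max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [grant for grant, used in LAST_USED.items() if used < cutoff]

LAST_USED[("bob", "log:read")] = datetime(2024, 1, 1)
print(can("teller", "account:deposit"))   # True
print(can("teller", "log:read"))          # False: not needed for the job
print(stale_grants(datetime(2024, 6, 1))) # flags bob's unused grant for removal
```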
Principle of Open Design
The security of a mechanism should not depend on the secrecy of its design or implementation. Open design allows examination by more experts, leading to fewer errors.
Good Example: Industry standard encryption algorithms rely on the possession of keys, not secret algorithms.
Failure Example: Using “security through obscurity,” such as hiding account passwords in binary files, assuming nobody will find them.
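A sketch using a published construction (HMAC-SHA-256 from Python's standard library): the algorithm is fully documented, and protection rests only on keeping the key secret:

```python
import hmac, hashlib, secrets

key = secrets.token_bytes(32)                 # the only secret in the scheme
message = b"transfer $100 to account 42"

# The construction is public; anyone can read how the tag is computed.
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))              # True only with the right key
```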
Principle of Psychological Acceptability
Security mechanisms must be user-friendly. Security mechanisms will not provide protection if users do not accept them.
Good Example: User-transparent mechanisms (e.g., verifying the user's machine MAC address) or user-friendly mechanisms (e.g., CAPTCHA).
Bad Example: A 25-random-character password requirement is likely to be circumvented insecurely (e.g., sticky notes); many users disable annoying controls like Windows UAC.
Principle of Separation of Privilege
More than one authority should be involved in granting access to critical system operations. This increases scrutiny and reduces fraud chances.
Good Example: Two people are required to transfer organization funds; one orders, the other writes the cheque.
Failure Example: A security administrator is allowed to manage the roles of their own organization account.
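A sketch of two-person control for a funds transfer: the operation executes only after two distinct authorities approve (the class and names are illustrative):

```python
class FundTransfer:
    REQUIRED_APPROVERS = 2

    def __init__(self, amount: float, destination: str):
        self.amount = amount
        self.destination = destination
        self.approvals = set()

    def approve(self, officer: str):
        self.approvals.add(officer)   # repeat approvals by the same officer don't count

    def execute(self):
        if len(self.approvals) < self.REQUIRED_APPROVERS:
            raise PermissionError("needs approval from two different authorities")
        print(f"transferring {self.amount} to {self.destination}")

t = FundTransfer(10_000, "vendor-account-17")
t.approve("alice")
t.approve("alice")        # still only one authority
# t.execute()             # would raise PermissionError here
t.approve("bob")
t.execute()               # now allowed
```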
Principle of Defense in Depth
Use multiple layers and multiple technologies for defensive measures to provide protection. Layering security defenses reduces the chance of a successful attack.
Good Example: Modern systems are protected by firewalls, intrusion detection, anti-malware, incident response, cryptography, and audit controls.
Information Security Models
Bell-LaPadula Model
Security Requirement: Secret information must be prevented from leaking to unauthorized parties. Developed in the 1970s for U.S. military time-sharing systems, it assigns classification levels to subjects and objects.
- Simple Security Rule: A subject cannot read data located at a higher security level than that possessed by the subject (no read up).
- *-Property Rule: A subject cannot write to a lower level than that possessed by the subject (no write down; also called the confinement rule).
- Strong Star Property Rule (Alternative): A subject can write only at the same level possessed by the subject.
Limitations: Only addresses data confidentiality; does not address data existence confidentiality.
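A minimal sketch of the two mandatory rules over a simple ordered set of classification levels (the labels are illustrative):

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple Security Rule: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-Property Rule: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "top_secret"))     # False: read up is blocked
print(can_write("secret", "confidential"))  # False: write down is blocked
```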
Biba Model
Security Requirement: No one can compromise the data integrity. Developed after Bell-LaPadula, it focuses on information integrity, assigning integrity levels to subjects and objects.
- * (Star) Integrity Rule: A subject cannot write to a higher integrity level than that to which it has access (no write up).
- Simple Integrity Rule: A subject cannot read data at a lower integrity level than that to which it has access (no read down).
- Invocation Rule: A subject cannot invoke (request service from) a subject at a higher integrity level.
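A sketch of the Biba read/write rules in the same style; the comparisons mirror Bell-LaPadula because the ordering is by integrity rather than secrecy:

```python
INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple Integrity Rule: no read down (don't consume less trustworthy data)."""
    return INTEGRITY[object_level] >= INTEGRITY[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """* Integrity Rule: no write up (don't contaminate more trustworthy data)."""
    return INTEGRITY[object_level] <= INTEGRITY[subject_level]

print(can_read("high", "low"))    # False: reading down is blocked
print(can_write("low", "high"))   # False: writing up is blocked
```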
Goals of Integrity Models
Integrity models aim to:
- Prevent unauthorized users from making modifications.
- Prevent authorized users from making improper modifications.
- Maintain internal and external consistency of data and programs.
The Biba Model addresses only the first goal.
Clark-Wilson Model
Developed after Biba, this model takes different approaches to protecting integrity. It uses the following elements:
- Users: Active agents.
- Transformation Procedures (TPs): Programmed abstract operations (read, write, modify).
- Constrained Data Items (CDIs): Data items manipulated only by TPs.
- Unconstrained Data Items (UDIs): Data items manipulated by users via primitive read/write operations.
- Integrity Verification Procedures (IVPs): Check the consistency of CDIs with external reality.
The Clark-Wilson Model addresses all three integrity goals by enforcing them through well-formed transactions (using access triples: subject, software [TP], object) and separation of duties.
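A sketch of enforcement through access triples, with all names invented: users never touch a CDI directly, only through a TP they are explicitly bound to, and an IVP checks consistency afterward:

```python
# Access triples: (user, transformation procedure, constrained data item)
TRIPLES = {("alice", "post_payment", "ledger")}

LEDGER = {"balance": 1000}  # a CDI, reachable only through TPs

def post_payment(cdi: dict, amount: int):
    """A well-formed transaction: keeps the CDI internally consistent."""
    cdi["balance"] -= amount
    cdi["entries"] = cdi.get("entries", 0) + 1

def run_tp(user: str, tp_name: str, cdi_name: str, *args):
    if (user, tp_name, cdi_name) not in TRIPLES:
        raise PermissionError("no access triple for this user/TP/CDI combination")
    post_payment(LEDGER, *args)   # dispatch kept trivial for the sketch

def ivp(cdi: dict) -> bool:
    """Integrity Verification Procedure: check the CDI against an external rule."""
    return cdi["balance"] >= 0

run_tp("alice", "post_payment", "ledger", 250)
print(LEDGER, ivp(LEDGER))
```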
Chinese Wall Model (Brewer and Nash Model)
Security Requirement: Conflict-of-interest access must be prevented; unethical actions are not allowed. This model was created to provide access controls that can change dynamically depending upon a user's previous actions.
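A sketch of the history-based access decision: once a consultant accesses one company in a conflict-of-interest class, the other companies in that class become off-limits (the class table is invented):

```python
# Conflict-of-interest classes: competitors grouped together.
COI = {"bank_a": "banks", "bank_b": "banks", "oil_x": "oil"}

HISTORY = {}  # user -> set of companies already accessed

def can_access(user: str, company: str) -> bool:
    cls = COI[company]
    for seen in HISTORY.get(user, set()):
        if COI[seen] == cls and seen != company:
            return False               # would cross the wall within one class
    return True

def access(user: str, company: str) -> bool:
    if not can_access(user, company):
        return False
    HISTORY.setdefault(user, set()).add(company)   # future decisions depend on this history
    return True

print(access("carol", "bank_a"))   # True
print(access("carol", "oil_x"))    # True: different conflict class
print(access("carol", "bank_b"))   # False: conflicts with bank_a
```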
