Storage Systems: A Comprehensive Guide to Technologies and Concepts
SCSI vs FC
SCSI (Small Computer System Interface) and Fibre Channel (FC) are both interfaces used to connect hosts to internal and external disk storage, and both are commonly used with Storage Area Networks (SANs).
- SCSI is the older, traditional interface and can require downtime when storage is added or reconfigured.
- RAID controllers are typically SCSI hardware.
- SCSI is media-specific, using only copper cables.
- Fibre Channel is a more modern interface with higher performance and reliability.
- Fibre Channel hardware is known as a Host Bus Adapter (HBA).
- Fibre Channel is media-independent, supporting both copper and fiber optic cables.
FC vs. iSCSI
Fibre Channel (FC)
- Current market leader for shared storage technologies.
- Provides the highest performance levels.
- Designed for mission-critical applications.
- Relatively high component costs, especially per-server HBA costs.
- Can be difficult to implement and manage.
iSCSI (Internet Small Computer System Interface)
- Relatively new but rapidly growing in popularity.
- Performance can approach Fibre Channel speeds.
- A better fit for database workloads than Network Attached Storage (NAS), since iSCSI provides block-level access.
- Suitable for small to medium-sized businesses.
- Relatively inexpensive compared to Fibre Channel.
- Relatively easy to implement and manage.
NAS Benefits
- Increases throughput for end users.
- Minimizes investment in additional servers.
- Provides storage pooling.
- Supports heterogeneous file serving.
- Utilizes existing infrastructure, tools, and processes.
Benefits of SAN
- Reduces the cost of external storage.
- Increases performance.
- Centralized and improved tape backup.
- LAN-less backup.
- High-speed clustering solutions with no single point of failure.
- Consolidation.
Goals of BigTable
- Data is highly available at any time.
- Very high read/write rates.
- Efficient scans over all or interesting subsets of data.
- Asynchronous and continuous updates.
- High scalability.
- Data is organized as (row, column, timestamp) -> cell contents (see the sketch after this list).
- No table-wide integrity constraints.
- No multirow transactions.
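A minimal sketch of the (row, column, timestamp) -> cell data model, using a plain in-memory Python structure. The class and method names are invented for illustration and are not BigTable's client API.

# Minimal sketch of the BigTable data model: (row, column, timestamp) -> value.
# Columns are written as "family:qualifier". This is an illustrative in-memory
# toy, not BigTable's actual client API.
from collections import defaultdict

class ToyBigTable:
    def __init__(self):
        # row -> column -> {timestamp: value}
        self.rows = defaultdict(lambda: defaultdict(dict))

    def put(self, row, column, timestamp, value):
        self.rows[row][column][timestamp] = value

    def get(self, row, column, timestamp=None):
        """Return the value at the newest timestamp not exceeding the one requested."""
        versions = self.rows[row][column]
        candidates = [ts for ts in versions if timestamp is None or ts <= timestamp]
        return versions[max(candidates)] if candidates else None

t = ToyBigTable()
t.put("com.example/index.html", "contents:", 3, "<html>v3</html>")
t.put("com.example/index.html", "anchor:cnn.com", 5, "Example")
print(t.get("com.example/index.html", "contents:"))  # newest version: "<html>v3</html>"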
How is Chubby Used?
- Ensures at most one active master at any time.
- Stores the bootstrap location of BigTable data.
- Discovers tablet servers and finalizes tablet server deaths.
- Stores BigTable schema information (column family information for each table).
- Stores access control lists.
- If Chubby is unavailable for an extended period, BigTable becomes unavailable.
SSTable
An SSTable (Sorted String Table) is a sorted file of key-value string pairs, containing chunks of data plus an index.
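A minimal sketch of the SSTable idea, assuming a toy on-disk format: records are written in sorted key order, and a small index mapping each key to its byte offset is kept alongside so a lookup can seek directly to the record. The file format and function names below are invented for illustration.

# Toy SSTable writer/reader: sorted key-value records plus an index of
# key -> byte offset. The on-disk format here is invented for illustration.
import json

def write_sstable(path, kv_pairs):
    index = {}
    with open(path, "w") as f:
        for key in sorted(kv_pairs):
            index[key] = f.tell()            # remember where this record starts
            f.write(json.dumps([key, kv_pairs[key]]) + "\n")
    with open(path + ".index", "w") as f:
        json.dump(index, f)                  # index is small enough to keep in memory

def read_sstable(path, key):
    with open(path + ".index") as f:
        index = json.load(f)
    if key not in index:
        return None
    with open(path) as f:
        f.seek(index[key])                   # jump straight to the record
        _, value = json.loads(f.readline())
        return value

write_sstable("demo.sst", {"b": "2", "a": "1", "c": "3"})
print(read_sstable("demo.sst", "b"))  # "2"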
Tablet
A tablet contains a range of rows from a table. It is built from multiple SSTables and stored on tablet servers.
Table
A table is composed of multiple tablets. Tablets do not overlap in row range, but the SSTables that back them can be shared between tablets, and the key ranges of SSTables can overlap.
Fault Tolerance and Load Balancing
- The master is responsible for load balancing and fault tolerance.
- Chubby is used to keep locks on live tablet servers; the master uses these locks to detect failed servers, restart them, and manage tablet recovery.
- The master monitors the status of tablet servers.
- The master keeps track of available tablet servers and unassigned tablets.
- If a server fails, tablet recovery is initiated.
Recovering a Tablet
- A new tablet server reads data from the METADATA table.
- Metadata contains a list of SSTables and pointers to any commit log that may contain data for the tablet.
- The server reads the indices of the SSTables into memory.
- The memtable is reconstructed by applying all updates since the redo points.
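A minimal sketch of that replay step, assuming the commit log is a sequence of (sequence number, row, column, value) mutations: only entries issued after the redo point are applied, since older mutations are already captured in SSTables. Names are illustrative.

# Sketch of memtable reconstruction during tablet recovery: replay every
# commit-log mutation issued after the tablet's redo point.
def recover_memtable(commit_log, redo_point):
    """commit_log: list of (sequence_number, row, column, value) tuples."""
    memtable = {}
    for seq, row, column, value in commit_log:
        if seq > redo_point:                 # older entries are already in SSTables
            memtable[(row, column)] = value
    return memtable

log = [(1, "r1", "c1", "old"), (2, "r1", "c1", "new"), (3, "r2", "c1", "x")]
print(recover_memtable(log, redo_point=1))
# {('r1', 'c1'): 'new', ('r2', 'c1'): 'x'}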
Refinements
- Group column families together into locality groups; a separate SSTable is generated for each locality group.
- Compress locality groups.
- Use Bloom Filters on locality groups to avoid searching the entire SSTable.
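A minimal Bloom filter sketch showing why a negative lookup lets a tablet server skip an SSTable entirely: if any of the filter's bits for a key is unset, the key is definitely absent. The hash construction and sizes below are arbitrary choices for illustration.

# Tiny Bloom filter: k hash functions set k bits per key. A lookup that finds
# any unset bit proves the key is absent, so the SSTable need not be read.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        return all(self.bits[pos] for pos in self._positions(key))

bf = BloomFilter()
bf.add("row42/contents:")
print(bf.might_contain("row42/contents:"))   # True
print(bf.might_contain("row99/contents:"))   # False (almost certainly): skip this SSTable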
What is Spanner?
Spanner is a globally distributed database system designed for strong consistency with wide-area replication. It offers:
- Auto-sharding and auto-rebalancing.
- Automatic failure response.
- User/application control over data replication and placement.
- Transaction serialization via global timestamps.
- Acknowledges clock uncertainty and guarantees a bound on it.
- Uses a novel TrueTime API for concurrency control.
- Enables consistent backups and atomic schema updates during ongoing transactions.
- Features lock-free distributed read transactions.
- Provides external consistency of distributed transactions.
- Integrates concurrency control, replication, and 2PC (2 Phase Commit).
TrueTime
TrueTime is a key enabling technology for Spanner. It provides:
- Interval-based global time.
- Exposes the uncertainty in the clock reading.
- Leverages hardware features like GPS and Atomic Clocks.
- A set of time master servers per datacenter and time slave daemons per machine.
- Daemons poll various masters and reach a consensus about the correct timestamp.
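A minimal sketch of the TrueTime idea in Python: the current time is modeled as an interval [earliest, latest] rather than a single instant, and a commit-wait loop holds a transaction until its chosen timestamp is guaranteed to be in the past. The uncertainty bound below is a made-up constant, not a real measurement, and the function names are not Spanner's actual API.

# Sketch of TrueTime-style interval time and commit wait. The uncertainty
# bound (epsilon) would really come from GPS/atomic-clock masters; here it
# is a made-up constant.
import time

EPSILON = 0.007  # assumed worst-case clock uncertainty, in seconds

def tt_now():
    """Return (earliest, latest): the true time lies somewhere in this interval."""
    now = time.time()
    return now - EPSILON, now + EPSILON

def commit_wait(commit_timestamp):
    """Block until commit_timestamp is definitely in the past everywhere."""
    while tt_now()[0] <= commit_timestamp:
        time.sleep(0.001)

# Pick a commit timestamp no earlier than "latest", then wait out the uncertainty.
_, latest = tt_now()
commit_wait(latest)
print("safe to report the transaction as committed at", latest)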
Dynamo
Dynamo is a distributed key-value store designed for high availability and fault tolerance. Key features include:
- Every node has the same responsibilities as its peers.
- No updates are rejected due to failures or concurrent writes.
- Conflict resolution is executed during reads, not writes, resulting in an "always writeable" system.
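A minimal sketch of that read-time reconciliation, assuming the application supplies a merge function (the classic Dynamo example merges shopping carts by taking the union of their items). Version tracking with vector clocks is omitted.

# Sketch of Dynamo-style "always writeable" behaviour: every replica accepts
# writes; a read gathers whatever versions exist and merges them.
def read_and_reconcile(replica_versions, merge):
    """replica_versions: list of values seen on different replicas."""
    distinct = [v for i, v in enumerate(replica_versions) if v not in replica_versions[:i]]
    if len(distinct) == 1:
        return distinct[0]
    return merge(distinct)            # conflict: let the application decide

# Example: shopping carts that diverged during a network partition.
cart_merge = lambda versions: sorted(set().union(*versions))
print(read_and_reconcile([{"book", "pen"}, {"book", "mug"}], cart_merge))
# ['book', 'mug', 'pen']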
Replica Synchronization
Dynamo implements an anti-entropy (replica synchronization) protocol to keep replicas synchronized, addressing scenarios where hinted replicas become unavailable before being returned to the original replica node. This protocol utilizes a Merkle tree.
Merkle Tree
A Merkle tree is a hash tree where leaves are hashes of individual key values. Parent nodes higher in the tree are hashes of their respective children.
Advantages of Merkle Tree
- Each branch of the tree can be checked independently without requiring nodes to download the entire tree.
- Reduces the amount of data transferred while checking for inconsistencies among replicas.
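A minimal Merkle tree sketch: leaves are hashes of individual key-value pairs, each parent is the hash of its children, and two replicas can compare the root first and descend only into subtrees whose hashes differ. The construction below is illustrative, not Dynamo's exact scheme.

# Toy Merkle tree: compare roots; if they differ, recurse only into the
# differing halves instead of transferring every key.
import hashlib

def h(data):
    return hashlib.sha256(data.encode()).hexdigest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

replica_a = ["k1=v1", "k2=v2", "k3=v3", "k4=v4"]
replica_b = ["k1=v1", "k2=v2", "k3=STALE", "k4=v4"]
print(merkle_root(replica_a) == merkle_root(replica_b))          # False: replicas differ
print(merkle_root(replica_a[:2]) == merkle_root(replica_b[:2]))  # True: left half is in sync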
Membership Detection
Dynamo uses an explicit mechanism to initiate the addition or removal of nodes from the Dynamo ring. This mechanism involves:
- The node serving the request writes the membership change and its time of issue to a persistent store.
- Membership changes form a history as nodes can be removed and added back multiple times.
- A gossip-based protocol propagates membership changes and maintains an eventually consistent view of membership.
- Each node contacts a randomly chosen peer every second to reconcile their persisted membership change histories.
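A minimal sketch of that reconciliation step, assuming each node's persisted history maps a node ID to its latest membership change and issue time; merging keeps the most recent change per node. Field names are illustrative.

# Sketch of gossip-style membership reconciliation: merge two persisted
# change histories, keeping the most recent change seen for each node.
def reconcile(history_a, history_b):
    """Each history maps node_id -> (status, issue_time)."""
    merged = dict(history_a)
    for node, (status, when) in history_b.items():
        if node not in merged or when > merged[node][1]:
            merged[node] = (status, when)
    return merged

a = {"node1": ("added", 10), "node2": ("removed", 15)}
b = {"node2": ("added", 20), "node3": ("added", 5)}
print(reconcile(a, b))
# {'node1': ('added', 10), 'node2': ('added', 20), 'node3': ('added', 5)}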
RAID
RAID (Redundant Array of Independent Disks) is a data storage virtualization technology that combines multiple disk drive components into a logical unit for data redundancy or performance improvement. Data is distributed across the drives using various schemes, known as RAID levels. Each scheme balances reliability, availability, performance, and capacity.
RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors and whole disk failures.
RAID Levels
- RAID 0: Striping without mirroring or parity. No redundancy.
- RAID 1: Mirroring without parity or striping. Full data redundancy.
- RAID 2: Bit-level striping with dedicated Hamming-code parity.
- RAID 3: Byte-level striping with dedicated parity.
- RAID 4: Block-level striping with dedicated parity. Block-interleaved parity. Wasted storage is small: one parity block for N data blocks. Parity disk becomes a hot spot.
- RAID 5: Block-level striping with distributed parity. Parity information is distributed among the drives. Requires at least three disks. RAID 5 is affected by long array rebuild times and the increased chance of a second drive failure during a rebuild.
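A minimal sketch of the parity idea behind RAID 4 and RAID 5: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity. Stripe layout and parity rotation are omitted.

# Parity as used by RAID 4/5: parity = XOR of all data blocks in a stripe.
# If any single block is lost, the missing data is recovered by XOR-ing the rest.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

stripe = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"]   # data blocks on three disks
parity = xor_blocks(stripe)                         # stored on the parity disk

# Disk 1 fails: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt == stripe[1])   # True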
Intelligent Storage System
An intelligent storage system consists of four key components:
- Front end
- Cache
- Back end
- Physical disks
High-end Storage Systems
High-end storage systems, also known as active-active arrays, are typically used by large enterprises for centralizing corporate data. These arrays are designed with a large number of controllers and cache memory. An active-active array allows the host to perform I/Os to its LUNs across any of the available paths.
Midrange Storage Systems
Midrange storage systems, also known as active-passive arrays, are designed for small and medium enterprises. They typically have two controllers, each with its own cache, RAID logic, and disk drive interfaces. Hosts can perform I/Os to a LUN only through its active paths; other paths remain passive until the active path fails. Midrange arrays are less scalable than high-end arrays.