Mastering Algorithm Techniques and Complexity
Stable Matching Problem Proof Strategies
To demonstrate that a matching is stable, a common approach is proof by contradiction. Assume an instability exists: a man m and a woman w’ who are not matched to each other, where m prefers w’ to his assigned partner and w’ prefers m to hers. Then consider two scenarios: (1) m proposed to w’ at some point during the algorithm’s execution, or (2) he never did. In both cases, the assumption of instability leads to a contradiction, thereby proving stability.
To prove an algorithm yields a matching with a specific property (e.g., every man paired with his best valid partner), we again use contradiction. Assume the property fails for at least one pair in the algorithm’s output. Because the partners involved are valid, there must exist another stable matching, S’, in which these individuals are paired. We then show that this scenario forces an instability within S’, contradicting the stability of S’. This step requires verifying both conditions of the instability definition.
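To make the instability definition concrete, here is a minimal Python sketch (the function name and dict-based data layout are illustrative assumptions, not from these notes) that checks whether a matching contains an unstable pair:

```python
def has_instability(matching, men_prefs, women_prefs):
    """Check whether a perfect matching contains an unstable pair.

    matching:    dict mapping each man to his matched woman
    men_prefs:   dict mapping each man to his preference list (best first)
    women_prefs: dict mapping each woman to her preference list (best first)
    """
    partner_of_woman = {w: m for m, w in matching.items()}
    for m, w_matched in matching.items():
        # Condition 1: every woman listed before m's partner is one
        # that m strictly prefers to his assigned partner.
        for w in men_prefs[m]:
            if w == w_matched:
                break  # remaining women rank below m's partner
            # Condition 2: does w also prefer m to her current partner?
            m_current = partner_of_woman[w]
            if women_prefs[w].index(m) < women_prefs[w].index(m_current):
                return True  # (m, w) is an instability
    return False
```

On the output of Gale-Shapley this should always return False; it is mainly useful for testing hand-constructed matchings.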
Algorithm Reductions and Instability Cases
In algorithm reductions, the run time of the transformation should ideally not exceed the run time needed to solve the target problem B; the goal is always a transformation that runs in at most polynomial time. Note, however, that a transformation is not always possible, valid, or useful in every context.
Instability Cases in Stable Matching
- Normal Pairs: For instability among standard pairs (m, w) and (m’, w’), the proof is straightforward, often by simply citing that the Gale-Shapley algorithm inherently produces a stable result.
- Specially Constructed Pairs: If instability arises from pairs constructed in a unique or ‘special’ manner for a particular problem, more detailed work is required. Nevertheless, the contradiction is still derived from the fundamental property that the Gale-Shapley algorithm guarantees a stable outcome.
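For reference, a compact Python sketch of the Gale-Shapley proposal loop (the dict-based input format is an assumption for illustration); its output is the stable matching these proofs cite:

```python
def gale_shapley(men_prefs, women_prefs):
    """Men propose in preference order; each woman holds her best
    proposal so far. Returns a stable matching as {man: woman}."""
    # Rank tables let each woman compare two men in O(1).
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free_men = list(men_prefs)               # all men start unmatched
    next_choice = {m: 0 for m in men_prefs}  # next woman to propose to
    engaged_to = {}                          # woman -> man

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])   # w trades up; old partner freed
            engaged_to[w] = m
        else:
            free_men.append(m)               # w rejects m; he stays free
    return {m: w for w, m in engaged_to.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # {'m1': 'w1', 'm2': 'w2'}
```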
Asymptotic Run Time Analysis
Common run time complexities, ordered from fastest to slowest: O(1) < O(log n) < O((log n)^k) < O(n^c) < O(n) < O(n log n) < O(n^d) < O(n^2) < O(n^2 log n) < O(n^3) < O(n^a) < O(n^a log n) < O(a^n) < O(n!) < O(n^n), where k ≥ 1 is a constant, 0 < c < 1, 1 < d < 2, a > 3, and the base a of a^n is a constant greater than 1.
Understanding Asymptotic Notations: O, Ω, Θ
It’s crucial to remember that for a function to be in O, Ω, or Θ of another function, the respective inequalities need only hold for all n greater than or equal to some fixed n0; we are not concerned if the inequality fails for n < n0. Separately, an alternative characterization of f(n) ∈ Θ(g(n)) is that, as sets, O(f(n)) = O(g(n)).
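As a worked instance of the Θ definition (an added example): f(n) = 3n^2 + 5n is in Θ(n^2), witnessed by the constants c1 = 3, c2 = 8, and n0 = 1:

```latex
% For all n \ge n_0 = 1 (using 5n \le 5n^2):
3n^2 \;\le\; 3n^2 + 5n \;\le\; 3n^2 + 5n^2 \;=\; 8n^2,
\qquad\text{i.e.}\quad c_1 n^2 \;\le\; f(n) \;\le\; c_2 n^2 .
```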
Little O and Little Omega Notations
The definitions of little o (o) and little omega (ω) are obtained from O and Ω, respectively, by changing the existential quantifier (∃) on the constant c to a universal quantifier (∀). Consequently, any function f(n) that is in o(g(n)) is also in O(g(n)), but the converse is not true. Intuitively, for f(n) to be in o(g(n)), f(n) must grow strictly slower than g(n), not merely no faster.
- o(f(n)) ≈ O(f(n)) − Θ(f(n))
- ω(f(n)) ≈ Ω(f(n)) − Θ(f(n))
To ensure a desired result holds for any constant c, it helps to have a general formula for n0 in terms of c. For instance, to guarantee 1/n < c for all n ≥ n0, note that n ≥ n0 implies 1/n ≤ 1/n0; so choosing any n0 > 1/c gives 1/n ≤ 1/n0 < c.
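As a worked example (added here for illustration), the same trick proves n ∈ o(n^2): for every c > 0 we must exhibit an n0 such that n < c·n^2, i.e., 1/n < c, for all n ≥ n0:

```latex
% Choose n_0 in terms of c, e.g. n_0 = \lceil 1/c \rceil + 1 > 1/c.
% Then for all n \ge n_0:
\frac{n}{n^2} \;=\; \frac{1}{n} \;\le\; \frac{1}{n_0} \;<\; c .
```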
Limit Definitions for Asymptotic Notations
These implications assume the limit lim_{n→∞} f(n)/g(n) exists (it may be ∞):
- If lim_{n→∞} f(n)/g(n) = 0, then f(n) ∈ o(g(n)).
- If lim_{n→∞} f(n)/g(n) ≠ ∞, then f(n) ∈ O(g(n)).
- If lim_{n→∞} f(n)/g(n) ∉ {0, ∞}, i.e., the limit is a finite nonzero constant, then f(n) ∈ Θ(g(n)).
- If lim_{n→∞} f(n)/g(n) ≠ 0, then f(n) ∈ Ω(g(n)).
- If lim_{n→∞} f(n)/g(n) = ∞, then f(n) ∈ ω(g(n)).
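These limit tests can be applied mechanically; below is a small sketch using sympy to classify pairs of functions (assumes sympy is installed; the classify helper is hypothetical):

```python
from sympy import symbols, limit, log, oo

def classify(f, g, n):
    """Classify f against g via the limit of f/g as n -> oo.
    Assumes the limit exists (possibly infinite)."""
    L = limit(f / g, n, oo)
    if L == 0:
        return "f in o(g) (hence also O(g))"
    if L == oo:
        return "f in omega(g) (hence also Omega(g))"
    return "f in Theta(g)"  # limit is a finite nonzero constant

n = symbols("n", positive=True)
print(classify(n * log(n), n**2, n))    # f in o(g)
print(classify(3 * n**2 + 5, n**2, n))  # f in Theta(g)
print(classify(2**n, n**10, n))         # f in omega(g)
```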
Aggregate and Amortized Run Time Analysis
Aggregate Run Time: the amortized cost per operation is the total cost of the entire sequence divided by the number of operations, i.e., Σ (number of times each operation occurs × its run time) / (total number of operations).
Amortized Analysis: Accounting Method: In the accounting method, the estimated cost of an elementary operation is set sufficiently high to accumulate enough ‘credit’ to cover the costs of infrequent, more complex operations when they occur.
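As a concrete illustration, consider the classic doubling-array example (a standard example, not taken from these notes): charge each append a flat 3 units; the banked credit pays for the occasional O(n) resize, so the amortized cost per append is O(1). A minimal instrumented sketch:

```python
class DoublingArray:
    """Append-only array that doubles its capacity when full,
    instrumented to show the accounting-method invariant."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.credit = 0  # prepaid units available to fund future resizes

    def append(self, x):
        # Accounting method: charge 3 units per append --
        # 1 pays for this write, 2 are banked toward a future resize.
        self.credit += 3
        if self.size == self.capacity:
            # A resize copies self.size elements; pay for it from credit.
            self.credit -= self.size
            self.capacity *= 2
        self.credit -= 1  # pay for writing x itself
        self.size += 1
        assert self.credit >= 0, "charge was set too low"

arr = DoublingArray()
for i in range(1000):
    arr.append(i)  # never fires the assert: 3 units/append suffice
```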
Graph Theory Fundamentals
Key Graph Definitions
- Path: A sequence of vertices v1, v2, …, vk such that an edge exists between every pair of consecutive vertices vi and vi+1.
- Strongly Connected Graph: A directed graph in which, for every two vertices u and v, there exists a path from u to v and a path from v to u. Two such vertices are termed mutually reachable, so a graph is strongly connected exactly when every pair of its vertices is mutually reachable.
- Bipartite Graph: A graph G = (V, E) is bipartite if its set of vertices V can be partitioned into two disjoint sets, X and Y, such that every edge e ∈ E connects a vertex in X to a vertex in Y.
Combinations Formula
C(n, k) = n! / (k! * (n-k)!)
Effective Proof Strategies for Graphs
- Sketching diagrams can significantly aid understanding.
- Induction on the number of vertices is often a valuable approach.
- Proof by contradiction is effective, especially when dealing with properties like a graph being connected, acyclic, or a tree.
Graph Representations: Adjacency Matrix vs. List
In an adjacency matrix, for each pair of vertices we store either a boolean value (for unweighted graphs) or a weight (for weighted graphs). In an adjacency list, each vertex has a linked list of its adjacent vertices, i.e., its immediate neighbours rather than everything reachable from it; for weighted graphs, each list node stores a neighbour’s ID together with the corresponding edge weight. The total run time for Breadth-First Search (BFS) or Depth-First Search (DFS) using an adjacency list is O(V + E), where V is the number of vertices and E is the number of edges.
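As an illustration of the adjacency-list representation and the O(V + E) bound, here is a minimal BFS sketch in Python (the dict-of-lists graph layout is an assumption for the example):

```python
from collections import deque

def bfs(adj, source):
    """Breadth-first search over an adjacency list.
    adj: dict mapping each vertex to a list of its neighbours.
    Returns the set of vertices reachable from source.
    Each vertex is enqueued at most once (O(V)) and each adjacency
    list is scanned exactly once (O(E)), giving O(V + E) total."""
    visited = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return visited

adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(bfs(adj, 1))  # {1, 2, 3, 4}
```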
Greedy Algorithms and Proof Techniques
Core Proof Methods for Greedy Algorithms
Two primary methods are used to prove the optimality of greedy algorithms:
- Greedy Stays Ahead: This method uses induction to demonstrate that the greedy algorithm maintains a superior or equal position compared to any other solution at every step of the process.
- The Exchange Argument: This involves starting with an arbitrary optimal solution and performing a series of ‘exchanges’ to transform it into the solution produced by the greedy algorithm. During this transformation, it’s shown that each exchange does not diminish the quality of the solution, thereby proving that the greedy algorithm’s solution is at least as good as any other.
Examples of Greedy Algorithms
- Dijkstra’s Algorithm: A greedy algorithm for finding the shortest paths from a fixed source vertex s to all other vertices in a graph with nonnegative edge weights.
- Prim’s Algorithm: Builds a Minimum Spanning Tree (MST) incrementally, starting from an arbitrary vertex.
- Kruskal’s Algorithm: Constructs an MST by starting with an empty edge set and iteratively adding the minimal-weight remaining edge from anywhere in the graph, provided it does not form a cycle (see the sketch below).
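A compact Python sketch of Kruskal’s algorithm (illustrative; the (weight, u, v) edge-list format and the simple union-find with path compression are my choices, not prescribed by the notes):

```python
def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v) with vertices numbered 0..n-1.
    Returns the edges of an MST (assumes the graph is connected)."""
    parent = list(range(num_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # u and v lie in different components,
            parent[ru] = rv           # so this edge cannot form a cycle
            mst.append((w, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # [(1, 0, 1), (2, 1, 2), (3, 1, 3)]
```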
Divide and Conquer: Master Theorem
Understanding the Master Theorem
The Master Theorem primarily compares f(n) with n^(log_b a) to determine which term dominates. For instance, if f(n) grows faster than n^(log_b a), the work performed at the root of the recurrence tree dominates, corresponding to Case 3. In Cases 1 and 3, however, f(n) must be polynomially smaller or larger than n^(log_b a), i.e., separated by an additional n^ε or n^(−ε) factor for some ε > 0; Case 3 additionally requires the regularity condition a·f(n/b) ≤ c·f(n) for some constant c < 1.
Master Theorem Formula
Let a ≥ 1 and b > 1 be constants, let f(n): N → R+, and let the recurrence relation be T(n) = a·T(n/b) + f(n). Then:
- Case 1: if f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
- Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
- Case 3: if f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
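For the common special case f(n) = Θ(n^k), the case analysis can be mechanized; here is a toy Python classifier (added for illustration; for polynomial f, the Case 3 regularity condition holds automatically):

```python
import math

def master_theorem(a, b, k):
    """Solve T(n) = a*T(n/b) + Theta(n^k) for a >= 1, b > 1.
    Compares k against the critical exponent log_b(a)."""
    crit = math.log(a, b)                      # log_b(a)
    if abs(k - crit) < 1e-12:
        return f"Theta(n^{crit:g} * log n)"    # Case 2: all levels tie
    if k < crit:
        return f"Theta(n^{crit:g})"            # Case 1: leaves dominate
    return f"Theta(n^{k:g})"                   # Case 3: root dominates

print(master_theorem(2, 2, 1))  # merge sort: Theta(n^1 * log n)
print(master_theorem(8, 2, 2))  # Theta(n^3) -- leaves dominate
print(master_theorem(2, 2, 2))  # Theta(n^2) -- root dominates
```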