Exploring Algorithm Design Techniques: From Knapsack to TSP
0/1 Knapsack Problem
The 0/1 knapsack problem requires deciding, for each item, whether to include it in the knapsack entirely or leave it out. For instance, if we have two items weighing 2kg and 3kg, we can’t take just 1kg from the 2kg item; we must take the entire 2kg. This problem is solved using dynamic programming.
Fractional Knapsack
Unlike the 0/1 knapsack, the fractional knapsack problem allows us to divide items. For example, if we have a 3kg item, we can take 2kg and leave 1kg. This problem is solved using the Greedy approach.
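As an illustrative sketch of the greedy approach (assuming items are given as (value, weight) pairs), we sort by value-to-weight ratio and take as much of each item as fits:

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the maximum total value."""
    # Sort by value-to-weight ratio, most valuable per kg first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total
```

Because each step takes the item with the best ratio, the local choice is provably globally optimal here, which is why greedy works for the fractional variant but not for 0/1.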
Knapsack Problem Using DP
- Create a DP table: a 2D array of size (n+1) × (W+1), where ‘n’ is the number of items and ‘W’ is the knapsack capacity, stores previously computed subproblem values.
- Set base cases: If no items are picked (i=0), or the capacity is 0 (w=0), the value is 0.
- Fill the table: For each item and weight, choose the maximum value between excluding the item or including it (if it fits).
- Find the solution: The final value, dp[n][W], represents the maximum achievable value.
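The steps above can be sketched as a small Python function (variable names are illustrative):

```python
def knapsack_01(values, weights, W):
    """Returns the maximum value achievable with capacity W."""
    n = len(values)
    # dp[i][w] = best value using the first i items with capacity w.
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                      # exclude item i
            if weights[i - 1] <= w:                       # include it if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]
```

The base cases are handled implicitly: row 0 and column 0 stay at their initial value 0, matching “no items picked” and “zero capacity”.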
Computational Complexity
Computational complexity measures the resources (time and space) an algorithm needs. As input size grows, some algorithms become much slower. Complexity classes categorize algorithms based on their resource usage patterns.
Recurrence Relation
A recurrence relation defines a function in terms of its values at smaller inputs. For example, the Fibonacci sequence’s recurrence relation is F(n) = F(n-1) + F(n-2), with base cases F(1) = 1 and F(2) = 1.
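A memoized sketch of this recurrence (caching avoids the exponential blow-up of recomputing the same subproblems):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 2:                      # base cases: F(1) = F(2) = 1
        return 1
    return fib(n - 1) + fib(n - 2)  # the recurrence itself
```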
Recurrence in Divide and Conquer
Divide-and-conquer algorithms often have recurrence relations describing their complexity:
T(n) = a * T(n/b) + f(n)
- T(n): Time/space complexity for a problem of size ‘n’.
- a: Number of subproblems created at each step.
- T(n/b): Cost of solving each subproblem of size ‘n/b’.
- f(n): Cost of dividing the problem and combining the subproblem results.
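Merge sort is a concrete instance: it creates a = 2 subproblems of size n/2 and does O(n) work splitting and merging, so T(n) = 2·T(n/2) + O(n), which solves to O(n log n). A minimal sketch:

```python
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Two subproblems of half the size: the "2 * T(n/2)" part.
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Linear-time merge: the "f(n) = O(n)" part.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```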
Minimum Spanning Tree (MST)
An MST is a subset of edges in a weighted graph that connects all vertices with the minimum total edge weight. Prim’s and Kruskal’s algorithms are used to find MSTs.
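As an illustrative sketch, Kruskal’s algorithm with a simple union-find (the edge format (weight, u, v) is assumed):

```python
def kruskal(n, edges):
    """n vertices labelled 0..n-1; edges as (weight, u, v). Returns MST weight."""
    parent = list(range(n))

    def find(x):                        # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):       # consider edges lightest first
        ru, rv = find(u), find(v)
        if ru != rv:                    # keep only edges joining two components
            parent[ru] = rv
            total += w
    return total
```

Sorting edges and greedily accepting any edge that doesn’t create a cycle is exactly the greedy exchange argument that makes Kruskal’s algorithm correct.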
Binary Search
Binary search finds a target value’s position in a sorted array by repeatedly dividing the search interval in half. It’s faster than linear search, especially for large arrays.
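A minimal sketch of the halving loop (returns the index, or -1 if the target is absent):

```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1        # target can only be in the right half
        else:
            hi = mid - 1        # target can only be in the left half
    return -1
```

Each iteration halves the interval, giving O(log n) comparisons versus O(n) for linear search.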
Single-Source Shortest Path (SSSP)
SSSP algorithms find the shortest path from a starting point to all other reachable points in a weighted graph, like finding the most efficient route on a map.
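Dijkstra’s algorithm is a standard SSSP method for graphs with non-negative weights; here is a minimal sketch using an assumed adjacency-list format `{u: [(v, weight), ...]}`:

```python
import heapq

def dijkstra(graph, source):
    """Returns a dict of shortest distances from source to each reachable node."""
    dist = {source: 0}
    pq = [(0, source)]                         # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                           # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd                   # found a shorter route to v
                heapq.heappush(pq, (nd, v))
    return dist
```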
Strassen’s Matrix Multiplication
Strassen’s Matrix Multiplication is a divide-and-conquer approach that reduces the time complexity of matrix multiplication from O(n³) to O(n^log₂7) ≈ O(n^2.81). It operates on square matrices whose dimension is a power of 2; other sizes can be handled by padding with zeros.
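A minimal sketch of Strassen’s seven-product scheme for power-of-2 matrices (the helper names are illustrative):

```python
def strassen(A, B):
    """Multiply two n x n matrices, n a power of 2, using 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M):  # split into top-left, top-right, bottom-left, bottom-right
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])

    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    a, b, c, d = quad(A)
    e, f, g, i = quad(B)
    # Seven recursive products instead of the naive eight.
    p1 = strassen(a, sub(f, i))
    p2 = strassen(add(a, b), i)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, i))
    p6 = strassen(sub(b, d), add(g, i))
    p7 = strassen(sub(a, c), add(e, f))
    top_left  = add(sub(add(p5, p4), p2), p6)
    top_right = add(p1, p2)
    bot_left  = add(p3, p4)
    bot_right = sub(sub(add(p5, p1), p3), p7)
    return ([tl + tr for tl, tr in zip(top_left, top_right)] +
            [bl + br for bl, br in zip(bot_left, bot_right)])
```

Cutting one product per level changes the recurrence from T(n) = 8T(n/2) + O(n²) to T(n) = 7T(n/2) + O(n²), which is where the O(n^log₂7) bound comes from.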
String Processing Algorithms
These algorithms manipulate and analyze strings (sequences of characters). Examples include searching, sorting, matching, parsing, and manipulating strings.
LC Searching and Bounding
Used in the Branch and Bound algorithm, LC searching (Least Cost) explores the solution space efficiently using a heuristic cost function. Bounding prunes the search space using upper and lower bounds on solution costs.
LC Branch and Bound Applications
- 0/1 Knapsack Problem: LC branch and bound can maximize value within a knapsack’s capacity.
- Traveling Salesman Problem (TSP): It can find the shortest route for a salesperson to visit all cities exactly once.
Matrix Multiplication
Matrix multiplication produces a new matrix in which each entry is the dot product of a row from the first matrix and a column from the second. The number of columns in the first matrix must equal the number of rows in the second.
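The definition translates directly into a triple loop; a minimal sketch:

```python
def mat_mul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner      # columns of A must equal rows of B
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # C[i][j] is the dot product of row i of A and column j of B.
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(inner))
    return C
```

This naive version costs O(n³) multiplications for n × n inputs, the baseline Strassen improves on.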
Traveling Salesman Problem (TSP)
The Branch and Bound method helps solve the TSP:
- Start at a city and explore paths: Consider all possible next cities.
- Estimate remaining distance: Calculate a lower bound on the total distance for remaining cities.
- Prune unpromising paths: Discard paths with lower bounds higher than the best route found.
- Repeat for each city: Continue exploring and pruning until the shortest route is found.
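The steps above can be sketched with a simple lower bound, namely the cheapest exit edge from each unvisited city (a symmetric distance matrix is assumed):

```python
import math

def tsp_branch_and_bound(dist):
    """dist: n x n distance matrix; returns the length of the shortest tour from city 0."""
    n = len(dist)
    # Cheapest edge leaving each city, used to bound the remaining cost.
    min_out = [min(dist[i][j] for j in range(n) if j != i) for i in range(n)]
    best = math.inf

    def explore(city, visited, cost):
        nonlocal best
        if len(visited) == n:                  # tour complete: return home
            best = min(best, cost + dist[city][0])
            return
        # Lower bound: cost so far + cheapest exit from every unvisited city.
        bound = cost + sum(min_out[j] for j in range(n) if j not in visited)
        if bound >= best:                      # prune unpromising branches
            return
        for j in range(n):
            if j not in visited:
                explore(j, visited | {j}, cost + dist[city][j])

    explore(0, {0}, 0)
    return best
```

Tighter bounds (e.g. reduced-cost matrices in LC branch and bound) prune more aggressively, but any valid lower bound keeps the answer exact.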
Greedy Paradigm
The greedy paradigm chooses the best option at each step without considering future consequences. It works for problems divisible into subproblems with measurable choices where local optima lead to a global optimum.
Combinatorial Algorithm
Combinatorial algorithms operate on discrete structures such as permutations, combinations, and graphs, searching, counting, or ordering their elements. Examples include sorting algorithms like quicksort and merge sort.
4 Queens
The 4-queens problem involves placing four queens on a 4×4 chessboard so that no queen can attack another. A recursive algorithm explores placements, backtracking if a conflict occurs.
8 Queens
Similar to the 4-queens problem, the 8-queens problem uses an 8×8 chessboard. The algorithm systematically tries placing queens, checking for conflicts and backtracking to find all valid solutions.
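A backtracking sketch that works for any board size n, covering both the 4-queens and 8-queens cases (each solution is recorded as a list of column positions, one per row):

```python
def solve_n_queens(n):
    """Return all placements of n non-attacking queens on an n x n board."""
    solutions = []

    def place(row, cols, diag1, diag2, board):
        if row == n:                    # every row filled: record the solution
            solutions.append(board)
            return
        for col in range(n):
            # A square is safe if its column and both diagonals are unused.
            if col not in cols and row - col not in diag1 and row + col not in diag2:
                place(row + 1, cols | {col},
                      diag1 | {row - col}, diag2 | {row + col},
                      board + [col])     # backtracking happens as the call returns

    place(0, set(), set(), set(), [])
    return solutions
```

The 4×4 board has exactly 2 solutions and the 8×8 board has 92, which makes these handy sanity checks.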
Polynomial vs. Non-Polynomial Time Complexity
- Growth Rate: Polynomial time grows relatively slowly with input size, while non-polynomial time grows much faster.
- Big O Notation: Polynomial time is O(n^k) for some constant k, while non-polynomial time is, for example, O(2^n) or O(n!).
- Efficiency: Polynomial-time algorithms are efficient; non-polynomial ones become very slow for larger inputs.
- P vs. NP Problem: P is the class of problems solvable in polynomial time; NP problems can be verified quickly, but finding solutions might be slow.
NP-Hard and NP-Complete
NP-Hard
- Don’t have to be in NP.
- No polynomial-time algorithm is known for them.
- Not necessarily decision problems.
- Not all NP-hard problems are NP-complete.
NP-Complete
- Must be both NP and NP-hard.
- Solvable in polynomial time if and only if P = NP.
- Exclusively decision problems.
- All NP-complete problems are NP-hard.
Approximation Algorithms for NP-Hard Problems
- Greedy Algorithms: Make locally optimal choices at each step for a good, but not always optimal, solution.
- Dynamic Programming: Break the problem into smaller subproblems and store their solutions to avoid redundant calculations; for some NP-hard problems (like 0/1 knapsack) this even gives exact pseudo-polynomial-time solutions.
