Data Structures and Algorithms

Time Complexity

Determining the time complexity of an algorithm typically involves counting its basic operations, most often the number of loop executions. The growth function expresses the relationship between the problem size (n) and the amount of work the algorithm performs. The order of an algorithm is its asymptotic time complexity: the dominant term of the growth function as the problem size grows large.

Common time complexities include:

  • O(1) – Constant time
  • O(log n) – Logarithmic time
  • O(n) – Linear time
  • O(n log n) – Log-linear time
  • O(n^2) – Quadratic time
  • O(n^3) – Cubic time
  • O(2^n) – Exponential time

To determine the complexity of a loop, multiply the order of the loop body by the number of times the loop executes relative to n. For example, a loop body with O(1) complexity executed n times results in O(n) complexity; nesting that loop inside another loop that also runs n times results in O(n^2).

Constant factors and lower-order terms do not affect the asymptotic complexity of an algorithm; for example, an algorithm that performs 2n + 5 operations is still O(n).
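The relationship between loop nesting and complexity can be sketched by counting operations directly (the function names below are my own, chosen for illustration):

```python
def count_linear(n):
    """O(1) body executed n times -> O(n) total work."""
    ops = 0
    for _ in range(n):
        ops += 1  # constant-time body
    return ops

def count_quadratic(n):
    """An O(n) inner loop executed n times -> O(n^2) total work."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

print(count_linear(10))     # 10 operations: grows linearly with n
print(count_quadratic(10))  # 100 operations: grows quadratically with n
```

Doubling n doubles the work of the first function but quadruples the work of the second, which is exactly the distinction the O(n) and O(n^2) labels capture.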

Searching

Searching algorithms aim to find a specific element within a data set. Common searching algorithms include:

  • Linear Search: Iterates through each element until the target is found or the end of the data set is reached. Best case: O(1), Worst case: O(n).
  • Binary Search: Works on sorted data and repeatedly divides the search interval in half. Best case: O(1), Worst case: O(log n).
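Both searches can be sketched in a few lines (a minimal illustration; the example data is my own):

```python
def linear_search(items, target):
    """Scan each element in turn. Best case O(1), worst case O(n)."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1  # target not present

def binary_search(sorted_items, target):
    """Halve the search interval each step. Requires sorted input; worst case O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # target must be in the upper half
        else:
            hi = mid - 1  # target must be in the lower half
    return -1

data = [3, 8, 15, 23, 42, 57]
print(linear_search(data, 23))  # 3
print(binary_search(data, 23))  # 3
print(binary_search(data, 9))   # -1
```

Note that binary search's advantage depends entirely on the data already being sorted; on unsorted data only linear search is correct.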

Sorting

Sorting algorithms arrange elements in a specific order (ascending or descending). Common sorting algorithms include:

  • Selection Sort: Repeatedly finds the minimum element and places it at the beginning of the unsorted portion. Best case: O(n^2), Worst case: O(n^2).
  • Bubble Sort: Repeatedly compares adjacent elements and swaps them if they are in the wrong order. Best case: O(n), Worst case: O(n^2).
  • Insertion Sort: Builds the final sorted array one item at a time. Best case: O(n), Worst case: O(n^2).
  • Merge Sort: Divides the array into halves, sorts them recursively, and then merges the sorted halves. Best case: O(n log n), Worst case: O(n log n).
  • Quick Sort: Selects a pivot element and partitions the array around the pivot, such that elements smaller than the pivot are placed before it and elements larger than the pivot are placed after it. Best case: O(n log n), Worst case: O(n^2).
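Two of these algorithms, one quadratic and one log-linear, can be sketched as follows (a minimal illustration, not a production implementation):

```python
def insertion_sort(items):
    """Build the sorted array one item at a time. Best case O(n), worst case O(n^2)."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right to make room for key.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def merge_sort(items):
    """Divide, sort halves recursively, merge. O(n log n) in all cases."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(insertion_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
print(merge_sort([5, 2, 9, 1]))      # [1, 2, 5, 9]
```

Insertion sort's best case is O(n) because on already-sorted input the inner while loop never runs; merge sort pays O(n log n) regardless of input order.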

Stacks

A stack is a LIFO (Last-In-First-Out) data structure. Common stack operations include:

  • Push: Adds an element to the top of the stack.
  • Pop: Removes and returns the element at the top of the stack.
  • Peek: Returns the element at the top of the stack without removing it.
  • isEmpty: Checks if the stack is empty.

Stacks can be implemented using arrays or linked lists.
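An array-backed (Python list) version of these operations might look like the following sketch:

```python
class Stack:
    """LIFO stack backed by a list; the top of the stack is the end of the list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)   # O(1) amortized

    def pop(self):
        return self._items.pop()   # O(1); raises IndexError if empty

    def peek(self):
        return self._items[-1]     # top element without removing it

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop())       # 3  (last in, first out)
print(s.peek())      # 2
print(s.is_empty())  # False
```

Keeping the top at the end of the list is what makes push and pop constant time; using the front of the list instead would force an O(n) shift on every operation.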

Queues

A queue is a FIFO (First-In-First-Out) data structure. Common queue operations include:

  • Enqueue: Adds an element to the rear of the queue.
  • Dequeue: Removes and returns the element at the front of the queue.
  • Peek: Returns the element at the front of the queue without removing it.
  • isEmpty: Checks if the queue is empty.

Queues can be implemented using arrays or linked lists.
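A linked-structure version of these operations can be sketched with `collections.deque`, which gives O(1) insertion and removal at both ends:

```python
from collections import deque

class Queue:
    """FIFO queue backed by a deque."""

    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        self._items.append(item)      # add to the rear

    def dequeue(self):
        return self._items.popleft()  # remove from the front

    def peek(self):
        return self._items[0]         # front element without removing it

    def is_empty(self):
        return not self._items

q = Queue()
q.enqueue("a"); q.enqueue("b"); q.enqueue("c")
print(q.dequeue())  # a  (first in, first out)
print(q.peek())     # b
```

A plain Python list would also work but makes dequeue O(n), since removing the first element shifts every remaining element left.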

Graphs

A graph is a data structure consisting of vertices (nodes) and edges that connect them. Graphs can be directed or undirected. Common graph algorithms include:

  • Breadth-First Search (BFS): Explores the graph level by level, visiting all neighbors of a vertex before moving outward. Typically implemented with a queue.
  • Depth-First Search (DFS): Explores as far as possible along each branch before backtracking. Typically implemented with recursion or a stack.
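Both traversals can be sketched on an adjacency-list representation (the graph below is my own example data):

```python
from collections import deque

# Undirected graph as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def bfs(start):
    """Visit vertices level by level using a queue."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(start, visited=None):
    """Visit vertices branch by branch using recursion."""
    if visited is None:
        visited = []
    visited.append(start)
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(neighbor, visited)
    return visited

print(bfs("A"))  # ['A', 'B', 'C', 'D']
print(dfs("A"))  # ['A', 'B', 'D', 'C']
```

The only structural difference between the two is the container driving the traversal: a queue yields BFS, while recursion (an implicit stack) yields DFS.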

Trees

A tree is a hierarchical data structure consisting of nodes connected by edges. The topmost node is called the root. Common tree types include binary trees, n-ary trees, and balanced trees. Trees can be implemented using linked structures or arrays.
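A linked-structure binary tree can be sketched as follows, with an in-order traversal to show how the hierarchy is walked (the example tree is my own):

```python
class TreeNode:
    """A node in a binary tree: a value plus left and right child links."""

    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def inorder(node):
    """In-order traversal: left subtree, then node, then right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

# Build the tree:    2
#                   / \
#                  1   3
root = TreeNode(2)
root.left = TreeNode(1)
root.right = TreeNode(3)
print(inorder(root))  # [1, 2, 3]
```

For a binary search tree, where every left descendant is smaller and every right descendant is larger, in-order traversal yields the values in sorted order, as it does here.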

Hashing

Hashing is a technique for mapping keys to indices in a table. Hash functions are used to compute the index for a given key. Collision resolution techniques are used to handle situations where multiple keys map to the same index. Common hashing methods include chaining, open addressing, and perfect hashing.
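Chaining, the first of these collision-resolution methods, can be sketched as follows (a minimal illustration; the class name and fixed table size are my own choices):

```python
class ChainedHashTable:
    """Hash table using separate chaining: each bucket holds a list of (key, value) pairs."""

    def __init__(self, size=8):
        self._buckets = [[] for _ in range(size)]

    def _index(self, key):
        # The hash function maps a key to a bucket index.
        return hash(key) % len(self._buckets)

    def put(self, key, value):
        bucket = self._buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # update an existing key
                return
        bucket.append((key, value))       # collisions simply extend the chain

    def get(self, key):
        bucket = self._buckets[self._index(key)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable()
table.put("apple", 3)
table.put("pear", 5)
print(table.get("apple"))  # 3
print(table.get("pear"))   # 5
```

With a good hash function and a reasonable load factor, each chain stays short and lookups average O(1); in the worst case, when all keys collide into one bucket, lookup degrades to O(n).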