Understanding Quick Sort, Search Algorithms, and Sorting Techniques

Q) How does the choice of pivot element affect the running time of the Quick Sort algorithm?

The choice of the pivot in Quick Sort directly affects the balance of the partition and therefore its running time:

  • Good pivot (a value close to the median): Produces nearly equal partitions, leading to O(n log n) time.

  • Bad pivot (smallest or largest element): Produces highly unbalanced partitions, leading to O(n²) time.

Thus, choosing an appropriate pivot (like the median or using randomization) improves average performance.
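As a minimal sketch of the randomization idea, the partition step below picks a random pivot before doing a standard Lomuto partition. The function names (`partition`, `quickSort`) are illustrative, not from the original text:

```c
#include <stdio.h>
#include <stdlib.h>

/* Partition around a randomly chosen pivot. Randomization avoids the
   O(n^2) worst case on already-sorted or reverse-sorted input. */
int partition(int arr[], int low, int high) {
    int r = low + rand() % (high - low + 1);        /* random pivot index */
    int t = arr[r]; arr[r] = arr[high]; arr[high] = t; /* move pivot to end */

    int pivot = arr[high], i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {                       /* arr[j] belongs left of pivot */
            i++;
            t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;                                   /* final pivot position */
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);
        quickSort(arr, p + 1, high);
    }
}
```

With a random pivot, no fixed input can force the unbalanced O(n²) behavior every time; the expected running time is O(n log n).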

Q) Differentiate between sequential search and binary search.

| Sequential Search | Binary Search |
| --- | --- |
| Searches each element one by one from start to end. | Divides the list into halves and searches in the appropriate half. |
| Works on unsorted data. | Works only on sorted data. |
| Time complexity: O(n). | Time complexity: O(log n). |
| Simple but slower. | Faster and more efficient. |
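The first column of the comparison can be sketched in a few lines of C (the name `sequentialSearch` is illustrative):

```c
#include <stdio.h>

/* Sequential search: check every element until the key is found.
   Works on unsorted data; O(n) comparisons in the worst case. */
int sequentialSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;   /* index of the first match */
    return -1;          /* key not present */
}
```

Binary search, by contrast, needs the data sorted first; a full implementation appears in a later answer.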

Q) Differentiate between internal sorting and external sorting.

| Internal Sorting | External Sorting |
| --- | --- |
| Sorting is done entirely in main memory (RAM). | Sorting uses both main memory and external storage such as a hard disk. |
| Used when the data is small enough to fit in memory. | Used when the data is too large to fit in memory. |
| Faster because memory access time is low. | Slower due to frequent disk I/O operations. |

Q) What are the two different forms of hashing?

The two different forms of hashing are:

  1. Static Hashing – The size of the hash table is fixed; the structure does not change with the number of records.

  2. Dynamic Hashing – The hash table grows or shrinks dynamically as the number of records changes (e.g., extendible hashing, linear hashing).

Q) Explain tail recursion.

A function is said to be tail recursive when the recursive call is the last statement executed in the function, meaning no computation is left after the recursive call returns.

Because of this, the compiler can optimize it by reusing the same stack frame, making tail recursion more memory-efficient than normal recursion.

Example:
int fact(int n, int result) {
    if (n == 0) return result;
    return fact(n - 1, n * result); // Tail recursive call: nothing executes after it
}

// Call as fact(5, 1) to compute 5! = 120

Q) How does Bubble Sort work? Explain.

Bubble Sort is a simple sorting technique in which adjacent elements are repeatedly compared and swapped if they are in the wrong order. Each pass starts at the beginning of the list and continues to the end, causing the largest remaining element to "bubble up" to its correct position; this is why it is called "Bubble Sort." The passes are repeated until the entire list is sorted.

For example, given the array [5, 3, 2, 4], Bubble Sort first compares 5 and 3 and swaps them to get [3, 5, 2, 4]. It then compares 5 and 2 and swaps again, giving [3, 2, 5, 4]. Continuing this way, after a few passes the final sorted array is [2, 3, 4, 5]. Although easy to implement, Bubble Sort is slow for large inputs because of its O(n²) time complexity.
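The procedure described above can be sketched in C as follows. The early-exit flag (`swapped`) is a common refinement, not part of the description above:

```c
#include <stdio.h>

/* Bubble sort: repeatedly compare and swap adjacent elements.
   Each pass pushes the largest remaining element to the end.
   The swapped flag ends the loop early once a pass makes no swaps. */
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                int t = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = t;
                swapped = 1;
            }
        }
        if (!swapped) break; /* list already sorted */
    }
}
```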

Q) What are the advantages of linear probing in hashing? Discuss how quadratic probing can be used to solve some of these problems.

Linear probing is a simple and efficient collision resolution technique in hashing. Its main advantages are ease of implementation and good cache performance, since memory accesses happen in a sequential pattern.

Advantages of linear probing:

  • Simple logic and implementation, which makes it popular in practice.
  • Cache-friendly due to sequential slot checking, leading to faster searches and insertions in many cases.
  • Uses minimal extra memory compared to other collision resolution strategies.

However, linear probing suffers from primary clustering: when several keys hash to nearby table positions, long runs of occupied cells form, slowing down both searches and further insertions.

Quadratic probing reduces this problem by using a quadratic function to determine the next slot to probe (the probe distance grows as i²). Because successive probes jump farther apart, keys that hash to the same initial slot are scattered more widely across the table, which breaks up primary clusters and improves search and insertion performance compared to linear probing.

In quadratic probing, the slot examined on the i-th probe after a collision is calculated by:

index = (hash(key) + i²) % table_size

where i represents the probe attempt number (i = 1, 2, 3, …).

In practice, this technique keeps access times in hash tables efficient and addresses the major drawback of linear probing.
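A minimal insertion sketch using the probe formula above is shown below. `TABLE_SIZE`, `EMPTY`, and `insertQuadratic` are illustrative names, and the hash function is simply `key % TABLE_SIZE`:

```c
#include <stdio.h>

#define TABLE_SIZE 11   /* illustrative size; a prime helps probing */
#define EMPTY -1        /* sentinel marking an unused slot */

/* Insert a key using quadratic probing: on collision, try
   (hash + i*i) % TABLE_SIZE for i = 1, 2, 3, ... */
int insertQuadratic(int table[], int key) {
    int hash = key % TABLE_SIZE;
    for (int i = 0; i < TABLE_SIZE; i++) {
        int index = (hash + i * i) % TABLE_SIZE;
        if (table[index] == EMPTY) {
            table[index] = key;
            return index;   /* slot where the key was placed */
        }
    }
    return -1;              /* no free slot found on this probe sequence */
}
```

Note that quadratic probing does not necessarily visit every slot, which is why the sketch can fail even with free slots remaining; keeping the table size prime and the load factor below one half is a common way to guarantee that an empty slot is found.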

Q) Write a C program to sort 100 integer numbers using the selection sort procedure. Discuss the worst-case time complexity of the algorithm.


#include <stdio.h>

void selectionSort(int arr[], int n) {
    int i, j, minIndex, temp;

    for (i = 0; i < n - 1; i++) {
        minIndex = i;

        // Find the index of the minimum element in the unsorted part
        for (j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;
            }
        }

        // Swap the found minimum with the first unsorted element
        temp = arr[minIndex];
        arr[minIndex] = arr[i];
        arr[i] = temp;
    }
}

int main() {
    int arr[100];
    int i;

    printf("Enter 100 integers:\n");
    for (i = 0; i < 100; i++) {
        scanf("%d", &arr[i]);
    }

    selectionSort(arr, 100);

    printf("\nSorted numbers:\n");
    for (i = 0; i < 100; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");

    return 0;
}

Worst-Case Time Complexity of Selection Sort

Selection Sort works by repeatedly selecting the smallest element from the unsorted portion and placing it at the beginning.
Even in the worst case, the algorithm always performs:

  • (n − 1) comparisons in the first pass

  • (n − 2) comparisons in the second pass

  • …

  • 1 comparison in the last pass

Thus, total comparisons:

(n − 1) + (n − 2) + ⋯ + 1 = n(n − 1)/2

For big-O notation:

Time Complexity = O(n²)

Worst Case:

The worst case is usually illustrated with an array in reverse order, but selection sort performs the same number of comparisons regardless of input order, so:

  • Worst-case time complexity: O(n²)

  • Worst-case space complexity: O(1) (in-place sorting)


Q) Write a program in C to implement the binary search algorithm. Also discuss the average behavior of the algorithm.

#include <stdio.h>

int binarySearch(int arr[], int n, int key) {
    int low = 0, high = n - 1, mid;

    while (low <= high) {
        mid = low + (high - low) / 2; // avoids overflow of (low + high)

        if (arr[mid] == key)
            return mid;        // Key found
        else if (arr[mid] < key)
            low = mid + 1;     // Search in right half
        else
            high = mid - 1;    // Search in left half
    }

    return -1; // Key not found
}

int main() {
    int arr[100], n, key, i, result;

    printf("Enter number of elements: ");
    scanf("%d", &n);

    printf("Enter %d sorted integers:\n", n);
    for (i = 0; i < n; i++)
        scanf("%d", &arr[i]);

    printf("Enter element to search: ");
    scanf("%d", &key);

    result = binarySearch(arr, n, key);

    if (result == -1)
        printf("Element not found.\n");
    else
        printf("Element found at index %d\n", result);

    return 0;
}

Average-Case Behavior of Binary Search

Binary search repeatedly divides the sorted array into two halves and searches only in the half where the element may exist.

Average Case:

  • In the average case, the element is found after repeatedly halving the search interval.

  • After each step, the remaining elements reduce to half:
    n, n/2, n/4, n/8, … , 1

Number of steps = number of times n can be divided by 2:

log₂n

Therefore:

  • Average-case time complexity = O(log n)

  • Reason: On average, the key will be found after a logarithmic number of comparisons because the search interval shrinks by half each time.