Comprehensive Guide to Sorting Algorithms and Data Structures

1. Lower Bound on Sorting Algorithm Complexity

Any correct sorting algorithm, regardless of its type (comparison-based or otherwise), has a lower bound of Ω(n) on its worst-case complexity, since it must examine each of the n elements at least once.

2. In-Place Sorting Algorithms and Auxiliary Storage

Statement: Sorting algorithms that sort objects “in place” require O(1) auxiliary storage (on top of the space needed to store the values to be sorted).

Assessment: Valid

3. Sorting Algorithms and In-Place Sorting

Question: All of these sorting algorithms sort objects “in place” except:

Answer: COUNTING-SORT

4. Linear Time Sorting Algorithms

Question: All of these sorting algorithms sort objects in linear time (worst-case or expected) except:

Answer: None of them; COUNTING-SORT, RADIX-SORT, and BUCKET-SORT all sort in linear time (worst-case for the first two under their usual assumptions, expected for BUCKET-SORT).

5. Location of Smallest Element in a Max-Heap

In a max-heap, the smallest element is stored at a leaf.

6. Location of Largest Element in a Binary Search Tree

In a binary search tree, the largest element is stored at a node without a right child.

7. Location of Smallest Element in a Min-Heap

In a min-heap, the smallest element is stored at the root.

8. Location of Smallest Element in a Binary Search Tree

In a binary search tree, the smallest element is stored at a node without a left child.
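
The four placement facts in items 5 through 8 can be checked with a short Python sketch. The array layout of a binary heap is standard (children of index i live at 2i+1 and 2i+2, 0-based), but the Node class and the sample values below are illustrative assumptions, not part of the original notes.

    # In the array layout of a binary heap with n elements, the leaves
    # occupy indices n//2 .. n-1, so the smallest element of a max-heap
    # must sit somewhere in that suffix (item 5).
    max_heap = [9, 7, 8, 3, 5, 6, 2]          # a valid max-heap
    n = len(max_heap)
    leaves = max_heap[n // 2:]                # [3, 5, 6, 2]
    assert min(max_heap) in leaves

    # In a BST, follow left children to reach the minimum (item 8) and
    # right children to reach the maximum (item 6).
    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def bst_min(t):
        while t.left is not None:
            t = t.left
        return t.key                          # a node with no left child

    def bst_max(t):
        while t.right is not None:
            t = t.right
        return t.key                          # a node with no right child

    root = Node(4, Node(2, Node(1)), Node(6, None, Node(7)))
    assert bst_min(root) == 1 and bst_max(root) == 7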

9. Worst-Case Time Complexity of Building a Binary Search Tree

Building a binary search tree from an input stream of sortable values takes at worst O(n²) time, e.g., when the input arrives already sorted and the tree degenerates into a chain.

10. Best-Case Time Complexity of Building a Binary Search Tree

Building a binary search tree from an input stream of sortable values takes at best Ω(n log n) time; anything faster would beat the comparison-sort lower bound, since an in-order walk of the finished tree yields the values in sorted order in Θ(n) time.
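
A hedged Python sketch of both bounds (the Node class and bst_insert helper are illustrative assumptions): each insertion walks one root-to-leaf path, so building the tree costs O(n · height). Already-sorted input degenerates the tree into a chain of height n-1, giving the O(n²) worst case of item 9, while a balanced tree of height Θ(log n) gives the Ω(n log n) best case of item 10.

    class Node:
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def bst_insert(root, key):
        # Walk down one path and attach the new key at a leaf position.
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = bst_insert(root.left, key)
        else:
            root.right = bst_insert(root.right, key)
        return root

    def build_bst(stream):
        root = None
        for key in stream:        # n inserts, each O(height)
            root = bst_insert(root, key)
        return root

    build_bst([1, 2, 3, 4, 5])    # sorted input: a right-leaning chain
    build_bst([3, 1, 4, 2, 5])    # shuffled input: a shallow tree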

11. Optimal Sorting Algorithm Using a Binary Tree

HEAP-SORT is an asymptotically optimal sorting algorithm built on the heap, a special binary tree that can also be used to implement priority queues efficiently.

12. Skipping BUILD-MAX-HEAP in HEAP-SORT

Statement: In HEAP-SORT, if the incoming array A is in reverse (descending) order, one can skip the preliminary BUILD-MAX-HEAP step because A is already a max-heap.

Assessment: Valid
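
A hedged Python sketch of HEAP-SORT (0-based indices here, where CLRS uses 1-based; the code is an illustrative reconstruction, not the textbook pseudocode verbatim), with a check of item 12's claim that a reverse-sorted array is already a max-heap.

    def max_heapify(a, i, heap_size):
        # Sift a[i] down until the subtree rooted at i is a max-heap.
        largest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < heap_size and a[l] > a[largest]:
            largest = l
        if r < heap_size and a[r] > a[largest]:
            largest = r
        if largest != i:
            a[i], a[largest] = a[largest], a[i]
            max_heapify(a, largest, heap_size)

    def build_max_heap(a):
        for i in range(len(a) // 2 - 1, -1, -1):
            max_heapify(a, i, len(a))

    def heap_sort(a):
        build_max_heap(a)
        for end in range(len(a) - 1, 0, -1):
            a[0], a[end] = a[end], a[0]   # move current max into place
            max_heapify(a, 0, end)

    def is_max_heap(a):
        return all(a[(i - 1) // 2] >= a[i] for i in range(1, len(a)))

    assert is_max_heap([5, 4, 3, 2, 1])   # reverse order: already a max-heap
    data = [5, 4, 3, 2, 1]
    heap_sort(data)
    assert data == [1, 2, 3, 4, 5]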

13. Data Structure Implemented with Binary Search Trees

Binary search trees can be used to implement this data structure efficiently: Dictionary

14. Lower Bound on HEAP-SORT’s Best-Case Running Time

HEAP-SORT admits Ω(n log n) as a lower bound on its best-case running time.

15. Upper Bound on HEAP-SORT’s Worst-Case Running Time

HEAP-SORT admits O(n log n) as an upper bound on its worst-case running time.

16. Impossibility of Linear Time Sorting

Statement: Linear time sorting is impossible because ______.

Answer: None of these is a valid answer; linear time sorting is, in fact, possible (e.g., COUNTING-SORT under its assumptions).

17. Decision Tree in Sorting Algorithms

A decision tree is a full binary tree that represents the comparisons between elements performed by a comparison-based sorting algorithm on inputs of a given size.
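
This is where the Ω(n log n) comparison-sort bound comes from: a decision tree for n elements must have at least n! reachable leaves (one per output permutation), while a binary tree of height h has at most 2^h leaves. Hence 2^h ≥ n!, so h ≥ log₂(n!) = Ω(n log n) by Stirling’s approximation, and the tree height is exactly the worst-case number of comparisons.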

18. COUNTING-SORT and Cumulative Frequency Distribution

Statement: COUNTING-SORT determines the cumulative frequency distribution of the elements to be sorted. This information is then used to place those elements directly into their position in the sorted output.

Assessment: Valid

19. Key Assumption of COUNTING-SORT

COUNTING-SORT depends on this key assumption about the elements to be sorted: They are integers in the interval [0, k].

20. COUNTING-SORT and In-Place Sorting

Statement: COUNTING-SORT makes use of auxiliary storage that is more than O(1), thus making it an in-place sorting algorithm.

Assessment: Not valid (the more-than-O(1) auxiliary storage is precisely what makes COUNTING-SORT not an in-place algorithm).
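
A hedged Python sketch of COUNTING-SORT under item 19's assumption of integer keys in [0, k] (variable names echo CLRS's A, B, and C, but the code is an illustrative reconstruction). The cumulative counts in c place each element directly into the output, and the arrays b and c are the more-than-O(1) auxiliary storage that keeps the algorithm from being in place.

    def counting_sort(a, k):
        c = [0] * (k + 1)
        for x in a:                  # frequency of each key
            c[x] += 1
        for i in range(1, k + 1):    # cumulative frequency distribution
            c[i] += c[i - 1]
        b = [0] * len(a)             # auxiliary output array
        for x in reversed(a):        # backward pass keeps the sort stable
            c[x] -= 1
            b[c[x]] = x
        return b

    assert counting_sort([2, 5, 3, 0, 2, 3, 0, 3], k=5) == [0, 0, 2, 2, 3, 3, 3, 5]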

21. Inventor of Sorting Machine Using RADIX-SORT

RADIX-SORT is the algorithm that was once used by the card-sorting machines invented by Herman Hollerith for the census of 1890; Hollerith’s company later became IBM.

22. Stability Requirement of Intermediate Sorting Algorithm in RADIX-SORT

RADIX-SORT requires that the intermediate sorting algorithm (the one that sorts on the basis of the digits) be stable, which means that “numbers with the same value appear in the output array in the same order as they do in the input array.”

23. In-Place Nature of RADIX-SORT Using COUNTING-SORT

Statement: If RADIX-SORT makes use of COUNTING-SORT as its intermediate sorting algorithm (the one that sorts on the basis of the digits), then it becomes an “in-place” sorting algorithm.

Assessment: Not valid (COUNTING-SORT’s auxiliary arrays are needed on every digit pass, so the combination still is not in place; see the sketch below).
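
A hedged Python sketch of RADIX-SORT on non-negative integers, running a stable COUNTING-SORT pass per decimal digit (the base-10 choice and the helper names are illustrative assumptions). The backward pass inside each digit sort supplies the stability item 22 requires, and the fresh b and c arrays allocated on every pass are why item 23's claim fails.

    def counting_sort_by_digit(a, exp):
        # Stable counting sort keyed on the digit (x // exp) % 10.
        c = [0] * 10
        for x in a:
            c[(x // exp) % 10] += 1
        for i in range(1, 10):
            c[i] += c[i - 1]
        b = [0] * len(a)
        for x in reversed(a):        # backward pass preserves stability
            d = (x // exp) % 10
            c[d] -= 1
            b[c[d]] = x
        return b

    def radix_sort(a):
        exp = 1
        while any(x // exp for x in a):     # while some digit remains
            a = counting_sort_by_digit(a, exp)
            exp *= 10
        return a

    assert radix_sort([329, 457, 657, 839, 436, 720, 355]) == \
           [329, 355, 436, 457, 657, 720, 839]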

24. Input Distribution Assumption in BUCKET-SORT

BUCKET-SORT assumes that the input is generated by a random process that distributes elements over [0, 1) in this manner: uniformly.

25. In-Place Nature of BUCKET-SORT

Statement: The pseudo-code for BUCKET-SORT assumes that the input is an n-element array A and makes use of an auxiliary array B[0..n-1]. This makes it an “in-place” sorting algorithm.

Assessment: Not valid

26. Time Complexity of BUCKET-SORT

BUCKET-SORT has an expected time complexity of Θ(n). In theory, it can have a worst-case time complexity of O(n²), which occurs when all elements land in a single bucket; under the uniformity assumption this is highly improbable.
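
A hedged Python sketch of BUCKET-SORT under the uniform-[0, 1) assumption of item 24; Python's built-in sorted stands in for the insertion sort CLRS runs on each bucket, and the buckets list plays the role of the auxiliary array B[0..n-1] from item 25.

    def bucket_sort(a):
        n = len(a)
        buckets = [[] for _ in range(n)]    # auxiliary array B[0..n-1]
        for x in a:
            buckets[int(n * x)].append(x)   # x in [0, 1) selects a bucket
        out = []
        for b in buckets:
            out.extend(sorted(b))           # expected O(1) elements per bucket
        return out

    data = [0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]
    assert bucket_sort(data) == sorted(data)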

27. Hash Table for Dictionary Implementation

Statement: A hash table is effective for implementing a dictionary, a data structure that supports these functionalities: INSERT, SEARCH, and DELETE.

Assessment: Valid

28. Oracle Usage in Hash Tables

Statement: In an ordinary array, we store the element whose key is k in position k of the array. By contrast, in a hash table, we use an oracle to generate from the key k a value O(k) to index into the hash table.

Assessment: Not valid (the value is computed by a hash function h, giving an index h(k); “oracle” is not the right term).

29. Search Time in Hash Tables

The expected time to search for an element in a hash table is O(1) under reasonable assumptions. However, the worst-case search time is Θ(n).
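
A minimal chained hash table in Python covering the dictionary operations of item 27 and the search behavior of item 29 (the table size m = 13 and the division-method hash are illustrative choices). SEARCH walks a single chain, which is where both the O(1) expected time and the Θ(n) worst case, with all n keys colliding into one chain, come from.

    class ChainedHashTable:
        def __init__(self, m=13):
            self.m = m
            self.slots = [[] for _ in range(m)]   # one chain per slot

        def _hash(self, key):
            return key % self.m                   # division method

        def insert(self, key, value):
            self.slots[self._hash(key)].append((key, value))

        def search(self, key):
            # Expected O(1 + alpha); Theta(n) if every key shares a chain.
            for k, v in self.slots[self._hash(key)]:
                if k == key:
                    return v
            return None

        def delete(self, key):
            chain = self.slots[self._hash(key)]
            self.slots[self._hash(key)] = [(k, v) for k, v in chain if k != key]

    t = ChainedHashTable()
    t.insert(42, "answer")
    assert t.search(42) == "answer"
    t.delete(42)
    assert t.search(42) is None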

30. One-to-One Nature of Hash Functions

Statement: Hash functions are one-to-one functions since they take whole numbers (i.e., keys) as arguments and produce yet another whole number (an array index) as a functional value.

Assessment: Not valid (hash functions are generally many-to-one: more than one key can hash to the same slot, which is exactly why collisions must be handled).

31. Load Factor in Hash Table Analysis

The analysis of hash table operations using chaining to resolve collisions is performed in terms of the load factor α = n/m, the ratio of n, the number of elements in the table, to m, the number of slots in the table. This ratio can also be understood as the average number of items in each of the hash table’s linked lists.
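
As a concrete illustration (the numbers are arbitrary): a table with m = 250 slots holding n = 1,000 elements has load factor α = 1000/250 = 4, so each chain holds 4 items on average.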

32. Key Assumption for Hash Functions

Statement: Hash functions assume that the keys are natural numbers. This assumption is not too severe because all keys are stored in binary anyway and can readily be interpreted as unsigned binary numbers.

Assessment: Valid

33. Division Method in Hash Functions

Statement: Hash functions that use the division method, e.g., h(k) = k mod m, are fast but have the disadvantage of having to avoid certain values of m (e.g., powers of 2).

Assessment: Valid

34. Multiplication Method in Hash Functions

Hash functions that use the multiplication method, e.g., h(k) = ⌊m (kA mod 1)⌋ for a constant 0 < A < 1, have the advantage of not having to avoid certain values of m, but may be slower.
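
A hedged Python sketch of both methods side by side; the constants (m = 701, m = 2^14, and Knuth's suggested A = (√5 - 1)/2) are illustrative choices, not requirements.

    import math

    def h_division(k, m):
        # Fast (one modulo), but m should avoid powers of 2, which would
        # make h(k) depend only on the low-order bits of k; a prime not
        # too close to a power of 2 is a good choice.
        return k % m

    def h_multiplication(k, m, A=(math.sqrt(5) - 1) / 2):
        # h(k) = floor(m * (k*A mod 1)); the value of m is not critical,
        # so m can even be a power of 2, at the cost of extra arithmetic.
        return math.floor(m * ((k * A) % 1))

    print(h_division(123456, 701))            # m = 701, a prime
    print(h_multiplication(123456, 2 ** 14))  # m = 16384 is fine here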

35. Probing in Open Addressing

In open addressing, the action of examining a slot in the hash table is known as a probe.

36. Primary Clustering in Linear Probing

The technique known as linear probing suffers from primary clustering: long runs of occupied (or once-occupied) hash table slots build up, and the longer a run grows, the more likely it is to grow further, driving up the average search time.
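
A minimal open-addressing sketch in Python with linear probing, h(k, i) = (h'(k) + i) mod m (the table size and the hash h'(k) = k mod m are illustrative assumptions; DELETE, which requires tombstone markers under open addressing, is omitted). Inserting three keys that hash to the same slot shows a primary cluster forming.

    class LinearProbingTable:
        def __init__(self, m=8):
            self.m = m
            self.keys = [None] * m

        def insert(self, key):
            for i in range(self.m):            # probe sequence i = 0, 1, ...
                j = (key + i) % self.m         # each iteration is one probe
                if self.keys[j] is None:
                    self.keys[j] = key
                    return j
            raise RuntimeError("hash table overflow")

        def search(self, key):
            for i in range(self.m):
                j = (key + i) % self.m
                if self.keys[j] is None:       # a hole ends the probe sequence
                    return None
                if self.keys[j] == key:
                    return j
            return None

    t = LinearProbingTable()
    for k in [3, 11, 19]:                      # all hash to slot 3
        t.insert(k)
    assert t.search(19) == 5                   # found on the third probe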

37. Expected Complexity of Searches in Chained Hashing

The analysis of chained hashing shows the expected complexity of searches to be Θ(1 + α), where α is the load factor (the 1 accounts for computing the hash value).

38. Avoiding Stringy Binary Search Trees

Statement: Stringy (chain-like) binary search trees can be avoided by randomizing the order of the input values.

Assessment: Valid (a randomly built binary search tree has expected height O(log n))

39. Number of Maximally Stringy Binary Search Trees

There are 2^(n-1) maximally stringy binary search trees (trees of height n-1 on n nodes): the chain makes one left-or-right choice at each of its n-1 child links.

40. Number of Shapes for Binary Search Trees

The number of distinct “shapes” a binary search tree with n nodes can have is the nth Catalan number Cn = (1/(n+1)) · C(2n, n).
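
A brute-force cross-check of items 39 and 40 in Python: the recurrence counts shapes by the size of the left subtree (summing C(i) · C(n-1-i) over all splits), and a maximally stringy tree makes one left-or-right choice at each of its n - 1 child links.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def shapes(n):
        # Number of shapes of an n-node BST: the nth Catalan number.
        if n <= 1:
            return 1
        return sum(shapes(i) * shapes(n - 1 - i) for i in range(n))

    def stringy(n):
        # Chains of n nodes: each of the n-1 links goes left or right.
        return 1 if n <= 1 else 2 * stringy(n - 1)

    assert [shapes(n) for n in range(1, 6)] == [1, 2, 5, 14, 42]
    assert all(stringy(n) == 2 ** (n - 1) for n in range(1, 8))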