Understanding Algorithm Analysis and Design

What does it mean when one says that an algorithm is O(n²)?

By definition, the O() notation allows us to describe an upper bound on the growth of whatever it is that we wish to estimate. So, O(n²) would mean that whatever we are estimating grows no faster than quadratically as the independent variable n grows. Since algorithm analysis has come to mean estimating the resource requirements of an algorithm, and time complexity is of primary importance, to say that an algorithm is O(n²) usually means that its time complexity is no worse than quadratic in growth. Some examples of algorithms we have seen that have O(n²) time complexity are INSERTION-SORT, BUBBLESORT, and MAXSORT, where n is taken to be the number of items to be sorted. Some examples of algorithms that have O(n²) space complexity are linear programming algorithms that employ n × n matrices to arrive at the solution, where n may be the number of unknown variables in the problem instance, or graph algorithms where a graph with n vertices is represented using an adjacency matrix that is n × n in size.
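As an illustration of an O(n²) algorithm mentioned above, here is a minimal sketch of INSERTION-SORT in Python (the function name and in-place style are illustrative choices, not prescribed by the text):

```python
def insertion_sort(a):
    """Sort list a in place. In the worst case (reverse-sorted input) the
    inner loop runs ~n(n-1)/2 times in total, which is O(n^2)."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements larger than key one slot right to make room for it.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

For example, `insertion_sort([5, 2, 4, 6, 1, 3])` yields `[1, 2, 3, 4, 5, 6]`.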

Suppose that computers were infinitely fast and computer memory was free. Would the study of algorithms still be relevant?

The study of algorithms would still be relevant even if we had infinitely fast computers and all the memory we could get our hands on (since it is free). Correctness would still be a desirable and critical characteristic of the algorithms designed. Who wants to implement an algorithm that does not work, or that solves the wrong problem correctly? Thus, techniques for establishing program correctness would still be a legitimate object of study. Also, algorithms usually do not “write themselves”, and so the different algorithmic schemes and approaches would still be useful “stuff” to learn, so that when a new problem presents itself, one can look at how old problems were tackled and solved to gain insight into how the new problem might be solved. Solution schemes like “divide-and-conquer”, “recursion”, “incremental approach”, “dynamic programming”, “randomization”, “simulated annealing”, “genetic algorithms”, “heuristics”, etc., would still be interesting and fruitful objects of study in algorithm analysis and design.

1. A(n) _________ can be defined as a well-defined computational procedure that takes input and produces output.

algorithm

2. Sorting is a fundamental operation in computer science that takes a collection of values as input and rearranges them into some specified ____ .

order

3. An algorithm is said to be correct if, for ______ input instance(s), it halts with the correct output.

every

4. A data structure is a way to store and organize data in order to facilitate _____ and ___.

access and modification

5. The usual measure of efficiency is speed, which is how long an algorithm takes to produce ____.

its results

6. _____ combines the advantages of storing in place and optimality by making use of a tree structure that can be utilized to implement priority queues.

HEAP-SORT
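A minimal in-place HEAP-SORT sketch in Python, showing the max-heap structure the answer refers to (function names are illustrative):

```python
def heap_sort(a):
    """Sort a in place: build a max-heap in O(n), then repeatedly swap the
    max to the end and restore the heap, for O(n lg n) total worst case."""
    n = len(a)

    def sift_down(i, end):
        # Restore the max-heap property for the subtree rooted at index i,
        # considering only positions < end.
        while True:
            left, right = 2 * i + 1, 2 * i + 2
            largest = i
            if left < end and a[left] > a[largest]:
                largest = left
            if right < end and a[right] > a[largest]:
                largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    # Build the max-heap bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Extract the maximum n - 1 times.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```

The same sift-down routine is what makes the heap usable as a priority queue: the maximum is always at the root.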

7. The ________ running time for all possible input is what we call the worst-case complexity of an algorithm.

longest

8. The divide-and-conquer algorithmic technique breaks the problem into several ___.

subproblems

9. When g(n) is an asymptotically _______ bound for f(n), we write f(n) = Θ(g(n)).

tight
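One standard way to state the tight-bound definition formally:

```latex
\Theta(g(n)) = \{\, f(n) : \exists\, c_1, c_2 > 0,\ n_0 > 0
  \text{ such that } 0 \le c_1 g(n) \le f(n) \le c_2 g(n)
  \text{ for all } n \ge n_0 \,\}
```

That is, f(n) is sandwiched between two positive constant multiples of g(n) for all sufficiently large n.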

10. When we say that the Ω() notation is reflexive, we mean that, for all f(n):

f(n) = Ω(f(n))

11. When we say that the Ω() notation is transitive, we mean that, for all f(n), g(n), h(n):

f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))

12. The _________ method for solving recurrences makes use of mathematical induction.

substitution
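As a sketch of the method: for the recurrence T(n) = 2T(n/2) + n, guess T(n) = O(n lg n), i.e., T(n) ≤ c n lg n for some constant c > 0, and verify the guess by induction:

```latex
T(n) \le 2\Big(c\,\tfrac{n}{2}\lg\tfrac{n}{2}\Big) + n
      = c\,n\lg n - c\,n + n
      \le c\,n\lg n \qquad \text{for any } c \ge 1 .
```

The inductive hypothesis is substituted into the recurrence, hence the name.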

13. The __________ method is best used to generate a good guess which, in turn, can be confirmed with one of the other two methods.

recursion-tree

14. Sorting based on ________ comparisons between sortable objects admits Ω(n lg n) as a lower bound to the worst-case complexity of the solution.

binary

15. All of these sorting algorithms sort objects in optimal worst-case time except:

ATROCIOUS-SORT (merge-sort and heap-sort do sort in optimal worst-case time)

16. In a max-heap, the smallest element is stored at __.

a leaf

17. Linear time sorting is possible because __.

objects can be sorted without binary comparisons

18. A decision tree is a(n) ______ tree that represents the comparisons between elements that are performed by a particular sorting algorithm.

binary

19. COUNTING-SORT is stable, which means that:

duplicate values appear in the same order in the output as they did in the input
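A compact COUNTING-SORT sketch in Python; the right-to-left output pass is what makes it stable (the `key` parameter is an illustrative generalization so stability is visible on records, not just integers):

```python
def counting_sort(a, key=lambda x: x, k=None):
    """Stable counting sort for non-negative integer keys in range(k).
    Equal keys appear in the output in their input order."""
    if k is None:
        k = max((key(x) for x in a), default=-1) + 1
    count = [0] * k
    for x in a:
        count[key(x)] += 1
    # Prefix sums: count[v] becomes the number of elements with key <= v.
    for v in range(1, k):
        count[v] += count[v - 1]
    out = [None] * len(a)
    # Walk the input right-to-left so equal keys land in input order.
    for x in reversed(a):
        count[key(x)] -= 1
        out[count[key(x)]] = x
    return out
```

For example, sorting `[(1, 'a'), (0, 'b'), (1, 'c')]` by the first field keeps `(1, 'a')` before `(1, 'c')`.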

20. ____ is a sorting algorithm that was once used by card-sorting machines invented by Herman Hollerith.

Radix-Sort

21. __ has an expected time complexity of Θ(n) but, in theory, it can have a worst-case time complexity of O(n²), although this is highly improbable.

Bucket-sort

22. In an ordinary array, we store the element whose key is k in position k of the array. By contrast, in a hash table, we use a hash function to generate from the key k a value h(k) to ______ the hash table.

index into

23. The analysis of hash table operations is performed in terms of the load factor, which is the ratio between n, the number of ______ in the table, and m, the number of _____ in the table.

elements/slots

24. Hash functions that use the division method, e.g., h(k) = k mod m, are _____ but have the disadvantage of having to avoid certain values.

fast
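A small chained hash table sketch tying together Nos. 22 through 24: the division method h(k) = k mod m, and the load factor α = n/m (class and method names are illustrative):

```python
class ChainedHashTable:
    """Hashing with chaining; h(k) = k mod m (division method).
    m is typically chosen as a prime not too close to a power of 2,
    which is the 'having to avoid certain values' caveat above."""

    def __init__(self, m=13):
        self.m = m
        self.slots = [[] for _ in range(m)]
        self.n = 0  # number of stored elements

    def _h(self, k):
        return k % self.m

    def insert(self, k, v):
        self.slots[self._h(k)].append((k, v))
        self.n += 1

    def search(self, k):
        # Expected cost Theta(1 + alpha) under simple uniform hashing.
        for key, v in self.slots[self._h(k)]:
            if key == k:
                return v
        return None

    def load_factor(self):
        return self.n / self.m
```

Note that keys 5 and 18 collide when m = 13 (both hash to 5), and chaining resolves the collision.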

25. In open addressing, the action of ________ a slot in the hash table is known as a probe.

examining

26. The nodes in a binary search tree can have this many parents:

Just (a) and (b) (0 and 1)

27. This binary search tree is also a min-heap:

All of these are appropriate answers. (an empty tree; a tree with a root and no children; a tree with a root, a right child, and no other nodes)

28. The binary-search-tree property allows us to print keys in a binary search tree in ______ order, recursively, using an algorithm called an inorder tree walk:

sorted
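A minimal inorder tree walk in Python; by the BST property (left subtree ≤ node ≤ right subtree) it visits keys in sorted order in Θ(n) time (the `Node` class is an illustrative representation):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(root, out=None):
    """Inorder walk: recurse into the left subtree, visit the node,
    then recurse into the right subtree."""
    if out is None:
        out = []
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)
    return out
```

For the BST with root 4, children 2 and 6, and leaves 1, 3, 5, 7, the walk produces 1 through 7 in order.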

29. In querying a binary search tree, the best-case time complexity is _____, where h is the height of the binary search tree.

Ω(1)

30. The extreme key values in a binary search tree can be determined in time _____, where h is the height of the tree.

O(h).

31. The predecessor of a key in a binary search tree can be determined in time _____, where h is the height of the tree.

O(h).

32. The height h of a binary search tree is a random value that depends on the order in which the keys are entered into the tree. Its range of values is best described by the “interval” [Θ(1) .. Θ(n)], where n is the number of keys in the tree. Assess the validity of this statement.

The statement is not valid; even in the best case (a perfectly balanced tree) the height is Θ(lg n), so the range is [Θ(lg n) .. Θ(n)].

33. The expected value of the height h of a binary search tree when the keys are inserted in random order is O(lg n), where n is the number of keys, which is why it is a common preprocessing step to randomize the keys. Assess the validity of this statement.

valid.

34. Given a graph G = (V, E), where we know that G is a tree, it is more economical in terms of space requirements to use ______ to represent G.

adjacency lists

35. Given a graph G = (V, E), where we know that G is a complete graph, it is more efficient to use a(n) ____ to represent G, if we need to have the determination of adjacency between two vertices in time O(1).

adjacency matrix
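The trade-off in Nos. 34 and 35 can be sketched with the two standard representations (function names are illustrative):

```python
def to_adjacency_list(edges, n):
    """O(V + E) space: economical for sparse graphs such as trees,
    where |E| = |V| - 1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def to_adjacency_matrix(edges, n):
    """O(V^2) space, but an adjacency query m[u][v] is O(1):
    a better fit for dense graphs such as complete graphs."""
    m = [[0] * n for _ in range(n)]
    for u, v in edges:
        m[u][v] = m[v][u] = 1
    return m
```

For a complete graph the matrix wastes nothing, since nearly all entries are 1; for a tree it wastes almost all of its n² cells.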

36. A unified graph search algorithm can be designed where the only difference would be the data structure used to store the vertices yet to be processed. For breadth-first search this would be a queue. Assess the validity of this statement.

valid.

37. Continuing the discussion of No. 36 above. For depth-first search the data structure would be a stack. Assess the validity of this statement.

valid.
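The unified search of Nos. 36 and 37 can be sketched as one routine whose only variable part is the frontier discipline (a sketch; note that marking vertices visited at insertion time gives a valid DFS-style traversal, though not necessarily the same vertex order as the recursive DFS in the text):

```python
from collections import deque

def graph_search(adj, start, mode="bfs"):
    """Generic graph search. The frontier is used as a FIFO queue for
    breadth-first search and as a LIFO stack for depth-first search."""
    frontier = deque([start])
    visited = {start}
    order = []
    while frontier:
        u = frontier.popleft() if mode == "bfs" else frontier.pop()
        order.append(u)
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                frontier.append(v)
    return order
```

Swapping `popleft` for `pop` is the entire difference between the two traversals.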

38. A tree is a special graph T = (V, E) that is connected and has the smallest number of edges that allows connectedness of the vertices (i.e., any less and the graph cannot be connected). The equation that relates the two values |V| and |E| is |E| = lg |V|. Assess the validity of this statement.

not valid; the correct relation is |E| = |V| − 1.

39. Computing a ___________ for a weighted graph can be done with either Kruskal’s or Prim’s algorithm.

minimum spanning tree
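A compact sketch of Kruskal's algorithm, one of the two MST algorithms named in the answer (the union-find here uses simple path halving; names are illustrative):

```python
def kruskal(n, edges):
    """Kruskal's MST: consider edges in non-decreasing weight order and
    add an edge iff it joins two different components, tracked with a
    union-find structure. edges is a list of (weight, u, v) triples."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge crosses a cut: safe to add
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

On the triangle with edge weights 1, 2, 3, the algorithm keeps the two lighter edges and discards the heaviest.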

40. If the weights of the edges of a connected graph are all non-negative integers then its minimum spanning tree will be unique. Assess the validity of this statement.

not valid; the MST is guaranteed to be unique only when all edge weights are distinct, and non-negative integers may repeat.