Pledge of Honor - Print Name here:
1: 20 pts | 2: 15 pts | 3: 25 pts | 4: 20 pts | 5a: 10 pts | 5b: 10 pts | Total: 100 pts
"Splay trees" or "Self-adjusting Binary Search Trees" are a variant of binary search trees that achieve better performance at little complexity. (In other words, the code is actually fairly simple.) The difference between splay trees and traditional binary search trees is that when you descend the tree in search of a key, whether for the purposes of lookup(), remove(), or insert(), you perform a "splay" operation. Intuitively, the effect of the "splay" operation is similar to moving an item with the desired key to the front of an LRStruct. In the context of a BiTree, that translates to making the item with the desired key the new root of the tree. We still, however, preserve the binary search tree property. So, all items with keys greater the new root form the new root's right subtree and all items with keys less than the new root form the new root's left subtree. The new root's left and right subtrees are built as follows:
As you descend the tree, if the desired key is less than the current BiTree's key, then you perform the following steps:
When we have either found the desired key or reached an empty tree, we reassemble the tree: the current BiTree's left child becomes the rightmost child of the new root's left subtree, the current BiTree's right child becomes the leftmost child of the new root's right subtree, the current BiTree is assigned its new left and right subtrees, and finally it becomes the root.
Write a visitor implementing the above splay operation. You may find it helpful to define a helper class, akin to DictionaryPair, that is passed as input to the emptyCase() and nonEmptyCase() methods. For example, this object could contain references to the new left and right subtrees under construction.
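For concreteness, here is a minimal sketch of what such a helper might look like. The class name SplayParts and its accessors are assumptions made for this illustration, not part of the provided code; the class simply bundles the key being splayed with the new left and right subtrees under construction.

// Hypothetical helper passed to the splay visitor's emptyCase()/nonEmptyCase().
// The name SplayParts and its methods are illustrative only.
public class SplayParts {
    private Comparable _key;  // the key being splayed to the root.
    private BiTree _left;     // new root's left subtree, under construction.
    private BiTree _right;    // new root's right subtree, under construction.

    public SplayParts(Comparable key, BiTree left, BiTree right) {
        _key = key;
        _left = left;
        _right = right;
    }

    public Comparable getKey() { return _key; }
    public BiTree getLeft() { return _left; }
    public BiTree getRight() { return _right; }
    public void setLeft(BiTree left) { _left = left; }
    public void setRight(BiTree right) { _right = right; }
}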
Imagine running a simulation of heap sort by hand with pencil and paper. After simulating a few split()s, you would observe that the item swapped into the root of the heap, the variable temp from split(), almost always winds up as a leaf. Why? Prior to the split(), it was already a leaf in our heap. Thus, its value is expected to be less than the items near the top of our heap.
Given this observation, we can on average reduce by half the number of comparisons between items performed by siftDown(). Specifically, this can be achieved by eliminating the comparison between A[child] and dat on the way down the heap from the root to a leaf. (See below.)
public void siftDown(int[] A, int lo, int cur, int hi) {
    int dat = A[cur];               // hold on to data.
    int child = 2 * cur + 1 - lo;   // index of left child of A[cur].
    boolean done = hi < child;
    while (!done) {
        if (child < hi && A[child + 1] > A[child]) {
            child++;                // child is the index of the larger of the two children.
        }
        if (A[child] > dat) {       // ELIMINATE THIS COMPARISON.
            A[cur] = A[child];      // promote the larger child.
            cur = child;
            child = 2 * cur + 1 - lo;
            done = hi < child;
        }
        else {                      // A[child] <= dat:
            done = true;            // heap condition is satisfied.
        }
    }                               // location found for dat.
    A[cur] = dat;
}
Write this optimized version of siftDown().
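One possible realization of this optimization is sketched below. It is only a sketch: the method name siftDownOptimized is an assumption, and it follows the same index conventions as the siftDown() above (the children of A[cur] are A[2*cur+1-lo] and A[2*cur+2-lo]). The idea is to walk all the way down to a leaf, promoting the larger child at each level without ever comparing it to dat, and then sift dat back up from the leaf until the heap condition holds.

// Sketch only: optimized siftDown that skips the A[child] vs. dat comparison
// on the way down.
public void siftDownOptimized(int[] A, int lo, int cur, int hi) {
    int dat = A[cur];               // hold on to data.
    int start = cur;
    int child = 2 * cur + 1 - lo;   // index of left child of A[cur].
    while (child <= hi) {           // walk down to a leaf, promoting larger children.
        if (child < hi && A[child + 1] > A[child]) { child++; }
        A[cur] = A[child];          // promote the larger child; no comparison with dat.
        cur = child;
        child = 2 * cur + 1 - lo;
    }
    while (cur > start) {           // sift dat back up toward the root.
        int parent = (cur - 1 + lo) / 2;   // inverse of child = 2 * parent + 1 - lo.
        if (A[parent] < dat) {
            A[cur] = A[parent];     // parent is smaller; move it down.
            cur = parent;
        } else {
            break;                  // heap condition restored.
        }
    }
    A[cur] = dat;                   // final location for dat.
}

On the way down, only one comparison per level is needed (between the two children); the upward pass usually terminates after a step or two because dat, having been a leaf before the split(), tends to belong near the bottom of the heap.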
Consider the following extension to the IDictionary interface:
public interface IRangeDictionary extends IDictionary {
    /**
     * Removes every DictionaryPair with a key greater than or equal
     * to keyLo and less than or equal to keyHi. Returns the number
     * of DictionaryPairs removed. Throws an exception if keyLo is
     * greater than keyHi.
     */
    public int removeRange(Comparable keyLo, Comparable keyHi);
}
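To make the contract concrete, here is a hedged sketch of the intended behavior using java.util.TreeMap. It only illustrates the semantics of removeRange(); the class name TreeMapRangeDictionary is made up for this example and is not one of the structures the problem asks you to implement against.

import java.util.NavigableMap;
import java.util.TreeMap;

// Illustration of the removeRange() contract only.
public class TreeMapRangeDictionary {
    private final TreeMap<Comparable, Object> _map = new TreeMap<Comparable, Object>();

    public void insert(Comparable key, Object value) { _map.put(key, value); }

    public int removeRange(Comparable keyLo, Comparable keyHi) {
        if (keyLo.compareTo(keyHi) > 0) {
            throw new IllegalArgumentException("keyLo is greater than keyHi");
        }
        // View of every entry with keyLo <= key <= keyHi (both ends inclusive).
        NavigableMap<Comparable, Object> range = _map.subMap(keyLo, true, keyHi, true);
        int removed = range.size();
        range.clear();              // clearing the view removes the entries from the backing map.
        return removed;
    }
}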
Write an implementation of removeRange for
Note well: It may NOT be possible to efficiently enumerate every item between keyLo and keyHi. Therefore, your solutions to (1) and (2) must not rely on being able to do so. You are free to use all the visitors provided to you in the lectures.
(3 points) Finally, after writing (1) and (2), tell us which structure is better suited to supporting removeRange() and explain in one or two sentences why.
The problem of selecting the ith smallest element in an array of N elements can be solved in worst-case O(N log N) time by simply sorting the array using Heap Sort or Merge Sort, or in expected O(N log N) time using Quick Sort. For example, the third smallest element in the array {79, 23, 48, -2, 0, 55} is 23. However, it is possible to find such an element in expected linear time, O(N), without completely sorting the array. The key idea here is to keep splitting the array without ever rejoining the sub-arrays. Which split operation of the above three sort algorithms would you use?
Design and implement such an algorithm by subclassing from an appropriate sort algorithm and adding a method called
Integer selectIthSmallest(int i, Integer[] A)
to select the ith smallest element from an array A of Integer. For example, calling selectIthSmallest(3, {79, 23, 48, -2, 0, 55}) will return 23.
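For illustration, here is a standalone sketch of the underlying idea (often called quickselect): partition the array as Quick Sort's split() does, then recurse into only the side that contains the ith smallest element. The class name Selector and the Lomuto-style partition below are assumptions made for this sketch; the assignment itself asks you to subclass the course's sort framework instead.

// Standalone sketch of quickselect; not written against the course's sorter framework.
public class Selector {
    public Integer selectIthSmallest(int i, Integer[] A) {
        return select(A, 0, A.length - 1, i - 1);   // i is 1-based; rank is 0-based.
    }

    private Integer select(Integer[] A, int lo, int hi, int rank) {
        if (lo == hi) { return A[lo]; }             // one element left: it is the answer.
        int pivotIdx = partition(A, lo, hi);        // pivot lands in its final sorted position.
        if (rank == pivotIdx)     { return A[pivotIdx]; }
        else if (rank < pivotIdx) { return select(A, lo, pivotIdx - 1, rank); }
        else                      { return select(A, pivotIdx + 1, hi, rank); }
    }

    // Lomuto-style partition around A[hi]; returns the pivot's final index.
    private int partition(Integer[] A, int lo, int hi) {
        Integer pivot = A[hi];
        int store = lo;
        for (int j = lo; j < hi; j++) {
            if (A[j].compareTo(pivot) < 0) {
                Integer tmp = A[j]; A[j] = A[store]; A[store] = tmp;
                store++;
            }
        }
        Integer tmp = A[hi]; A[hi] = A[store]; A[store] = tmp;
        return store;
    }
}

For example, new Selector().selectIthSmallest(3, new Integer[]{79, 23, 48, -2, 0, 55}) returns 23. Because each call recurses into only one side of the split and never rejoins the sub-arrays, the expected running time is linear.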