Exam #3 - Comp 212

Rice University - Spring 2002

Instructions

  1. The examination is open book/notes.  You are allowed to consult any written materials available under the sun, except code from your current Comp212 classmates.  If your code is based on materials that we did not provide you, just give us the appropriate references.
  2. You are not allowed to discuss the exam in any way with anybody besides the instructors (Cox and Nguyen) of this course.
  3. Do not post your questions on the newsgroup.  Instead, e-mail your questions to both of us, dxnguyen@rice.edu and alc@rice.edu.  We will reply to you directly and, if deemed necessary, we will forward our reply to the newsgroup as well.  We will check our e-mail as often as we can to catch and respond to your questions.
  4. Write your code in the most object-oriented way possible, that is, with the fewest control statements and without checking the states or class types of the objects involved in any way.
  5. We expect correct Java syntax.
  6. There are 5 questions for a total of  100 points.  You have 5 hours to do the exam.
  7. Submit a hard copy of your exam with all answers in type-written form, except perhaps diagrams (if any) which can be hand-drawn.  Be sure to print your name and sign the honor pledge.
  8. Turn in the exam to Nguyen only.  Send Nguyen an e-mail to notify him of your submission.  He will reply to confirm receipt.
  9. The deadline for submission, except for graduating students, is  Wednesday, May 08, 2002 at 5 PM.  For graduating students, the deadline is Thursday, May 2, 2002, 12:00 PM.
Pledge of Honor - Print Name here: 

 

1: 20 pts   2: 15 pts   3: 25 pts   4: 20 pts   5a: 10 pts   5b: 10 pts   Total: 100 pts

 

1. Splay Tree  (20 points total)

"Splay trees" or "Self-adjusting Binary Search Trees" are a variant of binary search trees that achieve better performance at little complexity. (In other words, the code is actually fairly simple.) The difference between splay trees and traditional binary search trees is that when you descend the tree in search of a key, whether for the purposes of lookup(), remove(), or insert(), you perform a "splay" operation. Intuitively, the effect of the "splay" operation is similar to moving an item with the desired key to the front of an LRStruct. In the context of a BiTree, that translates to making the item with the desired key the new root of the tree. We still, however, preserve the binary search tree property. So, all items with keys greater the new root form the new root's right subtree and all items with keys less than the new root form the new root's left subtree. The new root's left and right subtrees are built as follows:

As you descend the tree, if the desired key is less than the current BiTree's key, then you perform the following steps:

  1. Determine if the desired key is also less than the current BiTree's left child's key. If so, perform a right rotation around the current BiTree. (We will now refer to the BiTree that moved up to the top as a result of the right rotation as our "current BiTree.")
  2. Attach the current BiTree as the leftmost child of the right subtree under construction.
  3. Descend the tree to the left.

If the desired key is greater than the current BiTree's key, perform the mirror image of the above steps.

When we have either found the desired key or reached an empty BiTree, we reassemble the tree: the current BiTree's left child becomes the rightmost child of the new root's left subtree, the current BiTree's right child becomes the leftmost child of the new root's right subtree, the current BiTree is assigned its new left and right subtrees, and finally it becomes the root.

Write a visitor implementing the above splay operation. You may find it helpful to define a helper class, akin to DictionaryPair, that is passed as input to the emptyCase() and nonEmptyCase() methods. For example, this object could contain references to the new left and right subtrees under construction.
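
For concreteness, one possible shape for such a helper class is sketched below.  This is only a sketch: the name SplayState and its accessors are hypothetical and are not part of the code provided in the course; it simply bundles the desired key with the left and right subtrees being accumulated during the descent, so that a single object can be passed as the visitor's input.

public class SplayState {
  private Comparable _key; // the key being splayed to the root.
  private BiTree _left;    // accumulates items with keys less than _key.
  private BiTree _right;   // accumulates items with keys greater than _key.

  public SplayState(Comparable key, BiTree left, BiTree right) {
    _key = key;
    _left = left;
    _right = right;
  }

  public Comparable getKey() { return _key; }
  public BiTree getLeft()    { return _left; }
  public BiTree getRight()   { return _right; }
  public void setLeft(BiTree left)   { _left = left; }
  public void setRight(BiTree right) { _right = right; }
}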

2. Optimized Heap Sort (15 points total)

Imagine running a simulation of heap sort by hand with pencil and paper.  After simulating a few split()s, you would observe that the item swapped into the root of the heap, the variable temp from split(), almost always winds up as a leaf.  Why?  Prior to the split(), it was already a leaf in our heap.  Thus, its value is expected to be less than the values of the items near the top of our heap.

Given this observation, we can on average reduce by half the number of comparisons between items performed by siftDown().  Specifically, this can be achieved by eliminating the comparison between A[child] and dat on the way down the heap from the root to a leaf.  (See below.)

public void siftDown(int[] A, int lo, int cur, int hi){
  int dat = A[cur];             // hold on to data.
  int child = 2 * cur + 1 - lo; // index of left child of A[cur].
  boolean done = hi < child;    // done if A[cur] has no children.
  while (!done) {
    if (child < hi && A[child + 1] > A[child]) {
      child++;
    } // child is the index of the larger of the two children.
    if (A[child] > dat) { // ELIMINATE THIS COMPARISON.
      A[cur] = A[child];  // promote the larger child.
      cur = child;
      child = 2 * cur + 1 - lo;
      done = hi < child;  // done if the new A[cur] has no children.
    }
    else {                // A[child] <= dat:
      done = true;        // heap condition is satisfied.
    }
  }                       // location found for dat.
  A[cur] = dat;
}

Write this optimized version of siftDown().
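
For reference, one way this idea can be realized is sketched below: walk the hole all the way down to a leaf, always promoting the larger child without comparing it to dat, and then sift dat back up from that leaf until the heap condition holds.  This is only a sketch under those assumptions; the name siftDownNoCompare and the start variable are our own and it is not necessarily the expected solution.

public void siftDownNoCompare(int[] A, int lo, int cur, int hi) {
  int dat = A[cur];             // hold on to data.
  int start = cur;              // remember the subtree root; never sift above it.
  int child = 2 * cur + 1 - lo; // index of left child of A[cur].
  while (child <= hi) {         // walk down to a leaf...
    if (child < hi && A[child + 1] > A[child]) {
      child++;                  // child is the index of the larger of the two children.
    }
    A[cur] = A[child];          // ...promoting the larger child WITHOUT comparing it to dat.
    cur = child;
    child = 2 * cur + 1 - lo;
  }
  int parent = (cur + lo - 1) / 2;
  while (cur > start && A[parent] < dat) { // sift dat back up from the leaf.
    A[cur] = A[parent];
    cur = parent;
    parent = (cur + lo - 1) / 2;
  }
  A[cur] = dat;                 // location found for dat.
}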

3. More Dictionary (25 points total)

Consider the following extension to the IDictionary interface:

public interface IRangeDictionary extends IDictionary {
    /**
     * Removes every DictionaryPair with a key greater than or equal
     * to keyLo and less than or equal to keyHi.  Returns the number
     * of DictionaryPairs removed.  Throws an exception if keyLo is
     * greater than keyHi.
     */
    public int removeRange(Comparable keyLo, Comparable keyHi);
}
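
For example, a client might use removeRange() as follows.  This usage sketch assumes a hypothetical implementation class named RangeDictBT and assumes that IDictionary provides an insert(Comparable key, Object value) method; both are illustrative, not part of the code given above.

IRangeDictionary dict = new RangeDictBT();   // hypothetical implementation class.
dict.insert(new Integer(5),  "five");
dict.insert(new Integer(10), "ten");
dict.insert(new Integer(20), "twenty");
int n = dict.removeRange(new Integer(5), new Integer(15));
// n == 2: the pairs with keys 5 and 10 are removed; the pair with key 20 remains.
// removeRange(new Integer(15), new Integer(5)) would throw, since keyLo > keyHi.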

Write an implementation of removeRange for

  1. (10 points) hash tables and
  2. (12 points) binary search trees. 

Note well: It may NOT be possible to efficiently enumerate every item between keyLo and keyHi.  Therefore, your solutions to (1) and (2) must not assume that such an enumeration is available.  You are free to use all the visitors provided to you in the lectures.

(3 points) Finally, after writing (1) and (2), tell us which structure is better suited to supporting removeRange() and explain in one or two sentences why.

4. Selecting the ith smallest element (20 points total)

The problem of selecting the ith smallest element in an array of N elements can be solved in worst-case O(N log N) time by simply sorting the array with Heap Sort or Merge Sort, or in expected O(N log N) time with Quick Sort.  For example, the third smallest element in the array {79, 23, 48, -2, 0, 55} is 23.  However, it is possible to find such an element in expected linear time, O(N), without completely sorting the array.  The key idea here is to keep splitting the array without ever rejoining the sub-arrays.  Which split operation of the above three sort algorithms would you use?

 Design and implement such an algorithm by subclassing from an appropriate sort algorithm and adding a method called

Integer selectIthSmallest(int i, Integer[] A)

to select the ith smallest element from an array A of Integer.  For example, calling selectIthSmallest(3, {79, 23, 48, -2, 0, 55}) will return 23.
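
For reference, the selection idea (commonly called "quickselect") can be sketched as follows.  The split() helper and the class name Selector below are hypothetical stand-ins, and the sketch does not show the subclassing structure the question asks for; it only illustrates the recursion that discards one sub-array at every step.

public class Selector {
  // Returns the ith smallest (1-based) element of A.
  public Integer selectIthSmallest(int i, Integer[] A) {
    return select(A, 0, A.length - 1, i - 1);  // convert i to a 0-based rank.
  }

  private Integer select(Integer[] A, int lo, int hi, int rank) {
    if (lo == hi) {
      return A[lo];
    }
    int p = split(A, lo, hi);                  // the pivot lands at its sorted position p.
    if (rank == p) {
      return A[p];
    }
    else if (rank < p) {
      return select(A, lo, p - 1, rank);       // recurse on the left piece only...
    }
    else {
      return select(A, p + 1, hi, rank);       // ...or on the right piece; never rejoin.
    }
  }

  // Hypothetical split: partitions A[lo..hi] around A[hi] and returns the
  // pivot's final index.
  private int split(Integer[] A, int lo, int hi) {
    Integer pivot = A[hi];
    int store = lo;
    for (int j = lo; j < hi; j++) {
      if (A[j].compareTo(pivot) <= 0) {
        Integer temp = A[store]; A[store] = A[j]; A[j] = temp;
        store++;
      }
    }
    Integer temp = A[store]; A[store] = A[hi]; A[hi] = temp;
    return store;
  }
}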

5. TreeN and the Generalized Visitor Pattern (20 points total)

  1. (10 points) Write a visitor called CountLeaves to count the number of leaves in a TreeN host. 
  2. (10 points) Write a visitor called TreeHeight to compute the height of the TreeN host.  The empty tree has height 0.  A tree with only one root node has height 1.
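
For reference, the recursive logic for both parts might look as sketched below.  The interface ITree and its methods are purely hypothetical stand-ins; the real TreeN class and its generalized visitor interface are not reproduced here, so this sketch shows only the recursion, not the visitor plumbing the question requires.

interface ITree {              // hypothetical stand-in for TreeN.
  boolean isEmpty();
  int numChildren();
  ITree getChild(int i);
}

class CountLeaves {
  // A leaf is a non-empty node with no children; an empty tree has no leaves.
  public int countLeaves(ITree t) {
    if (t.isEmpty())          return 0;
    if (t.numChildren() == 0) return 1;
    int sum = 0;
    for (int i = 0; i < t.numChildren(); i++) {
      sum += countLeaves(t.getChild(i));
    }
    return sum;
  }
}

class TreeHeight {
  // The empty tree has height 0; a single node has height 1;
  // otherwise the height is 1 plus the tallest child's height.
  public int height(ITree t) {
    if (t.isEmpty()) return 0;
    int max = 0;
    for (int i = 0; i < t.numChildren(); i++) {
      max = Math.max(max, height(t.getChild(i)));
    }
    return 1 + max;
  }
}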