Comp201: Principles of Object-Oriented Programming I
Spring 2008 -- Recursion   


Recursion appears both as recursive algorithms and as recursive data structures. As we will see, the two are not separable. This discussion builds on the discussion in Lec12: Recursive Composition and the Composite Design Pattern.

Recursive Data Structures

A recursive data structure is an object or class that contains an abstraction of itself.

In mathematical terms, we say that the object is "isomorphic" to itself. The basic embodiment of a recursive data structure is the Composite Design Pattern. Recursive data structures enable us to represent repetitive abstract patterns. As such, they enable us to generate or represent complexity from simplicity.

Characteristics of a recursive data structure:

Recursive data structures are arguably the most important data structures in computer science, as they are able to represent arbitrarily complex data. Indeed, if one looks across all the sciences, one sees that one of the fundamental modeling tools is to represent complex systems as simple repetitive patterns.
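As a concrete illustration, here is a minimal sketch of a recursive data structure in the Composite style: a binary tree in which each node contains abstractions of the tree itself. (The names ITree, Leaf, and Node are illustrative, not part of the course code.)

```java
/**
 * Abstract tree: the common abstraction that the structure recurses on.
 */
interface ITree {
    int countLeaves();
}

/**
 * Base case: a leaf contains no further trees.
 */
class Leaf implements ITree {
    public int countLeaves() {
        return 1;
    }
}

/**
 * Inductive case: a node contains two abstractions of ITree itself.
 */
class Node implements ITree {
    private ITree _left;
    private ITree _right;

    Node(ITree left, ITree right) {
        _left = left;
        _right = right;
    }

    public int countLeaves() {
        return _left.countLeaves() + _right.countLeaves();
    }
}
```

Because each child is an ITree, an arbitrarily deep tree is built from just these two simple variants.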

Recursive Algorithms

In order to process a recursive data structure, it makes sense that any such algorithm should reflect the recursive nature of the data structure:

A recursive algorithm is a process that accomplishes its task, in part, by calling an abstraction of itself.

Recursion is thus a special case of delegation.

In light of the above definition, it is not surprising that recursive algorithms and recursive data structures share common characteristics:

Characteristics of a recursive algorithm:

The similarity between recursive algorithms and recursive data structures arises because in an OO system, the structure drives the algorithm. That is, it is the form of the data structure that determines the form of the algorithm. In an OO system, objects are asked to perform algorithms as they pertain to that object--that is, an algorithm on an object is a method of that object. The data has the behavior. The data is intelligent. This is in contrast to procedural or functional programming, where data is handed to the behavior. That is, stand-alone functions are used to process non-intelligent data. (Caveat: With all that said, in more advanced designs, we will show that the algorithm can be decoupled from its data structure and thus be removed as a method of the data. This will not change the above principles, however.)
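For contrast, here is a sketch of the procedural style described above, in which a stand-alone function processes non-intelligent data. (Pair and ListFuncs are hypothetical names used only for this illustration.)

```java
/**
 * "Non-intelligent" data: a bare linked node with no behavior of its own.
 */
class Pair {
    Object first;
    Pair rest;

    Pair(Object f, Pair r) {
        first = f;
        rest = r;
    }
}

class ListFuncs {
    /**
     * A stand-alone function must inspect the shape of the data itself:
     * here the empty case is checked explicitly with a null test.
     */
    static int length(Pair lst) {
        if (lst == null) {
            return 0;                 // empty case, detected by the function
        }
        return 1 + length(lst.rest);  // inductive case
    }
}
```

Compare this with the OO style below, where no such explicit case check is needed because each data variant carries its own behavior.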

The basic notions of creating a recursive algorithm on a composite design pattern structure are

This is the Interpreter Design pattern. Notice that no checks of the type of data being processed (e.g. base case or inductive case) are necessary. Each data object knows intrinsically what it is and thus what it should do. This is called "polymorphic dispatching": when an abstract method is called on an abstract data object, the result is the particular concrete behavior corresponding to the concrete object used. In other words, we call a method on a list, but get the behavior of an empty list if that is what the list is, or the behavior of a non-empty list if that is what the list is.
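For instance, a length computation on a composite list needs no type checks at all; each variant supplies its own behavior when the abstract method is dispatched. (This is a sketch with illustrative names, separate from the course's IList code later in this section.)

```java
/**
 * Abstract list with one abstract operation.
 */
interface ICountable {
    int length();
}

/**
 * Base case: the empty list intrinsically knows its length is 0.
 */
class EmptyCountable implements ICountable {
    public int length() {
        return 0;
    }
}

/**
 * Inductive case: a non-empty list delegates to its (shorter) rest.
 */
class ConsCountable implements ICountable {
    private Object _first;
    private ICountable _rest;

    ConsCountable(Object f, ICountable r) {
        _first = f;
        _rest = r;
    }

    public int length() {
        // The "+ 1" is a pending operation: it waits for the recursive
        // call on the rest of the list to return.
        return 1 + _rest.length();
    }
}
```

Calling length() on an ICountable dispatches to whichever variant the object actually is; no instanceof or null checks appear anywhere.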

In order to prove that a recursive algorithm will eventually complete, one must show that every time the recursive call is made, the "problem" is getting "smaller". The "problem" is usually the set of possible objects that the recursive call could be called upon. For instance, when recursively processing a list, every call to the rest of the list is calling on a list that is getting progressively shorter. At times, one cannot prove that the problem is definitely getting smaller. This does not mean that the algorithm will never end; it just means that there is a non-zero probability that it will go on forever.

One of the key aspects of a recursive algorithm is that in the inductive case, the inductive method makes the recursive call to another object's method. But in doing so, it has to wait for the called method to return with the needed result. This method that is waiting for the recursive call to return is called a "pending operation". For instance, at the time the empty list (base case) is reached during a recursive algorithm on a list, every non-empty node in that list has a pending operation.

Below is an example of generally what is happening in three linked objects during the call to the recursive method of the first object:

The steps through a recursive method call through three objects:

The only call by the outside object (the original caller) is to the recursive method in Object1. The calls to the same method in Object2 and Object3 are made from inside the recursive method itself.

Object1 and Object2 are inductive cases, so they make the recursive call to the next object.

Object3 is the base case, so it makes no recursive call and simply returns its result.

 

Additional materials:

http://www.exciton.cs.rice.edu/cs150/labs/lab5/

Tail Recursion

Consider the problem of finding the last element in a list.  Again we need to interpret what it means to be the last element of (a) the empty list and (b) a non-empty list.

To recapitulate, here is how a list can find its own last element.

How does rest use the first element of the enclosing list to help find the last element of the enclosing list?

Here is the code.

/**
 * Represents the abstract list structure.
 */
public interface IList {
    /**
     * Returns the last element in this IList.
     */
    Object getLast();
    

    /**
     * Given the first element of the enclosing list (the list whose rest is
     * this list), returns the last element of that enclosing list.
     * @param acc the first element of the enclosing list.
     */
    Object getLastHelp(Object acc);

}
/**
 * Represents empty lists.
 */
public class MTList implements IList {  
    // Singleton Pattern
    public static final MTList Singleton = new MTList();
    private MTList() {
    }
  

    /**
     * Returns null to signify there is no last element in the empty list.
     */
    public Object getLast() {
        return null;
    }
    /**
     * Returns acc because, being the first element of the enclosing list,
     * it is the last element.
     */
    public Object getLastHelp(Object acc) {
        return acc;
    }

}
/**
 * Represents non-empty lists.
 */
public class NEList implements IList {

    private Object _first;
    private IList _rest;

    public NEList(Object f, IList r) {
        _first = f;
        _rest = r;
    }

   

    /**
     * Passes first to rest and asks for help to find the last element.
     */
    public Object getLast() {
        return _rest.getLastHelp(_first);
    }

    
    /**
     * Ignores acc, because this list's own first element supersedes it as
     * the candidate for the last element; passes first to rest and asks
     * for help.
     */
    public Object getLastHelp(Object acc) {
        return _rest.getLastHelp(_first);
    }
}

 

The above algorithm to compute the last element of a list is another example of forward accumulation.  Note that in the above, getLast is not recursive while getLastHelp is recursive.  Also note that for the NEList, the last computation in getLastHelp is a recursive call to getLastHelp on _rest.  There is no other computation after the recursive call returns.  This kind of recursion is called tail recursion.  Tail recursion is important for program performance.  A smart compiler can recognize tail recursion and generate code that speeds up the computation by bypassing unnecessary setup code each time a recursive call is made.
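To make the contrast concrete, here is a hypothetical integer-list variant that computes a sum both ways: a non-tail-recursive sum, which leaves a pending addition after each recursive call, and a tail-recursive sumHelp, which uses forward accumulation so the recursive call is the last computation. (IIntList, EmptyInts, and ConsInts are illustrative names, not course code.)

```java
/**
 * Integer list with both a non-tail-recursive and a tail-recursive sum.
 */
interface IIntList {
    int sum();             // non-tail-recursive
    int sumHelp(int acc);  // tail-recursive, with an accumulator
}

class EmptyInts implements IIntList {
    public int sum() {
        return 0;
    }

    public int sumHelp(int acc) {
        // Base case: the accumulator already holds the whole sum.
        return acc;
    }
}

class ConsInts implements IIntList {
    private int _first;
    private IIntList _rest;

    ConsInts(int f, IIntList r) {
        _first = f;
        _rest = r;
    }

    public int sum() {
        // Non-tail: the addition is a pending operation that must wait
        // for the recursive call to return.
        return _first + _rest.sum();
    }

    public int sumHelp(int acc) {
        // Tail: the recursive call is the very last computation, so a
        // smart compiler need not keep this frame's state around.
        return _rest.sumHelp(acc + _first);
    }
}
```

Both versions compute the same result; only the tail-recursive one is eligible for the compiler optimization described above.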

 

 


Last Revised Thursday, 03-Jun-2010 09:50:29 CDT

©2008 Stephen Wong and Dung Nguyen