
Eigenfaces Group - Proposal

This is the original proposal that was submitted when the project was assigned.

Face recognition is an important task for computer vision systems, and it remains an open and active area of research. We propose to implement and experiment with a promising approach to this problem: eigenfaces.

Think of a greyscale image of a face as an N by N matrix. This can be rearranged into a vector of length N^2, which is just a point in R^(N^2). That's a very high-dimensional space, but pictures of faces occupy only a relatively small part of it. By doing some straightforward principal component analysis (discussion of this part to be added later), a smaller set of M "eigenfaces" can be chosen (M is a design parameter), and the faces to be remembered can be expressed as linear combinations of these M eigenfaces. In other words, the faces have been transformed from the image domain (where they take up lots of storage space: ~N^2 numbers each) to the face domain (where they require much less: ~M numbers each). This is necessarily an approximation, but it turns out to be a pretty good one in practice. To recognize a new image of a face, simply transform it to the face domain and take an inner product with each of the known faces to see if we have a match. Easy!
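
The construction above can be sketched in a few lines of Python/NumPy. Everything here is a stand-in for illustration: random data plays the role of aligned face images, and the image size, training-set size, and M are made-up numbers.

```python
import numpy as np

# Hypothetical setup: 20 training "faces", each a 32x32 greyscale image,
# flattened into vectors of length N^2 = 1024. Random data stands in for
# real, aligned face images.
rng = np.random.default_rng(0)
N, num_faces, M = 32, 20, 8           # M eigenfaces is the design parameter
faces = rng.random((num_faces, N * N))

# Center the data: subtract the mean face from every image.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# PCA via SVD: the rows of Vt are the principal components ("eigenfaces"),
# ordered by how much variance they capture.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:M]                   # shape (M, N^2)

# Transform each known face from the image domain (~N^2 numbers)
# to the face domain (~M numbers) by projecting onto the eigenfaces.
weights = centered @ eigenfaces.T     # shape (num_faces, M)
print(weights.shape)
```

Note the compression: each face is now stored as M = 8 weights rather than 1024 pixel values, at the cost of some reconstruction error.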

Time for some simplifying assumptions. Faces presented for recognition will be scaled, rotated, and shifted to match the alignment in which they were first seen. However, changes in lighting, facial expression, etc. are fair game. No hats or heavy make-up or anything silly like that.

The general implementation plan is:

  1. Take some pictures with a handy digital camera (got one).
  2. Scale, rotate, crop, etc. the images by hand using image editing software (yay gimp).
  3. Construct the eigenfaces.
  4. Compute and store face domain versions of each person's face.
  5. Grab and fix up some more images - some of known people and some of unknown people.
  6. Test the recognizer!

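Step 6 might look something like this sketch: nearest-neighbour matching in the face domain, with a distance threshold to reject unknown faces. The names, the random stand-in weights, and the threshold value are all made-up placeholders.

```python
import numpy as np

# Hypothetical face-domain data: stored weight vectors (M = 8 each) for
# 5 known people, as produced by projecting their images onto the
# eigenfaces. Random numbers stand in for real projections.
rng = np.random.default_rng(1)
M = 8
known_weights = rng.random((5, M))
names = ["alice", "bob", "carol", "dave", "eve"]

def recognize(new_weights, known_weights, names, threshold=0.5):
    """Nearest-neighbour match in the face domain.

    Returns the best-matching name, or None if even the closest known
    face is farther away than `threshold` (i.e. the face is unknown).
    The threshold here is an arbitrary placeholder.
    """
    dists = np.linalg.norm(known_weights - new_weights, axis=1)
    best = int(np.argmin(dists))
    return names[best] if dists[best] < threshold else None

# A probe very close to bob's stored weights should match him...
probe = known_weights[1] + 0.01
print(recognize(probe, known_weights, names))

# ...while a very strict threshold rejects even that probe as unknown.
print(recognize(probe, known_weights, names, threshold=0.001))
```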
Most likely the actual implementation stuff will be done in some combination of Python and Matlab, unless we get crazy and decide to try this in real-time (it should be feasible - these are efficient algorithms), in which case, some C will be necessary.


  1. A. Pentland, B. Moghaddam, and T. Starner. View-based and modular eigenspaces for face recognition. IEEE Conference on Computer Vision and Pattern Recognition, 1994.
  2. L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human faces. Journal of the Optical Society of America, March 1987.
  3. M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 1991.
  4. M. Turk and A. Pentland. Face processing: models for recognition. SPIE Vol. 1192: Intelligent Robots and Computer Vision VIII: Algorithms and Techniques, 1989.

(More references and links coming - we have a number of other articles on paper, most provided by Paul Tevis, who gave a talk on the subject for Comp 540)

Tim Danner <tdanner@rice.edu>, Indraneel Datta <kashent@rice.edu>
Last modified: Fri Dec 17 20:46:31 CST 1999