Face recognition is an important task for computer vision systems, and it remains an open and active area of research. We propose to implement and experiment with a promising approach to this problem: eigenfaces.
Think of a greyscale image of a face as an N by N matrix - this can be rearranged to form a vector of length N², which is just a point in R^(N²). That's a very high-dimensional space, but pictures of faces occupy only a relatively small part of it. By doing some straightforward principal component analysis (discussion of this part to be added later), a smaller set of M "eigenfaces" can be chosen (M is a design parameter), and the faces to be remembered can be expressed as linear combinations of these M eigenfaces. In other words, the faces have been transformed from the image domain (where they take up lots of storage space: ~N² numbers each - for a 256 by 256 image that's 65,536) to the face domain (where they require much less: ~M, perhaps a few dozen). This is necessarily an approximation, but it turns out to be a pretty good one in practice. To recognize a new image of a face, simply transform it to the face domain and take an inner product with each of the known faces to see if we have a match. Easy!
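To make the training step concrete, here is a minimal sketch in Python/NumPy of how the eigenfaces might be computed. The function name and array layout are our own illustration, not final code; the one real trick is doing the PCA via an SVD of the centred data, so the enormous N² by N² covariance matrix never has to be formed explicitly.

    import numpy as np

    def train_eigenfaces(face_images, M):
        """Minimal eigenfaces training sketch (illustrative names and layout).

        face_images: array of shape (K, N, N) -- K aligned greyscale faces.
        Returns (mean_face, eigenfaces, weights): the M eigenfaces as rows of
        an (M, N*N) array, plus each training face's coordinates in face space.
        Note that M can be at most K, the number of training images.
        """
        K, N, _ = face_images.shape
        X = face_images.reshape(K, N * N).astype(float)  # each face -> point in R^(N^2)

        mean_face = X.mean(axis=0)
        A = X - mean_face                      # centre the data before PCA

        # The rows of Vt are the unit-norm principal directions of the centred
        # data, i.e. the eigenfaces, obtained without building an N^2 x N^2 matrix.
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        eigenfaces = Vt[:M]                    # keep the top M components

        weights = A @ eigenfaces.T             # each known face in the face domain
        return mean_face, eigenfaces, weights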
Time for some simplifying assumptions. Faces presented for recognition will be scaled, rotated, and shifted the same way as when they were first seen. However, changes in lighting, facial expression, etc. are fair game. No hats or heavy make-up or anything silly like that.
The general implementation plan is:
1. Collect a training set of aligned greyscale face images.
2. Run principal component analysis on the training set to extract the M eigenfaces.
3. Project each face to be remembered into the face domain and store its M weights.
4. To recognize a new image, project it into the face domain and compare it against the stored weights (see the sketch below).
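Here is a matching sketch of step 4, reusing the mean_face, eigenfaces, and weights produced by the training sketch above. The write-up above describes matching by inner product; this version uses the closely related Euclidean distance between weight vectors, and the threshold parameter for declaring "no match" is a hypothetical tuning knob:

    import numpy as np

    def recognize(new_image, mean_face, eigenfaces, known_weights, threshold):
        # Project the aligned, greyscale N x N image into the face domain.
        x = new_image.reshape(-1).astype(float) - mean_face
        w = eigenfaces @ x                     # its M coordinates in face space

        # Distance to each stored face's weight vector; the nearest one wins,
        # provided it is close enough to count as a match at all (threshold
        # would have to be tuned on real data).
        distances = np.linalg.norm(known_weights - w, axis=1)
        best = int(np.argmin(distances))
        return best if distances[best] < threshold else None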
Most likely the actual implementation will be done in some combination of Python and Matlab, unless we get crazy and decide to try this in real time (it should be feasible - these are efficient algorithms), in which case some C will be necessary.
(More references and links coming - we have a number of other articles on paper, most provided by Paul Tevis, who gave a talk on the subject for Comp 540)