
Facial Feature Detection

Automatic face detection and facial feature localization have been difficult problems in computer vision for several years. The difficulty stems from the large variation a face can exhibit in a scene due to factors such as facial position, expression, pose, illumination and background clutter. We propose a system that uses simple image processing techniques to find candidates for faces and facial features, and then selects the candidate formation that maximizes the likelihood of being a face, thereby pruning false-alarm candidates.
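
As a minimal sketch of this generate-and-test strategy (the helper routines find_face_candidates and face_likelihood are hypothetical placeholders standing in for the detection and scoring stages described below, not the system's actual interfaces), the selection step amounts to keeping the highest-scoring candidate formation:

    def select_best_formation(image, find_face_candidates, face_likelihood):
        # Score every candidate formation and keep the most face-like one;
        # low-scoring (false alarm) formations are simply discarded.
        best_formation, best_score = None, float("-inf")
        for formation in find_face_candidates(image):
            score = face_likelihood(image, formation)
            if score > best_score:
                best_formation, best_score = formation, score
        return best_formation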

Starting with skin classification, the system finds blob-like regions in the image that might be faces. The symmetry transform is applied to the skin regions to find dark blobs that could be eyes and horizontal limbs that could be a mouth. Simple vertical edge detection yields an approximation of the locus of the nose. A 3D model of the average human head is then aligned to anchor points at the positions of the eyes, nose and mouth and warped into a canonical frontal view. By warping the image at various anchor points and minimizing the ``Distance From Face Space'', the system finds the most likely locations of the eyes, nose and mouth among all possible candidates. The algorithm [6] is explained in further detail below.
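
As a rough illustration (a sketch under stated assumptions, not the exact implementation of [6]), the ``Distance From Face Space'' of a warped candidate view can be computed as the reconstruction error after projecting onto an eigenface (PCA) basis. Here mean_face and the basis U are assumed to come from an offline training set of frontal faces, and warp_to_frontal is a hypothetical stand-in for the 3D-model-based warp described above:

    import numpy as np

    def distance_from_face_space(warped_view, mean_face, U):
        # warped_view: vectorized candidate region in the canonical frontal frame
        # mean_face:   mean of the vectorized training faces
        # U:           d x k orthonormal eigenface basis (columns = eigenfaces)
        x = warped_view - mean_face
        reconstruction = U @ (U.T @ x)             # projection onto face space
        return np.linalg.norm(x - reconstruction)  # energy outside face space

    def best_anchor_hypothesis(candidates, warp_to_frontal, mean_face, U):
        # Choose the eye/nose/mouth hypothesis whose warped view lies
        # closest to face space (minimal DFFS).
        return min(candidates,
                   key=lambda anchors: distance_from_face_space(
                       warp_to_frontal(anchors), mean_face, U))

The hypothesis with minimal DFFS then gives the final estimate of the eye, nose and mouth locations.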



 