Visual Recognition of American Sign Language Using Hidden Markov Models

Thad Starner and Alex Pentland

To appear in the 1995 International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland


Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. We describe an HMM-based system for recognizing sentence-level American Sign Language (ASL) that attains a word accuracy of 99.2% without explicitly modeling the fingers (40-word lexicon, five-word sentences with a strong grammar).
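To illustrate the core decoding step behind HMM-based recognition, the sketch below implements standard Viterbi decoding for a discrete-observation HMM. All states, observations, and probabilities here are invented toy values for illustration; the paper's actual models operate on continuous hand-tracking features, not these symbols.

```python
# Minimal Viterbi decoding for a discrete-observation HMM.
# Toy parameters only -- not the models or features used in the paper.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for obs."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical two-state gesture model: hand "move" vs. "hold",
# observed through a coarse speed feature ("fast" or "slow").
states = ["move", "hold"]
start_p = {"move": 0.6, "hold": 0.4}
trans_p = {"move": {"move": 0.7, "hold": 0.3},
           "hold": {"move": 0.4, "hold": 0.6}}
emit_p = {"move": {"fast": 0.7, "slow": 0.3},
          "hold": {"fast": 0.2, "slow": 0.8}}

print(viterbi(["fast", "fast", "slow"], states, start_p, trans_p, emit_p))
# -> ['move', 'move', 'hold']
```

In a recognizer like the one described, one such model is trained per sign, and decoding selects the sign (or sign sequence, under the grammar) whose model best explains the observed feature stream.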