We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk-mounted camera and achieves 92\% word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98\% word accuracy (97\% with an unrestricted grammar). Both experiments use a 40-word lexicon.
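For readers less familiar with the underlying machinery, a brief sketch of standard HMM decoding (textbook material, not specific to the systems above): recognition selects the state sequence that best explains the observed hand-feature vectors $o_1, \ldots, o_T$, which the Viterbi recursion
\[
\delta_t(j) = \max_i \bigl[\, \delta_{t-1}(i)\, a_{ij} \,\bigr]\, b_j(o_t)
\]
computes efficiently, where $a_{ij}$ is the transition probability from state $i$ to state $j$ and $b_j(o_t)$ is the probability of emitting observation $o_t$ in state $j$. Continuous, sentence-level recognition then amounts to running this decoding over concatenated word models, optionally constrained by a grammar.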