
Summary

Affect recognition is an imperfect task even between humans. In studies of affect in speech, in the absence of context clues, people were able to identify affective states with only about a 60% recognition rate [9], and approval or disapproval with a rate of 65-85% [10].

Computers are now beginning to recognize affective facial expressions through video analysis, at rates of 80-98% on small sets of emotions [11], [12], with the best performance on deliberately expressed (mildly exaggerated) emotions.

Processing physiological signals offers yet another avenue for communicating affective state to a computer. These preliminary results show that single emotions such as anger, and emotional attributes such as arousal and valence, can be identified at a level comparable to human recognition of emotion. This may be used to help the computer learn a user's preferences in the same way that people learn from each other's affective responses.

Perhaps the best results will come when the computer has access to multiple modalities: face, voice, gesture, posture, physiological sensing, and so forth. With contextual information such as user location, time of day, workload, proximity to friendly or unfriendly people, and diction analysis of direct user responses, affect analysis should become even more precise in building a long-term model of an individual's affective responses. This information can then be used to help computers choose more intelligent responses when interacting with people.


 
Table: Although all eight classes could not be separated simultaneously, several subsets of three emotion classes could be differentiated using these features. Errors are listed per class (em1-em2-em3).

    Emotion Set (em1, em2, em3)  |  Linear           |  Quadratic
                                 |  errors   correct |  errors   correct
    no em., joy, rev.            |  1-7-4    80 %    |  2-5-4    82 %
    anger, hate, r.l.            |  6-4-3    78 %    |  5-4-3    80 %
    anger, hate, rev.            |  5-4-3    80 %    |  5-3-3    82 %
    anger, grief, rev.           |  2-6-2    83 %    |  3-4-1    87 %
    grief, love, rev.            |  6-6-4    73 %    |  5-5-5    75 %
    anger, r.love, rev.          |  4-5-3    80 %    |  4-6-3    78 %
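The linear and quadratic classifiers compared in the table can be read as Gaussian maximum-likelihood classifiers: a shared (pooled) covariance yields a linear decision boundary, while per-class covariances yield a quadratic one. A minimal sketch on synthetic data follows; the class means, feature count, and sample sizes below are invented placeholders, not the physiological features used in this work.

```python
# Sketch: Gaussian classification with pooled (linear) vs. per-class
# (quadratic) covariance. Synthetic 3-class data stands in for the
# paper's physiological feature vectors.
import numpy as np

rng = np.random.default_rng(0)

# Three synthetic "emotion" classes: 20 samples each, 4 features apiece.
means = [np.zeros(4), np.full(4, 2.0), np.array([2.0, 0.0, 2.0, 0.0])]
X = np.vstack([rng.normal(m, 1.0, size=(20, 4)) for m in means])
y = np.repeat([0, 1, 2], 20)

def gaussian_classify(X, y, pooled):
    """Assign each row of X to the class with highest Gaussian log-likelihood."""
    classes = np.unique(y)
    mus = [X[y == k].mean(axis=0) for k in classes]
    per = [np.cov(X[y == k].T) for k in classes]
    # Linear case: average the class covariances into one shared matrix.
    covs = [sum(per) / len(per)] * len(classes) if pooled else per
    scores = []
    for mu, cov in zip(mus, covs):
        d = X - mu
        inv = np.linalg.inv(cov)
        maha = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis distances
        scores.append(-0.5 * maha - 0.5 * np.linalg.slogdet(cov)[1])
    return classes[np.argmax(scores, axis=0)]

for pooled, name in [(True, "linear"), (False, "quadratic")]:
    pred = gaussian_classify(X, y, pooled)
    # Per-class error counts (the "a-b-c" columns) and overall accuracy.
    errs = [int(np.sum(pred[y == k] != k)) for k in range(3)]
    acc = 100.0 * float(np.mean(pred == y))
    print(f"{name}: errors {errs[0]}-{errs[1]}-{errs[2]}, {acc:.0f} % correct")
```

The quadratic classifier has more parameters (one covariance matrix per class), so with small sample sizes it can overfit; this is one reason the table's linear and quadratic rates are close.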



Jennifer Healey - fenn@media.mit.edu
1999-02-11