AFFECTIVE COMPUTING FOR HCI

Rosalind W. Picard

MIT Media Laboratory

Appears in Proceedings of HCI '99, Munich, Germany, August 1999.

1 Introduction

Not all computers need to pay attention to emotions, or to have emotional abilities. Some machines are useful as rigid tools, and it is fine to keep them that way. However, there are situations where the human-machine interaction could be improved by having machines naturally adapt to their users, and where communication about when, where, how, and how important it is to adapt involves emotional information, possibly including expressions of frustration, confusion, disliking, interest, and more. Affective computing expands human-computer interaction by including emotional communication together with appropriate means of handling affective information.

This paper highlights recent and ongoing work at the MIT Media Lab in affective computing, computing that relates to, arises from, or deliberately influences emotion. This work currently targets four broad areas related to HCI: (1) Reducing user frustration; (2) Enabling comfortable communication of user emotion; (3) Developing infrastructure and applications to handle affective information; and, (4) Building tools that help develop social-emotional skills.

2 Reducing User Frustration

Not only do many people feel frustration with technology, but they show it. A widely publicized 1999 study by Concord Communications in the U.S. found that 84% of help-desk managers surveyed said that users admitted to engaging in "violent and abusive" behavior toward computers. It seems that no matter how hard we researchers work on perfecting machine and interface design, frustration can still occur in the interaction. Most HCI research has aimed to prevent frustration, which continues to be an important goal. However, there is also a need to address frustration at run time. Affective computing can be used to address both needs: (1) identifying frustrating situations, both at design time and at run time, and (2) helping reduce user frustration during an interaction.

We have developed a system that gathers and analyzes two physiological signals together with mouse clicks in an effort to characterize episodes of user behavior when the user experiences problems (Fernandez and Picard 1997). Initial results were significantly better than random at detecting and recognizing such episodes in 21 out of 24 users. We are also adapting mice with pressure sensors to make it easy for people to deliberately express frustration at an application, and to have these moments of expression associated with software events. Even if the system is not smart enough to fix the problem that irritates you, it could (perhaps anonymously) begin to let designers know what those things are---providing a kind of continuous human factors analysis.
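To make the pipeline concrete, here is a minimal sketch of click-aligned feature extraction followed by a toy classifier. The sample rate, window length, choice of signals and features, and the classifier itself are illustrative assumptions, not the implementation reported in (Fernandez and Picard 1997).

```python
import numpy as np

FS = 20          # assumed physiological sample rate (Hz)
WINDOW_S = 5     # seconds of signal examined after each mouse click

def window_features(signal, click_idx):
    """Summarize one signal in a short window following a click."""
    w = signal[click_idx : click_idx + FS * WINDOW_S]
    slope = np.polyfit(np.arange(len(w)), w, 1)[0]   # linear trend
    return np.array([w.mean(), w.std(), slope])

def episode_features(skin_cond, bvp, click_indices):
    """One feature vector per click, concatenating both channels."""
    return np.array([np.concatenate([window_features(skin_cond, i),
                                     window_features(bvp, i)])
                     for i in click_indices])

class NearestMeanClassifier:
    """Toy stand-in for a trained recognizer: label each episode by
    its distance to per-class mean feature vectors."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.means_ = {c: X[y == c].mean(axis=0) for c in set(y)}
        return self
    def predict(self, X):
        classes = list(self.means_)
        dists = np.stack([np.linalg.norm(X - self.means_[c], axis=1)
                          for c in classes])
        return [classes[i] for i in dists.argmin(axis=0)]
```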

"It looks like things didn't go very well," and "We apologize to you for this inconvenience" are example statements that people use in helping one another manage negative emotions once they have occurred. Such statements are known to help alleviate strong negative emotions such as frustration or rage. But can a computer, which doesn't have feelings of caring, use such techniques effectively to help a user who is having a hard time? To investigate, we built an agent that practices some active listening, empathy, and sympathy, and tested it with 70 users who experienced various levels of frustration (Klein et al., 1999). The agent assesses frustration and interacts with the user through a text dialogue box (with no face, voice, fancy animation or use of the pronoun "I"). Compared to two control conditions, interactions with the emotion-savvy agent led to behavior indicative of a significant decrease in frustration. These results suggest that today's machines can begin to help reduce frustration, even when they are not yet smart enough to identify or fix the cause of the frustration.

3 Enabling Communication of User Emotion

People naturally express emotion to machines, but machines do not naturally recognize it. Emotion communication requires that a message be both sent and received. In addition to the efforts above aimed at user frustration, we are building tools to facilitate deliberate emotional expression by people, and to enable machines to recognize meaningful patterns of such expression.

Emotion can be sensed in an ongoing way, or by interrupting the user for feedback. Consider a focus group where participants are asked to indicate the clarity of packaging labels. If, while reading line 3, a subject furrows his brow in confusion, then he has communicated in parallel with the task at hand, which has many advantages. Alternatively, he could stop at the end of the task and rate the label as mildly confusing on a questionnaire---non-parallel affective communication---occurring via interruption of, or at completion of, the primary task. We are working to enable both kinds of communication, e.g., via eyeglasses that sense changes in facial muscles, such as furrowing the brow in confusion or interest (Scheirer et al., 1999). One advantage of these expression glasses is that they can be used in parallel with concentrating on a task or not, and can be activated either unconsciously or consciously. People are free to keep a "poker face" to mask true confusion if they do not want to communicate their true feelings, and we think this is good.
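As a rough illustration of the sensing side, the sketch below flags furrow-like events by thresholding a sampled facial-muscle signal; the sampling rate, threshold, and refractory logic are assumptions for illustration, not the published design of the expression glasses.

```python
import numpy as np

def detect_furrows(muscle, fs=50, thresh=2.0, min_gap_s=1.0):
    """muscle: 1-D numpy array of a facial-muscle sensor signal.
    Return sample indices of furrow-like events: points where the
    z-scored signal crosses a threshold, with a refractory gap so one
    sustained furrow is not reported many times."""
    z = (muscle - muscle.mean()) / (muscle.std() + 1e-9)
    events, last = [], -fs * min_gap_s
    for i, v in enumerate(z):
        if v > thresh and i - last >= fs * min_gap_s:
            events.append(i)
            last = i
    return events
```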

We are also exploring multi-modal means of emotion communication. Current recognition rates are up to 81% in automatically detecting and recognizing which of eight emotions an actress expressed through four physiological channels (Vyzas and Picard 1999), a level comparable to machine recognition of facial and vocal expressions. We are also beginning to analyze affect in speech jointly with other natural modes of expression. However, all these efforts push the limits of traditional pattern recognition and signal processing algorithms, which have difficulty handling the day-to-day and interpersonal variations of emotional expression; consequently, we are conducting basic research in machine learning theory and in pattern recognition to develop better methods.
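One common tactic against such day-to-day variation is to normalize each session's signals against that session's own baseline before extracting features. The sketch below illustrates the idea; the channel names and the particular statistics are plausible choices for physiological data, offered as assumptions rather than as the method of (Vyzas and Picard 1999).

```python
import numpy as np

CHANNELS = ["emg", "bvp", "skin_conductance", "respiration"]  # assumed set

def day_normalize(day):
    """day: dict mapping channel name -> 1-D array from one session.
    Z-score each channel against that session's own statistics so that
    features become comparable across days."""
    out = {}
    for ch in CHANNELS:
        x = np.asarray(day[ch], dtype=float)
        out[ch] = (x - x.mean()) / (x.std() + 1e-9)
    return out

def segment_features(day, start, length):
    """Simple per-segment statistics of the kind commonly used with
    physiological signals: mean, variability, and mean absolute first
    difference, concatenated across the four channels (12-D vector)."""
    feats = []
    for ch in CHANNELS:
        w = day[ch][start:start + length]
        feats += [w.mean(), w.std(), np.abs(np.diff(w)).mean()]
    return np.array(feats)
```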

It is important to keep in mind that some people do not feel comfortable with "parallel communication" of affect, especially with methods involving signals that people do not usually see. Users may prefer either no sensing, or non-parallel communication means such as dialogue boxes that they control, or tangible or non-tangible icons that they can "hit," "kick" or otherwise interact with to directly communicate affective feedback. People have strong feelings about if, when, where, and how they want to communicate their emotions, and it would be absurd if affective computing technology did not respect these feelings. It is important to develop a variety of means and give users choices.

4 Developing Infrastructure and Applications

Most people think it should be easy to gather data on frustration expression: Just sit a subject down in front of a computer running a certain operating system, and "voilà!" Alternatively, hire an actor or actress to express emotions, and record them. If the actor uses method acting or another technique to try to self-induce true emotional feelings, then the results may closely approximate emotions that arise in natural situations. However, these examples are not as straightforward as they may seem at first: they are complicated by issues such as the artificiality of bringing people into laboratory settings, the mood and skill of an actor, whether or not an audience is present, the expectations of the subject who thinks you are trying to frustrate them, the unreliability of a given stimulus for inducing emotion, the fact that some emotions can be induced simply by a subject's thoughts (over which experimenters have little or no control), and the sheer difficulty of accurately sensing, synchronizing and understanding the "ground truth" of emotional data.

We have developed lab-based experimental methodologies for gathering data (Riseberg et al., 1998). However, the best way to get realistic data may be to catch people expressing emotions to technology in everyday situations. Wearable and ubiquitous computing both offer new possibilities toward this goal. We have built "affective wearables" that sense information from a willing wearer going about daily activities (Picard and Healey 1997). Some of these wearables have been adapted to control devices for the user, such as a camera that saves video based on your arousal-response (Healey and Picard 1998), and a wearable "DJ" that not only tries to select music you like, but music that suits a feature of your mood (Healey et al., 1998). We are sensing data from drivers in situ to learn about natural driving behaviors under stress (Healey et al., 1999). We have also designed and built a wearable system to measure features of expression from professional conductors (Marrin and Picard 1998). Marrin is now adapting this "conductor's jacket" so the wearer can control the play of MIDI music in real-time while making expressive conducting gestures.
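The control loop behind the arousal-triggered camera can be sketched compactly: buffer recent frames, and flush the buffer when the wearer's skin conductance rises sharply. The buffer size and threshold below are illustrative assumptions, not the StartleCam implementation of (Healey and Picard 1998).

```python
from collections import deque

class StartleTriggeredCamera:
    """Keep a rolling buffer of recent frames; flush it to storage when
    skin conductance rises sharply (a simple arousal proxy)."""
    def __init__(self, buffer_frames=150, rise_thresh=0.05):
        self.frames = deque(maxlen=buffer_frames)   # e.g. ~10 s at 15 fps
        self.rise_thresh = rise_thresh              # assumed rise per sample
        self.last_sc = None

    def on_frame(self, frame):
        self.frames.append(frame)

    def on_skin_conductance(self, sc):
        if self.last_sc is not None and sc - self.last_sc > self.rise_thresh:
            self.save_buffer()
        self.last_sc = sc

    def save_buffer(self):
        # Stand-in for writing the clip out; a real device would encode
        # and store the buffered frames here.
        print(f"saving {len(self.frames)} buffered frames")
```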

5 Building Tools to Develop Social-Emotional Skills

Autistics, who tend to have severely impaired social-emotional skills, have sometimes expressed that they love communicating by computer: computers allow for little transmission of non-verbal affective information and help level the playing field for them to communicate with non-autistics. Current intervention techniques for autistic children suggest that many of them can make progress recognizing and understanding the emotional expressions of people if given lots of examples to learn from, and extensive training with these examples. We have developed a system that is aimed at helping young autistic children learn to associate emotions with expressions and with situations. The system plays videos of both natural and animated situations giving rise to emotions, and the child interacts with the system by picking up one or more stuffed "dwarfs" that represent the set of emotions under study, and that wirelessly communicate with the computer. This effort is led by Kathi Blocher, and the system is being tested with autistic children aged 3-7 this month. We are also developing a stuffed animal, "Tigger," that exhibits expressive behaviors in response to how a child plays with it, discriminating potentially abusive actions like poking of the eyes from potentially playful actions like bouncing and light pulling on the tail. This work, led by Dana Kirsch, is also undergoing trials with young children.
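At its core, the toy's behavior selection can be pictured as a mapping from sensed interaction events to expressive responses, as in the toy sketch below; the event and behavior names are assumptions for illustration, not the actual sensing or behavior repertoire of the prototype.

```python
ABUSIVE = {"eye_poke", "hard_squeeze", "tail_yank"}   # assumed event names
PLAYFUL = {"bounce", "light_tail_pull", "pet"}

def react(event):
    """Map one sensed interaction event to an expressive behavior."""
    if event in ABUSIVE:
        return "whimper_and_withdraw"
    if event in PLAYFUL:
        return "giggle_and_bounce"
    return "idle"

print(react("light_tail_pull"))
```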

Over the years, scientists have aimed to make machines that are intelligent and that help people be intelligent. However, they have almost completely ignored the role of emotion in intelligence, tilting the scale so that affect is routinely neglected. We do not wish to see the scale tipped out of balance the other way, where machines twitch at every emotional expression or become overly emotional and utterly intolerable. Rather, we think research is needed to learn how affect can be used in a balanced, respectful and intelligent way; this should be the practical aim of affective computing in HCI.

6 References

The references below support this brief overview of HCI-related work in affective computing at the MIT Media Lab; for pointers to related research conducted elsewhere, please see the reference lists in these articles.

Fernandez, R. and Picard, R.W. (1997). Signal Processing for Recognition of Human Frustration, Proc. IEEE ICASSP '98, Seattle, WA.

Healey, J., Dabek, F. and Picard, R.W. (1998). A New Affect-Perceiving Interface and its Application to Personalized Music Selection, Proc. 1998 Workshop on Perceptual User Interfaces, San Francisco, CA.

Healey, J. and Picard, R.W. (1998). StartleCam: A Cybernetic Wearable Camera, Proc. Intl. Symp. on Wearable Computing, Pittsburgh, PA.

Healey, J., Seger, J. and Picard, R.W. (1999). Quantifying Driver Stress: Developing a System for Collecting and Processing Bio-Metric Signals in Natural Situations, Proc. Rocky Mt. Bio-Eng. Symp., Boulder, CO.

Klein, J., Moon, Y. and Picard, R.W. (1999). This Computer Responds to User Frustration, CHI '99, Pittsburgh, PA.

Marrin, T. and Picard, R.W. (1998). Analysis of Affective Musical Expression with the Conductor's Jacket, Proc. XII Col. Musical Informatics, Gorizia, Italy.

Picard, R.W. and Healey, J. (1997). Affective Wearables, Personal Technologies, Vol. 1, No. 4, 231-240.

Riseberg, J., Klein, J., Fernandez, R. and Picard, R.W. (1998). Frustrating the User on Purpose: Using Biosignals in a Pilot Study to Detect the User's Emotional State, CHI '98, Los Angeles, CA.

Scheirer, J., Fernandez, R. and Picard, R.W. (1999). Expression Glasses: A Wearable Device for Facial Expression Recognition, CHI '99, Pittsburgh, PA.

Vyzas, E. and Picard, R.W. (1999). Online and Offline Recognition of Emotion Expression from Physiological Data, submitted to Workshop on Emotion-Based Agent Architectures, Int. Conf. on Autonomous Agents, Seattle, WA.