There are dozens of applications of affective computing in addition to the medical and health applications mentioned above [Pic97]. For example, emotions are known to provide a keen index into human memory; therefore, a computer that pays attention to your affective state will be better at understanding what you are likely to recall on your own, and what is of interest to you. This is potentially very useful in helping people deal with information overload. For example, instead of a system recording everything you hear, see, or click on, the system might learn to record (or play back) just those places where you were interested. Or, it might play back just those places in a lecture that you missed, perhaps because your mind wandered or you were bored. Augmenting a system like Steve Mann's WearCam [Man97] with affective sensing and pattern recognition could help it learn when to ``remember'' the video it collects, instead of always relying on the user to tell it what to remember or forget. Of course the user can still direct what the system does; that function does not go away. The goal is simply to begin to automate those functions that the user typically performs, especially where they are predictable from affective information.
Suppose, for example, that you let the WearCam roll in a continuously learning mode while playing with a cute little child. It might notice that you always save the shots where the child makes you laugh or smile. By detecting these events, it could become smarter about automatically saving these kinds of photos in the future. Moreover, by labeling the photos with these affective events, you can later ask the system to retrieve data by its affective qualities: ``Computer, please show us the funny images.'' Of course the wearer should be free to communicate with the system at all times, which includes sometimes overriding what the system has learned. But if the wearable learns continuously, by watching what the wearer chooses, it should help reduce some of the wearer's workload and enable the wearer to offload repetitive tasks.
We have built a prototype of an affective WearCam, based on the wearable described in the next section. This prototype includes a small camera worn as a pendant around the wearer's neck, together with skin conductivity sensors and pattern recognition software. The camera continuously records and buffers images in a rotating buffer, deleting the oldest images as the buffer fills. Simultaneously, the system uses small electrodes to sense the wearer's skin conductivity, either across two fingers or across the arch of the foot. Pattern recognition software has been trained to recognize the wearer's ``startle response,'' a skin conductivity pattern that occurs when the wearer is startled by a surprising event. Unlike many affective signals, the human startle response is fairly robust and easy to detect. With a matched filter and threshold detector, the startle pattern in the wearer's skin conductance signal is detected in real time. The skin conductivity response occurs with a typical latency of three seconds after the startling event. When the pattern is detected, the images leading up to the startle event are extracted from the buffer. The buffer can be set to hold arbitrary amounts of imagery, typically in the range of 5 seconds to 3 minutes of data. When a startle is detected, the images extracted from the buffer can then be saved into more permanent memory for your later perusal, or automatically sent back to a remote location to be analyzed by a ``safety net'' [Man97], a community of friends or family with whom you feel secure, to see if the event warrants any action on your behalf.
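The detect-and-capture loop just described can be sketched in a few lines. The sampling rate, buffer length, template shape, and threshold below are illustrative assumptions, not the parameters of the actual StartleCam:

```python
from collections import deque

SAMPLE_RATE_HZ = 10   # hypothetical skin-conductance sampling rate
BUFFER_SECONDS = 5    # how much imagery the rotating buffer holds

# A crude startle template -- sharp rise, slower decay -- standing in
# for the trained matched filter described in the text.
TEMPLATE = [0.0, 0.5, 1.0, 0.8, 0.5, 0.3, 0.1]

def matched_filter_score(window, template=TEMPLATE):
    """Correlate recent conductance samples with the startle template."""
    # Subtract the window mean so slow baseline drift cannot trigger detection.
    mean = sum(window) / len(window)
    return sum((w - mean) * t for w, t in zip(window, template))

class StartleCam:
    """Toy sketch: rotating image buffer plus a threshold detector
    running on the skin conductance signal."""

    def __init__(self, threshold=1.0):
        # deque with maxlen silently drops the oldest frame when full.
        self.frames = deque(maxlen=SAMPLE_RATE_HZ * BUFFER_SECONDS)
        self.window = deque(maxlen=len(TEMPLATE))
        self.threshold = threshold
        self.saved = []   # stands in for the ``more permanent memory''

    def step(self, frame, conductance_sample):
        self.frames.append(frame)
        self.window.append(conductance_sample)
        if len(self.window) == len(TEMPLATE):
            if matched_filter_score(list(self.window)) > self.threshold:
                # Startle detected: save the imagery leading up to the event.
                self.saved.append(list(self.frames))
                self.window.clear()   # avoid re-triggering on the same response
```

Because the buffer always holds the most recent frames, the images saved at detection time are precisely those leading up to the startle, matching the behavior described above.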
The StartleCam is an example where analysis of a wearer's affective patterns triggers actions in real time. In the future, a ``fear detector'' might also trigger a wearable camera to save a wide-angle view of the environment, and with a global positioning system attachment, the wearer's position, viewpoint, and fear state could all be transmitted over a wireless modem.
The applications extend beyond safety to many other domains. In an augmented reality role-playing game, a fear detector might change the wearer's appearance to other players, perhaps applying a ``cloak'' or updating an avatar's expression. Alternatively, a noted reduction in fear might be recognized, and the player rewarded for overcoming his fear, with bonus points for courage.
Applications of affective wearables extend to other forms of information management beyond image and video. An intelligent web browser responding to the wearer's degree of interest could elaborate on objects or topics that the wearer found interesting, until it detected the interest fading. An affective assistant agent could intelligently filter your e-mail or schedule, taking into account your emotional state or degree of activity.
The relationship between long-term affective state, or ``mood,'' and musical preferences can lead to other personal technology applications. Music is perhaps the most popular and socially accepted form of mood manipulation. Although it is usually impossible to predict exactly which piece of music somebody would most like to hear, it is often not hard to pick what type of music they would prefer--a light piano sonata, an upbeat jazz improvisation, a soothing ballad--depending on what mood they are in. As wearable computers gain the capacity to store and play music, to sense the wearer's mood, and to analyze feedback from the listener, they have the opportunity to learn patterns relating the wearer's mood, environment, and musical preferences. The ultimate musical suggestion system, or ``affective CD player,'' would be one that takes into account not only your musical tastes but also your present conditions, both environmental and mood-related.
The possibilities are diverse. A wearer who jogs with her wearable computer might like it to surprise her sometimes with uplifting music when it detects muscle fatigue and she starts to slow down. Another wearer might want the system to play his favorite soft relaxing music whenever his stress indicators hit their highest levels. He might also want the computer to evaluate its own success in helping him relax, by verifying that, after some time, he did reach a lower stress level. If the wearer's stress level increased with the music, or with a suggestion of music, then the computer might politely try another option later.
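One minimal way to realize such an ``affective CD player'' is to keep a running success score for each mood/music pairing, rewarding a selection whenever the wearer's stress subsequently drops. The class name, mood labels, and track names below are hypothetical placeholders, not part of any system described here:

```python
class AffectiveDJ:
    """Toy sketch: choose music by mood, then learn from whether it helped."""

    def __init__(self, moods, tracks):
        # One running success score per (mood, track) pair, all starting at zero.
        self.scores = {mood: {t: 0.0 for t in tracks} for mood in moods}

    def pick(self, mood):
        """Play the track with the best record for this mood (first wins ties)."""
        return max(self.scores[mood], key=lambda t: self.scores[mood][t])

    def feedback(self, mood, track, stress_before, stress_after):
        """Reward choices that lowered stress; penalize ones that raised it."""
        self.scores[mood][track] += stress_before - stress_after
```

If the soft relaxing music repeatedly fails to lower the stress indicators, its score falls below the alternatives and the system politely tries another option next time, exactly the self-evaluation loop suggested above.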
The whole problem of building systems that adapt to you is an important domain for affective wearables. Too often technology only increases stress, making users feel stupid when they do not know how to operate it, or letting their abilities atrophy when they come to rely on it too heavily. Our goal is to give computers the ability to pay attention to how the wearer feels, and to use this information to adapt better to what the wearer wants.
We need to be careful in considering the role of wearables in augmenting vs. replacing our abilities. For example, we know that when a human is highly aroused (as in a very shocking or surprising situation) she is more likely to remember what is happening--the so-called ``flash-bulb memory'' [BK77]. If the human brain is recording with full resolution at these times, then the wearable imaging system may not need to record more than a snapshot, or it may wish to focus on a wide-angle view, to complement the data the person is likely to remember. In contrast, when the human is snoozing during a lecture, the wearable might want to kick in and record the parts the wearer is missing. A truly helpful system learns the wearer's preferences, and tries to please the wearer by adapting accordingly.