TR#534: What does it mean for a computer to "have" emotions?
Rosalind W. Picard
To appear as a chapter in the book "Emotions in Humans and Artifacts," edited by R. Trappl, P. Petta, and S. Payr
ABSTRACT
There is a lot of talk about giving machines emotions, some of it
fluff. Recently at a large technical meeting, a researcher stood up
and talked of how a Barney stuffed animal (the purple dinosaur for
kids) "has emotions." He did not define what he meant by this, but
after repeating it several times, it became apparent that children
attributed emotions to Barney, and that Barney had deliberately
expressive behaviors that would encourage the kids to think Barney had
emotions. But kids have attributed emotions to dolls and stuffed
animals for as long as we know; and most of my technical colleagues
would agree that such toys have never had and still do not have
emotions. What is different now, which prompts a researcher to make
such a claim? Is the computational plush an example of a computer
that really does have emotions? If not Barney, then what would be an
example of a computational system that has emotions? I am not a
philosopher, and this paper will not be a discussion of the meaning of
this question in any philosophical sense. However, as an engineer I
am interested in what capabilities I would require a machine to have
before I would say that it "has emotions," if that is even possible.