Rosalind Picard: [Full interview transcript]

Rosalind Picard is the founder and director of the Affective Computing research group at the MIT Media Lab. Read Rosalind’s full bio here…

Click here to watch Rosalind’s interview, “Affective Computing.”

Rosalind: I’m Rosalind Picard. I direct research in Affective Computing at the MIT Media Lab, and that is computing that relates to, arises from, or deliberately influences emotion. Many years ago, encouraged to take risks, I decided to write a book called “Affective Computing,” at a time when emotions were really seen as something that made you irrational, something that was pretty much undesirable.

Ramona: How has this field evolved over the years?

Rosalind: Yeah, it’s really remarkable how quickly, based on findings from neuroscience, psychology, medicine, marketing, and education, emotion has come to be seen as something that plays a constant role in our experience. It’s not that you’re usually unemotional and suddenly there is an outburst of emotion, although that can be the most visible part; emotion is always there. It’s like weather: whether it’s sunny, boring, raining, thunderstorms, tornadoes, it’s always there, and it’s changing and biasing and affecting everything we do.

Ramona: What impact does this understanding of emotion, and of machines having emotion, have on the field of robotics?

Rosalind: Yeah, I think for robots it’s going to be essential to understand human emotion. Think back to Clippy, that automated paper clip. One of the reasons that Bill Gates got a standing ovation when he said Clippy was going away was because Clippy was so annoying. Now, Clippy had great A.I. It had a really brilliant machine learning ability to tell that you were writing a letter. It was very good at knowing what you were doing, but it was completely impoverished at knowing how you were feeling about the interaction. So if you’re having a really bad time and Clippy kind of marches out smiling and happy, well, that’s rude, right, for somebody to enter your space and completely ignore your distress and start dancing and acting happy? It’s like Schadenfreude in German, being happy at your misfortune. So that’s a way to engender disliking, and it is no surprise that people started to hate Clippy.

Now imagine if Clippy were a full-size robot coming into your living room and acting really overly friendly and happy when you’re in a bad mood, or you’re in pain, or you’re suffering some grief, and robot Clippy doesn’t read your emotions right. You know it’s going to look like the Flesh Fair in the A.I. movie, right? When Kubrick has the melting of the robots, people are going to hate these things. So what people need to realize, and an increasing number of roboticists are realizing this, is that emotional intelligence is essential for any technology that is interacting with a human in a way that purports to be intelligent.

Ramona: When it comes to machines and emotions, it’s one thing to have artificial intelligence, and another thing to have that empathy, to be able to read the emotion that the human is giving off. But you also say that for these machines to really be intelligent, they’re going to have to feel emotion. How do we make machines feel?

Rosalind: Yes, so the phrase for a machine to feel emotion is a bit of a complicated one; we might need to pull it apart a little bit. I think the machine needs to be able to express empathy. It needs to be able to look sorry if it’s done something wrong. It needs to maybe share your joy and share some of your sorrow in terms of outward appearance. Now, all of that can be done without the machine having feelings like you and I have. In fact, when it comes to one aspect of the feelings we have, that qualia, that experiential component, nobody knows how to build that in a machine yet. It’s not even that we don’t know how to build it; we don’t even see how it’s possible yet with current hardware and software, biological computers, and so forth. We just don’t see it emerging from the complex things that we know how to build. That said, we can’t prove that it’s impossible, right? So the state of the art right now is that machines can look like they have feelings. They have some internal mechanisms that bias their decision making and actions in a way that performs a function similar to what feelings perform in us, but machines currently lack that experiential component of feelings that we have.

Ramona: So you think that even once robots can sense our feelings and feel for themselves, they will still be machines?

Rosalind: I don’t know if robots will ever have feelings the way that we do. Again, I don’t see how that could happen right now, but that doesn’t prove it couldn’t happen. I do think that it will be some time before robots, of their own accord, go out and seek their rights as robots because they feel unjustly treated, and if we get to that point it would be because we basically built them for that purpose. So do you see? As the creators of these machines, we have the choice to design them in such a way that they go about seeking these rights, or we keep them around more, one might say, as servants or companions, or as partners in other things we want to accomplish.

Ramona: Why do you think there is resistance to the idea of emotion in machinery, or emotion in robotics?

Rosalind: Great question! Why is there resistance to emotion in machines? I think on one level these things just seem like they don’t go together, and it’s not a nice combination like oil and vinegar; it’s more like you wouldn’t puree your vegetables and your dessert together in the blender and then suck it through a straw. We kind of want machines to be logical, rational, predictable, and emotion has seemed like the antithesis of that. And yet, as we see, people already attribute emotion to machines. Like when it crashes on you, I’ve had people say to me, “This machine hates me.” Well, of course it doesn’t hate them, but people are already attributing human-like characteristics even to desktop, very machine-like boxes.

So I think the question is not whether people are uncomfortable with it, but in what ways we can use emotion to make the experience more comfortable. Usually that’s done in a way where emotion actually makes the machine smarter, more respectful, more acknowledging of your interests and your likes and dislikes, without it actually being emotional in your face.

It’s interesting that A.I., for its first 50 years, with very few exceptions, didn’t think emotions were important. They’ve got math, they’ve got language, they’ve got logic and other decision-making and perceptual processes, but they didn’t pay attention to emotion. I was of that camp for quite a while, in fact. I was trying to build machines that were better at perceiving information, at seeing and hearing, and as I learned how our brains learn and perceive information, I stumbled into these findings that deep down beneath our highly evolved cortex are the subcortical structures that are deeply involved in emotion, attention, and memory. And with my A.I. hat on, I wasn’t interested in emotion, but attention and memory are important. I started to learn more about those deeper brain structures, and I kept bumping into the fact that, “oh dear,” emotion is actually key. It seems to be affecting everything that goes into our memory; everything that shifts our attention is guided in part by this emotion system. So if we want to actually build an A.I. that works in the real world, that handles complex, unpredictable information with flexible intelligence, we basically need an emotion system to do it. You can imagine this didn’t go over real well in the beginning, but now I think a lot of people are starting to see that this is a core part of building an intelligent system.


  • Emotions play a crucial role in human experience
  • In robotics, teaching AI to understand and emulate human emotions makes machines respond better to human needs
  • Right now, machines can be taught to respond appropriately to human emotions, but they lack the experiential component needed to actually feel emotions of their own
  • Humans are in control and can design robots to fit our needs as companions or servants
  • People resist the idea of robots with emotions because the two seem inherently contradictory, but Picard has found that understanding human emotion is essential to advancing robotics

Learn more about Rosalind Picard on her webpage and follow her on Twitter @RosalindPicard.
