>> Thalia Wheatley: Over the course of millions of years, the human brain evolved to solve certain problems. And you might not have realized this, but one of the biggest problems we evolved to solve is how to detect the minds in our environment. Because minds, more than anything else, affect our daily lives, our well-being. Minds think. Minds feel. Minds act and interact with us. They're our friends, our families, our mates, our enemies. And yet minds are also invisible. So how do we detect another mind?

Well, this is a question that my research aims to solve. And we do so, of course, by using visible cues, cues such as motion. Even in the absence of human form, we can detect the contents of another mind. This log, for example, is proud [laughing]. And we can do so also in sound, in tone of voice, even in music. [ Piano playing ] There we go. There he is. [ Piano ] Oh [laughing].

Perhaps the most reliable, most salient icon of another mind is this: the face. As its root word suggests, the face is simply a facade, a facade of another mind. And faces, as the icons of minds, are so important to us that we have become hardwired to detect them. Right from birth, we find them captivating. A newborn, for example, will track two dots and a line, but only when they are arranged in the configuration of a face. A newborn won't track something that looks like this. And this fascination with faces extends throughout the life span.

What I'm going to show you here is a commercial from this year's Super Bowl. The red blobs indicate where people were looking as they viewed the commercial. [ Commercial playing ] What you can see is that people spend all their time gazing at the faces on the screen, even the face of a doll. Faces are so important to us that we actually have a visual strategy: we err on the side of detection. When in doubt, see a face. And this is exactly why we see faces everywhere. We see faces in clouds. We see faces in buildings. We see faces in parking meters [laughing]. We even see faces in grilled cheese sandwiches [laughing].

And science has discovered the way the brain does this, the way the brain detects a face, by looking at its electrical activity. Whenever we see something, millions of neurons fire in our brain, causing electrical activity. This activity can be measured by putting electrodes on the scalp. Science has found, for example, that this is the kind of electrical response you see when you look at an object such as a clock. Keep this in mind, because now I'm going to show you what the electrical response looks like when we see a face. It's much higher. And this is a doll face, and here is a human face.

Now, this can't be the entire story, right? Because we do discriminate between dolls and humans. If we didn't, we'd wander home with people like this, right? We'd strike up conversations with mannequins. And of course we don't. We don't just seek a face; we seek a mental connection. We look to see if the lights are on in someone's home. So how does the brain do this? Well, it turns out that, if you give the brain a couple hundred more milliseconds, it does it quite well. The green line is the human face, which sustains our attention over time, whereas the doll face drops off to the level of clocks.
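For readers curious how the electrical response described here is actually computed, below is a minimal sketch of the standard event-related potential (ERP) technique: cut the continuous EEG into epochs around each stimulus, subtract the pre-stimulus baseline, and average across trials. The function name, parameters, and numbers are illustrative assumptions, not taken from the study.

```python
import numpy as np

def erp_average(eeg, onsets, sfreq, pre=0.2, post=0.6):
    """Average EEG epochs around stimulus onsets (a standard ERP).

    eeg    : 1-D array, continuous signal from one scalp electrode
    onsets : stimulus onset times in seconds
    sfreq  : sampling rate in Hz
    pre    : seconds of baseline kept before each onset
    post   : seconds of signal kept after each onset
    """
    n_pre, n_post = int(pre * sfreq), int(post * sfreq)
    epochs = []
    for t in onsets:
        i = int(t * sfreq)
        if i - n_pre < 0 or i + n_post > len(eeg):
            continue  # skip epochs that run off the recording
        epoch = eeg[i - n_pre : i + n_post].astype(float)
        epoch -= epoch[:n_pre].mean()  # baseline-correct each epoch
        epochs.append(epoch)
    # Averaging cancels activity that is not time-locked to the stimulus,
    # leaving the event-related response. Comparing the average for face
    # trials against object trials yields curves like the ones described
    # above, where faces evoke a much larger deflection than clocks.
    return np.mean(epochs, axis=0)
```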
And if you give the brain this extra couple hundred milliseconds, it does a very good job at detecting a mind -- so good, in fact, that if I take pairs of faces -- here's a doll face, an inanimate face, on the left and a real human face on the right -- and make an artificial continuum between them, and then ask you very simply where the face comes alive, you can do it; and you can do it consistently.

Here, for example, is a movie. It's going to start as a doll face and turn into a human face, and I want you to think about where you think it becomes alive. [ Pause ] And now we can do this one together. I want you to raise your hand when this doll face becomes truly alive, okay? Raise your hand when you think it becomes truly alive. [ Pause ]

Okay. So what I saw was that the majority of you put up your hands when the face was closer to the human endpoint than to the doll endpoint. And we see this many, many times. In fact, this is about the average place where people put the break. And it doesn't matter if you ask the question, When does the face become alive, or, When does the face seem to have a mind, or, When does the face look like it could experience pain, or, When does the face look like it could formulate a plan. The break is always at the same spot. What this tells us is that imputing physical life is tantamount to imputing mental life; that is, thoughts and feelings only happen in the context of another mind. So this is a critical tipping point, because as soon as we see that a face is alive, we imbue it with all kinds of possible mental states.

So where does this tipping point, this critical decision of seeing a mind in a face, happen in the brain? Well, to answer that question, we showed these kinds of faces to people as they lay in this, an fMRI scanner, which is really just a very expensive and sophisticated camera that takes pictures of your brain and, importantly, of blood oxygen levels, so that we can tell which parts of your brain are active during which tasks. Because when neurons fire, they eat up oxygen. So the fluctuating oxygen levels show us which parts of the brain are active when you see different kinds of things, things like faces.

And these faces are particularly interesting to us because you can group them in a couple of different ways. You can think about them, for example, in terms of form. The human face and the doll face look very similar, and the real dog face and the stuffed dog face also look very similar. But you can also think about these faces in a completely different way. You can ask, well, which faces come with a mind attached? And those are the human face and the real dog face; those faces are alive with minds, and the toys are not.

So what did we find? Well, here is a picture of the skull. To orient you, the eyes are on the right, and you see the bridge of the nose and the ears on the left. And if you shave off a part of this skull, you see this area of activation in visual cortex, way back in the brain. This area really likes to group faces based on form. It activates one particular pattern for human and doll faces -- the patterns are very similar -- and another pattern for dogs and stuffed dogs. We think that this part of the brain is the source of the response I showed you earlier, the response that is indiscriminate. Any face will do. But this can't be the whole story, of course.
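The form-based grouping just described is the kind of result a pattern-similarity analysis can expose. The sketch below illustrates the logic only, and is not necessarily the analysis the lab ran: simulate a voxel activation pattern per face type, then correlate every pair of patterns to see which conditions a region treats as similar. All data here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical voxel patterns for the four face types. In a visual-cortex
# region that groups by FORM, human and doll share an underlying pattern,
# as do real dog and stuffed dog; added noise keeps each one imperfect.
human_like = rng.normal(size=n_voxels)
dog_like = rng.normal(size=n_voxels)
patterns = {
    "human":       human_like + 0.5 * rng.normal(size=n_voxels),
    "doll":        human_like + 0.5 * rng.normal(size=n_voxels),
    "dog":         dog_like + 0.5 * rng.normal(size=n_voxels),
    "stuffed dog": dog_like + 0.5 * rng.normal(size=n_voxels),
}

# Correlate every pair of condition patterns; a high correlation means
# the region responds to the two conditions in a similar way.
names = list(patterns)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(patterns[a], patterns[b])[0, 1]
        print(f"{a:>11} vs {b:<11} r = {r:+.2f}")
```

In this toy example, human correlates strongly with doll and dog with stuffed dog, mirroring a form-based region; a region that grouped by animacy would instead pair the human with the real dog.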
And if you get out of visual cortex, you'll start to see that, as you move forward in the brain, the brain starts to do something completely different. It starts to process faces in terms of a mind. We think this response is related to the sustained attention to human faces in particular. And, indeed, most of the brain cares only about human faces, human minds -- areas such as frontal cortex. In fact, all over the brain you see this pattern. You see parts of the brain involved in social and emotional understanding, in empathy and perspective taking.

So what I've shown you so far is that we are experts at detecting a mind in a face, and where that happens in the brain. But what I haven't told you yet is how we do it. How do we glean mind from a face? To answer this question, we went back to the morphs, and we showed people just a little part of them. We showed them either the nose, the mouth, the eye, or a patch of skin. And we asked, given just this little piece of information, can people tell: is it alive? Was this patch of skin from a doll, or was it from a human face? It turns out that a patch of skin is hopeless. We have no idea whether the skin is from a living thing or a nonliving thing. A nose fares no better. A mouth is a little better but still not very good. Give people an eye, though, and the task is effortless. From an eye alone, people can detect whether the lights are on in someone's home. They don't need the entire face. So this suggests that the age-old aphorism is correct: eyes truly are the windows to the soul. They are the portals to another mind.

And you might have already realized this if you've ever seen a CGI movie. CGI stands for computer-generated imagery, and a CGI movie you might have seen is The Polar Express. Now, The Polar Express was a technological achievement. It took motion-capture sensors on Tom Hanks, extracted his motion, and put it on the animation of a conductor. But the movie fell short. It was widely panned as having characters that were flat and lifeless. Something was not quite right with them. So for Beowulf they put many more sensors on the face and other parts of the body to try to make the characters more lifelike, more realistic. And this is what a New York Times critic said about Beowulf: "Although the human face and especially the eyes in Beowulf look somewhat less creepy than they did in The Polar Express, they still have neither the spark of true life nor that of an artist's unfettered imagination. You see the cladding but not the soul."

This raises an interesting question, because it suggests that there might be a chicken-and-egg problem here. Did the human brain always have the ability to detect another mind, or is this a recent development in response to all these human imposters in our midst? That is, have we recently developed the ability to see mind in a face because we're surrounded by synthetic faces? We live in a world populated by things like this: mannequins. We go to the movies. We see animations. We play video games with avatars. So to answer the question whether this is a fundamental and ancient property of human brains or a recent development, we went somewhere that doesn't have these things. We flew halfway around the world, from Hanover, New Hampshire, to Banlung, Cambodia, to a remote hill tribe, the Luoc [phonetic]. And we set up in a hut, and we spread out the morphs on the floor -- eleven morphed images from a human face to a doll face.
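As an aside, a continuum like the eleven-step morph set just mentioned can be roughly approximated by cross-fading between two aligned photographs. A true face morph also warps the facial geometry between landmarks, so the alpha-blend below is only a sketch under that simplifying assumption, and the file names are placeholders.

```python
from PIL import Image

# Placeholder file names: the two photographs must be the same size and
# roughly aligned (eyes to eyes, mouth to mouth) for the blend to work.
doll = Image.open("doll_face.png").convert("RGB")
human = Image.open("human_face.png").convert("RGB")

# Eleven steps from pure doll (alpha = 0.0) to pure human (alpha = 1.0).
for step in range(11):
    alpha = step / 10
    Image.blend(doll, human, alpha).save(f"morph_{step:02d}.png")
```

Asking observers where along such a continuum the face "comes alive," and averaging the break points, gives the kind of tipping point described earlier in the talk.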
And we asked them very simply to break the faces at the point where they become alive. Here's a guy doing it. Closest to us is the doll face; closest to him is the human face. And he's going to break them at the animacy point. Here we go. That's four faces that he considers alive and seven faces he's pushing into the non-alive category. So the majority of faces he deems not alive, and the minority he believes are alive. And this is just like our U.S. sample, right? We're very stringent about what counts as having a mind in a face. We can see faces everywhere, but we're strict about imputing a mind.

What I haven't shown you is what we actually showed them first when we got there. We didn't, in fact, have faces that looked like this when we arrived. We only had the faces we had shown our American sample. So we showed them Caucasian morphs first. And they were hopeless at this task. They really couldn't see a mind in any of the faces [laughing]. In fact, they would even switch the endpoints. The first couple of people who did this, we thought, well, they didn't understand the task. But no; they would say, this Barbie face is the living face, and this human face is the nonliving face. So we thought, well, could this be it? To find out, we hurriedly made Cambodian morphs while we were there. And, of course, when we put out the Cambodian morphs, they were like, oh, that's easy. And they split them the way the U.S. sample does.

So while the Caucasian face set was an experimental failure, it was probably our most important finding, because it suggests that you only see minds in faces that you're familiar with. It suggests that, if you limit yourself to seeing one kind of face, you're only really going to impute minds to those kinds of faces. You quite literally risk seeing other kinds of faces, the ones you're not familiar with, as mere objects. Conversely, if we live in a diverse world where we see all kinds of different faces, then we can tune up the brain and start to see people as minds, not just objects. And we want to do this. We need to do this, because this is the way we engage the rest of the brain, the brain that's critical for social and emotional understanding and empathy. Without these parts of the brain, we would see people merely as objects. And we want to see faces not as objects but as minds, because by doing so we can push past the superficial to the humanity in all of us. Thank you. [ Applause ]
TEDxDartmouth 2011 - Thalia Wheatley: How the Brain Perceives Other Minds - March 6, 2011