In the 1990s, Nancy Kanwisher was in an fMRI machine recording responses from her own brain.
Weird hobby, I know, but when she looked at images of people's faces, she noticed something peculiar about her responses.
A small part of her brain was way more active than the others.
As it turned out, she had stumbled upon a new part of the brain.
Today, it's called the fusiform face area, an entire region partly dedicated to recognizing faces.
The discovery revolutionized this area of neuroscience, and led to some other pretty amazing findings.
Like the fact that blind people use this area of their brain to identify faces, too.
And that sometimes, all you need to recognize a face is the sound of chewing.
[♪ INTRO ♪]
This episode was made in partnership with the Kavli Prize.
The Kavli Prize honors scientists for breakthroughs in astrophysics, nanoscience, and neuroscience, transforming our understanding of the big, the small, and the complex.
There's a long, sometimes proud, sometimes regrettable, history of scientists experimenting on themselves.
It's happened enough times to fill multiple SciShow videos with examples.
I'm not saying it's the recommended protocol, but there's something kinda cool about a scientist who's so confident in the safety of their methods that they become their own test subject.
And Kanwisher is certainly part of that history.
In her case, she studied her own brain.
Let me transport you back in time to the late 90s.
Seinfeld was reinventing the sitcom, the Cold War was finally over, and Smash Mouth was popular… for some reason.
Things were changing in neuroscience, too.
Scientists were finally getting to use tools like functional magnetic resonance imaging, or fMRI machines, that let them see inside the brain while it was still working.
At the time, there were only four fMRI machines in the world.
So if you had a chance to use one between 6 and 9 a.m. on a Saturday, you jumped at it.
And if the experiments you were able to run at that unpopular time of the week didn't yield significant results, well, you didn't get out of bed for nothing.
You're going in that machine!
Or at least Kanwisher did.
At first, she wanted to study if our mind's eye uses the same neural pathways as our actual eyes.
She wondered at what point in the process attention butts in, and how we reconcile new information with what we've seen in the past.
But after those investigations didn't pan out,
Kanwisher figured her best shot at getting some publishable data was to look at faces.
Neuroscientists knew that humans and other animals have brain cells that respond to faces, so that was at least a starting point.
But they didn't know much else about the process.
When Kanwisher got in the fMRI machine and looked at pictures of faces, she noticed that one part of her brain really stood out from the others.
It was way more actively involved in facial processing than any other part.
She knew she was onto something.
So her research group compared several other people's brains and found that, in many cases, this region of their brains was particularly excited by faces, too.
The fMRI images showed much less activity in that area of the brain when her study participants looked at pictures of houses, hands, crabs, or other objects.
It appeared to be attuned to faces.
And that includes the faces of other animals or cartoons.
So that explains why they included the word "face" in the name of this brain region.
And since it happened to be located in a part of the fusiform gyrus, presto, we had the fusiform face area, or FFA.
Kanwisher's team wouldn't have discovered the FFA without the data from people in fMRI machines.
But those techniques couldn't tell them what was going on inside the FFA, like which brain cells were responsible for focusing on each minute facial detail.
For that information, one of her postdoctoral researchers,
Winrich Freiwald, and a collaborator, Doris Tsao, turned to monkeys.
The monkey version of the FFA is not identical to ours, but it's similar in its selectivity for faces.
In fact, these researchers published a whole monkey-human comparison, showing how similar they are.
So they put monkeys through the fMRI machine, just like Kanwisher and the other human participants.
The monkeys were shown the same kinds of images the humans had been shown.
And the researchers recorded responses of individual cells within the monkey FFA.
They found that more than 90% of them were dedicated to facial recognition.
Now that they knew that most cells in the monkey FFA worked together on the same project of recognizing a face, this research team took their findings to the next level.
They successfully predicted which face the monkey was looking at based on which cells were active.
It's like looking at a quadratic equation and knowing what shape will appear on the resulting graph.
But for brains.
Like, literal mind reading.
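The idea behind that "mind reading" can be illustrated with a toy decoding model. This is a hypothetical sketch with simulated data, not the researchers' actual method: each face is assumed to evoke a characteristic activity pattern across a population of cells, and a new trial is decoded by finding the closest stored pattern.

```python
import numpy as np

# Simulated population decoding, loosely analogous to predicting
# which face was viewed from which cells were active.
# All numbers here are made up for illustration.

rng = np.random.default_rng(0)
n_cells, n_faces, n_trials = 50, 4, 20

# Each face evokes a characteristic mean response across the population.
face_templates = rng.normal(0.0, 1.0, size=(n_faces, n_cells))

def simulate_trial(face_id):
    """One noisy population response to a given face."""
    return face_templates[face_id] + rng.normal(0.0, 0.5, size=n_cells)

def decode(response):
    """Nearest-template decoding: pick the face whose average
    activity pattern is closest to the observed response."""
    dists = np.linalg.norm(face_templates - response, axis=1)
    return int(np.argmin(dists))

# Decode a batch of trials and measure accuracy.
correct = 0
for _ in range(n_trials):
    face_id = int(rng.integers(n_faces))
    correct += decode(simulate_trial(face_id)) == face_id
accuracy = correct / n_trials
print(f"decoding accuracy: {accuracy:.2f}")
```

With distinct enough patterns and modest noise, the decoder recovers the viewed face almost every time, which is the sense in which population activity lets you "read" what the monkey is looking at.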
As you've probably guessed, monkey mind reading wasn't necessarily the end goal of these studies.
People are pretty self-centered.
So we still had a lot to learn about how human FFAs work under different circumstances.
Like, when you can't see faces at all.
Which led to the next study.
The most surprising thing about the FFA might not be how specific it gets, but how broadly it's used.
I mean, you could be born blind, and still have an active FFA used for the same purposes as a sighted person's.
This was another big Kanwisher study.
One of the first challenges her team faced was designing an experiment for people who can't see.
They got around that in a few ways.
One of them was 3D-printed models of faces, hands, chairs, and mazes for blind people to touch while they were in an fMRI machine.
They tested sighted people to see how their brains responded to touching a face versus seeing one.
In both cases, the FFA was activated.
Then they compared the results from blind participants touching those models with those from sighted participants doing the same, and found the two lined up really nicely.
And to add another layer of assurance, they swapped out the feeling of a face for the sounds of a face.
They compared laughing and chewing with other sounds, like walking, clapping, engines, and waves.
And the laughing and chewing sounds that came from faces activated the participants' FFAs more than any other sounds tested.
Whether it's through touch or sound, they showed that you don't need to see a face to recognize one.
In fact, the FFA works the same way for people who have never seen a thing in their lives and people who look at stuff all day every day.
Maybe because it's connected to the same other parts of the brain that help fill in the blanks.
Which might explain why there are people who can see but still don't recognize faces.
It's a condition called prosopagnosia.
In those cases, the FFA seems to work just like anyone else's, but its communication with other parts of the brain may go haywire.
Thanks to our understanding of the role FFA plays in facial recognition, we're getting closer to figuring out what's going on with that condition and others like it.
So, just like we needed multiple researchers to come together to make these discoveries, we need many parts of the brain to work together to create our view of the world and the people in it.
For their contributions to our understanding of facial recognition,
Kanwisher, Freiwald, and Tsao were awarded the 2024 Kavli Prize in Neuroscience.
You can learn more about their research and personal journeys at www.kavliprize.org.
[♪ OUTRO ♪]