What Do We See When We Look at Someone's Face?

Prof. Galit Yovel, a researcher of faces, explains the science behind line-ups, passport control and why some of us can't follow Game of Thrones.


Talking to: Prof. Galit Yovel, 44, lecturer and researcher in the School of Psychological Sciences and the Sagol School of Neuroscience, Tel Aviv University. Where: In her office at the university. When: Thursday, 11:30 A.M.

What’s involved in the study of face perception?

I try to understand how we recognize people, what kind of information the face offers that helps us recognize them, and also what’s so special about faces that makes them such a distinctive stimulus. It’s a simple stimulus – eyes, nose, mouth – but also full of information. There are specific mechanisms in the brain that react when we see faces.

Some people find it difficult to recognize faces, while for others it’s easy.

There are very clear interpersonal differences in this regard. Studies of twins showed that the differences are genetic and hereditary. And they are not related to any other ability or skill, nor to intelligence or IQ.

Is there an index for checking this? I have a hard time remembering faces, I need to be told the person’s name. Maybe it’s a memory problem?


We could give you a face-recognition test and situate you on a scale in relation to other people. There are other indications, too: For example, people with your difficulty have a hard time watching movies or television series.

That’s true for me. I couldn’t watch “Game of Thrones” – there were too many characters.

Indeed, the difficulty lies in following all the characters. You’re probably at the low end of the scale; most people are somewhere in the middle. The ones at the very top remember almost every person they ever met, each and every face. They can also recognize people regardless of age. For example, in one test we showed pictures of celebrities before they became famous, and the people at the top of the scale were able to recognize them even when they were young.

What about people whose job involves identifying faces, such as border-control officials?

It’s more complicated than it seems. As a researcher of faces, when I’m in line to have my passport checked, for example, I wonder what the chances are that the Italian or American officer will know who is who in the group of Chinese people in front of him. In fact, a recent study of Australian border police, done by simulating their work situation, found that they make a great many mistakes. They didn’t identify 14 percent of the people and they thought 6 percent were different people, even though it was the same person [as in the passport photo].

The ability to recognize people doesn’t improve with time: The ability of someone who has been checking passports for 30 years is no greater than that of someone who’s just started. We think identifying faces is easy, because most of the time we are recognizing people we know. We rarely have to identify the face of someone we don’t know.

What about police lineups?

I am appalled by the thought that people are convicted and sentenced to prison on the basis of eyewitness evidence. There’s a very high chance of making a mistake. It’s estimated that about 75 percent of convicted people who were afterward found to be innocent were the victims of mistaken testimony from an eyewitness.

Is that also because it’s hard for us to identify faces of people from a different race? Perhaps because we are not usually exposed to such faces?

The question is what kind of exposure you need to recognize, say, faces of Chinese people. Is it enough to make do with passive exposure – simply to look at a lot of Chinese faces? A study of nurses in a maternity ward – they have intensive passive exposure to infants – found that even though the nurses see the faces of so many infants, they are unable to recognize them or tell them apart. Passive exposure is simply not enough. But in other experiments, in which we gave names to infants and let people look at them, they got to know the faces and were able to recognize them.

So for recognition, we need a kind of label in addition to a face.

Facial recognition requires two opposite things. On the one hand, to distinguish among different faces, but at the same time to generalize: to look at many different pictures of the same person from different angles and in different lighting, and to grasp that it’s the same person. It’s very difficult to generalize with unfamiliar faces. If I show you many different pictures of a person you don’t know, you will mistakenly think they are different people. That won’t happen with a face that you know.

‘Serious social handicap’

Let’s talk about prosopagnosia – the total inability to recognize faces.

Prosopagnosia is a serious social handicap. Face recognition is critical for us as social animals. We don’t give it much thought, because it seems natural to us. One person who suffered from this told me that when he came to the airport to pick up his daughter, he stared at the mass of people who came out and waited to see who would approach him, since that person would be his daughter. It’s very difficult to get along without being able to recognize faces. People who suffer from prosopagnosia make use of different signals. They recognize people by their gait, their voice, their bodily proportions.

And it’s not necessarily due to brain damage.

It’s estimated that 2 percent of the population is simply born with this condition. Research into this has only been going on for 15 years. What’s interesting is that these people may have suffered from this problem their whole lives without exactly understanding what it was. It’s not like dyslexia, say, where it’s clear to you that you’re unable to read but the child next to you in class can. It’s very hard to understand prosopagnosia. Some people thought they had a social problem, which of course can also develop, because the condition affects your personality.

Think what it’s like coming to work every day and not being able to recognize your colleagues. One woman said that if she is in a public washroom and there are a lot of people, she makes faces in the mirror so she can recognize herself.

And it was only after the face-blindness phenomenon was discovered that scientists found that there is a specific region of the brain responsible for this.

It was discovered in the 1940s that people who suffer damage on the right side of the brain have difficulty recognizing faces. One such region was discovered in 1997 by Nancy Kanwisher, who was my post-doctoral supervisor.

She explained in a TED talk how she discovered it – by spending hours in an fMRI and looking at faces.

She conducted very elegant studies to rule out alternative hypotheses, such as whether it’s possible that this area of the brain responds to body parts in general and not just to faces. Afterward, it turned out that other regions are also involved in face recognition. Recently we have started showing people video clips of faces, not just pictures, and we discovered more regions that respond to stimulation of faces in motion but not to static faces.

Is it true that we perceive the face as a totality? I recognize you in a picture, but if you show me each part of the face separately I won’t be able to make the identification.

Yes, and that’s also why it’s hard to recognize an upside-down face in a picture, because in that case we examine nose, mouth and eyes individually.

Two screws and a banana

Take Hanoch Piven’s work for example. His “Woody Allen” consists of two screws and a banana, so how do we understand that it’s Woody Allen?

We recently invited Piven to speak at a conference on the subject. He said that he prepares a certain base and then starts to play around with it. He analyzes the distinctive elements of a person’s face – eyes set far apart, say, or an aquiline nose. Not every face lends itself to Piven’s style. For Ronald Reagan, for example, he took a balloon and stretched it out, because Reagan’s face was like that, and also shiny. Piven arranges the details to suit the face and selects the items he uses with a view to the knowledge we possess about the person. With Woody Allen, for example, the reference was to Allen’s film “Bananas.” Semantic information helps us with recognition. We fill in the gaps that the purely visual information leaves with knowledge we have.

What about face-recognition technologies? For example, Facebook claims that their algorithm identifies faces better than people do.

In the past few years, their error rates have been similar. There are cases in which the computerized systems outperform humans at identifying unfamiliar faces, because the technologies can scan and analyze a huge number of pictures. But people make far fewer mistakes with faces they know than algorithms do, and this is what the algorithms are aiming for. Again, it’s because we [people] bring in additional information. What it boils down to is that the visual mechanism alone is insufficient to identify people at the highest possible level.

So the blind spot of the computerized systems is that they lack the additional knowledge.

Yes. Consider, for example, the terrorist attack in the Boston Marathon. The computerized system was unable to identify the photographs of the suspects taken in the street, even though they were in the stock of pictures [in the police database]. The perpetrators were finally identified by their aunt – she recognized them because she knew them. She wasn’t able to identify them by their faces, because they wore glasses and hats, but based on the way they stood and the height difference between them, she put all the clues together and figured out who they were. The computerized system simply was not able to do that.

It’s said that in the future we’ll be able to get through the airport on the basis of our face alone – but isn’t the technology still limited and dependent on our cooperation?

Less and less. The first systems had to scan faces in a particular position, like passport pictures – with a certain type of lighting, the person looking into the camera and not smiling, and so on. Today the challenge is to identify you when you’re looking to the side, let’s say.

And these days people are posting selfies, which is just what these systems need.

More than anything, the systems need a great many pictures, and those exist. I don’t know whether selfies are exactly what’s needed, because in my view they distort the face slightly. In any case, the systems are improving vastly. They can now recognize you even if you are looking to the side, for example.

How can people concerned for their privacy trick them?

By closing their eyes, wearing sunglasses, growing a beard. But the truth is that it’s becoming more complicated to trick them. Some makeup artists have developed special methods for coloring the face that make it difficult for the algorithms to detect the various facial features. A Japanese company recently developed special glasses that project light around the eyes, which confuses the algorithms. But those devices will also be overcome eventually. In the future, if we want to evade automatic face recognition, we will simply have to wear a burka.