Face Perception and Recognition Laboratories at UTD

Human Face Perception and Recognition

Alice J. O'Toole, Ph.D.

The University of Texas at Dallas


Selected Projects

Face adaptation, human perception and recognition


Probing the visual representation of faces with adaptation: A view from the other side of the mean

Jiang, F., Blanz, V. & O'Toole, A. J. (2006). Psychological Science, 17, 493-500.

Sensory adaptation and visual after-effects have long given insight into the neural codes underlying basic dimensions of visual perception. Recently discovered perceptual adaptation effects for complex shapes like faces can offer similar insight into high-level visual representations. We show first that adaptation to faces transfers across three-dimensional viewpoint, making it ideal for investigating the visual encoding mechanisms responsible for perceptual constancy. Next, participants perceptually adapted to face morphs from laser scans that varied selectively in reflectance or shape. Adaptation to these selective variations affected the perception of opposite faces, both from the same viewpoint and from a different viewpoint. The results indicate that viewpoint generalization of faces is supported by high-level representations that are more visually complete than posited by current theories. These findings have implications for probing the receptive fields of face-selective neurons and for the kinds of information retained in neurally inspired computational models of face recognition.


Learning the moves: The effect of familiarity and facial motion on person recognition over large changes in viewing format

Roark, D., O'Toole, A. J., Abdi, H., & Barrett, S. E. (2006). Perception, 35, 761-773.

We examined the role of familiarity and motion on person recognition from novel viewing formats. Participants were familiarized with previously unknown people from gait videos and were tested on faces (Experiment 1a) or were familiarized with faces and were tested with gait videos (Experiment 1b). The results showed that repetition of a single stimulus, either the face or gait, produced strong recognition gains across large changes in viewing format. Additionally, the presentation of moving faces resulted in better performance than static faces. In Experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the advantage found in the first set of experiments.


Recognition of moving faces: A psychological and neural framework

O'Toole, A. J., Roark, D., & Abdi, H. (2002). Trends in Cognitive Sciences, 6, 261-266.

Useful information for identifying a human face can be found both in the invariant structure of its features and in its idiosyncratic movements and gestures. When both kinds of information are available, the psychological evidence indicates that: 1.) dynamic information contributes more to recognition under non-optimal viewing conditions (e.g., poor illumination, low image resolution, or recognition from a distance); 2.) dynamic information contributes more as a viewer's experience with the face increases; and 3.) a structure-from-motion analysis can make a perceptually based contribution to face recognition. Poor viewing conditions may increase the contribution of the structure-from-motion analysis, but familiarity with the face is probably not a factor. The distributed neural system recently proposed for face perception can, with minor modifications, accommodate the psychological findings with moving faces.


Face recognition algorithms as models of the other-race effect

Furl, N., Phillips, P. J., & O'Toole, A. J. (2002). Cognitive Science, 96, 1-19.

People recognize faces of their own race more accurately than faces of other races. The "contact" hypothesis suggests that this "other-race effect" occurs as a result of the greater experience we have with own- versus other-race faces. This study explored the computational mechanisms that may underlie different versions of the contact hypothesis. We replicated the other-race effect with human participants and evaluated four classes of computational face recognition algorithms for the presence of an other-race effect. Consistent with the predictions of a developmental contact hypothesis, "experience-based models" demonstrated an other-race effect only when the representational system was developed through experience that warped the perceptual space in a way that was sensitive to the overall structure of the model's experience with faces of different races. When the model's representation relied on a feature set optimized to encode the information in the learned faces, experience-based algorithms recognized minority-race faces more accurately than majority-race faces. The results suggest a developmental learning process that warps the perceptual space to enhance the encoding of distinctions relevant for own-race faces. This feature space limits the quality of face representations for other-race faces.


Prototype-referenced shape encoding revealed by high-level aftereffects

Leopold, D., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Nature Neuroscience, 4, 89-94.

We used high-level configural aftereffects induced by adaptation to realistic faces to investigate visual representations underlying complex pattern perception. We found that exposure to an individual face for a few seconds generated a significant and precise bias in the subsequent perception of face identity. In the context of a computationally derived face space, adaptation specifically shifted perception along a trajectory passing through the adapting and average faces, selectively facilitating recognition of the test face lying on this trajectory and impairing that of other faces. The results suggest that the encoding of faces and other complex patterns draws upon contrastive neural mechanisms that reference the central tendency of the stimulus set.
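The face-space geometry behind this adaptation result can be illustrated with a toy computation: an adapting face and its "anti-face" lie on a single trajectory through the average face, which sits at their midpoint. This is a minimal numpy sketch with made-up vectors, not the authors' stimuli or code:

```python
import numpy as np

def anti_face(face, mean_face, strength=1.0):
    """Reflect a face through the mean: the anti-face lies on the same
    identity trajectory, on the opposite side of the average face."""
    return mean_face - strength * (face - mean_face)

# Toy 4-dimensional "face space" vectors (hypothetical values).
mean_face = np.array([0.5, 0.5, 0.5, 0.5])
adam = np.array([0.9, 0.3, 0.6, 0.5])

anti_adam = anti_face(adam, mean_face)

# The average face is the exact midpoint of the face/anti-face pair,
# so adapting to the anti-face biases perception toward the face.
midpoint = (adam + anti_adam) / 2
print(np.allclose(midpoint, mean_face))  # True
```

With `strength` greater than 1, the anti-face moves farther from the mean along the same trajectory, analogous to stronger adaptors.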


Children's recognition and categorization of faces

Wild, H. H., Barrett, S. E., Spence, M. J., O'Toole, A. J., Cheng, Y. D., & Brooke, J. (2000). Journal of Experimental Child Psychology, 77, 261-299.

The ability of children and adults to classify the sex of children's and adults' faces using only the biologically based internal facial structure was investigated. Face images of 7- to 10-year-old children and of adults in their twenties were edited digitally to eliminate hairstyle and clothing cues to sex. Seven-year-olds, nine-year-olds, and adults classified a subset of these faces by sex and were asked, subsequently, to recognize the faces from among the entire set of faces. This recognition task was designed to assess the relationship between categorization and recognition accuracy. Participants categorized the adult faces by sex at levels of accuracy varying from just above chance (seven-year-olds) to nearly perfect (adults). All participant groups performed less accurately for children's faces than for adults' faces. The seven-year-olds were unable to classify the children's faces by sex at levels above chance. Finally, the faces of children and adults were equally recognizable --- a finding that has theoretical implications for understanding the relationship between categorizing and identifying faces.


Prototype female and male children's faces made by morphed averaging


Computational Approaches to Sex Classification of Adults' and Children's Faces

Cheng, Y., O'Toole, A. J., & Abdi, H. (2001). Cognitive Science, 25.

The faces of both adults and children can be classified accurately by sex, even in the absence of sex-stereotyped social cues such as hair and clothing (Wild et al., 2000). Although much is known from psychological and computational studies about the information that supports sex classification for adults' faces, children's faces have been much less studied. The purpose of the present study was to quantify and compare the information available in adults' versus children's faces for sex classification and to test alternative theories of how human observers distinguish male and female faces for these different age groups. We implemented four computational/neural network models of this task that differed in terms of the age categories from which the sex classification features were derived. Two of the four strategies replicated the advantage for classifying adults' faces found in previous work. To determine which of these strategies was a better model of human performance, we compared the performance of the two models with that of human subjects at the level of individual faces. The results suggest that humans judge the sex of adults' and children's faces using feature sets derived from the appropriate face age category, rather than applying features derived from another age category or from a combination of age categories.


Caricatures


Three-dimensional Caricatures: An algorithm for aging faces?

O'Toole, A. J., Vetter, T., Volz, H., & Salter, E. M. (1997). Three-dimensional caricatures of human heads: Distinctiveness and the perception of facial age. Perception, 26, 719-732.

We applied a standard facial caricaturing algorithm to a three-dimensional representation of human heads. This algorithm sometimes produced heads that appeared "caricatured". More commonly, however, exaggerating the distinctive three-dimensional information in a face seemed to produce an increase in the apparent age of the face --- both at a local level, by exaggerating small facial creases into wrinkles, and at a more global level via changes that seemed to make the underlying structure of the skull more evident. Concomitantly, de-emphasis of the distinctive three-dimensional information in a face made it appear relatively younger than the veridical and caricatured faces. More formally, face age judgements made by human observers were ordered according to the level of caricature, with anti-caricatures judged younger than veridical faces, and veridical faces judged younger than caricatured faces. We discuss these results in terms of the importance of the nature of the features made more distinct by a caricaturing algorithm and the nature of human representation(s) of faces.
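The standard caricaturing algorithm referred to here scales a face's deviation from an average face: positive levels exaggerate distinctiveness, negative levels produce anti-caricatures. A minimal numpy sketch with toy vectors (hypothetical values; the study itself applied this to 3D laser-scan head data):

```python
import numpy as np

def caricature(face, mean_face, level):
    """Scale a face's deviation from the mean face.
    level > 0: caricature (exaggerated distinctiveness);
    level < 0: anti-caricature; level == 0: veridical face."""
    return mean_face + (1.0 + level) * (face - mean_face)

mean_head = np.zeros(3)             # toy 3D "head" vector (hypothetical)
head = np.array([1.0, -0.5, 0.25])

cari = caricature(head, mean_head, 0.5)    # 50% caricature
anti = caricature(head, mean_head, -0.5)   # 50% anti-caricature

# Caricatures lie farther from the mean; anti-caricatures lie closer,
# matching the "increased distance from mean" caption below.
print(np.linalg.norm(cari - mean_head) > np.linalg.norm(head - mean_head))  # True
```

In the study, the same scaling applied to every vertex of a 3D head made the face look older at positive levels and younger at negative levels.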

Original laser scan: 27-year-old male

Caricatures - increased distance from mean

Larger sample aging figures


The Perception of Face Gender

O'Toole, A. J., Deffenbacher, K. A., Valentin, D., McKee, K., Huff, D., & Abdi, H. (1998, in press). The perception of face gender: The role of stimulus structure in recognition and classification. Memory & Cognition.

We have looked at the information specifying the gender of a face and how the quality of this information affects performance on simple face-processing tasks such as recognition and classification (O'Toole, Deffenbacher, Valentin, McKee, Abdi, & Huff, 1998; Memory & Cognition). We found that the caricatured or exaggerated aspects of gender related to the speed with which male and female faces could be categorized by gender, whereas the closeness of a face to its subcategory mean related to the recognizability of the face. Aspects of the attractiveness of male faces also related to the face's distance from the mean, but this was not true for female faces. This face gender work builds on previous work (O'Toole, Abdi, Deffenbacher, & Valentin, 1993, JOSA) using a PCA approach to quantifying the different kinds of information in faces. We showed that the information contained in eigenvectors with relatively larger eigenvalues includes shape-based information useful for categorizing faces into general categories like "male" or "female". The first figure below shows: 1.) the first eigenvector of a cross-product matrix made from 152 faces (half male and half female), which approximates the mean face; 2.) the second eigenvector; 3.) the first eigenvector plus the second; and 4.) the first eigenvector minus the second. Projections of individual faces onto this eigenvector do a good job of predicting the sex of the face.
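The eigenvector analysis described above can be sketched with synthetic data. This toy numpy example (random stand-in vectors, not the original 152-face image set) extracts the eigenvectors of a centered face matrix via SVD and shows that projections onto the leading eigenvector separate the two simulated "sex" categories:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 "male" and 20 "female" face vectors that differ along
# one shared direction, plus noise (stand-ins for real face images).
direction = rng.standard_normal(50)
males = direction + 0.3 * rng.standard_normal((20, 50))
females = -direction + 0.3 * rng.standard_normal((20, 50))
faces = np.vstack([males, females])

# Eigenvectors of the cross-product matrix, obtained via SVD of the
# centered data; rows of vt are ordered by decreasing eigenvalue.
centered = faces - faces.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
first_ev = vt[0]

# Projection coefficients on the leading eigenvector: the two groups
# fall on opposite sides of zero, predicting the sex of each face.
coeffs = centered @ first_ev
print((coeffs[:20] > 0).all() != (coeffs[20:] > 0).all())  # True
```

The sign of an eigenvector is arbitrary, so only the separation of the two groups (not which side is positive) is meaningful.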

By contrast, eigenvectors with relatively smaller eigenvalues contain better information for face recognition. The next figure shows: 1.) a normal face; 2.) a face reconstructed with the first 40 eigenvectors; and 3.) the face reconstructed with all but the first 40 eigenvectors. More information about the identity of this face is contained in these eigenvectors with smaller eigenvalues.
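The reconstruction demonstration can be sketched the same way: project a face onto a subset of eigenvectors and rebuild it from the resulting coefficients. Again a toy numpy illustration with random stand-in data, not the authors' face images:

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.standard_normal((60, 100))   # toy face vectors (hypothetical)

mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows = eigenvectors

def reconstruct(face, eigvecs):
    """Project a face onto a set of eigenvectors and rebuild it
    from the projection coefficients plus the mean face."""
    coeffs = eigvecs @ (face - mean_face)
    return mean_face + coeffs @ eigvecs

face = faces[0]
coarse = reconstruct(face, vt[:40])   # first 40 eigenvectors: category info
fine = reconstruct(face, vt[40:])     # remaining eigenvectors: identity info

# The two partial reconstructions are complementary: together (minus one
# copy of the mean) they recover the original face exactly.
print(np.allclose(coarse + fine - mean_face, face))  # True
```

In the figure described above, the `coarse` analogue looks like a generic face of the right category, while the `fine` analogue carries the detail that distinguishes the individual.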

We have recently extended this work, with colleagues at the Max Planck Institute for Biological Cybernetics, to the exploration of gender-based information in laser scans of human heads. We compared the quality of information in three-dimensional structure versus graylevel image-based representations of human heads for classifying the heads by sex (O'Toole, Vetter, Troje, & Bulthoff, 1997; Perception). We used data from laser-scanned human heads, separated into 3D structure information and image intensity information. We then measured the extent to which low-dimensional representations of heads --- their projection coefficients onto eigenvectors extracted either from the 3D structure or from the graylevel image intensity information --- could be used to predict the sex of the face. We found that the 3D information supported more accurate sex classification than the image intensity information across a range of low-dimensional subspaces.

The figure below shows the mean head plus and minus the first eigenvector for the head surface data. Projections of individual faces onto this eigenvector were highly correlated with the gender of the face.

A similar demonstration for the sixth eigenvector, which also correlated highly with face gender.


Three-dimensional Morphing: Shape and Reflectance contributions to recognition over viewpoint change

Paper figures

Figure 1 - Top row: a rendered laser-scan face from three viewpoints. Row 2: the reflectance map alone. Row 3: the head surface that the reflectance map wraps around, viewed from three viewpoints.

Figure 2 - Column 1: four normal faces. Column 2: faces with their original reflectance information mapped onto the average 3D shape. Column 3: faces with their original 3D shape information combined with the average reflectance.

In progress: shape- and texture-normalized stimuli.


Lab and Collaborators

Alice J. O'Toole, Program in Cognition and Neuroscience, The University of Texas at Dallas.

Herve Abdi, Program in Cognition and Neuroscience, The University of Texas at Dallas.

Volker Blanz, Department of Computer Science, University of Freiburg, Freiburg, Germany.

Isabelle Buelthoff, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.

Heinrich Buelthoff, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.

Shimon Edelman, Cornell University, Ithaca, NY.

Thomas Vetter, Department of Computer Science, University of Basel, Switzerland.

Work in this lab has been supported by TSWG, DARPA, NIST, and NIMH.