Date Published: March 15, 2012
Publisher: Public Library of Science
Author(s): Kathrin Kaulard, Douglas W. Cunningham, Heinrich H. Bülthoff, Christian Wallraven, Marc O. Ernst. http://doi.org/10.1371/journal.pone.0032321
The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions are among the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.
Faces are one of the most ecologically important stimuli of visual perception. Over the last decades, perceptual and cognitive studies have repeatedly shown that humans are remarkably good at recognizing face information such as gender, age, identity, and facial expressions. Facial expressions are special inasmuch as they constitute the only information in the face that – besides mouth movements for visual speech – rapidly and constantly changes in a variety of complex ways. We are, however, easily able to tell different expressions apart within only a short glance. Moreover, in order to extract the correct meaning of the different types of facial expression, we do not necessarily need to know the person; that is, facial expression processing seems largely invariant to facial identity. With applications not only in the perceptual and cognitive sciences, but also in affective computing and computer animation, it is not surprising that facial expression research has gained a lot of attention over the last decades.
One major goal of the context condition was to investigate whether the context scenarios elicited clear expressions. The validation of the 55 descriptions revealed that for 50 descriptions, 7 or more naming answers were valid under a conservative validation criterion. Moreover, participants felt confident in their naming answers. Hence, we can confirm that the written descriptions can, indeed, be connected to well-defined facial expression concepts. Second, this study investigated free-naming performance for visually presented dynamic facial expressions. Our validation experiment confirms that the vast majority of the visually presented facial expressions were recognizable. In addition, the expressions, although exhibiting a large degree of individual variation, were overall rated as being very natural.