Research Article: Automatic classification of human facial features based on their appearance

Date Published: January 29, 2019

Publisher: Public Library of Science

Author(s): Felix Fuentes-Hurtado, Jose A. Diego-Mas, Valery Naranjo, Mariano Alcañiz, Seyedali Mirjalili.

http://doi.org/10.1371/journal.pone.0211314

Abstract

Classification or typology systems used to categorize different human body parts have existed for many years. Nevertheless, there are very few taxonomies of facial features. Ergonomics, forensic anthropology, crime prevention or new human-machine interaction systems and online activities, like e-commerce, e-learning, games, dating or social networks, are fields in which classifications of facial features are useful, for example, to create digital interlocutors that optimize the interactions between humans and machines. However, classifying isolated facial features is difficult for human observers. Previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. This work presents a computer-based procedure to automatically classify facial features based on their global appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features. Taxonomies obtained through this procedure are presented for eyes, mouths and noses.

Partial Text

Humans have especially developed their perceptual capacity to process faces and to extract information from facial features [1,2]. Our brain has a specialized neural network for processing facial information [3] that allows us to identify people, their gender, age, and race, or even to judge their emotions. Using our behavioral capacity to perceive faces, we make attributions such as personality, intelligence or trustworthiness based on facial appearance [4]. Therefore, faces play a central role in our relationships with other people and in our everyday decisions [5,6].

Our first objective was to obtain a large database of facial features of different ethnic groups with a neutral expression. Many real face databases are accessible for research purposes [68]; however, to the best of our knowledge, there are no large public databases of real facial features available. Therefore, we developed an algorithm to process images from a whole-face database and to extract images of the facial features.
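To make this extraction step concrete, the sketch below crops eye, nose and mouth regions from a face image using dlib's 68-point landmark model. This is an illustrative assumption rather than the authors' algorithm, and the model file path, output naming and margin are hypothetical.

```python
# Hypothetical sketch: crop eye, nose and mouth regions from a face image
# using dlib's 68-point landmark predictor (not the authors' extraction code).
import os
import cv2
import dlib

DETECTOR = dlib.get_frontal_face_detector()
PREDICTOR = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Landmark index ranges for each feature in the 68-point annotation scheme
FEATURES = {"eyes": range(36, 48), "nose": range(27, 36), "mouth": range(48, 68)}

def extract_features(image_path, out_dir, margin=10):
    """Detect the first face and save one cropped image per facial feature."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = DETECTOR(gray, 1)
    if not faces:
        return
    landmarks = PREDICTOR(gray, faces[0])
    base = os.path.splitext(os.path.basename(image_path))[0]
    for name, idx in FEATURES.items():
        xs = [landmarks.part(i).x for i in idx]
        ys = [landmarks.part(i).y for i in idx]
        x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin, img.shape[1])
        y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin, img.shape[0])
        cv2.imwrite(os.path.join(out_dir, f"{base}_{name}.png"), img[y0:y1, x0:x1])
```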

Four subsets (Asian, Black, Latino, and White) of the three facial features (eyes, noses, and mouths) obtained previously were grouped according to their appearance, measured through 45 eigenvalues, using the K-Means clustering algorithm. To determine the most suitable number of clusters, several runs of the algorithm were performed, increasing K from 5 to 30, and Dunn's Index was calculated for each resulting set of clusters. The results of iterative clustering algorithms like K-Means can vary depending on the initialization, which consists of selecting random initial positions for the cluster centers, and this could yield different results in each execution; therefore, a round of 10 K-Means runs was performed for each K to check the consistency of the results across executions. The experiment was implemented using Matlab R2016a on a PC with an Intel(R) Core(TM) i7-4770S processor at 3.10 GHz and 16 GB of RAM.
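As an illustration of this clustering protocol, the following Python sketch (the original experiment used Matlab) sweeps K from 5 to 30, repeats K-Means 10 times per K, and computes Dunn's Index for each partition. The array X of 45-dimensional eigenvalue descriptors is assumed to be available; this is a minimal reimplementation of the described procedure, not the authors' code.

```python
# Illustrative sketch of the clustering protocol described above.
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def dunn_index(X, labels):
    """Dunn's Index: smallest inter-cluster distance / largest intra-cluster diameter."""
    clusters = [X[labels == k] for k in np.unique(labels)]
    max_diam = max(cdist(c, c).max() for c in clusters)          # widest cluster
    min_sep = min(cdist(a, b).min()                              # closest pair of clusters
                  for i, a in enumerate(clusters)
                  for b in clusters[i + 1:])
    return min_sep / max_diam

def cluster_feature_set(X, k_range=range(5, 31), runs=10):
    """Run K-Means repeatedly for each K and record Dunn's Index per run."""
    scores = {}
    for k in k_range:
        scores[k] = []
        for seed in range(runs):
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
            scores[k].append(dunn_index(X, labels))
    return scores  # inspect the mean and spread per K to pick a stable cluster count

# Example: X would be an (n_samples, 45) array of eigenvalue descriptors
# scores = cluster_feature_set(X)
```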

Classification systems to categorize human body parts, or taxonomies obtained from them, provide a standardized way to describe or configure the human body, and a great deal of work has been done to categorize many different body parts. Describing facial features using a common terminology is essential in disciplines such as ergonomics, forensics, surgery or criminology. Moreover, the growth of new technologies that use virtual interlocutors or avatars has led to an increasing interest in synthesizing faces and facial expressions that symbolize the user's presence in new human-machine interaction systems and online activities.

Although judging the similarity of facial features is a subjective process with wide inter-observer and intra-observer variability, the results of the validation survey developed in this work show that the proposed procedure can be considered appropriate for the automatic classification of facial features based on their appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features.

 

Source:

http://doi.org/10.1371/journal.pone.0211314

 
