Facial Expression Recognition in the Wild Using Convolutional Neural Networks

Facial Expression Recognition (FER) is the task of predicting a specific facial expression given a facial image. FER has seen remarkable progress due to advances in deep learning. Generally, a FER system is built from two sub-modules: (1) a facial image representation model that learns a mapping from the input 2D facial image to a compact feature representation in the embedding space, and (2) a classifier module that maps the learned features to a label space of seven expressions: neutral, happy, sad, surprise, anger, fear, and disgust. The prediction model thus aims to assign one of these seven labels to a given input image. Training is carried out with a supervised learning algorithm: the model searches for the best mapping function by minimizing an objective function that measures the error between the prediction and the true label.

Our work is inspired by Deep Metric Learning (DML) approaches, which learn an efficient embedding space for the classifier module. DML fundamentally aims for maximal separation in the embedding space by creating compact, well-separated clusters capable of feature discrimination. However, conventional DML methods ignore the challenges of wild FER datasets, where images exhibit large intra-class variation and inter-class similarity.

First, we tackle the extreme class imbalance that biases separation toward facial expression classes populated with more data (e.g., happy and neutral) at the expense of minority classes (e.g., disgust and fear). To eliminate this bias, we propose a discriminant objective function that optimizes the embedding space to enforce inter-class separation of features for both majority and minority classes.

Second, we design an adaptive mechanism that selectively discriminates features in the embedding space to promote generalization, yielding a prediction model that classifies unseen images more accurately. We draw on the human visual attention model, described as the perception of the most salient visual cues in the observed scene: our attentive mechanism adaptively selects the important features to discriminate in the DML objective.

We conduct experiments on two popular large-scale wild FER datasets (RAF-DB and AffectNet) to show the enhanced discriminative power of our proposed methods compared with several state-of-the-art FER methods.
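
To make the two-module pipeline concrete, the sketch below pairs a CNN backbone with a linear classifier over the seven expression labels, assuming a PyTorch setup. The model name, the ResNet-18 backbone, the 512-dimensional embedding, and the training hyperparameters are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of the two-module FER pipeline (representation model +
# classifier). Backbone, embedding size, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

EXPRESSIONS = ["neutral", "happy", "sad", "surprise", "anger", "fear", "disgust"]

class FERModel(nn.Module):
    def __init__(self, embed_dim: int = 512, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # module 1: image -> compact embedding
        self.backbone = backbone
        self.classifier = nn.Linear(embed_dim, num_classes)  # module 2: embedding -> 7 labels

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)                  # feature representation in the embedding space
        logits = self.classifier(features)           # scores over the seven expression labels
        return features, logits

# Supervised training step: cross-entropy measures the error between the
# prediction and the true label.
model = FERModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.randn(8, 3, 224, 224)                # dummy batch of facial images
labels = torch.randint(0, len(EXPRESSIONS), (8,))   # dummy expression labels
features, logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```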

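For the class-imbalance part, the thesis's discriminant objective is not reproduced here; the hypothetical sketch below (the `WeightedCenterLoss` name and the inverse-frequency weighting are my own illustration) only shows the general idea of adding a DML term to cross-entropy and re-weighting it by class frequency, so that minority expressions such as disgust and fear contribute as strongly to the embedding-space objective as majority expressions such as happy and neutral.

```python
# Hypothetical illustration of a class-balanced DML term (center-loss style),
# not the discriminant objective proposed in the thesis.
import torch
import torch.nn as nn

class WeightedCenterLoss(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int, class_counts: torch.Tensor):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))
        weights = 1.0 / class_counts.float()                  # inverse-frequency weights
        self.register_buffer("class_weights", weights * num_classes / weights.sum())

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        centers = self.centers[labels]                        # each sample's class center, (B, D)
        dist = (features - centers).pow(2).sum(dim=1)         # squared distance to own center
        return (self.class_weights[labels] * dist).mean()     # minority classes weighted up

# Joint objective: total_loss = ce_loss + lambda_dml * dml_loss(features, labels),
# where lambda_dml trades off label prediction against embedding-space compactness.
```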
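
The attentive mechanism can similarly be pictured as learning per-dimension weights that decide which embedding features enter the metric term. The module below is a hypothetical sketch of that "select what to discriminate" idea; the actual attention design in the thesis may differ.

```python
# Hypothetical sketch of attentive feature selection inside a DML objective:
# an attention head scores each embedding dimension, and only highly weighted
# dimensions contribute to the metric term.
import torch
import torch.nn as nn

class AttentiveMetricTerm(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, embed_dim),
            nn.Sigmoid(),                                     # per-dimension weight in (0, 1)
        )

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        weights = self.attention(features)                    # salience of each feature, (B, D)
        diff = (features - self.centers[labels]).pow(2)       # per-dimension squared distance
        return (weights * diff).sum(dim=1).mean()             # discriminate only the attended features
```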

Bibliographic Details
Main Author: Farzaneh, Amir Hossein
Format: Text
Language: unknown
Published: Utah State University 2020
Subjects: DML
Online Access: https://dx.doi.org/10.26076/a049-b4b3
https://digitalcommons.usu.edu/etd/7851
Institution: Open Polar
Collection: DataCite Metadata Store (German National Library of Science and Technology)