Facial Expression Recognition in the Wild Using Convolutional Neural Networks

Facial Expression Recognition (FER) is the task of predicting a specific facial expression from a facial image. FER has made remarkable progress thanks to advances in deep learning. A FER prediction model is generally built from two sub-modules: (1) a facial image representation model that learns a mapping from the input 2D facial image to a compact feature representation in an embedding space, and (2) a classifier module that maps the learned features to a label space of seven expressions: neutral, happy, sad, surprise, anger, fear, and disgust. Ultimately, the prediction model assigns one of these seven labels to a given input image. Training is carried out with a supervised learning algorithm: the model searches for the best mapping function by minimizing an objective function that measures the error between its prediction and the true label.
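To make the two-module pipeline above concrete, here is a minimal sketch of a convolutional representation backbone feeding a seven-way classifier trained with a cross-entropy objective, assuming a PyTorch-style setup. The architecture, layer sizes, and names (e.g., FERModel) are illustrative assumptions, not the implementation used in the thesis.

```python
# Minimal two-module FER sketch: representation backbone + classifier head.
# Illustrative only; layer sizes and names are assumptions.
import torch
import torch.nn as nn

EXPRESSIONS = ["neutral", "happy", "sad", "surprise", "anger", "fear", "disgust"]

class FERModel(nn.Module):
    def __init__(self, embed_dim=64, num_classes=len(EXPRESSIONS)):
        super().__init__()
        # Module 1: representation model mapping a 2D facial image to an embedding.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Module 2: classifier mapping the embedding to the seven-way label space.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        features = self.backbone(x)          # compact feature representation
        logits = self.classifier(features)   # scores over the seven expressions
        return features, logits

model = FERModel()
images = torch.randn(8, 3, 112, 112)          # dummy batch of face crops
labels = torch.randint(0, 7, (8,))
features, logits = model(images)
loss = nn.CrossEntropyLoss()(logits, labels)  # supervised objective on the prediction error
loss.backward()
```

The features tensor returned alongside the logits is the embedding that the metric-learning objectives described next would operate on.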

Our work is inspired by Deep Metric Learning (DML) approaches for learning an efficient embedding space for the classifier module. DML fundamentally aims at maximal separation in the embedding space by forming compact, well-separated clusters of discriminative features. Conventional DML methods, however, ignore the challenges of wild FER datasets, where images exhibit large intra-class variation and inter-class similarity. First, we tackle the extreme class imbalance that biases separation toward facial expression classes with more data (e.g., happy and neutral) at the expense of minority classes (e.g., disgust and fear). To eliminate this bias, we propose a discriminant objective function that optimizes the embedding space to enforce inter-class separation of features for both majority and minority classes.

Second, we design an adaptive mechanism that selectively discriminates features in the embedding space to promote generalization, yielding a prediction model that classifies unseen images more accurately. We are inspired by the human visual attention model, described as the perception of the most salient visual cues in the observed scene. Accordingly, our attentive mechanism adaptively selects the important features to discriminate in the DML objective function; a rough illustration of such a term is sketched below.

We conduct experiments on two popular large-scale wild FER datasets (RAF-DB and AffectNet) to show the enhanced discriminative power of our proposed methods compared with several state-of-the-art FER methods.
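As one way to picture a discriminant, attention-weighted objective of the kind described above, the sketch below adds a center-loss-style term whose per-dimension weights come from a small attention head, to be combined with the classification loss from the previous sketch. The class name AttentiveDiscriminantLoss, the attention head's structure, and the weighting factor are assumptions made for illustration; this is not the thesis's exact loss.

```python
# Illustrative attention-weighted, center-loss-style discriminant term.
# Assumes PyTorch; names and structure are assumptions, not the thesis code.
import torch
import torch.nn as nn

class AttentiveDiscriminantLoss(nn.Module):
    def __init__(self, num_classes=7, embed_dim=64):
        super().__init__()
        # One learnable center per expression class in the embedding space.
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))
        # A small attention head producing per-dimension weights in [0, 1].
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.Sigmoid(),
        )

    def forward(self, features, labels):
        weights = self.attention(features)       # which dimensions to discriminate
        diff = features - self.centers[labels]   # offset from the sample's own class center
        # Weighted squared distance: compact clusters only along selected dimensions.
        return (weights * diff.pow(2)).sum(dim=1).mean()

# Usage: add to the classification loss from the previous sketch.
features = torch.randn(8, 64, requires_grad=True)
labels = torch.randint(0, 7, (8,))
aux = AttentiveDiscriminantLoss()(features, labels)
# total_loss = ce_loss + 0.01 * aux   # the 0.01 weighting factor is an assumed hyper-parameter
```

Keeping one learnable center per expression gives minority classes their own target in the embedding space, and the attention weights choose along which embedding dimensions features are pulled toward that target, which is one reading of the adaptive feature selection described in the abstract.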

Bibliographic Details
Main Author: Farzaneh, Amir Hossein
Format: Text
Language: unknown
Published: DigitalCommons@USU, 2020
Subjects: facial expression recognition; wild; convolutional neural network; deep learning; discriminant; loss function; attention; adaptive; emotion; Computer Sciences; DML
Online Access: https://digitalcommons.usu.edu/etd/7851
https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=8991&context=etd
Rights: Copyright for this work is held by the author. Transmission or reproduction of materials protected by copyright beyond that allowed by fair use requires the written permission of the copyright owners. Works not in the public domain cannot be commercially exploited without permission of the copyright owner. Responsibility for any use rests exclusively with the user. For more information contact digitalcommons@usu.edu.