Paper Reading AI Learner

AffWild Net and Aff-Wild Database

2019-10-11 18:57:18
Alvertos Benroumpi, Dimitrios Kollias

Abstract

Emotion recognition is the task of recognizing people's emotions, usually by analyzing the expressions on their faces. There are two ways of representing emotions: the categorical approach and the dimensional approach, which uses valence and arousal values. Valence indicates how negative or positive an emotion is, and arousal indicates how strongly it is activated. Recent deep learning models for emotion recognition use the second approach, valence and arousal. Moreover, a more interesting concept, which is useful in real life, is "in the wild" emotion recognition. "In the wild" means that the images analyzed for the recognition task come from real-life sources (online videos, online photos, etc.) rather than from staged experiments, so they introduce unpredictable conditions that have to be modeled. The purpose of this project is to study previous work on "in the wild" emotion recognition, design a new dataset modeled on the "Aff-Wild" database, implement new deep learning models, and evaluate the results. First, existing databases and deep learning models are presented. Then, inspired by them, a new database is created that includes 507,208 frames in total from 106 videos gathered from online sources. The data are then tested with a CNN model based on the CNN-M architecture, in order to verify their usability. Next, the main model of this project is implemented: a Regression GAN, which performs unsupervised and supervised learning at the same time. More specifically, it keeps the main functionality of GANs, which is to produce fake images that look as good as real ones, while also predicting valence and arousal values for both real and fake images. Finally, the database created earlier is applied to this model, and the results are presented and evaluated.
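The abstract gives no implementation details, but the Regression GAN idea it describes can be sketched as a discriminator whose shared backbone feeds two heads: one producing a real/fake probability, the other regressing valence and arousal in [-1, 1]. The sketch below is a minimal, framework-free illustration of that output structure only; all layer sizes, weights, and function names are hypothetical and not taken from the paper:

```python
import math
import random

random.seed(0)

# Hypothetical sizes for this sketch: 8-d "image" features, 4-d hidden layer.
IN_DIM, HID_DIM = 8, 4

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, x):
    # Multiply a rows x cols weight matrix (input on rows) by vector x.
    return [sum(W[i][j] * x[i] for i in range(len(x))) for j in range(len(W[0]))]

# Shared feature extractor (stand-in for the CNN body) and the two heads.
W_shared = rand_matrix(IN_DIM, HID_DIM)
W_adv = rand_matrix(HID_DIM, 1)    # adversarial head: one real/fake logit
W_reg = rand_matrix(HID_DIM, 2)    # regression head: valence, arousal

def discriminator(x):
    """Return (real/fake probability, (valence, arousal)) for one input vector."""
    h = [max(0.0, v) for v in matvec(W_shared, x)]           # ReLU features
    p_real = 1.0 / (1.0 + math.exp(-matvec(W_adv, h)[0]))    # sigmoid
    valence, arousal = (math.tanh(v) for v in matvec(W_reg, h))  # each in [-1, 1]
    return p_real, (valence, arousal)

x = [random.gauss(0, 1) for _ in range(IN_DIM)]  # a fake "image" as a flat vector
p, (val, aro) = discriminator(x)
```

In a full model, the adversarial head would be trained with the usual GAN loss on real and generated frames (unsupervised), while the regression head would be trained with a supervised valence/arousal loss, matching the paper's description of both objectives running at the same time.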

URL

https://arxiv.org/abs/1910.05376

PDF

https://arxiv.org/pdf/1910.05376.pdf

