Multi-Label Action Unit Detection on Multiple Head Poses with Dynamic Region Learning

Abstract

This paper presents a multi-label Action Unit (AU) detection method applied to multi-pose facial images. AU detection across multiple head poses is an issue that robust AU detectors must deal with, as it is uncommon for a person to always maintain the same pose when displaying facial expressions. To this end, this work proposes a region learning approach that dynamically creates regions of interest inside a convolutional neural network (CNN) using facial landmark points. The dynamic region learning (DRL) ensures that each AU is at the center of its region and also follows the head pose movement. The DRL is built on top of the VGG-Face network, and transfer learning is used to initialize the training. The experiments were conducted on the Facial Expression Recognition and Analysis Challenge (FERA 2017) database, which contains nine different head poses. The results show that the dynamic region learning is able to adapt to the nine poses in the database, improving on the state of the art with an average F1-score of 0.582.
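The core idea in the abstract is to crop regions of interest from a CNN feature map, centered on AU locations derived from facial landmarks, so that the regions follow the head pose. The snippet below is a minimal sketch of that idea, assuming PyTorch; the function name, crop size, and feature-map shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def crop_au_regions(feature_map, au_centers, crop_size=6):
    """Crop fixed-size regions from a CNN feature map, centered on
    landmark-derived AU locations (a sketch of dynamic region cropping).

    feature_map: (C, H, W) tensor from a convolutional layer (e.g. VGG-Face conv5).
    au_centers:  (N, 2) tensor of (row, col) centers in feature-map coordinates,
                 obtained by scaling facial landmark points to the map resolution.
    """
    half = crop_size // 2
    # Pad so crops near the border stay in bounds.
    padded = F.pad(feature_map, (half, half, half, half))
    regions = []
    for r, c in au_centers.round().long():
        r, c = r.item() + half, c.item() + half
        regions.append(padded[:, r - half:r + half, c - half:c + half])
    return torch.stack(regions)  # (N, C, crop_size, crop_size)

# Hypothetical usage: a 512-channel 14x14 feature map and three AU centers.
fmap = torch.randn(512, 14, 14)
centers = torch.tensor([[3.0, 4.0], [7.0, 7.0], [10.0, 12.0]])
regions = crop_au_regions(fmap, centers)  # shape: (3, 512, 6, 6)
```

Because the centers are recomputed from the detected landmarks of each input image, the same cropping step adapts to different head poses without any pose-specific branches; per-AU classifiers would then operate on the stacked regions.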

Publication
IEEE International Conference on Image Processing 2018
Vítor Albiero