Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/7976
Title: Affective state level recognition in naturalistic facial and vocal expressions
Authors: Meng, H
Bianchi-Berthouze, N
Keywords: Affective computing;Continuous emotion recognition;Dimensional model of affect;HMM;Machine learning;Naturalistic affective expressions
Issue Date: 2013
Publisher: IEEE Systems, Man, and Cybernetics Society
Citation: IEEE Transactions on Cybernetics, 44(3), 315-328, 2014
Abstract: Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of an expression represent the same affective content. In this paper, we exploit this relationship to improve the recognition of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (the AVEC 2011 audio and video datasets and the PAINFUL video dataset) continuously labeled over time and over different affective dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). Using an information-theoretic approach, we show that these transitions occur very slowly, which suggests modeling them as first-order Markov processes. The dimension levels are treated as the hidden states in a hidden Markov model (HMM) framework, and their discrete transition and emission matrices are trained on the labels provided with the training set. Recognition is then converted into a best-path-finding problem: obtaining the best hidden-state sequence in the HMM. This is a key difference from the previous use of HMMs as classifiers. The modeling of transitions between dimension levels is integrated into a multistage approach, in which the first stage maps affective-expression features to a soft decision value (e.g., an affective dimension level), and subsequent classification stages are modeled as HMMs that refine that mapping by taking into account the temporal relationships between the output decision labels. The experimental results on each of the unimodal datasets show overall performance significantly above that of a standard classification system that does not take temporal relationships into account. In particular, the results on the AVEC 2011 audio dataset outperform all other systems presented at the international competition.
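The recipe the abstract describes, treating discrete affect-dimension levels as HMM hidden states, estimating the discrete transition and emission matrices by counting over the training labels, and recovering the level sequence with a best-path (Viterbi) search, can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function names, the additive smoothing, and the toy sequences are all assumptions.

```python
import numpy as np

def estimate_matrices(label_seqs, obs_seqs, n_levels, n_obs, smoothing=1.0):
    """Count-based estimates (with additive smoothing, an assumption here)
    of the transition matrix A, emission matrix B, and initial distribution pi."""
    A = np.full((n_levels, n_levels), smoothing)
    B = np.full((n_levels, n_obs), smoothing)
    pi = np.full(n_levels, smoothing)
    for labels, obs in zip(label_seqs, obs_seqs):
        pi[labels[0]] += 1
        for prev, cur in zip(labels[:-1], labels[1:]):
            A[prev, cur] += 1          # level-to-level transition counts
        for state, symbol in zip(labels, obs):
            B[state, symbol] += 1      # level-to-observation counts
    return (A / A.sum(axis=1, keepdims=True),
            B / B.sum(axis=1, keepdims=True),
            pi / pi.sum())

def viterbi(obs, A, B, pi):
    """Return the most probable hidden-state (dimension-level) sequence."""
    T, n = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.empty((T, n))            # log-prob of best path ending in each state
    psi = np.empty((T, n), dtype=int)   # back-pointers
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: state i -> state j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):      # trace back-pointers
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy run: 3 affect levels; training labels change slowly, so A favors
# self-transitions and the decoded path is temporally smoothed.
train_labels = [[0, 0, 0, 1, 1, 1, 2, 2], [1, 1, 0, 0, 0, 0, 1, 1]]
train_obs    = [[0, 1, 0, 1, 1, 2, 2, 2], [1, 1, 0, 0, 1, 0, 1, 1]]
A, B, pi = estimate_matrices(train_labels, train_obs, n_levels=3, n_obs=3)
print(viterbi([0, 0, 1, 2, 2, 2], A, B, pi))
```

In the paper's multistage setup, the observation symbols would be the quantized soft decisions of the first-stage classifier rather than the toy values used here; decoding then smooths those frame-level decisions according to the learned transition statistics.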
Description: This article is made available through the Brunel Open Access Publishing Fund and is available to view at the link below.
URI: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6507321
http://bura.brunel.ac.uk/handle/2438/7976
DOI: http://dx.doi.org/10.1109/TCYB.2013.2253768
ISSN: 2168-2267
Appears in Collections: Electronic and Computer Engineering
Publications
Brunel OA Publishing Fund
Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File         Size       Format
Notice.pdf   25.52 kB   Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.