Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/7976
Full metadata record
dc.contributor.author: Meng, H
dc.contributor.author: Bianchi-Berthouze, N
dc.date.accessioned: 2014-02-03T12:15:59Z
dc.date.available: 2014-02-03T12:15:59Z
dc.date.issued: 2013
dc.identifier.citation (en_US): IEEE Transactions on Cybernetics, 44(3), 315-328, 2013
dc.identifier.issn: 2168-2267
dc.identifier.uri (en): http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6507321
dc.identifier.uri: http://bura.brunel.ac.uk/handle/2438/7976
dc.description (en_US): This article is made available through the Brunel Open Access Publishing Fund and is available to view at the link below.
dc.description.abstract (en_US): Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of expression represent the same affective content. In this paper, we exploit this relationship to improve the recognition performance of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (the AVEC 2011 audio and video dataset and the PAINFUL video dataset) continuously labeled over time and over different dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). We use an information theory approach to show that the transitions occur very slowly, and hence suggest modeling them as first-order Markov models. The dimension levels are treated as the hidden states in the Hidden Markov Model (HMM) framework. Their discrete transition and emission matrices are trained using the labels provided with the training set. The recognition problem is converted into a best-path-finding problem: obtaining the best hidden-state sequence in the HMM. This is a key difference from previous uses of HMMs as classifiers. Modeling of the transitions between dimension levels is integrated in a multistage approach, where the first stage performs a mapping between the affective expression features and a soft decision value (e.g., an affective dimension level), and further classification stages are modeled as HMMs that refine that mapping by taking into account the temporal relationships between the output decision labels. The experimental results on each of the unimodal datasets show overall performance significantly above that of a standard classification system that does not take temporal relationships into account. In particular, the results on the AVEC 2011 audio dataset outperform all other systems presented at the international competition.
dc.language.iso (en_US): en
dc.publisher (en_US): Systems, Man, and Cybernetics Society
dc.subject (en_US): Affective computing
dc.subject (en_US): Continuous emotion recognition
dc.subject (en_US): Dimensional model of affect
dc.subject (en_US): HMM
dc.subject (en_US): Machine learning
dc.subject (en_US): Naturalistic affective expressions
dc.title (en_US): Affective state level recognition in naturalistic facial and vocal expressions
dc.type (en_US): Article
dc.identifier.doi: http://dx.doi.org/10.1109/TCYB.2013.2253768
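The second-stage idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: discrete affect levels act as HMM hidden states, the transition matrix is estimated from the training labels, the emission matrix from the confusion between first-stage predictions and true labels, and the refined label sequence is recovered by Viterbi (best-path) decoding. All names and the toy data below are hypothetical.

```python
import numpy as np

def estimate_matrices(true_labels, stage1_preds, n_levels):
    """Count-based estimates with add-one smoothing.

    A[i, j]: probability of moving from level i to level j (from training labels).
    B[i, o]: probability that level i is observed as stage-1 output o.
    """
    A = np.ones((n_levels, n_levels))
    B = np.ones((n_levels, n_levels))
    for t in range(1, len(true_labels)):
        A[true_labels[t - 1], true_labels[t]] += 1
    for s, o in zip(true_labels, stage1_preds):
        B[s, o] += 1
    A /= A.sum(axis=1, keepdims=True)
    B /= B.sum(axis=1, keepdims=True)
    return A, B

def viterbi(obs, A, B, pi):
    """Best hidden-state (affect level) path for a sequence of stage-1 outputs."""
    n, K = len(obs), A.shape[0]
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.empty((n, K))          # best log-score ending in each state
    back = np.zeros((n, K), dtype=int)  # backpointers for path recovery
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + logA   # scores[prev, cur]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: a slowly varying 3-level sequence and a noisy stage-1 output.
true = [0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 0, 0]
noisy = [0, 1, 0, 1, 1, 2, 2, 2, 1, 1, 1, 0, 0]
A, B = estimate_matrices(true, noisy, 3)
pi = np.full(3, 1 / 3)                # uniform initial level distribution
smoothed = viterbi(noisy, A, B, pi)
```

Because the learned transition matrix concentrates mass on self-transitions (affect levels change slowly), the decoded path tends to suppress isolated single-frame jumps in the stage-1 output, which is the temporal refinement the paper exploits.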
Appears in Collections:Electronic and Computer Engineering
Publications
Brunel OA Publishing Fund
Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: Notice.pdf
Size: 25.52 kB
Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.