Title: Real Time Automated Facial Expression Recognition App Development on Smart Phones
Keywords: Face recognition; Support vector machines; Feature extraction; Nose; Face; Smart phones
Citation: 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pp. 384-392
Abstract: Automated facial expression recognition (AFER) is a crucial technology for, and a challenging task in, human-computer interaction. Previous AFER methods have incorporated different features and classification methods and used basic testing approaches. In this paper, we identify the best feature descriptor for AFER by empirically evaluating two feature descriptors: the Facial Landmarks descriptor and the Center of Gravity descriptor. We examine each feature descriptor with one classification method, the Support Vector Machine (SVM), on three distinct facial expression recognition (FER) datasets. In addition to test accuracies, we present confusion matrices of AFER. We also analyze the effect of these features and of image resolution on AFER performance. Our study indicates that the Facial Landmarks descriptor is the best choice for running AFER on mobile phones. The results demonstrate that the proposed facial expression recognition mobile phone application is successful, achieving up to 96.3% recognition accuracy.
Description: INSPEC Accession Number: 17392045
Appears in Collections: Dept of Electronic and Computer Engineering Research Papers
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.
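The evaluation pipeline the abstract describes, feature vectors fed to an SVM and scored with test accuracy and a confusion matrix, could be sketched as follows. This is an illustrative sketch only, not the paper's code: the synthetic data stands in for facial-landmark feature vectors (e.g. 68 landmarks with x/y coordinates), and the class count and kernel choice are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in for landmark features: 68 landmarks x 2 coords = 136
# features, 3 expression classes (placeholder for a real FER dataset).
rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 100, 136, 3
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Hold out a stratified test split, as a basic evaluation setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf")  # SVM classifier, the method named in the abstract
clf.fit(X_train, y_train)

# Report test accuracy and the confusion matrix over the expression classes.
y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
```

A real implementation would replace the synthetic matrix with landmark coordinates extracted from face images; the evaluation steps (train/test split, accuracy, confusion matrix) stay the same.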