Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/33151

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ghatoray, SK | - |
| dc.contributor.author | Li, Y | - |
| dc.date.accessioned | 2026-04-14T08:45:38Z | - |
| dc.date.available | 2026-04-14T08:45:38Z | - |
| dc.date.issued | 2026-02-06 | - |
| dc.identifier | ORCiD: Yongmin Li https://orcid.org/0000-0003-1668-2440 | - |
| dc.identifier.citation | Ghatoray, S.K. and Li, Y. (2026) 'Extracting Meaningful Insights from User Research Videos', International Journal of Human–Computer Interaction, 0 (ahead of print), pp. 1–19. doi: 10.1080/10447318.2026.2619613. | en-US |
| dc.identifier.issn | 1044-7318 | - |
| dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/33151 | - |
| dc.description.abstract | Recognising and tracking user emotions in research videos is vital to understanding user needs and expectations. Limited research exists on automating emotion extraction from multimodal videos in user experience (UX). This study proposes a conceptual framework for automated extraction of actionable insights using facial, speech-to-text, and text-based emotion recognition to capture nuanced emotional data. The multimodal approach integrates visible and spoken cues through temporal alignment and fusion techniques, enabling robust behavioural pattern detection. An interactive AI analyst tool is used to query the integrated data in natural language, reduce manual workload, and improve the efficiency and scalability of UX evaluation. A case study of the implementation of the proposed framework is also provided with details of individual components, such as facial emotion recognition, speech-to-text, text-based emotion recognition, temporal alignment and fusion, and insight extraction via interactive AI. | en-US |
| dc.format.extent | 1–19 | - |
| dc.format.medium | Print-Electronic | - |
| dc.language | en-US | en-US |
| dc.language.iso | en | en-US |
| dc.publisher | Routledge (Taylor and Francis Group) | en-US |
| dc.rights | Creative Commons Attribution 4.0 International | - |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | - |
| dc.subject | user experience | en-US |
| dc.subject | insight extraction | en-US |
| dc.subject | facial emotion recognition | en-US |
| dc.subject | text-based emotion recognition | en-US |
| dc.subject | interactive AI | en-US |
| dc.title | Extracting Meaningful Insights from User Research Videos | en-US |
| dc.type | Article | en-US |
| dc.date.dateAccepted | 2026-01-15 | - |
| dc.identifier.doi | https://doi.org/10.1080/10447318.2026.2619613 | - |
| dc.relation.isPartOf | International Journal of Human–Computer Interaction | - |
| pubs.issue | 0 | - |
| pubs.publication-status | Published online | - |
| pubs.volume | 00 | - |
| dc.identifier.eissn | 1532-7590 | - |
| dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | - |
| dcterms.dateAccepted | 2026-01-15 | - |
| dc.rights.holder | The Author(s) | - |
| dc.contributor.orcid | Li, Yongmin [0000-0003-1668-2440] | - |
Appears in Collections: Department of Computer Science Research Papers
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| FullText.pdf | Copyright © 2026 The Author(s). Published with license by Taylor & Francis Group, LLC. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent. | 2.51 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License