Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/33151
Title: Extracting Meaningful Insights from User Research Videos
Authors: Ghatoray, SK
Li, Y
Keywords: user experience;insight extraction;facial emotion recognition;text-based emotion recognition;interactive AI
Issue Date: 6-Feb-2026
Publisher: Routledge (Taylor and Francis Group)
Citation: Ghatoray, S.K. and Li, Y. (2026) 'Extracting Meaningful Insights from User Research Videos', International Journal of Human–Computer Interaction, 0 (ahead of print), pp. 1–19. doi: 10.1080/10447318.2026.2619613.
Abstract: Recognising and tracking user emotions in research videos is vital to understanding user needs and expectations, yet limited research exists on automating emotion extraction from multimodal videos in user experience (UX). This study proposes a conceptual framework for the automated extraction of actionable insights, using facial, speech-to-text, and text-based emotion recognition to capture nuanced emotional data. The multimodal approach integrates visible and spoken cues through temporal alignment and fusion techniques, enabling robust detection of behavioural patterns. An interactive AI analyst tool is used to query the integrated data in natural language, reducing manual workload and improving the efficiency and scalability of UX evaluation. A case study of the framework's implementation is also provided, detailing its individual components: facial emotion recognition, speech-to-text, text-based emotion recognition, temporal alignment and fusion, and insight extraction via interactive AI.
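Illustrative note: the temporal alignment and fusion step described in the abstract can be pictured with a minimal Python sketch. This is not the authors' implementation; the data structures, the four-emotion label set, and the equal fusion weights below are all hypothetical assumptions for illustration only.

from dataclasses import dataclass

EMOTIONS = ("happy", "sad", "angry", "neutral")

@dataclass
class FrameEmotion:
    """One video frame's facial-emotion distribution (hypothetical structure)."""
    time_s: float
    scores: dict  # emotion label -> probability

@dataclass
class Utterance:
    """One speech-to-text segment with text-based emotion scores."""
    start_s: float
    end_s: float
    text: str
    scores: dict  # emotion label -> probability

def fuse(frames, utterances, w_face=0.5):
    """Average facial scores over each utterance's time span, then
    late-fuse with the text-based scores by weighted averaging."""
    fused = []
    for utt in utterances:
        # temporal alignment: collect frames falling inside this utterance
        window = [f for f in frames if utt.start_s <= f.time_s < utt.end_s]
        if window:
            face = {e: sum(f.scores[e] for f in window) / len(window)
                    for e in EMOTIONS}
        else:  # no face detected in this span: fall back to text only
            face = dict(utt.scores)
        fused.append({
            "text": utt.text,
            "span": (utt.start_s, utt.end_s),
            "scores": {e: w_face * face[e] + (1 - w_face) * utt.scores[e]
                       for e in EMOTIONS},
        })
    return fused

if __name__ == "__main__":
    frames = [FrameEmotion(0.5, {"happy": 0.7, "sad": 0.1,
                                 "angry": 0.1, "neutral": 0.1})]
    utts = [Utterance(0.0, 2.0, "I love this feature",
                      {"happy": 0.8, "sad": 0.05,
                       "angry": 0.05, "neutral": 0.1})]
    print(fuse(frames, utts))

Under these assumptions, per-frame facial scores are averaged over each transcribed utterance's time span and late-fused with the utterance's text-based scores; spans with no detected face fall back to the text scores alone.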
URI: https://bura.brunel.ac.uk/handle/2438/33151
DOI: https://doi.org/10.1080/10447318.2026.2619613
ISSN: 1044-7318
Other Identifiers: ORCiD: Yongmin Li https://orcid.org/0000-0003-1668-2440
Appears in Collections: Department of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2026 The Author(s). Published with license by Taylor & Francis Group, LLC. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent.
Size: 2.51 MB
Format: Adobe PDF


This item is licensed under a Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).