Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32953
Title: Artificial Intelligence & Human-Computer Interaction: The development and deployment of emotionally intelligent, LLM-augmented conversational agent software, utilising generative AI to improve pedagogical and educational management processes
Other Titles: Artificial Intelligence: Emotionally intelligent, LLM-based conversational agents with Gen-AI for pedagogical, psychological and management processes
Authors: Yusuf, Habeebullah
Advisors: Money, A
Daylamani-Zad, D
Keywords: Software engineering; Large Language Models
Issue Date: 2025
Publisher: Brunel University London
Abstract: Background: Viva voce examinations, a form of performance-based assessment (PBA), are valued for their ability to elicit deep insights into student understanding and oral articulation. However, they are often associated with challenges such as increased student anxiety, examiner inconsistency and high teacher workload. While artificial intelligence (AI) has seen broad adoption in education, its application in oral assessments remains limited and underexplored.

Objectives: This research investigates the use of emotionally responsive AI conversational agents to address key challenges in formative viva assessments. It aims to develop two such agents: AIvaluate (a teacher-mediated agent) and AIvaluate2 (a fully autonomous agent), and to evaluate their impact on student anxiety, teacher burden, assessment quality and student experience.

Methods: The research follows a Design Science Research (DSR) methodology and employs the Rapid Application Development (RAD) framework for iterative prototyping. Four empirical studies were conducted in an education setting using a mixed-methods approach. Data collection methods included real-time emotional self-reporting, the System Usability Scale (SUS), feedback quality scoring, grade comparisons, surveys and semi-structured interviews. Quantitative data were analysed using statistical methods such as paired-sample comparisons and TOST equivalence testing. Qualitative data were subjected to reflexive thematic analysis.

Results: Findings suggest that AIvaluate successfully reduced student anxiety and teacher workload during assessments. AIvaluate2 enhanced perceptions of assessment credibility, fairness and usability, contributing to a more emotionally supportive and consistent student experience. However, certain limitations, such as the absence of human-like empathy and rapport, were noted in AI-led formats.

Contributions: This thesis contributes:
1. A conceptual framework for categorising pedagogical AI conversational agents.
2. Empirical evidence supporting the educational utility of LLM-augmented AI conversational agents in oral assessments.
3. Two validated AI software artefacts (AIvaluate and AIvaluate2) designed for formative viva use.
4. Insights into the role of emotional responsiveness and usability in AI-led assessment environments.

Conclusion: The study advances the application of AI in education by demonstrating how emotionally intelligent conversational agents can reduce affective and operational barriers in formative oral assessments. It offers practical, scalable tools for AI-enabled pedagogy with implications for further and higher education contexts.
Description: This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London
URI: http://bura.brunel.ac.uk/handle/2438/32953
Appears in Collections: Computer Science; Department of Computer Science Theses

Files in This Item:
File: FulltextThesis.pdf
Description: Embargoed until 26/02/2029
Size: 7.31 MB
Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.