Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29924
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Koulouri, T | -
dc.contributor.advisor | Tucker, A | -
dc.contributor.author | Elahi, Md Monjur | -
dc.date.accessioned | 2024-10-11T17:04:29Z | -
dc.date.available | 2024-10-11T17:04:29Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/29924 | -
dc.description | This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London | en_US
dc.description.abstract | Advancements in artificial intelligence (AI) have increased the demand for interpretable decision-making processes. This study explores Explainable AI (XAI) by examining the relationship between machine learning model types and explanatory mechanisms. An ensemble of models (XGBoost, Neural Network, Naive Bayes, Decision Tree, and K-Nearest Neighbour) was developed, offering a balance between transparency and accuracy. The models were evaluated in a car assessment task in which participants made decisions and were then shown predictions from single and ensemble models, sometimes accompanied by explanations generated by LIME and SHAP. Results show that the ensemble model outperformed most of its constituent models in accuracy and positively influenced user trust, particularly in scenarios of appropriate compliance and incorrect predictions. While explanations had a limited effect on trust, the level of agreement within the ensemble significantly influenced user behaviour. Preconceptions such as familiarity and risk appetite also affected compliance. SHAP's waterfall plot emerged as the preferred explanation type. This research contributes methodologically, with a novel ensemble model balancing accuracy and interpretability, and empirically, by deepening the understanding of human-AI interaction. Practical recommendations are provided for presenting explanations to improve user trust in machine learning applications. | en_US
dc.publisher | Brunel University London | en_US
dc.relation.uri | http://bura.brunel.ac.uk/handle/2438/29924/1/FulltextThesis.pdf | -
dc.subject | Human-Computer Interaction | en_US
dc.subject | Artificial Intelligence (AI) Trustworthiness | en_US
dc.subject | Model Interpretability | en_US
dc.subject | Algorithmic Decision-Making | en_US
dc.subject | Explainable AI (XAI) | en_US
dc.title | A user-centric exploration of transparency, explanations, and trust in multi-model and single-model decision support systems | en_US
dc.type | Thesis | en_US
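
The abstract above describes the ensemble architecture only at a high level. As a rough illustration of how a five-member ensemble of the named model types, its internal level of agreement, and a SHAP waterfall explanation could be put together, here is a minimal Python sketch assuming a scikit-learn/xgboost/shap implementation; the synthetic dataset, the soft-voting combination rule, and all hyperparameters are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Placeholder binary classification data; the study used a car assessment
# task instead.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The five constituent models named in the abstract, combined by soft
# voting (an assumption; the combination rule is not stated here).
members = [
    ("xgb", XGBClassifier(eval_metric="logloss")),
    ("nn", MLPClassifier(max_iter=1000, random_state=0)),
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
]
ensemble = VotingClassifier(estimators=members, voting="soft")
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))

# Per-instance level of agreement: the fraction of the five members that
# vote for the majority class on each test instance.
votes = np.stack([m.predict(X_test) for m in ensemble.estimators_])
majority = np.round(votes.mean(axis=0))
agreement = (votes == majority).mean(axis=0)
print("mean agreement across test set:", agreement.mean())

# SHAP waterfall plot for the XGBoost member's first test prediction.
explainer = shap.TreeExplainer(ensemble.named_estimators_["xgb"])
shap.plots.waterfall(explainer(X_test)[0])
```

Soft voting is used here simply because every listed model type exposes class probabilities; the thesis may combine its constituent models differently.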
Appears in Collections: Computer Science
Dept of Computer Science Theses

Files in This Item:
File | Description | Size | Format
FulltextThesis.pdf | - | 3.27 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.