Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29696
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Castagna, F.
dc.contributor.author: McBurney, P.
dc.contributor.author: Parsons, S.
dc.date.accessioned: 2024-09-10T12:08:03Z
dc.date.available: 2024-09-10T12:08:03Z
dc.date.issued: 2024-03-21
dc.identifier: ORCiD: Federico Castagna https://orcid.org/0000-0002-5142-4386
dc.identifier.citation: Castagna, F., McBurney, P. and Parsons, S. (2024) 'Explanation–Question–Response dialogue: An argumentative tool for explainable AI', Argument & Computation, 0 (in press, corrected proof), pp. 1-23. doi: 10.3233/aac-230015. (en_US)
dc.identifier.issn: 1946-2166
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/29696
dc.description.abstract: Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of control over our lives to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way it is generated, which is largely obscure to laypeople. A dialectical interaction with such systems may enhance users' understanding and build more robust trust towards AI. Commonly employed as formalisms for modelling inter-agent communication, dialogue games prove to be useful tools when dealing with users' explanation needs. The literature already offers some dialectical protocols that expressly handle explanations and their delivery. This paper fully formalises the novel Explanation–Question–Response (EQR) dialogue and its properties; its main purpose is to provide satisfactory information (i.e., justified according to argumentative semantics) whilst ensuring a protocol that is simpler than other existing approaches, for both humans and artificial agents. (en_US)
dc.description.sponsorship: This research was partially funded by the UK Engineering & Physical Sciences Research Council (EPSRC) under grant #EP/P010105/1. (en_US)
dc.format.extent: 1-23
dc.format.medium: Print-Electronic
dc.language: English
dc.language.iso: en_US (en_US)
dc.publisher: IOS Press (en_US)
dc.rights: Copyright © 2024 – The authors. Published by IOS Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (CC BY-NC 4.0).
dc.rights.uri: https://creativecommons.org/licenses/by-nc/4.0/
dc.subject: explainable AI (en_US)
dc.subject: dialogue games (en_US)
dc.subject: computational argumentation (en_US)
dc.subject: dialogue protocols (en_US)
dc.subject: large language models (en_US)
dc.title: Explanation–Question–Response dialogue: An argumentative tool for explainable AI (en_US)
dc.type: Article (en_US)
dc.date.dateAccepted: 2024-02-26
dc.identifier.doi: https://doi.org/10.3233/aac-230015
dc.relation.isPartOf: Argument & Computation
pubs.issue: in press, corrected proof
pubs.publication-status: Published
pubs.volume: 0
dc.identifier.eissn: 1946-2174
dc.rights.license: https://creativecommons.org/licenses/by-nc/4.0/legalcode.en
dc.rights.holder: The authors
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2024 – The authors. Published by IOS Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (CC BY-NC 4.0).
Size: 293.54 kB
Format: Adobe PDF


This item is licensed under a Creative Commons License.