Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30635
Full metadata record
DC Field | Value | Language
dc.contributor.author | Malin, B | -
dc.contributor.author | Kalganova, T | -
dc.contributor.author | Boulgouris, N | -
dc.date.accessioned | 2025-02-02T11:37:55Z | -
dc.date.available | 2025-02-02T11:37:55Z | -
dc.date.issued | 2024-12-31 | -
dc.identifier | ORCiD: Ben Malin https://orcid.org/0009-0006-5791-2555 | -
dc.identifier | ORCiD: Tatiana Kalganova https://orcid.org/0000-0003-4859-7152 | -
dc.identifier | ORCiD: Nikolaos Boulgouris https://orcid.org/0000-0002-5382-6856 | -
dc.identifier | arXiv:2501.00269v1 [cs.CL] | -
dc.identifier.citation | Malin, B., Kalganova, T. and Boulgouris, N. (2024) 'A review of faithfulness metrics for hallucination assessment in Large Language Models', arXiv preprint, arXiv:2501.00269v1 [cs.CL], pp. 1 - 13. doi: 10.48550/arXiv.2501.00269. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/30635 | -
dc.description.abstract | This review examines the means by which faithfulness has been evaluated across open-ended summarization, question-answering and machine translation tasks. We find that the use of LLMs as a faithfulness evaluator is commonly the metric most highly correlated with human judgement. The means by which other studies have mitigated hallucinations are discussed: both retrieval augmented generation (RAG) and prompting framework approaches have been linked with superior faithfulness, and further recommendations for mitigation are provided. Research into faithfulness is integral to the continued widespread use of LLMs, as unfaithful responses can pose major risks in many areas where LLMs would otherwise be suitable. Furthermore, evaluating open-ended generation provides a more comprehensive measure of LLM performance than commonly used multiple-choice benchmarking, which can help to advance the trust that can be placed in LLMs. | en_US
dc.description.sponsorship | This work has been funded by the European Union. | en_US
dc.format.extent | 1 - 13 | -
dc.format.medium | Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Cornell University | en_US
dc.relation.uri | https://arxiv.org/abs/2501.00269v1 | -
dc.rights | Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | cs.CL | en_US
dc.subject | evaluation | en_US
dc.subject | fact extraction | en_US
dc.subject | faithfulness | en_US
dc.subject | hallucination | en_US
dc.subject | LLM | en_US
dc.subject | machine translation | en_US
dc.subject | question-answering | en_US
dc.subject | RAG | en_US
dc.subject | summarization | en_US
dc.title | A review of faithfulness metrics for hallucination assessment in Large Language Models | en_US
dc.type | Preprint | en_US
dc.identifier.doi | https://doi.org/10.48550/arXiv.2501.00269 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dc.rights.holder | The Author(s) | -
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
Preprint.pdf | Copyright © 2024 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | 307.69 kB | Adobe PDF


This item is licensed under a Creative Commons Attribution 4.0 International License.
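
The abstract above reports that using an LLM as a faithfulness evaluator is commonly the metric most highly correlated with human judgement. As a minimal illustrative sketch of how such an LLM-as-judge faithfulness metric is typically wired up (not an implementation from the paper: the OpenAI Python client, the model name, the prompt wording and the 1-5 scale are all assumptions made here for illustration), a summary can be scored against its source document as follows:

```python
# Hypothetical LLM-as-judge faithfulness scorer. The client, model name,
# prompt, and scoring scale are illustrative assumptions; the reviewed
# preprint surveys such metrics but does not prescribe this code.
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()

JUDGE_PROMPT = """You are a faithfulness judge. Given a source document and a
summary of it, reply with a single integer from 1 (completely unfaithful)
to 5 (every claim fully supported by the source). Output only the number.

Source:
{source}

Summary:
{summary}
"""

def judge_faithfulness(source: str, summary: str) -> int:
    """Score how faithful `summary` is to `source` using an LLM judge."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(source=source, summary=summary),
        }],
        temperature=0,  # deterministic scoring for reproducible evaluation
    )
    return int(response.choices[0].message.content.strip())

# Example usage: judge_faithfulness(article_text, model_summary) -> e.g. 4
```

A design note on this family of metrics: constraining the judge to a single integer keeps the output trivially parseable, and temperature 0 reduces score variance, but such scores still depend on the judge model and prompt, which is why studies like the one above validate them against human judgement.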