Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/18483
Title: An Annotation Model on End-to-End Chest Radiology Reports
Authors: Huang, X
Fang, Y
Lu, M
Yao, Y
Li, M
Keywords: Annotation;chest radiology report;deep learning;end-to-end model;indication
Issue Date: 20-May-2019
Publisher: IEEE
Citation: IEEE Access, 2019, vol. 7, pp. 65757 - 65765
Abstract: Annotating radiographic images with tags is an indispensable preliminary step in computer-aided medical research; it requires the participation of professional physicians and is quite time-consuming. Therefore, automatically annotating radiographic images has become a focus for researchers. However, image report texts, which contain crucial radiologic information, have not been given enough attention for image annotation. In this paper, we propose a neural sequence-to-sequence annotation model. In particular, in the decoding phase, a probability is first learned to decide whether to copy existing words from the report text or to generate new words. Second, to incorporate the patient's background information, the ''indication'' section of the report is encoded as a sentence embedding and concatenated with the decoder neural unit input. Moreover, we devise a more reasonable evaluation metric for this annotation task, aiming to assess the importance of different words. On the Open-i dataset, our model outperforms existing non-neural and neural baselines under the BLEU-4 metric. To the best of our knowledge, we are the first to use a sequence-to-sequence model for radiographic image annotation.
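The copy-or-generate decoding step described in the abstract can be sketched as follows. This is a minimal pure-Python illustration of the general mechanism, not the authors' implementation: the function name, shapes, and the toy inputs are assumptions. A learned gate p_gen weights a "generate" distribution over the vocabulary against a "copy" distribution that scatters decoder attention mass onto the words of the source report.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def copy_generate_distribution(vocab_logits, attention, source_ids, p_gen):
    """Mix a 'generate' distribution over the vocabulary with a 'copy'
    distribution over source-report tokens, weighted by the gate p_gen.
    Illustrative sketch only; names and shapes are our assumptions,
    not the paper's released code."""
    # Generation path: soften vocabulary logits, weight by p_gen.
    final = [p_gen * p for p in softmax(vocab_logits)]
    # Copy path: scatter (1 - p_gen) * attention onto the source tokens;
    # repeated source tokens accumulate their attention mass.
    for pos, tok in enumerate(source_ids):
        final[tok] += (1.0 - p_gen) * attention[pos]
    return final

# Toy example: a 5-word vocabulary, source report tokens [2, 4, 2], and
# decoder attention weights over the three source positions.
dist = copy_generate_distribution([0.0] * 5, [0.5, 0.2, 0.3], [2, 4, 2],
                                  p_gen=0.6)
```

Because token 2 appears twice in the source and carries most of the attention, it receives the largest share of the copy mass, while the gate still reserves probability for generating words absent from the report.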
URI: http://bura.brunel.ac.uk/handle/2438/18483
DOI: http://dx.doi.org/10.1109/ACCESS.2019.2917922
ISSN: 2169-3536
Appears in Collections:Dept of Electronic and Computer Engineering Research Papers

Files in This Item:
File: FullText.pdf
Size: 6.22 MB
Format: Adobe PDF (View/Open)


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.