Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31390
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhang, B | -
dc.contributor.author | Xu, L | -
dc.contributor.author | Liu, K-H | -
dc.contributor.author | Yang, R | -
dc.contributor.author | Li, M-Z | -
dc.contributor.author | Guo, X-Y | -
dc.date.accessioned | 2025-06-04T10:43:39Z | -
dc.date.available | 2025-06-04T10:43:39Z | -
dc.date.issued | 2024-10-18 | -
dc.identifier | ORCiD: Bo Zhang https://orcid.org/0000-0002-2289-2877 | -
dc.identifier | ORCiD: Ke-Hao Liu https://orcid.org/0000-0002-4364-5066 | -
dc.identifier | ORCiD: Ru Yang https://orcid.org/0000-0001-7879-681X | -
dc.identifier | ORCiD: Mao-Zhen Li https://orcid.org/0000-0002-0820-5487 | -
dc.identifier | Article number: 111083 | -
dc.identifier.citation | Zhang, B. et al. (2024) 'Piecewise convolutional neural network relation extraction with self-attention mechanism', Pattern Recognition, 159, 111083, pp. 1-10. doi: 10.1016/j.patcog.2024.111083. | en_US
dc.identifier.issn | 0031-3203 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/31390 | -
dc.description | Data availability: Data will be made available on request. | en_US
dc.description.abstract | The task of relation extraction in natural language processing is to identify the relation between two specified entities in a sentence. However, existing methods do not fully exploit word feature information and pay little attention to how strongly each word influences the relation extraction result. To address these issues, we propose a relation extraction method based on a self-attention mechanism (SPCNN-VAE). First, a multi-head self-attention mechanism processes the word vectors and generates sentence feature representations, which capture the semantic dependencies between words in a sentence. Then, word position information is introduced, combining the sentence feature representation with the position feature representation of each word to form the input representation of a piecewise convolutional neural network (PCNN). Furthermore, to identify the word features most useful for relation extraction, an attention-based pooling operation captures the key convolutional features and classifies the feature vectors. Finally, a variational autoencoder (VAE) performs regularization to enhance the model's ability to encode word feature information. Performance is evaluated on SemEval-2010 Task 8, and the experimental results show that the proposed relation extraction model is effective and outperforms several competitive baselines. | en_US
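The piecewise pooling that gives PCNN its name can be illustrated briefly. The following is a minimal NumPy sketch, not the authors' code: it assumes a convolutional feature map of shape (sequence length, number of filters) and the two entity positions, splits the map into three segments at the entities, and max-pools each segment separately.

```python
import numpy as np

def piecewise_max_pool(feature_map, e1_pos, e2_pos):
    """Piecewise max pooling as used in PCNN: split the feature map
    (seq_len x n_filters) into three segments at the two entity
    positions and max-pool each segment, giving a 3 * n_filters vector.
    Function name and toy data are illustrative, not from the paper."""
    segments = [
        feature_map[: e1_pos + 1],          # start .. first entity
        feature_map[e1_pos + 1 : e2_pos + 1],  # between the entities
        feature_map[e2_pos + 1 :],          # second entity .. end
    ]
    pooled = [
        seg.max(axis=0) if len(seg) else np.zeros(feature_map.shape[1])
        for seg in segments
    ]
    return np.concatenate(pooled)

# Toy feature map: 6 token positions, 2 filters; entities at positions 1 and 3.
fm = np.array([[0.1, 0.9],
               [0.5, 0.2],
               [0.7, 0.1],
               [0.3, 0.8],
               [0.2, 0.4],
               [0.6, 0.3]])
vec = piecewise_max_pool(fm, 1, 3)
print(vec)  # [0.5 0.9 0.7 0.8 0.6 0.4]
```

In contrast to ordinary max pooling over the whole sentence, this keeps separate evidence from the spans before, between, and after the two entities; the paper replaces plain max pooling here with an attention-based variant.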
dc.description.sponsorship | This work was supported in part by the National Natural Science Foundation of China under Grants 62372300, 62302306, 62201350, and 62477032, and in part by the National Key Research and Development Program of China under Grant 2022YFB4501704. | en_US
dc.format.extent | 1 - 10 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | -
dc.subject | relation extraction | en_US
dc.subject | multi-head attention | en_US
dc.subject | PCNN | en_US
dc.subject | variational autoencoder | en_US
dc.title | Piecewise convolutional neural network relation extraction with self-attention mechanism | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2024-10-14 | -
dc.identifier.doi | https://doi.org/10.1016/j.patcog.2024.111083 | -
dc.relation.isPartOf | Pattern Recognition | -
pubs.publication-status | Published | -
pubs.volume | 159 | -
dc.identifier.eissn | 1873-5142 | -
dc.rights.license | https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.en | -
dcterms.dateAccepted | 2024-10-14 | -
dc.rights.holder | Elsevier Ltd. | -
Appears in Collections: Dept of Electronic and Electrical Engineering Embargoed Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Embargoed until 18 October 2025 | 1.37 MB | Adobe PDF


This item is licensed under a Creative Commons License.