Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/23840
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ma, L | -
dc.contributor.author | Shao, Z | -
dc.contributor.author | Li, X | -
dc.contributor.author | Lin, Q | -
dc.contributor.author | Li, J | -
dc.contributor.author | Leung, VCM | -
dc.contributor.author | Nandi, AK | -
dc.date.accessioned | 2021-12-29T19:16:13Z | -
dc.date.available | 2021-12-29T19:16:13Z | -
dc.date.issued | 2022-01-13 | -
dc.identifier | ORCiD: Lijia Ma https://orcid.org/0000-0002-1201-8051 | -
dc.identifier | ORCiD: Qiuzhen Lin https://orcid.org/0000-0003-2415-0401 | -
dc.identifier | ORCiD: Jianqiang Li https://orcid.org/0000-0002-2208-962X | -
dc.identifier | ORCiD: Victor C. M. Leung https://orcid.org/0000-0003-3529-2640 | -
dc.identifier | ORCiD: Asoke K. Nandi https://orcid.org/0000-0001-6248-2875 | -
dc.identifier.citation | Ma, L. et al. (2022) 'Influence Maximization in Complex Networks by Using Evolutionary Deep Reinforcement Learning', IEEE Transactions on Emerging Topics in Computational Intelligence, 7(4), pp. 995-1009. doi: 10.1109/TETCI.2021.3136643. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/23840 | -
dc.description.abstract | Influence maximization (IM) in complex networks aims to activate a small subset of seed nodes that maximizes the propagation of influence. Studies on IM have attracted much attention due to their wide applications, such as item recommendation, viral marketing, information propagation and disease immunization. Existing works mainly model the IM problem as a discrete optimization problem and use either approximate or meta-heuristic algorithms to address it. However, these works struggle to find a good tradeoff between effectiveness and efficiency, because the IM problem is NP-hard and the networks involved are large-scale. In this article, we propose an evolutionary deep reinforcement learning algorithm (called EDRL-IM) for IM in complex networks. First, EDRL-IM models the IM problem as the continuous optimization of the weight parameters of a deep Q-network (DQN). Then, it combines an evolutionary algorithm (EA) and a deep reinforcement learning (DRL) algorithm to evolve the DQN. The EA evolves a population of individuals, each of which represents a possible DQN and returns a solution to the IM problem through a dynamic Markov node selection strategy, while the DRL integrates the information and network-specific knowledge of the DQNs to accelerate their evolution. Systematic experiments on both benchmark and real-world networks show the superiority of EDRL-IM over state-of-the-art IM methods in finding seed nodes. [See the illustrative sketch after this metadata record.] | -
dc.description.sponsorship | Joint Funds of the National Natural Science Foundation of China Key Program under Grant U1713212; National Natural Science Foundation of China under Grants 61672358, 61572330, 61772393 and 61836005; Natural Science Foundation of Guangdong Province under Grant 2017A030313338. | en_US
dc.description.sponsorship | 10.13039/501100001809-National Natural Science Foundation of China (Grant Numbers: 62173236, 61803269, 61876110, 61806130, 61976142, U1713212, 62072315 and 61836005); 10.13039/501100003453-Natural Science Foundation of Guangdong Province (Grant Number: 2020A1515010790); Technology Research Project of Shenzhen City (Grant Number: JCYJ20190808174801673). | -
dc.format.extent | 995-1009 | -
dc.format.medium | Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.rights | Copyright © 2021 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works, by sending a request to pubs-permissions@ieee.org. For more information, see https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ | -
dc.rights.uri | https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ | -
dc.subject | complex networks | en_US
dc.subject | influence maximization | en_US
dc.subject | deep reinforcement learning | en_US
dc.subject | evolutionary algorithm | en_US
dc.subject | optimization | en_US
dc.title | Influence Maximization in Complex Networks by Using Evolutionary Deep Reinforcement Learning | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2021-11-03 | -
dc.identifier.doi | https://doi.org/10.1109/TETCI.2021.3136643 | -
dc.relation.isPartOf | IEEE Transactions on Emerging Topics in Computational Intelligence | -
pubs.issue | 4 | -
pubs.publication-status | Published online | -
pubs.volume | 7 | -
dc.identifier.eissn | 2471-285X | -
dcterms.dateAccepted | 2021-11-03 | -
dc.rights.holder | Institute of Electrical and Electronics Engineers (IEEE) | -
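The abstract above describes EDRL-IM at a high level: an EA searches the continuous weight space of a DQN, each individual decodes into a seed set via Q-value-guided node selection, and fitness is the estimated influence spread. The Python sketch below illustrates only that evolutionary skeleton, under stated assumptions: a tiny linear "Q-network" over hand-picked degree features, greedy top-k selection in place of the paper's dynamic Markov node selection, a Monte Carlo independent-cascade estimate as fitness, and no DRL gradient step. All function names, features, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import random
import numpy as np

def independent_cascade(adj, seeds, p=0.1, runs=50):
    """Monte Carlo estimate of influence spread under the IC model."""
    rng = random.Random(0)  # fixed seed: deterministic fitness for this sketch
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def node_features(adj):
    """Per-node inputs for the toy 'DQN': degree and normalized degree."""
    deg = np.array([len(nb) for nb in adj], dtype=float)
    return np.stack([deg, deg / max(deg.max(), 1.0)], axis=1)

def decode_seeds(weights, feats, k):
    """Greedy stand-in for the paper's dynamic Markov node selection:
    score every node with the linear Q-network and take the top k."""
    q = feats @ weights  # one Q-value per node
    return [int(i) for i in np.argsort(-q)[:k]]

def edrl_im_sketch(adj, k=2, pop_size=10, gens=20, sigma=0.3, seed=0):
    """EA over continuous DQN weight vectors; the DRL update is omitted."""
    rng = np.random.default_rng(seed)
    feats = node_features(adj)
    pop = rng.normal(size=(pop_size, feats.shape[1]))  # weight-vector population
    for _ in range(gens):
        fitness = np.array([independent_cascade(adj, decode_seeds(w, feats, k))
                            for w in pop])
        elite = pop[np.argsort(-fitness)[: pop_size // 2]]  # keep the best half
        pop = np.vstack([elite,
                         elite + sigma * rng.normal(size=elite.shape)])  # mutate
    best = max(pop, key=lambda w: independent_cascade(adj, decode_seeds(w, feats, k)))
    return decode_seeds(best, feats, k)

# Toy usage: a 5-node graph where node 4 is a hub connected to all others.
adj = [[1, 4], [0, 2, 4], [1, 3, 4], [2, 4], [0, 1, 2, 3]]
print(edrl_im_sketch(adj, k=2))  # e.g. [4, 1]
```

In the paper itself, the DQN is a deep network, nodes are selected through a dynamic Markov process, and DRL gradient updates are injected into the evolving population; this sketch keeps only the EA-over-weights loop to make the division of labor between the EA and the decoder concrete.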
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2021 IEEE; rights statement as in dc.rights above. | 2.54 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.