Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30207
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ren, X | -
dc.contributor.author | Lai, CS | -
dc.contributor.author | Guo, Z | -
dc.contributor.author | Taylor, G | -
dc.date.accessioned | 2024-11-20T16:39:26Z | -
dc.date.available | 2024-11-20T16:39:26Z | -
dc.date.issued | 2024-10-16 | -
dc.identifier | ORCiD: Chun Sing Lai https://orcid.org/0000-0002-4169-4438 | -
dc.identifier | ORCiD: Zekun Guo https://orcid.org/0000-0001-6894-847X | -
dc.identifier | ORCiD: Gareth Taylor https://orcid.org/0000-0003-0867-2365 | -
dc.identifier.citation | Ren, X. et al. (2024) 'Eco-Driving With Partial Wireless Charging Lane at Signalized Intersection: A Reinforcement Learning Approach', IEEE Transactions on Consumer Electronics, 70 (4), pp. 6547-6559. doi: 10.1109/TCE.2024.3482101. | en_US
dc.identifier.issn | 0098-3063 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/30207 | -
dc.description.abstract | Consumer electronics such as advanced GPS, vehicular sensors, inertial measurement units (IMUs), and wireless modules integrate vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication within the Internet of Things (IoT), enabling connected autonomous electric vehicles (CAEVs) to optimize energy consumption through eco-driving. In scenarios with traffic-light intersections and partial wireless charging lanes (WCL), an eco-driving algorithm must consider net and gross energy consumption, safety, and traffic efficiency. We introduce a deep reinforcement learning (DRL)-based eco-driving control approach, employing a twin-delayed deep deterministic policy gradient (TD3) agent for real-time acceleration planning. This approach uses reward functions for acceleration, velocity, safety, and efficiency, incorporating a dynamic velocity range model that not only enables the vehicle to pass signalized intersections smoothly but also uses the partial WCL efficiently and time-adaptively while ensuring traffic efficiency in diverse traffic scenarios. Tested in Simulation of Urban Mobility (SUMO) across various intersections with partial WCL, our method significantly lowered net and gross energy consumption by up to 44.01% and 17.19%, respectively, compared to conventional driving, while adhering to traffic and safety norms. | en_US
dc.description.sponsorship | 10.13039/501100001809-National Natural Science Foundation of China (Grant Number: 62206062) | en_US
dc.format.extent | 6547-6559 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.rights | Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | consumer electronics | en_US
dc.subject | vehicle-to-vehicle communications | en_US
dc.subject | vehicle-to-infrastructure communication | en_US
dc.subject | connected autonomous electric vehicles | en_US
dc.subject | autonomous electric vehicles | en_US
dc.subject | eco-driving | en_US
dc.subject | wireless charging lane | en_US
dc.subject | deep reinforcement learning | en_US
dc.title | Eco-Driving With Partial Wireless Charging Lane at Signalized Intersection: A Reinforcement Learning Approach | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2024-10-11 | -
dc.identifier.doi | https://doi.org/10.1109/TCE.2024.3482101 | -
dc.relation.isPartOf | IEEE Transactions on Consumer Electronics | -
pubs.issue | 4 | -
pubs.publication-status | Published | -
pubs.volume | 70 | -
dc.identifier.eissn | 1558-4127 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dcterms.dateAccepted | 2024-10-11 | -
dc.rights.holder | The Authors | -
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2024 The Authors. Published by IEEE. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | 4.05 MB | Adobe PDF


This item is licensed under a Creative Commons License.