Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/24944
Full metadata record
DC Field | Value | Language
dc.contributor.author | Lai, CS | -
dc.contributor.author | Chen, D | -
dc.contributor.author | Zhang, J | -
dc.contributor.author | Zhang, X | -
dc.contributor.author | Xu, X | -
dc.contributor.author | Taylor, G | -
dc.contributor.author | Lai, LL | -
dc.date.accessioned | 2022-07-21T09:27:20Z | -
dc.date.available | 2022-07-21T09:27:20Z | -
dc.date.issued | 2022-08-05 | -
dc.identifier | 124852 | -
dc.identifier.citation | Lai, C.S., Chen, D., Zhang, J., Zhang, X., Xu, X., Taylor, G. and Lai, L.L. (2022) 'Profit maximization for large-scale energy storage systems to enable fast EV charging infrastructure in distribution networks', Energy, 259, 124852, pp. 1-21. doi: 10.1016/j.energy.2022.124852. | en_US
dc.identifier.issn | 0360-5442 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/24944 | -
dc.description.abstract | Copyright © 2022 The Author(s). Large-scale integration of battery energy storage systems (BESS) in distribution networks has the potential to enhance the utilization of photovoltaic (PV) power generation and mitigate the negative effects caused by electric vehicle (EV) fast charging behavior. This paper presents a novel deep reinforcement learning-based power scheduling strategy for a BESS installed in an active distribution network. The network includes fast EV charging demand, PV power generation, and electricity arbitrage with the main grid. The aim is to maximize the profit of the BESS operator whilst maintaining voltage limits. The novel strategy adopts the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and requires forecasted PV power generation and EV smart charging demand. The proposed strategy is compared with Deep Deterministic Policy Gradient (DDPG), Particle Swarm Optimization and Simulated Annealing algorithms to verify its effectiveness. Case studies are conducted with the smart EV charging dataset from Project Shift (UK Power Networks Innovation) and the UK photovoltaic dataset. The Internal Rate of Return results with the TD3 and DDPG algorithms are 9.46% and 8.69%, respectively, which show that the proposed strategy can enhance power scheduling and outperforms the mainstream methods in terms of reduced levelized cost of storage and increased net present value. | -
dc.description.sponsorship | Brunel University London BRIEF Funding; the Department of Finance and Education of Guangdong Province 2016 [202]: Key Discipline Construction Program, China; Education Department of Guangdong Province: New and Integrated Energy System Theory and Technology Research Group [Project Number 2016KCXTD022]; EPSRC grant reference EP/S032053/1. | en_US
dc.format.extent | 1 - 21 | -
dc.format.medium | Print-Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Copyright © 2022 The Author(s). Published by Elsevier Ltd. under a Creative Commons license (https://creativecommons.org/licenses/by/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | distribution network optimization | en_US
dc.subject | fast EV charging demand | en_US
dc.subject | deep reinforcement learning | en_US
dc.subject | battery energy storage systems | en_US
dc.title | Profit maximization for large-scale energy storage systems to enable fast EV charging infrastructure in distribution networks | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1016/j.energy.2022.124852 | -
dc.relation.isPartOf | Energy | -
pubs.publication-status | Published online | -
pubs.volume | 259 | -
dc.identifier.eissn | 1873-6785 | -
dc.rights.holder | The Author(s) | -
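
Note on the reported metrics: the abstract evaluates the TD3-based scheduling strategy with financial measures, namely internal rate of return (IRR), net present value (NPV) and levelized cost of storage. The following is a minimal, illustrative Python sketch (not code from the paper) of how NPV and IRR are conventionally computed for a stream of annual BESS cash flows; the cash-flow figures are hypothetical placeholders, and the bisection-based solver is one standard way of finding the discount rate at which NPV equals zero.

# Illustrative sketch only: conventional NPV and IRR calculations for a
# hypothetical BESS cash-flow series. Figures below are placeholders,
# not data or code from the cited paper.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at year 0 (undiscounted)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return via bisection: the rate at which NPV = 0."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0.0:
            lo = mid   # NPV still positive: root lies at a higher discount rate
        else:
            hi = mid   # NPV negative: root lies at a lower discount rate
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

if __name__ == "__main__":
    # Hypothetical example: an up-front BESS investment followed by 15 years
    # of net operating revenue from arbitrage and EV charging services.
    flows = [-1_000_000] + [150_000] * 15
    print(f"NPV at 5% discount rate: {npv(0.05, flows):,.0f}")
    print(f"IRR: {irr(flows):.2%}")
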
Appears in Collections: Brunel OA Publishing Fund
Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | | 8.7 MB | Adobe PDF


This item is licensed under a Creative Commons License.