Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30403
Title: Multi-agent reinforcement learning with synchronized and decomposed reward automaton synthesized from reactive temporal logic
Authors: Zhu, C
Zhu, J
Si, W
Wang, X
Wang, F
Keywords: multi-agent reinforcement learning; autonomous reasoning; swarm intelligence
Issue Date: 12-Nov-2024
Publisher: Elsevier
Citation: Zhu, C. et al. (2024) 'Multi-agent reinforcement learning with synchronized and decomposed reward automaton synthesized from reactive temporal logic', Knowledge-Based Systems, 306, 112703, pp. 1 - 16. doi: 10.1016/j.knosys.2024.112703.
Abstract: Multi-agent systems (MAS) consist of multiple autonomous agents interacting to achieve collective objectives. Multi-agent reinforcement learning (MARL) enhances these systems by enabling agents to learn optimal behaviors through interaction, thus improving their coordination in dynamic environments. However, MARL faces significant challenges in adapting to complex dependencies on past states and actions, which are not adequately represented by the current state alone in reactive systems. This paper addresses these challenges by considering MAS operating under task specifications formulated as Generalized Reactivity of rank 1 (GR(1)); strategies synthesized from these specifications are used as a priori knowledge to guide learning. To tackle the difficulty of handling non-Markovian tasks in reactive systems, we propose a novel synchronized decentralized training paradigm that guides agents to learn within the MARL framework using a reward structure constructed from the decomposed synthesized GR(1) strategies. We first formalize the synthesis of GR(1) strategies as a reachability problem over the winning states of the system. We then develop a decomposition mechanism that constructs individual reward structures for decentralized MARL, incorporating potential values calculated through value iteration. Theoretical proofs verify that the safety and liveness properties of the specification are preserved. We evaluate our approach against other state-of-the-art methods under various GR(1) specifications and scenario maps, demonstrating superior learning efficacy and optimal rewards per episode. We also show that the decentralized training paradigm outperforms the centralized training paradigm, and we compare the value iteration strategy used to calculate potential values for the reward structure against two alternative strategies, showcasing its advantages.
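To illustrate the general idea of computing potential values by value iteration over an automaton and using them for reward shaping, the following is a minimal sketch, not the paper's implementation: the automaton states, transitions, discount factor, and reward convention are all hypothetical stand-ins for the synthesized GR(1) strategy structure described in the abstract.

```python
# Minimal, hypothetical sketch: value iteration over a small automaton-like graph,
# whose values are then used as potentials for potential-based reward shaping
# (F = gamma * phi(q') - phi(q)). Not the authors' actual construction.

GAMMA = 0.9

# Hypothetical automaton: "q_goal" stands in for a winning/accepting state.
states = ["q0", "q1", "q2", "q_goal"]
edges = {
    "q0": ["q1", "q2"],
    "q1": ["q_goal"],
    "q2": ["q1"],
    "q_goal": ["q_goal"],  # absorbing
}
goal = "q_goal"

def value_iteration(states, edges, goal, gamma=GAMMA, tol=1e-6):
    """Potential of each state: discounted value of eventually reaching the goal,
    with reward 1 granted on the transition into the goal state."""
    V = {q: 0.0 for q in states}
    while True:
        delta = 0.0
        for q in states:
            if q == goal:
                continue
            best = max((1.0 if succ == goal else 0.0) + gamma * V[succ]
                       for succ in edges[q])
            delta = max(delta, abs(best - V[q]))
            V[q] = best
        if delta < tol:
            return V

potentials = value_iteration(states, edges, goal)

def shaped_reward(base_reward, q, q_next, gamma=GAMMA):
    """Add the potential-based shaping term to the environment reward."""
    return base_reward + gamma * potentials[q_next] - potentials[q]

print(potentials)                       # e.g. {'q0': 0.9, 'q1': 1.0, 'q2': 0.9, 'q_goal': 0.0}
print(shaped_reward(0.0, "q0", "q1"))   # progress toward the goal yields a positive bonus
```

Potential-based shaping of this form is known to preserve optimal policies, which is in the spirit of (though not a substitute for) the safety and liveness preservation results stated in the abstract.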
Description: Data availability: No data was used for the research described in the article.
URI: https://bura.brunel.ac.uk/handle/2438/30403
DOI: https://doi.org/10.1016/j.knosys.2024.112703
ISSN: 0950-7051
Other Identifiers: ORCiD: Chenyang Zhu https://orcid.org/0000-0002-2145-0559
ORCiD: Fang Wang https://orcid.org/0000-0003-1987-9150
Article number: 112703
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Embargoed until 12 November 2025. Copyright © 2024 Elsevier B.V. All rights reserved. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1016/j.knosys.2024.112703, archived on this repository under a Creative Commons CC BY-NC-ND attribution licence (https://creativecommons.org/licenses/by-nc-nd/4.0/).
Size: 3.46 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.