Title: Hierarchical Intermittent Motor Control with Deterministic Policy Gradient
Keywords: hierarchical reinforcement learning; intermittent control; deterministic policy gradient; continuous action control; motor control
Citation: IEEE Access, 2019, 7, pp. 41799-41810
Abstract: Evidence suggests that neural motor control exploits hierarchical and intermittent representations. In this paper, we propose a hierarchical deep reinforcement learning (DRL) method that learns a continuous control policy across multiple levels, incorporating the neuroscience principle of the minimum-transition hypothesis. The control policies at the two levels of the hierarchy operate at different time scales. The high-level controller produces intermittent actions that set a sequence of goals for the low-level controller, which in turn executes basic motor skills modulated by those goals. Goal planning and the basic motor skills are trained jointly with the proposed algorithm: hierarchical intermittent deep deterministic policy gradient (HI-DDPG). The method is validated on two continuous control problems. The results show that it successfully learns to temporally decompose compound tasks into sequences of basic motions with sparse transitions, and that it outperforms previous DRL methods that lack a hierarchical continuous representation.
Description: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Appears in Collections: Dept of Computer Science Research Papers
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.
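The two-level scheme the abstract describes can be sketched as a control loop in which a high-level policy emits goals only at sparse transition points (the minimum-transition idea), while a low-level goal-conditioned policy acts at every step. The policies, the 1-D dynamics, and the thresholds below are illustrative assumptions for exposition, not the paper's HI-DDPG implementation.

```python
# Hedged sketch of hierarchical intermittent control on a toy 1-D task.
# All functions and constants here are assumptions, not the authors' code.

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def high_level_policy(state):
    """Intermittent controller: proposes the next goal (a target position)."""
    return state + 1.0

def low_level_policy(state, goal):
    """Goal-conditioned controller: bounded proportional action toward the goal."""
    return clip(goal - state, -1.0, 1.0)

def goal_reached(state, goal, tol=0.1):
    """Trigger a goal transition only once the current goal is attained."""
    return abs(goal - state) < tol

state = 0.0
goal = high_level_policy(state)
transitions = 0
for t in range(50):
    if goal_reached(state, goal):
        goal = high_level_policy(state)  # sparse, intermittent goal update
        transitions += 1
    action = low_level_policy(state, goal)
    state += 0.5 * action                # toy first-order dynamics

print(f"goal transitions: {transitions} over 50 steps")
```

Running the loop shows the hierarchical decomposition: the low-level controller acts at every one of the 50 steps, while goal transitions occur only intermittently, each one delimiting a basic motion segment.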