Curriculum learning for deep reinforcement learning in swarm robotic navigation task
DOI: https://doi.org/10.35925/j.multi.2023.3.18

Keywords:
swarm robots, navigation task, deep reinforcement learning, curriculum learning, proximal policy optimization

Abstract
This study investigates the training of a swarm of five E-puck robots using deep reinforcement learning with curriculum learning in a 3D environment. The primary objective is to decompose the navigation task into a curriculum of progressively more challenging stages based on curriculum complexity metrics: swarm size, collision avoidance complexity, and the distances between robots and their targets. The swarm's performance is evaluated on key metrics such as success rate, collision rate, training efficiency, and generalization capability. To assess effectiveness, a comparative analysis is conducted between curriculum learning and the standard proximal policy optimization algorithm. The results demonstrate that curriculum learning outperforms the traditional approach, yielding higher success rates, improved collision avoidance, and greater training efficiency. The trained swarm also generalizes robustly to novel scenarios.
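The curriculum decomposition described above can be sketched in code. The stage parameters and the promotion rule below are illustrative assumptions, not the paper's actual values: the abstract names swarm size, collision avoidance complexity, and robot-target distance as the complexity metrics, so each hypothetical stage varies those quantities, and a stage transition is gated on a success-rate threshold.

```python
from dataclasses import dataclass


@dataclass
class CurriculumStage:
    """One stage of the navigation curriculum.

    Fields mirror the complexity metrics named in the abstract;
    the concrete values below are assumptions for illustration.
    """
    swarm_size: int        # number of active robots (paper uses up to 5)
    num_obstacles: int     # proxy for collision-avoidance complexity
    target_distance: float # initial robot-target distance, in metres

# Hypothetical curriculum of progressively harder stages.
STAGES = [
    CurriculumStage(swarm_size=2, num_obstacles=0, target_distance=0.5),
    CurriculumStage(swarm_size=3, num_obstacles=2, target_distance=1.0),
    CurriculumStage(swarm_size=5, num_obstacles=4, target_distance=2.0),
]


def next_stage(idx: int, success_rate: float, threshold: float = 0.8) -> int:
    """Advance to the next stage once the evaluated success rate
    clears a threshold; otherwise keep training on the current stage."""
    if success_rate >= threshold and idx < len(STAGES) - 1:
        return idx + 1
    return idx
```

In a training loop, the PPO agent would be evaluated periodically and `next_stage` called with the measured success rate, so the environment difficulty ratchets up only after the policy masters the current stage.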