Learning to Locomote with Artificial Neural-Network and CPG-based Control in a Soft Snake Robot


In this paper, we present a new locomotion control method for soft snake robots. Inspired by biological snakes, our control architecture is composed of two key modules: a reinforcement learning (RL) module for achieving adaptive goal-tracking behaviors under changing goals, and a central pattern generator (CPG) system built from Matsuoka oscillators for generating stable and diverse locomotion patterns. The two modules are interconnected into a closed-loop system: the RL module, analogous to the locomotion region located in the midbrain of vertebrate animals, regulates the input to the CPG system given state feedback from the robot. The output of the CPG system is then translated into pressure inputs to the pneumatic actuators of the soft snake robot. Because the oscillation frequency and wave amplitude of the Matsuoka oscillator can be controlled independently on different time scales, we further adapt the option-critic framework to improve learning performance in terms of optimality and data efficiency. The performance of the proposed controller is experimentally validated on both simulated and real soft snake robots.
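To make the CPG component concrete, the following is a minimal sketch of the standard two-neuron mutually inhibiting Matsuoka oscillator that such a system is built from. The parameter values (time constants `tau_r`, `tau_a`, adaptation gain `beta`, inhibition weight `w`, tonic input `u`) and the Euler integration step are illustrative assumptions, not the paper's actual configuration; the tonic input `u` is the kind of quantity an RL module could modulate.

```python
def matsuoka_step(state, dt=0.01, tau_r=0.5, tau_a=1.0,
                  beta=2.5, w=2.5, u=1.0):
    """One Euler step of a two-neuron mutually inhibiting Matsuoka
    oscillator. state = (x1, v1, x2, v2): membrane potentials x_i
    and adaptation (fatigue) variables v_i. Parameter values here
    are illustrative, not taken from the paper."""
    x1, v1, x2, v2 = state
    y1, y2 = max(x1, 0.0), max(x2, 0.0)  # rectified firing rates
    # Each neuron is driven by tonic input u, self-adaptation, and
    # inhibition from the other neuron's firing rate.
    dx1 = (-x1 - beta * v1 - w * y2 + u) / tau_r
    dv1 = (-v1 + y1) / tau_a
    dx2 = (-x2 - beta * v2 - w * y1 + u) / tau_r
    dv2 = (-v2 + y2) / tau_a
    return (x1 + dt * dx1, v1 + dt * dv1,
            x2 + dt * dx2, v2 + dt * dv2)

def simulate(steps=5000):
    """Integrate the oscillator and return the output y1 - y2,
    which alternates sign as the two neurons fire in turn."""
    state = (0.1, 0.0, 0.0, 0.0)  # small asymmetry to break symmetry
    outputs = []
    for _ in range(steps):
        state = matsuoka_step(state)
        outputs.append(max(state[0], 0.0) - max(state[2], 0.0))
    return outputs
```

With these parameters the mutual inhibition is strong enough (`w > 1 + tau_r/tau_a` and `beta > w - 1`) that the symmetric equilibrium is unstable and a sustained alternating rhythm emerges; in a snake robot, one such output per body segment, with phase-coupled neighbors, yields a traveling wave that maps to antagonistic actuator pressures.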

In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)