Training Is Execution: A Reinforcement Learning-Based Collision Avoidance Algorithm for Volatile Scenarios

Bibliographic Details
Published in: IEEE Access
Main Authors: Jian Ban, Gongyan Li
Format: Article in Journal/Newspaper
Language: English
Published: IEEE 2024
Online Access: https://doi.org/10.1109/ACCESS.2024.3448292
https://doaj.org/article/a36ac01ef97c494ba5b27a06f1f1ea82
Description
Summary: Collision avoidance is one of the most fundamental and challenging aspects of socially aware robot navigation. Recently, numerous works based on reinforcement learning (RL) have been proposed for collision avoidance in experimental settings. However, RL approaches face two challenges: lengthy training processes and poor generalization in volatile environments. These issues arise because RL must restart training whenever it encounters a new environment or an environmental change. In this work, we present an algorithm that executes without prior training and adaptively optimizes itself when the environment changes: its training is its execution. The novelty of this work is twofold: (1) a fluctuating epsilon model, designed from scratch, which adjusts the exploration probability based on the algorithm's performance; and (2) a velocity obstacle-based exploration model, which combines RL with the Velocity Obstacles (VO) method by replacing random exploration with the classic collision avoidance algorithm, Optimal Reciprocal Collision Avoidance (ORCA). Our proposed algorithm is evaluated in simulation scenarios. Experimental results demonstrate that our comprehensive model outperforms the state-of-the-art (SOTA) baseline algorithms, achieving a 3.23% increase in success rate. Moreover, our model continues to improve even when the environment changes: after environmental changes, its success rate exceeds that of the baseline by 17.49%.
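
To make the two components concrete, the Python sketch below shows one plausible reading of the mechanism described in the abstract: an exploration probability that rises when recent episodes fail and falls when they succeed, and exploration steps that follow an ORCA-style suggestion instead of a random action. The class name, the rolling success-rate window, the Q-table, and the externally supplied orca_action argument are illustrative assumptions, not the authors' exact formulation.

import random

class FluctuatingEpsilonAgent:
    # Hedged sketch: the class name, Q-table, rolling success window, and the
    # externally supplied `orca_action` are illustrative assumptions, not the
    # authors' exact formulation.

    def __init__(self, actions, eps_min=0.05, eps_max=0.9, window=50):
        self.actions = actions              # discrete action set
        self.eps_min, self.eps_max = eps_min, eps_max
        self.window = window                # episodes used to estimate success rate
        self.recent = []                    # 1 = success, 0 = failure
        self.q = {}                         # state -> list of action values

    def epsilon(self):
        # Fluctuating epsilon: low recent success rate -> explore more,
        # high recent success rate -> exploit more.
        if not self.recent:
            return self.eps_max
        rate = sum(self.recent) / len(self.recent)
        return self.eps_min + (self.eps_max - self.eps_min) * (1.0 - rate)

    def record_episode(self, succeeded):
        # Maintain a rolling window of episode outcomes.
        self.recent.append(1 if succeeded else 0)
        if len(self.recent) > self.window:
            self.recent.pop(0)

    def select_action(self, state, orca_action):
        # VO-based exploration: on an exploration step, follow the ORCA
        # suggestion passed in by the caller instead of a random action.
        if random.random() < self.epsilon():
            return orca_action
        values = self.q.get(state, [0.0] * len(self.actions))
        return self.actions[max(range(len(self.actions)), key=lambda i: values[i])]

Driving exploration with an ORCA suggestion rather than uniform randomness means even the earliest, untrained steps remain collision-aware, which is consistent with the abstract's claim that the algorithm can execute without prior training.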