abstract
- This paper presents the implementation of a reinforcement-learning-based navigation architecture for autonomous vehicles in urban scenarios. Such scenarios pose a challenging task due to the presence of both dynamic and static road elements. This work validates the use and feasibility of high-level reinforcement learning controllers within the autonomous vehicle software pipeline. Tests are performed with a 1:10 downscaled autonomous prototype on a track comprising one main road and two secondary roads. The platform is equipped with a LiDAR, inertial measurement units, a stereo camera, and motor drives for steering and propulsion. Experiments yield favorable outcomes in terms of collision avoidance, lane keeping, and navigational comfort. © 2023 IEEE.