Continuous Control With Ensemble Deep Deterministic Policy Gradients

Bibliographic Details
Title: Continuous Control With Ensemble Deep Deterministic Policy Gradients
Authors: Januszewski, Piotr, Olko, Mateusz, Królikowski, Michał, Świątkowski, Jakub, Andrychowicz, Marcin, Kuciński, Łukasz, Miłoś, Piotr
Publication Year: 2021
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence
More Details: The growth of deep reinforcement learning (RL) has brought multiple exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present multiple insights of a fundamental nature, including: an average of multiple actors trained from the same data boosts performance; the existing methods are unstable across training runs, epochs of training, and evaluation runs; the commonly used additive action noise is not required for effective training; a strategy based on posterior sampling explores better than the approximated UCB combined with the weighted Bellman backup; the weighted Bellman backup alone cannot replace clipped double Q-learning; the critics' initialization plays a major role in ensemble-based actor-critic exploration. In conclusion, we show how existing tools can be brought together in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, which yields state-of-the-art results on continuous control tasks from OpenAI Gym MuJoCo. From the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge outside of the existing RL toolbox.
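The abstract names two concrete mechanisms: averaging the actions of an ensemble of actors, and the clipped double Q-learning target (which the study finds the weighted Bellman backup alone cannot replace). The following is a minimal, hypothetical Python sketch of those two ideas only; it is not the authors' ED2 implementation, and the toy linear actor, function names, and ensemble size are illustrative assumptions.

```python
import math
import random

random.seed(0)

def make_actor():
    # Toy linear "actor" standing in for a neural policy:
    # maps a 3-dim state to a scalar action in (-1, 1).
    w = [random.gauss(0, 1) for _ in range(3)]
    return lambda s: math.tanh(sum(wi * si for wi, si in zip(w, s)))

# An ensemble of actors trained from the same data (here: randomly initialized toys).
actors = [make_actor() for _ in range(5)]

def ensemble_action(state):
    # Average the ensemble members' actions; the abstract reports that
    # this averaging boosts performance.
    acts = [actor(state) for actor in actors]
    return sum(acts) / len(acts)

def clipped_double_q_target(reward, q1_next, q2_next, gamma=0.99):
    # Clipped double Q-learning: bootstrap from the minimum of two
    # critics' next-state value estimates to curb overestimation.
    return reward + gamma * min(q1_next, q2_next)

action = ensemble_action([0.1, -0.2, 0.3])        # scalar in (-1, 1)
target = clipped_double_q_target(1.0, 2.0, 1.5)   # 1.0 + 0.99 * 1.5 = 2.485
```

The `min` over two critics is the same pessimistic target used by TD3-style methods; the sketch only illustrates the arithmetic, not training.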
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2111.15382
Accession Number: edsarx.2111.15382
Database: arXiv