Curious Hierarchical Actor-Critic Reinforcement Learning
Publication Type
Conference Paper
Date Issued
2020-09
Language
English
First published in
Lecture Notes in Computer Science
Number in series
12397 LNCS
Start Page
408
End Page
419
Citation
29th International Conference on Artificial Neural Networks (ICANN 2020)
Abstract
Hierarchical abstraction and curiosity-driven exploration are two common paradigms in current reinforcement learning, used to break difficult problems down into sequences of simpler ones and to overcome reward sparsity. However, there is a lack of approaches that combine these paradigms, and it is currently unknown whether curiosity also helps to perform the hierarchical abstraction. As a novelty and scientific contribution, we tackle this issue and develop a method that combines hierarchical reinforcement learning with curiosity. Herein, we extend a contemporary hierarchical actor-critic approach with a forward model to develop a hierarchical notion of curiosity. We demonstrate in several continuous-space environments that curiosity can more than double the learning performance and success rates for most of the investigated benchmarking problems. We also provide our source code (https://github.com/knowledgetechnologyuhh/goal_conditioned_RL_baselines) and a supplementary video (https://www2.informatik.uni-hamburg.de/wtm/videos/chac_icann_roeder_2020.mp4).
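The abstract describes deriving a curiosity signal from a forward model's prediction error and combining it with the sparse task reward. The following is a minimal illustrative sketch of that general idea, not the paper's implementation: it uses a toy linear forward model trained by gradient descent, and the class names, dimensions, and the weighting factor `eta` are all assumptions made for this example.

```python
import numpy as np

class LinearForwardModel:
    """Toy linear forward model f(s, a) -> s' (hypothetical, for illustration)."""

    def __init__(self, state_dim, action_dim, lr=0.05):
        # Weight matrix mapping the concatenated (state, action) to next state.
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state):
        # One gradient step on squared prediction error; the error itself
        # serves as the curiosity (intrinsic reward) signal.
        x = np.concatenate([state, action])
        err = self.predict(state, action) - next_state
        self.W -= self.lr * np.outer(err, x)
        return float(err @ err)

def combined_reward(extrinsic, curiosity, eta=0.5):
    # Hypothetical mixing of sparse extrinsic reward with the curiosity bonus.
    return extrinsic + eta * curiosity

# Demo on toy dynamics s' = s + a: the prediction error (curiosity signal)
# shrinks as the forward model learns, so the agent's intrinsic motivation
# naturally fades for well-understood transitions.
model = LinearForwardModel(state_dim=2, action_dim=2)
rng = np.random.default_rng(0)
errors = []
for _ in range(200):
    s = rng.normal(size=2)
    a = rng.normal(size=2)
    errors.append(model.update(s, a, s + a))
```

The decaying prediction error is the key property: novel transitions yield a large intrinsic reward, while familiar ones contribute little, steering exploration toward poorly-modeled regions of the state space.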