Authors: Röder, Frank; Eppe, Manfred; Nguyen, Phuong D. H.; Wermter, Stefan
Title: Curious Hierarchical Actor-Critic Reinforcement Learning
Type: Conference Paper
Venue: 29th International Conference on Artificial Neural Networks (ICANN 2020)
Published: 2020-09
Deposited: 2022-04-20
Handle: http://hdl.handle.net/11420/12325
DOI: 10.1007/978-3-030-61616-8_33
Language: en

Abstract: Hierarchical abstraction and curiosity-driven exploration are two common paradigms in current reinforcement learning, used to break difficult problems down into sequences of simpler ones and to overcome reward sparsity. However, approaches that combine these paradigms are lacking, and it is currently unknown whether curiosity also helps to perform the hierarchical abstraction. As our scientific contribution, we tackle this issue and develop a method that combines hierarchical reinforcement learning with curiosity. Specifically, we extend a contemporary hierarchical actor-critic approach with a forward model to obtain a hierarchical notion of curiosity. We demonstrate in several continuous-space environments that curiosity can more than double the learning performance and success rates for most of the investigated benchmarking problems. We also provide our source code (https://github.com/knowledgetechnologyuhh/goalconditionedRLbaselines) and a supplementary video (https://www2.informatik.uni-hamburg.de/wtm/videos/chac_icann_roeder_2020.mp4).
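The abstract describes extending a hierarchical actor-critic with a forward model whose prediction error provides a curiosity signal. The authors' actual implementation is in the linked repository; as an illustration only, here is a minimal sketch of that general mechanism — an intrinsic reward from forward-model prediction error. All names and the toy linear model are assumptions made for this sketch, not the paper's code:

```python
# Illustrative sketch (NOT the authors' implementation): curiosity as
# forward-model prediction error, the mechanism named in the abstract.
# The linear per-dimension model and all names here are assumptions.

class ForwardModel:
    """Toy forward model: predicts next_state = state + w * action,
    with one learned gain w per state dimension."""

    def __init__(self, state_dim, lr=0.1):
        self.weights = [0.0] * state_dim
        self.lr = lr

    def predict(self, state, action):
        return [s + w * a for s, w, a in zip(state, self.weights, action)]

    def update(self, state, action, next_state):
        """One gradient step on the squared prediction error; returns the
        error norm, which serves as the curiosity signal."""
        pred = self.predict(state, action)
        error = [n - p for n, p in zip(next_state, pred)]
        for i, a in enumerate(action):
            self.weights[i] += self.lr * error[i] * a
        return sum(e * e for e in error) ** 0.5


def curiosity_reward(model, state, action, next_state, scale=1.0):
    """Intrinsic reward: scaled prediction error of the forward model.
    Large where the model is surprised, shrinking as the model learns,
    so exploration is steered toward poorly modeled transitions."""
    return scale * model.update(state, action, next_state)
```

On a fixed transition the reward decays as the model fits it, which is the qualitative behavior a curiosity bonus relies on; in the paper this signal is computed per level of the hierarchy rather than for a single flat policy.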