S. A. Sontakke, A. Mehrjou, L. Itti, B. Schölkopf, Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning, In: Proceedings of the 38th International Conference on Machine Learning (M. Meila, T. Zhang, Eds.), Vol. 139, pp. 9848--9858, PMLR, 18--24 Jul 2021. [2021 acceptance rate: 21.5%] (Cited by 49)
Abstract: Humans show an innate ability to learn the regularities of the world through interaction. By performing experiments in our environment, we are able to discern the causal factors of variation and infer how they affect the dynamics of our world. Analogously, here we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner. We introduce a novel intrinsic reward, called causal curiosity, and show that it allows our agents to learn optimal sequences of actions, and to discover causal factors in the dynamics. The learned behavior allows the agent to infer a binary quantized representation for the ground-truth causal factors in every environment. Additionally, we find that these experimental behaviors are semantically meaningful (e.g., to differentiate between heavy and light blocks, our agents learn to lift them), and are learned in a self-supervised manner with approximately 2.5 times less data than conventional supervised planners. We show that these behaviors can be re-purposed and fine-tuned (e.g., from lifting to pushing or other downstream tasks). Finally, we show that the knowledge of causal factor representations aids zero-shot learning for more complex tasks.
Themes: Machine Learning, Bayesian Theory of Surprise