Incentivizing Exploration With Causal Curiosity as Intrinsic Motivation
Abstract
Reinforcement learning (RL) has shown remarkable success in decision-making tasks but often lacks the ability to discover and exploit causal relationships in complex environments. This paper introduces a novel causal model-based reinforcement learning agent that integrates causal inference with model-based RL to improve exploration and decision-making. Our approach incorporates an intrinsic motivation mechanism based on causal curiosity, quantified as the change in the agent's internal causal model. We present an algorithm that maintains separate value functions for extrinsic reward and intrinsic causal discovery, allowing the agent to balance the pursuit of task-oriented goals with exploration of causal structure. Theoretical analysis suggests convergence under certain conditions, and empirical results on a blackjack task and structural causal model environments demonstrate improved learning efficiency and strategic decision-making compared to standard RL. This work contributes to bridging the gap between reinforcement learning and causal inference.
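As a concrete illustration of the dual-value-function idea described above, the sketch below implements a minimal tabular agent that keeps separate Q-tables for extrinsic reward and for intrinsic causal discovery, and defines the curiosity bonus as the change in the agent's internal transition model after each observation. Everything here is an illustrative assumption, not the paper's actual algorithm: the class name `CausalCuriosityAgent`, the mixing weight `beta`, the total-variation measure of model change, and the toy chain environment are all hypothetical choices made for exposition.

```python
import numpy as np

class CausalCuriosityAgent:
    """Tabular agent with separate value functions for extrinsic reward
    and intrinsic causal discovery (a sketch, not the paper's code)."""

    def __init__(self, n_states, n_actions,
                 alpha=0.1, gamma=0.95, beta=0.5, eps=0.1):
        self.q_ext = np.zeros((n_states, n_actions))   # task-oriented values
        self.q_int = np.zeros((n_states, n_actions))   # causal-curiosity values
        # Internal causal model: Laplace-smoothed transition counts.
        self.counts = np.ones((n_states, n_actions, n_states))
        self.alpha, self.gamma, self.beta, self.eps = alpha, gamma, beta, eps

    def _model(self, s, a):
        c = self.counts[s, a]
        return c / c.sum()

    def act(self, s, rng):
        if rng.random() < self.eps:
            return int(rng.integers(self.q_ext.shape[1]))
        # Act greedily on a weighted sum of the two value functions.
        return int(np.argmax(self.q_ext[s] + self.beta * self.q_int[s]))

    def update(self, s, a, r_ext, s_next):
        # Causal curiosity: how much the internal model changes after
        # observing (s, a, s_next), measured as the total-variation
        # distance between old and new predicted next-state distributions.
        p_old = self._model(s, a)
        self.counts[s, a, s_next] += 1.0
        r_int = 0.5 * np.abs(self._model(s, a) - p_old).sum()
        # Separate TD(0) updates keep the two reward streams distinct.
        for q, r in ((self.q_ext, r_ext), (self.q_int, r_int)):
            q[s, a] += self.alpha * (r + self.gamma * q[s_next].max() - q[s, a])

# Toy usage on a 3-state chain: action 1 moves right, action 0 stays;
# reaching state 2 pays extrinsic reward 1 and resets the episode.
rng = np.random.default_rng(0)
agent = CausalCuriosityAgent(n_states=3, n_actions=2)
s = 0
for _ in range(1000):
    a = agent.act(s, rng)
    s_next = min(s + 1, 2) if a == 1 else s
    agent.update(s, a, float(s_next == 2), s_next)
    s = 0 if s_next == 2 else s_next
```

One consequence of this formulation is that the intrinsic signal decays on its own: as the transition counts stabilize, each new observation changes the model less, so the agent's behavior naturally shifts from causal exploration toward the extrinsic task objective.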