
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
Empowered by expressive function approximators such as neural networks, deep reinforcement learning (DRL) has achieved tremendous empirical success. However, learning expressive function approximators requires collecting a large dataset (interventional data) by interacting with the environment. Such a lack of sample efficiency prohibits the application of DRL to critical scenarios, e.g., autonomous driving and personalized medicine, since trial and error in the online setting is often unsafe and even unethical. In this paper, we study how to incorporate a dataset (observational data) collected offline, which is often abundantly available in practice, to improve sample efficiency in the online setting. To exploit the possibly confounded observational data, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. More specifically, DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct an exploration bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting by a multiplicative factor, which decreases toward zero as the confounded observational data become more informative after the adjustments. Our algorithm and analysis serve as a step toward causal reinforcement learning.
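To make the structure of such an algorithm concrete, here is a minimal, hypothetical sketch of optimistic value iteration in a finite-horizon tabular MDP. It is not the paper's DOVI: the paper's bonus is an information-gain quantity derived after adjusting for confounders, whereas this sketch uses a simple 1/sqrt(n) count-based bonus. The `counts` argument stands in for the "effective" visitation counts one might form by pooling online data with adjusted observational data; all names and parameters here are illustrative assumptions.

```python
import math

def optimistic_value_iteration(P, R, counts, H, beta=1.0):
    """Finite-horizon tabular value iteration with a count-based
    optimism bonus (an illustrative stand-in for DOVI's
    information-gain bonus).

    P[s][a][t] : transition probability from s to t under action a
    R[s][a]    : reward in [0, 1]
    counts[s][a]: effective visit counts (online + adjusted offline);
                  larger counts shrink the bonus, modeling the gain
                  from informative observational data
    H          : horizon
    """
    S = len(R)
    A = len(R[0])
    V = [0.0] * S
    Q = [[0.0] * A for _ in range(S)]
    for _ in range(H):
        for s in range(S):
            for a in range(A):
                # UCB-style bonus: more effective data -> less optimism.
                bonus = beta / math.sqrt(max(counts[s][a], 1.0))
                backup = sum(P[s][a][t] * V[t] for t in range(S))
                # Clip at the horizon so optimism stays bounded.
                Q[s][a] = min(R[s][a] + bonus + backup, H)
        V = [max(Q[s]) for s in range(S)]
    return V, Q
```

The intended behavior is that as the effective counts grow (e.g., because adjusted observational data is informative), the bonus vanishes and the optimistic values approach the standard value-iteration fixed point, which mirrors the abstract's claim that the regret factor decreases as the offline data become more informative.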