Leveraging Mutual Information for Asymmetric Learning under Partial Observability

Published at the 8th Annual Conference on Robot Learning (CoRL), 2024

Authors: Hai Nguyen, Long Dinh, Robert Platt, Christopher Amato.

Links: paper site

Abstract: Even though partial observability is prevalent in robotics, most reinforcement learning studies avoid it due to the difficulty of learning a policy that can efficiently memorize past events and seek information. Fortunately, in many cases, learning can be done in an asymmetric setting where states are available during training but not during execution. Prior studies often leverage the state to indirectly influence the training of a history-based actor (actor-critic methods) or a history-based critic (value-based methods). Instead, we propose using state-observation and state-history mutual information to improve the agent’s architecture and its ability to seek information and memorize efficiently, via intrinsic rewards and an auxiliary task. In extensive experiments, our method outperforms strong baselines and achieves successful sim-to-real transfer to a real robot.
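
For intuition, here is a minimal sketch of how a state-observation mutual-information term can act as an intrinsic reward, assuming the standard density-ratio form of mutual information; the paper's exact estimator, the state-history term, and any weighting are not reproduced here:

```latex
% Mutual information between state S and observation O,
% written as an expected log density ratio:
I(S;O) = \mathbb{E}_{p(s,o)}\!\left[\log \frac{p(s \mid o)}{p(s)}\right]

% A per-step intrinsic reward of this form (a hypothetical shaping
% term; the paper's estimator and scaling may differ) would reward
% observations that are informative about the underlying state:
r^{\mathrm{int}}_t \;\propto\; \log \frac{p(s_t \mid o_t)}{p(s_t)}
```

Under this reading, a learned model of $p(s \mid o)$ (available in the asymmetric setting, since states are observed during training) supplies the density ratio, so the agent is rewarded for seeking observations that reduce uncertainty about the state.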