PIRLNav: Pretraining with Imitation and RL Finetuning for OBJECTNAV.
CVPR 2023
Georgia Institute of Technology
Abstract
We study ObjectGoal Navigation, where a virtual robot situated in a new environment is asked to navigate to an object. Prior work [1] has shown that imitation learning (IL) using behavior cloning (BC) on a dataset of human demonstrations achieves promising results. However, this has limitations: 1) BC policies generalize poorly to new states, since the training mimics actions, not their consequences, and 2) collecting demonstrations is expensive. On the other hand, reinforcement learning (RL) is trivially scalable, but requires careful reward engineering to achieve desirable behavior. We present PIRLNav, a two-stage learning scheme of BC pretraining on human demonstrations followed by RL finetuning. This leads to a policy that achieves a success rate of 65.0% on OBJECTNAV (+5.0% absolute over the previous state of the art). Using this BC→RL training recipe, we present a rigorous empirical analysis of design choices. First, we investigate whether human demonstrations can be replaced with 'free' (automatically generated) sources of demonstrations, e.g., shortest paths (SP) or task-agnostic frontier exploration (FE) trajectories. We find that BC→RL on human demonstrations outperforms BC→RL on SP and FE trajectories, even when controlled for the same BC-pretraining success on TRAIN, and even on a subset of VAL episodes where BC-pretraining success favors the SP or FE policies. Next, we study how RL-finetuning performance scales with the size of the BC-pretraining dataset. We find that as the BC-pretraining dataset grows and BC accuracy gets high, the improvements from RL finetuning become smaller, and that 90% of the performance of our best BC→RL policy can be achieved with less than half the number of BC demonstrations. Finally, we analyze failure modes of our OBJECTNAV policies, and present guidelines for further improving them. Project page: ram81.github.io/projects/pirlnav.
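The two-stage recipe the abstract describes can be illustrated with a minimal, hypothetical sketch: stage 1 runs behavior cloning (cross-entropy on demonstration actions), and stage 2 finetunes the same policy with a REINFORCE-style policy gradient. This is a toy tabular softmax policy on made-up observation/action spaces, not the paper's actual visual policy or training setup; `bc_pretrain`, `rl_finetune`, and the reward function are illustrative names.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OBS, N_ACT = 4, 3  # toy observation/action spaces (hypothetical sizes)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bc_pretrain(W, demos, lr=0.5, epochs=50):
    """Stage 1: behavior cloning -- minimize cross-entropy on (obs, action) demo pairs."""
    for _ in range(epochs):
        for obs, act in demos:
            p = softmax(W[obs])               # policy over actions for this obs
            grad = p.copy()
            grad[act] -= 1.0                  # d(cross-entropy)/d(logits) = p - onehot(act)
            W[obs] -= lr * grad               # gradient descent on the BC loss
    return W

def rl_finetune(W, reward_fn, lr=0.1, episodes=200):
    """Stage 2: RL finetuning -- REINFORCE updates starting from the BC policy."""
    for _ in range(episodes):
        obs = rng.integers(N_OBS)
        p = softmax(W[obs])
        act = rng.choice(N_ACT, p=p)          # sample from the current policy
        r = reward_fn(obs, act)               # e.g., sparse success reward
        grad = -p
        grad[act] += 1.0                      # d log pi(act|obs) / d(logits)
        W[obs] += lr * r * grad               # gradient ascent on expected reward
    return W

# Toy usage: the "correct" action for obs is obs % N_ACT.
demos = [(o, o % N_ACT) for o in range(N_OBS)]
W = bc_pretrain(np.zeros((N_OBS, N_ACT)), demos)
W = rl_finetune(W, lambda o, a: 1.0 if a == o % N_ACT else 0.0)
```

In the paper itself the policy is a neural network trained on human OBJECTNAV demonstrations and finetuned with on-policy RL; the key idea shown here is only the ordering, i.e. that RL starts from the BC-pretrained weights rather than from scratch.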
Key words
Embodied vision: active agents, simulation