On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
Rishabh Agarwal, Nino Vieillard, Yongchao Zhou, Piotr Stanczyk, Sabela Ramos Garea, Matthieu Geist, Olivier Bachem
ICLR 2024 (2024) | Cited by 112 | Views 217
Keywords: Language models, Distillation, RLHF