LLM-Neo: Parameter Efficient Knowledge Distillation for Large Language Models
Runming Yang, Taiqiang Wu, Jiahao Wang, Pengfei Hu, Yik-Chung Wu, Ngai Wong, Yujiu Yang
CoRR (2024)