Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao
ICLR 2024 (2024) · Cited by 192 · Viewed 87 times
Keywords: Large Language Model, Efficient Inference, Generative Inference, Key-Value Cache