
KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization

NeurIPS 2024

Cited 20 | Views 32
Key words
large language model, efficiency, KV cache, quantization
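For context on the title's "1 bit per channel" claim, the sketch below shows the conventional per-channel uniform quantization baseline for a KV-cache tensor. It is not the paper's coupled quantization method (which this page does not describe); the array shapes, function names, and the numpy-based setup are illustrative assumptions.

```python
import numpy as np

def quantize_per_channel(x: np.ndarray, bits: int = 1):
    """Uniformly quantize each channel (last axis) of a KV-cache tensor.

    Returns integer codes plus the per-channel scale and zero-point
    needed to dequantize. A standard per-channel baseline, not the
    paper's coupled quantization scheme.
    """
    levels = 2 ** bits
    x_min = x.min(axis=0, keepdims=True)          # per-channel minimum
    x_max = x.max(axis=0, keepdims=True)          # per-channel maximum
    scale = (x_max - x_min) / max(levels - 1, 1)  # step size per channel
    scale = np.where(scale == 0, 1.0, scale)      # guard constant channels
    codes = np.clip(np.round((x - x_min) / scale), 0, levels - 1)
    return codes.astype(np.uint8), scale, x_min

def dequantize(codes, scale, zero_point):
    """Map integer codes back to approximate float values."""
    return codes.astype(np.float32) * scale + zero_point

# Toy key cache: 128 cached tokens x 64 head channels.
kv = np.random.randn(128, 64).astype(np.float32)
codes, scale, zp = quantize_per_channel(kv, bits=1)
err = np.abs(dequantize(codes, scale, zp) - kv).mean()
print(f"mean abs error at 1 bit/channel: {err:.4f}")
```

At 1 bit this baseline keeps only the per-channel min/max, which is lossy; the paper's contribution, per its title, is reaching the 1-bit-per-channel regime efficiently via coupled quantization.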