16.2 RNGD: A 5nm Tensor-Contraction Processor for Power-Efficient Inference on Large Language Models
IEEE International Solid-State Circuits Conference (2025)
Keywords
Large Language Models, Throughput, Machine Learning Models, Flow Control, Multi-core, Matrix Multiplication, Temperature Sensor, Energy Model, Computing Units, Distribution Matrix, Traditional Architecture, Voltage Sag, Multicast, Type Conversion, Pipelining, Memory Bandwidth, Vector Process, Ring Topology, Address Space, Memory Transfer