
16.2 RNGD: A 5nm Tensor-Contraction Processor for Power-Efficient Inference on Large Language Models

Sang Min Lee, Hanjoon Kim, Jeseung Yeon, Minho Kim, Changjae Park, Byeongwook Bae, Yojung Cha, Wooyoung Choe, Jonguk Choi, Younggeun Choi, Ki Jin Han, Seokha Hwang, Kiseok Jang, Jaewoo Jeon, Hyunmin Jeong, Yeonsu Jung, Hyewon Kim, Sewon Kim, Suhyung Kim, Won Kim, Yongseung Kim, Youngsik Kim, Hyukdong Kwon, Jeong Ki Lee, Juyun Lee, Kyungjae Lee, Seokho Lee, Minwoo Noh, Junyoung Park, Jimin Seo, June Paik

IEEE International Solid-State Circuits Conference (2025)

Keywords
Large Language Models, Throughput, Machine Learning Models, Flow Control, Multi-core, Matrix Multiplication, Temperature Sensor, Energy Model, Computing Units, Distribution Matrix, Traditional Architecture, Voltage Sag, Multicast, Type Conversion, Pipelining, Memory Bandwidth, Vector Process, Ring Topology, Address Space, Memory Transfer