A 7-Nm Four-Core Mixed-Precision AI Chip with 26.2-TFLOPS Hybrid-FP8 Training, 104.9-TOPS INT4 Inference, and Workload-Aware Throttling

IEEE Journal of Solid-State Circuits (2021)

Key words
Training, Artificial intelligence, AI accelerators, Inference algorithms, Computer architecture, Bandwidth, System-on-chip, Approximate computing, artificial intelligence (AI), deep neural networks (DNNs), hardware accelerators, machine learning (ML), reduced precision computation
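The keywords above mention reduced-precision computation, the technique behind the chip's INT4 inference mode. As a general illustration only (this is not the paper's specific quantization scheme; the function names and the per-tensor symmetric scheme are assumptions for the sketch), signed 4-bit quantization maps floating-point values into the integer range [-8, 7]:

```python
# Illustrative sketch of symmetric per-tensor INT4 quantization (an assumed
# generic scheme, not the paper's exact method): values are scaled into the
# signed 4-bit range [-8, 7], rounded, and later dequantized, trading a small
# accuracy loss for much cheaper integer arithmetic.
import numpy as np

def quantize_int4(x: np.ndarray):
    """Quantize a float tensor to signed 4-bit integers with one shared scale."""
    scale = np.max(np.abs(x)) / 7.0          # map the largest magnitude to 7
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, float(scale)

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the 4-bit codes."""
    return q.astype(np.float32) * scale

x = np.array([0.12, -0.5, 0.33, 0.9], dtype=np.float32)
q, s = quantize_int4(x)
x_hat = dequantize_int4(q, s)
print(q)            # 4-bit integer codes
print(x - x_hat)    # per-element quantization error
```

Each element is stored in 4 bits instead of 32, so an accelerator can pack many more multiply-accumulate operations per cycle, which is why INT4 inference throughput (104.9 TOPS here) far exceeds the floating-point training throughput.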