Optimize FPGA-Based Neural Network Accelerator with Bit-Shift Quantization
International Symposium on Circuits and Systems (2020)
Keywords
large-scale DNN accelerator, quantized parameters, DNN parameters, DNN inference, Minimum Mean Absolute Error, quantization method, shift-and-add operations, FPGA-based DNN accelerator, Bit-Shift method, LUTs, DSPs, Digital Signal Processors, Multiply-Accumulates, high power efficiency, Deep Neural Network, Field Programmable Gate Arrays, Bit-Shift quantization, FPGA-based Neural Network accelerator optimization, Xilinx VU095 FPGA, converted MAC calculations, compressed parameters, Bit-Shift architecture, 190.0 MHz frequency
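The keywords outline the paper's core idea: quantizing DNN parameters to powers of two so that each multiply in a MAC becomes a shift-and-add, saving DSP resources on the FPGA. Below is a minimal sketch of power-of-two (bit-shift) weight quantization; the function name, exponent range, and example weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quantize_power_of_two(w, min_exp=-8, max_exp=0):
    """Round each weight to the nearest signed power of two.

    Hypothetical sketch: weights become +/-2^k, so multiplying an
    activation by a weight reduces to a bit shift (plus a sign flip),
    replacing DSP multiplies with LUT-friendly shift-and-add logic.
    The exponent range [min_exp, max_exp] is an assumed design choice.
    """
    sign = np.sign(w)
    mag = np.abs(w)
    # Nearest exponent in log2 space, clipped to the representable range.
    exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** min_exp))),
                  min_exp, max_exp).astype(int)
    q = sign * (2.0 ** exp)
    q[mag == 0] = 0.0  # exact zeros stay zero
    return q, exp

# Example: each weight snaps to the nearest signed power of two.
w = np.array([0.3, -0.7, 0.05, 0.0])
q, exp = quantize_power_of_two(w)
print(q)  # [ 0.25  -0.5    0.0625  0.    ]
```

In fixed-point hardware, multiplying by `2**exp` is then implemented as a left or right shift of the activation by `|exp|` bits, which is the conversion of MAC calculations the keywords refer to.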