VWA: Hardware Efficient Vectorwise Accelerator for Convolutional Neural Network

Kuo Wei Chang*, Tian-Sheuan Chang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Hardware accelerators for convolutional neural networks (CNNs) enable real-time applications of artificial intelligence technology. However, most existing designs suffer from low hardware utilization or high area cost due to complex data flows. This paper proposes a hardware-efficient vectorwise CNN accelerator that adopts a 3 × 3 filter-optimized systolic array using a 1-D broadcast data flow to generate partial sums. This enables easy reconfiguration for different kinds of kernels with interleaved or elementwise input data flows. The simple, regular data flow results in low area cost while attaining high hardware utilization. The presented design achieves 99%, 97%, 93.7%, and 94% hardware utilization for VGG-16, ResNet-34, GoogLeNet, and MobileNet, respectively. A hardware implementation in TSMC 40nm technology takes a 266.9K NAND gate count and 191KB of SRAM to support 168GOPS throughput while consuming only 154.98mW at a 500MHz operating frequency, giving superior area and power efficiency over other designs.
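The core idea of a vectorwise, row-broadcast dataflow can be illustrated in software. The sketch below is a hypothetical functional model, not the paper's actual architecture: each input row is streamed once and "broadcast" to a vector of processing elements, which accumulate 1-D sliding dot products with each of the three kernel rows into the corresponding output rows as partial sums.

```python
import numpy as np

def conv3x3_vectorwise(ifmap, kernel):
    """Functional sketch of a vectorwise 3x3 convolution (assumed model):
    stream input rows one at a time, broadcast each row to a PE vector,
    and accumulate row-wise partial sums into the output feature map."""
    H, W = ifmap.shape
    out = np.zeros((H - 2, W - 2))          # 'valid' convolution output
    for r in range(H):                       # stream one input row per step
        row = ifmap[r]                       # row broadcast to all PEs
        for kr in range(3):                  # kernel rows that reuse this input row
            orow = r - kr                    # output row receiving the partial sum
            if 0 <= orow < H - 2:
                for c in range(W - 2):       # 1-D sliding dot product per PE
                    out[orow, c] += np.dot(row[c:c + 3], kernel[kr])
    return out
```

Each input row is read exactly once and contributes to up to three output rows, which is the kind of reuse that keeps a row-broadcast systolic array highly utilized for 3 × 3 filters.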

Original language: English
Article number: 8854849
Pages (from-to): 145-154
Number of pages: 10
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Issue number: 1
State: Published - Jan 2020


  • Accelerators
  • Convolutional neural networks (CNNs)
  • Hardware design

