A 2.17-mW Acoustic DSP Processor With CNN-FFT Accelerators for Intelligent Hearing Assistive Devices

Yu-Chi Lee, Tai-Shih Chi, Chia-Hsiang Yang*

*Corresponding author for this work

Research output: Article › peer-reviewed

Abstract

This article presents an acoustic DSP processor containing a neural network core for intelligent hearing assistive devices. The processor includes accelerators for convolutional neural networks (CNNs) and the fast Fourier transform (FFT). The CNN-based speech enhancement algorithm predicts the desired mask for the Fourier spectrogram of the speech signal to enhance speech intelligibility. Several design techniques are applied to enable efficient hardware mapping. The computational complexity of the CNN is reduced by 23.6% through frame sharing, and a fast mask generation plus partial-sum pre-computation technique further reduces output latency by up to 64%. The memory size for the model is reduced by 75% using weight quantization. The FFT is implemented by leveraging the packing algorithm to reduce the computational complexity by 43%. Reconfigurable processing elements are shared to support both the FFT and the CNN, realizing an area saving of 42%. In addition, input sharing and output sharing reduce data movements by 94% and 75%, respectively. A reordered FFT structure also eliminates up to 256 multiplexers. Fabricated in a 40-nm CMOS technology, the chip has a core area of 4.2 mm² and dissipates 2.17 mW at a clock frequency of 5 MHz from a 0.6-V supply. The embedded CNN accelerator supports both convolutional and fully connected (FC) layers and achieves energy efficiency comparable to state-of-the-art CNN accelerators, despite its added flexibility for the FFT. Speech intelligibility is enhanced by up to 41% in the low-SNR regime.
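The abstract's 75% model-memory reduction via weight quantization is consistent with storing 8-bit instead of 32-bit weights (8/32 = 25% of the original footprint). The chip's exact quantization scheme is not described here; the sketch below is a minimal, generic symmetric per-tensor int8 quantizer for illustration only (the function names `quantize_int8`/`dequantize` are hypothetical, not from the paper):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Storing 8-bit instead of 32-bit weights cuts weight memory by 75%,
    matching the reduction reported in the abstract (scheme assumed).
    """
    scale = np.max(np.abs(w)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 codes."""
    return q.astype(np.float32) * scale
```

The worst-case reconstruction error of this scheme is half a quantization step (scale/2), since every in-range weight is rounded to its nearest int8 code.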
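The abstract credits a "packing algorithm" for the 43% FFT complexity reduction. The paper's exact hardware formulation is not reproduced here, but a classic packing trick for real-valued signals, sketched below as an assumption about the general idea, computes the spectra of two real frames with a single complex FFT by packing one frame into the real part and the other into the imaginary part:

```python
import numpy as np

def packed_fft(x, y):
    """Compute the FFTs of two real frames x and y with one complex FFT.

    Pack z = x + j*y, then split the spectrum using the conjugate
    symmetry of real-signal FFTs:
      X[k] = (Z[k] + conj(Z[-k])) / 2
      Y[k] = (Z[k] - conj(Z[-k])) / (2j)
    """
    z = np.asarray(x) + 1j * np.asarray(y)
    Z = np.fft.fft(z)
    # Zr[k] = conj(Z[(-k) mod n]): reverse, then rotate index 0 back front
    Zr = np.conj(np.roll(Z[::-1], 1))
    X = 0.5 * (Z + Zr)
    Y = -0.5j * (Z - Zr)
    return X, Y
```

Halving the number of complex transforms needed for a stream of real frames is the usual source of the complexity savings this family of packing techniques provides.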

Original language: English
Article number: 9082141
Pages (from-to): 2247-2258
Number of pages: 12
Journal: IEEE Journal of Solid-State Circuits
Volume: 55
Issue number: 8
DOIs
Publication status: Published - August 2020
