Residue systolic implementations for neural networks

C. N. Zhang*, M. Wang, Chien-Chao Tseng

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

In this work we propose two techniques for improving VLSI implementations of artificial neural networks (ANNs). By using two kinds of processing elements (PEs), one dedicated to the basic operations (addition and multiplication) and the other to evaluation of the activation function, the total time and cost of a VLSI array implementation of ANNs can be reduced by a factor of two compared with previous work. By taking advantage of the residue number system (RNS), the efficiency of each PE can be increased further. Two RNS-based array processor designs are proposed: the first is built from look-up tables, and the second is constructed from binary adders together with mixed-radix conversion (MRC), so that the hardware is simple and high-speed operation is obtained. The proposed techniques are general enough to be extended to cover other forms of loading and learning algorithms.
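The abstract compresses two ideas: carry-free residue-channel arithmetic for the multiply-accumulate PEs, and mixed-radix conversion to bring the weighted sum back to binary. The Python sketch below is not the paper's VLSI architecture; it only illustrates, under an assumed moduli set (7, 11, 13) and a hypothetical ReLU activation stand-in, how the basic PE operation can run independently in each residue channel and how MRC recovers the integer result.

```python
# Software sketch of the RNS arithmetic underlying an RNS-based systolic design.
# The moduli set (7, 11, 13) and the ReLU activation are illustrative choices,
# not taken from the paper.

MODULI = (7, 11, 13)                 # pairwise coprime; dynamic range = 1001


def to_rns(x, moduli=MODULI):
    """Encode an integer as its residues, one per channel."""
    return tuple(x % m for m in moduli)


def mac_pe(acc, w, a, moduli=MODULI):
    """Basic-operation PE: acc + w*a, performed independently in each channel."""
    return tuple((r + (wi * ai) % m) % m
                 for r, wi, ai, m in zip(acc, w, a, moduli))


def mixed_radix_conversion(residues, moduli=MODULI):
    """Recover the integer from its residues via mixed-radix conversion (MRC)."""
    a = list(residues)
    for i in range(1, len(moduli)):
        for j in range(i):
            # Subtract earlier mixed-radix digits and scale by the modular inverse.
            a[i] = (a[i] - a[j]) * pow(moduli[j], -1, moduli[i]) % moduli[i]
    # Weigh the digits: x = a0 + a1*m0 + a2*m0*m1 + ...
    x, weight = 0, 1
    for digit, m in zip(a, moduli):
        x += digit * weight
        weight *= m
    return x


def activation_pe(x):
    """Second PE type: activation evaluation (a ReLU stand-in here; the paper's
    first design realises this stage with look-up tables)."""
    return max(0, x)


if __name__ == "__main__":
    weights, inputs = [3, 5, 2], [4, 1, 6]          # small toy neuron
    acc = to_rns(0)
    for w, x in zip(weights, inputs):
        acc = mac_pe(acc, to_rns(w), to_rns(x))     # channel-wise MACs
    s = mixed_radix_conversion(acc)                 # back to binary via MRC
    print(s, activation_pe(s))                      # prints: 29 29
```

The point of the channel-wise arithmetic is that no carries propagate between moduli, which is what keeps each basic-operation PE small and fast; the comparatively expensive MRC step is only needed where the full weighted value is required, such as before the activation stage.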

Original language: English
Pages (from-to): 149-156
Number of pages: 8
Journal: Neural Computing & Applications
Volume: 3
Issue number: 3
DOIs
State: Published - 1 Sep 1995

Keywords

  • Mixed-radix conversion
  • Neural network
  • Parallel processing
  • Residue number system
  • Systolic array
