Real-time high-ratio image compression using adaptive VLSI neuroprocessors

Bing J. Sheu*, Wai-Chi Fang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citations

Abstract

An adaptive VLSI neuroprocessor based on the vector quantization algorithm has been developed for real-time high-ratio image compression applications. This VLSI neural-network-based vector quantization (NNVQ) module combines a fully parallel vector quantizer with a pipelined codebook generator for a broad range of data compression applications. The NNVQ module produces good-quality reconstructed data at compression ratios greater than 20. The vector quantizer chip has been designed, fabricated, and tested. It contains 64 inner-product neural units and a high-speed extendable winner-take-all block. This mixed-signal chip occupies a compact silicon area of 4.6 × 6.8 mm² in a 2.0-μm scalable CMOS technology. The throughput rate of the 2-μm NNVQ module is 2 million vectors per second, and its equivalent computation power is 3.33 billion connections per second.
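To illustrate the encode step the abstract describes, here is a minimal software sketch (not the authors' chip implementation): a bank of 64 inner-product "neurons" scores an input block against the codebook, and a winner-take-all selects the best-matching codeword index. The function name `vq_encode`, the codebook contents, and the 4×4-block framing are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vq_encode(block, codebook):
    """Return the index of the codeword closest to `block`.

    Maximizing the score  b·c - |c|^2 / 2  over codewords c is
    equivalent to minimizing the Euclidean distance |b - c|^2,
    which is what lets the search be implemented with inner-product
    units followed by a winner-take-all stage.
    """
    # One inner-product score per codeword (64 in the paper's chip).
    scores = codebook @ block - 0.5 * np.sum(codebook ** 2, axis=1)
    # Winner-take-all: pick the highest-scoring codeword.
    return int(np.argmax(scores))

# Hypothetical usage: encode a 4x4 image block (a 16-dim vector of
# 8-bit pixels) with a 64-entry codebook, so each block collapses
# to a 6-bit index.
rng = np.random.default_rng(0)
codebook = rng.uniform(0, 255, size=(64, 16))  # illustrative codebook
block = rng.uniform(0, 255, size=16)           # illustrative input block
idx = vq_encode(block, codebook)
```

Under these assumed block and codebook sizes, 16 pixels × 8 bits = 128 input bits map to a 6-bit index, a ratio of about 21:1, consistent with the "greater than 20" compression ratios quoted in the abstract.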

Original language: English
Title of host publication: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Editors: Anon
Publisher: Publ by IEEE
Pages: 1173-1176
Number of pages: 4
ISBN (Print): 078030033
State: Published - 1 Dec 1991
Event: Proceedings of the 1991 International Conference on Acoustics, Speech, and Signal Processing - ICASSP 91 - Toronto, Ont, Can
Duration: 14 May 1991 – 17 May 1991

Publication series

Name: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Volume: 2
ISSN (Print): 0736-7791

Conference

Conference: Proceedings of the 1991 International Conference on Acoustics, Speech, and Signal Processing - ICASSP 91
City: Toronto, Ont, Can
Period: 14/05/91 – 17/05/91

