Personal profile

Research Interests

Auditory Signal Processing, Speech Signal Processing, Audio Perception Coding

Experience

1996/8 – 2003/5 Research Assistant, Institute for Systems Research, University of Maryland

2003/8 – 2005/6 Post-Doctoral Research Associate, Center for Auditory and Acoustic Research, University of Maryland

Education/Academic qualification

PhD, University of Maryland, College Park

Projects

Deep neural networks embedded auditory perception models for binaural computational auditory scene analysis

Chi, T.

1/08/20 – 31/07/21

Project: Government Ministry › Ministry of Science and Technology

Deep neural networks embedded auditory perception models for binaural computational auditory scene analysis

Chi, T.

1/08/19 – 31/07/20

Project: Government Ministry › Ministry of Science and Technology

Deep neural networks embedded auditory perception models for binaural computational auditory scene analysis

Chi, T.

1/08/18 – 31/07/19

Project: Government Ministry › Ministry of Science and Technology

Discriminative learning for monaural sound source separation

Chi, T.

1/08/17 – 31/07/18

Project: Government Ministry › Ministry of Science and Technology

Discriminative learning for monaural sound source separation

Chi, T.

1/08/16 – 31/07/17

Project: Government Ministry › Ministry of Science and Technology

Research Output

A 2.17-mW Acoustic DSP Processor With CNN-FFT Accelerators for Intelligent Hearing Assistive Devices

Lee, Y-C., Chi, T-S. & Yang, C-H., Aug 2020, In: IEEE Journal of Solid-State Circuits. 55, 8, p. 2247-2258 12 p.

Research output: Contribution to journal › Article

A 2.17mW Acoustic DSP Processor with CNN-FFT Accelerators for Intelligent Hearing Aided Devices

Lee, Y. C., Chi, T. S. & Yang, C. H., Mar 2019, Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019. Institute of Electrical and Electronics Engineers Inc., p. 97-101 5 p. 8771631. (Proceedings 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2019).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

• 2 Scopus citations

A multi-scale fully convolutional network for singing melody extraction

Gao, P., You, C. Y. & Chi, T-S., Nov 2019, 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2019. Institute of Electrical and Electronics Engineers Inc., p. 1288-1293 6 p. 9023231. (2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2019).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

• 1 Scopus citation

Autoencoding HRTFs for DNN Based HRTF Personalization Using Anthropometric Features

Chen, T. Y., Kuo, T. H. & Chi, T-S., 1 May 2019, 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings. Institute of Electrical and Electronics Engineers Inc., p. 271-275 5 p. 8683814. (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; vol. 2019-May).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

• 1 Scopus citation

CNN Based Two-stage Multi-resolution End-to-end Model for Singing Melody Extraction

Chen, M. T., Li, B. J. & Chi, T-S., 1 May 2019, 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings. Institute of Electrical and Electronics Engineers Inc., p. 1005-1009 5 p. 8683630. (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; vol. 2019-May).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

• 2 Scopus citations