Voice conversion based on locally linear embedding

Hsin-Te Hwang, Yi-Chiao Wu, Yu-Huai Peng, Chin-Cheng Hsu, Yu Tsao, Hsin-Min Wang, Yih-Ru Wang, Sin-Horng Chen

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a novel locally linear embedding (LLE)-based framework for exemplar-based spectral conversion (SC). The key feature of the proposed SC framework is that it integrates the LLE algorithm, a manifold learning method, with the conventional exemplar-based SC method. An important advantage of the LLE-based SC framework is that it can be applied to either one-to-one SC or many-to-one SC. For one-to-one SC, a parallel speech corpus consisting of utterances from the pre-specified source and target speakers is used to construct the paired source and target dictionaries in advance. During online conversion, the LLE-based SC method converts the source spectral features into target-like spectral features based on the paired dictionaries. When applied to many-to-one SC, on the other hand, the system can convert the voice of any unseen source speaker to that of a desired target speaker without requiring parallel training utterances from them beforehand. To further improve the quality of the converted speech, the maximum likelihood parameter generation (MLPG) and global variance (GV) methods are adopted in the proposed SC systems. Experimental results demonstrate that the proposed one-to-one SC system is comparable with the state-of-the-art Gaussian mixture model (GMM)-based one-to-one SC system in terms of speech quality and speaker similarity, and that the many-to-one SC system approximates the performance of the one-to-one SC system.
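The core of exemplar-based LLE conversion described above can be sketched as follows: for each source frame, find its k nearest exemplars in the source dictionary, solve for sum-to-one reconstruction weights (the standard LLE local least-squares step), and apply those same weights to the paired target exemplars. This is a minimal illustrative sketch under those assumptions, not the authors' implementation; function and variable names are hypothetical, and details such as the regularization constant and neighborhood size are placeholders.

```python
import numpy as np

def lle_convert(x, src_dict, tgt_dict, k=8, reg=1e-5):
    """Convert one source spectral frame x using paired exemplar
    dictionaries (src_dict and tgt_dict are frame-aligned, shape (N, dim))."""
    # 1. Find the k nearest source exemplars to x.
    dists = np.linalg.norm(src_dict - x, axis=1)
    idx = np.argsort(dists)[:k]
    N = src_dict[idx]                      # (k, dim) local neighborhood

    # 2. Solve for weights w minimizing ||x - w @ N||^2 with sum(w) = 1
    #    (the standard constrained least-squares step of LLE).
    diff = N - x                           # (k, dim)
    G = diff @ diff.T                      # local Gram matrix, (k, k)
    G += reg * (np.trace(G) + 1e-12) * np.eye(k)  # regularize for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                           # enforce the sum-to-one constraint

    # 3. Apply the same weights to the paired target exemplars.
    return w @ tgt_dict[idx]
```

Because the weights are computed only from the source frame and the source dictionary, the same mechanism carries over to the many-to-one setting, where the source dictionary is built without parallel data from the unseen speaker.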

Original language: English
Pages (from-to): 1493-1516
Number of pages: 24
Journal: Journal of Information Science and Engineering
Volume: 34
Issue number: 6
DOIs
State: Published - 1 Jan 2018

Keywords

  • Exemplar-based
  • Locally linear embedding
  • Manifold learning
  • Many-to-one
  • Voice conversion

