Exploring mutual information for GMM-based spectral conversion

Hsin Te Hwang*, Yu Tsao, Hsin Min Wang, Yih-Ru Wang, Sin-Horng Chen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

In this paper, we propose a maximum mutual information (MMI) training criterion to refine the parameters of the joint density GMM (JDGMM) set, in order to tackle the over-smoothing issue in voice conversion (VC). Conventionally, the maximum likelihood (ML) criterion is used to train a JDGMM set, which characterizes the joint distribution of the source and target feature vectors. The MMI training criterion, on the other hand, updates the parameters of the JDGMM set to increase its capability of modeling the dependency between the source and target feature vectors, and thus to make the converted sounds closer to natural ones. The subjective listening test demonstrates that the quality and individuality of the speech converted by the proposed ML followed by MMI (ML+MMI) training method is better than that by the ML training method.
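The dependency that the MMI criterion seeks to strengthen can be made concrete in the simplest case: for a single joint Gaussian over stacked source/target features z = [x; y], the mutual information between x and y has a closed form, I(X; Y) = ½ log(|Σxx||Σyy| / |Σzz|). The sketch below computes this quantity; it is an illustrative single-Gaussian analogue of the dependency measure, not the paper's exact GMM-based objective, and the function name is ours.

```python
import numpy as np

def gaussian_mutual_information(cov, dim_x):
    """MI (in nats) between the first dim_x dimensions and the rest
    of a jointly Gaussian vector with covariance `cov`."""
    cov = np.asarray(cov, dtype=float)
    cov_xx = cov[:dim_x, :dim_x]          # source-source block
    cov_yy = cov[dim_x:, dim_x:]          # target-target block
    # slogdet is numerically safer than log(det(...)) for larger blocks
    _, logdet_full = np.linalg.slogdet(cov)
    _, logdet_xx = np.linalg.slogdet(cov_xx)
    _, logdet_yy = np.linalg.slogdet(cov_yy)
    return 0.5 * (logdet_xx + logdet_yy - logdet_full)

# Example: 1-D source and target features with correlation rho = 0.8.
rho = 0.8
cov = np.array([[1.0, rho],
                [rho, 1.0]])
mi = gaussian_mutual_information(cov, dim_x=1)
# For the bivariate case this equals -0.5 * log(1 - rho**2)
print(mi)
```

As the cross-covariance (here, rho) grows, the mutual information grows; raising this quantity during training is what makes the learned mapping between source and target features less prone to over-smoothing.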

Original language: English
Title of host publication: 2012 8th International Symposium on Chinese Spoken Language Processing, ISCSLP 2012
Pages: 50-54
Number of pages: 5
DOIs
State: Published - 1 Dec 2012
Event: 2012 8th International Symposium on Chinese Spoken Language Processing, ISCSLP 2012 - Hong Kong, China
Duration: 5 Dec 2012 - 8 Dec 2012

Publication series

Name: 2012 8th International Symposium on Chinese Spoken Language Processing, ISCSLP 2012

Conference

Conference: 2012 8th International Symposium on Chinese Spoken Language Processing, ISCSLP 2012
Country: China
City: Hong Kong
Period: 5/12/12 - 8/12/12

Keywords

  • GMM
  • mutual information
  • Voice conversion

