
Abstract

Expressive voice conversion aims to transfer both the speaker identity and the expressive attributes of a target utterance to a given source utterance. In this work, we improve upon a self-supervised, non-autoregressive framework with a conditional variational autoencoder, focusing on reducing source timbre leakage and improving linguistic-acoustic disentanglement for better style transfer. To minimize style leakage, we use multilingual discrete speech units for the content representation and reinforce the embeddings with an augmentation-based similarity loss and mix-style layer normalization. To enhance expressivity transfer, we incorporate local F0 information via cross-attention and extract style embeddings enriched with global pitch and energy features. Experiments show that our model outperforms the baselines in both emotion and speaker similarity, demonstrating superior style adaptation and reduced source style leakage.
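
The abstract names mix-style layer normalization as one of the components used to suppress source style leakage. The exact formulation is not given on this page, so below is only a minimal illustrative sketch, in the spirit of MixStyle-type statistic mixing: a conditional layer norm whose scale and shift come from a style embedding, with styles randomly mixed across the batch during training so the content pathway cannot rely on a fixed source style. The class and parameter names (`MixStyleLayerNorm`, `alpha`, `p`) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn


class MixStyleLayerNorm(nn.Module):
    """Illustrative conditional layer norm with batch-level style mixing.

    Content features are normalized without affine parameters, then
    re-scaled and shifted using projections of a style embedding. During
    training, each utterance's style is mixed with that of a randomly
    chosen batch mate (coefficient drawn from Beta(alpha, alpha)), which
    discourages the content representation from carrying source style.
    """

    def __init__(self, dim: int, style_dim: int,
                 alpha: float = 0.1, p: float = 0.5):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale = nn.Linear(style_dim, dim)
        self.to_shift = nn.Linear(style_dim, dim)
        self.alpha = alpha  # Beta concentration for the mixing coefficient
        self.p = p          # probability of mixing styles within a batch

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) content features; style: (batch, style_dim)
        if self.training and torch.rand(1).item() < self.p:
            lam = torch.distributions.Beta(self.alpha, self.alpha).sample(
                (style.size(0), 1)).to(style.device)
            perm = torch.randperm(style.size(0), device=style.device)
            style = lam * style + (1.0 - lam) * style[perm]  # mix styles
        scale = self.to_scale(style).unsqueeze(1)  # (batch, 1, dim)
        shift = self.to_shift(style).unsqueeze(1)
        return self.norm(x) * (1.0 + scale) + shift
```

Keeping the layer norm itself affine-free and pushing all scale/shift through the (mixed) style embedding is the design choice that makes the leakage reduction effective: any style information must enter through the conditioning path, never through the normalized content features.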

Neutral:

Source Audio | Target Audio | Synthesized Audio

Angry:

English-to-English

Source Audio | Target Audio | Synthesized Audio

Cross-lingual

Source Audio | Target Audio | Synthesized Audio

Happy:

English-to-English

Source Audio | Target Audio | Synthesized Audio

Cross-lingual

Source Audio | Target Audio | Synthesized Audio

Sad:

English-to-English

Source Audio | Target Audio | Synthesized Audio

Cross-lingual

Source Audio | Target Audio | Synthesized Audio