* OpenVoice: Open Source Voice Cloning

OpenVoice is a collaborative effort by MIT, Tsinghua University, and the Canadian AI startup MyShell. It is an open-source alternative for voice cloning, offering near-instant cloning and fine-grained style controls that the project claims surpass existing technologies.

OpenVoice: Versatile Instant Voice Cloning on Google Colab:

OpenVoice was developed by researchers at MIT and Tsinghua University together with MyShell, a Canadian AI startup, and was launched by MyShell in January 2024.

It is open-sourced, meaning its code is publicly available for anyone to use and modify.

Other "OpenVoice" Platforms:


A separate company also named OpenVoice, founded by Heath Ahrens in 2007, offers text-to-speech and voice-changing tools.

Sources for code and info:

OpenVoice site: https://research.myshell.ai/open-voice

OpenVoice GitHub: https://github.com/myshell-ai/OpenVoice

OpenVoice paper: https://arxiv.org/abs/2312.01479

We introduce OpenVoice, a versatile instant voice cloning approach that requires only a short audio clip from the reference speaker to replicate their voice and generate speech in multiple languages.

OpenVoice enables granular control over voice styles, including emotion, accent, rhythm, pauses, and intonation, in addition to replicating the tone color of the reference speaker.

OpenVoice also achieves zero-shot cross-lingual voice cloning for languages not included in the massive-speaker training set.

OpenVoice is also computationally efficient, costing tens of times less than commercially available APIs, even those with inferior performance.
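The core idea in the description above is that "who is speaking" can be captured as a tone color embedding extracted from a short reference clip, which is then used to re-color generated speech. The toy example below is not OpenVoice's actual model; it only illustrates the concept with a crude, hypothetical spectral embedding: clips from the same "speaker" produce closer embeddings than clips from a different one.

```python
# Toy illustration (NOT the real OpenVoice encoder): derive a fixed-size
# "tone color" embedding from a waveform and compare embeddings with
# cosine similarity. All function names here are made up for the sketch.
import numpy as np

def tone_color_embedding(waveform: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Crude stand-in for a speaker encoder: log-magnitude spectrum
    pooled into a few frequency bands, then L2-normalized."""
    spectrum = np.abs(np.fft.rfft(waveform))
    bands = np.array_split(spectrum, n_bands)
    emb = np.log1p(np.array([b.mean() for b in bands]))
    return emb / np.linalg.norm(emb)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Embeddings are unit-norm, so the dot product is the cosine.
    return float(a @ b)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)

# Two "recordings" of speaker 1: same harmonic structure, different noise.
speaker1_a = np.sin(2*np.pi*120*t) + 0.5*np.sin(2*np.pi*240*t) + 0.01*rng.standard_normal(t.size)
speaker1_b = np.sin(2*np.pi*120*t) + 0.5*np.sin(2*np.pi*240*t) + 0.01*rng.standard_normal(t.size)
# A "recording" of speaker 2: energy concentrated in higher bands.
speaker2 = np.sin(2*np.pi*3000*t) + 0.8*np.sin(2*np.pi*5000*t) + 0.01*rng.standard_normal(t.size)

same = cosine_similarity(tone_color_embedding(speaker1_a), tone_color_embedding(speaker1_b))
diff = cosine_similarity(tone_color_embedding(speaker1_a), tone_color_embedding(speaker2))
# Same-speaker embeddings end up closer than cross-speaker embeddings.
```

In OpenVoice the analogous embedding is produced by a trained tone color extractor, and the "re-coloring" is done by a learned converter model rather than any spectral heuristic like this.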

The technical report and source code can be found at the links listed above.

Accurate Tone Color Cloning

OpenVoice can accurately clone the reference tone color and generate speech in multiple languages and accents.

Flexible Voice Style Control

OpenVoice enables granular control over voice styles, such as emotion and accent, as well as other style parameters including rhythm, pauses, and intonation. Here we demonstrate the control over emotion and accent of the generated voice.
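In the released code, style control lives in the base speaker TTS step, while tone color comes from the converter. The sketch below is based on the repository's demo notebook and is an outline, not a drop-in script: the checkpoint paths, available style names, and exact helper signatures depend on the OpenVoice release you download, and the model checkpoints must be fetched separately.

```python
# Hedged sketch of the two-stage OpenVoice v1 pipeline, following the
# repo's demo notebook. Paths and style names below are assumptions.
import torch
from openvoice import se_extractor
from openvoice.api import BaseSpeakerTTS, ToneColorConverter

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Stage 1: base speaker TTS controls what is said and the speaking style.
base_tts = BaseSpeakerTTS('checkpoints/base_speakers/EN/config.json', device=device)
base_tts.load_ckpt('checkpoints/base_speakers/EN/checkpoint.pth')

# Stage 2: tone color converter re-colors the audio with the reference voice.
converter = ToneColorConverter('checkpoints/converter/config.json', device=device)
converter.load_ckpt('checkpoints/converter/checkpoint.pth')

# Source embedding ships with the base speaker; the target embedding is
# extracted from a short clip of the reference speaker.
source_se = torch.load('checkpoints/base_speakers/EN/en_default_se.pth').to(device)
target_se, _ = se_extractor.get_se('reference.mp3', converter, vad=True)

# Emotion/style is a parameter of the base TTS step (e.g. 'default',
# 'whispering', 'cheerful' in the released English checkpoint).
base_tts.tts('This is a demo.', 'tmp.wav', speaker='cheerful',
             language='English', speed=1.0)
converter.convert(audio_src_path='tmp.wav', src_se=source_se,
                  tgt_se=target_se, output_path='output.wav')
```

Decoupling the two stages is what makes the style control flexible: emotion, accent, rhythm, and pauses are chosen when generating the base audio, independently of whose tone color is applied afterward.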

Zero-shot Cross-lingual Voice Cloning

The reference voice and the generated voice can be in languages outside the massive-speaker multi-lingual dataset. We use “U” to denote the unseen languages in the following examples.
