Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.0%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: D-S-Sahithi
  • License: mpl-2.0
  • Language: Python
  • Default Branch: main
  • Size: 135 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 10 months ago · Last pushed 9 months ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md

**Coqui TTS is a library for advanced Text-to-Speech generation.**

Pretrained models in 1100+ languages.

Tools for training new models and fine-tuning existing models in any language.

Utilities for dataset analysis and curation.


News

  • Fork of the original, unmaintained repository. New PyPI package: coqui-tts
  • 0.25.0: OpenVoice models now available for voice conversion.
  • 0.24.2: Prebuilt wheels are now also published for Mac and Windows (in addition to Linux as before) for easier installation across platforms.
  • 0.20.0: XTTSv2 is here with 17 languages and better performance across the board. XTTS can stream with <200ms latency.
  • 0.19.0: XTTS fine-tuning code is out. Check the example recipes.
  • 0.14.1: You can use Fairseq models in ~1100 languages with TTS.

Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

| Type | Platforms |
| --- | --- |
| Bug Reports, Feature Requests & Ideas | GitHub Issue Tracker |
| Usage Questions | GitHub Discussions |
| General Discussion | GitHub Discussions or Discord |

The issues and discussions in the original repository are also still a useful source of information.

Links and Resources

| Type | Links |
| --- | --- |
| Documentation | ReadTheDocs |
| Installation | TTS/README.md |
| Contributing | CONTRIBUTING.md |
| Released Models | Standard models and Fairseq models in ~1100 languages |

Features

  • High-performance text-to-speech and voice conversion models, see list below.
  • Fast and efficient model training with detailed training logs on the terminal and Tensorboard.
  • Support for multi-speaker and multilingual TTS.
  • Released and ready-to-use models.
  • Tools to curate TTS datasets under dataset_analysis/.
  • Command line and Python APIs to use and test your models.
  • Modular (but not too much) code base enabling easy implementation of new ideas.

Model Implementations

Spectrogram models

End-to-End Models

Vocoders

Voice Conversion

Others

You can also help us implement more models.

Installation

TTS is tested on Ubuntu 24.04 with python >= 3.10, < 3.13, but should also work on Mac and Windows.
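The tested interpreter range above can be expressed as a quick sanity check; this helper is ours for illustration, not part of the TTS package:

```python
# Hypothetical helper: check whether a Python version falls inside the
# range this project is tested against (>= 3.10, < 3.13).
def is_tested_python(major: int, minor: int) -> bool:
    return (3, 10) <= (major, minor) < (3, 13)

print(is_tested_python(3, 12))  # True: inside the tested range
print(is_tested_python(3, 13))  # False: above the tested range
```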

If you are only interested in synthesizing speech with the pretrained TTS models, installing from PyPI is the easiest option.

```bash
pip install coqui-tts
```

If you plan to code or train models, clone TTS and install it locally.

```bash
git clone https://github.com/idiap/coqui-ai-TTS
cd coqui-ai-TTS
pip install -e .
```

Optional dependencies

The following extras allow the installation of optional dependencies:

| Name | Description |
| --- | --- |
| all | All optional dependencies |
| notebooks | Dependencies only used in notebooks |
| server | Dependencies to run the TTS server |
| bn | Bangla G2P |
| ja | Japanese G2P |
| ko | Korean G2P |
| zh | Chinese G2P |
| languages | All language-specific dependencies |

You can install extras with one of the following commands:

```bash
pip install coqui-tts[server,ja]
pip install -e .[server,ja]
```

Platforms

If you are on Ubuntu (Debian), you can also run the following commands for installation.

```bash
make system-deps
make install
```

Docker Image

You can also try out Coqui TTS without installation with the docker image. Simply run the following command and you will be able to run TTS:

```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/idiap/coqui-tts-cpu
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits  # To start a server
```

You can then enjoy the TTS server here. More details about the Docker images (like GPU support) can be found here.

Synthesizing speech by TTS

Python API

Multi-speaker and multi-lingual model

```python
import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# List available TTS models
print(TTS().list_models())

# Initialize TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# List speakers
print(tts.speakers)

# Run TTS
# XTTS supports both, but many models allow only one of the `speaker` and
# `speaker_wav` arguments.
# TTS with list of amplitude values as output, clone the voice from `speaker_wav`
wav = tts.tts(
    text="Hello world!",
    speaker_wav="my/cloning/audio.wav",
    language="en"
)

# TTS to a file, use a preset speaker
tts.tts_to_file(
    text="Hello world!",
    speaker="Craig Gutsy",
    language="en",
    file_path="output.wav"
)
```

Single speaker model

```python
# Initialize TTS with the target model name
tts = TTS("tts_models/de/thorsten/tacotron2-DDC").to(device)

# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)
```

Voice conversion (VC)

Converting the voice in source_wav to the voice of target_wav:

```python
tts = TTS("voice_conversion_models/multilingual/vctk/freevc24").to("cuda")
tts.voice_conversion_to_file(
    source_wav="my/source.wav",
    target_wav="my/target.wav",
    file_path="output.wav"
)
```

Other available voice conversion models:

  • voice_conversion_models/multilingual/multi-dataset/knnvc
  • voice_conversion_models/multilingual/multi-dataset/openvoice_v1
  • voice_conversion_models/multilingual/multi-dataset/openvoice_v2

For more details, see the documentation.

Voice cloning by combining single speaker TTS model with the default VC model

This way, you can clone voices by using any model in TTS. The FreeVC model is used for voice conversion after synthesizing speech.

```python
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```

TTS using Fairseq models in ~1100 languages

For Fairseq models, use the following name format: tts_models/<lang-iso_code>/fairseq/vits. You can find the language ISO codes here and learn about the Fairseq models here.

```python
# TTS with Fairseq models
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    file_path="output.wav"
)
```
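Since Fairseq model names follow the fixed tts_models/<lang-iso_code>/fairseq/vits pattern described above, the name string can be built from an ISO code alone. A minimal sketch; the helper name is ours, not part of the TTS API:

```python
# Hypothetical helper: build the Fairseq model name used by TTS
# from a language ISO code, e.g. "deu" -> "tts_models/deu/fairseq/vits".
def fairseq_model_name(iso_code: str) -> str:
    return f"tts_models/{iso_code}/fairseq/vits"

print(fairseq_model_name("deu"))  # tts_models/deu/fairseq/vits
```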

Command-line interface tts

Synthesize speech on the command line.

You can either use your trained model or choose a model from the provided list.

  • List provided models:

```sh
tts --list_models
```

  • Get model information. Use the names obtained from --list_models.

```sh
tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```

For example:

```sh
tts --model_info_by_name tts_models/tr/common-voice/glow-tts
tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```
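Model names passed to the CLI follow the four-part <model_type>/<language>/<dataset>/<model_name> scheme. A small helper to split one into its fields; this function is ours for illustration, not part of the tts CLI:

```python
# Hypothetical helper: split a TTS model name into its four fields.
def parse_model_name(name: str) -> dict:
    model_type, language, dataset, model = name.split("/")
    return {
        "model_type": model_type,
        "language": language,
        "dataset": dataset,
        "model_name": model,
    }

print(parse_model_name("tts_models/tr/common-voice/glow-tts"))
```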

Single speaker models

  • Run TTS with the default model (tts_models/en/ljspeech/tacotron2-DDC):

```sh
tts --text "Text for TTS" --out_path output/path/speech.wav
```

  • Run TTS and pipe out the generated TTS wav file data:

```sh
tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
```

  • Run a TTS model with its default vocoder model:

```sh
tts --text "Text for TTS" \
    --model_name "<model_type>/<language>/<dataset>/<model_name>" \
    --out_path output/path/speech.wav
```

For example:

```sh
tts --text "Text for TTS" \
    --model_name "tts_models/en/ljspeech/glow-tts" \
    --out_path output/path/speech.wav
```

  • Run with specific TTS and vocoder models from the list. Note that not every vocoder is compatible with every TTS model.

```sh
tts --text "Text for TTS" \
    --model_name "<model_type>/<language>/<dataset>/<model_name>" \
    --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" \
    --out_path output/path/speech.wav
```

For example:

```sh
tts --text "Text for TTS" \
    --model_name "tts_models/en/ljspeech/glow-tts" \
    --vocoder_name "vocoder_models/en/ljspeech/univnet" \
    --out_path output/path/speech.wav
```

  • Run your own TTS model (using Griffin-Lim Vocoder):

```sh
tts --text "Text for TTS" \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav
```

  • Run your own TTS and Vocoder models:

```sh
tts --text "Text for TTS" \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav \
    --vocoder_path path/to/vocoder.pth \
    --vocoder_config_path path/to/vocoder_config.json
```

Multi-speaker models

  • List the available speakers and choose a <speaker_id> among them:

```sh
tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```

  • Run the multi-speaker TTS model with the target speaker ID:

```sh
tts --text "Text for TTS." --out_path output/path/speech.wav \
    --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```

  • Run your own multi-speaker TTS model:

```sh
tts --text "Text for TTS" --out_path output/path/speech.wav \
    --model_path path/to/model.pth --config_path path/to/config.json \
    --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```

Voice conversion models

```sh
tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" \
    --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
```

Owner

  • Login: D-S-Sahithi
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you want to cite 🐸💬, feel free to use this (but only if you loved it 😊)"
title: "Coqui TTS"
abstract: "A deep learning toolkit for Text-to-Speech, battle-tested in research and production"
date-released: 2021-01-01
authors:
  - family-names: "Eren"
    given-names: "Gölge"
  - name: "The Coqui TTS Team"
version: 1.4
doi: 10.5281/zenodo.6334862
license: "MPL-2.0"
url: "https://github.com/idiap/coqui-ai-TTS"
repository-code: "https://github.com/idiap/coqui-ai-TTS"
keywords:
  - machine learning
  - deep learning
  - artificial intelligence
  - text to speech
  - TTS

GitHub Events

Total
  • Member event: 2
  • Push event: 3
  • Public event: 1
Last Year
  • Member event: 2
  • Push event: 3
  • Public event: 1

Dependencies

.github/actions/setup-uv/action.yml actions
.github/workflows/docker.yaml actions
.github/workflows/pypi-release.yml actions
.github/workflows/style_check.yml actions
.github/workflows/tests.yml actions
Dockerfile docker
recipes/bel-alex73/docker-prepare/Dockerfile docker
TTS/demos/xtts_ft_demo/requirements.txt pypi
TTS/encoder/requirements.txt pypi
pyproject.toml pypi