Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.3%) to scientific vocabulary
Last synced: 7 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: balisujohn
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 52.9 MB
Statistics
  • Stars: 2
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed over 2 years ago
Metadata Files
Readme License Citation

README.md

This is a reverse-engineering fork of tortoise-tts.

In order to set this up, run the following commands (tested on Ubuntu 22.04 with CUDA 12.0 and a 1070 Ti):

```shell
python3.9 -m venv env
source env/bin/activate
python3 -m pip install -r requirements.txt
python3 -m pip install torchviz
python3 -m pip install -e .
```

In order to generate the ggml-model.bin file, first run:

```shell
python3 tortoise/do_tts.py --text "This is a test message" --voice mol --preset fast --seed 0
```

This will generate the precursor files `auto_conditioning.pt` and `autoregressive.pt`.

Then run:

```shell
python3 ./convert-pt-to-ggml.py autoregressive.pt
```

which will result in `ggml-model.bin` being generated. You can then move this model to `examples/tortoise/` in tortoise.cpp to use it with tortoise.cpp.

You can find a pre-converted version of ggml-model.bin at https://huggingface.co/balisujohn/tortoise-ggml. Check the commit message to see which tortoise.cpp commit the particular model version is compatible with.

TorToiSe

Tortoise is a text-to-speech program built with the following priorities:

  1. Strong multi-voice capabilities.
  2. Highly realistic prosody and intonation.

This repo contains all the code needed to run Tortoise TTS in inference mode.

Manuscript: https://arxiv.org/abs/2305.07243

Hugging Face space

A live demo is hosted on Hugging Face Spaces. If you'd like to avoid a queue, please duplicate the Space and add a GPU. Please note that CPU-only spaces do not work for this demo.

https://huggingface.co/spaces/Manmay/tortoise-tts

Install via pip

```bash
pip install tortoise-tts
```

If you would like to install the latest development version, you can also install it directly from the git repository:

```bash
pip install git+https://github.com/neonbjb/tortoise-tts
```

What's in a name?

I'm naming my speech-related repos after Mojave desert flora and fauna. Tortoise is a bit tongue in cheek: this model is insanely slow. It leverages both an autoregressive decoder and a diffusion decoder; both known for their low sampling rates. On a K80, expect to generate a medium sized sentence every 2 minutes.

Well... not so slow anymore: we can now get an RTF of 0.25-0.3 on 4 GB of VRAM, and with streaming we can get under 500 ms latency!
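The RTF numbers above translate to wall-clock time as follows (a quick sketch; `generation_time` is an illustrative helper, not part of this repo):

```python
def generation_time(audio_seconds, rtf):
    """Real-time factor (RTF) is generation time divided by audio duration,
    so generation time = audio duration * RTF. RTF < 1 is faster than real time."""
    return audio_seconds * rtf

# At RTF 0.25, a 10-second clip takes about 2.5 seconds to generate.
t = generation_time(10.0, 0.25)
```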

Demos

See this page for a large list of example outputs.

A cool application of Tortoise + GPT-3 (not affiliated with this repository): https://twitter.com/lexman_ai. Unfortunately, this project no longer seems to be active.

Usage guide

Local installation

If you want to use this on your own computer, you must have an NVIDIA GPU.

On Windows, I highly recommend using the Conda installation path. I have been told that if you do not do this, you will spend a lot of time chasing dependency problems.

First, install miniconda: https://docs.conda.io/en/latest/miniconda.html

Then run the following commands, using Anaconda Prompt as the terminal (or any other terminal configured to work with conda).

This will:

  1. create a conda environment with minimal dependencies specified
  2. activate the environment
  3. install pytorch with the command provided here: https://pytorch.org/get-started/locally/
  4. clone tortoise-tts
  5. change the current directory to tortoise-tts
  6. run the tortoise python setup install script

```shell
conda create --name tortoise python=3.9 numba inflect
conda activate tortoise
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install transformers=4.29.2
git clone https://github.com/neonbjb/tortoise-tts.git
cd tortoise-tts
python setup.py install
```

Optionally, pytorch can be installed in the base environment, so that other conda environments can use it too. To do this, simply run the `conda install pytorch...` line before activating the tortoise environment.

Note: When you want to use tortoise-tts, you will always have to ensure the tortoise conda environment is activated.

If you are on Windows, you may also need to install pysoundfile: `conda install -c conda-forge pysoundfile`

Docker

An easy way to hit the ground running and a good jumping off point depending on your use case.

```sh
git clone https://github.com/neonbjb/tortoise-tts.git
cd tortoise-tts

docker build . -t tts

docker run --gpus all \
    -e TORTOISE_MODELS_DIR=/models \
    -v /mnt/user/data/tortoise_tts/models:/models \
    -v /mnt/user/data/tortoise_tts/results:/results \
    -v /mnt/user/data/.cache/huggingface:/root/.cache/huggingface \
    -v /root:/work \
    -it tts
```

This gives you an interactive terminal in an environment that's ready to do some TTS. Now you can explore the different interfaces that tortoise exposes for TTS.

For example:

```sh
cd app
conda activate tortoise
time python tortoise/do_tts.py \
    --output_path /results \
    --preset ultra_fast \
    --voice geralt \
    --text "Time flies like an arrow; fruit flies like a banana."
```

Apple Silicon

On macOS 13+ with M1/M2 chips you need to install the nightly version of PyTorch. As stated in the official page, you can do:

```shell
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
```

Be sure to do that after you activate the environment. If you don't use conda the commands would look like this:

```shell
python3.10 -m venv .venv
source .venv/bin/activate
pip install numba inflect psutil
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install transformers
git clone https://github.com/neonbjb/tortoise-tts.git
cd tortoise-tts
pip install .
```

Be aware that DeepSpeed is disabled on Apple Silicon since it does not work; the flag `--use_deepspeed` is ignored. You may need to prepend `PYTORCH_ENABLE_MPS_FALLBACK=1` to the commands below to make them work, since MPS does not support all the operations in PyTorch.

do_tts.py

This script allows you to speak a single phrase with one or more voices.

```shell
python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast
```

read_fast.py (faster inference)

This script provides tools for reading large amounts of text.

```shell
python tortoise/read_fast.py --textfile <your text to be read> --voice random
```

read.py

This script provides tools for reading large amounts of text.

```shell
python tortoise/read.py --textfile <your text to be read> --voice random
```

This will break up the textfile into sentences, and then convert them to speech one at a time. It will output a series of spoken clips as they are generated. Once all the clips are generated, it will combine them into a single file and output that as well.
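The split-then-combine flow described above can be sketched in plain Python (the naive sentence splitter and the `fake_tts` stub below are illustrative stand-ins, not read.py's actual code):

```python
import re

def split_sentences(text):
    # Naive splitter: break after ., !, or ? followed by whitespace.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def read_text(text, synthesize):
    # Synthesize each sentence as its own clip, then combine all clips into one.
    clips = [synthesize(s) for s in split_sentences(text)]
    combined = [sample for clip in clips for sample in clip]
    return clips, combined

# Stand-in synthesizer: one silent "sample" per character, just to show the flow.
fake_tts = lambda sentence: [0.0] * len(sentence)
clips, combined = read_text("First sentence. Second one! A third?", fake_tts)
```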

Sometimes Tortoise screws up an output. You can re-generate any bad clips by re-running read.py with the --regenerate argument.

API

Tortoise can be used programmatically, like so:

```python
reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech()
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```

To use deepspeed:

```python
reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech(use_deepspeed=True)
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```

To use kv cache:

```python
reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech(kv_cache=True)
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```

To run model in float16:

```python
reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech(half=True)
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```

For faster runs, use all three:

```python
reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech(use_deepspeed=True, kv_cache=True, half=True)
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```
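The `pcm_audio` these calls return is a waveform you still need to write to disk. As a stdlib-only sketch (assuming a flat list of float samples in [-1, 1] and a 24 kHz output rate; the `save_pcm` helper is illustrative, not part of tortoise's API):

```python
import struct
import wave

def save_pcm(samples, path, sample_rate=24000):
    """Write float samples in [-1, 1] as 16-bit mono PCM WAV."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)          # mono
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(sample_rate)
        clamped = (max(-32768, min(32767, int(s * 32767))) for s in samples)
        wav.writeframes(b"".join(struct.pack("<h", v) for v in clamped))

save_pcm([0.0, 0.5, -0.5], "generated.wav")
```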

Acknowledgements

This project has garnered more praise than I expected. I am standing on the shoulders of giants, though, and I want to credit a few of the amazing folks in the community that have helped make this happen:

  • Hugging Face, who wrote the GPT model and the generate API used by Tortoise, and who hosts the model weights.
  • Ramesh et al., who authored the DALL-E paper, which is the inspiration behind Tortoise.
  • Nichol and Dhariwal, who authored (the revision of) the code that drives the diffusion model.
  • Jang et al., who developed and open-sourced UnivNet, the vocoder this repo uses.
  • Kim and Jung, who implemented the UnivNet PyTorch model.
  • lucidrains, who writes awesome open-source PyTorch models, many of which are used here.
  • Patrick von Platen, whose guides on setting up wav2vec were invaluable to building my dataset.

Notice

Tortoise was built entirely by me using my own hardware. My employer was not involved in any facet of Tortoise's development.

License

Tortoise TTS is licensed under the Apache 2.0 license.

If you use this repo or the ideas therein for your research, please cite it! A BibTeX entry can be found in the right pane on GitHub.
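A hedged sketch of such an entry, assembled from the CITATION.cff metadata shown further below (the citation key and the `@software` entry type are my choices, not the repo's):

```bibtex
@software{betker2022tortoise,
  author  = {Betker, James},
  title   = {TorToiSe text-to-speech},
  version = {2.0},
  date    = {2022-04-28},
  url     = {https://github.com/neonbjb/tortoise-tts},
}
```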

Owner

  • Name: John Balis
  • Login: balisujohn
  • Kind: user

Pursuing a doctorate in computer science at UW-Madison. Interested in reinforcement learning. My focus is primarily sim2real RL for robotics.

Citation (CITATION.cff)

cff-version: 1.3.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Betker"
  given-names: "James"
  orcid: "https://orcid.org/my-orcid?orcid=0000-0003-3259-4862"
title: "TorToiSe text-to-speech"
version: 2.0
date-released: 2022-04-28
url: "https://github.com/neonbjb/tortoise-tts"

GitHub Events

Total
Last Year

Committers

Last synced: about 1 year ago

All Time
  • Total Commits: 266
  • Total Committers: 34
  • Avg Commits per committer: 7.824
  • Development Distribution Score (DDS): 0.432
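The DDS figure here is consistent with the usual definition, one minus the top committer's share of all commits (James Betker's 151 of 266, per the table below):

```python
def dds(total_commits, top_committer_commits):
    # Development Distribution Score: 1 - top committer's share of all commits.
    return 1 - top_committer_commits / total_commits

score = dds(266, 151)  # ≈ 0.432, matching the value above
```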
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
James Betker j****r@g****m 151
manmay nakhashi m****i@g****m 49
Johan Nordberg i****s@j****m 15
Jose 3****r 6
Roberts Slisans r****s@g****m 5
John U. Balis p****s@g****m 3
chris c****s@z****e 3
n8bot 2****t 3
osanseviero o****o@g****m 3
rgkirch 6****h 2
mrfakename me@m****e 2
Marcus Llewellyn m****n@g****m 2
Alex 3****l 1
Cason Clagg c****g@g****m 1
Chao Gao r****e@g****m 1
Danila Berezin 7****3 1
Dylan Caponi d****b@g****m 1
원빈 정 w****g@m****i 1
spottenn s****n@g****m 1
William Gaylord c****0@g****m 1
Vladimir Sofronov n****i@g****m 1
Tristan Drake t****0@g****m 1
Sergey s****n@m****u 1
NourEldin Osama 5****a 1
Nirant N****K 1
Mark Baushenko e****y@g****m 1
Kian-Meng Ang k****g@c****g 1
Kevin Stock k****n@k****g 1
Josh Ziegler j****h@p****m 1
Jai Mu k****o@h****u 1
and 4 more...
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: about 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

Dockerfile docker
  • nvidia/cuda 12.2.0-base-ubuntu22.04 build
requirements.txt pypi
  • appdirs *
  • deepspeed ==0.8.3
  • einops ==0.4.1
  • ffmpeg *
  • hjson *
  • inflect *
  • librosa ==0.9.1
  • llvmlite *
  • nbconvert ==5.3.1
  • numba *
  • numpy *
  • progressbar *
  • psutil *
  • py-cpuinfo *
  • pydantic ==1.9.1
  • rotary_embedding_torch *
  • scipy *
  • sounddevice *
  • threadpoolctl *
  • tokenizers *
  • torchaudio *
  • tornado ==4.2
  • tqdm *
  • transformers ==4.31.0
  • unidecode *
setup.py pypi
  • deepspeed ==0.8.3
  • einops *
  • inflect *
  • librosa *
  • progressbar *
  • rotary_embedding_torch *
  • scipy *
  • tokenizers *
  • tqdm *
  • transformers ==4.31.0
  • unidecode *