vitsserver
A VITS ONNX TTS server designed for fast inference
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ✗ DOI references
- ✗ Academic publication links
- ✗ Committers with academic emails
- ✗ Institutional organization owner
- ✗ JOSS paper metadata
- ✗ Scientific vocabulary similarity: low similarity (12.5%) to scientific vocabulary
Keywords
Repository
A VITS ONNX TTS server designed for fast inference
Basic Info
Statistics
- Stars: 126
- Watchers: 6
- Forks: 7
- Open Issues: 3
- Releases: 4
Topics
Metadata Files
README.md

Vits-Server
A VITS ONNX server designed for fast inference, supporting streaming and additional inference settings to enable model preference settings and optimize performance.
Experimental purposes only
This project is for experimental purposes only.
If you are looking for a production-ready TTS implementation, see https://github.com/RVC-Boss/GPT-SoVITS
Advantages
- [x] Long voice generation with streaming support; long text is inferred in batches and the results are merged.
- [x] Automatic language-type parsing for text, eliminating the need for language-recognition segmentation.
- [x] Supports multiple audio formats for output, including ogg, wav, flac, and silk.
- [x] Multiple model initialization with streaming inference.
- [x] Additional inference settings to enable per-model preference settings and optimize performance.
- [x] Automatic conversion of .pth models to .onnx.
- [ ] Support for multiple languages, including Chinese, English, Japanese, and Korean, with task batches dispatched to different models.
API Documentation
We provide an out-of-the-box client:
```python
client = VITS("http://127.0.0.1:9557")
res = client.generate_voice(
    model_id="model_01",
    text="你好，世界！",
    speaker_id=0,
    audio_type="wav",
    length_scale=1.0,
    noise_scale=0.5,
    noise_scale_w=0.5,
    auto_parse=True,
)
with open("output.wav", "wb") as f:
    for chunk in res.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)
```
Running
We recommend using a virtual environment to isolate the runtime environment, since this project's dependencies may conflict with your existing packages; we suggest pipenv for managing them.
Config Server
Configuration lives in .env and includes the following fields:
```dotenv
VITS_SERVER_HOST=0.0.0.0
VITS_SERVER_PORT=9557
VITS_SERVER_RELOAD=false
VITS_SERVER_WORKERS=1
VITS_SERVER_INIT_CONFIG="https://....json"
VITS_SERVER_INIT_MODEL="https://.....pth or onnx"
```
Alternatively, you can set the environment variables directly:
```shell
export VITS_SERVER_HOST="0.0.0.0"
export VITS_SERVER_PORT="9557"
export VITS_SERVER_RELOAD="false"
export VITS_DISABLE_GPU="false"
```
VITS_SERVER_RELOAD enables automatic server restarts when files change.
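As an illustrative sketch, the variables above could be read on the server side like this (the variable names come from the block above; the parsing logic itself is an assumption, not the server's actual code):

```python
import os

def load_server_config() -> dict:
    """Read the documented VITS_SERVER_* variables, falling back to the defaults above."""
    return {
        "host": os.environ.get("VITS_SERVER_HOST", "0.0.0.0"),
        "port": int(os.environ.get("VITS_SERVER_PORT", "9557")),
        "reload": os.environ.get("VITS_SERVER_RELOAD", "false").lower() == "true",
        "workers": int(os.environ.get("VITS_SERVER_WORKERS", "1")),
        "disable_gpu": os.environ.get("VITS_DISABLE_GPU", "false").lower() == "true",
    }
```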
Running from pipenv and pm2.json
```shell
apt-get update && apt-get install -y build-essential libsndfile1 vim gcc g++ cmake
apt install python3-pip
pip3 install pipenv
pipenv install   # Create and install dependency packages
pipenv shell     # Activate the virtual environment
python3 main.py  # Run; press Ctrl+C to exit
```
```shell
apt install npm
npm install pm2 -g
pm2 start pm2.json
# The server now runs in the background
```
We also provide a one-click script that installs pipenv and npm:
```shell
curl -LO https://raw.githubusercontent.com/LlmKira/VitsServer/main/deploy_script.sh && chmod +x deploy_script.sh && ./deploy_script.sh
```
Building from Docker
We publish an image to Docker Hub: docker pull sudoskys/vits-server:main.
You can also build from the Dockerfile:
```shell
docker build -t <image-name> .
```
where <image-name> is the name you want to give the image. Then start the container:
```shell
docker run -d -p 9557:9557 -v <local-path>/vits_model:/app/model <image-name>
```
where <local-path> is the local folder you want to map to the /app/model directory inside the container.
Model Configuration
In the model folder, place the model.pth / model.onnx files and the corresponding model.json. A model provided as .pth will be converted to .onnx automatically.
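A minimal sketch of that conversion check, assuming a hypothetical `convert_pth_to_onnx` helper (the server's real converter is internal and may work differently):

```python
from pathlib import Path

def models_needing_conversion(model_dir: str) -> list:
    """Return .pth checkpoints that do not yet have a matching .onnx file."""
    return sorted(
        pth for pth in Path(model_dir).glob("*.pth")
        if not pth.with_suffix(".onnx").exists()
    )

# Hypothetical usage at startup:
# for pth in models_needing_conversion("model"):
#     convert_pth_to_onnx(pth)  # hypothetical helper, not part of this sketch
```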
You can also set VITS_SERVER_INIT_CONFIG and VITS_SERVER_INIT_MODEL in .env to download model files:
```dotenv
VITS_SERVER_INIT_CONFIG="https://....json"
VITS_SERVER_INIT_MODEL="https://.....pth?trace=233 or onnx?trace=233"
```
Model folder structure:
```
.
├── 1000_epochs.json
├── 1000_epochs.onnx
├── 1000_epochs.pth
├── 233_epochs.json
├── 233_epochs.onnx
└── 233_epochs.pth
```
Here the model IDs are 1000_epochs and 233_epochs.
After adding model files to the model folder, restart the server.
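Since a model ID is simply the filename stem shared by a model's files, enumerating the available IDs can be sketched as follows (assuming every model ships a .json config as shown above; this is illustrative, not the server's actual code):

```python
from pathlib import Path

def list_model_ids(model_dir: str) -> list:
    """Derive model IDs from the per-model .json config files in the folder."""
    return sorted(path.stem for path in Path(model_dir).glob("*.json"))
```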
Model Extension Design
You can add extra fields in the model configuration to obtain information such as the model name corresponding to the model ID through the API.
```json5
{
  //...
  "info": {
    "name": "coco",
    "description": "a vits model",
    "author": "someone",
    "cover": "https://xxx.com/xxx.jpg",
    "email": "xx@ws.com"
  },
  "infer": {
    "noise_scale": 0.667,
    "length_scale": 1.0,
    "noise_scale_w": 0.8
  }
  //....
}
```
infer holds the default (preferred) inference settings for the model; info holds the model's descriptive metadata.
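One plausible way the per-model infer block combines with per-request parameters, sketched under the assumption that explicit request parameters win over model defaults (the server's actual merge logic may differ):

```python
# Global fallbacks, matching the example values above
DEFAULT_INFER = {"noise_scale": 0.667, "length_scale": 1.0, "noise_scale_w": 0.8}

def resolve_infer_settings(model_config, request_overrides=None):
    """Model's `infer` block overrides the defaults; explicit request params override both."""
    settings = {**DEFAULT_INFER, **model_config.get("infer", {})}
    for key, value in (request_overrides or {}).items():
        if value is not None:
            settings[key] = value
    return settings
```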
How can I retrieve this model information?
Access {your_base_url}/model/list?show_speaker=True&show_ms_config=True to obtain detailed information about model roles and configurations.
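A small helper for building that query URL (illustrative; only the endpoint path and the two flags shown above are taken from the source, and the response schema is not documented here):

```python
import urllib.parse

def model_list_url(base_url, show_speaker=True, show_ms_config=True):
    """Build the /model/list URL with its boolean query flags."""
    query = urllib.parse.urlencode(
        {"show_speaker": show_speaker, "show_ms_config": show_ms_config}
    )
    return f"{base_url.rstrip('/')}/model/list?{query}"

# e.g. requests.get(model_list_url("http://127.0.0.1:9557")).json()
```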
TODO
- [ ] Test Silk format
- [x] Docker for automatic deployment
- [x] Shell script for automatic deployment
Acknowledgements
We would like to acknowledge the contributions of the following projects in the development of this project:
- MoeGoe: https://github.com/CjangCjengh/MoeGoe
- vitswithchatbot: https://huggingface.co/Mahiruoshi/vitswithchatbot
- vits: https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer
- espnet: https://github.com/espnet/espnet_onnx
- onnxruntime: https://onnxruntime.ai/
Owner
- Name: LLM Kira
- Login: LlmKira
- Kind: organization
- Email: me@dianas.cyou
- Location: Singapore
- Website: https://llmkira.github.io/Docs
- Repositories: 23
- Profile: https://github.com/LlmKira
Cat Friendly Promotion Association Lab
Citation (CITATION.cff)
cff-version: 1.2.0
title: "GitHub - LlmKira/VitsServer: A VITS ONNX server designed for fast inference"
abstract: This repository contains the source code for VitsServer, a server designed for fast inference of ONNX models in the VITS format.
authors:
- name: LlmKira
type: Organization
url: https://github.com/LlmKira
keywords:
- vits
- onnx
version: 1.0.0
date-released: 2023-04-01
url: https://github.com/LlmKira/VitsServer
citation:
  - text: "LlmKira (2023). GitHub - LlmKira/VitsServer: A VITS ONNX server designed for fast inference. GitHub."
    doi:
  - text: "LlmKira. (2023). VitsServer [Source code]. GitHub. https://github.com/LlmKira/VitsServer"
    doi:
license: BSD-3-Clause
repository-code: https://github.com/LlmKira/VitsServer
GitHub Events
Total
- Watch event: 9
- Push event: 1
Last Year
- Watch event: 9
- Push event: 1
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| sudoskys | c****o@h****m | 77 |
| Dark Litss | 8****3 | 10 |
Issues and Pull Requests
Last synced: 9 months ago
All Time
- Total issues: 8
- Total pull requests: 6
- Average time to close issues: 15 days
- Average time to close pull requests: about 9 hours
- Total issue authors: 5
- Total pull request authors: 2
- Average comments per issue: 2.38
- Average comments per pull request: 0.0
- Merged pull requests: 6
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- sudoskys (4)
- NikitaKononov (1)
- liukaiyueyuo (1)
- Lemondogdog (1)
- ricardomlee (1)
Pull Request Authors
- sudoskys (5)
- lss233 (1)