
daswer123 / xtts-api-server

A simple FastAPI Server to run XTTSv2

License: MIT License

Python 83.24% Jupyter Notebook 1.85% Dockerfile 0.81% Batchfile 14.11%
sillytavern tts tts-api xtts xttsv2 realtime-tts

xtts-api-server's Introduction

A simple FastAPI Server to run XTTSv2

This project is inspired by silero-api-server and utilizes XTTSv2.

This server was created for SillyTavern, but you can use it for your own needs

Feel free to make PRs or use the code for your own needs

There's a Google Colab version you can use if your computer is weak.

If you are looking for an option for regular XTTS use, look here: https://github.com/daswer123/xtts-webui

Recently I have had little time for this project, so I advise you to take a look at a similar project

Changelog

You can keep track of all changes on the release page

TODO

  • Make it possible to change generation parameters through the generation request and through a different endpoint

Installation

Simple installation :

pip install xtts-api-server

This will install all the necessary dependencies, including a CPU-only build of PyTorch

I recommend installing the GPU version to improve processing speed (up to 3x faster)

Windows

python -m venv venv
venv\Scripts\activate
pip install xtts-api-server
pip install torch==2.1.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118

Linux

sudo apt install -y python3-dev python3-venv portaudio19-dev
python -m venv venv
source venv/bin/activate
pip install xtts-api-server
pip install torch==2.1.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118

Manual

# Clone REPO
git clone https://github.com/daswer123/xtts-api-server
cd xtts-api-server
# Create virtual env
python -m venv venv
venv\Scripts\activate           # on Windows
# or: source venv/bin/activate  # on Linux/macOS
# Install deps
pip install -r requirements.txt
pip install torch==2.1.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118
# Launch server
python -m xtts_api_server
 

Use Docker image with Docker Compose

A Dockerfile is provided to build a Docker image, and a docker-compose.yml file is provided to run the server with Docker Compose as a service.

You can run the prebuilt image with the following commands:

mkdir xtts-api-server
cd xtts-api-server
docker run -d daswer123/xtts-api-server

OR

cd docker
docker compose build

Then you can run the server with the following command:

docker compose up # or with -d to run in background

Starting Server

python -m xtts_api_server will run on the default IP and port (localhost:8020)

Use the --deepspeed flag to speed up processing (2-3x acceleration)

usage: xtts_api_server [-h] [-hs HOST] [-p PORT] [-d DEVICE] [-sf SPEAKER_FOLDER] [-o OUTPUT] [-mf MODELS_FOLDERS] [-t TUNNEL_URL] [-ms MODEL_SOURCE] [-v MODEL_VERSION] [--listen] [--use-cache] [--lowvram] [--deepspeed] [--streaming-mode] [--streaming-mode-improve] [--stream-play-sync]

Run XTTSv2 within a FastAPI application

options:
  -h, --help            Show this help message and exit
  -hs HOST, --host HOST
  -p PORT, --port PORT
  -d DEVICE, --device DEVICE
                        `cpu` or `cuda`; you can specify which video card to use, e.g. `cuda:0`
  -sf SPEAKER_FOLDER, --speaker-folder
                        Folder containing the voice samples used for TTS
  -o OUTPUT, --output   Output folder
  -mf MODELS_FOLDERS, --model-folder
                        Folder where XTTS models will be stored; finetuned models should also go in this folder
  -t TUNNEL_URL, --tunnel
                        URL of the tunnel used (e.g. ngrok, localtunnel)
  -ms MODEL_SOURCE, --model-source
                        One of ["api", "apiManual", "local"]
  -v MODEL_VERSION, --version
                        Model version to use. Official versions are listed at https://huggingface.co/coqui/XTTS-v2/tree/main; the version name matches the branch name (v2.0.2, v2.0.3, main, etc.). You can also load your own model by putting it in the models folder
  --listen              Allows the server to be used outside the local computer, similar to -hs 0.0.0.0
  --use-cache           Enables result caching: if the same request is repeated, the saved file is returned instead of regenerating
  --lowvram             Keeps the model in RAM and moves it to VRAM only for processing; the speed difference is small
  --deepspeed           Speeds up processing several times; automatically downloads the necessary libraries
  --streaming-mode      Enables streaming mode, which currently has certain limitations, as described below
  --streaming-mode-improve
                        Improved streaming mode: uses a better tokenizer and more context, at the cost of about 2 GB more VRAM
  --stream-play-sync    Additional flag for streaming mode that plays all audio one item at a time without interruption
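
For example, a typical launch that exposes the server on the network with DeepSpeed enabled (the folder names are placeholders):

python -m xtts_api_server --listen --deepspeed -sf speakers -o output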

You can pass a path to a text file as the text; the path will be detected and the file's contents will be voiced

You can load your own model: create a folder in models and put the model there together with its configs. The folder must contain three files: config.json, vocab.json, and model.pth
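
For example, a custom model folder might look like this (the folder name myModel is arbitrary):

models/
└── myModel/
    ├── config.json
    ├── vocab.json
    └── model.pth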

If you want the server to be reachable from other machines, use -hs 0.0.0.0 or --listen

The -t or --tunnel flag is needed so that when you fetch speakers via GET you get a correct link to hear the preview. More info here

Model-source defines in which format you want to use xtts:

  1. local - loads version 2.0.2 by default, but you can specify the version via the -v flag; the model is saved into the models folder and used via XttsConfig and inference.
  2. apiManual - loads version 2.0.2 by default, but you can specify the version via the -v flag; the model is saved into the models folder and used via the tts_to_file function from the TTS API.
  3. api - always loads the latest version of the model; the -v flag has no effect.

All versions of the XTTSv2 model can be found here; the model version name is the same as the branch name (v2.0.2, v2.0.3, main, etc.).

The first time you run or generate, you may need to confirm that you agree to use XTTS.

About Streaming mode

Streaming mode allows you to get audio and play it back almost immediately. However, it has a number of limitations.

You can see how this mode works here and here

Now, about the limitations

  1. Can only be used on a local computer
  2. Audio is played from your PC
  3. The tts_to_file endpoint does not work; only tts_to_audio does, and it returns 1 second of silence.

You can specify the version of the XTTS model by using the -v flag.

Improved streaming mode is suitable for complex languages such as Chinese, Japanese, Hindi or if you want the language engine to take more information into account when processing speech.

--stream-play-sync flag - plays all messages in queue order, useful if you use group chats. In SillyTavern you need to turn off streaming for this to work correctly

API Docs

API Docs can be accessed from http://localhost:8020/docs
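
For example, a minimal Python client for the tts_to_audio endpoint; the request shape follows the examples shown further down this page, and the speaker name and output path are placeholders:

import requests

# Send text to the server and save the returned WAV.
response = requests.post(
    "http://localhost:8020/tts_to_audio/",
    json={
        "text": "Hello from XTTS!",
        "speaker_wav": "example_voice",  # a sample from the speakers folder
        "language": "en",
    },
)
response.raise_for_status()
with open("out.wav", "wb") as f:
    f.write(response.content)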

How to add speaker

By default, a speakers folder will appear in the project folder. Put a WAV file with a voice sample there. You can also create a subfolder and put several voice samples in it, which gives more accurate results

Selecting Folder

You can change the folders for speakers and the folder for output via the API.
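
A sketch of what that could look like from Python; the endpoint and field names here are assumptions, so confirm the exact routes in the API docs at /docs:

import requests

# Hypothetical endpoint and field names -- check http://localhost:8020/docs for the real ones.
requests.post("http://localhost:8020/set_speaker_folder",
              json={"speaker_folder": "my_speakers"})
requests.post("http://localhost:8020/set_output",
              json={"output_folder": "my_output"})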

Note on creating samples for quality voice cloning

The following post is a quote from user Material1276 on Reddit

Some suggestions on making good samples

Keep them about 7-9 seconds long. Longer isn't necessarily better.

Make sure the audio is downsampled to a mono, 22050 Hz, 16-bit WAV file. Otherwise you will slow down processing by a large percentage, and it seems to cause poor-quality results (based on a few tests). 24000 Hz is the quality it outputs at anyway!

Using the latest version of Audacity, select your clip and Tracks > Resample to 22050 Hz, then Tracks > Mix > Stereo to Mono, and then File > Export Audio, saving it as a WAV at 22050 Hz.
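
As a scripted alternative to the Audacity steps, here is a small pydub sketch (pydub is not part of this project and needs ffmpeg installed; the file names are placeholders) that converts a clip to mono, 22050 Hz, 16-bit:

from pydub import AudioSegment

# Convert to the recommended sample format: mono, 22050 Hz, 16-bit WAV.
clip = AudioSegment.from_file("raw_sample.wav")
clip = clip.set_channels(1).set_frame_rate(22050).set_sample_width(2)  # 2 bytes = 16 bit
clip.export("speakers/my_voice.wav", format="wav")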

If you need to do any audio cleaning, do it before you compress it down to the above settings (Mono, 22050Hz, 16 Bit).

Ensure the clip you use doesn't have background noise or music in it; e.g. lots of movies have quiet music when many of the actors are talking. Bad-quality audio will have hiss that needs cleaning up. The AI will pick this up, even if we don't, and use it in the simulated voice to some extent, so clean audio is key!

Try to make your clip one of nice flowing speech, like the included example files: no big pauses, gaps, or other sounds, and preferably one where the person you are trying to copy shows a little vocal range. Example files are in here

Make sure the clip doesn't start or end with breathy sounds (breathing in/out etc).

Using AI-generated audio clips may introduce unwanted sounds, as it's already a copy/simulation of a voice, though this would need testing.

Credit

  1. Thanks to the author Kolja Beigel for the RealtimeTTS repository; I took some of its code for this project.
  2. Thanks to erew123 for the note about creating samples and the code to download the models.
  3. Thanks to lendot for helping fix the multiprocessing bug and for adding code to use multiple samples per speaker.

xtts-api-server's People

Contributors

chanis2 · codingjoe · cohee1207 · daswer123 · deffcolony · lendot · mickdekkers · parisneo · wang-haoxian


xtts-api-server's Issues

Issue with the streaming mode

Here's my issue when I try to use the streaming mode; I'm on Windows:

2024-01-19 13:33:23.733 | WARNING | mp_main::78 - 'Streaming Mode' has certain limitations, you can read about them here https://github.com/daswer123/xtts-api-server#about-streaming-mode
2024-01-19 13:33:23.733 | INFO | mp_main::81 - You launched an improved version of streaming, this version features an improved tokenizer and more context when processing sentences, which can be good for complex languages like Chinese
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 122, in spawn_mai exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 131, in _main
prepare(preparation_data)
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 291, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\server.py", line 85, in
engine = CoquiEngine(specific_model=MODEL_VERSION,use_deepspeed=DEEPSPEED,local_models_path=str(model_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\base_engine.py", line 11, in call
instance = super().call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\coqui_engine.py", line 83, in init
set_start_method('spawn')
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\context.py", line 247, in set_start_method
raise RuntimeError('context has already been set')
RuntimeError: context has already been set
Traceback (most recent call last):
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\server.py", line 85, in
engine = CoquiEngine(specific_model=MODEL_VERSION,use_deepspeed=DEEPSPEED,local_models_path=str(model_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\base_engine.py", line 11, in call
instance = super().call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Modelisation_IA\xtts-api-server\xtts_api_server\RealtimeTTS\engines\coqui_engine.py", line 113, in init
self.main_synthesize_ready_event.wait()
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\synchronize.py", line 356, in wait
self._cond.wait(timeout)
File "C:\Users\MrHaurrus\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\synchronize.py", line 268, in wait
return self._wait_semaphore.acquire(True, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt

Short term caching

Would you consider adding (or accepting a PR to add) a small cache, so that if the same parameters are requested, it returns the recently created output rather than generating it again? Experimenting with RVC settings in SillyTavern would be much smoother if it did this.
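
A sketch of the kind of cache being requested (an illustration, not project code): key the store on the full parameter set so an identical request maps back to the previously generated file:

import hashlib

_cache = {}  # maps request hash -> path of a previously generated wav

def cache_key(text: str, speaker_wav: str, language: str) -> str:
    # Hash all generation parameters so any change produces a new key.
    raw = f"{text}|{speaker_wav}|{language}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

def get_cached(text, speaker_wav, language):
    # Returns the cached file path, or None on a cache miss.
    return _cache.get(cache_key(text, speaker_wav, language))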

GPU Management

Hey, sorry to bother you again, but I was wondering:
would it be possible to run the server on a particular GPU? Could this be supported via a command-line argument for the server?

Issue Installing

I have tried multiple times to install on Windows and I get this error message:

building 'TTS.tts.utils.monotonic_align.core' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tts
Building wheel for mojimoji (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for mojimoji (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [11 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\mojimoji
copying mojimoji\__init__.pyi -> build\lib.win-amd64-cpython-311\mojimoji
copying mojimoji\py.typed -> build\lib.win-amd64-cpython-311\mojimoji
running build_ext
building 'mojimoji' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for mojimoji
Failed to build tts mojimoji
ERROR: Could not build wheels for tts, mojimoji, which is required to install pyproject.toml-based projects

I already installed Microsoft Visual C++

Found signed 32-bit PCM at 22k wav > 16/24

I'd encourage some experimenting!
Also, this isn't the correct XTTS repo, but since it's more of an operations communication it has no value in issues anyway -- I was just wondering if you had a settings-cache style you wanted to use, or if you're interested in a regular old save-variables-to-a-text-file plus reset-to-defaults style cache?
I'm not familiar with gradio, but doing this in Python is fairly straightforward, so I can't imagine it will be an issue for me to take care of -- unless you were about to launch another fleet of settings. If you are, I can wait a month or two to add these sorts of mild convenience features around the edges. I'm aware the programs may always change, but I have no idea what you're working on, so I can't say whether there's any reason to hold off, or to say, use JSON instead of .txt, etc.

Need arguments for parallel capacity

I've tested the project with ab and found that the parallel capacity is very low (when sending 50 concurrent requests with ab it can only run about 2-3 parallel processes).
When I check the VRAM, it only occupies about 4 GB. So I was wondering if you would add an argument (or function) to specify the parallel capacity (more model workers)? Or you could just take as much VRAM as possible to create more model workers (like vLLM does).
Thanks for the great job you've done, and looking forward to your reply!

"python.exe: No module named xtts_api_server"

Apologies, I'm a noob at this, but I am trying to get this up and running. I followed all the steps for the Windows install, but I run into this message whenever I try to start the API server.


What am I doing wrong?

Is it possible to obtain output in a compressed format?

Hi, have tried your project out with success.

Apologies if this has been asked before. So far, I don't see any configuration options for audio compression when requesting TTS data from the server. Would it be possible to set output format (e.g. ogg/mp3) or bitrate when requesting a TTS? The current approach, while working, can result in larger files that take up more time and bandwidth to download.

Thanks!

cc: @henk717

Need a progress information to be returned as a Fast endpoint for Lollms

Hi there, I hope you are well.
I have integrated your server into my little project called lollms and it is working fine, except that the user has no idea about the progress of the generation. When generating an audio file, I want to show a progress bar that lets the user see where the generation process is.

Can you please add an endpoint to the API that allows me to request the current progress? Something like get_progress that returns a value from 0 to 100 or from 0 to 1.

I can read the code and do it myself, then make a pull request, but maybe you can do it much faster, and I have lots of things to code on lollms, so I don't know if I have the time.

Just tell me whether you would prefer that I do it myself or whether you can do it faster.

Thank you very much for this cool project.
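
A sketch of the kind of endpoint being requested (the route name get_progress comes from the request above; the rest is an assumption):

from fastapi import FastAPI

app = FastAPI()
progress = {"value": 0.0}  # the generation loop would update this

@app.get("/get_progress")
def get_progress():
    # Returns generation progress as a value from 0 to 1; 1.0 means done.
    return {"progress": progress["value"]}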

Install instructions

python -m venv venv
source venv/bin/activate

Should be placed in the installation instructions for Linux users. I just attempted to upgrade/install and everything flew by, installing without creating an environment. I have no idea if this has broken other dependencies; I just recovered from a difficult upgrade.

May I suggest you please keep a development version separate while you work on new pull requests/updates?

How do we get fine tuned models to show up in speaker list?

I suspect that SillyTavern just hasn't been updated to support fine-tuned models yet, or has a bug that's not showing fine-tuned models in the speaker list (the ticket for that is here: SillyTavern/SillyTavern#1657).
But just in case I am doing it wrong, here is how I am loading the xtts api.
First of all, I updated it via pip (before the update it did not recognize the models folder flag -mf; after the update it does).

The full bat i am using to load it is:

cd xtts
call venv\Scripts\activate
python -m xtts_api_server -mf C:\NewAI\SillyTavern\SillyTavern\xtts\models -sf C:\NewAI\SillyTavern\SillyTavern\xtts\speakers2 --streaming-mode-improve --deepspeed --stream-play-sync

The speakers2 part was a test with just one wav in it, rather than messing with my current rather full speakers folder.
The fine-tuned voice is in the models folder, in its own folder:
models\NarratorNew, containing:
config.json
model.pth (large file... VERY large file)
reference.wav
speakers_xtts.pth
vocab.json

When fine-tuning in the webui this model worked fine (great, in fact), but without instructions on how to move it to an api server install I somewhat guessed, so it's possible I got it wrong. Am I missing something, or is it a case of SillyTavern being the issue? (It only shows the wav file in the speakers2 folder.)

For now I will continue fine-tuning, as I have a fair few I want to do.

Also, how do we format the JSON packet for manually sending to the server to use specific fine-tuned models?

Currently I am using:

@echo off
setlocal enabledelayedexpansion

rem Set the API endpoint and function
set API_ENDPOINT=http://localhost:8020/tts_to_file

rem Set the input values
rem set SPEAKER_WAV="dave2.wav"
set SPEAKER_WAV="stanlyNarrator.wav"
set LANGUAGE="en"
set FILE_NAME_OR_PATH="narrator.wav"

rem Check if a file is dropped onto the batch file
if "%~1" neq "" (
    set "TEXT_FILE=%~1"
) else (
    echo No text file dropped. Exiting.
    exit /b
)

rem Read the contents of the dropped text file into the TEXT variable
set "TEXT="
for /f "delims=" %%i in ('type "%TEXT_FILE%"') do (
    set "LINE=%%i"
    rem Escape special characters in the line
    set "LINE=!LINE:"=\"!"
    set "TEXT=!TEXT!!LINE! "
)

rem Trim trailing whitespace
set "TEXT=!TEXT:~0,-1!"

rem Build the JSON payload
set JSON_PAYLOAD={^
  "text": "!TEXT!",^
  "speaker_wav": %SPEAKER_WAV%,^
  "language": %LANGUAGE%,^
  "file_name_or_path": %FILE_NAME_OR_PATH%^
}

rem Write the JSON payload to a temporary file
echo %JSON_PAYLOAD% > temp.json

rem Make the curl request
curl -v -X POST -H "Content-Type: application/json" -d @temp.json %API_ENDPOINT%

rem Remove the temporary file
del temp.json

but naturally set SPEAKER_WAV="stanlyNarrator.wav" needs to be changed to the fine-tuned model, and I'm not sure what format to use there (which could also be the SillyTavern issue, lol)

Error while using streaming mode

Here is my config from server.py:

DEVICE = os.getenv('DEVICE',"cuda")
OUTPUT_FOLDER = os.getenv('OUTPUT', 'output')
SPEAKER_FOLDER = os.getenv('SPEAKER', 'speakers')
MODEL_FOLDER = os.getenv('MODEL', 'models')
BASE_HOST = os.getenv('BASE_URL', '127.0.0.1:8020')
BASE_URL = os.getenv('BASE_URL', '127.0.0.1:8020')
MODEL_SOURCE = os.getenv("MODEL_SOURCE", "local")
MODEL_VERSION = os.getenv("MODEL_VERSION","v2.0.2")
LOWVRAM_MODE = os.getenv("LOWVRAM_MODE") == 'true'
DEEPSPEED = os.getenv("DEEPSPEED") == 'true'
USE_CACHE = os.getenv("USE_CACHE") == 'true'

STREAM_MODE = os.getenv("STREAM_MODE") == 'true'
STREAM_MODE_IMPROVE = os.getenv("STREAM_MODE_IMPROVE") == 'false'
STREAM_PLAY_SYNC = os.getenv("STREAM_PLAY_SYNC") == 'false'

and here's the request I make (request.py):

import requests

tts_url = 'http://127.0.0.1:8020/tts_stream/'
tts_response = requests.post(tts_url, json={
"text": " Greetings, traveler. What brings you to Whiterun? I hope you're here for noble reasons, as our Jarl takes great care in ensuring his palace is free from harm.",
"speaker_wav": "D:/Modelisation_IA/xtts-api-server-custom/models/MrHaurrus/test.wav",
"language": "en",
})

here's the resulting output:

(venv) PS D:\Modelisation_IA\xtts-api-server-main\xtts_api_server> python server.py
2024-01-19 12:33:53.248 | INFO | main::72 - Model: 'v2.0.2' starts to load,wait until it loads
2024-01-19 12:33:59.905 | INFO | tts_funcs:load_model:190 - Pre-create latents for all current speakers
2024-01-19 12:33:59.906 | INFO | tts_funcs:create_latents_for_all:270 - Latents created for all 0 speakers.
2024-01-19 12:33:59.907 | INFO | tts_funcs:load_model:193 - Model successfully loaded
D:\Modelisation_IA\xtts-api-server-main\xtts_api_server\venv\Lib\site-packages\pydantic\_internal\_fields.py:149: UserWarning: Field "model_name" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting model_config['protected_namespaces'] = ().
warnings.warn(
INFO: Started server process [47132]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8020 (Press CTRL+C to quit)
INFO: 127.0.0.1:50160 - "POST /tts_stream/ HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:50160 - "POST /tts_stream HTTP/1.1" 405 Method Not Allowed

Hindi language is getting this error

2024-03-30 00:20:19.741 | INFO | xtts_api_server.server:tts_to_audio:291 - Processing TTS to audio with request: text='रिश्तों में सम्मान क्यों ज़रूरी है? क्योंकि यह भागीदारों के बीच विश्वास, खुला संचार और आपसी समझ को बढ़ावा देता है' speaker_wav='David_Attenborough CC3' language='hi'
2024-03-30 00:20:19.742 | ERROR | xtts_api_server.server:tts_to_audio:317 - 'hi'
INFO: ::1:60119 - "POST /tts_to_audio/ HTTP/1.1" 500 Internal Server Error

Error while downloading the model (first start)

After running:
python -m xtts_api_server

File "/home/eradan/miniconda3/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='coqui.gateway.scarf.sh', port=443): Max retries exceeded with url: /hf-coqui/XTTS-v2/main/model.pth (Caused by SSLError(SSLError(1, '[SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:1007)')))

apiManual and local generation lower quality than api

I found that if you change the local_generation function to use the following inference call it will match the api voice.

        out = self.model.inference(
            text,
            language,
            gpt_cond_latent=gpt_cond_latent,
            speaker_embedding=speaker_embedding,
            temperature=0.75,
            length_penalty=1.0,
            repetition_penalty=5.0,
            top_k=50,
            top_p=0.85,
            enable_text_splitting=True
        )

I copied the temperature, length_penalty, repetition_penalty, top_k, top_p from the config.json in the model directory.

Re the readme bit on your front page.

I originally wrote it. I was wrong: it does want 22050Hz input files, but outputs at 24000Hz. So you may want to change it to the text below (I changed it on Reddit already and have revised it below for you! And thanks for linking my post).

Also, people need to update their TTS oobabooga/text-generation-webui#4723 (comment)

The 2.0.3 model sounds rubbish by the way... so people will want to stick with the 2.0.2 model

Thanks!

Some suggestions on making good samples

Keep them about 7-9 seconds long. Longer isn't necessarily better.

Make sure the audio is downsampled to a mono, 22050 Hz, 16-bit WAV file. Otherwise you will slow down processing by a large percentage, and it seems to cause poor-quality results (based on a few tests). 24000 Hz is the quality it outputs at anyway!

Using the latest version of Audacity, select your clip and Tracks > Resample to 22050 Hz, then Tracks > Mix > Stereo to Mono, and then File > Export Audio, saving it as a WAV at 22050 Hz.

If you need to do any audio cleaning, do it before you compress it down to the above settings (Mono, 22050Hz, 16 Bit).

Ensure the clip you use doesn't have background noise or music in it; e.g. lots of movies have quiet music when many of the actors are talking. Bad-quality audio will have hiss that needs cleaning up. The AI will pick this up, even if we don't, and use it in the simulated voice to some extent, so clean audio is key!

Try to make your clip one of nice flowing speech, like the included example files: no big pauses, gaps, or other sounds, and preferably one where the person you are trying to copy shows a little vocal range. Example files are in \text-generation-webui\extensions\coqui_tts\voices

Make sure the clip doesn't start or end with breathy sounds (breathing in/out etc).

Using AI-generated audio clips may introduce unwanted sounds, as it's already a copy/simulation of a voice, though this would need testing.

Skipping the rest of a sentence.

I'm not sure if this is an xttsv2 issue or a software issue, but I lean toward the former because it doesn't happen every time.

I noticed that the AI often tends to ignore the rest of a sentence after a "-" in it.

For example: "You go to the outside - you look around. You walk back inside."

Voice output: "You go to the outside. You walk back inside."
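
A possible client-side workaround (an assumption, not a project feature) is to normalize isolated dashes before sending the text:

import re

text = "You go to the outside - you look around. You walk back inside."
# Replace a dash surrounded by spaces with a comma so the sentence
# splitter does not drop the clause after it.
normalized = re.sub(r"\s[-\u2013\u2014]\s", ", ", text)
print(normalized)  # You go to the outside, you look around. You walk back inside.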

Problem downloading the model, not able to make connection.

Is anyone experiencing this issue as well?

I think it is not making the handshake with Hugging Face:
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))

These are the last logs before it breaks.

2024-03-05 16:15:32.392 | INFO | xtts_api_server.server::73 - Model: 'v2.0.2' starts to load, wait until it loads
[XTTS] Downloading config.json...

Implementing support for speakers with multiple speaker_wav files

This is mostly informational; I'm not seeing a Discussion link for this repo, so hopefully it's ok to post this here.

I don't see it mentioned much, but xtts allows one to provide multiple 6-second speaker_wav files, which it then averages together when synthesizing the text. I've found this to be really helpful in getting certain cloned voices closer to the source. I really wanted to try this out in SillyTavern so I decided to take a stab at adding this functionality to xtts_api_server. You can see what I have so far at my fork

Note: this is very incomplete as of this writing and not ready for the masses, but I wanted to at least mention it now in case anyone wants to give it a whirl and let me know if there are any problems.

Existing single-file speakers work just as they have. If you want to add a speaker with multiple speaker_wavs, just create a directory in speakers/ with the name of your speaker and dump all their speaker_wav files in there. If you're previewing a multi-wav speaker in extensions, it'll play whichever wav file it found in that directory first. This doesn't yet work with streaming mode. Likewise, only --model-source local (the default) works.
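
For example, the speakers folder could look like this (the names are placeholders):

speakers/
├── bob.wav          # existing single-file speaker, works as before
└── alice/           # multi-wav speaker: all files are averaged together
    ├── sample1.wav
    ├── sample2.wav
    └── sample3.wav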

Spawn Start Method?

Hello, do you have time to take a look at this one below?

~/xtts$ python -m xtts_api_server -d cuda:1 --streaming-mode-improve
/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
2023-12-13 16:58:24.724 | WARNING | xtts_api_server.server::53 - 'Streaming Mode' has certain limitations, you can read about them here https://github.com/daswer123/xtts-api-server#about-streaming-mode
2023-12-13 16:58:24.724 | INFO | xtts_api_server.server::56 - You launched an improved version of streaming, this version features an improved tokenizer and more context when processing sentences, which can be good for complex languages like Chinese
2023-12-13 16:58:24.724 | INFO | xtts_api_server.server::58 - Load model for Streaming

Using model: xtts
Process Process-1:
Traceback (most recent call last):
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/xtts_api_server/RealtimeTTS/engines/coqui_engine.py", line 296, in _synthesize_worker
tts.to(device)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/nn/modules/module.py", line 833, in _apply
param_applied = fn(param)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/cuda/init.py", line 284, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/site-packages/xtts_api_server/RealtimeTTS/engines/coqui_engine.py", line 304, in _synthesize_worker
logging.exception(f"Error initializing main coqui engine model: {e}")
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/logging/init.py", line 2113, in exception
error(msg, *args, exc_info=exc_info, **kwargs)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/logging/init.py", line 2105, in error
root.error(msg, *args, **kwargs)
File "/home/jay/miniconda3/envs/xtts/lib/python3.10/logging/init.py", line 1506, in error
self._log(ERROR, msg, args, **kwargs)
TypeError: Log._log() got an unexpected keyword argument 'exc_info'

Feature Request: MaryTTS Compatibility

Hello,

Would it be possible to write MaryTTS compatibility into this (similar to what coqui-tts has)?

The specific intent here is to provide compatibility with Home Assistant

Output folder

Hi,

I'm trying to find the generated audio files from the SillyTavern conversation. I used the --output argument and specified the output folder, but no sound files are generated anywhere, even if I specify another folder. How can I find/create the folder where the WAV audio files are generated? Thanks

GET / HTTP/1.1 404 Not Found

Not sure what is going wrong. I followed the setup instructions in the ReadMe, but when I click the localhost link, the error shows up in the command line.

INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8020 (Press CTRL+C to quit)
INFO:     ::1:59218 - "GET / HTTP/1.1" 404 Not Found

Error during pip install xtts-api-server: error: Microsoft Visual C++ 14.0 or greater is required

I got the following error when installing xtts-api-server:

pip install xtts-api-server
..
...
....
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/

Failed to build tts
ERROR: Could not build wheels for tts, which is required to install pyproject.toml-based projects

The only way to resolve it was a 10gb download of Visual Studio C++ and MS Build Tools 2015; selected from the visual studio installer, after you select Visual Studio C++.

Is this normal? Did all of you guys have to install VS C++ and the Build Tools? That was 10gb of software I'd rarely use. I'm not complaining, just wondering if this is normal?

Reading image prompts and Auto Gen while editing messages

Not sure if this is on the ST extras side or the xtts-api-server side, but when auto generation is on:

  1. When receiving an image from the /sd command, TTS reads out "[Character] sends a picture containing [all of the prompts used to generate the image]"
  2. When editing a message in chat, auto generation attempts to read out every iteration of every change as it's happening, often creating nonsense.

Running ST 1.11.3
Running xtts-api-server with "--device cpu"
Boxes checked in TTS ST Extension
✓Enabled
✓Auto Generation
✓Ignore text, even "quotes", inside asterisks
✓Skip codeblocks

Update Issue

Hi!

I just updated the xtts-api-server and I'm getting the error below. Is this something that can be ignored? The server seems to be working as far as I can tell...

miniconda3/envs/xtts/lib/python3.10/site-packages/pydantic/_internal/_fields.py:149: UserWarning: Field "model_name" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
INFO:     Started server process [1520484]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8020 (Press CTRL+C to quit)

Error Installing

I'm getting a certain error when trying to run the third and fourth commands.

Error: To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

(venv) E:\ST\SillyTavern\xtts>pip install torch==2.1.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118
Looking in indexes: https://download.pytorch.org/whl/cu118
ERROR: Could not find a version that satisfies the requiremen

Audio playback in group chats

When you are in a group chat (tested with two characters) and both answer the same question, only the second one is played fully. The first one is either cut off or doesn't start at all. I'm not sure if the audio inference itself is interrupted by the second request or just the playback. Also, I'm not sure if it is a problem with my configuration, SillyTavern, or xtts-api-server.

The xtts-api-server shows the following output:
INFO: ::1:49923 - "POST /tts_to_audio/ HTTP/1.1" 200 OK
INFO: ::1:49923 - "POST /tts_to_audio/ HTTP/1.1" 200 OK

I noticed that "49923" is used for both POSTs; could this be related?

In theory, if it is caused by xtts-api-server do we just need to pause while an audio stream is active?

This is how I start the server:

Python.exe -m xtts_api_server -d "cuda" -sf MODELS -o RECORDING -ms "local" -v 2.0.2 --streaming-mode-improve

-hs 0.0.0.0 breaks preview_url

If listening on 0.0.0.0, then the preview_url created in get_speakers_special is http://0.0.0.0:8020/sample/foo.wav, which is bogus as far as any client is concerned. I've been getting around this by giving -hs the explicit IP address of the interface I'll be talking to it on, which is fine for my purposes but not very robust.

I"m not quite sure how this should be handled, but one idea would be to use the Host: header in the HTTP request to form the preview URL.. That way if a machine with two or more interfaces is running xtts_api_server -hs 0.0.0.0, Host should in most cases tell you which one the client has reached/is intending to talk to.

error installing tts mojimoji

Hi, I followed the instructions in the SillyTavern documents and I got this error when I ran "pip install xtts-api-server".

All components installed well until these lines:

"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD "-IC:\Users\ExoZ NiK\AppData\Local\Temp\pip-build-env-076xj5q_\overlay\Lib\site-packages\numpy\core\include" "-IC:\Users\ExoZ NiK\AppData\Local\Programs\Python\Python310\include" "-IC:\Users\ExoZ NiK\AppData\Local\Programs\Python\Python310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" /TcTTS/tts/utils/monotonic_align/core.c /Fobuild\temp.win-amd64-cpython-310\Release\TTS/tts/utils/monotonic_align/core.obj
core.c
C:\Users\ExoZ NiK\AppData\Local\Programs\Python\Python310\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\bin\HostX86\x64\cl.exe' failed with exit code 2
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tts
Building wheel for mojimoji (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for mojimoji (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [16 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\mojimoji
copying mojimoji\__init__.pyi -> build\lib.win-amd64-cpython-310\mojimoji
copying mojimoji\py.typed -> build\lib.win-amd64-cpython-310\mojimoji
running build_ext
building 'mojimoji' extension
creating build\temp.win-amd64-cpython-310
creating build\temp.win-amd64-cpython-310\Release
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD "-IC:\Users\ExoZ NiK\AppData\Local\Programs\Python\Python310\include" "-IC:\Users\ExoZ NiK\AppData\Local\Programs\Python\Python310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" /EHsc /Tpmojimoji.cpp /Fobuild\temp.win-amd64-cpython-310\Release\mojimoji.obj
mojimoji.cpp
C:\Users\ExoZ NiK\AppData\Local\Programs\Python\Python310\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.38.33130\bin\HostX86\x64\cl.exe' failed with exit code 2
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for mojimoji
Failed to build tts mojimoji
ERROR: Could not build wheels for tts, mojimoji, which is required to install pyproject.toml-based projects

What could be the problem?

xtts-webui sounds different than xtts-api-server

(Sorry for the second issue, it was unrelated and I assume this is a stupid user moment)

So I have both xtts-api-server up and running using your docker container, all hooked up, running great.

To fine tune I set up the Colab of xtts-webui, and batch uploaded a bunch of wav files, and it sounds literally amazing. 1:1 it sounds perfect, I was honestly shocked at how accurate it was.

I thought copying the samples/<<name>>.wav into the api's samples/<<name>>.wav would be enough, but on the self-hosted API server it sounds like a completely different person. Maybe a hint that they are the same person, but a very large difference.

What is the proper way to "export" the fine-tuned model from the webui and add it to the API server? If it is just copying the wav file, is there something else I'm missing for my api server? Everything is generic, nothing customized.

Edit: Also, the downloaded wav is just the first wav file, whereas I uploaded a batch of 15 or so and had it clean them up and do all of the processing. So I assume that's really the problem: is there a "combined" wav or model that I should download instead?

Thanks for building the tools!

Docker won't run

It looks like the download functions for the models are currently broken. I've tried the default image and have also tried to tinker with it myself.

The models will download, and I can verify they are on disk, but the server immediately has no idea where the model/configs are. The code seems to confuse /app/models and /app/xtts_api_server/models, so maybe there's something there?

Full output from running the docker image:

2023-12-31 04:50:30.093 | INFO     | xtts_api_server.tts_funcs:create_directories:221 - Folder in the path /app/output has been created
2023-12-31 04:50:30.094 | INFO     | xtts_api_server.server:<module>:76 - Model: 'v2.0.2' starts to load,wait until it loads
[XTTS] Downloading config.json...
100%|██████████| 4.36k/4.36k [00:00<00:00, 2.51MiB/s]
[XTTS] Downloading model.pth...
100%|██████████| 1.86G/1.86G [00:34<00:00, 54.1MiB/s]
[XTTS] Downloading vocab.json...
100%|██████████| 335k/335k [00:00<00:00, 2.48MiB/s]
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/app/xtts_api_server/__main__.py", line 46, in <module>
    from xtts_api_server.server import app
  File "/app/xtts_api_server/server.py", line 77, in <module>
    XTTS.load_model()
  File "/app/xtts_api_server/tts_funcs.py", line 146, in load_model
    self.model = TTS(model_path=checkpoint_dir,config_path=config_path).to(self.device)
  File "/usr/local/lib/python3.10/dist-packages/TTS/api.py", line 70, in __init__
    self.config = load_config(config_path) if config_path else None
  File "/usr/local/lib/python3.10/dist-packages/TTS/config/__init__.py", line 91, in load_config
    with fsspec.open(config_path, "r", encoding="utf-8") as f:
  File "/usr/local/lib/python3.10/dist-packages/fsspec/core.py", line 103, in __enter__
    f = self.fs.open(self.path, mode=mode)
  File "/usr/local/lib/python3.10/dist-packages/fsspec/spec.py", line 1295, in open
    f = self._open(
  File "/usr/local/lib/python3.10/dist-packages/fsspec/implementations/local.py", line 180, in _open
    return LocalFileOpener(path, mode, fs=self, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/fsspec/implementations/local.py", line 302, in __init__
    self._open()
  File "/usr/local/lib/python3.10/dist-packages/fsspec/implementations/local.py", line 307, in _open
    self.f = open(self.path, mode=self.mode)
FileNotFoundError: [Errno 2] No such file or directory: '/app/xtts_api_server/models/v2.0.2/config.json'

Wav files not loading

I managed to connect the server to the mobile version of SillyTavern: instead of a red error, when I press "Reload" the text "Successfully applied settings. Select TTS Provider" appears. However, no wav recordings appear in Available voices, and after a while a red "Could not load voices" appears. How do I fix this? My wav file is in the "speakers" folder.

This is my folder (screenshot attached).

And this is the running server (screenshot attached).

Streaming modes forces cuda:0

I have two graphics cards in my system: cuda:0 is for the LLM and I am trying to use cuda:1 for xtts. Without "--streaming-mode" or "--streaming-mode-improve" it will use CPU or cuda:1 just fine, but if either is used it seems to be hard-coded to cuda:0 for some reason. There are no errors to post; it works either way, but it's causing slow LLM speeds from low VRAM. Here is how I am launching it (from a Windows PC, using venv):

call venv\Scripts\activate
python -m xtts_api_server --streaming-mode-improve --stream-play-sync --device cuda:1

Another proof it is being overridden: this gives an error, since I have no cuda:9:

python -m xtts_api_server --device cuda:9
RuntimeError: CUDA error: invalid device ordinal

But this runs with no error and runs on cuda:0 anyway:

python -m xtts_api_server --streaming-mode-improve --stream-play-sync --device cuda:9

I've read the "About Streaming mode" info, so I hope I am not missing something.
I tried looking at the code, but I am not a Python guy and found no obvious issue.
Thanks for any info or a fix!

TypeError with --streaming-mode flag

If I run python -m xtts_api_server --streaming-mode I get the following error:

2023-12-13 00:07:58.526 | WARNING | xtts_api_server.server::53 - 'Streaming Mode' has certain limitations, you can read about them here https://github.com/daswer123/xtts-api-server#about-streaming-mode
2023-12-13 00:07:58.526 | INFO | xtts_api_server.server::58 - Load model for Streaming
Using model: xtts
ERROR:root:Error initializing main coqui engine model: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
File "/home/naa/git/XTTSv2/venv/lib/python3.11/site-packages/xtts_api_server/RealtimeTTS/engines/coqui_engine.py", line 227, in _synthesize_worker
tts.load_checkpoint(
File "/home/naa/git/XTTSv2/venv/lib/python3.11/site-packages/TTS/tts/models/xtts.py", line 759, in load_checkpoint
speaker_file_path = speaker_file_path or os.path.join(checkpoint_dir, "speakers_xtts.pth")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/naa/git/XTTSv2/venv/lib/python3.11/site-packages/xtts_api_server/RealtimeTTS/engines/coqui_engine.py", line 227, in _synthesize_worker
tts.load_checkpoint(
File "/home/naa/git/XTTSv2/venv/lib/python3.11/site-packages/TTS/tts/models/xtts.py", line 759, in load_checkpoint
speaker_file_path = speaker_file_path or os.path.join(checkpoint_dir, "speakers_xtts.pth")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType

It works fine without the --streaming-mode flag.

I installed it via

pip install xtts-api-server
pip install torch==2.1.1+cu118 torchaudio==2.1.1+cu118 --index-url https://download.pytorch.org/whl/cu118

but had to run pip install pydub as well, since it was missing.

The version of xtts-api-server installed is 0.6.2. I am on Linux.

When USE_CACHE enabled, /tts_stream fails when cache hits

/tts_stream uses a generator to return a StreamingResponse, but XTTS.process_tts_to_file returns the whole result on a cache hit:

if cached_result is not None:
    logger.info("Using cached result.")
    return cached_result  # Return the path to the cached result.

It should return a generator instead when stream=True.
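
A minimal sketch of the suggested fix (names are assumptions): wrap the cached file in a generator so the StreamingResponse path also works on cache hits:

def stream_cached_file(path: str, chunk_size: int = 8192):
    # Yield the cached WAV in chunks so FastAPI's StreamingResponse can
    # consume it the same way it consumes live generation output.
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# On a cache hit with stream=True, return stream_cached_file(cached_result)
# instead of the bare path.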
