lotka / subsai

This project is forked from abdeladim-s/subsai.

Home Page: https://abdeladim-s.github.io/subsai/

License: GNU General Public License v3.0


๏ธ๐ŸŽž๏ธ Subs AI ๐ŸŽž๏ธ

Subtitles generation tool (Web-UI + CLI + Python package) powered by OpenAI's Whisper and its variants

Features

  • Supported Models

    • openai/whisper
      • Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.

    • linto-ai/whisper-timestamped
      • Multilingual Automatic Speech Recognition with word-level timestamps and confidence

    • ggerganov/whisper.cpp (using abdeladim-s/pywhispercpp)
      • High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model

        • Plain C/C++ implementation without dependencies
        • Runs on the CPU
    • guillaumekln/faster-whisper
      • faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models.

        This implementation is up to 4 times faster than openai/whisper for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.

    • m-bain/whisperX
      • fast automatic speech recognition (70x realtime with large-v2) with word-level timestamps and speaker diarization.

        • ⚡️ Batched inference for 70x realtime transcription using whisper large-v2
        • 🪶 faster-whisper backend, requires <8GB gpu memory for large-v2 with beam_size=5
        • 🎯 Accurate word-level timestamps using wav2vec2 alignment
        • 👯‍♂️ Multispeaker ASR using speaker diarization from pyannote-audio (speaker ID labels)
        • 🗣️ VAD preprocessing, reduces hallucination & batching with no WER degradation.
    • 🆕 jianfch/stable-ts
      • Stabilizing Timestamps for Whisper: This library modifies Whisper to produce more reliable timestamps and extends its functionality.

  • Web UI

  • Command Line Interface

    • For simple or batch processing
  • Python package

    • In case you want to develop your own scripts
  • Supports different subtitle formats thanks to tkarabela/pysubs2

    • SubRip
    • WebVTT
    • SubStation Alpha
    • MicroDVD
    • MPL2
    • TMP
  • Supports audio and video files
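
To illustrate the difference between two of those formats, here is a minimal, dependency-free sketch that renders the same cue as SubRip and as WebVTT (subsai itself delegates format handling to pysubs2; the timestamps and cue text below are made up for the example):

```python
def srt_time(ms: int) -> str:
    """Format milliseconds as an SRT timestamp (HH:MM:SS,mmm)."""
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def vtt_time(ms: int) -> str:
    """WebVTT uses a dot instead of a comma before the milliseconds."""
    return srt_time(ms).replace(",", ".")

# One example cue: start/end in milliseconds, then the text
start, end, text = 1_500, 4_250, "Hello, world!"

srt = f"1\n{srt_time(start)} --> {srt_time(end)}\n{text}\n"
vtt = f"WEBVTT\n\n{vtt_time(start)} --> {vtt_time(end)}\n{text}\n"

print(srt)
print(vtt)
```

Note the two structural differences: SRT numbers each cue and has no header, while WebVTT starts with a `WEBVTT` line and uses `.` as the millisecond separator.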

Installation

Quoted from the official openai/whisper installation instructions:

It requires the command-line tool ffmpeg to be installed on your system, which is available from most package managers:

# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg

You may also need Rust installed, in case tokenizers does not provide a pre-built wheel for your platform. If you see installation errors during the pip install step below, follow the Getting Started page to set up a Rust development environment. Additionally, you may need to configure the PATH environment variable, e.g. export PATH="$HOME/.cargo/bin:$PATH". If the installation fails with No module named 'setuptools_rust', install setuptools_rust, e.g. by running:

pip install setuptools-rust
  • Once ffmpeg is installed, install subsai:
pip install git+https://github.com/abdeladim-s/subsai

Usage

Web-UI

To use the web UI, run the following command in a terminal:

subsai-webui

A web page will open in your default browser; otherwise, navigate to one of the URLs printed by the command.

You can also run the Web-UI using Docker.

CLI

usage: subsai [-h] [--version] [-m MODEL] [-mc MODEL_CONFIGS] [-f FORMAT] [-df DESTINATION_FOLDER] [-tm TRANSLATION_MODEL]
              [-tc TRANSLATION_CONFIGS] [-tsl TRANSLATION_SOURCE_LANG] [-ttl TRANSLATION_TARGET_LANG]
              media_file [media_file ...]

positional arguments:
  media_file            The path of the media file, a list of files, or a text file containing paths for batch processing.

options:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  -m MODEL, --model MODEL
                        The transcription AI models. Available models: ['openai/whisper', 'linto-ai/whisper-timestamped']
  -mc MODEL_CONFIGS, --model-configs MODEL_CONFIGS
                        JSON configuration (path to a json file or a direct string)
  -f FORMAT, --format FORMAT, --subtitles-format FORMAT
                        Output subtitles format, available formats ['.srt', '.ass', '.ssa', '.sub', '.json', '.txt', '.vtt']
  -df DESTINATION_FOLDER, --destination-folder DESTINATION_FOLDER
                        The directory where the subtitles will be stored, default to the same folder where the media file(s) is stored.
  -tm TRANSLATION_MODEL, --translation-model TRANSLATION_MODEL
                        Translate subtitles using AI models, available models: ['facebook/m2m100_418M', 'facebook/m2m100_1.2B',
                        'facebook/mbart-large-50-many-to-many-mmt']
  -tc TRANSLATION_CONFIGS, --translation-configs TRANSLATION_CONFIGS
                        JSON configuration (path to a json file or a direct string)
  -tsl TRANSLATION_SOURCE_LANG, --translation-source-lang TRANSLATION_SOURCE_LANG
                        Source language of the subtitles
  -ttl TRANSLATION_TARGET_LANG, --translation-target-lang TRANSLATION_TARGET_LANG
                        Target language of the subtitles

Example of simple usage:

subsai ./assets/test1.mp4 --model openai/whisper --model-configs '{"model_type": "small"}' --format srt

Note: For Windows CMD, you will need to escape the inner quotes: subsai ./assets/test1.mp4 --model openai/whisper --model-configs "{\"model_type\": \"small\"}" --format srt
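
Since --model-configs accepts either a direct JSON string or a path to a JSON file (per the help text above), one way to sidestep shell-quoting differences entirely is to keep the configuration in a file. The filename here is illustrative:

```shell
# Write the model configuration to a file once...
cat > whisper_small.json <<'EOF'
{"model_type": "small"}
EOF

# ...and pass the file path instead of an inline JSON string,
# which works identically on Linux, macOS, and Windows:
# subsai ./assets/test1.mp4 --model openai/whisper --model-configs whisper_small.json --format srt
```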

You can also provide a plain text file for batch processing (every line should contain the absolute path to a single media file):

subsai media.txt --model openai/whisper --format srt
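
Building such a batch file can be scripted; a minimal sketch (the media paths shown are made-up examples):

```shell
# Write one absolute media path per line, as the batch format expects
printf '%s\n' \
  "/home/user/videos/lecture1.mp4" \
  "/home/user/videos/lecture2.mp4" > media.txt

# Then run subsai on the list:
# subsai media.txt --model openai/whisper --format srt
```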

From Python

from subsai import SubsAI

file = './assets/test1.mp4'
subs_ai = SubsAI()
model = subs_ai.create_model('openai/whisper', {'model_type': 'base'})
subs = subs_ai.transcribe(file, model)
subs.save('test1.srt')

For more advanced usage, read the documentation.

Examples

Simple examples can be found in the examples folder

Docker

  1. Make sure you have Docker installed.
  2. Clone the repository and cd into it.
  3. docker compose build
  4. docker compose run -p 8501:8501 -v /path/to/your/media_files/folder:/media_files subsai-webui # use subsai-webui-cpu for CPU-only
  5. You can access your media files through the mounted media_files folder.

Notes

  • If you have an NVIDIA graphics card, you may need to install CUDA to use the GPU capabilities.
  • AMD GPUs compatible with PyTorch should work as well. #67
  • Transcription time is shown in the terminal; keep an eye on it while running the web UI.
  • If you don't like the dark-mode web UI, you can switch to light mode from Settings > Theme > Light.

Contributing

If you find a bug, have a suggestion or feedback, please open an issue for discussion.

License

This project is licensed under the GNU General Public License version 3 or later. You can modify or redistribute it under the terms of this license (see LICENSE for more information).

Contributors

abdeladim-s, jlinkels, fernandocamporredondo, hecate2, sanskar-mk2, lostxine, kaminn, redcathode, woodongwong
