Transformers.js

Home Page: https://huggingface.co/docs/transformers.js
License: Apache License 2.0

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!

Transformers.js is designed to be functionally equivalent to Hugging Face's transformers Python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:

  • πŸ“ Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
  • πŸ–ΌοΈ Computer Vision: image classification, object detection, and segmentation.
  • πŸ—£οΈ Audio: automatic speech recognition and audio classification.
  • πŸ™ Multimodal: zero-shot image classification.

Transformers.js uses ONNX Runtime to run models in the browser. Best of all, you can easily convert your pretrained PyTorch, TensorFlow, or JAX models to ONNX using 🤗 Optimum.

For more information, check out the full documentation.

Quick tour

It's super simple to translate from existing code! Just like the Python library, we support the pipeline API. Pipelines group together a pretrained model with preprocessing of inputs and postprocessing of outputs, making them the easiest way to run models with the library.

Python (original):

from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
pipe = pipeline('sentiment-analysis')

out = pipe('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.999806941}]

JavaScript (ours):

import { pipeline } from '@xenova/transformers';

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');

let out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]

You can also use a different model by specifying the model id or path as the second argument to the pipeline function. For example:

// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');
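Note that this particular model predicts a star rating from 1 to 5 rather than POSITIVE/NEGATIVE labels, so its output looks slightly different. A minimal sketch of a call (the score shown is illustrative, not actual output):

let out = await pipe('I love transformers!');
// e.g. [{'label': '5 stars', 'score': 0.98}] (illustrative)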

Installation

To install via NPM, run:

npm i @xenova/transformers

Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using ES Modules, you can import the library with:

<script type="module">
    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';
</script>
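Putting the pieces together, a complete page that runs entirely in the browser might look like the following. This is a minimal sketch: the unpinned CDN URL above resolves to the latest published version, and in practice you would pin a specific version.

<script type="module">
    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';

    // Download (and cache) the model, then classify some text client-side.
    let pipe = await pipeline('sentiment-analysis');
    let out = await pipe('Transformers.js runs without a server!');
    console.log(out);
</script>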

Examples

Want to jump straight in? Get started with one of our sample applications/templates:

  • React: Multilingual translation website
  • Whisper Web: Speech recognition w/ Whisper
  • Browser extension: Text classification extension
  • Electron: Text classification application
  • Node.js: Sentiment analysis API
  • Next.js: Coming soon

Custom usage

By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out-of-the-box. You can customize this as follows:

Settings

import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
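With remote models disabled as above, pipeline calls resolve model files relative to env.localModelPath instead of the Hugging Face Hub. A minimal sketch, assuming the converted files live under /path/to/models/my-model/ (the folder name my-model is hypothetical):

import { pipeline, env } from '@xenova/transformers';

// Only look for models under the local path; never hit the Hub.
env.allowRemoteModels = false;
env.localModelPath = '/path/to/models/';

// 'my-model' is a hypothetical folder containing config.json,
// tokenizer files, and the ONNX weights.
let pipe = await pipeline('text-classification', 'my-model');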

For a full list of available settings, check out the API Reference.

Convert your models to ONNX

We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses 🤗 Optimum to perform conversion and quantization of your model.

python -m scripts.convert --quantize --model_id <model_name_or_path>

For example, convert and quantize bert-base-uncased using:

python -m scripts.convert --quantize --model_id bert-base-uncased

This will save the following files to ./models/:

bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
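You can then host the ./models/ directory yourself and load the converted model by its folder name. A sketch, assuming the directory above is served at the default '/models/' location:

import { pipeline } from '@xenova/transformers';

// Resolves config.json, tokenizer.json, and the quantized ONNX weights
// from /models/bert-base-uncased/.
let unmasker = await pipeline('fill-mask', 'bert-base-uncased');
let out = await unmasker('The capital of France is [MASK].');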

Supported tasks/models

Here is the list of all tasks and models currently supported by Transformers.js. If you don't see your task or model listed here, or it is not yet supported, feel free to open up a feature request.

Tasks

Natural Language Processing

  • Conversational (conversational): Generating conversational text that is relevant, coherent, and knowledgeable given a prompt. ❌
  • Fill-Mask (fill-mask): Masking some of the words in a sentence and predicting which words should replace those masks. ✅
  • Question Answering (question-answering): Retrieving the answer to a question from a given text. ✅
  • Sentence Similarity (sentence-similarity): Determining how similar two texts are. ✅
  • Summarization (summarization): Producing a shorter version of a document while preserving its important information. ✅
  • Table Question Answering (table-question-answering): Answering a question about information from a given table. ❌
  • Text Classification (text-classification or sentiment-analysis): Assigning a label or class to a given text. ✅
  • Text Generation (text-generation): Producing new text by predicting the next word in a sequence. ✅
  • Text-to-text Generation (text2text-generation): Converting one text sequence into another text sequence. ✅
  • Token Classification (token-classification or ner): Assigning a label to each token in a text. ✅
  • Translation (translation): Converting text from one language to another. ✅
  • Zero-Shot Classification (zero-shot-classification): Classifying text into classes that are unseen during training. ✅
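Each supported task id above can be passed directly to the pipeline function, just like sentiment-analysis in the quick tour. For example, a sketch of zero-shot classification (the output shape follows the Python library; the example text and labels are illustrative):

let classifier = await pipeline('zero-shot-classification');

// Classify text against labels the model never saw during training.
let out = await classifier(
    'My phone keeps overheating after the latest update.',
    ['mobile', 'billing', 'website'],
);
// { sequence: '...', labels: [...], scores: [...] }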

Vision

  • Depth Estimation (depth-estimation): Predicting the depth of objects present in an image. ❌
  • Image Classification (image-classification): Assigning a label or class to an entire image. ✅
  • Image Segmentation (image-segmentation): Dividing an image into segments where each pixel is mapped to an object. This task has multiple variants, such as instance segmentation, panoptic segmentation, and semantic segmentation. ✅
  • Image-to-Image (image-to-image): Transforming a source image to match the characteristics of a target image or a target image domain. ❌
  • Mask Generation (mask-generation): Generating masks for the objects in an image. ❌
  • Object Detection (object-detection): Identifying objects of certain defined classes within an image. ✅
  • Video Classification (n/a): Assigning a label or class to an entire video. ❌
  • Unconditional Image Generation (n/a): Generating images with no condition in any context (like a prompt text or another image). ❌
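The vision pipelines accept a URL or path to an image. A minimal sketch of image classification (the URL and output values are illustrative placeholders):

let classifier = await pipeline('image-classification');

// 'https://example.com/cat.jpg' is a placeholder URL.
let out = await classifier('https://example.com/cat.jpg');
// e.g. [{ label: 'tabby, tabby cat', score: 0.9 }] (illustrative)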

Audio

  • Audio Classification (audio-classification): Assigning a label or class to a given audio clip. ❌
  • Audio-to-Audio (n/a): Generating audio from an input audio source. ❌
  • Automatic Speech Recognition (automatic-speech-recognition): Transcribing a given audio clip into text. ✅
  • Text-to-Speech (n/a): Generating natural-sounding speech given text input. ❌
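Similarly, a sketch of automatic speech recognition; the audio URL is a placeholder, and the transcription shown is illustrative:

let transcriber = await pipeline('automatic-speech-recognition');

// 'https://example.com/speech.wav' is a placeholder URL.
let out = await transcriber('https://example.com/speech.wav');
// e.g. { text: 'hello world' } (illustrative)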

Tabular

  • Tabular Classification (n/a): Classifying a target category (a group) based on a set of attributes. ❌
  • Tabular Regression (n/a): Predicting a numerical value given a set of attributes. ❌

Multimodal

  • Document Question Answering (document-question-answering): Answering questions on document images. ❌
  • Feature Extraction (feature-extraction): Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. ✅
  • Image-to-Text (image-to-text): Outputting text from a given image. ✅
  • Text-to-Image (text-to-image): Generating images from input text. ❌
  • Visual Question Answering (visual-question-answering): Answering open-ended questions based on an image. ❌
  • Zero-Shot Image Classification (zero-shot-image-classification): Classifying images into classes that are unseen during training. ✅
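A sketch of zero-shot image classification, which scores an image against candidate labels chosen at call time (the URL is a placeholder):

let classifier = await pipeline('zero-shot-image-classification');

// Score a placeholder image against arbitrary candidate labels.
let out = await classifier('https://example.com/photo.jpg', ['dog', 'cat', 'bird']);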

Reinforcement Learning

  • Reinforcement Learning (n/a): Learning from actions by interacting with an environment through trial and error and receiving rewards (negative or positive) as feedback. ❌

Models

  1. ALBERT (from Google Research and the Toyota Technological Institute at Chicago) released with the paper ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
  2. BART (from Facebook) released with the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
  3. BERT (from Google) released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
  4. CLIP (from OpenAI) released with the paper Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
  5. CodeGen (from Salesforce) released with the paper A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
  6. DETR (from Facebook) released with the paper End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
  7. DistilBERT (from HuggingFace), released together with the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into DistilGPT2, RoBERTa into DistilRoBERTa, Multilingual BERT into DistilmBERT and a German version of DistilBERT.
  8. FLAN-T5 (from Google AI) released in the repository google-research/t5x by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
  9. GPT Neo (from EleutherAI) released in the repository EleutherAI/gpt-neo by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
  10. GPT-2 (from OpenAI) released with the paper Language Models are Unsupervised Multitask Learners by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
  11. MarianMT Machine translation models trained using OPUS data by Jörg Tiedemann. The Marian Framework is being developed by the Microsoft Translator Team.
  12. MobileBERT (from CMU/Google Brain) released with the paper MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
  13. MT5 (from Google AI) released with the paper mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
  14. NLLB (from Meta) released with the paper No Language Left Behind: Scaling Human-Centered Machine Translation by the NLLB team.
  15. SqueezeBERT (from Berkeley) released with the paper SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
  16. T5 (from Google AI) released with the paper Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
  17. T5v1.1 (from Google AI) released in the repository google-research/text-to-text-transfer-transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
  18. Vision Transformer (ViT) (from Google AI) released with the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
  19. Whisper (from OpenAI) released with the paper Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.

